The trend of asking ChatGPT to generate personalized caricatures has gained popularity on social media in recent days. Users across platforms are sharing illustrations that reflect their physical traits, job references, and personal details, drawn from the information accumulated during their interactions with the artificial intelligence assistant.
As with similar trends, specialists warn about the privacy risks of publicly sharing these images, since they can expose sensitive data and facilitate cyberattacks or harassment.
Risks of Disclosing Sensitive Information
The process of obtaining a personalized caricature begins with a message to ChatGPT requesting an image based on the conversation history. In many cases, the assistant also asks for a photograph to refine the visual details. The AI then combines physical, symbolic, and personal elements to generate an illustration tailored to the user's digital profile.
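For a rough sense of what such an image request looks like outside the chat window, here is a minimal sketch using the official OpenAI Python SDK. The prompt, model name, and client setup are illustrative assumptions; unlike the chat interface described above, a standalone API call does not draw on stored conversation history.

```python
from openai import OpenAI

# Assumes the OPENAI_API_KEY environment variable is set.
client = OpenAI()

# Illustrative prompt; in the chat interface, ChatGPT assembles this kind of
# description from the conversation history and any uploaded photo.
prompt = (
    "A friendly cartoon caricature of a software developer who enjoys "
    "cycling and coffee, in a colorful hand-drawn style."
)

# Request a single generated image (model name is an assumption).
response = client.images.generate(
    model="dall-e-3",
    prompt=prompt,
    size="1024x1024",
    n=1,
)

# URL of the generated caricature.
print(response.data[0].url)
```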
The virality of these cartoons, which are often shared on platforms such as Instagram and X (Twitter), has prompted cybersecurity experts to warn about the amount of personal information being exposed. Publishing this type of image can reveal names, occupations, tastes, habits, and visual details that, taken together, allow a person to be identified even if explicit data such as an address or phone number is never disclosed. Caution with sensitive data should not be limited to what is posted publicly; it also applies to our interactions with chatbots.

When the trend of adapting photographs to the Ghibli style gained momentum, users were warned that the images shared with the AI may contain hidden metadata. This includes information about the location, the date and time the photo was taken, the device used, and other technical details that go unnoticed by the end user.
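To illustrate how much technical detail a photo can carry, the sketch below uses the Pillow library (one option among many EXIF-capable tools) to list an image's EXIF tags and save a copy without that metadata before the photo is shared with a chatbot or posted online. The file names are hypothetical.

```python
from PIL import Image
from PIL.ExifTags import TAGS

# Hypothetical file names; replace with the photo you intend to share.
SOURCE = "portrait.jpg"
CLEAN_COPY = "portrait_clean.jpg"

image = Image.open(SOURCE)

# List the EXIF tags embedded in the photo (camera model, timestamp,
# GPS coordinates, etc.), translating numeric tag IDs to readable names.
for tag_id, value in image.getexif().items():
    tag_name = TAGS.get(tag_id, tag_id)
    print(f"{tag_name}: {value}")

# Build a fresh image from the pixel data only and save it: the new image
# never receives the EXIF block, so the copy carries no hidden metadata.
stripped = Image.new(image.mode, image.size)
stripped.putdata(list(image.getdata()))
stripped.save(CLEAN_COPY)
```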
According to the privacy policy of OpenAI, the company responsible for ChatGPT, all information provided by users (texts, images, and files) may be temporarily stored and used to train AI models. The company also reserves the right to use this data to improve its services, develop new features, or even for commercial purposes. Although OpenAI states that the data is not stored indefinitely, the exact time it remains on its servers is unknown.

The risk increases when the shared images include minors, vulnerable people, or data that could be used for identity theft. In the event of a security breach, the photos, along with their metadata, could be exposed and exploited for phishing attacks, cyberstalking, or social engineering campaigns.






