What's with the AI caricatures taking over social media feeds?

The caricatures are meant to be a culmination of everything ChatGPT knows about a person, but they have also brought up questions around privacy and personal information.

Rawan Saadi, Digital Journalist
The AI caricatures taking over social media feeds.
Photo credit: SUPPLIED

Has your social media feed been populated by amusing caricatures of your friends and whānau this week? As fun as these images may seem, their creation has raised questions around privacy and what is being done with personal information shared with AI software.

What is this trend?

People upload an image of themselves to the OpenAI platform and give it prompts to generate an animated image based on everything it knows about them.

The resulting image is a caricature of the person surrounded by their hobbies, job or any other interests that ChatGPT knows they might have. 

If users have ChatGPT memory on, and have given the programme ample information about themselves in the past, it will likely create a more accurate depiction.

Tom Hovey, director of AI consulting company Diagram, says that although this particular trend is recent, the idea of asking AI to create something that represents us is not new. 

Tom Hovey joined in on the trend and made his own AI-generated caricature.

SUPPLIED

“People made all like the Studio Ghibli images and people, you know, with access to Veo 3, the video generation model, can make videos, they can upload images of themselves and make deepfake videos."

Hovey says that people may not know that ChatGPT's default settings are designed to remember things about them.

Why are people doing it?

Auckland makeup artist Ari Dayal hopped on the trend after seeing her friends make caricatures on Facebook.

“I decided to try it for myself, being curious to see what my outcome would be and what else ChatGPT knew about me. One of my friends posted the prompt and I used that.

“The outcome was fairly accurate as I have used ChatGPT for my makeup artistry Instagram page, so it knew I was a makeup artist.”

Ari Dayal's ChatGPT-generated caricature.

SUPPLIED

As for privacy concerns, Dayal says she knows there is always a risk when using any type of AI, but she feels she has done everything she can to be safe.

“I made sure that I used simple language and I have always given ChatGPT very basic information about myself so it was interesting to see what it would generate.”

University of Waikato Professor of AI Albert Bifet says he understands the appeal of the caricatures.

“It feels personal, creative, and a bit flattering. It’s also social, you share it, friends react, and it becomes a conversation.”

He says there is a novelty in showing off what AI can do with the information it already knows about us.

Albert Bifet, Professor of AI and Director of the AI Institute at the University of Waikato.

SUPPLIED

“These trends often start on social media when a few people post something fun, visual, and easy to copy. Once it’s simple to do and share, it spreads quickly.”

Hovey says the trend is addictive and creative, and it can be a way for people to get a positive reflection of themselves.

“ChatGPT is biased towards making you happy. If you have memory turned on, it will take those conversations and reflect the best version of yourself.”

Jess, another user, says they got in on the trend out of curiosity, having been a frequent user of ChatGPT.

“I guess I was just curious to see what it would come up with. I’ve used ChatGPT before so I was interested to see what it knew about me and my life and how I might be pictured through AI, I guess.”

Although the end result was fun, Jess says it raised some questions over the information that ChatGPT had. 

“I actually thought the outcome was pretty cool. But afterwards I started thinking about how much personal information I must have shared for it to create something so specific. It made me realise just how much private detail goes into ChatGPT, and now I’m kind of like, maybe that wasn’t the best idea?”

Is it all fun and games, or something more sinister?

Hovey says he doesn’t think there is necessarily anything sinister about the trend, but it is a “teaching moment”.

“It's revealing that our AI literacy is still quite low.

“... The average person doesn't know where the information is going, how large language models actually work, what the impacts are of using them.

“It's a cute, fun game, but it is worth taking a moment and thinking about, do you really want to share that with ChatGPT?”

Tom Hovey

SUPPLIED

He says it not only raises questions for the user, but also shows there is more OpenAI could do to help people make decisions about privacy upfront.

Hovey says it would be useful if ChatGPT, when presented with a sensitive topic, prompted users to ask whether they want memory turned off or want to learn more about privacy settings.

Bifet says the trend risks normalising how much personal data exists online.

“When we find it cute or fun, we stop asking hard questions. AI doesn’t magically know us — it uses data we’ve shared, directly or indirectly.”

He says it's important to be careful of the information people are sharing publicly and the prompts given to AI.

Information such as personal data, location, work details and opinions can all be inferred or reused, he says.

“We should also remember that these tools are run by companies, not neutral entities, and the data can be stored or used in ways we don’t fully see.

“We should enjoy the tools, but also stay aware, curious, and a bit sceptical. Asking ‘what am I giving away?’ is just as important as asking ‘what can AI do?’”
