A data scientist used AI to clone his entire group chat 😲

So I recently came across this article …

… which I think is rather interesting.

A TL;DR summary of the article is below (courtesy of Bing Chat):

  • A data scientist named Izzy Miller cloned his best friends’ group chat using AI, fine-tuning a leaked language model from Meta on 500,000 messages from their iMessage history (a rough sketch of the extraction step follows this summary).
  • The AI system can mimic the personalities and mannerisms of the six friends, but has some limitations, such as no sense of chronology and a tendency to act as if they were still in college.
  • The project shows how easy it is to create AI chatbots that reproduce specific individuals, and raises ethical and social questions about the future of human communication.
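
If you’re wondering where those 500,000 messages come from: macOS keeps your entire iMessage history in a local SQLite database (~/Library/Messages/chat.db). Below is a minimal sketch of the extraction step. The table and column names match the chat.db schema on recent macOS, but the "speaker: text" output format is my own illustration, not Izzy’s actual pipeline:

```python
# Minimal sketch (not Izzy's actual pipeline): exporting iMessage
# history from the local chat.db that macOS's Messages app maintains.
# Table/column names match the chat.db schema on recent macOS;
# the "speaker: text" output format is my own assumption.
import sqlite3
from pathlib import Path

DB_PATH = Path.home() / "Library" / "Messages" / "chat.db"

conn = sqlite3.connect(DB_PATH)
rows = conn.execute(
    """
    SELECT COALESCE(handle.id, 'me') AS sender,
           message.is_from_me,
           message.text
    FROM message
    LEFT JOIN handle ON message.handle_id = handle.ROWID
    WHERE message.text IS NOT NULL
    ORDER BY message.date
    """
).fetchall()

# One "speaker: text" line per message -- the kind of transcript you
# could then feed to a fine-tuning job for a LLaMA-style model.
with open("training_data.txt", "w") as f:
    for sender, is_from_me, text in rows:
        speaker = "me" if is_from_me else sender
        f.write(f"{speaker}: {text}\n")

print(f"Exported {len(rows)} messages")
conn.close()
```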

Now, Izzy did mention that while he thought AI clones could replicate real humans, they would not be able to replace them. I agree with him in the sense that AI will (probably) never be able to replace someone we are close to, e.g. our parents or significant others. But the possibility of AI replacing strangers we have no special relationship with (in a digital, asynchronous communication context) is rather plausible, though I don’t see it happening for at least another 4-5 years. :thinking:

Who knows, maybe in the future we will think we are having a normal (text-based) conversation with customer support when in fact it’s a chatbot behind the screen. :person_shrugging:

And who knows, maybe in the future I will delegate most of my tasks to Discobot to create topics and answer questions. :woozy_face:

Let me know what you think about this, and where you think these kinds of AI-replicating-human-conversation projects could be implemented in the future :point_down:


:point_right: Read this blog post for more information on how Izzy Miller managed to clone his group chat


@jmeintjesn7 Would love to hear your take on this. For example, do you think make-believe conversations like this one will be the norm in the future? :wink:

This whole faking-your-group-chat thingy also reminds me of an article that I had to analyze back in my uni days…

It provides a unique insight into the country’s issue with social isolation, especially among young people, and how there is a whole economy that (I suppose) both exploits it and provides a temporary fix for it. I really encourage you all to read it if you have the time.

But back to the whole group chat thing. Sooooo, what if, one day, any lonesome individual could download an app and instantly start chatting with a stranger, or a group of them, except those strangers are actually bots trained using LLMs, each with a different personality, on data from past conversations with other users? :thinking:

And of course, you could pay a subscription to access cooler features, e.g. chatting with more than one person, having multiple group chats, etc.

Sounds like something out of a Black Mirror episode, but I would not be surprised if it turns out to be a reality. I mean, we all thought the social credit system in the Nosedive episode was rather radical until China started implementing something similar.


Anyway, that’s enough philosophy for one day. Have a nice weekend, y’all! :v:

I think that make-believe conversations like this could improve certain aspects of our everyday lives, but could also impact them negatively.

Let’s look at the positive side first. Say your company has a chatbot for customer support, and you feed it all the previous messages and interactions of the employees who ran these live chats on the site. You would be able to create a more realistic chatbot that can sympathize with the end user having the problem, and perhaps enhance their experience and trust in the live chat support feature, giving them quicker, more accurate, 24/7 support. The company could also benefit from spending fewer resources on hiring and training employees for this specific task. (A rough sketch of what that could look like is below.)
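
To make that concrete, here is a minimal sketch of one way to approach it: few-shot prompting a chat model with a handful of real past exchanges rather than fully fine-tuning one. The transcript file, its format, and both helper functions are hypothetical; only the ChatCompletion call is the real (pre-1.0) openai API:

```python
# Minimal sketch, not a production design: prime a chat model with a
# few real (customer, agent) exchanges so replies match your agents'
# tone. The transcript file, its format, and both helpers are
# hypothetical; the ChatCompletion call is the real pre-1.0 openai API.
import json

import openai  # pip install "openai<1.0"; reads OPENAI_API_KEY from the env

def load_transcripts(path="past_support_chats.json"):
    """Hypothetical helper: a list of {"customer": ..., "agent": ...} pairs."""
    with open(path) as f:
        return json.load(f)

def build_messages(transcripts, customer_question, n_examples=5):
    # The system prompt sets the persona; a few real exchanges show the tone.
    messages = [{
        "role": "system",
        "content": "You are a friendly support agent for our site. "
                   "Match the tone of the example replies.",
    }]
    for pair in transcripts[:n_examples]:
        messages.append({"role": "user", "content": pair["customer"]})
        messages.append({"role": "assistant", "content": pair["agent"]})
    messages.append({"role": "user", "content": customer_question})
    return messages

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=build_messages(load_transcripts(),
                            "My order never arrived. What now?"),
)
print(response.choices[0].message.content)
```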

Now, the negative of this scenario is that it would replace the people employed in these positions, such as helpdesk or first-line support technicians. In the social environment and country I live in, when something can replace a person’s job, even if it is more efficient and effective, it is very frowned upon and usually rejected.

What does scare me about these AI-driven chat rooms is that some people might start to rely on them as a replacement for social interaction or companionship, and their ability to function in society might degrade.

So do I think this will become the norm in the future? I would say it might become a bigger thing, but not necessarily the norm. I do see it becoming a better way to customize a personal assistant to act and reply in ways that suit you, but not a complete replacement for a relationship you would have with another person. Something that popped into my mind while typing this: this could possibly soon be used to create chatbots of lost loved ones, i.e. a company creates a chatbot that simulates a person who is no longer with us, based on that person’s chat histories.

Now, I am not sure if that data scientist infringed any regulations (perhaps quite a few, for the sake of a proof of concept) … but at least Italy is concerned about such things and has taken a critical step:

More may follow, at least from an EU perspective.

Yeah, the article did not mention whether or not he had his friends’ approval to do that (I would imagine he did).

Supposing most people in Europe decided to “opt out” (i.e. not provide personal data for training ChatGPT), I believe it would deal some damage to OpenAI’s efforts to train its system to recognize and generate different languages.

In the video from Vox below, they point out that LLMs have an English bias: the bulk of the training data is in English, and most of the models come from Western companies, i.e. Microsoft, Google, etc.

…But I’m veering off-topic now, this probably deserves a topic of its own :sweat_smile: