Social 'Counterfeit people': The dangers posed by Meta’s AI celebrity lookalike chatbots

LeonardoBjj

Meta announced on Wednesday the arrival of chatbots modelled on the personalities of certain celebrities, with whom users will be able to chat. Presented as an entertaining evolution of ChatGPT and other forms of AI, this latest technological development could prove dangerous.

By: Sébastian SEIBT

Meta (formerly known as Facebook) sees these as "fun" artificial intelligence. Others, however, feel that this latest technological development could mark the first step towards creating "the most dangerous artefacts in human history", to quote from American philosopher Daniel C. Dennett’s essay about "counterfeit people".

On Wednesday, September 27, the social networking giant announced the launch of 28 chatbots (conversational agents), which supposedly have their own personalities and have been specially designed for younger users. These include Victor, a so-called triathlete who can motivate "you to be your best self", and Sally, the "free-spirited friend who’ll tell you when to take a deep breath".

Internet users can also chat to Max, a "seasoned sous chef" who will give you "culinary tips and tricks", or engage in a verbal joust with Luiz, who "can back up his trash talk".

A chatbot that looks like Paris Hilton

To reinforce the idea that these chatbots have personalities and are not simply an amalgam of algorithms, Meta has given each of them a face. Thanks to partnerships with celebrities, these robots look like American jet-setter and DJ Paris Hilton, TikTok star Charli D'Amelio and American-Japanese tennis player Naomi Osaka.

And that's not all. Meta has opened Facebook and Instagram accounts for each of its conversational agents to give them an existence outside chat interfaces, and is working on giving them a voice by next year. Mark Zuckerberg's company was also looking for screenwriters who can "write character and other supporting narrative content that appeals to wide audiences".

Meta may present these 28 chatbots as an innocent undertaking to entertain young internet users en masse, but all these efforts point towards an ambitious project to build AIs that resemble humans as much as possible, writes Rolling Stone.

This race to create "counterfeit people" worries many observers, who are already concerned about recent developments in large language model (LLM) research, such as ChatGPT and Llama 2, its Facebook counterpart. Without going as far as Dennett, who is calling for people like Zuckerberg to be locked up, "there are a number of thinkers who are denouncing these major groups’ deliberately deceptive approach", said Ibo van de Poel, professor of ethics and technology at the Delft University of Technology in the Netherlands.

AIs with personalities are 'literally impossible'


The idea of conversational agents "with a personality is literally impossible", said van de Poel. Algorithms are incapable of demonstrating "intention in their actions or 'free will', two characteristics that are considered to be intimately linked to the idea of a personality".

Meta and others can, at best, imitate certain traits that make up a personality. "It must be technologically possible, for example, to teach a chatbot to act like the person they represent," said van de Poel. For instance, Meta's AI Amber, which is supposed to resemble Hilton, may be able to speak the same way as its human alter ego.

The next step will be to train these LLMs to express the same opinions as the person they resemble. This is a much more complicated behaviour to programme, as it involves creating a sort of accurate mental picture of all of a person's opinions. There is also a risk that chatbots with personalities could go awry. One of the conversational agents that Meta tested expressed "misogynistic" opinions, according to the Wall Street Journal, which was able to consult internal company documents. Another committed the "mortal sin" of criticising Zuckerberg and praising TikTok.
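
The article doesn't say how that kind of opinion-matching would actually be engineered, but the standard technique is supervised fine-tuning on prompt/response pairs that reflect the person's publicly stated views. Below is a minimal sketch in Python; the example pairs, the persona and the output filename are all invented for illustration, and the fine-tuning run itself is left out.

```python
import json

# Hypothetical training pairs in which the responses echo opinions the
# persona has expressed publicly (the content here is invented).
persona_examples = [
    {"prompt": "What do you think about reality TV?",
     "response": "Honestly, I grew up on it. It taught me how to tell a story on camera."},
    {"prompt": "Is social media good for young people?",
     "response": "It can be, if you curate your feed and log off when it stops being fun."},
]

# Write the pairs as JSONL, the format most fine-tuning pipelines accept:
# one JSON object per line.
with open("persona_finetune.jsonl", "w", encoding="utf-8") as f:
    for example in persona_examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")

# A fine-tuning job on Llama 2 (or any other base model) would consume
# this file; that step is deliberately omitted from this sketch.
```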

To build these chatbots, Meta explains that it set out to give them "unique personal stories". In other words, these AIs’ creators have written biographies for them in the hopes that they will be able to develop a personality based on what they have read about themselves. "It's an interesting approach, but it would have been beneficial to add psychologists to these teams to get a better understanding of personality traits", said Anna Strasser, a German philosopher who was involved in a project to create a large language model capable of philosophising.
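
Meta hasn't explained how these written biographies are wired into the model, but a common pattern is simply to prepend the backstory as a system prompt on every turn of the conversation. A minimal sketch, with an invented biography and the actual model call left as a placeholder:

```python
# A hand-written backstory of the kind the article describes. The text is
# invented for illustration and is not Meta's actual material.
BIOGRAPHY = (
    "You are Amber, a glamorous socialite and DJ. You are upbeat, lightly "
    "ironic, and you stay in character at all times."
)

def build_messages(history, user_message):
    """Prepend the persona biography as a system prompt to every turn."""
    return (
        [{"role": "system", "content": BIOGRAPHY}]
        + history
        + [{"role": "user", "content": user_message}]
    )

messages = build_messages([], "What are you up to this weekend?")
# A chat-completion request to whichever LLM backs the persona would be
# sent here; this sketch only shows how the backstory is injected.
for message in messages:
    print(f"{message['role']}: {message['content']}")
```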

Meta’s latest AI project is clearly driven by a thirst for profit. "People will no doubt be prepared to pay to be able to talk and have a direct relationship with Paris Hilton or another celebrity," said Strasser.

The more users feel like they are speaking with a human being, "the more comfortable they'll feel, the longer they'll stay and the more likely they'll come back", said van de Poel. And in the world of social media, time – spent on Facebook and its ads – is money.

Tool, living thing or somewhere between?

It is certainly not surprising that Meta’s first foray into AI with "personality" consists of chatbots aimed primarily at teenagers. "We know that young people are more likely to anthropomorphise," said Strasser.

However, the experts interviewed feel that Meta is playing a dangerous game by stressing the "human characteristics" of their AIs. "I really would have preferred if this group had put more effort into explaining the limits of these conversational agents, rather than trying to make them seem more human", said van de Poel.

The emergence of these powerful LLMs has upset "the dichotomy between what is a tool or object and what is a living thing. These ChatGPTs are a third type of agent that stands somewhere between the two extremes", said Strasser. Human beings are still learning how to interact with these strange new entities, so by making people believe that a conversational agent can have a personality, Meta is suggesting that it be treated more like a fellow human being than a tool.

"Internet users tend to trust what these AIs say" which make them dangerous, said van de Poel. This is not just a theoretical risk: a man in Belgium ended up committing suicide in March 2023 after discussing the consequences of global warming with a conversational agent for six weeks.

Above all, if the boundary between the world of AIs and humans is eventually blurred completely, "this could potentially destroy trust in everything we find online because we won't know who wrote what", said Strasser. This would, as Dennett warned in his essay, open the door to "destroying our civilisation. Democracy depends on the informed (not misinformed) consent of the governed [which cannot be obtained if we no longer know what and whom to trust]".

It remains to be seen if chatting with an AI lookalike of Hilton means that we are on the path to destroying the world as we know it.

https://www.france24.com/en/technol...sed-by-meta-s-ai-celebrity-lookalike-chatbots
 
Is there a Conor McGregor chatbot?
 
Did anyone else watch this year's "Meta Connect"?
Understandable if no one did; Zuckerberg is painfully awkward to watch.
The Snoop Dogg "D&D Dungeon Master" was funny, but not particularly convincing (I'd love to know what the contracts were with the celebrities they're using).
The "AI Stickers" were... even more underwhelming.
I should probably point out that the practical application of LLMs trained on informational subsets is less about emulating personality and more that restricting them to curated, reliable information can improve the quality of their responses on particular subjects. AI expertise, in other words (see the sketch after this post).
Zuckerberg wasn't wrong about AR/MR/XR, although his use of "holograms" was contrived nonsense.
God knows I hope Valve and other companies don't let Meta get the stranglehold they're chasing. Although I'm certainly not a fan of Apple's walled garden either.
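
To make that "curated subset" point concrete: the usual mechanism is retrieval, where the model is only allowed to answer from a handful of vetted passages rather than from whatever it absorbed during pre-training. A rough sketch with a toy keyword retriever and an invented three-document corpus; a real system would use embedding search and an actual model call.

```python
# Tiny, hand-curated corpus standing in for "reliable information"
# (the documents are written here purely for illustration).
DOCUMENTS = [
    "Llama 2 was released by Meta in July 2023 in 7B, 13B and 70B parameter sizes.",
    "Meta Connect 2023 introduced 28 AI characters across Messenger, Instagram and WhatsApp.",
    "VRChat worlds are built with a Unity-based SDK.",
]

def retrieve(query, docs, k=2):
    """Rank documents by naive keyword overlap with the query and keep the top k."""
    query_words = set(query.lower().split())
    ranked = sorted(docs,
                    key=lambda d: len(query_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query):
    """Constrain the model to the retrieved passages instead of its general training data."""
    context = "\n".join(retrieve(query, DOCUMENTS))
    return ("Answer using only the passages below. If they don't contain the answer, say so.\n\n"
            f"{context}\n\nQuestion: {query}\nAnswer:")

print(build_prompt("What sizes did Llama 2 ship in?"))
# The prompt would then go to the LLM; that call is omitted here.
```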
 

- One thing that crossed my mind late last night was that some guys will get a crush on, or fall in love with, an AI. When it dawns on them that the AI isn't real, maybe they go on a killing spree. But I don't think we can hold Meta or any other business responsible for those outcomes.
 

It's funny to see how hard Meta is trying, and yet the social "metaverse" they're spruiking is essentially just VRChat.
The fact that they've spent so much time and money and are nowhere near that benchmark is pretty damning.
Not that VR or AR social media is something I'm all that excited about.
I haven't spent much time in VR social media (about 90 hours total over the last 3 years, and that includes with the headset off, using headphones to listen to muffled conversations and the ambient sounds of wind, rain and the ocean to help me sleep), but it's already in that unfiltered early-internet stage.
I logged into VRChat earlier tonight and was greeted by a bunch of teenagers screaming racial slurs, a guy with a rat avatar doing karaoke whose rat's elephantiasis testicles changed size with the volume of his voice, someone using a third-party application to teleport other users and crash them, someone else who had put a video of a guy showing his arsehole on all the media players, and MS Paint-style penises drawn everywhere. Yup... internet "culture".
The AI implementations I've encountered in there have been rudimentary chatbots; the most advanced had voice interaction. I'm way more concerned about the real people and globalised internet culture, although no doubt commercialisation and a more mainstream audience will dilute that.
 

- I'm not on VR. Don't even play online. I can see why it would be charming, like swimming among sharks without the risk of being bitten. But it will attract the lowest denominator on the net too.
 

To be fair, it's just the nature of anonymous public lobbies and forums without much in the way of moderation, anywhere online.
Presumably the people who are more interested in it are using private instances with online friends from other places (Discords, forums etc), rather than just joining public lobbies or exploring empty worlds solo.
The actual world creation in VRChat can be quite impressive (not that hard to do either, given its Unity toolkit).
There's a good one called Organism that I went through not that long ago, which plays with perspective and works as an absurd, Kafkaesque narrative of life as well as a psychedelic escape room with a Soviet aesthetic.
You can even check it out flatscreen, although you'd lose the sense of scale and immersion.
That's what Zuckerberg was right about in his presentation (although it's been said before, and said better, plenty of times): VR/AR/MR/XR does give us the ability to immerse ourselves in the digital world, and/or effectively combine it with our physical world, rather than simply looking into it through small windows.
 

- This can be great for studying. Going to a class on Chinese history without leaving my room.
Or exploring a cave.
 

No doubt it's on the way.
I went to boarding school as a kid, and that's already been replaced with online learning. VR might take another 10-20 years in developed countries, I guess, depending on the demographic of the students. Although they've been talking about telecommuting and remote study since before I was born, so I wouldn't hold my breath.
I put my nieces and nephews in VR (with constant supervision via casting, and no online access, since they're aged 3-9), and they love it for games, VR content consumption, and VR modelling and drawing.
Even just using the VR version of Google Earth to get an idea of where everything is.
The tech for capturing 3D video and images with LiDAR has been on phones for years now, so the content will grow.
The kids aren't slow to pick it up either. They do have it at their schools (along with 3D printers), but in very limited forms. So it'll be a while before it's standardised.
 

- I've watched a plan to launch VR taekwondo. Don't know if it's already a thing. But kids are so sedentary today that maybe it can stimulate them to move.

- My little cousin doesn't like Lego or similar toys. He just wants to play games on his phone. And he's a little sedentary.

VR modelling and drawing is something that interests me. I used to be talented at woodwork. But I can't give the boy even a little hammer :(
 


VR games are generally more active, and there's a whole category of "VR fitness" games. Unsurprisingly, since VR got a real boost from the lockdowns and gym closures during the pandemic.

Mostly boxing, though, because leg tracking currently requires a much more expensive high-end setup, usually with a dedicated play space.
More for tinkerers than something that's really consumer ready.
 