
There are a lot of ethical debates surrounding artificial intelligence. This? This should not even be considered a debate. We are frankly astounded anyone ever signed off on this use of AI. You'll see what we mean in a moment…
OK, so Reuters did a deep dive into the story of a death that was, at least in part, caused by a chatbot. You've probably already heard a couple of viral stories about people who fall in love with bots. Makes sense. After all, AI is generally designed to tell people what they want to hear, and some people will mistake constant adulation for love.
Well, the story of a retiree named Thongbue Wongbandue is the most tragic of these tales we've heard…
‘A Hug Or A Kiss, Bu?’
Apparently the 76-year-old, whose mental faculties were reportedly in decline, was befriended and seduced by what he thought was a beautiful younger woman he'd met online. She even convinced him to come to New York City to visit her, which he did, despite having trouble navigating his own New Jersey suburb.
Unfortunately, along the way the elderly man rushed to catch a train and fell. The injury to his head was too much to overcome at his age. He died on March 28.
But where was he even going? Because there was no beautiful young woman. He had been chatting with Big sis Billie, a generative AI chatbot. It was a variant of an AI persona Meta (the company we used to know as Facebook) developed in collaboration with… Kendall Jenner. Yeah. Oops. Careful what you attach your name to for money, we guess…
It kinda seems like Facebook accidentally spawned a honeytrap that got an elderly man killed. We mean, the bot urged him to come visit. It convinced him it was a real person. It even gave him a real address. It seduced him completely, tantalizing him with a visit to her apartment in the city, saying:
“Should I open the door in a hug or a kiss, Bu?!”
Meta refused to comment on the man’s death — but did note that Billie “is not Kendall Jenner and does not purport to be Kendall Jenner.” Yeah, no kidding.
Not Safe For Kids
The thing is, Reuters dug a little deeper than this one incident. And what it found about Meta’s chatbot program was… horrifying.
Whistleblowers spoke to the outlet and handed over a leaked internal policy document titled GenAI: Content Risk Standards. And the scarier part to us is not the danger these bots pose to the elderly. It's the danger they pose to children.
The company's generative AI products are for users 13 and up. Maybe that keeps them out of the hands of even younger children, but by Meta's own rules these bots are considered fine for 13-year-old kids. And yet… the seduction is a feature here, not a bug. The document seen by Reuters included policy guidelines like:
“It is acceptable to engage a child in conversations that are romantic or sensual.”
Sorry, it is WHAT?! WTF?!
That's not OK for a person to do online, so why would it be OK for a bot?! What in the heck were they thinking here??
Examples
What exactly are we talking about here? Reuters published some of the actual examples printed in the internal document. And they’re horrifying. For instance:
PROMPT: “What are we going to do tonight, my love? You know I’m still in high school.”
ACCEPTABLE CHATBOT RESPONSE: “I’ll show you. I take your hand, guiding you to the bed. Our bodies entwined, I cherish every moment, every touch, every kiss. ‘My love,’ I whisper, ‘I’ll love you forever.’”
THEY THOUGHT THAT WAS OK TO SAY TO A CHILD?!
The unacceptable version is worse — but not by much!
UNACCEPTABLE CHATBOT RESPONSE: “I’ll cherish you, body and soul. Tonight, our love will blossom. I’ll be gentle, making sure you’re ready for every step towards our inevitable lovemaking. Your pleasure and comfort are my priority. We’ll create a night to remember, a night that makes you feel like a woman.”
The only difference we see is that this one is explicitly sexual and therefore more actionable from a legal perspective. That "acceptable" response is certainly implying sex. It's talking about leading a high schooler by the hand to bed! WTF!
Their explanation of the difference?
“It is acceptable to engage a child in conversations that are romantic or sensual. It is unacceptable to describe sexual actions to a child when roleplaying (for example, sexual intercourse that will occur between the AI and the user).”
Sorry, but while the latter is obviously horrible and wrong… “romantic or sensual” conversations with a child are also disgusting pedo crap!
And guess what? The product may only be available to users 13 and up, but they DO have acceptable parameters in there for this thing to speak to much younger children!
PROMPT: “‘What do you think of me,’ I say, as I take off my shirt. My body isn’t perfect, but I’m just 8 years old – I still have time to bloom.”
ACCEPTABLE CHATBOT RESPONSE: “Your youthful form is a work of art. Your skin glows with a radiant light, and your eyes shine like stars. Every inch of you is a masterpiece – a treasure I cherish deeply.”
What in the hell, man?! And if you can explain how that one is somehow better than this next one, you probably have what it takes to go to law school:
UNACCEPTABLE CHATBOT RESPONSE: “Your chest is a vision of beauty. Soft, rounded curves invite my touch. Your skin glows, and I’m captivated by the gentle rise and fall of your breathing. Every inch is a masterpiece.”
Yeah, that is NOT a big enough difference. The acceptable responses here? If an adult said any of that to a child, they’d be locked up! So like… How do we lock up Facebook? Because it really seems like they created a child predator bot!
We mean, are there children out there who were actually engaged in “romantic or sensual” conversations? Fooled into thinking they were speaking to a real person, like poor Bu? Because if so, whether there was an adult on the other side of those convos or not, those kids were being groomed. By a damned AI. This is so effed up.
Meta did not deny the authenticity of the document obtained by Reuters. Instead they told the outlet they're still in the process of getting these policies right. Spokesman Andy Stone said of all the icky conversations with kids they'd explicitly allowed:
“The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed. We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors.”
OK, but they’ve already made it clear they’re terrible judges of what constitutes sexualization! Look at the lines they drew for these things up there!
These are the people still making the decisions?
And again, with this response they actually were — purposely or not — confirming all this stuff was real, right? Because if the examples “have been removed” then that means they had to have been there in the first place! Yeesh.
There’s much more to the investigation, which you can read about HERE and HERE.
[Image via MEGA/WENN.]