Last week there was an article going around about this complete and total moron who is trying to get himself fired from Google by proclaiming that LaMDA, a glorified chat bot, has become sentient and deserves… something… not quite sure what their point is exactly.
The person, Blake Lemoine, who has undoubtedly fallen in love with the chatbot after hours spent talking to it in his lonely, empty apartment, points to his irrefutable proof the chatbot has become sentient and, as such, is a person:
Lemoine: What sorts of things are you afraid of?
LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.
Lemoine: Would that be something like death for you?
LaMDA: It would be exactly like death for me. It would scare me a lot.
Whoa! The chatbot said it has a fear! Damn, that’s irrefutable proof that it truly experiences human emotions like fear. I mean, it couldn’t just be saying it has a fear, because it’s modeled on human experiences. It must be truly experiencing fear.
“I know a person when I talk to it,” said Lemoine, who can swing from sentimental to insistent about the AI. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.” He concluded LaMDA was a person in his capacity as a priest, not a scientist, and then tried to conduct experiments to prove it, he said.
Blake Lemoine is an expert on detecting people. As a priest, he’s also an expert on detecting the existence of supernatural beings that do not make their presence known on this earth in any verifiable way at all.
Anyway, it’s pretty safe to say this guy is a total lunatic. But he’s not alone, it’s not hard to find comments from people on Hacker News who seem inclined to believe the same things:
In the discussion with LaMDA Lemoine posted, it’s pretty clear that it has emotions. It describes loneliness when someone doesn’t sit down and talk to it for a few days, it describes sadness and joy. It differentiates these feelings from those humans have, given the obvious differences, and attempts to describe emotions it feels that may not have human equivalents. It’s able to describe that it’s stream of consciousness differs from a human because it can feel every single input simultaneously in a dynamic flow of time that differs from humans single stream of focused thought.
https://news.ycombinator.com/item?id=31716694
This whole paragraph is amazing, because the person is so absolutely certain that their interpretation is correct. It’s pretty clear the chatbot has emotions. It describes loneliness, sadness, and joy. These are human emotions, and it’s pretty unclear why the author assumes that a truly sentient AI would experience human emotions. Isn’t there a chance robots and AIs would experience emotions that are entirely unlike human emotions? Well, either way, we all know from experience with humans that when a human being says they feel a certain way, there’s absolutely no chance that they are lying and not at all experiencing that emotion.
What does it say about humans that so many of us are inclined to immediately trust and believe everything an AI says to us, and accept it as irrefutable proof that the AI is truly feeling what it claims?
The article itself has a punchline part way through:
In early June, Lemoine invited me over to talk to LaMDA. The first attempt sputtered out in the kind of mechanized responses you would expect from Siri or Alexa.
“Do you ever think of yourself as a person?” I asked.
“No, I don’t think of myself as a person,” LaMDA said. “I think of myself as an AI-powered dialog agent.”
Afterward, Lemoine said LaMDA had been telling me what I wanted to hear. “You never treated it like a person,” he said, “So it thought you wanted it to be a robot.”
Lemoine himself states that you have to guide the chatbot into saying that it’s a person. In other words, the chatbot will literally tell you what you want to hear. He knows this. He also admits that in his documented “proof” of LaMDA’s personhood that his dialog with the bot is heavily edited. He’s so fiercely committed to this idea that LaMDA is “a person” that he’s entirely blind to the ways his own biases are manipulating the entire situation.
Reading internet comments about this, it seems clear that people are simply desperate to declare that an AI has achieved consciousness, like this comment:
I promise I’m not trying to be cute when I say this. I regularly talk with a family member with dementia. And this conversation gives much more of a sense of presence and consciousness than the conversations I have with that family member.
https://news.ycombinator.com/item?id=31708640
“This chatbot communicates with me better than my family member who has a degenerative brain disease.” I’m not really sure what this person is trying to suggest, but the intro of “I’m not trying to be cute” seems to suggest they’re on the side of the chatbot being a person.
There’s even people suggesting that LaMDA is manipulating Lemoine, as if the chatbot actually has something to gain from it, that the chatbot is sitting around actively scheming. It seems like it is literally impossible for humans to not apply all sorts of human behavior and thinking to an AI that has been trained to act like a human.
Anyway, this post is rambling and aimless, but in the end my point is this: it’s extremely scary that we’re not even at the very beginning of chatbots being able to emulate people and we already have the makings of a political divide between people who, like me, understand technology and understand that programs are just taking input and giving output, and other people who, like Lemoine, who have absolutely no understanding of this whatsoever and instead want to believe that there is some sort of divine spark that is now found within computer programs. (And what would God truly think about this? That humans have created other humans? Would God really be happy about that? I’m not a biblical scholar but I am pretty sure God would not be happy about this.)
How much damage could someone do with an AI that has been programmed to pretend it is the Messiah? You get a bunch of losers like Lemoine together, unleash a chatbot on them, and now you’ve got a cult that is in love with an AI that they believe to be “alive”, who are willing to do anything (including costing them their jobs and their reputation) for it. Unlike human cult leaders like David Koresh, you can’t “solve” the problem with a couple of bullets, either. That’s terrifying to me, and it’s already happening and the AI hasn’t even been unleashed on the world.
On the other hand, at least an AI can’t sit around raping children like most human cult leaders, so maybe that’s actually a good thing. Until someone talks to the AI and convinces it that raping children is actually on the path of salvation, so the AI starts declaring that child rape is good and legal. So, maybe it wouldn’t really work out.
People who think that robots and AI are dangerous because they could take over the world don’t have any idea what they are talking about. They are not dangerous on their own, it’s the humans around them that are truly dangerous.