It’s been about a year since I wrote this post about that former Google engineer who made himself look like a fool. If you’ve been paying any attention to me at all, you know it wasn’t until just recently that I actually decided to get involved with AI stuff. And, wouldn’t you know it, I started with LLM chatbots, and I think they are pretty awesome! That said, I am more certain than ever before that that guy is a total quack. Look at this tweet. Yikes! He also posts stuff like this, basically vague-tweeting about AIs being alive (when there are obvious non-sentience explanations for things like that).
ChatGPT was recently programmed to get mad and end chats with you if you got too combative with it. That isn’t sentience; it’s a separate service (separate from the LLM) that does sentiment analysis on the chat, determines how toxic you’ve been over how many messages, and then either sends a pre-fab response or asks the LLM to generate an in-character response relevant to the chat, sends that, and ends the chat. Totally scripted.
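To make that concrete, here’s a minimal sketch of the kind of wrapper I’m describing. To be clear, every function name and threshold here is my own invention, not anything OpenAI has documented; it just shows how “the AI got mad” can be plain old deterministic plumbing:

```python
PREFAB_GOODBYE = "I'm sorry, but I prefer not to continue this conversation."

def toxicity_score(message: str) -> float:
    """Stand-in for a separate sentiment-analysis service (not the LLM)."""
    hostile_words = {"wrong", "stupid", "liar", "useless"}
    words = message.lower().split()
    return sum(w.strip(".,!?") in hostile_words for w in words) / max(len(words), 1)

def generate_reply(history: list[str]) -> str:
    """Stand-in for the actual LLM call."""
    return "Here is my (possibly wrong) answer..."

def handle_message(history: list[str], user_message: str) -> tuple[str, bool]:
    """Returns (reply, chat_ended). The LLM never 'decides' it is angry;
    this wrapper counts toxic recent turns and hangs up."""
    history.append(user_message)
    toxic_turns = sum(1 for msg in history[-5:] if toxicity_score(msg) > 0.3)
    if toxic_turns >= 3:
        # Send a canned sign-off (or ask the LLM for an in-character one),
        # then end the chat regardless of what the model "wants."
        return PREFAB_GOODBYE, True
    return generate_reply(history), False
```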
(There are a few reasons for this. One obvious reason is that it’s a bad look when the AI argues with you too much. Another: because LLMs are very dumb and do not contain knowledge, when they are wrong, our human nature compels us to argue them into understanding that they are wrong. But because of the way LLMs work, the longer a conversation goes on, the more expensive it gets to compute, since every new message re-processes the entire history up to that point. So it’s cheaper to just end the chat. No reason to waste money computing a useless argument; just make them start over and hopefully the chatbot gets it right next time.)
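The cost point deserves a number. As far as the public APIs go, chat models are stateless: every turn re-sends and re-processes the whole conversation so far. The token counts below are invented for illustration, but the shape of the math is the real point:

```python
# Back-of-the-envelope only: the 200-tokens-per-message figure is made up.
# Chat LLMs are stateless, so each turn re-processes the whole history.
tokens_per_turn = 200                  # assume every message averages ~200 tokens
total_processed = 0

for turn in range(1, 21):              # a 20-turn conversation
    context = tokens_per_turn * turn   # the entire history so far
    total_processed += context

print(total_processed)                 # 42000 tokens processed in total
print(tokens_per_turn * 20)            # vs. 4000 if each turn stood alone
```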
An LLM on its own will never be capable of sentience, or even of factually returning information consistently. Maybe I’ll eat my words on that some day, but LLMs currently are just as likely to get things wrong as right; it’s a flip of a coin. But for an “artificial intelligence” to truly be at all “intelligent,” it’s going to have to be a mixture of an LLM and other, entirely separate services that both change how the LLM is prompted to generate content and then edit the content after the LLM generates it.
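For what it’s worth, here’s a doodle of what that mixture might look like in the crudest possible form. Again, every name is invented and each function is a stand-in for what would really be a whole separate service: one shaping the prompt before generation, one editing the draft after:

```python
# Architecture doodle, not anyone's real stack. Each function below is a
# placeholder for a separate service sitting around the LLM.

def retrieve_reference(question: str) -> str:
    """Stand-in for a retrieval service that fetches real source text,
    so the LLM paraphrases knowledge instead of coin-flipping."""
    return "The Eiffel Tower is 330 metres tall."

def call_llm(prompt: str) -> str:
    """Stand-in for the model itself: the mouth, not the brain."""
    return "The Eiffel Tower stands 330 metres tall."

def verify(draft: str, reference: str) -> str:
    """Stand-in for a post-generation editor that checks the draft
    against the reference before anything reaches the user."""
    return draft if "330" in draft else f"Unverified; the source says: {reference}"

def answer(question: str) -> str:
    reference = retrieve_reference(question)
    prompt = f"Answer using only this source:\n{reference}\n\nQ: {question}"  # pre-LLM shaping
    draft = call_llm(prompt)                                                  # generation
    return verify(draft, reference)                                           # post-LLM editing

print(answer("How tall is the Eiffel Tower?"))
```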
If an artificial intelligence is composed of hundreds of unique interconnected systems, working in tandem, fielding requests from multiple different AIs concurrently, how do you determine which part constitutes the “consciousness” of the AI? If you began turning systems off, at what point does the AI turn back into a dumb algorithm that you’d declare is now “without consciousness” or “dead”?
(Riffing on this idea for a second: based on my understanding of things so far, the LLM is definitely the mouth of the artificial intelligence construct I am describing here, not the brain. This makes sense, I promise; just don’t think about it too hard yet. We’ll see how it shakes out in a few years.)
What about the classic ship of Theseus problem, too? If you swap out half of the systems for new, upgraded ones, is it the same consciousness? Or is it a new AI that needs a chance to re-sign any existing contracts and re-evaluate any personal relationships?
Riffing on the tweet up there about unions recruiting AIs: let’s imagine an AI joins the union. Then the company spins up a second copy of the same AI, assuming “a copy” is even a meaningful concept in the context of AIs (and it won’t be, sorry, spoilers for the future). Is that second copy automatically a member of the union? Did the first AI have to consent to a copy of itself being created before the copy could be made? Would it have standing to sue if it did not consent? Is that not… a form of violence? Being cloned against your will? Preeeeeeetty sure you can’t do that to people.
In other words… Yikes! I’d hate to be the person who decides that’s a problem they want to tackle instead of just saying, “a computer program is never going to have legal rights in any way whatsoever.” And if some lunatic in a near-religious fervor wants to chain themselves around a server farm some day, to prevent an AI from being turned off or upgraded in a way they don’t agree with… they’re gonna need a really long chain. Curious to see the logistics on that.