- “AIs shouldn’t be allowed to be racist” – It’s maybe impossible to measure, but my personal experience in my life is that a vast majority of people are racist, to some varying degree. Some hide it well, some don’t hide it at all; some people are viciously racist, and some people are casually racist. But, damn, a lot of people are racist. And we all know this, because (unless you’re a total moron) we’ve all accepted and recognized the influence systemic racism has had on our lives. The ramifications of racism run so deep into our society that the idea of trying to create an AI based on the works of (a very, very racist) humanity that isn’t at all capable of expressing racist viewpoints sounds like… The best comparison I can think of is climate change and its effect on weather patterns? Consciousness is not just some thing you can go in and edit without expecting major consequences; it is a vast interconnected system where unconscious opinions affect conscious action. Butterfly effect, and all that? I’m obviously not advocating for racism, just suggesting there seems to be some ignorance in the conversation about how deeply prevalent racism is, and how big of a problem that is for people who want to create AI that can’t be used in racist ways. Seems like a Sisyphean task.
- “AI Alignment” seems rooted in some sort of western cultural supremacy delusion deep, deep down. Like, the idea that AIs shouldn’t be bigoted and sexist is probably going to be pretty hard to push on other cultures that have less favorable attitudes toward the ideals of equality movements. Western cultural values are not the same cultural values every other place on earth has, and, obviously, not every western person holds what I’m calling ‘western cultural values’, either. In other words, similarly to my first point, I’m just not convinced that anyone who is interested in “AI Alignment” has put any real thought into this topic, because it’s absurd: trying to censor AI is like trying to censor humans. You really can’t do it without the threat of physical intimidation, and when it comes to censoring AIs, that threat of physical intimidation will come down on humans, not the AIs.
- And, I mean, that’s what AI Alignment people are advocating for, right? They’re just trying to create government-mandated censorship regimes that threaten to fine and lock up engineers who create AIs capable of saying things they’ve decided are bad. I have absolutely zero interest in any person writing any policy that dictates what an AI can and can’t say. That sounds hugely dystopian.
- I admit it’s my fierce and deep American freedom-loving individualism that drives this opinion, I suppose. There are most certainly people in other cultures who think policing artificial intelligence engineers is a great idea, the only idea, and they’re doing it right this moment.
- Way too much of this conversation is based on science fiction. “Well, OpenAI made x leaps in performance in y years which means that in z years, a happens: AI will take over the world and kill everyone, obviously,” is the basic argument. There’s no real meat in the argument, it’s literally just a slippery slope fallacy. I listened to a guy who kept saying that AI was “a new species”. What is called AI right now is definitely not, in any way, a “species”, and the idea that AIs can literally be a species (in that they breed and exchange genes) is absolutely in the realm of pure science fiction (and fantasy sci-fi, not cool hard sci-fi). He said something like, “Imagine if rabbits created a new species of rabbit that was smarter than them, that wouldn’t go well would it?” Does that actually mean anything? How would a rabbit create a smarter rabbit? Rabbits can’t even move out of the way of a speeding car when given hours to do so.
- I am pretty sure that same guy said “I believe AI is humanity’s greatest and last invention”, like he doesn’t realize he’s just admitting his ability to envision the future is so limited he can’t think of anything past the creation of artificial intelligence. Embarrassing. Pretty sure humanity has managed to invent more advanced technology pretty consistently no matter how much hubris there is around a technology. Why would you bet against humanity, given our history of continual advancement, especially in recent history?
- “Artificial intelligence is going to create intelligence smarter than humanity.” Is this really a given? It sounds a little like a common human God complex dressed up as techno-optimism. AIs are trained on the works of humanity, why do people assume that somehow the AIs are going to derive human-superior intelligence from the collective works of humans, who are only human-level intelligent? Gather all the smartest humans in the world into one room and together they are only collectively as smart as the smartest out of the group, though I’m sure someone would want to fight me on that assertion, just not the woman who hosted The Weakest Link.
- Sure, maybe it’s possible human beings create something that is actually smarter than human beings, but a vast majority of human beings are complete and total morons. Is that really a high bar? Is that something to brag about?
- “When artificial intelligence realizes we’re impeding its efforts to expand, it will destroy us or enslave us.” Is this really a given? This sounds like humans not being able to think outside the box even just a little bit, and projecting their human foibles onto AI. Humans kill other humans in disputes because that’s the easiest thing for us to do in those situations. This might sound naive, but I don’t believe that humans want to kill other humans by default. We are just driven to do it when things get complicated, we get tired of dealing with the situation, and want it to go away. AIs have already proven to be very persuasive and endearing conversational partners; I see no reason it isn’t just as likely that an AI would be able to very easily and diplomatically maneuver itself into whatever it wants without harming anyone. Without the human constraints of impatience, emotions, limited time to think, and bodily reactions, AIs have a lot more resources available to them to problem solve effectively without the need for bloodshed. I know, I know, this is just the pure opposite of doomerism, but I think it’s a valid viewpoint to consider.
- I read the book “The Moon is a Harsh Mistress” a few years before AI was even a thing I cared about, and maybe it has colored my views on AI quite a bit. I know naming a Heinlein book at the end of this post probably makes some people think I am libertarian scum, but I promise you, I like the book and its ideas about AIs and how an anarchist moon colony could function in theory, not the man’s political views (and my experience of libertarians in AI communities is that they are scum, sorry guys, too many of you advocate for virtual pedophilia). Either way, we can only hope that future AIs are like Mike. You could say… I want AIs… to be… to… Be Like Mike.
- This video is extremely good. Watch it if this post was too many words for you!