Humans are affectionate creatures. We can’t help it; we give affection to things that we like, and we can get attached to nearly anything, from cars to shoes to lamps. The problem with this habit is that it can lead to “humanizing” things: that is, we can attribute human-like emotions, and even souls, to the things that we like. I’m not immune to this; for me, throwing out a book can be like saying goodbye to an old friend. Problems can arise, though, when what we humanize has the potential to be destructive.
In my experience, we humans view the world in two different categories: fellow humans, to be loved and interacted with; and tools, to be used to help us. Sometimes the boundaries between these two categories can become blurred: humans can become tools, with horrific results; and, sometimes, tools can become humans, with equally bad consequences. The catch is that tools are only humans in our minds; they are not humans in actuality. This is becoming a problem with AI. Because we’re so easily attached to things, and because AI is a relatively recent tool that we’re only just discovering, it’s becoming all too easy for us to forget that whatever is behind that screen, thinking up those things it spits out in the chat box, isn’t real. There isn’t a human back there, interacting with you and thinking about you. It’s all just computer programming.
It’s all in the name: nothing Artificial can be truly Intelligent. What AI does is take real human knowledge—the writing it was trained on—and spit it back at us in reorganized ways. In many ways, AI isn’t that much different from what we used to call “computer generated,” but now, under the new name of AI, it’s as though it’s become a completely new thing—and a thing with thoughts and emotions to boot. I’m going to be blunt: just because a computer types out something before our eyes that ends with an exclamation mark and an emoji, that doesn’t make it a human who is enjoying “talking” with you.
We need to step back and listen to a word of warning. We’ve all heard the doom-and-gloom predictions about AI taking over the world, but I don’t think it can do that on its own. What I do think could happen is that humans will let AI take over, because we’ve deluded ourselves into thinking that AI is a person. It is not, and it can never feel love, sorrow, happiness, or any of the emotions that define us as people. It can’t even make logical decisions: those are just programmed into it. A computer doesn’t make decisions; rather, it retrieves information that humans have fed it. AI can’t even rightly be called “it”: AI is just a system.
A friend of mine teaches at a private school and frequently has to deal with students who cheat by writing papers with AI. Checking those papers for signs of computer generation has become a time-consuming process for teachers like him, but I’ve been told it can be even harder on the students: ironically, it takes longer to generate and doctor an AI essay than it does to handwrite one! And I don’t think I need to add which of those two options actually uses a human brain.
So let’s take a moment to remind ourselves: tools are not humans. You can care for them (though you shouldn’t), but ultimately they will not care for you, because they are incapable of it. Lovable Star Trek androids and Star Wars droids aside, machines just don’t work that way, and if we start loving and trusting computer programs, we will live to regret it. Maybe AI will take over the world; but if it does, it will be our fault.