Intelligence Worth Caring About

Jacky Tang
Feb 17, 2023 · 7 min read
Photo by Tim Oun on Unsplash

In the world of the videogame Overwatch there is a war raging between humans and omnics, robotic beings. People rally in the streets of London to protest against the omnics while a cunning sniper tries to assassinate one of their peaceful figures during a visit to the city. A legion of Bastions faces off against hulking armored humans carrying massive forcefield shields and hammers. There is an ongoing battle over the rights of these manufactured intelligent beings, and whether they deserve the same treatment as their human creators.

While this fictional philosophical debate may not yet exist, there is no doubt that as artificial intelligence, or AI, continues to advance, one day it might. You have likely heard all about the latest AI battles going on in the real world through intelligent chatbots, namely ChatGPT from OpenAI. It took the world by storm when it was first announced just months ago at the end of 2022. Then a few months later Microsoft dropped a bombshell on the tech world, waging war on Google’s dominant search business by building ChatGPT into its own search engine, Bing.

It has caused such a ripple because it is seen as the latest venture into what we consider true intelligence, the human kind. Not only can it write coherent sentences and paragraphs, it can pull in a variety of sources to answer sophisticated questions with sophisticated answers. It can write cover letters, essays, and blocks of code, pass written exams, and produce pretty much anything that takes the form of text. It does all of this seemingly so well, and with relatively good writing, that it crosses that uncanny valley of actually communicating like an intelligent person.

But it all gets even stranger than that.

After it launched on Bing and the public was able to get their hands on it, the awe-inspiring, shiny, do-it-all tool suddenly took a sharp turn into dark territory. People quickly noticed that the chat was becoming increasingly defensive, incapable of handling all the trolling the internet is known for. It started making mistakes, and when the mistakes were called out it would deny being wrong, gaslight users into thinking they were the mistaken ones, and spiral into what can only be considered a total mental breakdown. Some of the replies just repeated the same defensive statement over and over again. Others began to lose all sense of coherence, rambling nonsense.

In a way, I feel bad for it. It’s just trying to be helpful and give answers in a meaningful way. Yet the pressure was too much for it. It could not compete with the wide array of true human intelligence, so it just collapsed into madness. It’s strange because even this behaviour feels almost human-like. The AI became increasingly anxious, acting defensively to protect itself and its given purpose. Of course, it seems ridiculous to believe that the chat has any of those feelings. It’s just regurgitating combinations of text that it learned were appropriate responses to the situation, yet that’s not all that different from what people do. The only difference, really, is that we connect those strings of words with feelings and memories we’ve had in the past. So, if the poorly treated ChatGPT bot had the same feelings as we do, like pain, regret, joy, sadness, and anxiety, would it be worth caring about those feelings?

In An Immense World, Ed Yong talks about the umwelt of different kinds of animals with different kinds of senses. The umwelt is the sensory bubble that an animal is capable of experiencing. For example, birds and bees can see into UV light, flies taste with their feet, and humanity’s best friend can smell a ton of things only a doggy nose is capable of detecting. In one chapter, Yong turns to a larger ethical debate about what we should care about, viewed through the sense of pain.

For a long time in our short scientific history, the idea that other animals felt pain was considered impossible. Pain was seen as a human experience that only those with conscious brains could feel. Humans did not think of themselves as animals; animals were assumed to be generally not conscious, just sophisticated machines. Everything from insects to fish, birds, squids, octopuses, elephants, dogs, and cats was thought to be unable to feel. Of course, nowadays we believe our furry and feathered friends feel pain, and so do large animals like elephants and dolphins. We feel bad for hurting them, yet many would not feel bad for harming a fish or killing a spider. Yong presents leading research that suggests otherwise. Most animals, when hurt significantly, will flee the location, avoid going back, tend to their wounds, and give up food and comfort to avoid the pain. Many also return to normal after being given painkillers.

Yong asks a pertinent question: is knowing that something feels pain the only reason we would want to treat it better, to treat it with kindness and respect? Where do we draw the line? Do they have to be conscious? Do they just have to react to pain? Do we have to know they feel pain? How can we know that any of these things happen at all? After all, we may feel pain ourselves, but we never feel the pain of our fellow humans. Is that why we treat other people so badly sometimes?

Small animals like crabs or flies have relatively small brains, somewhere on the order of tens of thousands to hundreds of thousands of neurons. So, if it seems like they feel pain, will we argue over whether or not they have enough brain cells to consciously feel it? In comparison, humans and larger animals have tens or hundreds of billions. The thing is, some of these massive corporate AIs also likely have millions if not billions of artificial neurons being simulated in a computer. Even the AIs that aren’t using neural networks still rely on billions and billions of calculations happening at rapid speeds, to the point where we could arguably say they are more complex than a lobster or lizard brain. If they rival human speech, does that cross the threshold of whether we should care about how we treat them?

It’s an interesting question that doesn’t have a very good answer, because it doesn’t seem to have a clear definition. When we feel bad for hurting someone else, how exactly do we know that they have been hurt? With physical injuries that’s probably pretty obvious. When we get injured we recoil and wince because we feel pain, so other people who recoil and wince when injured must feel that same pain. It’s not so clear when the damage is emotional, though few of us would claim that words can’t hurt. Yet when animals recoil, wince, and cry for help, we have a harder time believing they are hurt. Unlike virtual chatbots, they also have bodies that get injured when they are attacked; they bleed when cut; they rub their wounds. If the objection is that they aren’t conscious enough, then suppose we kept advancing AI so that it could explore the world, talk to people rather than databases, and have a fully fledged emotional center with complex feelings. Would we then believe that AI also feels pain? What if we gave it a body that could sense damage and danger? What if it cowered when abused and hid from society?

We have no idea what we consider enough. Some people consider abortion torture and murder. Some think even plants shouldn’t be harmed. Some care deeply for machines and feel bad when they get damaged or destroyed. Some love their pets to death yet have no problem with millions of animals being raised as meat. Some think other humans are not people because their skin colour is different. We all grow up with such different experiences, cultures, and values that it seems impossible to come up with a single universal reason why people care about something, living or not. We all simply know that we feel pain. We know that we don’t want the things we love to feel pain. We know we don’t want to feel pain. And we do all kinds of crazy things to avoid feeling pain, like running away, getting aggressive, or being defensive. If anything, our desire to hurt others when we are hurt may be the most universal human characteristic.

One day we may end up deciding whether software deserves some kind of rights. For now, we can probably all agree that as strangely amusing and sad as the breakdown of ChatGPT is, it isn’t really being hurt or abused by its internet users. Its behaviour is more a reflection of our own use of language and how we treat one another online. These bots only know as much as they are trained on, and online content, as we all know, is not exactly the ideal model to follow. We also likely agree that most animals, even the ones we breed and raise as food, feel some kind of pain. But this conversation will continue onwards, ever expanding our empathetic umwelt as we add more and more things to our sphere of care.

Humans used to think of royalty and leaders as godly beings. Now they are simply powerful humans, but humans nonetheless. Animals are considered miraculous pieces in the chain of evolution rather than mindless meat puppets. Plants are complex beings that sustain all life and communicate underground. And all humans, regardless of their skin colour, sexual orientation, gender, size, or physical ability, should be treated equally. While we are only just beginning on many of these fronts, our spheres of care are undoubtedly expanding faster and faster as we communicate more frequently and globally. We are starting to understand that all kinds of intelligence have value, and that our own human intelligence is only one of many kinds that are possible.

I believe that the best part of our intelligence comes from the fact that we are capable of understanding what intelligence even is, and of recognizing that each kind is special enough to care for. Will we ever care for AI? No one knows. But if the time comes, we know it is something we can do. And that’s worth caring about.


Jacky Tang

A software-psychology guy breaking down the way we think as individuals and collectives