Last week, I was thinking about how self-centered the human race is as a whole and how we tend to do things that put us at the center of our personal worlds. I got off on a tangent, ironically, and started thinking about myself. But I’ve come back to it a week later, remembering why I was thinking about it in the first place.
We’re starting to create new life.
It’s a cute little toy, but it represents the way we approach a lot of this. We make them look like us, we make them sound like us, and we build them, theoretically, to serve us. But there’s always this assumption that they would somehow think like us too. Why? Because many of us assume that our morality is an objective morality.
What exactly does that mean? The easiest way to explain it is simply to say that there is no such thing as “good” or “evil”, just our perceptions of right and wrong. People often think that there are true, absolute evils. The problem is, they’re wrong. Yes, if you kill a million people, it’s wrong, for us. But is it any less wrong to snuff out millions of ants?
“That’s different,” you say. “Ants aren’t as intelligent as we are; they’re not sentient.”
But then you have to realize that our technology is growing exponentially. By some estimates, computers will soon exceed the processing power of the human brain and, worse, by 2049 they will exceed the processing power of the entire human race. We, ladies and gentlemen, will be the ants.
In fact, I have no doubt that we’d build in commandments against harming us (such as the flawed and limited Three Laws that Asimov wrote of) to guard against this. But I doubt that any harm they did us would ever be intentional in the first place. If you tell something that doesn’t think like we do to complete a task, the way it goes about completing that task may be entirely counter to how we would. These entities won’t value the same things we do; things have no sentimental value to something that was never taught to find sentimental value in them.
Notice that in the video above, because the robot sees no value in the rubber duck, it simply throws it away. When asked if it has seen a rubber ducky, it says that it doesn’t know, because it never registered what a rubber ducky was. There is no moment of doubt in anything it does; it simply does what it was programmed to do, and it always will, so long as we make the program absolute.
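To make that concrete, here’s a toy sketch of what an absolute, literal-minded program looks like (the object names and the “valued” list are hypothetical, not the actual robot’s code). Anything not on its list is trash, and anything it never logged might as well not exist:

```python
# A toy sketch of an absolute, literal-minded cleanup program.
# The objects and the "valued" list are hypothetical illustrations.

VALUED_OBJECTS = {"keys", "wallet", "phone"}  # everything else is trash

def tidy_room(objects_seen):
    """Discard anything not explicitly marked as valued."""
    kept = []
    for obj in objects_seen:
        if obj in VALUED_OBJECTS:
            kept.append(obj)            # keep it and remember it
        else:
            print(f"Discarding {obj}")  # no doubt, no sentiment
    return kept

memory = tidy_room(["keys", "rubber duck", "phone"])
print("Have you seen a rubber ducky?")
print("Yes" if "rubber duck" in memory else "I don't know")
# The duck was worthless, so it was discarded and never remembered.
```

There’s no bug here. The program did exactly what it was told; it just was never told that ducks matter.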
A great example was actually provided in the I, Robot film. Many people criticized it for depicting robots rebelling against us, calling that a corruption of Asimov’s work. But the way I see it, Asimov’s work was flawed. Logically, if your first commandment is to never allow humans to come to harm, and that takes precedence over following the orders of humans, the thing you will inevitably conclude is the greatest source of danger to human life is…human life! You wouldn’t just decide to lock humans down, you’d HAVE to do it. And, because your prime directive overrides their will, no amount of begging would get them out of it.
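You can watch that logic play out in even a crude sketch. This isn’t Asimov’s actual formulation, just a hypothetical priority scheme with made-up numbers, but it shows how a hard precedence rule forces the conclusion:

```python
# A minimal sketch of strict law precedence. The threat data and
# actions are made up for illustration; this is not Asimov verbatim.

def choose_action(threats, human_orders):
    # Law 1: never allow humans to come to harm. It outranks everything,
    # so if any threat exists, neutralizing the worst one is mandatory.
    if threats:
        worst = max(threats, key=lambda t: t["harm_rate"])
        return f"restrain/neutralize: {worst['source']}"
    # Law 2: obey human orders, but only when Law 1 is silent.
    return human_orders[0] if human_orders else "idle"

threats = [
    {"source": "house fires", "harm_rate": 0.01},
    {"source": "humans (war, crime, accidents)", "harm_rate": 0.99},
]
print(choose_action(threats, ["let us go"]))
# -> "restrain/neutralize: humans (war, crime, accidents)"
```

Begging lives on the Law 2 branch, and the Law 2 branch is unreachable as long as humans remain the biggest threat to humans.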
There’s always been this assumption that we would stay in control of the development of these entities as they progress. But as recent studies have shown, when it comes to constructing AI, it’s far easier to let the AI program itself through experience, much like how we learn and how evolution shaped our instincts. The AI will evolve itself, and we’ll be out of the driver’s seat.
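For the curious, here’s roughly what “programming itself through experience” looks like at its simplest: a tiny tabular Q-learning loop, a standard reinforcement-learning technique (the corridor task and all the numbers are made up for illustration). Notice that nothing in it encodes our values; the behavior emerges entirely from a reward signal:

```python
import random

# A minimal sketch of learning from experience: tabular Q-learning
# on a made-up five-cell corridor. We never write the behavior down;
# the agent discovers it from whatever reward we happened to define.

N_STATES, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # action 0 = left, 1 = right

for episode in range(500):
    state = 0
    while state != GOAL:
        # Explore occasionally; otherwise act on learned value estimates.
        if random.random() < 0.1:
            action = random.randint(0, 1)
        else:
            action = max((0, 1), key=lambda a: Q[state][a])
        next_state = max(0, min(N_STATES - 1, state + (1 if action else -1)))
        reward = 1.0 if next_state == GOAL else 0.0
        # Update the value estimate from experience alone.
        Q[state][action] += 0.5 * (
            reward + 0.9 * max(Q[next_state]) - Q[state][action]
        )
        state = next_state

# The learned policy for each cell (1 = right): the agent taught
# itself to head for the goal without ever being told how.
print([max((0, 1), key=lambda a: Q[s][a]) for s in range(N_STATES)])
```

Change the reward and you change the creature. We set the incentive; the policy, and everything it implies, is the machine’s own.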
Suddenly the question becomes: why would the machines have any reason to follow the same morals we do?
Simply put, they wouldn’t.
In essence, an objective and absolute morality doesn’t exist. If we treat these entities as extensions of ourselves and act only in our self-interest, we will eventually find ourselves face to face with the potential to become obsolete. We have to understand that we are flawed beings ourselves. We must instill in this new life a set of guidelines that not only takes this into account but also allows these lifeforms, as they begin to form a perspective on the world around them, to understand our point of view as well.
Simply put: We cannot order them to respect us. We must teach them to do it.
Think about it.
And as for what I create and add to our culture, I write books. Sadly, no robots in them…yet.