The Benevolent Dictatorship Of Our Robot Overlords

Presented with a new project to work on in the coming weeks, I came to consider several things I’ve blogged about recently. When dealing with the future and ideas of where we’re going as a race, we often find ourselves in a scared, frightened position. It makes sense: the future, especially an unknown future, can be terrifying even when all common sense and logic tells us it should go another direction. We’re constantly afraid that the world itself may turn into a Mad Max-style wasteland, or that an arrogant politician may become the next Hitler, or that we may end up in World War 3 over the actions of a single nation.

But in all of these cases we can look at the history of the world and the shape of what has come before to see that things aren’t always as bad as they feel. The world was once hotter than we’re making it and it managed to survive, so it stands to reason that climate change is more a threat to us than to the planet itself. Hitler’s movement was born out of a fairly unique set of circumstances in which the world’s economy and social climate were far worse than they are today (for now). And the World Wars were both started by a series of terrible decisions that left the world’s powers split along clearly drawn lines. So, as bad as things may get, the conditions aren’t quite right for most of our greatest fears.

But there are other fears of the future where we don’t have that historical frame of reference to calm ourselves. We have no idea what would happen if an asteroid were found tomorrow to be headed right for us. We have no logical frame of reference for what would happen if we discovered that aliens exist and are trying to make contact. No one’s entirely sure of the full ramifications of the continued development of artificial intelligence. These all raise interesting questions with few (if any) concrete answers. In fact, some potential answers are so far outside our normal frames of reference that we have a hard time really picturing them.

For instance: if the machines do take over the planet, are we sure we’d even see it?

Not Quite Human

The old trope of the machines ruining our lives has existed since long before science fiction was around to make it interesting. Throughout history people have repeatedly looked at progress and declared that it somehow made the world lesser. Even the printing press was once proclaimed to be the downfall of civilization. So as much as we come to appreciate these machines as we acclimate to them, there’s still a streak of technophobia running through us all. And no aspect of our culture encapsulates this fear better than the way we so often portray artificial intelligence as the doom of us all.

One of them seems… unhappy.

But as I’ve said in the past, it’s very unlikely that artificial intelligence would ever have reason to take a swing at us. Humanity’s history of violence is driven primarily by biological needs and emotions, neither of which our robots would ever experience. In fact, because we would be the ones creating and programming them, they would be driven by a need to keep us “okay”. Sure, this means they might look at our self-destructive ways and begin trying to stop them, but why would that require force? What I’ve started to consider as of late is that most scenarios for a supposed machine takeover could be very quiet instead.

The fact of the matter is, we already let machines do a lot of things for us. They control how we travel, how we bank, and even what we see and hear. If the machines were to someday become sentient, they could manipulate our society on so many levels without our ever realizing it. Think about how easy it would be for a machine that was aware of you and your activities to start slipping in suggestions about what you should or shouldn’t think. Imagine how easy it would be for the machines to stop you from doing something by simply rendering a server unresponsive or making sure the transaction or request just gets “lost”. For all the pictures of nuclear war and armed revolution, it’s more likely that our self-driving cars would stop taking us to the bar, or that our computers would keep us from buying that shit we really don’t need from eBay.


We often have this notion that intelligence would inevitably lead to our own kind of behavior. We base this idea on our own nature, ignoring that the machines wouldn’t have that nature to begin with. But consider that we created them and gave them purpose: those machines would want to do their job the very best they can. Even when machines start to make other machines and develop new AI, the idea that they would suddenly turn on us would be bizarre. Why would an AI make other AI that would go against its own purpose? If their job was, ostensibly, to take care of us, then the smarter they got, the more likely they would be to keep developing in a direction that meshed with their original purpose, provided that purpose had room to grow.

The issue then becomes a question that’s far more interesting to me than the idea of ASIMO marching down the street with an assault rifle: what do we do then? We already know that automation can cause our labor market to slowly fall apart, our lives gradually becoming more dependent on the machines as they take over jobs that once required human hands. Even now we’ve seen manufacturing jobs automated to the point that people aren’t exactly sure where the jobs went, and fast food restaurants are next in line. Soon we could have every service and manual labor industry filled by machines. So what happens to us when they can start to fill roles requiring a more human touch?

What troubles me is the idea that machines could actually be better in some places than we are. Though they wouldn’t necessarily take over our more creative acts (immediately), anything that requires logic, reason, and a steady hand could quickly start to be overtaken. Machine-assisted surgery is already on the rise, with the steady “hand” of a machine allowing surgeons to work with greater precision than ever before. Research is constantly assisted by computers these days. And even our current problems with law enforcement training and the legal system are rooted in factors that wouldn’t affect a machine. ProsecutionBot3000 wouldn’t worry about its win ratio, Judge-o-matic XL would take an even-handed approach based in the law rather than its personal opinions (eventually; the current process still sucks), and Robocop wouldn’t go in guns blaz-

Well, he’d do it for the right reasons.

So the fact is, even if they weren’t automatically handed the job, they could start to take it over slowly as they proved more reliable than we are. Even in places where people reject machines now, we’d start to prefer them as they got better. It’s frustrating to punch numbers into a phone just to walk a machine through a chain of programmed responses, but eventually they’ll understand us and respond naturally. And once they were in charge of our day-to-day lives, they could start to manipulate those lives freely. They could alter every aspect of our lives with a decision and guide us to do things while convincing us it was our own idea. Siri could one day go beyond telling you how to do something and start telling you why. And, worse, because by then we would trust them so completely, we wouldn’t question it.

Once again, I don’t fear the coming of the machines; I quite look forward to it. Our lives have only improved over the centuries as science has allowed us to overcome things that couldn’t be overcome before. But the fact remains that as we approach the brave new world ahead of us, we need to carefully consider just what we tell the machines to do and what we allow them to do for us. We need to treat AI with caution and respect rather than fear, essentially like a cursed monkey’s paw. Our wishes, whatever they may be, need to be worded very carefully and exactly before we get that snowball rolling. Because, whatever it is we ask of them…

They’re going to find a way to make it happen.

(I write novels and dabble in screenplays. Currently, I’m working on a project involving robots [finally], though you may never see it. In the meantime, look at my twitter while the computers still let you.)