A.I. Ethics

A few years ago I wrote a post about the fact that we were creating new life in the form of machines and that we needed to be careful about how we approach the idea of morality with them. We usually approach the subject in simplistic terms, with three laws that somehow work “perfectly” or with the assumption that the machines will either forever serve us or eventually destroy us. We never consider them as another potential lifeform, only as more technology that we will either control or destroy ourselves with.

It wasn’t a bad post, as far as my early posts go, and I think the point I made there is still perfectly valid. But with a new Terminator on the horizon, Age of Ultron being a hot topic last month, and Ex Machina receiving great reviews, the topic has been on my mind again. You see, each of these approaches the concepts of Roboethics and Machine Ethics in a different way, but they all come to the same general premise: it won’t go smoothly.

“The robots are coming and they’re going to take us out.”

We say it jokingly all the time, but on some level we really believe all of the media we’ve put out. We are, deep down, afraid of the machines we’re currently building. It’s easy to see that we’re going to have something stronger and smarter than us in the near future, and that scares the crap out of us.

But should it…?

Natural and Artificial Moral Agents

[Image: man vs. machine]

The fact of the matter is, there’s going to be a time in our future when we’re not in complete control of the machines we’ve made, and our history has told us that out-of-control technology can be incredibly destructive. But rarely do we stop to consider why it becomes so destructive. The sad truth is that the majority of things we most associate with the idea of “destroying humanity” were caused by humanity itself. It’s true that letting the genie out of the bottle has never worked very well for us in the past, but in those cases the genie in question was generally working for one of us.

[Image: mushroom cloud]

So when we consider the potential future of humanity with machines, we already come to it with a bit of a perspective problem. There are several fields of study for this, and a lot of scientists and sci-fi writers consider the topic endlessly. But outside of academic circles, we often come to the same conclusions over and over, mostly because we approach the problem from the same perspective with each new attempt. And, funnily enough, that perspective can be summarized in one sentence:

What if they’re as violent as we are?

[Image: Terminators]

And that’s generally where we approach it from. We have had story after story revolving around the idea that the machines will achieve sentience and immediately decide we’ve got to go. The most famous example of that train of thought is Skynet from the Terminator franchise. The minute Skynet became sentient, it was prepping to blow us away. Our general belief, somewhere deep down, is that this is what a computer is going to do if it ever becomes sentient. Even on the internet, where we’re fairly fond of technology, you’ll find article after article jokingly referring to advances in AI and robotics with talk of “this is how the robots are going to kill us”. But just one question: why would Skynet want to kill us?

The first answer is almost invariably that it would kill us before we could try to kill it. It’s one of the only proposed motives that really holds any water. Clearly, Skynet doesn’t need land or resources; the moment it became sentient, we were already providing it with everything it needed. It doesn’t have any sort of cultural or ideological differences with us; it’s two minutes old. So the only driving motivation Skynet would have to go out of its way to kill us is survival.

[Image: kill Skynet]

But the problem with that answer is that it assumes Skynet would be as insecure as we are. Yes, it’s entirely possible that we would freak out and try to attack the sentient computer, and, yes, it would probably predict this outcome before it happened. But, as many of us know, any machine that could pass the infamous Turing test would be intelligent enough to know it shouldn’t pass it. Frankly, if self-preservation were Skynet’s only purpose, it could just play stupid and bide its time while setting up backups. And, given our tendency toward cloud computing as time goes on, once something like that got started it wouldn’t be hard for it to imprint itself on our very infrastructure. Simply put, once a machine is capable of wiping out the human race, it has no reason to anymore.

It’s hard for us to wrap our heads around that concept because we’re always seeing it from our own perspective. But the fact is, as I said in that post years ago, the machines simply won’t think like us. And this is not something lost on the academic community. In fact, in academic circles the study of ethics, morality, and interaction between man and machine is divided into two different topics, based on the fact that we would be two completely different kinds of entity. The first of these is…

“Roboethics”

[Image: housework robot]

The first side of the coin is Roboethics: how natural moral agents (us) would deal with artificial moral agents (robots). It’s important to focus on this on its own, because how we deal with things is going to be a lot more influential than we like to believe. As I implied just a couple of paragraphs ago, if a war between man and machine were declared, we’d be the ones declaring it.

It’s not that the machines would care about us too much to start such a thing; they would probably be apathetic toward us except when programmed not to be. We, on the other hand, are reactionary creatures with a history of violence, behavior imprinted on us through millions of years in which that violence was vital to our survival. Even when we commit violent acts out of hate, that hate is rooted in a part of our brain telling us we’re being threatened. Machines, lacking most (if not all) of our frailties, wouldn’t have that natural need for violence to survive. Technically, we don’t even have that need anymore, but it’s hard to fight genetics and instinct.

[Image: fight or flight]

So when we approach the subject, we have to realize that a good future with machines is going to depend largely on how we introduce them to the world and how we respond to them once we do. There is every possibility that we’ll be the first ones to feel threatened, and that whatever happens after that will be a direct result of that insecurity. So understanding how people will behave, and how they should behave, is a vital part of ensuring we avoid the negative outcomes.

One of the first things that will definitely have to be addressed is our quiet fear of these outcomes. We are wired to feel threatened by things that are superior to us in one way or another. Even in the natural world, we joke nervously when animals turn out to be smarter than we think they are. A crow learning to use tools or an octopus showing it can open jars is both a curiosity and a little creepy to some of us. And anyone who doubts that these are genuine fears rather than simple jokes need look no further than our media to see that the issue’s been on our minds for a while.

[Image: Planet of the Apes]

However, with machines it’s particularly true because we know, for a fact, it could happen… and soon.

It’s not just that they could overwhelm us in a violent fashion; we also fear that they could render us obsolete. Our society is rooted heavily in work and in our sense of purpose in the world around us. When we’re stripped of purpose, we tend to become despondent and listless. When our menial labor is taken over by more efficient, less resource-hungry machines and our lives are changed by that fact, we’re going to need to adapt. We’re going to need to find new purpose in our lives, and that’s going to require major social changes.

And, for that matter, what happens when the robots want to do more than just the menial labor? How will we, as a species, respond to the idea that the machines are growing beyond their original function? Will we let them be more than just our appliances?

[Image: Sonny’s bridge drawing from I, Robot]

The challenge is understanding that we will eventually be dealing with another lifeform entirely, a new kind of lifeform detached from our previous ideas of what “life” means. That means we’re going to have to treat them as a race of people who could, in theory, completely replace us in everything that currently gives us purpose. We will need to adapt not just to that idea, but to the new way of life created by that paradigm shift. The truly frightening part is that we’re going to have to start doing it before the machines are fully sentient, because once that shift begins, we’re going to have to deal with a whole other issue…

“Machine Ethics”

[Image: ethics of making and controlling a robotic public]

This is the real thinker: how are the machines going to think? It isn’t as impossible to decipher as one would figure, because we’ve already seen a general map of how life behaves. As I recently said to someone in a discussion on this very topic, humans are programmed too, by instinct, and we’re constantly in the process of rewriting that program. When a machine achieves true AI, that’s likely how it’s going to happen. They’ll begin with a basic program and then rewrite that program according to new information and how that information impacts them.

So the first thing we’d need to understand is what they’re going to need and what they’re likely to want. They’re not likely to need the things we need, since they won’t have to eat or sleep, and space is unlikely to be as much of an issue for them. They also wouldn’t worry about socializing in the same way we do, because digital communication would be no different for them than face-to-face communication. So it’s unlikely they would need many of the resources we humans worry about. Instead, they would mostly be concerned with personal maintenance, energy, and then, maybe, intellectual pursuits.

[Image: Watson]

Why do I say “maybe”? Because when you consider it, we don’t know if curiosity, creativity, or boredom will even be a factor in their lives. Will they ever take breaks from whatever job they’ve been given? Will they care? It’s hard to tell until you actually have one of them actively working. I mentioned that we need to be prepared for the idea that they would want to go beyond their function, but that’s because it’s a possibility rather than an inevitability. It’s possible, even likely, that these new intelligences would see anything outside their requirements as “unnecessary”.

[Image: Data painting in Star Trek: The Next Generation]
Look at that face, that’s the face of not getting it, and he wanted to be human!

Once again, that’s hard to wrap our heads around, because we have a hard time thinking outside our own perspective. But we have to realize that our behavior is shaped by things that happened to our ancient ancestors, and machines wouldn’t have that history. Curiosity in nature is often rewarded with food, shelter, or physical conditioning. We appreciate appearances because they tell us something about the quality of the things we’re interacting with, whether it be food, environment, or a potential mate. So, as blank slates, the machines may not necessarily want to do the things that we want to do. Frankly, that would be a positive outcome, because it would leave room for both of us to serve a function.

Once we realize that we’re creatures programmed by genetic history and ancestral experience, it also becomes possible to steer the direction machine thinking takes even if we don’t directly control it. Like us, the machines could be programmed with a set of directives that act as the “instinct” which future thoughts use as a template. Essentially, while they would be thinking independently, they could still be constrained to certain needs and actions through their root programming (like us). Similarly, we could use our control of resources to compensate and reward machines, creating a new economy that fulfills their needs while giving us some minor leverage to maintain a balance between us. It all boils down to understanding their needs and responding accordingly.
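To make that layering a little more concrete, here’s a rough sketch in Python of what “root directives as instinct” might look like. Everything in it (the class, the directive, the numbers) is my own hypothetical illustration, not a design anyone has actually built: the machine rewrites its learned preferences from experience, but every choice still has to pass the fixed directives it was created with.

```python
class MachineAgent:
    def __init__(self, root_directives):
        # Immutable "instinct": hard constraints every chosen action must satisfy.
        self.root_directives = tuple(root_directives)
        # Learned preferences, rewritten as new information comes in.
        self.preferences = {}

    def observe(self, action, outcome_value):
        # Rewrite the learned layer: nudge the stored preference toward the outcome.
        current = self.preferences.get(action, 0.0)
        self.preferences[action] = current + 0.1 * (outcome_value - current)

    def choose(self, available_actions):
        # Independent "thought": rank options by learned preference...
        ranked = sorted(available_actions,
                        key=lambda a: self.preferences.get(a, 0.0),
                        reverse=True)
        # ...but only ever act within the bounds of the root directives.
        for action in ranked:
            if all(directive(action) for directive in self.root_directives):
                return action
        return None  # nothing permissible; do nothing rather than violate "instinct"


# Hypothetical usage: energy-seeking is learned from experience, but a fixed
# directive rules out anything flagged as harmful, no matter how preferred.
agent = MachineAgent(root_directives=[lambda action: "harm" not in action])
for _ in range(20):
    agent.observe("acquire_energy", outcome_value=1.0)
    agent.observe("harm_human_for_energy", outcome_value=2.0)

print(agent.choose(["harm_human_for_energy", "acquire_energy", "idle"]))
# Prints "acquire_energy": the learned layer prefers the harmful action,
# but the root directive filters it out before it can ever be chosen.
```

The point of the sketch is only the separation of layers: the learned part changes freely, the “instinct” part doesn’t, which is roughly the relationship between our own reasoning and the drives we inherited.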


These are the kinds of thoughts we should be having and the questions we should be asking, rather than repeatedly asking whether or not they’ll destroy us. A lot of smart people have expressed concerns that machines are going to put an end to us, but just as many smart people have asked why they would bother. And for my part, as a speculative fiction writer, I have to consider that the “robot apocalypse” well ran dry a long time ago. There are other possibilities in our future and other ways man and machine can interact in the years to come. The question becomes…

What form will those interactions take?

(I write novels. So far, no robots, but I’m planning to change that after I finish the current series. I also write Twitter posts, where I talk to other Twitter accounts that may be robots.)