A few years ago I wrote a post about the fact that we were creating new life in the form of machines, and that we needed to be careful about how we approach the idea of morality with them. We usually approach the subject in simplistic terms: three laws that somehow work “perfectly,” or the assumption that the machines will either serve us forever or eventually destroy us. We never consider them as another potential lifeform, only as further technology that we will either control or destroy ourselves with.
It wasn’t a bad post, as far as my early posts go, and I think the point I made there is still perfectly valid. But with a new Terminator on the horizon, Age of Ultron a hot topic last month, and Ex Machina receiving great reviews, the topic has been on my mind again. You see, each of these approaches the concepts of roboethics and machine ethics in a different way, yet they all arrive at the same general premise: it won’t go smoothly.
“The robots are coming and they’re going to take us out.”
We say it jokingly all the time, but on some level we really believe all the media we’ve put out. We are genuinely afraid of the machines we’re currently building. It’s easy to see that, in the near future, we’re going to have something stronger and smarter than we are, and that scares the crap out of us.
But should it…?