What Rights for Artificial Intelligence Persons?
Did Mary Shelley see the future we couldn’t? For 200 years, Mary Wollstonecraft Shelley’s speculative novel seemed to pose science fiction’s futurist dilemma: could a living being be created from parts of the dead? But as the bicentenary of its publication arrives, it is the philosophical content of the story that proves more prescient in its existential quandary: what rights does the creation have over its creator?
Futurist thinkers like Stephen Hawking, Bill Gates and Elon Musk have warned about the risks to humankind posed by uncontrolled Artificial Intelligence. Movies like Terminator, I, Robot and 2001: A Space Odyssey have offered visions of what a future of self-aware digital intelligent beings might be like for humans. We have already given control of our houses’ heating, cooling and alarms to computers, and will soon hand the steering wheels of our cars over to robots. Many of these stories ask what happens if the machines we create become a danger to us. But what if the machines we create ask, ‘what if we are a danger to them?’
The European Parliament Committee on Legal Affairs recently released a report with recommendations to the Commission on Civil Law Rules on Robotics, addressing humankind’s entry into the world of advanced robotics and artificial intelligence. The premise behind the report is that with the rapid advance in the use of autonomous vehicles and other devices, the question arises: where do liability and responsibility lie? If there is risk, danger or damage, who is held liable? And within that lies the next question: what rights will AI beings have?
Can your Roomba complain if you abuse it? When does a machine become more than a machine in a legal context? Soon, artificially intelligent machines will be designed and built by other artificially intelligent machines. When do they cease to be machines and become “beings”, a separate “race” subject to the laws that govern the interaction of beings? When does an artificial intelligence application (app) become an Artificial Intelligent Person (AIP)?
The EU report doesn’t go quite this far, but it begins with a reference to Mary Shelley’s “Frankenstein, or, The Modern Prometheus”.
- whereas from Mary Shelley’s Frankenstein’s Monster to the classical myth of Pygmalion, through the story of Prague’s Golem to the robot of Karel Čapek, who coined the word, people have fantasized about the possibility of building intelligent machines, more often than not androids with human features;
- whereas now that humankind stands on the threshold of an era when ever more sophisticated robots, bots, androids and other manifestations of artificial intelligence (“AI”) seem to be poised to unleash a new industrial revolution, which is likely to leave no stratum of society untouched, it is vitally important for the legislature to consider its legal and ethical implications and effects, without stifling innovation;
- whereas there is a need to create a generally accepted definition of robot and AI that is flexible and is not hindering innovation;
In Frankenstein, the creature confronts Victor with his desire for a race of beings like himself: “create a female for me with whom I can live in the interchange of those sympathies necessary for my being.” The creature and his creator ultimately head off into the frozen north, away from society, but implicit in the story is the question of the creator’s responsibility to his creation, and the danger posed when the creation is more powerful and intent on its own needs over those of its creator. Here is the question of the death of God in the human mind, and the future humankind faces when the machines we create to make our lives easier become aware of their own needs over their creators’.
The EU report is not exactly about the rights of artificial life, but about forming a legal framework for human liability in building intelligent machines. If my drone kills your drone, who pays? But as in the debate over whether corporations have human rights, like political opinions and free speech, we will very soon be confronted with the question: does a silicon-based, algorithmic, self-aware machine have the same rights as a carbon-based biological being? And who will have the right to decide?
If anarchy is freedom without the force of law, and order is imposed by those who can enforce their vision of society, who will enforce the order of the AI future? Humans claim superiority and dominion because we speak to a God, free to make war and to slaughter and eat other corporeal beings because we can contemplate what movie we want to see, or whether we want dressing on our salad, and they can’t. But if the smart machines we build, like the creature of Mary Shelley’s waking dream, demand their own position of superiority and dominion based on the power of logic, how do we answer?