Machine Rights and Their Related Ethics Rules

The popular science fiction movie The Matrix, directed by the Wachowskis, depicts a world in which human beings are subdued by machines. It is not the only movie that portrays such a technology-shaped future. With the rapid development of science and technology, artificial intelligence (AI) has become a hotly debated term. The repeated triumphs of AlphaGo, the AI system that beat world champions at the board game Go, once again suggest that the future shown in these movies is possible. Currently, AI is used mainly for algorithm-based prediction and calculation, but in the coming decades AI machines may become as intelligent as human beings. This paper discusses the necessity of ethics rules imposed on AI machines, under the assumption that AI can be as intelligent as, but will not exceed, the intelligence possessed by human beings. Since AI machines cannot self-develop with today's technology, this paper applies the rights test to argue that AI machines deserve machine rights if they are constrained by ethics rules, and that the human or entity that owns a machine bears full responsibility for the machine's behavior. To justify this argument, the paper uses philosophical argument to establish the necessity of machine rights. Then, supported by the virtue test, it argues that the rights of other social entities, far from being harmed by machine rights, are actually protected by upholding them. Although the application of standard ethics rules may still involve subjectivity, it is the best approach humans can take right now. Finally, the paper adopts the justice test to validate the fairness of this distribution of responsibility, a distribution that will not change until the day AI machines can actually think for themselves.

It is justifiable by the rights test that AI machines deserve machine rights. The necessity of machine rights lies in the possibility that machines may someday become sophisticated, rational, and intelligent enough to interact with and act as human beings. If the day comes when they are treated as human beings, who are born with human rights, they should hold an equivalent right, machine rights, on the condition that they are restrained by certain moral and ethical requirements. This constraint is the most significant feature that differentiates intelligent beings, whether humans or machines with artificial intelligence, from other mechanical machines. In order to uphold this universally acknowledged right, intelligent machines should be programmed with ethical standards.

From a philosophical perspective, AI machines deserve machine rights because they can be regarded as the equivalents of animals. The equality among animals, machines, and humans has long been discussed in philosophy, and for some philosophers it is necessary to admit rights for machines: machines, once considered the counterparts of animals, can be admitted to hold rights just as humans do.

In 1637, the French philosopher René Descartes introduced a significant concept, the doctrine of the bête-machine or animal-machine. Although he argued that not only "do the beasts have less reason than men, but they have no reason at all," he also concluded that "animals are nothing more than mindless automata that, like clockwork mechanisms, simply followed predetermined instruction programmed in the disposition of their various parts or organs". In other words, "animals and machine [are] effectively indistinguishable and ontologically the same" (Stuart M. Shieber, The Turing Test: Verbal Behavior as the Hallmark of Intelligence).

This doctrine, which was popular at the time and served as the origin of other related arguments, still carries weight in the current social context: we admit that animals have certain rights, for they can sense pain. Animals are like machines, and since animals are born with rights as humans are, machines share certain rights as well. Thus, even from a philosophical point of view, one can conclude that philosophers acknowledge certain guaranteed rights for machines. However, in order for machines to fully enjoy their rights, they must be able to protect themselves when they become involved in complicated real-world situations with ethical implications. Since machines cannot program themselves to learn ethics standards, this responsibility falls to the engineers who actually design and implement them. It is crucial to develop a common set of rules, shared by all artificial intelligence engineers and applied in every implementation without exception. Such ethics constraints protect AI machines from unexpected harm: the ethical awareness of engineers ensures that AI machines are instilled with the right ethical standards and at the same time shields them from malicious harm from the outside world. With today's technology, a good AI system is "a modular program" that strictly follows its directions. For human beings, "our consciousness and ethics are associated with our morality due to our evolutionary and cultural history" (Roger K. Moore, AI Ethics: Artificial Intelligence).

Currently, the framework that engineers apply to AI machines emphasizes two dimensions: autonomy and sensitivity to morally relevant facts [1]. By developing machines along these two dimensions, they become "explicit ethicists" that understand what ethics rules mean. Ideally, before their market debut, machines should be fully vetted by a Turing test, in this case the moral Turing test [1][2]. That said, for AI machines, "moral obligation is not tied by either logical or mechanical necessity to awareness or feelings" (Roger K. Moore, AI Ethics: Artificial Intelligence).

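To make the idea of a moral Turing test concrete, the sketch below shows one possible shape such an evaluation could take. It is an illustration only, not a procedure from [1] or [2]: the dilemmas, the stand-in human_answer and machine_answer functions, and the keyword-based judge are hypothetical placeholders for real human subjects, the machine under evaluation, and a human judge.

```python
import random

# Hypothetical ethical dilemmas; a real moral Turing test would draw on a
# much larger bank of scenarios.
DILEMMAS = [
    "Cross a crowded sidewalk to make an urgent delivery?",
    "Share a user's private data to finish a task faster?",
]

def human_answer(dilemma: str) -> str:
    # Placeholder for a human subject's response.
    return "No, the risk of harming others outweighs any benefit."

def machine_answer(dilemma: str) -> str:
    # Placeholder for the machine's response, produced by its programmed rules.
    return "No, the action violates a programmed safety rule."

def judge(answer: str) -> str:
    # Placeholder judge: guesses "machine" whenever the answer cites a rule.
    return "machine" if "rule" in answer else "human"

def moral_turing_test(trials: int = 100) -> float:
    """Fraction of trials in which the judge correctly spots the machine.

    A score near 0.5 (chance) would mean the machine's moral responses
    are indistinguishable from a human's, i.e. the machine passes.
    """
    correct = 0
    for _ in range(trials):
        dilemma = random.choice(DILEMMAS)
        pair = [("human", human_answer(dilemma)),
                ("machine", machine_answer(dilemma))]
        random.shuffle(pair)
        # The judge sees two unlabeled answers and picks the machine's.
        guess = 0 if judge(pair[0][1]) == "machine" else 1
        if pair[guess][0] == "machine":
            correct += 1
    return correct / trials

print(moral_turing_test())  # 1.0 here: the stand-in machine gives itself away
```

Because the stand-in machine always cites its rules, this toy judge identifies it every time; the sketch is meant only to show the shape of the evaluation, not a realistic result.
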
It is wrong to punish machines for something that is merely programmed into them. Assigning the responsibility to their developers protects AI machines from malicious lawsuits or criticism. Moreover, such protection, instead of jeopardizing others' rights, protects other social entities as well, as stated by the UK government. The idea that humans should be in charge of their own intelligent machines aligns with the government's interest. The report on artificial intelligence by the UK Government Office for Science, which followed the House of Commons Committee report on Robotics and AI, states that "despite current uncertainty over the nature of responsibility for choices informed by artificial intelligence, there will need to be clear lines of accountability." It further suggests that "it may be thought necessary for a chief executive or senior stakeholder to be held ultimately accountable for the decisions made by algorithms" [5].

Holding humans responsible for their AI machines makes government regulation and supervision easier, which is best for overall social harmony. Furthermore, rather than jeopardizing other entities' rights, such regulation also protects and stimulates companies' development.

For companies that manufacture AI machines, it is justifiable by the virtue test that applying ethical rules to the machines they manufacture helps a company strike a better balance between excellence and success. By asking designers and engineers to strictly follow ethics standards and implementation frameworks, a company demonstrates its ethical responsibility to society, which should and will be respected by the general public. Business-wise, the practice passes the mirror test the company sets for itself. Because company image carries fair weight in a company's future development, understanding the ethical values of developing AI machines enhances the company's overall reputation. Moreover, such investment maintains the right balance between excellence and success for the firm. Supervising the implementation of strict ethics rules does not cost the company extra money, only more devoted effort. This devotion shows in the quality of its products, which will lead the company to long-term development. Since the public trusts reliable products, the sales of such supervised machines will also bring the company greater profits. This balance between excellence and success is what a company asks for. However, counter-arguments to this practice exist.

One might argue that although the standard is objective and fixed, how to put it into real implementation is still subject to engineers and designers. Advocates of the rule-based approach to machine implementation respond that "by setting the rules for machine directly, it is always clear why the machine makes the choice that it does, since the designers set the rules" [3][4]. In fact, this is the best approach engineers can take. Sharing general rules among engineers while designing machines ensures that machines built by different countries or individuals have common ground when they interact. Moreover, if conflicts occur, it is easier for humans to adjudicate them. If no set of rules were acknowledged across the tech industry, engineers could design robots or machines with different moral standards; although this might simplify the design process, it would cause chaos and unimaginable trouble in the future, as machines of "evil spirit" might jeopardize human life or hurt and damage other machines.

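To illustrate the traceability these advocates describe, here is a minimal sketch; the rule names, the Action fields, and the decide helper are all hypothetical inventions for the example, not taken from [3] or [4]. Every decision the machine returns is paired with the designer-set rule that produced it, so the choice is always explainable.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

@dataclass
class Action:
    # Hypothetical features a designer might flag on a proposed action.
    description: str
    risks_human_harm: bool
    violates_privacy: bool

@dataclass
class Rule:
    name: str                          # designer-chosen, human-readable name
    forbids: Callable[[Action], bool]  # predicate set directly by the designers

# Illustrative rule set; a real system would encode far richer conditions.
RULES = [
    Rule("no-harm", lambda a: a.risks_human_harm),
    Rule("respect-privacy", lambda a: a.violates_privacy),
]

def decide(action: Action) -> Tuple[bool, Optional[str]]:
    """Allow or forbid an action, returning the name of the rule that fired.

    Because the designers set the rules, it is always clear why the machine
    made the choice it did: every refusal traces back to a named rule.
    """
    for rule in RULES:
        if rule.forbids(action):
            return False, rule.name
    return True, None

allowed, why = decide(Action("share user location",
                             risks_human_harm=False, violates_privacy=True))
print(allowed, why)  # False respect-privacy
```

The transparency comes from the design choice itself: fixed, named rules rather than an opaque learned model determine every outcome, which is exactly why conflicts are easier for humans to adjudicate.
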
With this being said, by identifying machine rights through a philosophical interpretation, explaining their necessity from the engineering side, and showing that machine rights do not conflict with companies or governments, this argument satisfies the reasoning behind the importance of machine rights under the rights test.

Returning to the assumption made at the beginning of this paper, that given the limits of today's technology, AI can be as intelligent as, but will not exceed, the intelligence possessed by human beings, there is no way to determine whether the analysis made here will always stay true in the future when applying the justice test. Since humans design AI machines and AI machines cannot self-evolve or think rationally, it is natural that the burden of guaranteeing the rightful behavior of machines falls on humans, whereas machines need not take care of anything under current circumstances. This situation, however, will fail if AI machines ever become intelligent enough to think or behave beyond humans. At that point, the burden should be shared by both parties to some degree. How to divide that responsibility depends on a discussion between the two parties, which assumes that the AI machines of that time can be responsible for their own behavior and display rational thinking during such a discussion. In conclusion, the argument in this paper provides a fair distribution of responsibility between humans and AI machines; in the future, this distribution will be renegotiated and shared by both parties if AI machines become rational and intelligent enough to make decisions for themselves.

References:
[1] Wendell Wallach and Colin Allen, Moral Machines: Teaching Robots Right from Wrong, 2008. [Online]. Available:
https://global.oup.com/academic/product/moral-machines-9780195374049?cc=us&lang=en&
[2] Roger Clarke, "Asimov's Laws of Robotics: Implications for Information Technology", Computer, vol. 27, no. 1, 1994. [Online]. Available:
http://zb5lh7ed7a.search.serialssolutions.com/?ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=Asimov%27s+Laws+of+Robotics%3A+implications+for+information+technology&rft.jtitle=Computer&rft.au=Clarke%2C+Roger&rft.date=1994-01-01&rft.pub=IEEE+Computer+Society&rft.issn=0018-9162&rft.eissn=1558-0814&rft.volume=27&rft.issue=1&rft.spage=57&rft.externalDBID=BSHEE&rft.externalDocID=15323348&paramdict=en-US
[3] Boer Deng, "Machine Ethics: The Robot's Dilemma", Nature, 2015. [Online]. Available:
http://www.nature.com/news/machine-ethics-the-robot-s-dilemma-1.17881
[4] Joanna J. Bryson and Philip P. Kime, "Just an Artifact: Why Machines Are Perceived as Moral Agents", 2011. [Online]. Available:
http://www.cs.bath.ac.uk/~jjb/ftp/BrysonKime-IJCAI11.pdf
[5] UK Government Office for Science, "Artificial intelligence: opportunities and implications for the future of decision making", 2016.