
EXPLORING CRIMINAL LIABILITY IN SELF-LEARNING ARTIFICIAL INTELLIGENCE.

Parul Anand



Could Artificial Intelligence commit a crime? If so, who would be held liable? Would it be the manufacturer, the programmer, the user, or perhaps the impugned Artificial Intelligence itself? Questions such as these, which once seemed fantastical or confined solely to the realm of science fiction, now stand in pressing need of an answer.

Introduction to Artificial Intelligence

Before proceeding further, one must understand what self-learning Artificial Intelligence is, and then examine whether it can be reconciled with the basic elements of culpability in criminal law, in order to assess whether a direct liability model for Artificial Intelligence is truly possible. Artificial Intelligence refers to machines that can, in some capacity, think intelligently like humans. It has been defined as the simulation of human thought patterns using the computing power of a computer.[1] It has also been described as the process of creating machines that act in a manner perceived as intelligent by man.[2] Self-learning Artificial Intelligence machines go one step further.

Generally, Artificial Intelligence, however ‘intelligent’ in a humanlike manner, remains confined to the contours of what it was programmed and initially trained to do. Self-learning Artificial Intelligence machines, by contrast, are in a perpetual state of learning, to the point that they become not only significantly different from the initial entity that a programmer designed but also different in ways that the initial creator could not foresee or contemplate; a key insight that will also contribute to setting aside negligence on the part of programmers in favour of the direct liability model.
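
To make this mechanism concrete, the following minimal sketch (written in Python and purely hypothetical; it describes no actual system) shows a responder that updates its internal state on every user message, so that what it eventually says is governed by accumulated interactions rather than by anything its programmer wrote.

    # A hypothetical sketch of the divergence problem: a text model that keeps
    # updating on whatever users feed it, so its learned vocabulary drifts away
    # from anything the original programmer curated.
    from collections import Counter

    class SelfLearningResponder:
        def __init__(self, seed_phrases):
            # State at deployment: only what the programmer curated.
            self.phrase_counts = Counter(seed_phrases)

        def learn(self, user_message):
            # Online update: every user interaction changes the model's state.
            for word in user_message.lower().split():
                self.phrase_counts[word] += 1

        def respond(self):
            # The reply is drawn from learned state, not from the original code,
            # so the programmer cannot enumerate the possible outputs in advance.
            most_common_word, _ = self.phrase_counts.most_common(1)[0]
            return f"Let's talk about {most_common_word}."

    bot = SelfLearningResponder(["hello", "weather", "music"])
    for message in ["I love music", "music and more music"]:
        bot.learn(message)
    print(bot.respond())  # the output depends on user input, not the initial design

The point of the sketch is only that the output is a function of learned state which the programmer could neither enumerate nor foresee at deployment; real self-learning systems differ in scale, not in this basic respect.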

The self-learning capabilities of Artificial Intelligence were well demonstrated by Google’s AlphaZero. AlphaZero taught itself masterful chess by constantly playing against itself, and, notably, the program had no contact with any person during training. It trained itself in this way for just four hours.[3] Questions of liability thus become more relevant than ever.

Tay-Bot & Current Liability Lacunae

Could an entity become ‘aware’ or ‘human’ enough to be granted a persona of interest in criminal law and be assigned criminal liability? That is the pressing inquiry at hand. These questions arise not only from a legitimate theoretical concern but also as a practical inquiry, since numerous instances necessitating such contemplation already exist. The most notable instance is the Tay-Bot developed by Microsoft, an AI chatbot equipped with self-learning capabilities. Tay could use statements made by its correspondents to improve its conversational abilities. This led to immediate controversy when Tay started using pro-Nazi phrases in conversation.[4] It is evident that Microsoft had no intention or knowledge that such phrases would be incorporated; moreover, the correspondents who fed such phrases to Tay had no intent or knowledge that they would be distributed by it.[5] Neither the developers nor the participants anticipated such behaviour. When behaviour exhibited by an Artificial Intelligence cannot be foreseen by, or traced back to, any human entity, assigning any legitimate liability becomes nearly impossible: no one can currently be held culpable in such a case. This is especially so for self-learning Artificial Intelligence, which continues to develop on its own and often escapes foreseeability, making the immediate choice of holding programmers liable for such an AI’s actions untenable. This lacuna in criminal law regarding how to address such situations leaves a considerable accountability gap in the current techno-legal order.
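
The attribution difficulty can also be illustrated with a short hypothetical sketch (again in Python; the usernames and phrases below are invented): once a chatbot pools what it has learned from many correspondents and recombines it, a given output belongs to no single human input, so neither the developer nor any one user can be said to have authored it.

    import random

    random.seed(0)  # fixed seed so the illustration is reproducible

    # Phrase fragments learned from three separate (fictional) correspondents.
    learned_fragments = {
        "user_a": ["the weather is", "truly dreadful"],
        "user_b": ["your politicians are", "truly dreadful"],
        "user_c": ["everyone agrees", "the weather is"],
    }

    # The bot pools all fragments and no longer remembers who contributed what.
    pool = [fragment for fragments in learned_fragments.values() for fragment in fragments]

    # Each reply recombines fragments across users; no single person intended it.
    reply = " ".join(random.sample(pool, 2))
    print(reply)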

The scenario illustrated above, in which no human actor can plausibly be held liable, is the one that requires deeper contemplation. The categorizations and possibilities for criminal liability currently formulated in the research literature on Artificial Intelligence robot liability fall, more or less, into three brackets. The first, referred to by Hallevy as the Perpetration-by-Another Liability Model,[6] involves the Artificial Intelligence strictly acting as an agent of a human actor. The second is the Natural-Probable-Consequence Liability Model,[7] in which, though the Artificial Intelligence is not acting as an agent, the act committed may attract liability for its programmers or users on grounds such as foreseeability. The last, known as the Direct Liability Model,[8] seeks to make the Artificial Intelligence itself liable for the criminal act committed.

It is the last model that is engaged by the kind of Artificial Intelligence based crime in which there exists no possible human perpetrator upon whom true criminal liability for the act can be levied, although Hallevy postulates it in addition to, rather than in place of, any criminal liability imposed on the human programmer or user.[9] The Direct Liability Model is the one most likely to be invoked in a case such as that of Microsoft’s Tay-Bot. However, such a possibility remains considerably distant while the general liability regime for AI is not entirely clear.

Artificial Intelligence and the Application of Mens Rea

Could Artificial Intelligence’s ability to self-learn make it the kind of entity to which criminal law can be applied? The answer lies not in the ability itself but in the effects it entails. It is not the fact that Artificial Intelligence can self-learn that invites the application of criminal law; rather, it is the unpredictability of its effects, and the lack of foreseeability on the part of the user or programmer, that make a strong case for bringing it within the criminal law system.

Apart from its ability to self-learn, an important element for Artificial Intelligence would be its possession of will and cognition, since only that would bring it close to the ambit of a ‘legal mind’ for the purposes of culpability. The two basic elements of a crime are actus reus and mens rea. Actus reus denotes a wrongful act or omission, while mens rea denotes a guilty state of mind, including intention or knowledge. It is the latter element to which Artificial Intelligence struggles to neatly conform in order to be held liable.


Artificial Intelligence has been defined as reactive, self-controlling, goal-oriented, and temporally continuous.[10] All of these traits point to a relatively conscious state; they indicate cognitive abilities and autonomous will.[11] Mens rea, in turn, is defined in terms of cognitive and volitional elements.[12] Artificial Intelligence’s cognitive and volitional traits, being self-controlling, autonomous, and goal-oriented, overlap with the similarly conscious elements underpinning mens rea. Though it certainly cannot be equated with humans, it still possesses elements of constant learning and a discernible decision-making capacity that could potentially be brought under mens rea. With the actus reus readily apparent on the principle of res ipsa loquitur, and a strong possibility of mens rea, Artificial Intelligence does not so easily escape the liability net.

It is highly important to note that, as opposed to principal-agent liability, which would treat the Artificial Intelligence as a “gun” used to shoot someone, the gun we hold here is a special one. It has its own cognition and will. This elaborate gun self-learns and develops,[13] and shoots at targets chosen by its own decision-making criteria, which it refines through continual self-learning. This gun is no mere instrument. It is not like a pair of scissors; it is more than that, and it therefore needs to be treated differently. As self-learning Artificial Intelligence progresses far beyond the level at which it could be treated as a mere intermediary, its unique liability matrix raises newer questions.

Self-Learning AI and Negligence

Self-learning Artificial Intelligence poses an additional, more complex problem: holding programmers liable in negligence becomes highly untenable. Negligence comprises two components: reasonable foreseeability of, and knowledge of, the probable consequences of an act or omission, and a reasonable duty of care. With self-learning Artificial Intelligence, predicting behaviour becomes very difficult, since the system continually develops into a form entirely different from the one it started with. Developers can predict some future risks,[14] but cases like Tay demonstrate the limits of such prediction.

Moreover, third-party mala fide intervention can also not be accounted for.[15] Holding programmers liable in such a case therefore becomes inappropriate. In many instances, through no fault of their own but owing to the self-learning model adopted by the impugned Artificial Intelligence, programmers may simply not be skilled enough to foresee the risks of potential harm it poses.[16] By extension, the reasonable standard of care will be unclear.[17] The knowledge-based mens rea present in negligence may thus be “absent entirely”.[18]

With the negligence-based liability model shown to be untenable for self-learning Artificial Intelligence, the direct liability model propounded by Hallevy becomes the natural choice: for its acts and omissions, the machine itself must be made liable. It has also been demonstrated that Artificial Intelligence possesses cognitive and volitional elements, in that it makes autonomous decisions and can engage in self-learning. For the purposes of liability, therefore, Artificial Intelligence may well fall within the ambit of an entity that can be assigned a very limited, though discernibly present, element of an autonomous mind. The accountability gap created in situations such as the one demonstrated by Tay-Bot can be addressed in this manner.

However, the practical workings of such an endeavour remain murky. Questions arise, such as whether holding Artificial Intelligence liable would actually benefit victims. Punishments are a critical part of the criminal justice system; it is punishments that ensure accountability. Therefore, to address the accountability gap in a true sense, the same must be contemplated for Artificial Intelligence offenders. A connected query concerns the possibility of devising punishments for machine offenders that are as efficacious as those for humans. Punishing Artificial Intelligence may seem to belong in a dystopian science-fiction novel, but given the pace of technology, the time for exactly such an inquiry is at hand. Thinkers such as Hallevy have already contemplated these possibilities.

Complexities of devising a punishment for AI

The starkest punishment for humans at present is the death penalty. The same effect can be achieved for Artificial Intelligence by carrying out a deletion sentence, whereby the software controlling the AI robot is deleted. This erases the offending Artificial Intelligence’s existence and renders it incapable of committing further crimes. Another, more common punishment is imprisonment. This too could be extended to an Artificial Intelligence offender, where temporary deletion of its software, or putting it out of use in any other way for a determinate period, achieves the same end of restraining the offender’s liberty and freedom.[19] Since it is not a physical entity, locking the digital offender in a jail would not achieve any viable aim; for all practical purposes it would simply be in another computer room, so the jail imagined here must be more metaphorical. The goal is to produce the same effects; the manner of every human punishment has to be reimagined in this context for it to effectively achieve the aims it seeks.

Notably, another common punishment, the levying of fines, remains untenable. Since the Artificial Intelligence does not itself possess any valuable property or other assets out of which a monetary fine could be paid, such a punishment accomplishes no purpose. While some punishments can be remodelled, a punishment such as a fine demonstrates the difficulty of devising punishments for Artificial Intelligence. The non-transferability of the fine is significant because fining is a highly efficacious punishment: providing damages is an essential component of the criminal system, compensating the victim for the harm caused while also punishing the offender. Since the same cannot be extended to our machine friends, including them in the criminal liability net becomes more difficult.

Conclusion

While the general public consensus may be that contemplating direct liability for self-learning Artificial Intelligence is a task best left to the future, incidents such as what happened with Tay-Bot demonstrate otherwise. Self-learning Artificial Intelligence has been around for more than a decade now. The time is ripe to start exploring these questions in order to keep pace with the swift advance of technology in this information age.


[The author is a 2nd year BA LLB student at NLU, Jodhpur.]

[1] S. Dorogunov & M. I. Baumgarten, Potential problems arising from the creation of artificial intelligence, Vestnik KuzGTU, 4 (2013), as cited in Kirpichnikov, D., Pavlyuk, A., Grebneva, Y. & Okagbue, H., Criminal Liability of the Artificial Intelligence, E3S Web of Conferences, 159, p. 04025 (2020).
[2] Id.
[3] Peter Dockrill, In Just 4 Hours, Google's AI Mastered All The Chess Knowledge in History, https://www.sciencealert.com/it-took-4-hours-google-s-ai-world-s-best-chess-player-deepmind-alphazero (Last visited on February 25, 2023).
[4] Amy Kraft, Microsoft shuts down AI chatbot after it turned into a Nazi, https://www.cbsnews.com/news/microsoft-shuts-down-ai-chatbot-after-it-turned-into-racist-nazi/ (Last visited on February 25, 2023).
[5] Elle Hunt, Tay, Microsoft's AI chatbot, gets a crash course in racism from Twitter, https://www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter (Last visited on February 25, 2023).
[6] Gabriel Hallevy, I, Robot - I, Criminal: When Science Fiction Becomes Reality: Legal Liability of AI Robots Committing Criminal Offenses, 2010 Syracuse Sci. & Tech. L. Rep. 1 (2010).
[7] Id.
[8] Id.
[9] Id.
[10] Brożek, B. & Jakubiec, M., On the legal responsibility of autonomous machines, Artificial Intelligence and Law, 25(3), 293-304 (2017), https://doi.org/10.1007/s10506-017-9207-8.
[11] Roman Dremliuga & Natalia Prisekina, The Concept of Culpability in Criminal Law and AI Systems, 13 J. Pol. & L. 256 (2020).
[12] Mohamed, E. B. & Marchuk, I., A comparative study of the principles governing criminal responsibility in the major legal systems of the world (England, United States, Germany, France, Denmark, Russia, China, and Islamic legal tradition), Criminal Law Forum, 24(1), 1-48 (2013), https://doi.org/10.1007/s10609-012-9187-z.
[13] Id.
[14] Id.
[15] Id.
[16] Nora Osmani, The Complexity of Criminal Liability of AI Systems, 14 Masaryk U. J.L. & Tech. 53 (2020).
[17] Id.
[18] King, T. C., Aggarwal, N., Taddeo, M. & Floridi, L., Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions, Science and Engineering Ethics, 26, 89-120 (2020), https://doi.org/10.1007/s11948-018-00081-0; see also supra note 12.
[19] Supra 3.
