Vicarious Liability: A Solution to a Problem of AI Responsibility

Deepak Kumar Sahoo

Abstract

Who is responsible when an AI machine does something wrong? Or is there a void in the assignment of blame? Possible answers hold that there is a single responsibility gap, that there are multiple responsibility gaps, or that there are none at all. The problem can be summarized as follows: on the one hand, it seems right that someone be held accountable when an AI machine causes harm; on the other hand, no one appears to deserve to be held accountable for the error. This article concentrates on a specific aspect of the AI responsibility gap: in cases where AI machines have design flaws, it seems that someone should bear the legal costs, yet there appears to be no suitable bearer. The study approaches this issue from a legal perspective and proposes vicarious liability of AI manufacturers as a solution. The proposal comes in two variants: the first is more limited in scope but simple to incorporate into existing legal frameworks; the second applies to a wider range of situations but requires legal frameworks to be updated, as it relies on a broader definition of vicarious liability. Finally, the study draws attention to the important insights that vicarious liability provides for closing the moral AI responsibility gap.
