This article is written by Prakarsh and Shruti Mishra, students at National Law Institute University, Bhopal, pursuing B.A. LL.B. (Hons.).

Background

Microsoft has recently filed a patent application titled "Creating a conversational chat bot of a specific person". The patent covers a system programmed to collect voice samples, past social media activity, messages and similar data of a deceased individual. It essentially entails the digitisation of the character traits of someone who has passed away, employing the collected data to train a chatbot to learn those traits. In theory, the patent seeks to preserve certain aspects of the deceased in a digital format. The idea has been conceptualised before in popular culture, in the science-fiction film 'Transcendence', and it was recently the subject of the Black Mirror episode 'Be Right Back'. The system sought to be patented is a unique creation that can be seen as a first step towards realising the age-old idea of digitally preserving human consciousness. However, it would require access to the personal details and private online activity of deceased individuals, a subject that falls squarely within the purview of data protection regimes across the globe. Post-mortem privacy rights mainly deal with the preservation of 'digital remains', that is, the protection of the data an individual leaves behind after his or her demise. More specifically, such rights may also govern personality rights, which protect an individual's personal character traits along with the reputation and dignity attached to them. Post-mortem privacy rights have been widely discussed in the context of international data protection mechanisms, and the patent, if granted, might engender conflict with various data protection regimes.
It would raise questions such as whether the system would require any permission or licence to collect data belonging to the deceased, or whether its use of such collected data would infringe any rights. In this piece, the authors examine whether privacy regimes would pose obstacles for the system and, if so, the extent to which they would affect its functioning.

Position in the European Union

It is trite law in the traditional English common law system that upon the demise of an individual, any cause of action for 'personal claims' is extinguished. This is reflected in the maxim 'actio personalis moritur cum persona.' Prima facie, therefore, the deceased would not appear to have any personality claims. The civil law tradition, however, does not follow this maxim. In Germany, for example, the Federal Court held that the constitutional protection of human dignity transcends the living and applies to the deceased as well. The European Union comprises both common law and civil law jurisdictions. The grundnorm of data protection, the General Data Protection Regulation (GDPR), does not provide for post-mortem rights, and neither does the European Convention on Human Rights; the protection afforded to citizens of member states applies only to the living, as both instruments make explicit. However, Recital 27 of the GDPR permits member states to enact laws protecting the privacy of the deceased. Some countries within the EU have used this discretion to implement laws that protect digital legacy and presence beyond the living; Germany, as mentioned above, is one example, and other member states have enacted comparable provisions.
In the present context, if an individual has directed that his or her data be erased or made inaccessible, or if the legal heirs do not allow access to the deceased's data, the system would not be permitted to collect the sensitive information that would enable the chatbot to imitate the characteristics of the deceased.

Position in the United States of America

The law in the U.S. varies from state to state. The jurisprudence on these matters usually concerns the personality rights of celebrities; however, the same principles may be applied to the data of other deceased individuals. For example, the California Civil Code contains two relevant provisions: one conferring personality rights on living individuals, and another, passed as the Image Protection Act in 1985, providing post-mortem estate rights. The Act vested the personality rights of a deceased individual in their estate, similar to the Spanish law. It was on the basis of this provision that the estate of Princess Diana succeeded in obtaining an injunction against unauthorised use of her image after her death. Other states also protect post-mortem personality rights, with durations of protection that vary but do not exceed 100 years after death. Hence, it would not be easy for the Microsoft system to collect data uniformly across the country; it would entail seeking permission from the estate of the deceased in accordance with the laws of the different states.

Concluding Remarks & the Way Forward

In our opinion, post-mortem privacy rights should be afforded strong protection capable of safeguarding any sensitive 'digital relic' or legacy an individual leaves behind. This approach is increasingly being followed by countries across the globe and might eventually find a place in the GDPR itself. At present, however, it cannot be said that the Microsoft system will face significant impediments at the global or regional level.
However, when it comes to national laws and data protection regimes, Microsoft would be required to obtain permissions, sanctions, licences and the like in order to collect and use the data of the deceased. The system would operate on a case-by-case basis depending on the nationality of the deceased, and any social media platform it partnered with would require similar permissions before disclosing data to a third party. The extent to which such requirements would act as impediments would depend mainly on the specific data protection regime in question. Countries may eventually allow a certain level of relaxation for such pioneering initiatives so as to preserve creativity and innovation. Hence, it can safely be said that the system will require a fair amount of time to create each personalised chatbot and to offer its service to people willing to digitally preserve the deceased, unless the person seeking the service is already in charge of the required data. Regardless, it is a bold initiative that will surely be refined in the days to come and may one day match the standards set by works of science fiction. The consumer base for chatbots is growing by the day and has already reached a staggering level; given that the Microsoft chatbot would be a personalised one, the demand for it could be considered almost pressing.
This article has been authored by Shrishti Mishra, a fourth-year student pursuing the B.A. LL.B. (Hons.) course at the Institute of Law, Nirma University, Ahmedabad.
Introduction

Lethal autonomous weapon systems ('LAWS') are machines capable of automatically selecting targets and deploying lethal force. Such weapons can be built with different degrees of autonomy; however, those with little or no human control beyond the development stage have been particularly identified as conflicting with international humanitarian law ('IHL'). This concern seems well founded, since states are already advancing towards higher levels of autonomy in their weapons: Israel's loitering munition system HARPY and China's progress in developing swarm technology are but a few examples. Currently, no specific international treaty or rule of customary law regulates the use of LAWS; hence, recourse must be had to the available law in order to assess their validity. Against this backdrop, one principle of IHL that has frequently appeared in the discourse around LAWS is the Martens Clause.

Interpreting the Martens Clause

The Martens Clause prevents the assumption that anything not explicitly prohibited is permitted in international law. The Hague and Geneva Conventions institutionalise this notion, referring to phrases such as "principles of humanity" and "dictates of public conscience" to regulate states' conduct in situations of armed conflict. Although the status of the Martens Clause as a principle of customary international law ('CIL') is not disputed among commentators, there is much confusion over its application. The modern application of the principle can be seen in the Nuclear Weapons Case, where the International Court of Justice ('ICJ') deliberated extensively on the use of nuclear weapons. The deliberations produced two contrasting interpretations of the clause. One view suggests that the Martens Clause simply reaffirms the idea that, in the absence of any specific international legal instrument, states remain bound by already established rules of CIL.
This would essentially mean that the Martens Clause does not lay down a separate norm of state conduct and is merely a reminder of states' obligations under other customary rules. Countries such as the United Kingdom expressed in their written submissions that the existence of the Martens Clause would not be sufficient to prohibit the use of nuclear weapons in the absence of other established rules of CIL to that effect. The other view holds that the Martens Clause is sufficient in itself to regulate states' conduct. On this reading, a weapon in question would not only have to demonstrate compliance with existing CIL norms but would additionally be tested against the "principles of humanity" and the "dictates of public conscience." The necessary implication is that the Martens Clause provides additional criteria by which the conduct of states may be regulated. Support for this view can be found in the dissenting opinion of Judge Shahabuddeen, who argued that the clause, as restated in Additional Protocol I of 1977, provides for the protection of civilians and belligerents under principles of international law in the absence of a written code of conduct. It notes that principles of international law arise from "established customs, principles of humanity and the dictates of public conscience." Had the text merely intended to accord protection under rules of CIL, it would have confined itself to the first phrase; the mention of the other two phrases indicates separate sources that can give rise to international law. In the context of LAWS, the first view would mean that LAWS are to be judged against the already existing CIL principles of unnecessary suffering and indiscriminate effect, as there is no specific CIL norm prohibiting autonomous weapons. This would require a technical analysis of the weapon, weighing the proportionality of the suffering it causes against the military advantage it offers.
Notably, the IHL standard of proportionality is relatively low and practical, and states need only adhere to a standard of "feasibility" during situations of armed conflict. The standard of proportionality therefore requires states to anticipate a balance between collateral damage and the military advantage offered by the weapon as a whole. Further, Additional Protocol I stresses that the military advantage emanating from an attack is subject to constant change according to the "circumstances ruling at that time". States have on multiple occasions used this interpretation to justify collateral damage such as civilian deaths caused by attacks: see, for instance, the responses of the United Kingdom, the U.S.A. and India in the Nuclear Weapons Case, where these nuclear-enabled nations stated that the harm caused by a nuclear attack to civilians and civilian property would not be disproportionate in certain circumstances, such as reprisals. This interpretation would engage the issues of predictability and reliability that are central to the debate on the legality of LAWS. Predictability and reliability are widely accepted legal standards for assessing the legality of a weapon. Many commentators have expressed that LAWS cannot satisfy these criteria because, first, there is concern regarding the availability of effective mechanisms to test predictability and, second, these programmes function in an opaque or "black box" manner that makes it almost impossible to understand how they reach a particular output. This raises the threat that the algorithm may act well outside its creator's goal in the already unpredictable environments of armed conflict. These concerns are not out of place, because incidents of AI weapons going rogue are not unprecedented. Hence, the first view restricts itself to a technical evaluation of LAWS.
Importantly, there is no definitive understanding of how these criteria apply, which would inevitably allow technologically empowered states to benefit from the lack of a consistent interpretation. Judging from this, it is not unimaginable that states would be tempted to deploy LAWS because of their immense military potential compared to other weapons. On the other hand, ethical normativity lies at the heart of the second interpretation, where the discussion is framed by the question whether the decision to kill a human should be given to a machine. This position has been supported by multiple bodies, including the International Committee of the Red Cross, Human Rights Watch, Article 36, the United Nations Institute for Disarmament Research and the UN Special Rapporteur on Arbitrary Executions, which have expressed that the use of fully autonomous LAWS undermines human dignity and "denigrates the value of life itself" by removing human agency from the decision to kill, and will therefore attract the application of the Martens Clause. On this premise, the machine's predictability and reliability become secondary to the evaluation of LAWS. This is not to say that the dictates of public conscience would be wholly unaffected by the technical capabilities of LAWS, but it would ensure that the power to determine their legality does not remain confined to the few powerful states possessing them, as happened in the case of nuclear weapons. The second view would give legal validation to voices across the world that would eventually have to bear any undesired consequences arising from violations committed through the use of LAWS.

Conclusion

Admittedly, there is no settled interpretation of the Martens Clause, yet it would be erroneous to dismiss it as redundant. The ICJ in Nicaragua recognised the separate status and importance of CIL principles even where they are codified in treaties.
Hence, rules of CIL continue to exist and to apply separately from a treaty embodying the same rule. Applying the same reasoning to the Martens Clause, it becomes clear that even though instances of armed conflict may be covered by conventional instruments of IHL, the separate status of the Martens Clause as a legally binding norm cannot be disputed. It is also admitted that, at present and even in the foreseeable future, robot warfare seems confined to science fiction. However, the UN Secretary-General's report on chemical and biological weapons, while addressing the legality of chemical warfare, likewise admitted that there was insufficient knowledge of any comparable substance likely to be used as a chemical agent; still, the potentially disruptive nature of such an event could prompt a ban on the development and deployment of chemical weapons. Similarly, noting the potential that AI could reach and the unique conflicts such technology will have with international law, it is not incorrect to evaluate possible regulation within the existing framework, and the Martens Clause would be indispensable to that discussion.