UN High Commissioner for Human Rights Michelle Bachelet. UN Photo / Jean-Marc Ferré (file photo)
Artificial intelligence (AI) has the potential to drive innovation and help people and societies overcome some of the great challenges of our times. Yet, as the military uses of AI increase, the nature and scope of their impact on human rights are still unclear.
If designed and used without sufficient regard to how they affect human rights, AI technologies can have catastrophic effects. These were among the remarks delivered by Michelle Bachelet, United Nations High Commissioner for Human Rights, at the “Women in International Security Switzerland CoLab” launch event organized on April 21 by the United Nations Institute for Disarmament Research (UNIDIR).
Ms Bachelet referred to her report of last September to the Human Rights Council (HRC) on the impact of AI on the right to privacy as an effort to address the human rights dimensions of the use of AI; the report formulates a series of key messages and recommendations.
The report emphasizes that the risk of discrimination linked to AI-based decisions is all too real, and stresses that only a comprehensive human rights-based approach can ensure sustainable solutions to the benefit of all.
The risk of gender discrimination linked to AI-based systems and measures is pervasive, and it is apparent in the discriminatory outcomes for women affected by AI-powered systems, which often contain built-in biases. Examples include systems used for policing and the administration of justice, as well as in other areas such as employment or access to services. Ms Bachelet said these risks are most acute for women and marginalized groups.
Therefore, she said, bolder action is needed now to put human rights guardrails on the use of AI in general, including its military uses. This includes systematic assessment and monitoring of the effects of AI systems to identify and mitigate human rights risks. The requirements of legality, legitimacy, necessity and proportionality must be consistently applied to AI technologies.
More specifically, Ms Bachelet added, this means that States and businesses should ensure that comprehensive human rights due diligence is conducted when AI systems are designed, developed, deployed and operated, as well as before big data held about individuals are shared or used.
This process, which should be conducted through the entire life cycle of an AI system, involves assessing its impact on human rights, with particular attention to be paid to the rights of marginalized or excluded people.
AI applications that cannot be operated in compliance with international human rights law should be banned, and moratoriums should be imposed on the sale and use of AI systems that carry a high risk for the enjoyment of human rights, unless and until adequate safeguards to protect human rights are in place.
Companies and States should also be more transparent in how they are developing and using AI. The complexity of the data environment, models and algorithms, as well as the secrecy of government and private actors in this area make it difficult for the public to fully grasp the effects of AI systems on human rights and society. This is especially true in situations where international security is at stake.
In this regard, Ms Bachelet welcomed and supported UNIDIR’s call for transparency in how the development of military applications of AI integrates related human rights risks and gender bias. She also appreciated UNIDIR’s recommendation that gender-based reviews of military applications of AI should make explicit how a system represents and responds to gender and how harmful effects have been mitigated.