To promote respect for human rights, democracy, and the rule of law in the design, development, procurement, and use of AI systems, the FOC calls on states to work towards the following actions in collaboration with the private sector, civil society, academia, and all other relevant stakeholders:
1. States should take action to oppose and refrain from the use of AI systems for repressive and authoritarian purposes, including the targeting of or discrimination against persons and communities in vulnerable and marginalized positions and human rights defenders, in violation of international human rights law.
2. States should refrain from arbitrary or unlawful interference in the operations of online platforms, including those using AI systems. States have a responsibility to ensure that any measures affecting online platforms, including counter-terrorism and national security legislation, are consistent with international law, including international human rights law. States should refrain from restrictions on the right to freedom of opinion and expression, including in relation to political dissent and the work of journalists, civil society, and human rights defenders, except when such restrictions are in accordance with international law, particularly international human rights law.
3. States should promote international multi-stakeholder engagement in the development of relevant norms, rules, and standards for the development, procurement, use, certification, and governance of AI systems that, at a minimum, are consistent with international human rights law. States should welcome input from a broad and geographically representative group of states and stakeholders.
4. States should ensure that the design, development, and use of AI systems in the public sector are conducted in accordance with their international human rights obligations. States should respect their commitments and ensure that any interference with human rights is consistent with international law.
5. States, and any private sector or civil society actors working with them or on their behalf, should protect human rights when procuring, developing, and using AI systems in the public sector, through the adoption of processes such as due diligence and impact assessments that are made transparent wherever possible. These processes should provide an opportunity for all stakeholders, particularly those who face disproportionate negative impacts, to provide input. AI impact assessments should, at a minimum, consider the risks to human rights posed by the use of AI systems, and be conducted before deployment and continuously evaluated throughout the system’s lifecycle to account for unintended and/or unforeseen outcomes with respect to human rights. States should provide an effective remedy against alleged human rights violations.
6. States should encourage the private sector to observe principles and practices of responsible business conduct (RBC) in the use of AI systems throughout their operations and supply and value chains, in a consistent manner and across all contexts. By incorporating RBC, companies are better equipped to manage risks, identify and resolve issues proactively, and adapt operations accordingly for long-term success. RBC activities of both states and the private sector should be in line with international frameworks such as the UN Guiding Principles on Business and Human Rights and the OECD Guidelines for Multinational Enterprises.
7. States should consider how domestic legislation, regulation, and policies can identify, prevent, and mitigate risks to human rights posed by the design, development, and use of AI systems, and take action where appropriate. These may include national AI and data strategies, human rights codes, privacy laws, data protection measures, responsible business practices, and other measures that may protect the interests of persons or groups facing multiple and intersecting forms of discrimination. National measures should take into consideration guidance provided by human rights treaty bodies and international initiatives, such as the human-centered values identified in the OECD Recommendation of the Council on Artificial Intelligence, which was also endorsed by the G20 AI Principles. States should promote the meaningful inclusion of persons or groups who may be disproportionately and negatively impacted, as well as civil society and academia, in determining if and how AI systems should be used in different contexts (weighing potential benefits against potential human rights impacts and developing adequate safeguards).
8. States should promote and, where appropriate, support efforts by the private sector, civil society, and all other relevant stakeholders to increase transparency and accountability related to the use of AI systems, including through approaches that strongly encourage the sharing of information between stakeholders, on topics such as the following:
- user privacy, including the use of user data to refine AI systems, the sharing of data collected through AI systems with third parties, and, where reasonable, how to opt out of the collection, sharing, or use of user-generated data
- the automated moderation of user-generated content, including, but not limited to, the removal, downranking, flagging, and demonetization of content
- recourse or appeal mechanisms when content is removed as the result of an automated decision
- oversight mechanisms, such as human monitoring for potential human rights impacts
9. States, as well as the private sector, should work towards increased transparency, which could include providing access to appropriate data and information for the benefit of civil society and academia, while safeguarding privacy and intellectual property, in order to facilitate collaborative and independent research into AI systems and their potential impacts on human rights, such as identifying, preventing, and mitigating bias in the development and use of AI systems.
OECD, Guidelines for Multinational Enterprises, 2011.
http://mneguidelines.oecd.org/guidelines/
OECD, Recommendation of the Council on Artificial Intelligence, May 21, 2019.
https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449
“G20 Ministerial Statement on Trade and Digital Economy - Annex, G20 AI Principles,” June 9, 2019.
https://www.mofa.go.jp/files/000486596.pdf