Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI

The rapid spread of artificial intelligence (AI) systems has precipitated a rise in ethical and human rights-based frameworks intended to guide the development and use of these technologies. Despite the proliferation of these "AI principles," there has been little scholarly focus on understanding these efforts either individually or as contextualized within an expanding universe of principles with discernible trends.

To that end, this white paper and its associated data visualization compare the contents of thirty-six prominent AI principles documents side-by-side. This effort uncovered a growing consensus around eight key thematic trends: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values. Looking beneath this "normative core," our analysis examined the forty-seven individual principles that make up the themes, detailing notable similarities and differences in interpretation found across the documents. In sharing these observations, it is our hope that policymakers, advocates, scholars, and others working to maximize the benefits and minimize the harms of AI will be better positioned to build on existing efforts and to push the fractured, global conversation on the future of AI toward consensus.


Introduction
Alongside the rapid development of artificial intelligence (AI) technology, we have witnessed a proliferation of "principles" documents aimed at providing normative guidance regarding AI-based systems. Our desire for a way to compare these documents, and the individual principles they contain, side by side, to assess them and identify trends, and to uncover the hidden momentum in a fractured, global conversation around the future of AI, resulted in this white paper and the associated data visualization.
It is our hope that the Principled Artificial Intelligence project will be of use to policymakers, advocates, scholars, and others working on the frontlines to capture the benefits and reduce the harms of AI technology as it continues to be developed and deployed around the globe.
One existing governance regime with significant potential relevance to the impacts of AI systems is international human rights law. Scholars, advocates, and professionals have been increasingly attentive to the connection between AI governance and human rights laws and norms, 2 and we observed the impacts of this attention among the principles documents we studied. 64% of our documents contained a reference to human rights, and five documents took international human rights as a framework for their overall effort. Existing mechanisms for the interpretation and protection of human rights may well provide useful input as principles documents are brought to bear on individual cases and decisions, which will require precise adjudication of standards like "privacy" and "fairness," as well as solutions for complex situations in which separate principles within a single document are in tension with one another.
The thirty-six documents in the Principled Artificial Intelligence dataset were curated for variety, with a focus on documents that have been especially visible or influential. As noted above, a range of sectors, geographies, and approaches are represented. Given our subjective sampling method and the fact that the field of ethical and rights-respecting AI is still very much emergent, we expect that perspectives will continue to evolve beyond those reflected here. We hope that this paper and the data visualization that accompanies it can be a resource to advance the conversation on ethical and rights-respecting AI.

Data Visualization
The Principled AI visualization, designed by Arushi Singh and Melissa Axelrod, is arranged like a wheel. Each document is represented by a spoke of that wheel, and labeled with the sponsoring actors, date, and place of origin. The one exception is that the OECD and G20 documents are represented together on a single spoke, since the text of the principles in these two documents is identical. 3 The spokes are sorted first alphabetically by the actor type and then by date, from earliest to most recent.
Inside the wheel are nine rings, which represent the eight themes and the extent to which each document makes reference to human rights. In the theme rings, the dot at the intersection with each spoke indicates the percentage of principles falling under the theme that the document addresses: the larger the dot, the broader the coverage. Because each theme contains different numbers of principles (ranging from three to ten), it's instructive to compare circle size within a given theme, but not between them.
In the human rights ring, a diamond indicates that the document references human rights or related international instruments, and a star indicates that the document uses international human rights law as an overall framework.

White Paper
Much as the principles documents underlying our research come from a wide variety of stakeholders in the ongoing conversation around ethical and rights-respecting AI, so too we expect a variety of readers for these materials. It is our hope that they will be useful to policymakers, academics, advocates, and technical experts. However, different groups may wish to engage with the white paper in different ways:
• Those looking for a high-level snapshot of the current state of thinking in the governance of AI may be best served by reviewing the data visualization (p. 8), and reading the Executive Summary (p. 4) and Human Rights section (p. 64), dipping into the discussion of themes (beginning p. 20) only where necessary to resolve a particular interest or question.
• Those looking to do further research on AI principles will likely find the discussions of the themes and principles (beginning p. 20) and Human Rights section (p. 64) most useful, and are also invited to contact the authors with requests to access the underlying data.
• Those tasked with drafting a new set of principles may find that the data visualization (p. 8) and the discussions of the themes and principles within them (beginning p. 20) offer a head start on content and approach, particularly as pointers to the existing principles most likely to be useful source material.
• Those seeking closer engagement with primary source documents may variously find the data visualization (p. 8), timeline (p. 18), or bibliography (p. 68) to act as a helpful index.

Definition of Artificial Intelligence
The definition of artificial intelligence, or "AI", has been widely debated over the years, in part because the definition changes as technology advances. 4 In collecting our dataset, we did not exclude documents based on any particular definition of AI. Rather, we included documents that refer specifically to AI or a closely equivalent term (for example, IEEE uses "autonomous and intelligent systems"). 5
In keeping with the descriptive approach we have taken in this paper, we'll share a few definitions found in our dataset. The European Commission's High-Level Expert Group on Artificial Intelligence offers a good place to start: "Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions." 6
Aspects of this definition are echoed in those found in other documents. For example, some documents define AI as systems that take action, with autonomy, to achieve a predefined goal, and some add that these actions are generally tasks that would otherwise require human intelligence. 7 Other documents define AI by the types of tasks AI systems accomplish, like "learning, reasoning, adapting, and performing tasks in ways inspired by the human mind," 8 or by its sub-fields, like knowledge-based systems, robotics, or machine learning. 9

Definition of Relevant Documents
While all of the documents use the term "AI" or an equivalent, not all use the term "principles," and delineating which documents on the subject of ethical or rights-respecting AI should be considered "principles" documents was a significant challenge. Our working definition was that principles are normative (in the sense that lawyers use this term) declarations about how AI generally ought to be developed, deployed, and governed. While the intended audience of our principles documents varies, they all endeavor to shape the behavior of an audience, whether internal company principles to be followed in AI development or broadly targeted principles meant to further develop societal norms about AI.
Because a number of documents employed terminology other than "principles" while otherwise conforming to this definition, we included them. 10 The concept of "ethical principles" for AI has encountered pushback both from ethicists, some of whom object to the imprecise usage of the term in this context, and from some human rights practitioners, who resist the recasting of fundamental human rights in this language. Rather than disaggregate AI principles from the other structures (international human rights, domestic or regional regulations, professional norms) in which they are intertwined, our research team took pains to assess principles documents in context and to flag external frameworks where relevant. In doing so, we drew inspiration from the work of Urs Gasser, Executive Director of the Berkman Klein Center for Internet & Society and Professor of Practice at Harvard Law School, whose theory of "digital constitutionalism" describes the significant role the articulation of principles by a diverse set of actors might play as part of the "protoconstitutional discourse" that leads to the crystallization of comprehensive governance norms.
Our definition of principles excluded documents that were time-bound in the sense of observations about advances made in a particular year 11 or goals to be accomplished over a particular period. It also excluded descriptive statements about AI's risks and benefits. For example, there are numerous compelling reports that assess or comment on the ethical implications of AI, some even containing recommendations for next steps, that don't advance a particular set of principles 12 and were thus excluded from this dataset. However, where a report included a recommendations section which did correspond to our definition, we included that section (but not the rest of the report) in our dataset, 13 and more generally, when only a certain page range from a broader document conformed to our definition, we limited our sample to those pages. The result of these choices is a narrower set of documents that we hope lends itself to side-by-side comparison, but notably excludes some significant literature.

8 Information Technology Industry Council, 'AI Policy Principles' (2017) <https://www.itic.org/resources/AI-Policy-Principles-FullReport2.pdf>.
9 German Federal Ministry of Education and Research, the Federal Ministry for Economic Affairs and Energy, and the Federal Ministry of Labour and Social Affairs, 'Artificial Intelligence Strategy' (2018) <https://www.ki-strategie-deutschland.de/home.html>; Access Now, 'Human Rights in the Age of Artificial Intelligence' (2018) <https://www.accessnow.org/cms/assets/uploads/2018/11/AI-and-Human-Rights.pdf>.
10 For example, the Partnership on AI's document is the "Tenets," the Public Voice and European High Level Expert Group's documents are styled as "guidelines," the Chinese AI Industry's document is a "Code of Conduct," and the Toronto Declaration refers to "responsibilities" in Principle 8.
11 AI Now Institute, New York University, 'AI Now Report 2018' (December 2018) <https://ainowinstitute.org/AI_Now_2018_Report.pdf>.
We also excluded documents that were formulated solely as calls to a discrete further action, for example that funding be committed, new agencies be established, or additional research be done on a particular topic, because such calls function more as policy objectives than principles. By this same logic, we excluded national AI strategy documents that call for the creation of principles without advancing any. 14 However, where documents otherwise met our definition but contained individual principles such as calls for further research or regulation of AI (under the Accountability theme, see Section 3.2), we did include them. We also included the principle that those building and implementing AI should routinely consider the long-term effects of their work (under Professional Responsibility, see Section 3.7). Rather than constitute a discrete task, this call for further consideration functions as a principle in that it advocates that a process of reflection be built into the development of any AI system.
Finally, we excluded certain early instances of legislation or regulation which closely correspond to our definition of principles. 15 The process underlying the passage of governing law is markedly different from the one which resulted in the other principles documents we were considering, and we were conscious that the goal of this project was to facilitate side-by-side comparison and wanted to select documents that could fairly be evaluated that way. For the same reason, we excluded documents that looked at only a specific type of technology, such as facial recognition. We found that the content of principles documents was strongly affected by restrictions of technology type, and thus side-by-side comparison of these documents with others that focused on AI generally was unlikely to be maximally useful. On the other hand, we included principles documents that are sector-specific, focusing for example on the impacts of AI on the workforce or criminal justice, because they were typically similar in scope to the general documents.
12 AI Now Institute, New York University, 'AI Now Report 2018' (December 2018) <https://ainowinstitute.org/AI_Now_2018_Report.pdf>.
13 See generally, Access Now (n 9).
14 Due to the flexibility of our definition, there remains a broad range among the documents we did include, from high-level and abstract statements of values to more narrowly focused technical and policy recommendations. While we questioned whether this should cause us to narrow our focus still further, because the ultimate goal of this project is to provide a description of the current state of the field, we decided to retain the full range of principle types we observed in the dataset, and encourage others to dive deeper into particular categories according to their interests.

Document Search Methodology
The dataset of thirty-six documents on which this report and the associated data visualization are based was assembled using a purposive sampling method. Because a key aim of the project from the start was to create a data visualization that would facilitate side-by-side comparison of individual documents, it was important that the dataset be manageably sized, and also that it represent a diversity of viewpoints in terms of stakeholder, content, geography, date, and more. We also wanted to ensure that widely influential documents were well represented. For this reason, we determined that purposive sampling with the goal of maximum variation among influential documents in this very much emergent field was the most appropriate strategy. 16
Our research process included a wide range of tools and search terms. To identify eligible documents, our team used a variety of search engines, citations from works in the field, and expertise and personal recommendations from others in the Berkman Klein Center community. Because the principles documents are not academic publications, we did not make extensive use of academic databases. General search terms included a combination of "AI" or "artificial intelligence" and "principles," "recommendations," "strategy," "guideline," and "declaration," amongst others. We also used knowledge from our community to generate the names of organizations (companies, governments, civil society actors, etc.) that might have principles documents, and we then searched those organizations' websites and publications.
In order to ensure that each document earned its valuable real estate in our visualization, we required that it represent the views of an organization or institution; be authored by relatively senior staff; and, in cases of multistakeholder documents, reflect the involvement of a breadth of experts. It is worth noting that some government documents are expert reports commissioned by governments rather than the work of civil servants, but all documents included in this category were officially published.
Our search methodology has some limitations. Due to the language limitations of our team, our dataset only contains documents available in English, Chinese, French, German, and Spanish. While we strove for broad geographical representation, we were unable to locate any documents from the continent of Africa, although we understand that certain African states may be currently engaged in producing AI national strategy documents which may include some form of principles. Furthermore, we recognize the possibility of network bias: because these principles documents are often shared through newsletters or mailing lists, we discovered some documents through word of mouth from those in our network. That being said, we do not purport to have a complete dataset; assembling one is an admirable task which has been taken up by others. 17 Rather, we have put together a selection of prominent principles documents from an array of actors.

Principle and Theme Selection Methodology
As principles documents were identified, they were reviewed in team meetings for conformity with our criteria. Those that met the criteria were assigned to an individual team member for hand coding. That team member identified the relevant pages of the document, where the principles formed a sub-section of a longer document, and hand-coded all text in that section. In the initial phase, team members were actively generating the principle codes that form the basis of our database. They used the title of the principle in the document, or, if no title was given or the title did not thoroughly capture the principle's content, paraphrased the content of the principle. If an identical principle had already been entered into the database, the researcher coded the new document under that principle rather than entering a duplicate.
When the team had collected and coded approximately twenty documents, we collated the list of principles, merging close equivalents, to form a final list of forty-seven principles. We then clustered the principles, identifying ones that were closely related both in terms of their dictionary meanings (e.g. fairness and non-discrimination) as well as ones that were closely linked in the principles documents themselves (e.g. transparency and explainability). We arrived at eight total themes, each with between three and ten principles under it:
• Privacy (8 principles)
• Accountability (10 principles)
• Safety and security (4 principles)
• Transparency and explainability (8 principles)
• Fairness and non-discrimination (6 principles)
• Human control of technology (3 principles)
• Professional responsibility (5 principles)
• Promotion of human values (3 principles)
We also collected data on references to human rights in each document, whether to human rights as a general concept or to specific legal instruments such as the UDHR or the ICCPR. While this data is structured similarly to the principles and themes, with individual references coded under the heading of International Human Rights, because the references appear in different contexts in different documents and we do not capture that in our coding, we do not regard it as a theme in the same way that the foregoing concepts are. See Section 4 for our observations of how the documents in our dataset engage with human rights.
Both the selection of principles that would be included in the dataset and the collation of those principles into themes were subjective, though strongly informed by the content of the early documents in our dataset and the researchers' immersion in them. This has led to some frustration about their content. For example, when we released the draft data visualization for feedback, we were frequently asked why sustainability and environmental responsibility did not appear more prominently. While the authors are sensitive to the significant impact AI is having, and will have, on the environment, 18 we did not find a concentration of related concepts in this area that would rise to the level of a theme, and as such have included the principle of "environmental responsibility" under the Accountability theme as well as discussion of AI's environmental impacts in the "leveraged to benefit society" principle under the Promotion of Human Values theme. It may be that as the conversation around AI principles continues to evolve, sustainability becomes a more prominent theme.
Following the establishment of the basic structure of principles and themes, we were conservative in the changes we made because work on the data visualization, which depended on their consistency, was already underway. We did refine the language of the principles in the dataset, for example from "Right to Appeal" to "Ability to Appeal," when many of the documents that referenced an appeal mechanism did not articulate it as a user's right. We also moved a small number of principles from one theme to another when further analysis of their contents demanded it; the most prominent example of this is that "Predictability," which was included under the Accountability theme at the time our draft visualization was released in summer 2019, has been moved to the Safety and Security theme.
Because the production of the data visualization required us to minimize the number of these changes, and because our early document collection (on which the principles and themes were originally based) was biased toward documents from the U.S. and E.U., there are a small number of principles from documents, predominantly non-Western documents, that do not fit comfortably into our dataset. For example, the Japanese AI principles include a principle of fair competition which combines intranational competition law with a caution that "[e]ven if resources related to AI are concentrated in a specific country, we must not have a society where unfair data collection and infringement of sovereignty are performed under that country's dominant position." 19 We have coded this language within the "access to technology" principle under the Promotion of Human Values theme, but it does push at the edges of our definition of that principle, and is imperfectly captured by it. Had this document been part of our initial sample, its contents might have resulted in our adding to or changing the forty-seven principles we ultimately settled on.
We therefore want to remind our readers that this is a fundamentally partial and subjective approach. We view the principles and themes we have advanced herein as simply one heuristic through which to approach AI principles documents and understand their content. Other people could have made, and will make in future, other choices about which principles to include and how to group them.

[Timeline excerpt: sample entries include "AI in Mexico" (British Embassy in Mexico City); "Guidelines for AI" (The Public Voice Coalition); "European Ethical Charter on the Use of AI in Judicial Systems" (Council of Europe: CEPEJ); and "AI Principles and Ethics" (Smart Dubai).]

Themes among AI Principles
This section describes in detail our findings with respect to the eight themes, as well as the principles they contain:

Privacy
Privacy, enshrined in international human rights law and strengthened by a robust web of national and regional data protection laws and jurisprudence, is significantly impacted by AI technology. Fueled by vast amounts of data, AI is used in surveillance, advertising, healthcare decision-making, and a multitude of other sensitive contexts. Privacy is implicated not only in prominent implementations of AI, but also behind the scenes, in the development and training of these systems. 20 Consequently, privacy is a prominent theme 21 across the documents in our dataset, consisting of eight principles: "consent," "control over the use of data," "ability to restrict data processing," "right to rectification," "right to erasure," "privacy by design," "recommends data protection laws," and "privacy (other/general)." The General Data Protection Regulation of the European Union (GDPR) has been enormously influential in establishing safeguards for personal data protection in the current technological environment, and many of the documents in our dataset were clearly drafted with provisions of the GDPR in mind. We also see strong connections between principles under the Privacy theme and the themes of Fairness and Non-Discrimination, Safety and Security, and Professional Responsibility.

20 Mission assigned by the French Prime Minister (n 8) p. 114 ("Yet it appears that current legislation, which focuses on the protection of the individual, is not consistent with the logic introduced by these systems [AI], i.e. the analysis of a considerable quantity of information for the purpose of identifying hidden trends and behavior, and their effect on groups of individuals. To bridge this gap, we need to create collective rights concerning data.").
21 Privacy principles are present in 97% of documents in the dataset. All of the documents written by government, private, and multistakeholder groups reference principles under the Privacy theme. Among documents sourced from civil society, only one, the Public Voice Coalition AI guidelines, did not refer to privacy.

PRINCIPLES UNDER THIS THEME
Percentage reflects the number of documents in the dataset that include each principle

Consent
Broadly, "consent" principles reference the notion that a person's data should not be used without their knowledge and permission. Informed consent is a closely related but more robust principle, derived from the medical field, which requires that individuals be informed of risks, benefits, and alternatives. Arguably, some formulation of "consent" is a necessary component of a full realization of other principles under the Privacy theme, including "ability to restrict processing," "right to rectification," "right to erasure," and "control over the use of data." Documents vary with respect to the depth of their description of consent, breaking into two basic categories: documents that touch lightly on it, perhaps outlining a simple notice-and-consent regime, 22 and documents that invoke informed consent specifically or even expand upon it. 23 A few documents, such as Google's AI principles and IA Latam's principles, do not go beyond defining consent as permission, but as a general matter, informed consent or otherwise nonperfunctory processes to obtain consent feature prominently in the corpus.

The Chinese document states that "the acquisition and informed consent of personal data in the context of AI should be redefined" and, among other recommendations, states "we should begin regulating the use of AI which could possibly be used to derive information which exceeds what citizens initially consented to be disclosed." 24 The Indian national strategy cautions against unknowing consent and recommends a mass-education and awareness campaign as a necessary component of implementing a consent principle in India. 25

Control over the Use of Data

"Control over the use of data" as a principle stands for the notion that data subjects should have some degree of influence over how and why information about them is used.
Certain other principles under the Privacy theme, including "consent," "ability to restrict processing," "right to rectification," and "right to erasure," can be thought of as more specific instantiations of the control principle, since they are mechanisms by which a data subject might exert control. Perhaps because this principle functions as a higher-level articulation, many of the documents we coded under it are light in the way of definitions for "control." Generally, the documents in our dataset take the perspective that an individual's ability to determine how data about them is used ought to be respected. On the other hand, the German AI strategy clearly states the importance of balancing and repeatedly articulates people's control over their personal data as a qualified "right." The German document suggests the use of "pseudonymized and anonymized data" as potential tools to "help strike the right balance between protecting people's right to control their personal data and harnessing the economic potential of big-data applications." 29

There is some differentiation between the documents on the question of where control ought to reside. Some dedicate it to individuals, which is typical of current systems for data control. On the other hand, some documents would locate control in specially dedicated tools, institutions, or systems. For example, the European Commission's High-Level Expert Group describes the creation of "data protocols" and "duly qualified personnel" who would govern access to data. 30 Another document envisions tools that would allow individuals to assign "an online agent" to help make "case-by-case authorization decisions as to who can process what personal data for what purpose." This technology might even be a dynamically learning AI itself, evaluating data use requests by third parties in an "autonomous and intelligent" manner. 31 Lastly, AI in the UK advocates "data trusts" that would allow individuals to "make their views heard and shape … decisions" through some combination of consultative procedures, "personal data representatives," or other mechanisms. 32

Ability to Restrict Processing
The "ability to restrict processing" refers to the power of data subjects to have their data restricted from use in connection with AI technology. Some documents coded for this principle articulate this power as a legally enforceable right, while others stop short of doing so. For example, the Access Now report would "give people the ability to request that an entity stop using or limit the use of personal information." 33 Notably, Article 18 of the GDPR has legally codified this right with respect to data processing more generally, but documents within our dataset diverge in some respects from the GDPR definition.
The extent to which data subjects should be able to restrict the processing of their data is clearly in contention. For instance, the Montreal Declaration asserts that people have a "right to digital disconnection" and imposes a positive obligation on AI-driven systems to "explicitly offer the option to disconnect at regular intervals, without encouraging people to stay connected," 34 and an earlier draft of the European High Level Expert Group guidelines placed a positive obligation on government data controllers to "systematically" offer an "express opt-out" to citizens. 35 However, the final version of the HLEG guidelines was far less expansive, narrowing the right to opt-out to "citizen scoring" technologies in "circumstances where … necessary to ensure compliance with fundamental rights." 36

Right to Rectification
The "right to rectification" refers to the right of data subjects to amend or modify information held by a data controller if it is incorrect or incomplete. As with other principles whose titles contain the word "right," we only coded documents under this principle where they explicitly articulated it as a right or obligation. High-quality data contributes to safety, fairness, and accuracy in AI systems, so this principle is closely related to the themes of Fairness and Non-Discrimination and Safety and Security. Further, the "right to rectification" is closely related to the "ability to restrict processing," insofar as they are both part of a continuum of potential responses a data subject might have in response to incorrect or incomplete information.
Rectification is not a frequently invoked principle, appearing in only three documents within our dataset. The Access Now report recommends a right to rectification closely modeled after that contained in Article 16 of the GDPR. The Singapore Monetary Authority's AI principles place a positive obligation on firms to provide data subjects with "online data management tools" that enable individuals to review, update, and edit information for accuracy. 37 Finally, the T20 report on the future of work and education addresses this principle from a sector-specific viewpoint, describing a right held by employees and job applicants to "have access to the data held on them in the workplace and/or have means to ensure that the data is accurate and can be rectified, blocked, or erased if it is inaccurate." 38

Right to Erasure
The "right to erasure" refers to an enforceable right of data subjects to the removal of their personal data. Article 17 of the GDPR also contains a right to erasure, which allows data subjects to request the removal of personal data under a defined set of circumstances, and provides that the request should be evaluated by balancing the rights and interests of the data holder, general public, or other relevant parties. The Access Now report models its recommendation on Article 17, stating: "[T]he Right to Erasure provides a pathway for deletion of a person's personal data held by a third party entity when it is no longer necessary, the information has been misused, or the relationship between the user and the entity is terminated." 39 However, other documents in the dataset advance a notion of the right to erasure distinct from the GDPR's. Both the Chinese AI governance principles and the Beijing AI Principles include a call for "revocation mechanisms." 40 In contrast to the Access Now articulation, the Beijing AI Principles provide for access to revocation mechanisms in "unexpected circumstances." 41 Further, the Beijing document conditions that the data and service revocation mechanism must be "reasonable" and that practices should be in place to ensure the protection of users' rights and interests. The version of the erasure principle in the T20 report on the future of work and education is even more narrowly tailored, articulating a right to erasure for data on past, present, and potential employees held by employers if it is inaccurate or otherwise violates the right to privacy. 42

Privacy by Design

"Privacy by design," also known as data protection by design, is an obligation on AI developers and operators to integrate considerations of data privacy into the construction of an AI system and the overall lifecycle of the data.
Privacy by design is codified in Article 25 of the GDPR, which stipulates that data controllers must "implement appropriate technical and organisational measures..." during the design and implementation stage of data processing "to protect the rights of data subjects." 43 In recognition of these recent regulatory advances, IBM simply commits to adhering to national and international rights laws during the design of an AI's data access permissions. 44 In the private sector, privacy by design is regarded as an industry best practice, and it is under these terms that Google and Telefónica consider the principle. Google's AI principles document does not use the phrase "privacy by design," but it does commit the company to incorporate Google's privacy principles into the development and use of AI technologies and to "encourage architectures with privacy safeguards." 45 Telefónica also points to its privacy policy and methodologies, stating: "In order to ensure compliance with our Privacy Policy we use a Privacy by Design methodology. When building AI systems, as with other systems, we follow Telefónica's Security by Design approach." ITI goes a step further, committing to "ethics by design," a phrase that can best be understood as the integration of principles into the design of AI systems in a manner beyond what is legally required, and one that connects strongly with the "responsible design" principle under the Professional Responsibility theme.

Recommends Data Protection Laws
The "recommends data protection laws" principle, simply put, holds that new government regulation is a necessary component of protecting privacy in the face of AI technologies. Documents produced on behalf of the governments of France, Germany, Mexico, and India each call for the development of new data privacy and data protection frameworks. These calls for regulation tend to be aspirational in their framing, with a common acknowledgement, neatly articulated in the Access Now report, that "data protection legislation can anticipate and mitigate many of the human rights risks posed by AI." 46 Other documents add that the "diverse and fast changing nature of the technology" requires a "continually updated" privacy protection regime. 47 The importance of agile regulatory frameworks is reiterated in the AI in Mexico document, which advises Mexico's National Institute for Transparency, Access to Information and Protection of Personal Data "to keep pace with innovation." 48

The European documents that address this principle do so in the context of an already highly protective regime. The German strategy document suggests that there exists a gap in that regime, and calls for a new Workers' Data Protection Act "that would protect employees' data in the age of AI." 49 This narrow approach contrasts with the French strategy document, which critiques current legislation, and the rights framework more fundamentally, as too focused on "the protection of the individual" to adequately contend with the potential collective harms machine learning and AI systems can perpetuate, and which calls for the creation of new "collective rights concerning data." 50 Even outside of Europe, the GDPR's influence is felt: the Indian AI strategy points toward existing practice in Europe - specifically, the GDPR and France's right to explanation for administrative algorithmic decisions - as potential benchmarks for Indian regulators. 51 Like the German AI strategy, the Indian AI strategy recommends establishing sector-specific regulatory frameworks to supplement a central privacy protection law. 52

Privacy (Other/General)

Documents that were coded for the "privacy (other/general)" principle generally contain broad statements on the relevance of privacy protections to the ethical or rights-respecting development and deployment of AI. This was the single most popular principle in our dataset: all but three documents contained it. 53 The three that did not are the Public Voice Coalition AI guidelines, the Ground Rules for AI conference paper, and the Singapore Monetary Authority's AI principles. (The Public Voice Coalition AI guidelines are not coded for any principle in the Privacy theme, although in external materials such as the explanatory memorandum and references section, the organization makes clear that privacy and data protection laws were highly influential, particularly in the framing of its "transparency" principle. See The Public Voice Coalition, 'Universal Guidelines for Artificial Intelligence' (2018) <https://thepublicvoice.org/ai-universal-guidelines/>.) Given the breadth of coverage for this principle, it is interesting to observe significant variety in the justifications for its importance. Many actors behind principles documents root the privacy principle in compliance with law, whether international human rights instruments or national or regional laws such as the GDPR, but others offer alternative rationales.
Privacy is frequently called out as the prime example of the relevance of a rights framework to AI technology. The OECD and G20 AI principles call for "respect [for] the rule of law, human rights and democratic values," including respect for privacy. 54 The Toronto Declaration, which takes human rights as an overall framework for its approach to AI governance, also highlights the importance of privacy, stating that "States must adhere to relevant national and international laws and regulations that codify and implement human rights obligations protecting against discrimination and other related rights harms, for example data protection and privacy laws." 55 Finally, in the private sector, where AI principles most commonly take the form of internal company commitments, Telia Company commits to examining "how we manage human rights risks and opportunities, such as privacy." 56 Other private sector actors, including Microsoft, Telefónica, IA Latam, and IBM, describe respect for privacy as a legal obligation and in most cases refer to privacy as a right.
Outside of compliance, we found a wealth of other grounds for the primacy of privacy. The German AI strategy describes strong privacy standards as not only necessary from a legal and ethical standpoint but as "a competitive advantage internationally." 57 Google and ITI describe respect for user privacy as a corporate responsibility owed to users and a business imperative. 58 The U.S. Science and Technology Council report balances consumer privacy against the value of "rich sets of data." 59 Other non-legal justifications include cybersecurity benefits, 60 alignment with public opinion, 61 and the author institution's preexisting public commitment to a set of privacy principles. 62

Accountability

On its face, the term "artificial intelligence" suggests an equivalence with human intelligence. Depending on who you ask, the age of autonomous AIs is either upon us or uncertain centuries in the future, but concerns about who will be accountable for decisions that are no longer made by humans - as well as the potentially enormous scale of this technology's impacts on the social and natural world - likely lie behind the prevalence of the Accountability theme in our dataset. 63 Accountability principles are present in 97% of the documents; the only document that does not mention one is Telefónica's, a company that received the highest score in the 2019 Ranking Digital Rights report. (It will be interesting to see how its ranking is affected when RDR adds AI governance to its questionnaire in its next report. Telefónica, 'AI Principles of Telefónica' (October 2018).) Almost all documents that we analyzed mention at least one Accountability principle: "recommends adoption of new regulations," "verifiability and replicability," "impact assessments," "environmental responsibility," "evaluation and auditing requirements," "creation of a monitoring body," "ability to appeal," "remedy for automated decision," "liability and legal responsibility," and "accountability per se." The documents reflect diverse perspectives on the mechanisms through which accountability should be achieved. It is possible to map the principles within the Accountability theme across the lifecycle of an AI system, in three essential stages: design (pre-deployment), monitoring (during deployment), and redress (after harm has occurred).

[Table: principles within the Accountability theme mapped across three lifecycle stages - design (pre-deployment), monitoring (during deployment), and redress (after harm has occurred).]

Of course, each principle may have applicability across multiple stages as well. For example, the "verifiability and replicability" and "environmental responsibility" principles listed under the design stage in the above table will also be relevant in the monitoring and redress phases, but for optimal implementation should be accounted for when the system is designed.
The Accountability theme shows strong connections to the themes of Safety and Security, Transparency and Explainability, and Human Control of Technology. 64 Accountability principles are frequently mentioned together with the principle of transparent and explainable AI, 65 often highlighting the need for accountability as a means to gain the public's trust 66 in AI and dissipate fears. 67

Verifiability and Replicability
The principle of "verifiability and replicability" provides for several closely related mechanisms to ensure AI systems are functioning as they should: an AI experiment ought to "exhibit[] the same behavior when repeated under the same conditions" 68 and provide sufficient detail about its operations that it may be validated. The German AI Strategy highlights that a verifiable AI system should be able to "effectively prevent distortion, discrimination, manipulation and other forms of improper use." 69 The development of verifiable AI systems may have institutional components along with technical ones. Institutionally, auditing institutions could "verify algorithmic decision-making in order to prevent improper use, discrimination and negative impacts on society" 70 and "new standards, including standards for validation or certification agencies on how AI systems have been verified" 71 could be developed.

Impact Assessments
The "impact assessments" principle captures both specific calls for human rights impact assessments (HRIAs) as well as more general calls for the advance identification, prevention, and mitigation of the negative impacts of AI technology. One way to measure the negative impacts of AI systems is to evaluate their "risks and opportunities" for human rights, 72 and some documents propose a structure for the design of such assessments: the Access Now report, for example, outlines that the assessment should include a consultation with relevant stakeholders, "particularly any affected groups, human rights organizations, and independent human rights and AI experts." 75 For other actors - often those less closely grounded in the daily management of technology's human rights harms - this principle translated to calls for the assessment of "both direct and indirect harm as well as emotional, social, environmental, or other non-financial harm." 76 We observed that some documents use the terminology of potential harm 77 and others call for the identification of risks. 78 The emphasis, particularly among the latter category of documents, is on prevention: impact assessments are an accountability mechanism because a sufficiently dire assessment (where risks are "too high or impossible to mitigate" 79 ) should prevent an AI technology from being deployed or even developed. Some documents suggest that an AI system should only be used after evaluating its "purpose and objectives, its benefits, as well as its risks." 80 In this context, it is particularly important that the AI system can be tested in a controlled environment and scaled up as appropriate. 81 The Smart Dubai AI principles document calls for the use of AI systems only if they are "backed by respected and evidence-based academic research, and AI developer organizations." 82

Environmental Responsibility
The principle of "environmental responsibility" reflects the growing recognition that AI, as a part of our human future, will necessarily interact with environmental concerns, and that those who build and implement AI technology must be accountable for its ecological impacts. The documents address environmental responsibility from two different angles.
Some documents capture this principle through an insistence that the environment should be a factor considered within the assessment of potential harm. 83 Others go further, moving from a prohibition on negative ramifications to prescribe that AI technologies must be designed "to protect the environment, the climate and natural resources" 85 or to "promote the sustainable development of nature and society." 86

Evaluation and Auditing Requirement
The "evaluation and auditing requirement" principle articulates the importance not only of building technologies that are capable of being audited, 87 but also of using the learnings from evaluations to feed back into a system and ensure that it is continually improved, "tuning AI models periodically to cater for changes to data and/or models over time." 88 A frequent focus is on the importance of humans in the auditing exercise, either as an auditing authority 89 or as users of AI systems who are solicited for feedback. 90 The Toronto Declaration calls upon developers to submit "systems that have a significant risk of resulting in human rights abuses to independent third-party audits." 91 The T20 report on the future of work and education focuses instead on breadth of input, highlighting the need for training data and features to "be reviewed by many eyes to identify possible flaws and to counter the 'garbage in garbage out' trap." 92 Some, but not all, documents have drafted their "evaluation and auditing" principles to contain significant teeth. Some recommend the implementation of mechanisms that allow for an eventual termination of use - recommended, in particular, if an AI system "would violate international conventions or human rights." 93 The Access Now report suggests the development of "a failsafe to terminate acquisition, deployment, or any continued use if at any point an identified human rights violation is too high or unable to be mitigated." 94

Creation of a Monitoring Body
The principle of "creation of a monitoring body" reflects a repeated recognition that some new organization or structure may be required to create and oversee standards and best practices in the context of AI. Visions for how these bodies would be constituted and what activities they would undertake vary. Microsoft's AI principles suggest the creation of "internal review boards" - internal, we presume, to the company, but not to the teams that are building the technology. The Toronto Declaration stresses that any monitoring body should be independent and might include "judicial authorities when necessary." 96 The German AI strategy outlines the creation of a national AI observatory, which could also be tasked with monitoring whether AI systems are designed in a socially compatible way and with developing auditing standards. 97

Ability to Appeal
The principle of an "ability to appeal" concerns the possibility that an individual who is the subject of a decision made by an AI could challenge that decision. The ability to appeal connects with the theme of Human Control of Technology, in that it's often mentioned in connection with the principle of "right to human review of an automated decision." 98 Some documents in fact collapse the two. 99 The Access Now report calls the human in the loop an element that adds a "layer of accountability." 100 In some individual documents, this principle is parsed more neatly; the Access Now report, for example, explains that there should be both an ability to challenge the use of an AI system and an ability to appeal a decision that has been "informed or wholly made by an AI system." 101 The ability to appeal the use of, or a recommendation made by, an AI system could be realized in the form of a judicial review. 102 Further, some documents limit the ability to appeal to "significant automated decisions" only. 103 A subset of documents recognize as part of this principle the importance of making AI subjects aware of existing procedures to vindicate their rights 104 or of broadening the accessibility of channels for the exercise of subjects' rights. 105 In order to enable AI subjects to challenge the outcomes of AI systems, the OECD and G20 AI principles suggest that the outcome of the system must be "based on plain and easy-to-understand information on the factors, and the logic that served as the basis for the prediction, recommendation or decision." 106

Remedy for Automated Decision
The principle of "remedy for automated decision" is fundamentally a recognition that as AI technology is deployed in increasingly critical contexts, its decisions will have real consequences, and that remedies should be available just as they are for the consequences of human actions. The principle of remedy is intimately connected to the ability to appeal, since where appeal allows for the rectification of the decision itself, remedy rectifies its consequences. 107 There is a bifurcation in many of the documents that provide for remedy between the remedial mechanisms that are appropriate for state use of AI versus those that companies should implement for private use.

Recommends Adoption of New Regulations
The "recommends adoption of new regulations" principle reflects a position that AI technology represents a significant enough departure from the status quo that new regulatory regimes are required to ensure it is built and implemented in an ethical and rights-respecting manner. Some documents that contain this principle refer to existing regulations, 114 but there is a general consensus that it is necessary to reflect on the adequacy of those frameworks. 115 Documents that contain this principle frequently express an urgent need for clarity about parties' respective responsibilities. 116 A few documents address the fact that "one regulatory approach will not fit all AI applications" 117 and emphasize the need to adopt context-specific regulations, for example regarding the use of AI for surveillance and similar activities that are likely to interfere with human rights. 118 Among statements of this principle, we see a variety of justifications for future regulation, some of which are recognizable from other themes in our data: regulation should ensure that the development and use of AI is safe and beneficial to society; 119 implement oversight mechanisms "in contexts that present risk of discriminatory or other rights-harming outcomes;" 120 and identify the right balance between innovation and privacy rights. 121 There is also a common emphasis on the need for careful balancing in crafting regulation. One document from China suggested that new regulations might be based on "universal regulatory principles" 125 that would be formulated at an international level.

Accountability Per Se
Like many of our themes, the Accountability theme contains an eponymous "accountability" principle, but in this specific case only documents that explicitly use the word "accountability" or "accountable" (25 of the 36 documents) were coded under this principle. Because principles documents are frequently challenged as toothless or unenforceable, we were interested to see how documents grappled with this term specifically.
In this context, documents converge on a call for developing "accountability frameworks" 126 that define the responsibility of different entities "at each stage in research and development, design, manufacturing, operation and service." 127 Notably, a few documents emphasize that the responsibility and accountability of AI systems cannot lie with the technology itself, but should be "apportioned between those who design, develop and deploy [it]."

Safety and Security

Given early examples of AI systems' missteps 133 and the scale of harm they may cause, concerns about the safety and security of AI systems were unsurprisingly a significant theme among the principles in the documents we coded. 134 There appears to be a broad consensus across different actor types on the centrality of Safety and Security, with about three-quarters of the documents addressing principles within this theme. There are four principles under it: "safety," "security," "security by design," and "predictability." It is worth distinguishing, up front, the related concepts of safety and security. The principle of safety generally refers to the proper internal functioning of an AI system and the avoidance of unintended harms. By contrast, security addresses external threats to an AI system. However, documents in our dataset often mention the two principles together, and indeed they are closely intertwined. This becomes particularly evident when documents use the related term "reliability": 135 a system that is reliable is safe, in that it performs as intended, and also secure, in that it is not vulnerable to being compromised by unauthorized third parties.
There are connections between this theme and the Accountability, Professional Responsibility, and Human Control of Technology themes. In many ways, principles under these other themes can be seen, at least partially, as implementation mechanisms for the goals articulated under Safety and Security.

Percentage reflects the number of documents in the dataset that include each principle
Accountability measures are key guarantors of AI safety, including verifiability 136 and the need to monitor the operation of AI systems after their deployment. 137 Individuals and organizations behind AI technology have a key role in ensuring it is designed and used in ways that are safe and secure. Safety is thus frequently mentioned in connection with the need to ensure controllability by humans. 138

Safety
The principle of "safety" requires that an AI system be reliable: testing should cover not only likely scenarios but also establish that a system "responds safely to unanticipated situations and does not evolve in unexpected ways." 146 Testing and monitoring of AI systems should continue after deployment, according to a few articulations of the "safety" principle. This is particularly relevant where the document focuses on machine learning technology, which is likely to evolve following implementation as it continues to receive new information. Developers of AI systems cannot always "accurately predict the risks" 147 associated with such systems ex ante. There are also safety risks associated with AI systems being implemented in ways that their creators did not anticipate; one document suggests that designing AI that could be called safe might require that the technology make "relatively safe decisions" "even when faced with different environments in the decision-making process." 148 Finally, two documents coded for the "safety" principle specifically call for the development of safety regulations to govern AI. One call relates specifically to the regulation of autonomous vehicles 149 and the other is more general, calling for "high standards in terms of safety and product liability" 150 within the EU. Other documents call for public awareness campaigns to promote safety. 151 For example, IEEE's Ethically Aligned Design suggests that "in the same way police officers have given public safety lectures in schools for years; in the near future they could provide workshops on safe [AI systems]." 152

Security
The principle of "security" concerns an AI system's ability to resist external threats. Much of the language around security in our dataset is high level, but in broad terms, the documents coded here identify three specific needs: to test the resilience of AI systems; 153 to share information on vulnerabilities 154 and cyberattacks; 155 and to protect privacy 156 and "the integrity and confidentiality of personal data." 157 With regard to the last of these, the ITI AI Policy Principles suggest that the security of data could be achieved through anonymization, de-identification, or aggregation, and they call on governments to "avoid requiring companies to transfer or provide access to technology, source code, algorithms, or encryption keys as conditions for doing business." 158 The Chinese White Paper on AI Standardization suggests that the implementation of security assurance requirements could be facilitated through a clear distribution of liability and fault between developers, product manufacturers, service providers, and end users. 159 A number of documents, concentrated in the private sector, emphasize the "integral" 160 role of security in fostering trust in AI systems. 161 The ITI AI Policy Principles state that AI technology's success depends on users' "trust that their personal and sensitive data is protected and handled appropriately." 162

Security by Design
The "security by design" principle, as its name suggests, is related to the development of secure AI systems. The European High Level Expert Group guidelines observe that these "values-by-design" principles may provide a link between abstract principles and specific implementation decisions. 163 A few documents argue that existing and widely adopted security standards should apply to the development of AI systems. The German AI Strategy suggests that security standards for critical IT infrastructure should be used, 164 and the Microsoft AI Principles mention that principles of robust and fail-safe design from other engineering disciplines can be valuable. 165 Similarly, the European High Level Expert Group guidelines argue for AI systems to be built with a "fallback plan" whereby, in the event of a problem, a system would switch its protocol "from statistical to rule-based" decision-making or require the intervention of a human before continuing. 166

Predictability
The principle of "predictability" is concisely defined in the European High Level Expert Group guidelines, which state that for a system to be predictable, the outcome of the planning process must be consistent with the input. 167 Predictability is generally presented as a key mechanism to ensure that AI systems have not been compromised by external actors. As the German AI strategy puts it, "transparent, predictable and verifiable" AI systems may "effectively prevent distortion, discrimination, manipulation and other forms of improper use." 168 As with the "security" principle, there is an observable connection between predictable AI systems and public trust, with the Beijing AI Principles observing that improving predictability, alongside other "ethical design approaches," should help "to make the system trustworthy."

Transparency and Explainability

Perhaps the greatest challenge that AI poses from a governance perspective is the complexity and opacity of the technology. Not only can it be difficult to understand from a technical perspective, but early experience has already proven that it's not always clear when an AI system has been implemented in a given context, and for what task. The eight principles within the theme of Transparency and Explainability are a response to these challenges: "transparency," "explainability," "open source data and algorithms," "open government procurement," "right to information," "notification when interacting with an AI," "notification when AI makes a decision about an individual," and "regular reporting." The principles of transparency and explainability are some of the most frequently occurring individual principles in our dataset, each mentioned in approximately three-quarters of the documents. 170 It is interesting to note a bifurcation among the principles under this theme: some, including "explainability" and the ability to be notified when you are interacting with an AI or are subject to an automated decision, respond to entirely new governance challenges posed by the specific capabilities of current and emerging AI technologies. The rest of the principles in this theme, such as "open source data and algorithms" and "regular reporting," are well-established pillars of technology governance, now applied specifically to AI systems.

Percentage reflects the number of documents in the dataset that include each principle
Transparency and Explainability is connected to numerous other themes, most especially Accountability, 171 because principles within it may function as a "prerequisite for ascertaining that [such other] principles are observed." 172 It is also connected to the principle of predictability within the Safety and Security theme and to the Fairness and Non-discrimination theme. 173 The German government notes that individuals can only determine if an automated decision is biased or discriminatory if they can "examine the basis -the criteria, objectives, logic -upon which the decision was made." 174 Transparency and Explainability is a foundation for the realization of other many other principles.

Transparency
The principle of "transparency" is the assertion that AI systems should be designed and implemented in such a way that oversight of their operations is possible. The documents in the dataset vary in their suggestions about how transparency might be applied across institutions and technical systems throughout the AI lifecycle. The European High Level Expert Group guidelines note that transparency around "the data, the system, and the business models" all matters. Transparency throughout an AI system's life cycle means openness throughout the design, development, and deployment processes. While most documents treat transparency as binary - that is, an AI system is either transparent or it is not - several articulate the transparency principle as one that entities will strive for, with increased disclosure over time. 177 Some raise concerns about the implications of an over-broad transparency regime, which could give rise to conflicts with privacy-related principles. 178 IEEE's Ethically Aligned Design recommends the development of "new standards that describe measurable, testable levels of transparency, so that systems can be objectively assessed and levels of compliance determined." 179 Where sufficient transparency cannot be achieved, the Toronto Declaration calls upon states to "refrain from using these systems at all in high-risk contexts." 180

Explainability

"Explainability" is defined in various ways, but is at its core about the translation of technical concepts and decision outputs into intelligible, 181 comprehensible formats suitable for evaluation. The T20 report on the future of work and education, for example, highlights the importance of "clear, complete and testable explanations of what the system is doing and why." 182 Put another way, a satisfactory explanation "should take the same form as the justification we would demand of a human making the same kind of decision." 183 Many of the documents note that explainability is particularly important for systems that might "cause harm," 184 have "a significant effect on individuals," 185 or impact "a person's life, quality of life, or reputation." 186 The AI in the UK document suggests that if an AI system has a "substantial impact on an individual's life" and cannot provide a "full and satisfactory explanation" for its decisions, then the system should not be deployed. 187 The principle of explainability is closely related to the Accountability theme as well as the principle of "right to human review of automated decision" under the Human Control of Technology theme. 188 The Toronto Declaration mentions explainability as a necessary requirement to "effectively scrutinize" the impact of AI systems on "affected individuals and groups," to establish responsibilities, and to hold actors to account. 189 The European Commission's policy statement also connects explainability to the principle of non-discrimination, as the development of understandable AI is crucial for minimizing "the risk of bias or error." 190 The need for explainability will become increasingly important as the capabilities and impact of AI systems compound. 191

Open Source Data and Algorithms
The principle of "open source data and algorithms" is, as noted in the introduction to this theme, a familiar concept in technology governance, and it operates similarly in the context of AI as in other computer systems. The T20 report on the future of work and education focuses on the balance between transparency and the potential negative effect of open source policies on algorithmic innovation. One solution, they posit, is "algorithmic verifiability," which would "require companies to disclose information allowing the effect of their algorithms to be independently assessed, but not the actual code driving the algorithm." 197 Recognizing that data or algorithm disclosure is not sufficient to achieve transparency or explainability, the IEEE stresses the importance of disclosing the underlying algorithm to validation or certification agencies that can effectively serve as auditing and accountability bodies. 198

Open Government Procurement
"Open government procurement," the requirement that governments be transparent about their use of AI systems, was only present in one document in our dataset. The Access Now report recommends that: "When a government body seeks to acquire an AI system or components thereof, procurement should be done openly and transparently according to open procurement standards. This includes publication of the purpose of the system, goals, parameters, and other information to facilitate public understanding. Procurement should include a period for public comment, and states should reach out to potentially affected groups where relevant to ensure an opportunity to input." 199 It is notable that the Access Now report is one of the few documents in our dataset that specifically adopts a human rights framework. This principle accounts for the special duty of governments under Principle 5 of the UN Guiding Principles on Business and Human Rights to protect against human rights abuses when they contract with private businesses.

Right to Information
The "right to information" concerns the entitlement of individuals to know about various aspects of the use of, and their interaction with, AI systems. This might include "information about the personal data used in the decision-making process," 200 "access to the factors, the logic, and techniques that produced the outcome" of an AI system, 201 and generally "how automated and machine learning decision-making processes are reached." 202 As elsewhere, where the word "right" is contained in the title of a principle, we coded documents only where it was explicitly articulated as a right or obligation. The OECD and G20 AI principles, for instance, do not call for an explicit "right to information" for users, and thus were not coded here, even though they recommend that those adversely affected by an AI system should be able to challenge it based on "easy-to-understand information on the factors, and the logic that served as the basis for the prediction, recommendation or decision." 203 One document specifically articulates the right to information as extending beyond a right to technical matter and data to the "obligation [that it] should be drawn up in plain language and be made easily accessible." 204

Notification when AI Makes a Decision about an Individual
The definition of the principle of "notification when an AI system makes a decision about an individual" is facially fairly clear: where an AI system has been employed, the person subject to its decision should know. The AI in the UK document stresses the importance of this principle in allowing individuals to "experience the advantages of AI, as well as to opt out of using such products should they have concerns." 205 If people do not know when they are subject to automated decisions, they have neither the autonomy to decide whether or not they consent, nor the information to reach their own conclusions about the overall value that AI provides.
In this respect, the notification principle connects to the themes of Human Control of Technology and Accountability. For example, the European Commission not only suggests that individuals should be able to opt out, 206 but also that they should be "informed on how to reach a human and how to ensure that a system's decisions can be checked or corrected," 207 which is an important component of accountability. Access Now emphasizes the special importance of this principle when an AI system "makes a decision that impacts an individual's rights." 208

Notification when Interacting with an AI
The principle of "notification when interacting with an AI system," a recognition of AI's increasing ability to pass the Turing test at least in limited applications, stands for the notion that humans should always be made aware when they are engaging with technology rather than directly with another person. Examples of when this principle is relevant include chatbot interactions, 209 facial recognition systems, credit scoring systems, and generally "where machine learning systems are used in the public sphere." 210 Like "notification when an AI system makes a decision about an individual," this principle is a precondition to the actualization of other principles, including in the Accountability and Human Control of Technology themes. However, this principle is broader than the preceding one because it requires notification even in passive uses of AI systems. In the deployment of facial recognition systems, for example, the "decision" principle might be interpreted to require disclosure only if an action is taken (e.g., an arrest), whereas the "interaction" principle might require notices that the facial recognition system is in use to be posted in public spaces, much like CCTV signs. Among other glosses on this principle, the European Commission notes that "consideration should be given to when users should be informed on how to reach a human" 211 and the OECD and G20 AI principles call out that a system of notifications of AI interactions may be especially important "in the workplace." 212

Regular Reporting
"Regular reporting" as a principle stands for the notion that organizations that implement AI systems should systematically disclose important information about their use. This might include "how outputs are reached and what actions are taken to minimize rights-harming impacts," 213 "discovery of … operating errors, unexpected or undesirable effects, security breaches, and data leaks," 214 or the "evaluation of the effectiveness" 215 of AI systems. The regular reporting principle can be interpreted as another implementation mechanism for transparency and explainability, and the OECD and G20 AI principles further call for governments to step in and develop internationally comparable metrics to measure AI research, development, and deployment and to gather the necessary evidence to support these claims.

Fairness and Non-discrimination
Algorithmic bias - the systemic under- or over-prediction of probabilities for a specific population - creeps into AI systems in myriad ways. A system might be trained on unrepresentative, flawed, or biased data. 217 Alternatively, the predicted outcome may be an imperfect proxy for the true outcome of interest, 218 or the outcome of interest may be influenced by earlier decisions that are themselves biased. As AI systems increasingly inform or dictate decisions, particularly in sensitive contexts where bias long predates their introduction, such as lending, healthcare, and criminal justice, ensuring fairness and non-discrimination is imperative. Consequently, the Fairness and Non-discrimination theme is the most highly represented theme in our dataset, with every document referencing at least one of its six principles: "non-discrimination and the prevention of bias," "representative and high-quality data," "fairness," "equality," "inclusiveness in impact," and "inclusiveness in design." 219

Within this theme, many documents point to biased data - and the biased algorithms it generates - as the source of discrimination and unfairness in AI, but a few also recognize the role of human systems and institutions in perpetuating or preventing discriminatory or otherwise harmful impacts. Examples of language that focuses on the technical side of bias include the Ground Rules for AI conference paper ("[c]ompanies should strive to avoid bias in A.I. by drawing on diverse data sets") 220 and the Chinese White Paper on AI Standardization ("we should also be wary of AI systems making ethically biased decisions"). 221 While this concern is warranted, it points toward a narrow solution, the use of unbiased datasets, which relies on the assumption that such datasets exist. Moreover, it reflects a potentially technochauvinistic orientation: the idea that technological solutions are appropriate and adequate fixes to the deeply human problem of bias and discrimination. 222 The Toronto Declaration takes a wider view of the many places bias permeates the design and deployment of AI systems: "All actors, public and private, must prevent and mitigate against discrimination risks in the design, development and application of machine learning technologies. They must also ensure that there are mechanisms allowing for access to effective remedy in place before deployment and throughout a system's lifecycle." 223 Within the Fairness and Non-discrimination theme, we see significant connections to the Promotion of Human Values theme, with principles such as "fairness" and "equality" sometimes appearing alongside other values in lists coded under the "Human Values and Human Flourishing" principle. 224 There are also connections to the Human Control of Technology and Accountability themes, principles under which can act as implementation mechanisms for some of the higher-level goals set by Fairness and Non-discrimination principles.

[Sidebar: Principles under this theme. Percentage reflects the number of documents in the dataset that include each principle. Non-discrimination and the Prevention of Bias: 89%.]

217 E.g., Jeffrey Dastin, "Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women," Reuters (Oct. 9, 2018), https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
218 A bail decision algorithm, for example, may predict for "failure to appear" instead of flight risk to inform decisions about pretrial release. This conflates flight with other, less severe causes of nonappearance (e.g., an individual may miss a court date due to inability to access transportation, childcare, or sickness) that may warrant a less punitive, lower-cost intervention than detention.
219 Fairness and Non-discrimination principles are present in 100% of documents in the dataset.

Non-discrimination and the Prevention of Bias
The "non-discrimination and the prevention of bias" principle articulates that bias in AI - in the training data, technical design choices, or the technology's deployment - should be mitigated to prevent discriminatory impacts. This principle was one of the most commonly included in our dataset 225 and, along with others like "fairness" and "equality," frequently operates as a high-level objective for which other principles under this theme (such as "representative and high-quality data" and "inclusiveness in design") function as implementation mechanisms. 226 Deeper engagement with the principle of "non-discrimination and the prevention of bias" included warnings that AI is not only replicating existing patterns of bias, but also has the potential to significantly scale discrimination and to discriminate in unforeseen ways. 227 Other documents recognized that AI's great capacity for classification and differentiation could and should be proactively used to identify and address discriminatory practices in current systems. 228 The German government commits to assessing how its current legal protections against discrimination cover - or fail to cover - AI bias, and to adapting accordingly. 229

Representative and High Quality Data
The principle of "representative and high quality data," driven by what is colloquially referred to as the "garbage in, garbage out" problem, calls for the use of appropriate inputs to an AI system: data that accurately reflect the population of interest. Using an unrepresentative dataset skews how a group is represented relative to the actual composition of the target population, introduces bias, and reduces the accuracy of the system's eventual decisions. It is also important that the data be high quality and apposite to the context in which the AI system will be deployed, because even a representative dataset may be informed by historical bias. 230 Quality measures for data include accuracy, consistency, and validity. As the definition suggests, the documents in our dataset often directly connected this principle to the goal of mitigating the discriminatory impacts of AI.
The Montreal Declaration and the European Charter on AI in judicial systems call for representative and high quality data but state that even using the gold standard in data could be detrimental if the data are used for "deterministic analyses." 231 The Montreal Declaration's articulation of this principle warns against using data "to lock individuals into a user profile, fix their personal identity, or confine them to a filtering bubble, which would restrict and confine their possibilities for personal development." 232 Some documents, including the European Charter on AI in judicial systems, explicitly call for special protections for marginalized groups and for particularly sensitive data, defined as "alleged racial or ethnic origin, socio-economic background, political opinions, religious or philosophical beliefs, trade union membership, genetic data, biometric data, health-related data or data concerning sexual life or sexual orientation." 233
229 German Federal Ministry of Education and Research, the Federal Ministry for Economic Affairs and Energy, and the Federal Ministry of Labour and Social Affairs (n 9) p. 37.
230 For example, a lending algorithm trained on a dataset of previously successful applicants will be "representative" of the historical applicant pool but will also replicate any past biases that informed who received a loan.
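The "garbage in, garbage out" dynamic described above can be made concrete with a small sketch. Everything below (the two groups, the 50%/70% base rates, and the 95/5 sampling skew) is invented for illustration; the point is only that a quantity estimated from an unrepresentative sample misstates a subgroup's true rate:

```python
import random

random.seed(0)

# Hypothetical illustration: group B is half of the true population,
# but the "historical" training sample is drawn almost entirely from
# group A. All groups, rates, and sizes here are invented.
def draw(group_weights, n):
    groups = random.choices(["A", "B"], weights=group_weights, k=n)
    base_rate = {"A": 0.5, "B": 0.7}  # true positive-outcome rates
    return [(g, random.random() < base_rate[g]) for g in groups]

population = draw([50, 50], 100_000)  # representative of reality
sample = draw([95, 5], 1_000)         # skewed toward group A

def positive_rate(data, group):
    outcomes = [y for g, y in data if g == group]
    return sum(outcomes) / len(outcomes)

# A rate "learned" from the skewed sample tracks group A and
# under-predicts group B's true rate: bias from unrepresentative data.
learned = sum(y for _, y in sample) / len(sample)
true_b = positive_rate(population, "B")
print(f"rate learned from skewed sample: {learned:.2f}")
print(f"group B's true rate:             {true_b:.2f}")
```

Remedies discussed in the documents, such as auditing datasets for appropriate representativeness, amount to checking for exactly this kind of gap before deployment.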

Fairness
The "fairness" principle was defined as equitable and impartial treatment of data subjects by AI systems. We used this definition, drawn from common usage, rather than a technical one because the articulations of fairness coded under this principle are not especially technical or specific, despite the rich vein of academic research on competing mathematical formalizations of fairness. 234 However, Microsoft adds to its principle that "AI systems should treat all people fairly" the further elaboration that "industry and academia should continue the promising work underway to develop analytical techniques to detect and address potential unfairness, like methods that systematically assess the data used to train AI systems for appropriate representativeness and document information about its origins and characteristics." 235 There was general consensus in the documents about the importance of fairness with regard to marginalized populations. For example, the Japanese AI principles include the imperative that "all people are treated fairly without unjustified discrimination on the grounds of diverse backgrounds such as race, gender, nationality, age, political beliefs, religion, and so on." 236 Similarly, the Chinese AI Industry Code of Conduct states that "[t]he development of artificial intelligence should ensure fairness and justice, avoid bias or discrimination against specific groups or individuals, and avoid placing disadvantaged people at a more unfavorable position." 237 The European High Level Expert Group guidelines term this the "substantive dimension" of fairness, and also point to a "procedural dimension of fairness [which] entails the ability to contest and seek effective redress against decisions made by AI systems and by the humans operating them," which we coded under the "ability to appeal" principle in the Accountability theme.
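The documents themselves stay non-technical, but for concreteness, one of the simpler formalizations from the research literature, demographic parity, can be sketched as follows. The decisions and group labels below are invented for illustration:

```python
# Demographic parity compares the rate of favorable decisions across
# groups. The data below are invented for illustration only.
def selection_rate(decisions, groups, group):
    picked = [d for d, g in zip(decisions, groups) if g == group]
    return sum(picked) / len(picked)

decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]  # 1 = favorable decision
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rate_a = selection_rate(decisions, groups, "A")  # 3/5 = 0.6
rate_b = selection_rate(decisions, groups, "B")  # 2/5 = 0.4

# The ratio of the lower rate to the higher one is sometimes used as a
# screening statistic; values far below 1.0 flag a disparity.
print(rate_b / rate_a)  # ≈ 0.667
```

Part of why the documents remain high-level is that such formalizations can conflict with one another: a system satisfying demographic parity may violate other definitions (such as calibration across groups), so the choice among them is itself a normative judgment.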

Equality
The principle of "equality" stands for the idea that people, whether similarly situated or not, deserve the same opportunities and protections with the rise of AI technologies. "Equality" is similar to "fairness" but goes farther, because fairness focuses on similar outcomes for similar inputs. As the European High Level Expert Group guidelines put it: "Equality of human beings goes beyond nondiscrimination, which tolerates the drawing of distinctions between dissimilar situations based on objective justifications. In an AI context, equality entails that the same rules should apply for everyone to access to information, data, knowledge, markets and a fair distribution of the value added being generated by technologies." 238 There are essentially three different ways that equality is represented in the documents in our dataset: in terms of human rights, access to technology, and guarantees of equal opportunity through technology. In the human rights framing, the Toronto Declaration notes that AI will pose "new challenges to equality" and that "[s]tates have a duty to take proactive measures to eliminate discrimination." 239 In the access to technology framing, documents emphasize that all people deserve access to the benefits of AI technology, and that systems should be designed to facilitate that broad access. 240 Documents that take on what we have termed the guarantees of equal opportunity framing go a bit farther in their vision for how AI systems may or should implement equality. The Montreal Declaration asserts that AI systems "must help eliminate relationships of domination between groups and people based on differences of power, wealth, or knowledge" and "must produce social and economic benefits for all by reducing social inequalities and vulnerabilities." 241 This framing makes clear the relationship between the "equality" principle and the principles of "non-discrimination and the prevention of bias" and "inclusiveness in impact."

Inclusiveness in Impact
"Inclusiveness in impact" as a principle calls for a just distribution of AI's benefits, particularly to populations that have historically been excluded. There was remarkable consensus in the language that documents employed to reflect this principle, including concepts like "shared benefits" and "empowerment":

Asilomar AI Principles: "Shared Benefit: AI technologies should benefit and empower as many people as possible." 242

Microsoft's AI principles: "Inclusiveness - AI systems should empower everyone and engage people. If we are to ensure that AI technologies benefit and empower everyone, they must incorporate and address a broad range of human needs and experiences. Inclusive design practices will help system developers understand and address potential barriers in a product or environment that could unintentionally exclude people. This means that AI systems should be designed to understand the context, needs and expectations of the people who use them." 243

Partnership on AI Tenets: "We will seek to ensure that AI technologies benefit and empower as many people as possible." 244

Smart Dubai AI principles: "We will share the benefits of AI throughout society: AI should improve society, and society should be consulted in a representative fashion to inform the development of AI." 245

T20 report on the future of work and education

Inclusiveness in Design
The "inclusiveness in design" principle stands for the idea that ethical and rights-respecting AI requires more diverse participation in the development process for AI systems. This principle is expressed in multiple ways, including as a call for broad participation in decisions about the design and implementation of AI in order to "ensure that systems are created and used in ways that respect rights - particularly the rights of marginalised groups who are vulnerable to discrimination." 252 This interpretation is similar to the Multistakeholder Collaboration principle in our Professional Responsibility category, but it differs in that it emphasizes bringing into conversation all of society - specifically those most impacted by AI - and not just a range of professionals in, for example, industry, government, civil society organizations, and academia.

Human Control of Technology
From prominent Silicon Valley magnates' concerns about the Singularity to popular science fiction dystopias, our society, governments, and companies alike are grappling with a potential shift in the locus of control from humans to AI systems. Thus, it is not surprising that Human Control of Technology is a strong theme among the documents in our dataset, 253 with significant representation for the three principles that fall under it: "human review of automated decision," "ability to opt out of automated decision," and "human control of technology (other/general)." There are connections between the principles in the Human Control of Technology theme and a number of other themes, because human involvement is often presented as a mechanism to accomplish those ends. Human control can facilitate objectives within the themes of Safety and Security, Transparency and Explainability, Fairness and Non-discrimination, and the Promotion of Human Values.
For example, the OECD and G20 AI principles refer to human control as a "safeguard" 254 and UNI Global Union claims that transparency in both decisions and outcomes requires "the right to appeal decisions made by AI/algorithms, and having it reviewed by a human being." 255

Human Review of Automated Decision
The principle of "human review of automated decision" stands for the idea that where AI systems are implemented, people who are subject to their decisions should be able to request and receive human review of those decisions. In contrast to other principles under this theme, the "human review of automated decision" principle is always ex post in its implementation, providing the opportunity to remedy an objectionable result. Although the documents in our dataset are situated in a variety of contexts, there is remarkable commonality between them in the articulation of this principle. The underlying rationale, when explicit, is that "[h]umans interacting with AI systems must be able to keep full and effective self-determination over themselves."

[Sidebar: Principles under this theme. Percentage reflects the number of documents in the dataset that include each principle.]

The most salient differences among the documents are in the breadth of circumstances in which they suggest that human review is appropriate, and in the strength of the recommendation. Many of the documents apply the principle of human review in all situations in which an AI system is used, but a handful constrain its application to situations in which the decision is "significant." 257 Further, the principles generally present human review as desirable, but two documents, the Access Now report and the Public Voice Coalition AI guidelines, articulate it as a right of data subjects. The European Charter on AI in judicial systems also contains a strong version of the human review principle, specifying that if review is requested, the case should be heard by a competent court. 258

Ability to Opt out of Automated Decision
The "ability to opt out of automated decision" principle is defined, as its title suggests, as affording individuals the opportunity and choice not to be subject to AI systems where they are implemented. The AI in the UK document explains its relevance by saying: "It is important that members of the public are aware of how and when artificial intelligence is being used to make decisions about them, and what implications this will have for them personally. This clarity, and greater digital understanding, will help the public experience the advantages of AI, as well as to opt out of using such products should they have concerns."

Professional Responsibility
The theme of Professional Responsibility brings together principles that are targeted at individuals and teams who are responsible for designing, developing, or deploying AI-based products or systems. These principles reflect an understanding that the behavior of such professionals, perhaps independent of the organizations, systems, and policies that they operate within, may have a direct influence on the ethics and human rights impacts of AI. The theme was widely represented in our dataset 275 and consists of five principles: "accuracy," "responsible design," "consideration of long-term effects," "multistakeholder collaboration," and "scientific integrity." There are significant connections between the Professional Responsibility theme and the Accountability theme, particularly with regard to the principle of "accuracy." Articulations of the principle of "responsible design" often connect with the theme of Promotion of Human Values, and sometimes suggest Human Control of Technology as an aspect of this objective.

Accuracy
The principle of "accuracy" is usefully defined by the European High Level Expert Group guidelines, which describe it as pertaining "to an AI's confidence and ability to correctly classify information into the correct categories, or its ability to make correct predictions, recommendations, or decisions based on data or models." 276 There is a split among the documents, with some treating "accuracy" as a goal and others as an ongoing process. The principle of accuracy is frequently referred to alongside the similar principle of "verifiability and replicability" under the Accountability theme. The Public Voice Coalition, for instance, recommends that institutions must ensure the "accuracy, reliability, and validity of decisions." 282 The two can be distinguished in that "accuracy" is targeted at developers and users, promoting careful attention to detail on their part, while replicability focuses on the technology, asking whether an AI system delivers consistent results under the same conditions, thereby facilitating post-hoc evaluation by scientists and policymakers.
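The distinction between accuracy (correct outputs) and replicability (consistent outputs under the same conditions) can be sketched briefly; the labels, predictions, and seeding convention below are illustrative assumptions, not drawn from any of the documents:

```python
import random

# Accuracy: the share of a system's classifications that match the
# correct labels (in the spirit of the HLEG definition quoted above).
def accuracy(predictions, labels):
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

labels = [1, 0, 1, 1, 0, 1]
predictions = [1, 0, 0, 1, 0, 1]  # one mistake
print(accuracy(predictions, labels))  # 5/6 ≈ 0.83

# Replicability: the same conditions yield the same results. Fixing
# random seeds lets a stochastic run be reproduced and audited later.
def stochastic_run(seed, n=3):
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

assert stochastic_run(42) == stochastic_run(42)  # same seed, same output
```

The seeded-run check is one concrete way a post-hoc evaluator could verify that a reported result is reproducible under identical conditions.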

Responsible Design
The principle of "responsible design" stands for the notion that individuals must be conscientious and thoughtful when engaged in the design of AI systems. Indeed, even as the phrasing of this principle might differ from document to document, there is a strong consensus that professionals are in a unique position to exert influence on the future of AI. The French AI strategy emphasizes the crucial role that researchers, engineers and developers play as "architects of our digital society." 283 This document notes that professionals play an especially important part in emerging technologies since laws and norms cannot keep pace with code and cannot solve for every negative effect that the underlying technology may bring about. 284 The Partnership on AI Tenets prompt research and engineering communities to "remain socially responsible, and engage directly with the potential influences of AI technologies on wider society." 285 This entails, to some degree, an obligation to become informed about society, which other documents address directly. The IBM AI principles require designers and developers not only to encode values that are sensitive to different contexts but also to engage in collaboration to better recognize existing values. 286 The Tencent and Microsoft AI principles capture this idea by calling for developers to ensure that design is "aligned with human norms in reality" 287 and to involve domain experts in the design and deployment of AI systems. 288 We note a rare interaction among the documents when the Indian AI strategy recommends that evolving best practices such as the recommendations by the Global Initiative on Ethics of Autonomous and Intelligent Systems by IEEE be incorporated in the design of AI systems. 289

Consideration of Long Term Effects
The principle of "consideration of long term effects" is characterized by deliberate attention to the likely impacts, particularly distant future impacts, of an AI technology during the design and implementation process. The documents that address this principle largely view the potential long-term effects of AI in a pluralistic manner. For instance, the German AI strategy highlights that AI is a global development and policymakers will need to "think and act globally" while considering its impact during the development stage, 290 and the Asilomar principles recognize that highly-developed AI must be for the benefit of all of humanity and not any one sub-group. 291 The Montreal Declaration recommends that professionals must anticipate the increasing risk of AI being misused in the future and incorporate mechanisms to mitigate that risk.

Scientific Integrity
The principle of "scientific integrity" means that those who build and implement AI systems should be guided by established professional values and practices. Interestingly, both documents that include this relatively little-mentioned principle come from organizations driven, at least in significant part, by engineers and technical experts. Google's AI principles recognize the scientific method and excellence as the bedrock for technological innovation, including AI. The company makes a commitment to honor "open inquiry, intellectual rigor, integrity, and collaboration" in its endeavors. 302 The IEEE acknowledges the idea of scientific rigor in its call for creators of AI systems to define metrics, make them accessible, and measure systems. 303

Promotion of Human Values
With the potential of AI to act as a force multiplier for any system in which it is employed, the Promotion of Human Values is a key element of ethical and rights-respecting AI. 304 The principles under this theme recognize that the ends to which AI is devoted, and the means by which it is implemented, should correspond with and be strongly influenced by social norms. As AI's use becomes more prevalent and the power of the technology increases, particularly if we begin to approach artificial general intelligence, the imposition of human priorities and judgment on AI is especially crucial. The Promotion of Human Values category consists of three principles: "human values and human flourishing," "access to technology," and "leveraged to benefit society." While principles under this theme were coded distinctly from explicit references to human rights and international instruments of human rights law, there is a strong and clear connection.
References to human values and human rights were often adjacent to one another, and where the documents provided more specific articulations of human values, they are largely congruous with existing guarantees found in international human rights law. Moreover, principles that refer to human values often include explicit references to fundamental human rights or international human rights, or mention concepts from human rights frameworks and jurisprudence such as human dignity or autonomy. The OECD and G20 AI principles also add "internationally recognized labor rights" to this list. 305 There is also an overlap between articulations of the Promotion of Human Values and social, economic, or environmental concepts that are outside the boundaries of political and civil rights, 306 including among documents coded under the principle of AI "leveraged to benefit society." Principle 3, "Make AI Serve People and Planet," from the UNI Global Union's AI principles, is emblematic, calling for "throughout their entire operational process, AI systems [to] remain compatible and increase the principles of human dignity, integrity, freedom, privacy and cultural and gender diversity, as well as … fundamental human rights." Other documents coded here are concerned with how the societal impacts of AI can be managed through AI system design. Tencent's AI principles state that "The R&D of artificial intelligence should respect human dignity and protect human rights and freedoms." 311 The Smart Dubai AI principles say we should "give AI systems human values and make them beneficial to society," 312 suggesting that it is possible to build AI systems that have human values embedded in their code. 313 However, most, if not all, of these documents also acknowledge that human values will need to be promoted in the implementation of AI systems and "throughout the AI system lifecycle." 314

Access to Technology
The "access to technology" principle represents statements that the broad availability of AI technology, and the benefits thereof, is a vital element of ethical and rights-respecting AI. Given the significant transformational potential of AI, documents that include this principle worry that AI might contribute to the growth of inequality. The ITI AI Policy Principles, a private sector document, focus on the economic aspect, stating that "if the value [created by AI] favors only certain incumbent entities, there is a risk of exacerbating existing wage, income, and wealth gaps." 315 At least one civil society document shares this concern: the T20 report on the future of work and education avers that "The wealth created by AI should benefit workers and society as a whole as well as the innovators." 316 The Japanese AI principles, while acknowledging the economic dimension of this issue (observing that "AI should not generate a situation where wealth and social influence are unfairly biased towards certain stakeholders" 317 ), emphasize the sociopolitical dimensions of inequality, including the potential that AI may unfairly benefit certain states or regions as well as contribute to "a digital divide with so-called 'information poor' or 'technology poor' people left behind." 318 Some versions of the "access to technology" principle are premised on the notion that broad access to AI technology itself, as well as the education necessary to use and understand it, is the priority. The Chinese AI governance principles provide that "Stakeholders of AI systems should be able to receive education and training to help them adapt to the impact of AI development in psychological, emotional and technical aspects." 
319 The ITI AI Policy Principles focus on educating and training people who have traditionally been marginalized by or excluded from technological innovation, calling for the "diversification and broadening of access to the resources necessary for AI development and use, such as computing resources, education, and training." 320 Two documents, Microsoft's AI Principles and the European High Level Expert Group guidelines, go beyond this to reflect a vision for "[a]ccessibility to this technology for persons with disabilities," 321 noting that in some cases "AI-enabled services… are already empowering those with hearing, visual and other impairments." 322

Leveraged to Benefit Society
The principle that AI be "leveraged to benefit society" stands for the notion that AI systems should be employed in service of public-spirited goals. The documents vary in the specificity with which they articulate goals. Where they are specific in the goals they list, they may include social, political, and economic factors. Examples of beneficial ends in the European High Level Expert Group guidelines include: "Respect for human dignity... Freedom of the individual... Respect for democracy, justice and the rule of law... Equality, non-discrimination and solidarity -including the rights of persons at risk of exclusion... Citizens' rights… including the right to vote, the right to good administration or access to public documents, and the right to petition the administration." 323 The High Level Expert Group and the German AI strategy were the two documents to explicitly include the environment and sustainable development as factors in their determination of AI that is "leveraged to benefit society." 324 There is a notable trend among the documents that include this principle to designate it as a 63 cyber.harvard.edu precondition for AI development and use. IEEE's Ethically Aligned Design document uses strong language to assert that it is not enough for AI systems to be profitable, safe, and legal; they must also include human well-being as a "primary success criterion for development." 325 Google's AI principles contain a similar concept, stating that the company "will proceed [with the development of AI technology] where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides" after taking "into account a broad range of social and economic factors." 326

International Human Rights
In recent years, the human rights community has become more engaged with digital rights, and with the impacts of AI technology in particular. Even beyond human rights specialists, there has been an increasing appreciation of the relevance of international human rights law and standards to the governance of artificial intelligence. 327 To an area of technology governance that is slippery and fast-moving, human rights law offers an appealingly well-established core set of concepts against which emerging technologies can be judged. To the broad guarantees of human rights law, principles documents offer in return a tailored vision of the specific - and in some cases potentially novel - concerns that AI raises.
Accordingly, when coding the principles documents in our dataset, we also made observations on each document's references to human rights, whether to the general concept or to specific human rights instruments such as the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights, the United Nations Guiding Principles on Business and Human Rights, and the United Nations Sustainable Development Goals. Twenty-three of the thirty-six documents in our dataset (64%) made a reference of this kind. We also noted when documents stated explicitly that they had employed a human rights framework, and five of the thirty-six documents (14%) did so.
Given the increasing visibility of AI in the human rights community and the apparent increasing interest in human rights among those invested in AI governance, we had expected that the data might reveal a trend toward increasing emphasis on human rights in AI principles documents. However, our dataset was small enough, and the timespan sufficiently compressed, that no such trend is apparent.
As illustrated in the table below, private sector and civil society documents were the most likely to reference human rights. At the outset of our research, we had expected that principles documents from the private sector would be less likely to refer to human rights and government documents more likely. Among the principles documents we examined - admittedly not a complete or even representative sample - we were wrong. The actor type with the single greatest proportion of human rights references was the private sector; only one private sector document omitted a reference to human rights. By contrast, fewer than half of the documents authored by or on behalf of government actors contained any reference to human rights. 328

Conclusion
The eight themes that surfaced in this research - Privacy, Accountability, Safety and Security, Transparency and Explainability, Fairness and Non-discrimination, Human Control of Technology, Professional Responsibility, and Promotion of Human Values - offer at least some view into the foundational requirements for AI that is ethical and respectful of human rights. However, there is a wide and thorny gap between the articulation of these high-level concepts and their actual achievement in the real world. While it is the intent of this white paper and the accompanying data visualization to provide a high-level overview, there remains more work to be done, and we close with some reflections on productive possible avenues.
In the first place, our discussion of the forty-seven principles we catalogued should make clear that while there are certainly points of convergence, by no means is there unanimity. The landscape of AI ethics is burgeoning, and if calls for increased access to technology (see Section 3.8) and multistakeholder participation (see Section 3.7) are heeded, it is likely to become yet more diverse. Closer studies of the variation within the themes we uncovered would be valuable, including additional mapping projects that might illustrate narrower or different versions of the themes with regard to particular geographies or stakeholder groups. It would also be interesting to examine principles geared toward specific applications of AI, such as facial recognition or autonomous vehicles.
Within topics like "fairness," the varying definitions and visions represented by the principles documents in our dataset layer on top of an existing academic literature, 329 but also on existing domestic and international legal regimes which have long interpreted these and similar concepts. Litigation over the harmful consequences of AI technology is still nascent, with just a handful of cases having been brought. Similarly, only a few jurisdictions have adopted regulations concerning AI, although certainly many of the documents in our dataset anticipate, and even explicitly call for (see Sections 3.1 and 3.2), such actions. Tracking how principles documents engage with and influence how liability for AI-related damages is apportioned by courts, legislatures, and administrative bodies, will be important.
There will be a rich vein for further scholarship on ethical and rights-respecting AI for some time, as the norms we attempt to trace remain actively in development. What constitutes "AI for good" is being negotiated both top-down, through dialogues at the intergovernmental level, and bottom-up, among the people most impacted by the deployment of AI technology and the organizations that represent their interests. That there are core themes to these conversations even now is due to the hard work of the many individuals and organizations participating in them, and we are proud to play our part.

329 Arvind Narayanan, "Translation tutorial: 21 fairness definitions and their politics," tutorial presented at the Conference on Fairness, Accountability, and Transparency (Feb. 23, 2018), available at: https://www.youtube.com/embed/jIXIuYdnyyk