Resources for Ethical Artificial Intelligence

SIENNA Project Publications

The SIENNA project concluded in April 2021.

The project researched ethical issues in three emerging technology areas: human genomics, human enhancement and artificial intelligence. It involved researchers from 13 organisations, including ten universities and three civil society groups, from Europe, Asia, the Americas and Africa.

Dr Dainow’s role focused on developing concrete EU responses to the ethical issues identified in the research. His primary contributions were the (now official) EU guidelines for funding applications for research into artificial intelligence and the policy guidelines for introducing and enforcing ethical AI in society.

The SIENNA project website hosts the full range of publications, including the practical guidelines for dealing with the ethical issues of Artificial Intelligence within the EU.

Embedding AI in Society: The 2021 Rabb Symposium

Dr Dainow presented key elements of his (and the EU’s) approach to making AI development more ethical at North Carolina State University’s Rabb Symposium on Embedding AI in Society in February 2021.

Recording of the Rabb Symposium presentation


Other resources for ethical Artificial Intelligence

IEEE Standards for Ethical AI

The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems

Top of website

https://standards.ieee.org/industry-connections/ec/autonomous-systems.html

This is not a list of all P7xxx standards, only those aimed solely at the ethics of Artificial Intelligence.

P7003 Algorithmic Bias Considerations (in development)

“This standard describes specific methodologies to help users certify how they worked to address and eliminate issues of negative bias in the creation of their algorithms, where “negative bias” infers the usage of overly subjective or uniformed data sets or information known to be inconsistent with legislation concerning certain protected characteristics (such as race, gender, sexuality, etc); or with instances of bias against groups not necessarily protected explicitly by legislation, but otherwise diminishing stakeholder or user well being and for which there are good reasons to be considered inappropriate. Possible elements include (but are not limited to): benchmarking procedures and criteria for the selection of validation data sets for bias quality control; guidelines on establishing and communicating the application boundaries for which the algorithm has been designed and validated to guard against unintended consequences arising from out-of-bound application of algorithms; suggestions for user expectation management to mitigate bias due to incorrect interpretation of systems outputs by users (e.g. correlation vs. causation).” [webpage]

P7007 Ontological Standard for Ethically Driven Robotics and Automation Systems (Released)

“A set of ontologies with different abstraction levels that contain concepts, definitions, axioms, and use cases that assist in the development of ethically driven methodologies for the design of robots and automation systems is established by this standard. It focuses on the robotics and automation domain without considering any particular applications and can be used in multiple ways, for instance, during the development of robotics and automation systems as a guideline or as a reference “taxonomy” to enable clear and precise communication among members from different communities that include robotics and automation, ethics, and correlated areas. Users of this standard need to have a minimal knowledge of formal logics to understand the axiomatization expressed in Common Logic Interchange Format. Additional information can be found at https://standards.ieee.org/industry-connections/ec/autonomous-systems.html” [webpage]

P7008 Standard for Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems (in development)

“‘Nudges’ as exhibited by robotic, intelligent or autonomous systems are defined as overt or hidden suggestions or manipulations designed to influence the behavior or emotions of a user. This standard establishes a delineation of typical nudges (currently in use or that could be created). It contains concepts, functions and benefits necessary to establish and ensure ethically driven methodologies for the design of the robotic, intelligent and autonomous systems that incorporate them.” [webpage]

P7009 Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems (in development)

“This standard establishes a practical, technical baseline of specific methodologies and tools for the development, implementation, and use of effective fail-safe mechanisms in autonomous and semi-autonomous systems. The standard includes (but is not limited to): clear procedures for measuring, testing, and certifying a system’s ability to fail safely on a scale from weak to strong, and instructions for improvement in the case of unsatisfactory performance. The standard serves as the basis for developers, as well as users and regulators, to design fail-safe mechanisms in a robust, transparent, and accountable manner.” [webpage]

P7010 Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-Being (Released)

“The impact of artificial intelligence or autonomous and intelligent systems (A/IS) on humans is measured by this standard. The positive outcome of A/IS on human well-being is the overall intent of this standard. Scientifically valid well-being indices currently in use and based on a stakeholder engagement process ground this standard. Product development guidance, identification of areas for improvement, risk management, performance assessment, and the identification of intended and unintended users, uses and impacts on human well-being of A/IS are the intents of this standard.” [webpage]

P7014 Standard for Ethical considerations in Emulated Empathy in Autonomous and Intelligent Systems (in development)

“This standard defines a model for ethical considerations and practices in the design, creation and use of empathic technology, incorporating systems that have the capacity to identify, quantify, respond to, or simulate affective states, such as emotions and cognitive states. This includes coverage of ‘affective computing’, ‘emotion Artificial Intelligence’ and related fields.” [webpage]

Ethically Aligned Design

Ethically Aligned Design, First Edition (EAD1e), “From Principles to Practice”, provides a mapping of the conceptual framework of Ethically Aligned Design. It outlines the logic behind “Three Pillars” that form the basis of EAD1e, and it connects the Pillars to high-level “General Principles” which guide all manner of ethical A/IS design.

From Principles to Practice – It is at this step of the Ethically Aligned Design Conceptual Framework that readers will be able to identify the Principles and Chapters of key relevance to their work. Content provided in EAD1e is organized by “Issues”, identified as the most pressing ethical matters surrounding A/IS design today, and “Recommendations” on how they should be addressed.

P7000 – Process for Addressing Ethical Concerns During System Design

IEEE P7000 – IEEE Draft Model Process for Addressing Ethical Concerns During System Design
https://standards.ieee.org/project/7000.html

P7003 – Algorithmic Bias Considerations

Formal methodologies by which to report how designers addressed bias in the creation of their algorithms. Including benchmarking procedures, bias quality control; application boundaries; bias due to incorrect interpretation of system output by users (e.g. correlation vs. causation).

https://standards.ieee.org/project/7003.html

P7006 – Standard for Personal Data Artificial Intelligence (AI) Agent

The technical elements required to create and grant access to a personalized Artificial Intelligence (AI). Including inputs, learning, ethics, rules and values controlled by individuals.

https://standards.ieee.org/project/7006.html

P7007 – Ontological Standard for Ethically Driven Robotics and Automation Systems

A set of ontologies that contain concepts, definitions and axioms which are necessary to establish ethically driven methodologies for the design of Robots and Automation Systems.

https://standards.ieee.org/project/7007.html

P7008 – Standard for Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems

“Nudges” are programmatic manipulations designed to influence the behavior or emotions of a user. This standard establishes a definition of nudges. It contains concepts, functions and benefits necessary to ensure ethically driven methodologies for AI systems that incorporate them.

https://standards.ieee.org/project/7008.html

P7009 – Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems

Methodologies and tools for the development and use of fail-safe mechanisms in autonomous and semi-autonomous systems. Including procedures for measuring, testing and certifying a system’s ability to fail safely; instructions for improvement in the case of unsatisfactory performance; compliance with accountability requirements.

https://standards.ieee.org/project/7009.html

P7011 – Standard for the Process of Identifying and Rating the Trustworthiness of News Sources

Standards to create and maintain news purveyor ratings for purposes of public awareness. Defines an open-source algorithm and a scorecard system for rating trustworthiness.

https://standards.ieee.org/project/7011.html

P7014 – Standard for Ethical considerations in Emulated Empathy in Autonomous and Intelligent Systems

Defines a model for ethical considerations and practices in the creation and use of empathic technology.

https://standards.ieee.org/project/7014.html

IEEE 7010-2020 – IEEE Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-Being

Measures the impact of artificial intelligence or autonomous and intelligent systems (A/IS) on human well-being, using scientifically valid well-being indices grounded in a stakeholder engagement process. Intended to support product development, risk management, performance assessment, and the identification of intended and unintended users, uses and impacts on human well-being.

https://standards.ieee.org/standard/7010-2020.html


Ethical AI Certification Initiatives

ECPAIS – Ethics Certification Program for Autonomous and Intelligent Systems

ECPAIS creates specifications for the certification of transparency, accountability and reduction of algorithmic bias, with the aim of easily and visually communicating to consumers whether a system is deemed “safe” or “trusted.”

https://standards.ieee.org/industry-connections/ecpais.html

CertNexus – Certified Ethical Emerging Technologist

“designed for individuals seeking to demonstrate a vendor neutral, cross-industry, and multidisciplinary understanding of applied technology ethics that will enable them to navigate the processes by which ethical integrity may be upheld within emerging data-driven technology fields (such as artificial intelligence (AI)/machine learning, Internet of Things (IoT), and data science).”

https://certnexus.com/certification/ceet/

AI Certified Engineer – Singapore

“AI Certification is a professional qualification programme to recognize and award credentials to working professionals in AI-related engineering roles. Please note this is not a course or academic programme.  AI Certification validate the technical competencies and qualified work experiences of applicants and award the title of AI Certified Engineers to those who passed the assessment, listed in the AI Certification Handbook.”  Includes ethical training.

https://www.aisingapore.org/ai-certification/


General Awareness, Groups, etc.

UNESCO

Recommendations on the ethics of artificial intelligence

https://en.unesco.org/artificial-intelligence/resources

PAIR

People + AI Research (PAIR) is a multidisciplinary team at Google that explores the human side of AI by doing fundamental research, building tools, creating design frameworks, and working with diverse communities.

https://pair.withgoogle.com/

AIEthicist

AIethicist.org is a global repository of reference & research material for anyone interested in the current discussions on AI ethics and the impact of AI on individuals and society. AWESOME!

https://www.aiethicist.org/

World Economic Forum

The WEF’s “Empowering AI Leadership” project (https://www.weforum.org/projects/ai-board-leadership-toolkit) links to an AI toolkit for Boards of Directors (https://spark.adobe.com/page/RsXNkZANwMLEf/), though there is not much substance in it.

American Council for Technology-Industry Advisory Council AI Working Group

“The American Council for Technology-Industry Advisory Council (ACT-IAC) is a non-profit educational organization established to accelerate government mission outcomes through collaboration, leadership and education.”

“The ACT-IAC AI Working Group seeks to enable government agencies with the ability to assess their organization to identify areas that would benefit from the introduction of artificial intelligence and share best practices to implement AI to drive mission and operational value.”

https://www.actiac.org/artificial-intelligence-project

ACT-IAC has published a 42-page “Ethical Application of Artificial Intelligence Framework (EAAI)” consultation document, which is currently receiving feedback.

https://www.actiac.org/act-iac-white-paper-ethical-application-ai-framework

They have also published an AI Primer and an AI Playbook. They are now working on AI certification for US federal employees.

AlgorithmWatch

“AlgorithmWatch is a non-profit research and advocacy organization committed to evaluating and shedding light on algorithmic decision-making processes that have a social relevance, meaning they are used either to predict or prescribe human action or to make decisions automatically.”

https://algorithmwatch.org/

OECD.AI Policy Observatory

Lays out AI principles and some legal instruments.

https://www.oecd.ai/

Global AI policy actions

A portal on AI policy actions around the world, browsable by country (oecd.ai/dashboards?selectedTab=countries), by type (oecd.ai/dashboards?selectedTab=policyInstruments) or by sector (oecd.ai/policy-areas).

FAT ML

Fairness, Accountability, and Transparency in Machine Learning

Hosts an annual conference producing significant work. However, the website may not be actively maintained, as it does not list the 2019 and 2020 conferences. It seems related to Microsoft FATE (see below).

https://www.fatml.org/

Publications:  https://www.fatml.org/resources/relevant-scholarship

Australian Government AI Ethics Framework

“We’ve committed to developing an AI Ethics Framework to guide businesses and governments looking to design, develop and implement AI in Australia. This is part of the Australian Government’s commitment to build Australia’s AI capabilities.”

Contains a set of principles, guidelines on their application, and a roadmap of further work to be done.

https://www.industry.gov.au/data-and-publications/building-australias-artificial-intelligence-capability/ai-ethics-framework


Tools for Ethical AI

XAI

XAI is a machine learning library designed with AI explainability at its core. It contains various tools that enable the analysis and evaluation of data and models. The XAI library is maintained by The Institute for Ethical AI & ML and was developed based on the 8 principles for Responsible Machine Learning.

https://github.com/EthicalML/xai
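
The exact function names are best taken from the library’s README. As a rough, library-agnostic sketch of the kind of imbalance check XAI automates, the following uses plain pandas to compare group representation and outcome rates across a protected attribute (the dataset and column names are hypothetical):

```python
import pandas as pd

# Hypothetical loan-approval data; the columns and values are illustrative only.
df = pd.DataFrame({
    "gender":   ["f", "m", "m", "f", "m", "m", "f", "m"],
    "approved": [0,    1,   1,   0,   1,   0,   1,   1],
})

# How well each group is represented in the dataset.
representation = df["gender"].value_counts(normalize=True)

# Outcome rate per group: a large gap is a first signal of possible negative bias.
approval_rate = df.groupby("gender")["approved"].mean()

print(representation)
print(approval_rate)
```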

AI Global

AI Global is a not-for-profit working on “practical tools to support the responsible development of AI systems”. They claim to create “easy-to-use tools that support practitioners as they navigate the complex landscape of Responsible AI”.

https://ai-global.org/

The Ethical AI Design Assistant

A virtual assessment to help members anticipate problems and future-proof their AI system.  Still in development.

https://ai-global.org/design-assistant/

They also collaborate with the Montreal AI Ethics Institute, which is providing a consulting service https://montrealethics.ai/consulting/.

They are also trying to create a Portal on all things Ethical AI – but it is in need of content.

ForHumanity

“ForHumanity endeavors to be a beacon, examining the impact of AI & Automation on jobs, society, our rights and our freedoms.  We focus on mitigating risk in the areas of trust, ethics, bias, privacy and cybersecurity at the corporate and public-policy levels.”

Audit Service

They offer an independent audit service – “Introduction to Independent Audit of AI Systems is a transparent, crowd-sourced set of audit rules/standards for all autonomous systems in the areas of Ethics, Bias, Privacy, Trust, and Cybersecurity. The ForHumanity Fellows consider, draft and vet the audit rules continuously. You are welcome to join the effort, examine the audit in detail and provide you unique perspective and insights into the framework. We believe this framework will create an #infrastructureoftrust desperately needed for our autonomous systems.”

https://www.forhumanity.center/independent-audit-1

ALLAI

“ALLAI is an independent organization dedicated to drive and foster Responsible AI.  ALLAI is an initiative of Catelijne Muller, Virginia Dignum and Aimee van Wynsberghe, the three Dutch members of the EU High Level Expert Group on Artificial Intelligence.”

allai.nl

Training Programs

“ALLAI offers a suite of in-house Masterclasses and Expert Sessions aimed at improving knowledge and awareness on the capabilities and limitations of AI. The Masterclasses and Expert Sessions are tailored to the specific demands and level of expertise within your organization.”

https://allai.nl/allai-programs/

Microsoft FATE

Fairness, Accountability, Transparency, and Ethics in AI

https://www.microsoft.com/en-us/research/theme/fate/

Tools

Their Incubations section provides tools for ethical AI, especially Datasheets for Datasets and Fairlearn, a Python library for assessing an AI system’s fairness and mitigating any observed unfairness. Fairlearn contains mitigation algorithms as well as a Jupyter widget for model assessment.

https://www.microsoft.com/en-us/research/theme/fate/#!incubations
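
As a minimal sketch of Fairlearn’s assessment step, the snippet below disaggregates accuracy and selection rate by a sensitive feature using MetricFrame; the toy labels, predictions and group values are invented for illustration:

```python
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

# Toy labels, predictions and sensitive feature, invented for illustration.
y_true    = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred    = [1, 0, 1, 0, 0, 1, 1, 0]
sensitive = ["a", "a", "a", "a", "b", "b", "b", "b"]

# MetricFrame computes each metric overall and per group of the sensitive feature.
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sensitive,
)

print(mf.by_group)      # per-group metrics
print(mf.difference())  # largest between-group gap for each metric
```

Gaps surfaced this way can then be addressed with Fairlearn’s mitigation algorithms, for example its reductions approach with a demographic-parity constraint.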

Google Model Cards

Google are actively developing tools for the ethical assessment of AI, primarily Model Cards.

Model Card Toolkit (GitHub)

“The Model Card Toolkit (MCT) streamlines and automates generation of Model Cards, machine learning documents that provide context and transparency into a model’s development and performance.”

https://github.com/tensorflow/model-card-toolkit
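
As a minimal sketch of the toolkit’s documented scaffold, populate and export workflow (method names follow the project’s README at the time of writing and may differ between versions; all field values are placeholders):

```python
import model_card_toolkit as mctlib

# Create a toolkit instance that stores its assets in a local directory.
toolkit = mctlib.ModelCardToolkit("model_card_assets")

# Scaffold an empty model card and fill in the fields relevant to the model.
model_card = toolkit.scaffold_assets()
model_card.model_details.name = "Example classifier"  # placeholder values
model_card.model_details.overview = (
    "Short description of what the model does, its training data and intended use."
)

# Persist the populated card and render it as a shareable HTML document.
toolkit.update_model_card(model_card)
html = toolkit.export_format()
```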


Law

NYC 1894 – Annual auditing of AI recruitment systems

https://legistar.council.nyc.gov/LegislationDetail.aspx?ID=4344524&GUID=B051915D-A9AC-451E-81F8-6596032FA3F9

AI recruitment systems must undergo a bias audit every year, with full traceability of the specific criteria the system uses when assessing a CV.

My thanks to Dr. Ansgar Koene, Chair of the IEEE Standard on Algorithmic Bias Considerations, for his assistance in putting this list together.
