I am working on issues of AI ethics for the SIENNA project. This page is intended as a resource for me, and others, for major initiatives around Ethical AI.
IEEE Standards for Ethical AI
The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems
Top of website
Ethically Aligned Design
Ethically Aligned Design, First Edition (EAD1e), “From Principles to Practice”, provides a mapping of the conceptual framework of Ethically Aligned Design. It outlines the logic behind “Three Pillars” that form the basis of EAD1e, and it connects the Pillars to high-level “General Principles” which guide all manner of ethical A/IS design.
From Principles to Practice – It is at this step of the Ethically Aligned Design Conceptual Framework that readers will be able to identify the Principles and Chapters of key relevance to their work. Content provided in EAD1e is organized by “Issues”, identified as the most pressing ethical matters surrounding A/IS design today, and “Recommendations” on how to address them.
P7000 – Process for Addressing Ethical Concerns During System Design
P7003 – Algorithmic Bias Considerations
Formal methodologies by which to report how designers addressed bias in the creation of their algorithms, including benchmarking procedures; bias quality control; application boundaries; and bias due to incorrect interpretation of system output by users (e.g. correlation vs. causation).
P7006 – Standard for Personal Data Artificial Intelligence (AI) Agent
The technical elements required to create and grant access to a personalized Artificial Intelligence (AI) agent, including inputs, learning, ethics, rules, and values controlled by individuals.
P7007 – Ontological Standard for Ethically Driven Robotics and Automation Systems
A set of ontologies that contain concepts, definitions and axioms which are necessary to establish ethically driven methodologies for the design of Robots and Automation Systems.
P7008 – Standard for Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems
“Nudges” are programmatic manipulations designed to influence the behavior or emotions of a user. This standard establishes a definition of nudges. It contains concepts, functions and benefits necessary to ensure ethically driven methodologies for AI systems that incorporate them.
P7009 – Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems
Methodologies and tools for the development and use of fail-safe mechanisms in autonomous and semi-autonomous systems, including procedures for measuring, testing, and certifying a system’s ability to fail safely; instructions for improvement in the case of unsatisfactory performance; and compliance with accountability requirements.
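To make the idea of “failing safely” concrete, here is a minimal, purely illustrative sketch of a fail-safe wrapper that reverts to a predefined safe action when a controller crashes or overruns its time budget. This is a generic watchdog pattern of my own choosing, not anything specified by IEEE P7009; all names and thresholds are invented.

```python
import time

SAFE_ACTION = "stop"  # hypothetical safe default for an autonomous system

def fail_safe(controller, observation, time_budget_s=0.05):
    """Run `controller`; return its action, or SAFE_ACTION on any failure."""
    start = time.monotonic()
    try:
        action = controller(observation)
    except Exception:
        return SAFE_ACTION  # controller crashed: fail safely
    if time.monotonic() - start > time_budget_s:
        return SAFE_ACTION  # controller missed its deadline: fail safely
    return action

def flaky_controller(obs):
    # Toy controller that faults on negative sensor readings.
    if obs < 0:
        raise ValueError("sensor fault")
    return "steer_left"

print(fail_safe(flaky_controller, 1))    # normal path
print(fail_safe(flaky_controller, -1))   # fault path falls back to "stop"
```

A real P7009-conformant design would also cover measurement, certification, and accountability; the sketch only shows the fallback-to-safe-state core.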
P7011 – Standard for the Process of Identifying and Rating the Trustworthiness of News Sources
Standards to create and maintain news purveyor ratings for purposes of public awareness. Defines an open source algorithm and a score card rating system for rating trustworthiness.
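As a hedged sketch of what a score card rating system might look like, the toy below combines per-criterion scores into one weighted trust rating. The criteria and weights are invented for illustration and are not drawn from P7011.

```python
# Toy score card: weighted average of per-criterion scores in [0, 1].
# Criteria and weights are hypothetical, not from IEEE P7011.
CRITERIA_WEIGHTS = {
    "sourcing_transparency": 0.4,
    "correction_policy": 0.3,
    "fact_check_record": 0.3,
}

def trust_score(scores):
    """Combine per-criterion scores (0-1) into one weighted rating."""
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

example = {"sourcing_transparency": 1.0,
           "correction_policy": 0.5,
           "fact_check_record": 0.8}
print(round(trust_score(example), 2))  # 0.4*1.0 + 0.3*0.5 + 0.3*0.8 = 0.79
```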
P7014 – Standard for Ethical considerations in Emulated Empathy in Autonomous and Intelligent Systems
Defines a model for ethical considerations and practices in the creation and use of empathic technology.
IEEE 7010-2020 – IEEE Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-Being
This recommended practice measures the impact of artificial intelligence or autonomous and intelligent systems (A/IS) on human well-being; its overall intent is that A/IS produce positive outcomes for human well-being. It is grounded in scientifically valid well-being indices currently in use and in a stakeholder engagement process. Its stated purposes are to guide product development, identify areas for improvement, support risk management and performance assessment, and identify intended and unintended users, uses, and impacts of A/IS on human well-being.
Ethical AI Certification Initiatives
ECPAIS – Ethics Certification Program for Autonomous and Intelligent Systems
Creates specifications for certification of transparency, accountability, and reduction in algorithmic bias, so as to easily and visually communicate to consumers whether a system is deemed “safe” or “trusted.”
CertNexus – Certified Ethical Emerging Technologist
“designed for individuals seeking to demonstrate a vendor neutral, cross-industry, and multidisciplinary understanding of applied technology ethics that will enable them to navigate the processes by which ethical integrity may be upheld within emerging data-driven technology fields (such as artificial intelligence (AI)/machine learning, Internet of Things (IoT), and data science).”
AI Certified Engineer – Singapore
“AI Certification is a professional qualification programme to recognize and award credentials to working professionals in AI-related engineering roles. Please note this is not a course or academic programme. AI Certification validates the technical competencies and qualified work experience of applicants and awards the title of AI Certified Engineer to those who pass the assessment listed in the AI Certification Handbook.” Includes ethical training.
General Awareness, Groups, etc.
Recommendations on the ethics of artificial intelligence
People + AI Research (PAIR) is a multidisciplinary team at Google that explores the human side of AI by doing fundamental research, building tools, creating design frameworks, and working with diverse communities.
AIethicist.org is a global repository of reference & research material for anyone interested in the current discussions on AI ethics and impact of AI on individuals and society. AWESOME!
World Economic Forum
WEF “Empowering AI Leadership” (https://www.weforum.org/projects/ai-board-leadership-toolkit) links to an AI toolkit for Boards of Directors (https://spark.adobe.com/page/RsXNkZANwMLEf/), though there is not much substance in it.
American Council for Technology-Industry Advisory Council AI Working Group
“The American Council for Technology-Industry Advisory Council (ACT-IAC) is a non-profit educational organization established to accelerate government mission outcomes through collaboration, leadership and education.”
“The ACT-IAC AI Working Group seeks to enable government agencies with the ability to assess their organization to identify areas that would benefit from the introduction of artificial intelligence and share best practices to implement AI to drive mission and operational value.”
They have published a 42-page “Ethical Application of Artificial Intelligence Framework (EAAI)” consultation document, which is currently receiving feedback.
They have also published an AI Primer and an AI Playbook. They are now working on AI certification for US federal employees.
“AlgorithmWatch is a non-profit research and advocacy organization committed to evaluating and shedding light on algorithmic decision-making processes that have a social relevance, meaning they are used either to predict or prescribe human action or to make decisions automatically.”
OECD.AI Policy Observatory
Lays out AI principles and some legal instruments.
Global AI policy actions
Fairness, Accountability, and Transparency in Machine Learning
Hosts an annual conference producing significant work. However, the website may not be actively maintained, as it does not list the 2019 and 2020 conferences. Seems related to Microsoft FATE (see below).
Australian Government AI Ethics Framework
“We’ve committed to developing an AI Ethics Framework to guide businesses and governments looking to design, develop and implement AI in Australia. This is part of the Australian Government’s commitment to build Australia’s AI capabilities.”
Contains a set of principles, guidelines on application, roadmap of further work to be done.
Tools for Ethical AI
XAI is a Machine Learning library designed with AI explainability at its core. XAI contains various tools that enable analysis and evaluation of data and models. The XAI library is maintained by The Institute for Ethical AI & ML, and it was developed based on the 8 principles for Responsible Machine Learning.
AI Global is a not-for-profit working on “practical tools to support the responsible development of AI systems”. It claims to create “easy-to-use tools that support practitioners as they navigate the complex landscape of Responsible AI”.
The Ethical AI Design Assistant
A virtual assessment to help members anticipate problems and future-proof their AI system. Still in development.
They also collaborate with the Montreal AI Ethics Institute, which is providing a consulting service https://montrealethics.ai/consulting/.
They are also trying to create a Portal on all things Ethical AI – but it is in need of content.
“ForHumanity endeavors to be a beacon, examining the impact of AI & Automation on jobs, society, our rights and our freedoms. We focus on mitigating risk in the areas of trust, ethics, bias, privacy and cybersecurity at the corporate and public-policy levels.”
They offer an independent audit service – “Introduction to Independent Audit of AI Systems is a transparent, crowd-sourced set of audit rules/standards for all autonomous systems in the areas of Ethics, Bias, Privacy, Trust, and Cybersecurity. The ForHumanity Fellows consider, draft and vet the audit rules continuously. You are welcome to join the effort, examine the audit in detail and provide your unique perspective and insights into the framework. We believe this framework will create an #infrastructureoftrust desperately needed for our autonomous systems.”
“ALLAI is an independent organization dedicated to drive and foster Responsible AI. ALLAI is an initiative of Catelijne Muller, Virginia Dignum and Aimee van Wynsberghe, the three Dutch members of the EU High Level Expert Group on Artificial Intelligence.”
“ALLAI offers a suite of in-house Masterclasses and Expert Sessions aimed at improving knowledge and awareness on the capabilities and limitations of AI. The Masterclasses and Expert Sessions are tailored to the specific demands and level of expertise within your organization.”
Fairness, Accountability, Transparency, and Ethics in AI
Their Incubations section provides tools for ethical AI, especially Datasheets for Datasets and Fairlearn (a Python library to assess an AI system’s fairness and mitigate any observed unfairness issues; Fairlearn contains mitigation algorithms as well as a Jupyter widget for model assessment).
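The core idea behind Fairlearn’s assessment tooling is disaggregation: computing a metric per sensitive group and looking at the gap between groups. The plain-Python sketch below illustrates that idea only; it deliberately does not use Fairlearn’s own API, and all names and data are invented.

```python
# Group-wise metric: accuracy per sensitive group, plus the gap between
# best- and worst-served group -- the kind of disaggregated view that
# fairness assessment tools report.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    correct, total = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    return {g: correct[g] / total[g] for g in total}

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1]
groups = ["a", "a", "a", "b", "b", "b"]

per_group = accuracy_by_group(y_true, y_pred, groups)
gap = max(per_group.values()) - min(per_group.values())
print(per_group, gap)  # group "a" is served worse than group "b"
```

A large gap between groups is the signal that would then motivate applying one of Fairlearn’s mitigation algorithms.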
Google Model Cards
Google are actively developing tools for ethical assessment of AI, primarily Model Cards
- Model Card website: https://modelcards.withgoogle.com/about
- Model Card samples: https://modelcards.withgoogle.com/model-reports
Model Card toolkit (Github)
“The Model Card Toolkit (MCT) streamlines and automates generation of Model Cards, machine learning documents that provide context and transparency into a model’s development and performance.”
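A model card is ultimately structured documentation about a model. As a minimal sketch of the kind of record involved (the field names below loosely echo published model card examples, but this simplified schema and all values are invented; the real toolkit generates much richer documents):

```python
import json

# A minimal model-card-like record. This schema is an illustration only,
# not the Model Card Toolkit's actual data model.
model_card = {
    "model_details": {"name": "toy-classifier", "version": "0.1"},
    "intended_use": "Illustration only; not a production model.",
    "metrics": [{"name": "accuracy", "value": 0.92, "slice": "overall"}],
    "limitations": "Evaluated on a single, small test set.",
}

# Model cards are meant to be published alongside the model, e.g. as
# JSON or rendered HTML.
print(json.dumps(model_card, indent=2))
```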
NYC 1894 – Annual auditing of AI recruitment systems
AI recruitment systems must have an audit for bias every year, with full traceability of the specific criteria used by the system when assessing a CV.
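One common statistic in bias audits of selection systems is the impact ratio: each group’s selection rate divided by the highest group’s rate, with ratios below 0.8 conventionally flagged (the “four-fifths rule”). The sketch below is only an illustration of that statistic; the data are invented and the law’s actual audit requirements are more detailed.

```python
# Impact ratio per group: selection_rate(group) / max selection_rate.
# Ratios below 0.8 are conventionally flagged (the "four-fifths rule").
def impact_ratios(selected, groups):
    rates = {}
    for g in set(groups):
        members = [s for s, gg in zip(selected, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

selected = [1, 1, 0, 1, 0, 0, 1, 0]  # 1 = CV advanced by the system
groups   = ["x", "x", "x", "x", "y", "y", "y", "y"]

ratios = impact_ratios(selected, groups)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios, flagged)  # group "y" is selected far less often than "x"
```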
My thanks to Dr. Ansgar Koene, Chair for IEEE Standard on Algorithm Bias Considerations, for his assistance in putting this list together.