Summary of IEEE Ethical Standards for Robotics and AI

by Alan J Weissberger

Introduction:

Training a Universal Robot with a wand - no computer programmer required.

Should robots and artificial intelligence (AI) be used to improve the human condition and enhance productivity, or only to increase revenue and profits for the companies pursuing those technologies? Can they be trusted to make decisions without human intervention? And what about robotic or AI accidents and mishaps?

Alan Winfield, Professor of Robot Ethics at UWE Bristol, addressed those questions along with other issues in his keynote speech on IEEE Ethical Standards at the March 6th Santa Clara University Machine Learning Symposium.

Prof. Winfield conducts research in mobile robotics within the Bristol Robotics Lab.  He’s deeply interested in mobile robots for two reasons:

  1. They are complex and potentially useful machines that embody just about every design challenge and discipline there is.
  2. Robots allow us to address some deep questions about life, emergence, culture and intelligence in a radically new way by building models.

Prof. Winfield said that a new generation of IEEE ethical standards in robotics and AI is emerging as a direct response to a growing awareness of the ethical, legal and societal impacts of those technologies.

Alan called special attention to the reality that robots and AI-controlled machines and diagnoses can and do go wrong. A transparent AI/robotics/autonomous system (see P7001 below) permits observers to determine why it behaves in a certain way. Without transparency, discovering what went wrong and why is extremely difficult.
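To make that idea concrete, here is a minimal sketch of what decision-level transparency might look like in software: a controller wrapper that records every decision together with the inputs and rationale behind it, so an observer can later reconstruct why the system acted as it did. The class and field names (TransparentController, DecisionRecord, and so on) are illustrative assumptions of this author, not anything prescribed by P7001 or any other IEEE standard.

    # A minimal, hypothetical Python sketch of decision logging for transparency.
    # Nothing here is prescribed by P7001; the names and fields are illustrative only.
    import json
    import time
    from dataclasses import dataclass, asdict
    from typing import Any, Callable, Dict, List, Tuple


    @dataclass
    class DecisionRecord:
        """One entry in an audit trail: what the system decided and why."""
        timestamp: float
        inputs: Dict[str, Any]   # the readings/features the decision was based on
        decision: str            # the action or output that was chosen
        rationale: str           # a human-readable basis for the decision
        software_version: str    # which controller/model produced it


    class TransparentController:
        """Wraps a decision function so every decision is recorded for later review."""

        def __init__(self, decide_fn: Callable[[Dict[str, Any]], Tuple[str, str]],
                     software_version: str):
            self._decide_fn = decide_fn
            self._software_version = software_version
            self._log: List[DecisionRecord] = []

        def decide(self, inputs: Dict[str, Any]) -> str:
            decision, rationale = self._decide_fn(inputs)
            self._log.append(DecisionRecord(
                timestamp=time.time(),
                inputs=inputs,
                decision=decision,
                rationale=rationale,
                software_version=self._software_version,
            ))
            return decision

        def export_log(self) -> str:
            """Serialize the audit trail so an observer can reconstruct behaviour."""
            return json.dumps([asdict(r) for r in self._log], indent=2)


    # Example: a trivial rule-based decision function for a mobile robot.
    def avoid_obstacle(inputs: Dict[str, Any]) -> Tuple[str, str]:
        if inputs["obstacle_distance_m"] < 0.5:
            return "stop", "obstacle closer than 0.5 m safety threshold"
        return "proceed", "path clear beyond safety threshold"


    controller = TransparentController(avoid_obstacle, software_version="demo-0.1")
    controller.decide({"obstacle_distance_m": 0.3})
    controller.decide({"obstacle_distance_m": 2.0})
    print(controller.export_log())   # an auditor can now see what happened and why

In a real robot, records like these would also need to be tamper-evident and time-synchronized with sensor logs, which is exactly the kind of requirement a transparency standard can specify.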

IEEE Standard on Ethically Aligned Design:

In April 2016, the IEEE Standards Association launched the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. This initiative marks a watershed in the emergence of ethical standards and introduces a new subject area for IEEE standards.

This initiative explicitly aims to address the deep ethical challenges that face the entire gamut of autonomous and intelligent systems — from driverless car autopilots to AI-based medical diagnosis, to deep learning drones, to care robots, to chatbots. This IEEE standardization effort is both ambitious and unprecedented. Yet few are even aware of it (including this author, prior to attending the March 6th SCU Law ML Symposium). Please see John C. Havens’ comment in the box below this article.

Ethically Aligned Design report from the IEEE

The first major output from the IEEE Standards Association’s global ethics initiative is a document titled Ethically Aligned Design (EAD), which was developed through an iterative process that invited public feedback on two draft versions.

The first published version of EAD (EAD1e) represents a three-year, comprehensive work effort that distills the consensus of its vast community of over seven hundred global creators into a set of high-level ethical principles, key issues, and practical recommendations. It is available for download here.

[A second draft version of EAD was released in December 2017 for public feedback; that feedback was utilized to create EAD1e. It set out more than 100 ethical issues and recommendations.]

According to EAD version 1 (EAD1e), the ethical and values-based design, development, and implementation of autonomous and intelligent systems should be guided by the following General Principles:

  1. Human Rights

Autonomous and intelligent systems (A/IS) shall be created and operated to respect, promote, and protect internationally recognized human rights.

  2. Well-being

A/IS creators shall adopt increased human well-being as a primary success criterion for development.

  3. Data Agency

A/IS creators shall empower individuals with the ability to access and securely share their data, to maintain people’s capacity to have control over their identity.

  4. Effectiveness

A/IS creators and operators shall provide evidence of the effectiveness and fitness for purpose of A/IS.

  5. Transparency

The basis of an A/IS decision should always be discoverable.

  6. Accountability

A/IS shall be created and operated to provide an unambiguous rationale for all decisions made.

  7. Awareness of Misuse

A/IS creators shall guard against all potential misuses and risks of A/IS in operation.

  8. Competence

A/IS creators shall specify and operators shall adhere to the knowledge and skill required for safe and effective operation.


The work of EAD covers general ethical principles; how to embed values into autonomous intelligent systems; methods to guide ethical design; safety and beneficence of artificial general intelligence and artificial superintelligence; personal data and individual access control; reframing autonomous weapons systems; economics and humanitarian issues; law; affective computing; classical ethics in AI; policy; mixed-reality; and well-being. Each EAD committee was additionally tasked with identifying, recommending and promoting new candidate standards.


IEEE P7000 series of standards projects:

These 15 standards projects are all based on ethical principles. (P7001, referenced above, addresses transparency of autonomous systems.)

* The P7103 Working Group is going to be discontinued.

Prof. Winfield’s Conclusions:

While some argue over the pace and level of impact of robotics and AI (on jobs, for example), most agree that increasingly capable intelligent systems create significant ethical challenges, as well as great promise. This new generation of IEEE ethical standards takes a powerful first step toward addressing those challenges. Standards, like open science, are a trust technology.

Without ethical standards, it is hard to see how robots and AIs will be trusted and widely accepted, and without that acceptance, their great promise will not be realized.

This author wholeheartedly agrees!

Till next time…


Comments

3 responses to “Summary of IEEE Ethical Standards for Robotics and AI”

  1. John C. Havens

    Comment by John C. Havens, Executive Director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, who reviewed a draft of the article:

    While I agree marketing for Ethically Aligned Design and other efforts of this initiative could use improvement, it’s vague to say “few” here as it could indicate the publication of the paper Ethically Aligned Design and our other efforts haven’t had significant impact.

    To date, EAD has been utilized (with full credit to IEEE / The Initiative) by the OECD in the creation of their AI principles, The Future of Life Institute in the creation of their AI Principles, the EU High-Level Expert Group for their principles, and IBM for the creation of their paper, “Everyday Ethics for Artificial Intelligence.” In addition, our Chair, volunteers, and Konstantinos Karachalios have spoken at over three dozen events around the world since 2015 about EAD and our efforts.

    We have also had articles about The Initiative in approximately 10 IEEE outlets and journals as well. My point here is not to sound defensive, but that I’d recommend wording like, “Yet despite the impact of The Initiative, there is a need and opportunity for greater awareness about this important work” or something along those lines.

  2. Ken Pyle, Managing Editor

    There is a great deal to digest just in this excellent summary of the efforts of the IEEE to address extremely important questions. With technology advancing so rapidly, and the tools to enable the implementation of AI/ML more accessible than ever, the questions being addressed above need to be made digestible to the layman.

    For instance, there is a text-to-audio program that is essentially free to use and could probably be figured out by a 6th grader. After training on a voice, this program can replicate that voice so effectively that it would be difficult for most people to tell which was the real speaker and which was the machine-generated version. It’s not hard to imagine how that could be used for nefarious outcomes.

    As Kelvin Coleman, Executive Director of the National Cybersecurity Alliance, suggested in his recent cybersecurity interview with Alan at SCU, AI/ML ethics training probably needs to be taught starting in grade school if we are going to instill a sense of right and wrong.

  3. Jack

    Affirmative, there is more to learn from this IEEE ethical standards initiative. It’s a very broad field. The article has very solid facts and a concise conclusion.
