by Alan J Weissberger
Should robots and artificial intelligence (AI) be used to improve the human condition and enhance productivity, or are they only to be used to increase revenue and profits for the companies pursuing those technologies? Can they be trusted to make decisions without human intervention? What about robotic or AI accidents/mishaps?
Alan Winfield, Professor of Robot Ethics at UWE Bristol, addressed those questions along with other issues in his keynote speech on IEEE Ethical Standards at the March 6th Santa Clara University Machine Learning Symposium.
Prof. Winfield conducts research in mobile robotics within the Bristol Robotics Lab. He’s deeply interested in mobile robots for two reasons:
- They are complex and potentially useful machines that embody just about every design challenge and discipline there is.
- Robots allow us to address some deep questions about life, emergence, culture and intelligence in a radically new way by building models.
Prof. Winfield said that a new generation of IEEE ethical standards in robotics and AI are emerging as a direct response to a growing awareness of the ethical, legal and societal impacts of those technologies.
Alan called special attention to the reality that robots and AI-controlled machines, including diagnostic systems, can and do go wrong. A transparent AI, robotics, or autonomous system (see P7001 below) permits observers to determine why it behaves in a certain way. Without transparency, discovering what went wrong and why is extremely difficult.
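The transparency idea can be made concrete with a minimal sketch: an autonomous system that logs each decision's inputs, chosen action, and rationale, so an outside observer can later reconstruct why it behaved as it did. The names below (`DecisionLog`, `DecisionRecord`, the lidar example) are hypothetical illustrations, not part of any IEEE standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

@dataclass
class DecisionRecord:
    """One logged decision: what the system saw, what it did, and why."""
    timestamp: str
    inputs: dict[str, Any]
    action: str
    rationale: str

@dataclass
class DecisionLog:
    """Append-only log so observers can trace why the system acted."""
    records: list[DecisionRecord] = field(default_factory=list)

    def record(self, inputs: dict[str, Any], action: str, rationale: str) -> DecisionRecord:
        rec = DecisionRecord(
            timestamp=datetime.now(timezone.utc).isoformat(),
            inputs=inputs,
            action=action,
            rationale=rationale,
        )
        self.records.append(rec)
        return rec

    def explain_last(self) -> str:
        """Human-readable account of the most recent decision."""
        rec = self.records[-1]
        return f"At {rec.timestamp}, chose '{rec.action}' because {rec.rationale}"

# Hypothetical usage: an obstacle-avoidance decision in a mobile robot
log = DecisionLog()
log.record(
    inputs={"lidar_min_range_m": 0.4},
    action="stop",
    rationale="obstacle within 0.5 m safety threshold",
)
print(log.explain_last())
```

This is only a toy "flight recorder" under assumed requirements; a real implementation would need tamper-evident storage and a standardized record format, which is the kind of question P7001 addresses.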
IEEE Standard on Ethically Aligned Design:
In April 2016, the IEEE Standards Association launched the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. This initiative marks a watershed in the emergence of ethical standards and a new subject matter for IEEE standards.
This initiative explicitly aims to address the deep ethical challenges that face the entire gamut of autonomous and intelligent systems — from driverless car autopilots to AI-based medical diagnosis, deep learning drones, care robots and chatbots. This IEEE standardization effort is both ambitious and unprecedented. Yet few are even aware of it (including this author, prior to attending the March 6th SCU Law ML Symposium). Please see John C. Havens' comment in the box below this article.
The first major output from the IEEE Standards Association's global ethics initiative is a document titled Ethically Aligned Design (EAD), developed through an iterative process that invited public feedback on two earlier draft versions.
The first published version of EAD (EAD1e) is the product of a three-year work effort that distills the consensus of its vast community of over seven hundred global creators into a set of high-level ethical principles, key issues, and practical recommendations. It is available for download here.
[A second draft version of EAD was launched in December 2017; feedback on it was used to create EAD1e. It set out more than 100 ethical issues and recommendations.]
According to EAD version 1 (EAD1e), the ethical and values-based design, development, and implementation of autonomous and intelligent systems should be guided by the following General Principles:
- Human Rights
Autonomous and intelligent systems (A/IS) shall be created and operated to respect, promote, and protect internationally recognized human rights.
- Well-being
A/IS creators shall adopt increased human well-being as a primary success criterion for development.
- Data Agency
A/IS creators shall empower individuals with the ability to access and securely share their data, to maintain people’s capacity to have control over their identity.
- Effectiveness
A/IS creators and operators shall provide evidence of the effectiveness and fitness for purpose of A/IS.
- Transparency
The basis of a particular A/IS decision should always be discoverable.
- Accountability
A/IS shall be created and operated to provide an unambiguous rationale for all decisions made.
- Awareness of Misuse
A/IS creators shall guard against all potential misuses and risks of A/IS in operation.
- Competence
A/IS creators shall specify, and operators shall adhere to, the knowledge and skill required for safe and effective operation.
The work of EAD covers general ethical principles; how to embed values into autonomous intelligent systems; methods to guide ethical design; safety and beneficence of artificial general intelligence and artificial superintelligence; personal data and individual access control; reframing autonomous weapons systems; economics and humanitarian issues; law; affective computing; classical ethics in AI; policy; mixed-reality; and well-being. Each EAD committee was additionally tasked with identifying, recommending and promoting new candidate standards.
These 15 standards projects (P7000 through P7014) are all based on ethical principles:
- P7000 — Model Process for Addressing Ethical Concerns During System Design
- P7001 — Transparency of Autonomous Systems
- P7002 — Data Privacy Process
- P7003 — Algorithmic Bias Considerations
- P7004 — Standard on Child and Student Data Governance
- P7005 — Standard on Employer Data Governance
- P7006 — Standard on Personal Data Artificial Intelligence (AI) Agent
- P7007 — Ontological Standard for Ethically Driven Robotics and Automation Systems
- P7008 — Standard for Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems
- P7009 — Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems
- P7010 — Wellbeing Metrics Standard for Ethical Artificial Intelligence and Autonomous Systems
- P7011 — Standard for the Process of Identifying and Rating the Trustworthiness of News Sources
- P7012 — Standard for Machine Readable Personal Privacy Terms
- P7013 — Inclusion and Application Standards for Automated Facial Analysis Technology*
- P7014 — Standard for Ethical Considerations in Emulated Empathy in Autonomous and Intelligent Systems
* The P7013 Working Group is to be discontinued.
Prof. Winfield’s Conclusions:
While some argue over the pace and level of impact of robotics and AI (on jobs, for example), most agree that increasingly capable intelligent systems create significant ethical challenges, as well as great promise. This new generation of IEEE ethical standards takes a powerful first step toward addressing those challenges. Standards, like open science, are a trust technology.
Without ethical standards, it is hard to see how robots and AIs will be trusted and widely accepted, and without that acceptance, their great promise will not be realized.
This author wholeheartedly agrees!
Till next time…