Why Educate for Trustworthy AI?
By: Catelijne Muller, President of ALLAI, former member of the High Level Expert Group on AI
Trustworthy AI has become a ‘household name’ in Europe. First coined by the EU High Level Expert Group on AI in 2019, it reflects the European view and strategy for AI that is ‘lawful, ethically aligned and socio-technically robust’. But what does this mean exactly? And, more importantly, how do we make sure that trustworthy AI moves from a ‘narrative’ into actual practice?
What is Trustworthy AI?
As noted above, the High Level Expert Group on AI to the European Commission set the standard for what Trustworthy AI entails.
It rightly recognised that AI is not an unregulated technology and that many existing laws and regulations apply to AI in much the same way that they apply to the development and use of any other (software) tool. The first pillar of Trustworthy AI is therefore lawfulness: the development, deployment and use of AI should comply with existing (and future) legislation. That is not to say that all existing regulation is ‘fit for purpose’ for a world with AI, which is why both the European Commission and the Council of Europe are working on legal frameworks for AI, in order to fill the gaps in existing legislation, including human rights law.
The second pillar of Trustworthy AI is ethical alignment. Ethical reflection on AI technology can serve multiple purposes. First, it can stimulate reflection on the need to protect individuals and groups at the most basic level. Second, it can stimulate new kinds of innovations that seek to foster ethical values, such as those helping to achieve the UN Sustainable Development Goals, which are firmly embedded in the forthcoming EU Agenda 2030. Trustworthy AI can improve individual flourishing and collective wellbeing by generating prosperity, value creation and wealth maximisation. It can contribute to achieving a fair society by helping to increase citizens’ health and well-being in ways that foster equality in the distribution of economic, social and political opportunity. Third, ethical reflection often leads to legislation, acting as a precursor to rules whose time has not yet come. Finally, ethical reflection can serve the purpose of interpreting, explaining, valuing and correctly applying (or not applying) rules that already exist.
The third pillar of Trustworthy AI is socio-technical robustness. Even if lawfulness and an ethical purpose are ensured, individuals and society must also be confident that AI systems will not cause any unintentional harm. Such systems should perform in a safe, secure and reliable manner, and safeguards should be foreseen to prevent any unintended adverse impacts. It is therefore important to ensure that AI systems are robust, both from a technical perspective (ensuring the system’s technical robustness as appropriate in a given context, such as the application domain or life cycle phase) and from a social perspective (in due consideration of the context and environment in which the system operates). Ethical and robust AI are hence closely intertwined and complement each other.
Achieving Trustworthy AI through Education
ALLAI believes that one of the main avenues for bringing Trustworthy AI into practice is education: education of those who develop AI, but also of those who might not develop AI themselves yet will work with it in their future professions, for example in decision-making or management positions. The decision whether, and how, to deploy AI in a trustworthy manner in a company, a government institution or a city, for example, requires broad reflection and substantive knowledge of: 1. why Trustworthy AI is important; 2. what Trustworthy AI actually entails; and 3. how it can be effectively achieved within any given setting.
Education is also important for those who will come to work with AI, such as legal, medical or HR professionals, journalists, teachers, law enforcement officers and so on. They need to understand the technical, ethical, legal and societal implications of the AI applications they work with, and they need to be empowered to maintain their professional autonomy and to appreciate the capabilities and limitations of the technology, so that appropriate and responsible human-machine cooperation can be achieved.
The Trustworthy AI project aims to equip higher education professionals throughout the European Union with the tools and resources to teach all elements of Trustworthy AI to these future professionals.
About ALLAI and its role in this project
ALLAI is proud to be a partner in this project and will bring its expertise on the concept of Trustworthy AI and the Ethics Guidelines for Trustworthy AI to the project. ALLAI has already built significant expertise and experience in teaching Trustworthy AI in (public) organisations and will build on this experience by developing practical tools and resources for HEI teachers.
ALLAI is an independent international organisation that advocates for responsible AI. ALLAI promotes AI that is safe, sustainable, ethically aligned, lawful and socio-technically robust. ALLAI was founded in 2018 by the three Dutch members of the High Level Expert Group on AI to the European Commission (Catelijne Muller, Virginia Dignum and Aimee van Wynsberghe), each considered a pioneer in setting and driving the agenda towards responsible AI, in particular at EU and international level. The motivation for founding ALLAI was to make sure that the broad impact of AI on society remains at the top of policy agendas, but also to ensure that high-level initiatives, guidelines and policies around AI actually find their way into society. To achieve this, ALLAI’s work focusses on several pillars:
- AI policy: advocating and advising on AI policy making and regulatory developments around AI at EU, European and global level, through various activities, collaborations and roles with European and global institutions;
- Knowledge building and awareness raising among policy makers, public institutions, companies and civil society organisations on the opportunities and challenges of AI, AI policy developments and responsible AI practices;
- Translating responsible AI principles into practice through various programmes and projects;
- Performing research on practical elements of responsible AI and translating existing responsible AI research for wider audiences through dissemination, knowledge and awareness projects;
- Developing educational tools and resources for teaching responsible AI;
- Running specific responsible AI projects aimed at tackling particular AI challenges or advancing particular AI applications for the benefit of society.