The number of AI applications is growing rapidly, bringing both opportunities and challenges. The European Union considers AI reliability to be of utmost importance, as is evident in its (proposed) regulations. This article focuses on building robust, ethical AI systems. The European Union outlines three conditions for trustworthy AI:
- Lawful
- Ethical
- Robust
This article focuses on the last two points: ethical and robust AI.
What are the characteristics of robust and ethical AI systems? According to experts, the seven characteristics of a robust and ethical AI system are:
- There is always human oversight.
- The system is technically robust and secure.
- Privacy and data governance are ensured.
- Processes are transparent.
- It is inclusive and does not discriminate.
- Consideration is given to environmental and societal well-being.
- There is an accountability obligation.
The complete EU document containing the ethical guidelines for trustworthy AI can be downloaded from the European Commission website.
Building Robust, Ethical AI Systems: What does robust mean?
The word “robust” appears several times. In everyday use, something strong, sturdy, and not easily broken is called robust. Because AI is based on models, it is worth examining what robust means in that context. A robust model is one that stays within a predefined level of accuracy. In statistics, which plays a significant role in the calculations behind AI, robust refers to procedures that remain reliable even when the data deviate from the model’s assumptions. Robust AI maintains ethical and lawful outcomes (accurate enough to stay within that predefined range) despite a wide variety of inputs.
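That definition can be made concrete with a small sketch: evaluate a model twice, once on clean inputs and once on perturbed inputs, and check that accuracy stays within a predefined band. Everything here (the toy `classify` rule, the noise level, the `MAX_DROP` threshold) is a hypothetical illustration, not a prescribed method:

```python
import random

def classify(x):
    # Toy model: predict label 1 when the feature exceeds a threshold.
    return 1 if x > 0.5 else 0

def accuracy(inputs, labels, perturb=0.0):
    # Evaluate the model, optionally adding uniform noise to each input
    # to simulate a wider variety of real-world conditions.
    rng = random.Random(42)  # fixed seed for a reproducible check
    correct = 0
    for x, y in zip(inputs, labels):
        noisy_x = x + rng.uniform(-perturb, perturb)
        if classify(noisy_x) == y:
            correct += 1
    return correct / len(inputs)

# Synthetic evaluation set, well clear of the decision boundary.
inputs = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9] * 10
labels = [0, 0, 0, 1, 1, 1] * 10

baseline = accuracy(inputs, labels)
stressed = accuracy(inputs, labels, perturb=0.05)

# Robustness criterion: accuracy may not drop by more than 5 points
# under small input perturbations.
MAX_DROP = 0.05
is_robust = (baseline - stressed) <= MAX_DROP
```

The point is not the toy model but the discipline: “robust” only means something once the accepted accuracy range and the expected input variation are written down and tested against.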
Accountability for AI
Another interesting point is accountability. Taking responsibility essentially means two things:
- foreseeing the consequences of your actions beforehand and being able to justify them to yourself
- being accountable for your choices to society afterwards
This is in line with the desired ethical approach to the use of AI. However, it is easier said than done. The article about solving moral dilemmas contains a step-by-step guide to making decisions for yourself and for society. To put it into practice, it is important to do the following:
- Create a list of criteria that your AI system must meet to be ethical and robust
- Regularly check if your AI application still meets the criteria and identify areas for improvement
- Continuously add or modify criteria while maintaining a critical perspective, avoiding a mere “checklist” approach
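The steps above can be sketched as a living criteria register that is audited regularly. The criteria names and checks below are hypothetical placeholders; in a real system each check would query logs, metrics, or review records:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Criterion:
    # One ethical/robustness requirement and how to verify it.
    name: str
    check: Callable[[], bool]

def audit(criteria: List[Criterion]) -> List[str]:
    # Return the names of criteria the system currently fails,
    # so each review surfaces concrete areas for improvement.
    return [c.name for c in criteria if not c.check()]

# Hypothetical criteria; the list is meant to be extended and
# revised over time rather than treated as a fixed checklist.
criteria = [
    Criterion("human oversight configured", lambda: True),
    Criterion("bias metrics within bounds", lambda: False),
]

failing = audit(criteria)
```

Keeping the criteria as data (rather than a one-off checklist) makes it natural to add, modify, and re-run them at every review, which is exactly the critical, continuous perspective the steps call for.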
The four pillars of trustworthy AI for the EU
Finally, the EU bases its approach on four principles for trustworthy AI. All the conditions and characteristics mentioned above derive from these principles:
- Ensuring human autonomy
- Prevention of harm
- Justice (see developing just AI systems and algorithms)
- Accountability
It’s worth ending with a reflection on the principle of preventing harm: procedures must be in place to keep harm from occurring in the first place. In other words, if you have doubts, don’t rush ahead; act only when you understand the consequences. As with the GDPR, a significant share of the responsibility for assessing risks rests with those who deploy AI. Building robust, ethical AI systems is therefore a crucial part of risk management. For the same reason, it’s advisable to begin with a moral deliberation before embarking on construction.
Nick Nijhuis helps organizations become digitally mature, is a business innovation lecturer, a trainer in moral leadership, and a NIMA examiner.