Building robust, ethical AI systems

Five avatars in a circle with one in the center. They are connected to each other, symbolizing the integration of elements to build a robust, ethical AI system

The number of AI applications is growing rapidly, and with it the opportunities and challenges they bring. The European Union considers the reliability of AI to be of utmost importance, as is evident in its (proposed) regulations. This article focuses on building robust, ethical AI systems. The European Union outlines three conditions for trustworthy AI:

  1. Lawful
  2. Ethical
  3. Robust

This article focuses on the last two points: ethical and robust AI.

What are the characteristics of robust and ethical AI systems? According to the EU's expert group, a robust and ethical AI system has seven characteristics:

  1. There is always human oversight.
  2. The system is technically robust and secure.
  3. Privacy and data governance are ensured.
  4. Processes are transparent.
  5. It is inclusive and does not discriminate.
  6. Consideration is given to environmental and societal well-being.
  7. There is an accountability obligation.

The complete document containing the ethical guidelines for trustworthy AI by the EU can be downloaded here.

Building Robust, Ethical AI Systems: What does robust mean?

The word “robust” appears several times. In everyday language, something strong, sturdy, and not easily broken is called robust. Because AI is based on models, it’s worth examining what robust means in that context. A robust model is one that stays within a predefined level of accuracy. In statistics, which plays a significant role in the calculations behind AI, robust refers to procedures that remain reliable even when the data deviate from the model’s assumptions. Robust AI, then, maintains ethical and lawful outcomes (accurate enough to stay within that predefined range) across a wide variety of inputs.
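The idea of “staying within a predefined level of accuracy despite a wide variety of inputs” can be sketched as a simple test: measure accuracy on the original inputs and again on perturbed ones, and require both to stay above a threshold. This is a minimal illustration, not a real robustness audit; the toy model, the noise level, and the 0.9 threshold are all illustrative assumptions.

```python
import random

def accuracy(model, samples):
    """Fraction of (input, label) samples the model classifies correctly."""
    correct = sum(1 for x, label in samples if model(x) == label)
    return correct / len(samples)

def perturb(samples, noise=0.1, seed=0):
    """Add small random noise to each input to simulate input variety."""
    rng = random.Random(seed)
    return [(x + rng.uniform(-noise, noise), label) for x, label in samples]

def is_robust(model, samples, min_accuracy=0.9, noise=0.1):
    """A model counts as 'robust' here if accuracy stays above the
    threshold on both the original and the perturbed inputs."""
    return (accuracy(model, samples) >= min_accuracy
            and accuracy(model, perturb(samples, noise)) >= min_accuracy)

# Toy classifier: predict 1 if the input is at least 0.5, else 0.
model = lambda x: 1 if x >= 0.5 else 0
samples = [(0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1)]
print(is_robust(model, samples))  # True: accuracy holds under noise
```

In practice the perturbations would reflect realistic input variation (typos, sensor noise, demographic shifts) rather than uniform noise, but the structure of the check is the same.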

Building robust, ethical AI systems starts with the right building blocks.

Accountability for AI

Another interesting point is accountability. Taking responsibility essentially means two things:

  • foreseeing the consequences of your actions beforehand and being able to justify them to yourself
  • being accountable for your choices to society afterwards

This matches the ethical approach the EU wants to see in the use of AI, but it is easier said than done. The article about solving moral dilemmas contains a step-by-step guide to making decisions for yourself and for society. To put accountability into practice, it is important to do the following:

  • Create a list of criteria that your AI system must meet to be ethical and robust
  • Regularly check if your AI application still meets the criteria and identify areas for improvement
  • Continuously add or modify criteria while maintaining a critical perspective, avoiding a mere “checklist” approach
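The three practices above amount to maintaining a living set of criteria and re-checking them regularly. A minimal sketch of that idea, with all criterion names and checks as illustrative assumptions, might look like this:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Criterion:
    """One ethical/robustness criterion with a check that can be re-run."""
    name: str
    check: Callable[[], bool]  # returns True if the system still meets it

def review(criteria):
    """Re-run every criterion and report which ones need improvement."""
    return [c.name for c in criteria if not c.check()]

# Hypothetical criteria list; real checks would query logs, audits, reports.
criteria = [
    Criterion("human oversight documented", lambda: True),
    Criterion("accuracy above threshold on audit set", lambda: True),
    Criterion("bias report updated this quarter", lambda: False),
]
print(review(criteria))  # ['bias report updated this quarter']
```

The point of the structure is the third bullet: criteria are data, so they can be added or revised over time, and a failing check prompts reflection rather than a mechanical tick in a box.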

The four pillars of trustworthy AI for the EU

Finally: the EU bases trustworthy AI on four principles. All the conditions and characteristics mentioned above derive from them:

  1. Respect for human autonomy
  2. Prevention of harm
  3. Fairness
  4. Explicability

It’s worth dwelling on the principle of prevention of harm as a final point. It means that procedures are in place to prevent harm: if you have doubts, don’t rush ahead, and act only once you understand the consequences. As with the GDPR, a significant share of the responsibility for assessing risks lies with those who deploy AI. Building robust, ethical AI systems is therefore crucial to risk management, and for the same reason it is advisable to hold a moral deliberation before starting to build.


Nick Nijhuis helps organizations become digitally mature, is a business innovation lecturer, a trainer in moral leadership, and a NIMA examiner.
