The European Commission recently published the first draft of its new Artificial Intelligence (AI) Regulation, which will significantly affect both providers and business users of AI systems.
Providers, such as developers, will then have to categorise their AI systems. Systems will be classed as ‘prohibited’ – for example, those used for the social scoring of individuals – ‘high risk’ – those that might have a harmful impact on the health or safety of people – or ‘low risk’ – those subject only to transparency requirements.
This new law aims to guarantee the safety of EU citizens and prevent the use of harmful systems in society. It will oblige providers to meet an array of new requirements, such as implementing risk and quality management systems, validating the quality of the data used to train AI systems, providing clear instructions for users, and ensuring the consistency, accuracy, robustness, and cybersecurity of their AI systems.
Providers will also have to register details of the AI system on an EU database, monitor its performance, report serious incidents and breaches, and correct, withdraw, or recall non-conforming AI systems.
The AI Regulation is, however, not expected to come into force until 2024.