Published on November 1, 2022 · Author: Privacy Research Team
On 21 April 2021, the European Commission published its proposal for the Regulation laying down harmonised rules on artificial intelligence, together with a Coordinated Plan on artificial intelligence. On 19 October 2022, the Czech Presidency of the Council of the European Union offered its latest compromise text for the Proposed AI Regulation. Once the Proposed AI Regulation is adopted by the European Parliament and the member states, it will become directly enforceable across the European Union.
The Proposed AI Regulation aims to ensure that Artificial Intelligence (AI) systems placed on the European Union market respect the fundamental rights of individuals. It further aims to ensure legal certainty and facilitate the development of a single market for safe and trustworthy AI applications.
The Proposed Regulation consists of 108 pages and is complex and comprehensive. Let’s break it down into key points:
The Regulation defines AI systems as software using one or more techniques and approaches “which generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with”. These techniques and approaches, set out in Annex I of the Proposed Regulation, include machine learning approaches; logic- and knowledge-based approaches; and statistical approaches, Bayesian estimation, and optimisation methods.
The Proposed Regulation applies to the following:
The Regulation does not apply to AI systems developed or used exclusively for military purposes, nor to public bodies or international organisations in third countries that use AI under international agreements for law enforcement or judicial cooperation.
The Regulation proposes a risk-based approach with four levels of risks for AI systems:
This category includes those AI systems that pose a clear threat to the safety, livelihoods, and fundamental rights of people. The use of such AI systems is prohibited.
The placing, putting into service, or use of the following AI systems is prohibited:
The use of real-time biometric identification systems in publicly accessible spaces for law enforcement purposes is permitted only where it is strictly necessary to achieve a substantial public interest whose importance outweighs the risk. These permitted situations involve the search for potential victims of crime, including missing children; the prevention of certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localisation, identification, or prosecution of perpetrators or suspects of criminal offences, subject to certain protections and limitations.
This category includes those AI systems that create a high risk to the health and safety or fundamental rights of natural persons. Such AI systems are permitted on the European market subject to compliance with certain mandatory requirements and ex-ante conformity assessment. All high-risk AI systems have strict obligations before they can be put on the market and throughout their lifecycle. For example, the requirements of high-quality data, documentation and traceability, transparency, human oversight, accuracy, and robustness, are strictly necessary to mitigate the risks to fundamental rights and safety before any high-risk AI system is put on the market.
The European Commission lists the following as high-risk AI systems:
This category includes those AI systems with specific transparency obligations. Providers of such AI systems must ensure that natural persons are informed that they are interacting with an AI system unless it is obvious from the circumstances and context of the use. This enables natural persons to make an informed choice to continue using the AI system or step back from a given situation. For example, the users of the following AI systems have transparency obligations:
However, the transparency obligations do not apply to those AI systems that have been authorized by law to detect, prevent, investigate, and prosecute criminal offences, unless those systems are available for the public to report a criminal offence.
This category includes AI systems that represent only minimal or no risk to citizens’ rights or safety, such as AI-enabled video games or spam filters. The vast majority of AI systems fall into this category, and the Regulation allows the free use of such applications subject to existing legislation, without any additional legal obligations.
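The four-tier structure above can be summarised as a simple lookup. The sketch below is purely illustrative: the tier names paraphrase the Regulation's categories, and the example systems mapped to each tier are hypothetical illustrations chosen by the author, not an official classification.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative paraphrase of the four risk levels in the Proposed AI Regulation."""
    UNACCEPTABLE = "prohibited"
    HIGH = "permitted subject to mandatory requirements and conformity assessment"
    LIMITED = "permitted subject to specific transparency obligations"
    MINIMAL = "permitted freely, subject only to existing legislation"

# Hypothetical examples for illustration only (spam filters and video games
# are named in the article as minimal risk; the rest are assumptions):
EXAMPLES = {
    "spam filter": RiskTier.MINIMAL,
    "AI-enabled video game": RiskTier.MINIMAL,
    "customer-service chatbot": RiskTier.LIMITED,
    "CV-screening tool": RiskTier.HIGH,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.value}")
```

Any real classification would, of course, turn on the detailed criteria and annexes of the Regulation, not on a static mapping like this.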
The Regulation provides potentially significant fines for non-compliance:
The supply of incorrect or misleading information to notified bodies or national competent authorities in reply to a request is subject to a fine of up to 10,000,000 EUR or 2% of the annual global turnover of the preceding financial year, whichever is higher.
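The "whichever is higher" mechanic for the fine ceilings can be sketched in a few lines. This is a simplified illustration, assuming only the two parameters stated above (a fixed cap of EUR 10 million or 2% of annual global turnover); the function name and signature are the author's own.

```python
def max_fine_eur(annual_global_turnover_eur: float,
                 fixed_cap_eur: float = 10_000_000,
                 turnover_share: float = 0.02) -> float:
    """Return the applicable fine ceiling: the fixed cap or the
    turnover-based percentage, whichever is higher."""
    return max(fixed_cap_eur, annual_global_turnover_eur * turnover_share)

# For a company with EUR 2 billion turnover, 2% (EUR 40M) exceeds the fixed cap.
print(max_fine_eur(2_000_000_000))  # 40000000.0

# For a company with EUR 100 million turnover, 2% (EUR 2M) is below the cap,
# so the EUR 10M fixed ceiling applies.
print(max_fine_eur(100_000_000))  # 10000000.0
```

The same pattern applies to the Regulation's other fine tiers, only with different caps and percentages.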
The risk-based approach adopted by the European Commission indicates that high-risk AI systems constitute a core part of the regulatory framework. However, despite such a sound risk-based approach, certain gaps remain:
The Proposed AI Regulation will now be reviewed by the Council of the European Union and the European Parliament, following the end of the public consultation period on 6 August 2021. The European Parliament and the member states will have to adopt the Proposed AI Regulation for it to become directly enforceable across the European Union. The Regulation appears to have strong potential to safeguard the fundamental rights of individuals as well as to support technological innovation; however, it remains to be seen how well it aligns with the EU data protection framework. Nevertheless, it can be considered the starting point of an upcoming legislative process in Europe.