Article 15: Accuracy, Robustness, and Cybersecurity | EU AI Act

Contributors

Anas Baig

Product Marketing Manager at Securiti

Rohma Fatima Qayyum

Associate Data Privacy Analyst at Securiti

Semra Islam

Sr. Data Privacy Analyst

CIPM, CIPP/Europe

All high-risk AI systems must be designed and developed to achieve an appropriate level of accuracy, robustness, and cybersecurity, and to perform consistently in those respects throughout their operation and lifecycle.

The European Commission must take a proactive role in developing appropriate benchmarks and measurement methodologies for assessing the accuracy, robustness, and cybersecurity of high-risk AI systems, working closely with relevant stakeholders and organizations such as metrology and benchmarking authorities.

The levels of accuracy and the accuracy metrics for high-risk AI systems should be declared in the accompanying instructions of use.
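To illustrate, such a declaration could take the form of a simple machine-readable record bundled with the system's documentation. The sketch below is only a hypothetical example; the system name, metric fields, and values are assumptions for illustration, not terms prescribed by the Act.

```python
# Illustrative sketch only: how a provider might record the declared accuracy
# metrics that accompany a high-risk AI system's instructions of use.
# Field names and figures are hypothetical, not prescribed by the EU AI Act.
import json

declared_metrics = {
    "system": "loan-eligibility-classifier",      # hypothetical high-risk AI system
    "accuracy_metrics": {
        "overall_accuracy": 0.94,                 # measured on a held-out test set
        "false_positive_rate": 0.03,
        "false_negative_rate": 0.05,
    },
    "evaluation_conditions": "held-out test set of 50,000 records, v2024-06",
}

# Emit the declaration so it can be embedded in the instructions of use.
print(json.dumps(declared_metrics, indent=2))
```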

High-risk AI systems shall be designed to be as resilient as possible against errors, faults, and inconsistencies that may occur within the system or the environment in which it operates, in particular those resulting from interactions with natural persons or other systems.

The robustness of such systems may be achieved via technical redundancy solutions such as backups and fail-safe plans.
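As a rough illustration of such a redundancy measure, the sketch below wraps a primary model with a fail-safe fallback that returns a conservative default (for example, routing the case to human review) when the model raises an error or produces an out-of-range score. The function names and sentinel value are assumptions made for the example, not a prescribed implementation.

```python
# Minimal sketch of a technical redundancy pattern (fail-safe fallback).
# All names and values are hypothetical.
def primary_model(features: dict) -> float:
    # Stand-in for the deployed model; in practice it may raise or misbehave.
    return 0.72

def backup_decision(features: dict) -> float:
    # Conservative fail-safe: defer the case to manual review.
    return -1.0  # sentinel meaning "route to human review"

def predict_with_failsafe(features: dict) -> float:
    try:
        score = primary_model(features)
    except Exception:
        return backup_decision(features)
    # Guard against faults that produce out-of-range outputs.
    if not (0.0 <= score <= 1.0):
        return backup_decision(features)
    return score

print(predict_with_failsafe({"income": 42000}))
```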

Furthermore, high-risk AI systems that continue to learn after being placed on the market or put into service should be developed to minimize the likelihood of biased outputs influencing the input datasets for future operations (‘feedback loops’). Any such feedback loops should be appropriately addressed with effective mitigation measures.
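One possible mitigation, sketched below purely as an illustration, is to exclude records whose labels were produced by the system itself, rather than confirmed by a human, before they are folded back into the training pool. The record structure and the "label_source" field are assumptions for the example, not terminology from the Act.

```python
# Hypothetical sketch of one feedback-loop mitigation: keep only records with
# independently confirmed labels so the system's own (possibly biased) outputs
# do not feed its future training data.
records = [
    {"features": {"age": 34}, "label": 1, "label_source": "human_review"},
    {"features": {"age": 51}, "label": 0, "label_source": "model_prediction"},
    {"features": {"age": 29}, "label": 1, "label_source": "human_review"},
]

def filter_for_retraining(records):
    # Drop records labeled solely by the model's own prediction.
    return [r for r in records if r["label_source"] == "human_review"]

print(len(filter_for_retraining(records)))  # 2 of 3 records retained
```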

High-risk AI systems shall be resilient enough to withstand any attempts by unauthorized third parties to alter their use, outputs, or performance by exploiting system vulnerabilities.

Any technical solutions developed to address AI-specific vulnerabilities shall include, where appropriate, measures to prevent, detect, respond to, resolve, and control for attacks trying to manipulate the training data set (‘data poisoning’) or pre-trained components used in training (‘model poisoning’), inputs designed to cause the AI model to make a mistake (‘adversarial examples’ or ‘model evasion’), and confidentiality attacks or model flaws.
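As a hedged illustration of one such control against data poisoning, the sketch below screens incoming training samples against the statistics of a vetted dataset and quarantines outliers for review. The feature values and the z-score threshold are assumptions chosen for the example; real deployments would use defenses appropriate to the system and its threat model.

```python
# Illustrative data-poisoning screen: flag new training samples that deviate
# sharply from a trusted reference distribution. Thresholds are assumptions.
import statistics

trusted_values = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3]  # feature values from a vetted dataset
incoming = [10.0, 10.4, 55.0, 9.7]                    # new samples, possibly poisoned

mean = statistics.mean(trusted_values)
stdev = statistics.stdev(trusted_values)

def screen(samples, z_threshold=3.0):
    accepted, quarantined = [], []
    for x in samples:
        z = abs(x - mean) / stdev
        (quarantined if z > z_threshold else accepted).append(x)
    return accepted, quarantined

accepted, quarantined = screen(incoming)
print("accepted:", accepted)
print("quarantined for review:", quarantined)
```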
