What You Should Know About the CNIL’s Guidance On GenAI Deployment

Contributors

Anas Baig

Product Marketing Manager at Securiti

Syed Tatheer Kazmi

Data Privacy Analyst

CIPP/Europe

Published October 25, 2024

Generative Artificial Intelligence (GenAI) continues to pose significant challenges for organizations and businesses globally. One challenge common to nearly all of them is how organizations looking to leverage GenAI's extensive capabilities can do so ethically.

Governments and regulatory bodies worldwide have aimed to address this issue by providing various guidelines, frameworks, and directives. The Commission nationale de l'informatique et des libertés (CNIL) or the National Commission on Informatics and Liberty in France is no different. It was one of the first regulatory bodies to provide an AI Action Plan on how organizations could deploy AI systems, including Generative AI tools while respecting individuals' privacy.

This guidance issued by the CNIL clarifies how organizations hoping to deploy Generative AI within their daily operations can do so responsibly. Read on to learn more.

Generative AI Defined

Like several other AI-related regulations and administrative bodies, the CNIL has provided a definition of what it considers "Generative AI." According to the CNIL, Generative AI refers to systems capable of creating content in textual, audio, visual, musical, and computer-code formats.

If such GenAI systems are designed to perform a wide range of tasks, they can be referred to as "general-purpose AI systems," which is mostly the case with systems that integrate large language models (LLMs).

Such systems can enhance the creativity and productivity of their users by creating new content and analyzing and restructuring pre-existing content. However, owing to their probabilistic nature, these systems may produce inaccurate results that may appear reasonable.

Furthermore, developing such systems requires training using an extensive dataset, which often includes information about natural persons, their personal data, and data provided when using these systems.

Hence, it is essential for organizations planning to use these systems in their daily operations to take several precautionary measures to ensure that individuals' rights over their data are appropriately protected.

How to Deploy GenAI

The CNIL makes the following recommendations for organizations considering deploying a compliant GenAI system:

  • Have a Specific Need - An organization must always have a specific need and purpose for deploying a GenAI system;
  • Frame Uses - Organizations must maintain a clear list of authorized and prohibited uses of the GenAI system they are deploying, considering the potential risks posed by that system;
  • Identify the Limitations - The limitations of the GenAI system to be deployed must be appropriately identified so that any risks to the rights and interests of persons are adequately addressed;
  • Choose the System Wisely - When selecting a GenAI system, opt for a strong and secure deployment, such as a local, specialized system. If this is not possible, carefully assess the service provider's data practices, such as whether it may store, analyze, or reuse input data. Some providers may use input data to improve their models or for other purposes, which could pose privacy or security risks. Based on this assessment, organizations should adjust how they interact with the system, possibly limiting the types of data they share;
  • Train End Users - The organization deploying the GenAI system is responsible for training and raising awareness among its users about the system's prohibited uses and the potential risks involved in its authorized uses;
  • Implement Responsible Governance - A reliable AI governance system compliant with the GDPR's requirements must be implemented, involving all necessary stakeholders, such as the data protection officer, information systems manager, CISO, "business" managers, etc., from the outset.
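The "Frame Uses" recommendation above can be made concrete in code. The sketch below is a minimal, hypothetical illustration of a default-deny use policy for a GenAI deployment: every request must match an explicitly authorized use case, and prohibited uses are always blocked. The class and use-case names are illustrative assumptions, not part of any CNIL requirement or Securiti API.

```python
# Hypothetical sketch of the CNIL's "Frame Uses" recommendation:
# keep an explicit list of authorized and prohibited uses, and check
# each use case against it before it reaches the GenAI system.
from dataclasses import dataclass, field


@dataclass
class UsePolicy:
    authorized: set[str] = field(default_factory=set)
    prohibited: set[str] = field(default_factory=set)

    def check(self, use_case: str) -> bool:
        """Allow only explicitly authorized, non-prohibited use cases."""
        if use_case in self.prohibited:
            return False
        return use_case in self.authorized  # default deny


# Illustrative policy for a fictional deployment
policy = UsePolicy(
    authorized={"summarize_internal_docs", "draft_marketing_copy"},
    prohibited={"profile_employees", "process_health_data"},
)

print(policy.check("summarize_internal_docs"))  # True
print(policy.check("profile_employees"))        # False
print(policy.check("unlisted_use"))             # False (not authorized)
```

The default-deny design mirrors the guidance: a use case that has not been explicitly reviewed and authorized is treated the same as a prohibited one.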

For more detailed and specific questions, the CNIL provides further information on its dedicated FAQ page.

How Securiti Can Help

Securiti is the pioneer of the Data Command Center, a centralized platform that enables the safe use of data and GenAI. It provides unified data intelligence, controls, and orchestration across hybrid multi-cloud environments. Several of the world's most reputable brands and businesses rely on Securiti's Data Command Center for their data security, privacy, governance, and compliance needs.

With the Data Command Center, you'll gain access to several modules and solutions designed to ensure efficient and effective compliance with an organization's obligations.

To meet the requirements of this particular guidance, Privacy Policy Management helps you maintain complete transparency with your users regarding their rights and the risks involved in using your services. AI Security & Governance enables the discovery and cataloging of all AI models in use within the organization's infrastructure, including shadow AI, providing full visibility.

Request a demo today and learn more about how Securiti can help you comply with the French CNIL's guidance as well as other major AI-related regulations from across the globe.
