Generative Artificial Intelligence (GenAI) continues to pose significant challenges for organizations and businesses globally. A recurring question is how organizations looking to leverage GenAI's extensive capabilities can do so ethically.
Governments and regulatory bodies worldwide have aimed to address this issue by providing various guidelines, frameworks, and directives. France's Commission nationale de l'informatique et des libertés (CNIL), or National Commission on Informatics and Liberty, is no different: it was one of the first regulatory bodies to publish an AI Action Plan on how organizations can deploy AI systems, including Generative AI tools, while respecting individuals' privacy.
This guidance issued by the CNIL clarifies how organizations hoping to deploy Generative AI within their daily operations can do so responsibly. Read on to learn more.
Generative AI Defined
Like several other AI-related regulations and administrative bodies, the CNIL provides its own definition of what it considers "Generative AI." According to the CNIL, Generative AI refers to systems capable of creating content in textual, audio, visual, musical, and computer code formats.
If such GenAI systems are designed to perform a wide range of tasks, they can be referred to as "general-purpose AI systems," as is typically the case with systems that integrate large language models (LLMs).
Such systems can enhance the creativity and productivity of their users by creating new content and by analyzing and restructuring pre-existing content. However, owing to their probabilistic nature, they may produce inaccurate results that nonetheless appear plausible.
Furthermore, developing such systems requires training on extensive datasets, which often include personal data about natural persons as well as data individuals provide when using these systems.
Hence, it is essential for organizations planning to use these systems in their daily operations to take several precautionary measures to ensure that individuals' rights over their data are appropriately protected.
How to Deploy GenAI
The CNIL makes the following recommendations for organizations considering deploying a compliant GenAI system:
- Have a Specific Need - An organization must ensure it always has a specific need and purpose for deploying a GenAI system;
- Frame Uses - Organizations must have a clear list of authorized as well as prohibited uses of the GenAI system they are deploying, considering the potential risks posed by that system;
- Identify the Limitations - The limitations of the GenAI system to be deployed must be appropriately identified to ensure any risks to the interests and rights of persons are appropriately addressed;
- Choose the System Wisely - When selecting a generative AI system, opt for a robust and secure deployment, such as a local or specialized system. If this isn't possible, carefully assess the service provider's data practices, such as whether it may store, analyze, or reuse the input data. This matters because some providers might use the data to improve their models or for other purposes, which could pose privacy or security risks. Based on this assessment, organizations should adjust how they interact with the system, possibly limiting the type of data they share;
- Train End Users - The organization deploying the GenAI system is responsible for taking appropriate steps to train users and raise their awareness of the system's prohibited uses and of the potential risks involved in its authorized uses;
- Implement Responsible Governance - A reliable AI governance system compliant with the GDPR's requirements must be implemented, involving all other necessary stakeholders, such as the data protection officer, information systems manager, CISO, "business" managers, etc., from the outset.
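The "Choose the System Wisely" recommendation above suggests limiting the type of data shared with an external provider. As a minimal illustrative sketch (not part of the CNIL guidance, and far simpler than a production-grade solution), an organization could redact obvious personal data from prompts before they leave its infrastructure. The patterns and function names below are hypothetical examples:

```python
import re

# Hypothetical, minimal prompt filter: redacts obvious personal data
# (email addresses and phone-like digit runs) before a prompt is sent
# to an external GenAI provider. A real deployment would need far more
# robust detection (names, identifiers, context-aware classifiers).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d[\d\s.-]{7,}\d\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace each match of each pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact_prompt("Contact jane.doe@example.com about the claim."))
```

Regex-based redaction only catches well-structured identifiers; it is a first line of defense, not a substitute for the organizational controls the CNIL recommends.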
For more detailed or specific questions, the CNIL provides further information on its dedicated FAQ page.
How Securiti Can Help
Securiti is the pioneer of the Data Command Center, a centralized platform that enables the safe use of data and GenAI. It provides unified data intelligence, controls, and orchestration across hybrid multi-cloud environments. Several of the world's most reputable brands and businesses rely on Securiti's Data Command Center for their data security, privacy, governance, and compliance needs.
With the Data Command Center, you'll gain access to several modules and solutions designed to ensure efficient and effective compliance with an organization's obligations.
To meet the requirements of this particular guidance, Privacy Policy Management enables you to dynamically maintain complete transparency with your users regarding their rights and the risks involved in using your services. AI Security & Governance allows for the discovery and cataloging of all AI models in use within the organization's infrastructure, enabling full visibility, including of shadow AI.
Request a demo today and learn more about how Securiti can help you comply with the French CNIL's guidance as well as other major AI-related regulations from across the globe.