Securiti has initiated an AI Regulation Digest, providing a comprehensive overview of the most recent significant global developments, announcements, and changes in the field of AI regulation. This information will be regularly updated on our website, presenting a monthly roundup of key activities. Each regulatory update will include links to related resources at the bottom for your reference.
North and South America Jurisdiction
1. Delaware Establishes Artificial Intelligence Commission With House Bill
Date: 17th July, 2024
Summary: House Bill 333, signed into law by the Delaware Governor, creates the Delaware Artificial Intelligence Commission with immediate effect. The Commission's responsibilities include:
- Advising the General Assembly and the Chief Information Officer on AI-related legislative and executive actions.
- Developing statewide guidelines for AI use across government branches.
- Promoting AI adoption to enhance service delivery.
- Ensuring safe and rights-respecting AI deployment.
- Conducting an inventory of generative AI use in state agencies to identify potential risks.
The Commission must convene its first meeting by August 1, 2024, and the law will expire 10 years after enactment unless extended by the General Assembly.
This legislation aims to regulate AI use, promote innovation, and protect citizens' rights in Delaware. Read more.
EU Jurisdiction
2. EU’s AI Act Published In Official Journal
Date: 12th July, 2024
Summary: The EU AI Act has finally been published in the Official Journal. Key dates related to various chapters and articles coming into effect are as follows:
- August 1st, 2024: The AI Act enters into force.
- February 2025: Chapters I (General Provisions) and II (Prohibited AI Systems) come into effect.
- August 2025: Chapter III Section 4 (Notifying Authorities), Chapter V (General Purpose AI Models), Chapter VII (Governance), Chapter XII (Penalties), and Article 78 (Confidentiality) apply, excluding Article 101 (Fines for General Purpose AI Providers).
- August 2026: The entire AI Act applies, except for Article 6(1) and corresponding obligations (High-Risk AI Systems category).
- August 2027: Article 6(1) and corresponding obligations come into effect. Read more.
3. Hamburg Commissioner Publishes Paper Providing Guidance On GDPR & LLMs
Date: 15th July, 2024
Summary: The paper published by the Hamburg Commissioner examines the intersection of GDPR and Large Language Models (LLMs), guiding companies and authorities. Critical takeaways include:
- The mere storage of an LLM does not, by itself, constitute processing of personal data under the GDPR.
- AI systems supported by LLMs that process personal data must comply with GDPR requirements, including with respect to their output.
- All data subject rights apply to the AI system's input and output.
- Training LLMs with personal data must comply with data protection regulations and respect individuals' rights; however, unlawful processing during training does not necessarily render the model's subsequent use unlawful. Read more.
4. Open Rights Group Files Complaint Against Meta With The ICO
Date: 17th July, 2024
Summary: Open Rights Group (ORG) filed a complaint against Meta with the Information Commissioner's Office (ICO), alleging violations of the UK GDPR following Meta’s plans to use users’ personal data to train its AI models. In its complaint, ORG alleges that Meta violates Articles 5(1) and (2), 6(1), 6(4), 9(1), 12, 13, 17(1)(c), 18, 19, 21(1), and 25 of the UK GDPR. The ORG specifically alleges that Meta:
- Has no legitimate interest under Article 6(1)(f) of the UK GDPR;
- Attempts to get permission to process personal data for undefined, broad technical means without specifying the purpose of the processing under Article 5(1)(b) of the UK GDPR;
- Fails to provide the necessary concise, transparent, intelligible, and easily accessible information using clear and plain language;
- States that the processing of personal data is irreversible, making it unable to comply with the right to be forgotten once the complainants' personal data is ingested into its unspecified AI technology;
- Is unable to differentiate between data subjects where it can rely on a legal basis to process personal data and other data subjects where such a legal basis does not exist, and between personal data that falls under Article 9 of the UK GDPR and other data that does not.
ORG calls on the ICO to:
- Issue an imminent and legally binding decision under Article 58(2) of the UK GDPR to prevent the processing of the personal data of the complainants without consent;
- Fully investigate the matter under Article 58(1) of the UK GDPR;
- Prohibit the use of personal data for undefined AI technology without consent from the complainants and other data subjects. Read more.
5. European Data Protection Board Adopts Statement On DPAs Role in AI Act
Date: 17th July, 2024
Summary: The European Data Protection Board (EDPB) formally adopted a statement on the data protection authorities' (DPAs) role in the EU Artificial Intelligence Act (the EU AI Act) framework. The EDPB recommends that:
- DPAs should be designated as Market Surveillance Authorities (MSAs) of high-risk artificial intelligence (AI) systems used for law enforcement, border management, administration of justice, and democratic processes, given their experience and expertise in dealing with AI's impact on fundamental rights;
- DPAs should be designated as MSAs for other high-risk AI systems, specifically in sectors where natural persons' rights and freedoms are likely to be impacted by the processing of personal data;
- DPAs, where appointed as MSAs, should be designated as the single points of contact for the public and counterparts at national and EU levels; and
- Clear procedures should be established for cooperation between MSAs and other regulatory authorities, including DPAs, and between the EU AI Office and the DPAs/EDPB. Read more.
6. French Data Protection Authority (CNIL) Issues Guidance On Responsible GenAI Usage
Date: 18th July, 2024
Summary: The French Data Protection Authority (CNIL) clarified in its guidance that generative AI refers to any system capable of creating content, including text, computer code, images, music, audio, and videos, and that when such systems can perform a wide range of tasks, they can be described as general-purpose AI systems.
The CNIL provided further guidance on using generative AI by making the following recommendations:
- Avoid deploying generative AI without a specific goal and instead respond to already identified uses;
- Define a list of authorized and prohibited uses, taking into account the risks, such as not entering personal data into the system or entrusting it with decision-making;
- Consider the risks generative AI systems pose to the rights and interests of persons concerned;
- Choose between an off-the-shelf system or model, develop your own generative AI system, connect a generative AI system to a knowledge base, or fine-tune a pre-trained model on specific data;
- Determine the extent to which a service provider is likely to reuse the data provided, the general security of a system, its robustness, and the absence of possible biases;
- Determine which GenAI system may be suitable for a particular deployment method, such as systems that will use personal data or sensitive or strategic documentation, which may require 'on-premise' solutions, pursuant to the French Cybersecurity Agency (ANSSI) security recommendations for generative AI;
- Outline prohibited uses and risks involved, observing the quality of outputs and potential for plagiarism and bias in outputs;
- Ensure compliance with the General Data Protection Regulation (GDPR) and its recommendations, involving a data protection officer (DPO) and other stakeholders and carrying out a Data Protection Impact Assessment (DPIA) where applicable;
- Ensure compliance with the EU Artificial Intelligence Act (the EU AI Act) for models considered high-risk systems and those considered general-purpose AI systems under its provisions. Read more.
7. The European Commission Publishes Draft AI Pact
Date: 22nd July, 2024
Summary: The European Commission published its draft AI Pact on July 22, 2024. The Commission reiterated that the draft AI Pact is a voluntary commitment anticipating the requirements under the EU AI Act and implementing them before legal deadlines. The AI Pact is centered around two main pillars:
Pillar 1: Encourage the exchange of best practices and provide practical information on implementing the AI Act through organizational workshops and the creation and management of a dedicated online space for exchanging best practices.
Pillar 2: Encourage AI system providers and deployers to prepare early for compliance with the requirements and obligations set out under the AI Act, through the creation of templates and monitoring schemes, pledges to take actions related to the AI Act’s requirements, and regular reporting commitments, which the AI Office will publish for greater visibility, accountability, and credibility.
Consequently, the Draft AI Pact is split into the following key aspects:
1. Core Commitments
Core commitments that will affect all subject organizations include the following:
- Adoption of an AI governance strategy for uptake of AI;
- Mapping of all AI systems developed or used in areas considered high-risk as per the AI Act;
- Promotion of AI literacy amongst all staff and other persons dealing with the deployment of AI systems.
2. AI Systems’ Development
Organizations developing AI systems may, among other things:
- Ensure placement of processes that identify all possible risks to health, safety, and fundamental rights stemming from the use of AI systems;
- Develop policies for training, validation, and testing datasets;
- Implement logging features allowing for traceability of AI systems;
- Inform deployers about appropriate usage, limitations, and risks;
- Ensure human oversight over decisions made or recommended by AI systems;
- Design GenAI systems allowing for AI-generated content to be marked via technical solutions;
- Provide means to clearly and distinguishably label AI-generated content, such as deepfakes, and AI-generated text.
3. AI Systems’ Deployment
Organizations deploying AI systems may, among other things:
- Map all possible risks to the fundamental rights of persons and groups of individuals who may be affected by the use of relevant AI systems;
- Implement measures to ensure human oversight over all recommendations and actions taken by AI systems;
- Inform individuals with clear and meaningful explanations when a decision about them is prepared, taken, or recommended by AI systems;
- Inform workers’ representatives and affected workers when deploying AI systems in the workplace. Read more.
Asia Jurisdiction
8. Vietnam's MIC Seeks Public Comments On Draft Law On The Digital Technology Industry
Date: 9th July, 2024
Summary: On July 2, 2024, Vietnam's Ministry of Information and Communications (MIC) requested public comments on the draft Law on the Digital Technology Industry. The law covers digital technology activities, products, and services. It emphasizes data security, intellectual property rights, and responsible AI use while also prohibiting practices that endanger national interests or individual rights. Additional provisions within the law deal with data portability, cross-border data transfers, and innovation within ethical confines. Read more.
9. New PDPC Guide In Singapore Offers Support On Synthetic Data Generation
Date: 18th July, 2024
Summary: Singapore's Personal Data Protection Commission (PDPC) published the Privacy Enhancing Technology (PET): Proposed Guide on Synthetic Data Generation on July 15, 2024. The guide introduces tools such as anonymization, encryption, and federated analytics, which are meant to protect personal and sensitive data appropriately while still allowing for analysis and AI training. It further emphasizes the need to ensure any synthetic data generated closely resembles real data patterns without exposing identifiable information, supporting applications in AI training, data analysis, and testing. To that end, the guide provides a five-step approach to mitigating re-identification risks while allowing residual risk to be managed. Read more.
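The guide's core idea, producing synthetic records that mimic the statistical patterns of real data without reproducing any actual record, can be illustrated with a minimal sketch. The function and toy dataset below are purely illustrative assumptions for this article, not taken from the PDPC guide, which describes its own five-step methodology:

```python
import random
import statistics

def synthesize(rows, n, seed=0):
    """Generate n synthetic rows by sampling each numeric column
    independently from a Gaussian fitted to the real data's
    per-column mean and standard deviation. This preserves coarse
    column-level patterns but contains no original record."""
    rng = random.Random(seed)
    columns = list(zip(*rows))  # column-wise view of the data
    params = [(statistics.mean(c), statistics.stdev(c)) for c in columns]
    return [
        tuple(rng.gauss(mu, sigma) for mu, sigma in params)
        for _ in range(n)
    ]

# Hypothetical "real" dataset: (age, annual_spend)
real = [(34, 1200.0), (29, 950.0), (41, 1800.0), (37, 1500.0), (25, 700.0)]
fake = synthesize(real, n=100)
```

A production approach would model correlations between columns and then measure residual re-identification risk on the output; this independent-column sketch only conveys the general shape of the technique.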
10. National Science and Technology Council Seeks Public Comments On Draft Basic Law On Artificial Intelligence
Date: 18th July, 2024
Summary: Taiwan's National Science and Technology Council invited public comments on its draft Basic Law on Artificial Intelligence. The general provisions within the law emphasize the following:
- Sustainable development, human autonomy, privacy protection, and data governance.
- Information security, safety, transparency, explainability, fairness, and non-discrimination.
- Accountability and avoiding harmful AI applications such as bias and false information.
The law also contains measures for data protection, openness, sharing, and international collaboration in AI research. Lastly, it emphasizes the National Development Council's role in data protection and promoting ethical AI practices through design and accountability mechanisms. Read more.
Securiti's AI Regulation Digest serves as an invaluable resource for staying ahead of the latest global developments in the AI industry. Our commitment to providing timely updates ensures that you have access to crucial information and a better understanding of the evolving regulatory landscape in the field of AI.
The team has also created a dedicated page showcasing 'An Overview of Emerging Global AI Regulations’ around the world. Click here to delve deeper and learn more about the evolving landscape of global AI regulations.