I. Introduction
The EU has the AI Act, Canada has AIDA, and all indications point to the US adopting some form of federal regulation aimed at the technology.
In South America, Brazil stands head and shoulders above the rest thanks to its proactive approach to AI governance. Between 2019 and 2021, three different AI bills were introduced in the country's Congress. Though none of the three became law, Bill 2338/2023, introduced in May 2023, aims to be a comprehensive piece of legislation that shapes Brazil's regulatory approach to AI (referred to throughout as the "proposed law").
The proposed law places significant emphasis on transparency and on organizations' responsibility to mitigate biases through regular public impact assessments, ensuring a chain of accountability for all major AI models and systems in use within an organization.
Notably, in addition to other administrative penalties, infringements of the proposed law are subject to a fine of up to R$50,000,000 per infringement or up to 2% of a company's gross revenue for the preceding year, excluding taxes.
Read on to learn all you need to know about Brazil's proposed AI law:
II. Definitions of Key Terms
Here are definitions of some key terms used in the proposed law:
A. AI System
A computer system, with varying degrees of autonomy, designed to infer how to achieve a given set of goals using approaches based on machine learning and/or logic and knowledge representation, via input data from machines or humans, with the goal of producing predictions, recommendations, or decisions that can influence the virtual or real environment.
B. AI System Provider
A natural person or legal entity, public or private, that develops an AI system, directly or by commission, and aims to place it on the market or apply it in a service it provides, under its own name or trademark, whether for payment or free of charge.
C. AI System Operator
A natural person or legal entity, public or private, that employs or uses an AI system on its own behalf or for its own benefit, unless the system is used in the course of a personal, non-professional activity.
D. AI Agents
Providers and operators of AI systems.
E. Competent Authority
A body or entity of the Federal Public Administration responsible for overseeing, implementing, and enforcing compliance with the proposed law throughout the country.
F. Text and Data Mining
The process of extracting and analyzing large amounts of data, or partial or complete extracts of textual content, from which patterns and correlations are derived to generate information relevant to the development or use of AI systems.
III. Principles Behind the Proposed Law
The proposed law requires that the development, implementation, and use of AI systems in Brazil must adhere to the principle of good faith as well as the following principles:
- inclusive growth, sustainable development, and well-being;
- self-determination, freedom of decision, and freedom of choice;
- human participation in the AI cycle and effective human oversight;
- non-discrimination;
- justice, equity, and inclusion;
- transparency, explainability, intelligibility, and auditability;
- trustworthiness and robustness of AI systems and information security;
- due legal process, contestability, and adversarial proceedings;
- traceability of decisions during the life cycle of AI systems as a means of accountability and attribution of liability to a natural or legal person;
- accountability, responsibility, and full compensation of damages;
- prevention, precaution, and mitigation of systemic risks derived from intentional or unintentional uses and unintended effects of AI systems; and
- non-maleficence and proportionality between the methods employed and the determined and legitimate purposes of AI systems.
IV. Obligations for Organizations
Organizations are subject to the following obligations under the proposed law:
A. Risk Categorization
Preliminary Assessment
An organization must conduct a preliminary assessment of any AI system it develops to classify its risk level before placing the system on the market for public use. The organization must also document any additional information related to the purposes or applications of its AI systems.
If a system is classified as high-risk, the provider organization must conduct a thorough algorithmic impact assessment and take other necessary governance measures as required by the proposed law.
However, if a system is classified as low risk, the provider organization must retain appropriate documentation of its assessment for accountability purposes. The competent authority may conduct its own assessment and re-classify a system if it determines that the system has a different risk profile than the provider initially assigned.
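To make this classification workflow concrete, here is a minimal sketch in Python of how a provider might model the bill's three-tier outcome internally. All names (`RiskLevel`, `required_actions`) are hypothetical illustrations for this article, not terminology from the bill itself.

```python
from enum import Enum

class RiskLevel(Enum):
    EXCESSIVE = "excessive"  # prohibited outright
    HIGH = "high"            # triggers an algorithmic impact assessment
    LOW = "low"              # requires documented assessment only

def required_actions(risk: RiskLevel) -> list[str]:
    """Map a preliminary risk classification to the governance steps
    the proposed law would require of a provider (simplified)."""
    if risk is RiskLevel.EXCESSIVE:
        return ["do not place the system on the market"]
    if risk is RiskLevel.HIGH:
        return [
            "conduct a thorough algorithmic impact assessment",
            "adopt the additional governance measures required by the law",
        ]
    # Low risk: documentation is kept for accountability purposes, and the
    # competent authority may still re-classify the system later.
    return ["document the preliminary assessment for accountability"]

print(required_actions(RiskLevel.HIGH))
```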
Excessive Risk
The following AI practices are deemed as “excessive risk” and prohibited as a result:
- Systems that use subliminal techniques intended to induce a natural person to behave in a manner harmful to their health or safety, or contrary to the provisions of the proposed law;
- Systems that exploit vulnerabilities of specific groups of natural persons, such as those associated with their age or physical or mental disability, to induce them to behave in a manner harmful to their health or safety, or contrary to the provisions of the proposed law;
- Systems used by public authorities to evaluate, classify, or rank natural persons based on their social behavior or personality attributes through universal scoring to determine their access to public goods and services;
- Biometric identification systems used remotely and continuously in spaces accessible to the general public, except where federal statutory law specifically authorizes their use, such as for the following purposes:
- The prosecution of crimes punishable by a maximum prison sentence of more than two years;
- Search for crime victims or missing persons;
- Apprehension of a person caught in the act of committing a criminal offense.
The competent authority will provide further guidance on the regulation of AI systems deemed excessive risk.
High Risk
The following AI practices are deemed as “high risk”:
- Use in security devices related to the management and operation of critical infrastructures such as traffic control and water and electricity supply networks;
- Use in education and vocational training, including systems used to determine access to educational or vocational training institutions or to evaluate and monitor students;
- Use in recruitment, including the screening, filtering, and evaluation of candidates, and in decisions about promotions or terminations of employment relationships;
- Use in the analysis of criteria for access to, eligibility for, enjoyment, revision, reduction, or revocation of essential private and public services;
- Use in the evaluation of a natural person’s creditworthiness or credit scores;
- Use in establishing priorities or dispatching of emergency response services, including fire and medical assistance;
- Use in the management of systems used to assist judicial authorities in their fact-finding and law enforcement tasks;
- Use in autonomous vehicles, when their use can pose risks to the physical integrity of people;
- Use for medical purposes;
- Use in biometric identification systems;
- Use in criminal investigations, specifically for individual risk assessments by competent authorities for predicting the risk of a person committing offenses or re-offending;
- Use in the analytical study of crimes concerning natural persons, aimed at enabling law enforcement authorities to search large sets of complex data;
- Use in investigations by authorities to assess the credibility of evidence in the course of the investigation;
- Use in migration management and border control processes.
The competent authority will have the power to deem any AI system as high risk based on at least one of the following factors:
- The system is implemented on a large scale, taking into account the number of people affected, the geographic area covered, and the duration and frequency of use;
- The system may affect the rights and freedoms of individuals;
- The system has a high potential for economic and moral damage;
- The system affects natural persons from a specific vulnerable group;
- The possible harmful effects of the system are irreversible or hardly reversible;
- A similar system has previously caused economic or moral damage;
- The level of transparency, explainability, and auditability of the AI system is low, making it difficult to oversee or control;
- The system has a high level of identifiability of data subjects, including the processing of genetic and biometric data for the purpose of uniquely identifying a natural person;
- The system processes data where the affected party expects confidentiality, such as in the processing of sensitive or secret data.
The competent authority is responsible for periodically updating the list of high-risk AI Systems in accordance with a number of criteria set out in the proposed law.
B. Governance of AI Systems
General Provisions
AI agents are required to establish a governance framework, along with internal processes, designed to ensure the safety of the systems they govern and to guarantee the rights of individuals as identified in the proposed law. These include:
- Transparency measures regarding the use of AI systems in their interactions with natural persons;
- Transparency regarding governance measures adopted within the development and employment of the AI system by the agent;
- Appropriate data management measures for mitigating and preventing potential discriminatory biases;
- Data processing activities in compliance with Brazil’s data protection and privacy regulations;
- Adoption of privacy by design;
- Adoption of adequate parameters for dataset separation and organization for training, testing, and validation of system outcomes (see the sketch after this list);
- Adoption of adequate information security measures.
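The dataset-separation measure above mirrors standard machine-learning practice. Below is a minimal sketch, assuming scikit-learn and pandas are available; the file name, split ratios, and the `gender` stratification column are illustrative assumptions, not requirements from the bill.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("applicants.csv")  # hypothetical dataset

# Carve out a held-out test set first, then split the remainder into
# training and validation sets (roughly 70/15/15 overall). Stratifying
# on a sensitive attribute keeps group proportions comparable across
# splits, which supports the kind of bias checks the bill calls for.
train_val, test = train_test_split(
    df, test_size=0.15, stratify=df["gender"], random_state=42
)
train, val = train_test_split(
    train_val, test_size=0.15 / 0.85, stratify=train_val["gender"], random_state=42
)
print(len(train), len(val), len(test))
```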
Governance of High-Risk AI Systems
In addition to the aforementioned measures, an organization must undertake the following additional measures when it operates systems classified as “high risk”:
- Thorough documentation of how such a system works and of the decisions involved in its construction, implementation, and use;
- The use of tools that automatically record the system's operation, in order to allow evaluation of its accuracy and robustness, detection of potentially discriminatory outcomes, and verification of the risk mitigation measures adopted;
- Regular tests to determine the appropriate levels of reliability, depending on the industry and type of application of the AI system;
- Various data management procedures to eliminate discriminatory biases such as:
- Evaluation of the dataset with appropriate measures to control human cognitive biases that may affect the collection and organization of the dataset and to avoid generating biases;
- Establishing an inclusive team responsible for the conception and development of the system.
- Adoption of technical measures that enable explainability of the AI system's outcomes, along with measures to make general information about the functioning of the AI model available to operators and potentially impacted parties.
The use of such high-risk systems also requires consistent human oversight to minimize the risks they pose. The person responsible for human oversight must be able to:
- Properly understand the capabilities and limitations of the AI system and properly review its operation;
- Anticipate the possible tendency to automatically rely or over-rely on the output generated by the AI system;
- Correctly interpret the AI system's output, taking into account the characteristics of the system;
- Decide not to use the high-risk AI system or to ignore, override, or reverse its output;
- Interfere with the operation of the high-risk AI system or interrupt the system’s operation.
Additionally, when agencies and entities of the Union, the States, the Federal District, and the Municipalities hire, develop, or use any such high-risk systems, they must adopt the following measures in tandem:
- Prior public consultation and hearing on the planned use of AI systems;
- Adoption of appropriate access controls;
- Use of datasets from secure sources that are accurate, relevant, up-to-date, and representative of affected populations and tested against discriminatory biases;
- Guarantees of citizens' right to human explanation and review of decisions by AI systems that generate relevant legal effects;
- Deployment of a user-friendly interface that allows the system's use by other systems for interoperability purposes;
- Public availability of all preliminary assessments of the system via easy-to-access means.
However, if such a public body finds that it cannot eliminate or reasonably mitigate the risks identified in the algorithmic impact assessment (described below), use of the AI system must be discontinued promptly.
C. Algorithmic Impact Assessment
Organizations must carry out a comprehensive algorithmic impact assessment of any AI system classified as “high risk”. The competent authority must be notified of the organization's plan to conduct such an assessment and must receive detailed documentation of both the preliminary assessment and the completed algorithmic impact assessment.
The algorithmic impact assessment must be carried out by a professional team with the appropriate technical, scientific, and legal knowledge to conduct it.
The competent authority may regulate how the assessment is performed to ensure that an independent and impartial team of professionals carries it out. The assessment must comprise the following steps:
- Preparation;
- Risk recognition;
- Mitigation of risks found;
- Monitoring.
The assessment must be a continuous iterative process that will require periodic updates. It will be carried out to record and consider the following aspects (a sketch of one way to track them follows this list):
- All the known and foreseeable risks associated with the AI system at the time it was developed, as well as the risks that can reasonably be expected from it;
- The benefits of the developed system;
- The likelihood of the adverse consequences determined in the preliminary assessment;
- The severity of the adverse consequences determined in the preliminary assessment;
- The operating logic of the system;
- The process and results of the testing, evaluation, and mitigation measures carried out to verify possible impacts on rights;
- Training and actions to raise awareness of the risks associated with the system;
- All the mitigation measures adopted, with an indication and justification of the minor risks that remain associated with the system;
- Details on the quality control tests to be carried out;
- Details of all transparency measures adopted toward the public, especially potential users of the system.
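As an illustration of how these aspects might be tracked across iterations, here is a minimal sketch of a record structure in Python. The class and field names are hypothetical and cover only a subset of the aspects listed above.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class AlgorithmicImpactAssessment:
    """Illustrative record of one iteration of an algorithmic impact
    assessment; field names are hypothetical, not from the bill."""
    system_name: str
    assessed_on: date
    known_risks: list[str] = field(default_factory=list)
    benefits: list[str] = field(default_factory=list)
    mitigation_measures: list[str] = field(default_factory=list)
    residual_risks: list[str] = field(default_factory=list)
    next_review: Optional[date] = None  # assessments must be periodically updated

aia = AlgorithmicImpactAssessment(
    system_name="credit-scoring-model",
    assessed_on=date(2024, 1, 15),
    known_risks=["disparate error rates across age groups"],
    mitigation_measures=["rebalance training data", "human review of denials"],
    next_review=date(2024, 7, 15),
)
print(aia.system_name, "- next review:", aia.next_review)
```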
Implementing the precautionary principle, if an AI system is likely to generate impacts that are irreversible or difficult to reverse, the algorithmic impact assessment must also take incomplete or speculative evidence into consideration.
The competent authority may issue additional requirements related to how the impact assessments must be designed, including the participation of different social sectors affected, according to the risk and economic size of the organization.
If an AI agent becomes aware of any unexpected risks to the rights of natural persons after an AI system has been introduced to the market, it must communicate this to the competent authority as well as to any individuals who might be affected.
All such assessments must be made public, with appropriate modifications to protect industrial and commercial secrets, containing the following information:
- The description of the intended purpose for which the system will be used, as well as its context of use and territorial scope;
- All risk mitigation measures deployed or to be deployed.
V. Public Database of High-Risk AI Systems
The competent authority is also tasked with creating and maintaining a publicly accessible database of high-risk AI systems, which will contain (among other information) the completed risk assessments of providers and users of such systems. Such assessments will be protected under applicable intellectual property and trade secret laws.
A. Civil Liability
An organization, whether it is the provider or the operator of an AI system that causes economic, immaterial, individual, or collective damage, is obligated to compensate the affected persons. Providers and operators of AI systems are responsible for the damage caused by the system, regardless of its degree of autonomy.
When damage is caused by a high-risk or excessive-risk system, the provider or operator is strictly liable for all the damage caused, to the extent of its participation in the damage. For other systems, the organization's fault is presumed, with the burden of proof shifted in favor of the victim.
The organization will be exempt from liability only if it can prove that the damage was caused by the victim or a third party.
B. Codes of Good Practice & Governance
Organizations may draw up internal codes of good practice and governance establishing their conditions of organization, operating regime, procedures, security standards, technical standards, educational resources, risk mitigation mechanisms, and the appropriate technical and organizational security measures for managing the risks arising from the application of their AI systems.
When establishing such rules, the developers and operators of the system may implement a governance program that:
- Demonstrates their commitment to adopting internal processes and policies that ensure comprehensive compliance with standards and best practices;
- Is adapted to the structure, scale, and volume of their operations, as well as their harmful potential;
- Has the goal of establishing a relationship of trust with the affected persons;
- Is integrated into their overall governance structure and establishes and enforces internal and external oversight mechanisms;
- Has response plans for reversing possible detrimental outcomes of the AI system;
- Is constantly updated based on insights from impact assessments;
- Conducts algorithmic impact assessments, which must be publicly available and may need to be periodically repeated; and
- Aims to draw up codes of conduct and governance to support the practical implementation of the proposed law.
C. Communication of Incidents
Organizations must notify the competent authority of serious security incidents without delay, as soon as they become aware of them. Such incidents include risks posed to the life and integrity of individuals, interruptions to the functioning of critical infrastructure operations, severe damage to property or the environment, and severe violations of fundamental rights.
The competent authority will then assess the severity of the incident and communicate mitigation measures to counter its adverse effects.
The competent authority may establish a procedure for assessing the compatibility of the organization’s internal code of conduct with the provisions of the proposed law.
VI. Data Subject Rights
The proposed law grants the following rights to persons affected by AI systems:
A. Right to Information
The affected persons have the right to be informed prior to any potential interaction with an AI system, in particular through the provision of information that discloses (among other things):
- the use of AI, including a description of its role, any human involvement, and the decision(s)/recommendation(s)/prediction(s) it is used for (and their consequences);
- identity of the provider of the AI system and governance measures adopted;
- categories of personal data used; and
- measures implemented to ensure security, non-discrimination, and reliability.
B. Right to Explanation
The affected persons have the right to an explanation of the decisions, recommendations, and predictions generated by AI systems. This can include the following information:
- The system's rationale and logic, and the significance and expected consequences of the decision for the affected person;
- The degree of contribution by the AI system in the decision-making process;
- The data processed, its source, and the decision-making process itself;
- Options and mechanisms for challenging decisions made by the AI system;
- The possibility of requesting human intervention or review.
C. Right to Contestation
The affected persons have the right to challenge any decisions, recommendations, and predictions generated by AI systems, especially where they produce relevant legal effects or significantly impact the individual's interests, including through the generation of profiles and the making of inferences. Individuals must receive clear and adequate information regarding the following aspects:
- The automated character of the interaction and decision-making in processes or products that affect the person;
- The general description of the system, types of decisions, recommendations or predictions it is intended to make, and consequences of its use for the person;
- The identification of the operators of the AI system and governance measures taken in the development and employment of the system by the organization;
- The role of the AI system and the humans involved in the decision-making, prediction, or recommendation processes;
- The categories of personal data used in the context of the operation of the AI system;
- The security, non-discrimination, and trustworthiness measures taken, including accuracy, precision, and coverage.
D. Right to Human Intervention
The affected persons have the right to request human determination and participation in decisions made by AI systems, taking into account the context and the state of technological development. This includes:
- The right to correction of incomplete, inaccurate, or outdated data used by AI systems;
- The right to request the anonymization, blocking, and deletion of unnecessary, excessive data as well as data processed in non-compliance with other relevant laws and regulations;
- The right to challenge all decisions by AI systems that are:
- Based on inappropriate or excessive data for the processing;
- Based on inaccurate or statistically unreliable methods;
- Not complying with individuals’ right to privacy.
E. Right to Non-discrimination
The affected persons have the right to non-discrimination and to the correction of any direct, indirect, illegal, or abusive discriminatory biases within AI systems. This includes a prohibition on both the implementation and the use of AI systems that may cause direct, indirect, illegal, or abusive discrimination, including through:
- The use of sensitive personal data or disproportionate impacts due to personal features such as geographical origin, race, color or ethnicity, gender, sexual orientation, socioeconomic class, age, disability, religion, or political opinions;
- The establishment of disadvantages for people belonging to a specific group.
F. Right to Privacy
The affected persons have the right to privacy and to the protection of personal data, as guaranteed by the Brazilian General Data Protection Law (LGPD).
AI agents must appropriately inform all individuals of these rights and how to exercise them. Individuals can exercise these rights before all competent authorities and courts in defense of their interests.
VII. Regulatory Authority
The Executive Branch will appoint the competent authority in charge of implementing the proposed law. Once appointed, the authority will be responsible for the following:
- Ensuring the protection of fundamental rights and other rights affected by the use of AI systems;
- Promoting the drafting, updating, and implementation of the Brazilian Strategy for AI together with the bodies having correlated authority;
- Promoting & preparing studies related to best practices in the development and use of AI systems;
- Encouraging the adoption of good practices, including codes of conduct, in the development and use of AI systems;
- Promoting cooperative efforts with equivalent international authorities while also assisting in the development of other countries’ AI systems;
- Issuing guides on how other relevant regulatory bodies in the country can exercise their powers in specific sectors of economic and governmental activities subject to this regulation;
- Inspecting organizations' disclosure policies per the requirements of the proposed law;
- Investigating and applying penalties when organizations are found in non-compliance with their regulatory obligations under the proposed law;
- Requesting additional information related to data processing activities from other government bodies;
- Entering into undertakings with organizations that commit to eliminating identified irregularities;
- Preparing an annual report on its activities;
- Issuing guides on how other Brazilian regulations are impacted by the proposed law, including:
- Procedures related to the rights and obligations per the proposed law;
- Procedures and considerations when designing algorithmic impact assessments;
- Requirements related to information to be made publicly available about the AI systems in use;
- Procedures related to the certification of development and use of high-risk systems.
The competent authority will also be responsible for establishing the conditions, requirements, and differentiated communication and disclosure channels for providers and operators of AI systems that qualify as micro or small enterprises under Complementary Law No. 123 of December 14, 2006, or as startups under Complementary Law No. 182 of June 1, 2021.
The competent authority is also required to ensure consistent communication with agencies and entities of the public administration responsible for regulating specific economic sectors and sectors of governmental activity to ensure cooperation in matters related to their regulatory, supervisory, and enforcement functions.
Public bodies and entities experimenting with their AI systems in the regulatory sandbox must appropriately inform the competent authority and seek its judgment regarding their compliance with the regulatory obligations per the proposed law.
Lastly, all rules and regulations issued by the competent authority will only be passed after a period of rigorous public consultations and hearings in addition to impact assessments per Law No. 13.848 of June 25, 2019.
VIII. Penalties for Non-compliance
Organizations found in non-compliance with the provisions of the proposed law will be subject to the following administrative penalties to be applied by the competent authority:
- A notice detailing their offense;
- A monetary fine of up to R$50,000,000 (approx. $10 million) per offense or, in the case of private legal entities, up to 2% of their annual gross revenue for the preceding financial year (see the sketch after this list);
- Publicizing of the offense once its occurrence has been duly investigated and confirmed;
- Restriction from participation in the regulatory sandbox regime per the proposed law for up to five years;
- Partial or complete suspension of development, supply, and operations of their AI system;
- Prohibition from processing certain databases.
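For a back-of-the-envelope sense of the fine ceiling, here is a small sketch. How the R$50,000,000 per-offense cap and the 2%-of-revenue ceiling interact for a given entity is a matter of legal interpretation of the bill's text, so the hypothetical function below simply reports both figures.

```python
def fine_ceilings(annual_gross_revenue_brl: float) -> dict[str, float]:
    """Report both fine ceilings named in the proposed law. Their exact
    interplay (e.g., per entity type) is a question of legal
    interpretation; this only computes the two figures."""
    return {
        "per_offense_cap_brl": 50_000_000.0,
        "two_percent_of_revenue_brl": 0.02 * annual_gross_revenue_brl,
    }

# Example: a private entity with R$1 billion in annual gross revenue.
print(fine_ceilings(1_000_000_000.0))
# {'per_offense_cap_brl': 50000000.0, 'two_percent_of_revenue_brl': 20000000.0}
```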
Prior to undertaking the aforementioned steps against an offending entity, the regulatory authorities may adopt a series of preventive measures, such as coercive fines, where there are reasonable grounds to believe that the offending party might:
- Cause irreparable harm;
- Create a situation in which the outcome of the proceedings would be rendered ineffective.
The relevant authorities may move forward with administrative proceedings against an offending party based on the specific circumstances and careful consideration of the following parameters:
- The gravity and nature of the infringements and the eventual violation of rights;
- The offending party’s good faith;
- Any advantages gained or intended by the offending party;
- The economic situation of the offending party;
- The extent of the damage caused by the offending party;
- The offending party’s cooperative behavior;
- The offending party’s willingness to adopt good practices and a strict governance system;
- The proportionality between the seriousness of the offense and the penalty;
- Other administrative penalties applied for the same offense;
- Adoption of internal mechanisms and processes that minimize risk, such as impact assessments and effective implementation of a code of ethics.
IX. How Securiti Can Help
Brazil joins a burgeoning list of countries either in the process of adopting, or having already adopted, some form of AI regulation. This highlights the importance of an effective and reliable AI governance framework for organizations and offers insight into the compliance challenges that await organizations with global operations.
These regulations share a fair degree of similarity, but they also differ in various respects. For organizations expected to comply with the minutiae of each, attempting to do so manually would be an exercise in futility.
Securiti is the pioneer of the Data Command Center, a centralized platform that enables the safe use of data and GenAI. It provides unified data intelligence, controls, and orchestration across hybrid multi-cloud environments. Large global enterprises rely on Securiti's Data Command Center for data security, privacy, governance, and compliance.
With the Data Command Center, organizations gain access to numerous individual modules and solutions designed to be user-friendly while remaining effective and thorough in their functionality.
These range from data mapping, data lineage, data classification, and data catalog to DSR assessment and access intelligence, among many others. Each will be vital in assisting organizations in their pursuit of regulatory compliance.
Most importantly, given the emphasis the proposed law places on public impact assessments, it is critical to have a solution that can not only help your organization conduct such assessments on schedule but also provide real-time insights into their results, allowing instant adjustments for better compliance.
Request a demo today and learn more about how Securiti can help you comply with your obligations per Brazil’s proposed AI law, as well as other major data and AI-related regulations globally.