
An Overview of Emerging Global AI Regulations

By Anas Baig | Reviewed By Adeel Hasan
Published July 10, 2023 / Updated March 2, 2024


Introduction

Writing, art, backend engineering, marketing, legal analysis, political strategizing - AI can do it all. Over the past few months, AI has seen tremendous leaps in its operational capabilities. Many industry experts now consider this the beginning of the Fourth Industrial Revolution, with an MIT study finding that even minimal use of AI can raise a worker's productivity by as much as 14%. While this promises significant benefits, it also comes with serious downsides.

The past year has seen incidents of ethical misuse of AI skyrocket. Some incidents have caused international outrage, such as the deepfake video of Ukrainian President Volodymyr Zelenskyy surrendering. In an era where fake news and propaganda are already seen as existential threats to the social fabric, it is easy to see how such AI capabilities are the perfect recipe for disaster.

One way to avoid such misuse and abuse of AI is comprehensive regulation that protects the rights of individuals. This sentiment is shared by lawmakers globally, as evident in Stanford University's 2023 AI Index, which states that 37 different AI-related bills were passed in 2022 alone. Most of these regulations not only call for better analysis and understanding of AI and its potential risks but also for holding the developers behind AI tools accountable for the actions of their inventions.

AI Considerations in the 21st Century

AI remains a black box in many aspects. Research related to AI interpretability, trustworthiness, and operability is still in its relative infancy as far as our overall understanding of AI's limitless potential is concerned.

AI needs to be managed appropriately and curated to ensure it does not infringe on users' rights or chaotically disrupt business. For that to be the case, AI regulations need to be effective, flexible, and future-proof enough to adequately cover any tangents in AI capability that may come to the fore in the short and long term.

The need for these regulations to be flexible and future-proof becomes even more critical given that AI's computations are often not "explainable."

Additionally, there are a multitude of other problems to consider. For example, the European Union's AI Act states that all "training, validation, and testing datasets shall be relevant, representative, free of errors, and complete." While on paper that does seem an appropriate obligation to place on organizations, in reality, the scale of data required to properly train a machine learning algorithm, with the stipulation of it needing to be "free of errors and complete," sets an extremely high standard that numerous organizations simply may not find tenable.
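
To see why this standard is so demanding, consider the kind of quality checks it implies. Below is a minimal sketch, using hypothetical fields and toy data, of basic completeness and validity checks over a training dataset; driving both metrics to 100% across a real corpus of millions of records is what makes the obligation so hard to meet.

```python
# A minimal sketch, with hypothetical fields, of dataset quality checks
# in the spirit of the AI Act's "free of errors and complete" wording.
# Real training corpora are vastly larger, which is what makes a
# literal zero-defect standard so hard to satisfy.

records = [
    {"age": 34, "income": 52_000},
    {"age": None, "income": 48_000},   # incomplete: missing age
    {"age": -5, "income": 61_000},     # erroneous: impossible age
]

def is_complete(record: dict) -> bool:
    return all(value is not None for value in record.values())

def is_valid(record: dict) -> bool:
    return record["age"] is not None and 0 <= record["age"] <= 120

complete = sum(map(is_complete, records))
valid = sum(map(is_valid, records))
print(f"completeness: {complete / len(records):.0%}")  # 67%
print(f"validity:     {valid / len(records):.0%}")     # 33%
```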

An example would be Amazon, which had to scrap its AI recruiting tool entirely. Amazon wanted the tool to help it hire more female candidates, but because the available training data was heavily biased toward male applicants, the tool kept shortlisting men. Since no unbiased training sets were available and creating one would have come at tremendous financial cost, Amazon scrapped the project altogether.

Amazon could afford to do that because it's Amazon. A startup or an SME in a similar position may not enjoy such a luxury of options.

Lawmakers in the EU have already called for a global meeting of leaders to address the threats posed by "very powerful" AI to human rights and humanity itself.

Countries in Focus

Several countries have adopted a proactive approach toward AI regulation. In the absence of comprehensive legislation, governments have published frameworks, guidelines, and roadmaps that illustrate the future of possible AI regulation in these countries and help organizations manage their AI usage and tools responsibly.

Loading data

Australia

Legislation

Federal

Australia does not yet have a dedicated AI regulation. Most of its regulatory actions related to AI either come through existing laws or regular government policy papers guiding various regulatory bodies on how to approach different challenges posed by AI.

State

The New South Wales (NSW) Government published the first AI strategy, recognizing the challenges that come with the use of AI and charting a course for AI to be used safely across the government with the right safeguards in place.

For this purpose, the NSW Government published the AI Assurance Framework to assist agencies in designing, building, and using AI-enabled products and solutions. The framework is mandatory for all projects that incorporate an AI component or utilize AI-driven tools; the use of large language models and generative AI explicitly falls within its scope. The framework is intended to be used by:

  • project teams who are using AI systems in their solutions,
  • operational teams who are managing AI systems,
  • Senior Officers who are accountable for the design and use of AI systems,
  • internal assessors conducting agency self-assessments, and
  • the AI review body (TBC).

However, a project is not expected to use the framework if it meets the following criteria:

  • It uses an AI system that is a widely available commercial application.
  • The solution is not customized or used in any way other than intended.

The AI Assurance Framework became effective in March 2022. The State Government also established the NSW AI Review Committee to provide expert guidance and oversight on the use of AI within the government. As the first body of its kind in Australia, the committee plays a vital role in fostering community trust and ensuring transparency in the state's AI initiatives.

Additional Resources

In March 2022, the government issued a call for papers on the regulation of AI, inviting various stakeholders to weigh in on how the government should approach AI regulation in a manner that enables a harmonious legislative framework without jeopardizing the use of AI to its maximum potential.

In the paper, the government referred to several of its own reports and guides as examples of the kind of input it hoped to receive.

Various other regulatory bodies in Australia have also taken steps of their own to promote the responsible use of AI within their jurisdiction. For example, since the Online Safety Act of 2021 came into effect, the National eSafety Commissioner has required all organizations to appropriately inform their users of the use of automated recommendation systems.

Similarly, the Commonwealth Scientific and Industrial Research Organisation (CSIRO) launched its own independent Responsible AI Network to promote collaboration between various Australian firms and create ethically safe and viable AI technologies.

While the government has so far not revealed any of the submissions it received in response to its March 2022 call for papers, apart from those sent in by KPMG and the Law Council of Australia, there is growing consensus that most submissions contain similar recommendations, such as the creation of a dedicated AI regulatory body and a federal guideline on the responsible use of AI, both commercial and individual.

Brazil

Legislation

Federal

To date, there is no comprehensive federal legislation regulating the use of AI in Brazil.

However, in May 2023, Bill of Law 2338/2023, which provides for the use of Artificial Intelligence (AI) in Brazil, was introduced in the Brazilian Federal Senate. The bill replaces three earlier bills, Bill of Law 5.051/2019, Bill of Law 21/2020, and Bill of Law 872/2021, which had been pending before the legislature for the past four years.

Along with imposing various obligations on businesses using AI systems, the bill provides consumers the following rights:

  • Right to prior information regarding their interactions with artificial intelligence systems.
  • Right to an explanation of the decision, recommendation, or prediction made by artificial intelligence systems.
  • Right to challenge decisions or predictions of artificial intelligence systems that produce legal effects or significantly impact the interests of the affected party.
  • Right to human determination and human participation in decisions of artificial intelligence systems, taking into account the context and the state of the art of technological development.
  • Right to non-discrimination and the correction of direct, indirect, illegal, or abusive discriminatory biases.
  • Right to privacy and to the protection of personal data, in accordance with the relevant legislation.

State

To date, there is no comprehensive state-level legislation regulating the use of AI.

United States

Legislation

Federal

To date, there is no comprehensive federal legislation regulating the use of AI in the United States (US).

However, on June 20, 2023, US lawmakers introduced a bill, the National AI Commission Act, to create a blue-ribbon commission that would review the United States’ current approach to AI regulation, make recommendations on any new office or governmental structure that may be necessary, and develop a comprehensive framework for AI regulation.

State

The following are a few AI regulations in force at the state level in the US:

  • Connecticut’s Artificial Intelligence Law regulates the state's use of AI. It establishes the Office of Artificial Intelligence and the Connecticut Artificial Intelligence Advisory Board, along with a task force to (a) study artificial intelligence and (b) develop an artificial intelligence bill of rights and make recommendations for the adoption of other AI legislation.
  • Illinois' Artificial Intelligence Video Interview Act requires all employers that use AI to analyze video interviews of candidates for employment positions to appropriately inform all applicants and obtain their consent before subjecting them to this automated processing.
  • New York City’s Law on Automated Employment Decision Tools expressly prohibits employers from using an automated employment decision tool (AEDT) to make an employment decision unless the tool is audited for bias annually, the employer publishes a public summary of the audit, and the employer provides certain notices to applicants and employees who are subject to screening by the tool. Following the adoption of final implementing regulations on April 5, 2023, enforcement of the law began on July 5, 2023 (a minimal sketch of the audit's core computation follows this list).
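
The bias audits required under NYC's law center on comparing how often an AEDT selects candidates from different demographic categories. The sketch below illustrates the general idea with hypothetical candidate data: compute each category's selection rate, then divide it by the highest category's rate to get an impact ratio. It is a simplified illustration, not the full methodology prescribed by the city's implementing rules.

```python
# Illustrative sketch of the core bias-audit computation: per-category
# selection rates and impact ratios relative to the most-selected
# category. The candidate data below is entirely hypothetical.
from collections import defaultdict

candidates = [
    # (demographic category, selected by the tool?)
    ("male", True), ("male", True), ("male", False),
    ("female", True), ("female", False), ("female", False),
]

totals, selected = defaultdict(int), defaultdict(int)
for category, was_selected in candidates:
    totals[category] += 1
    selected[category] += was_selected

selection_rates = {c: selected[c] / totals[c] for c in totals}
best_rate = max(selection_rates.values())

for category, rate in sorted(selection_rates.items()):
    # An impact ratio well below 1.0 signals possible disparate impact.
    print(f"{category}: selection rate {rate:.2f}, impact ratio {rate / best_rate:.2f}")
```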

Guidances

In 2020, the White House issued the Guidance for Regulation of Artificial Intelligence Applications, the purpose of which was to establish an appropriate framework for all relevant federal agencies that may have to regulate emerging AI technologies and address the ethical and legal issues that arise in tandem.

The aforementioned Guidance has since helped various US agencies formulate guidelines, recommendations, and plans of their own.

In October 2022, the White House, at President Biden's direction, issued a Blueprint for an AI Bill of Rights that laid down critical protections all US citizens must have as AI continues to expand in capabilities and functionalities. These include:

  • Data privacy: A consumer should be protected from abusive data practices via built-in protections, and the consumer should have agency over how data about the consumer is used.
  • Notice & explanation: A consumer should know that an automated system is being used and understand how and why it contributes to the outcomes that impact the consumer.
  • Algorithmic discrimination protection: A consumer should not face discrimination by algorithms, and systems should be used and designed in an equitable way.
  • Safe & effective systems: A consumer should be protected from unsafe and ineffective systems.
  • Human alternatives, consideration, and fallback options: A consumer should be able to opt-out, where appropriate, and have access to a person who can quickly consider and remedy problems the consumer encounters.

In January 2023, the National Institute of Standards & Technology issued its AI Risk Management Framework (AI RMF), which aims to offer organizations designing, developing, deploying, or using AI systems a resource for managing the many risks of AI and promoting trustworthy and responsible development and use of AI systems. The AI RMF is intended to be voluntary, rights-preserving, non-sector-specific, and use-case agnostic.

Most recently, in May 2023, the U.S. Congressional Research Service published its Generative Artificial Intelligence and Data Privacy: A Primer focusing on privacy issues and policy considerations for the U.S. Congress. The report sheds light on the collection and use of data by AI developers and the role data privacy legislation can play in regulating such use. The report proposes the following three requirements/mechanisms that may be considered in privacy regulations to govern the use of data by AI developers:

  • Notice and disclosure requirements: Companies developing or deploying AI may be required to obtain consent from individuals before collecting or using their data, or to notify them that their data will be collected and used for certain purposes.
  • Opt-out requirements: Companies developing or deploying AI may be required to provide data subjects an option to opt out of data collection.
  • Deletion and minimization requirements: Companies developing or deploying AI may be required to provide mechanisms for data subjects to delete their data from existing datasets (a minimal sketch of honoring such requests appears after this list).
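
To make the opt-out and deletion mechanisms concrete, here is a minimal sketch of how a hypothetical in-house pipeline might exclude opted-out subjects and honor deletion requests before assembling a training set. The record layout and function names are illustrative assumptions, not a prescribed API.

```python
# Hypothetical sketch: drop records of subjects who opted out of
# collection or requested deletion before building a training dataset.
from dataclasses import dataclass

@dataclass
class Record:
    subject_id: str
    text: str

opted_out = {"user-42"}          # subjects who opted out of collection
deletion_requests = {"user-7"}   # subjects who asked for erasure

raw_records = [
    Record("user-7", "..."),
    Record("user-42", "..."),
    Record("user-99", "..."),
]

def build_training_set(records: list[Record]) -> list[Record]:
    """Drop records belonging to excluded subjects."""
    excluded = opted_out | deletion_requests
    return [r for r in records if r.subject_id not in excluded]

training_set = build_training_set(raw_records)
print([r.subject_id for r in training_set])  # ['user-99']
```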

China

Legislation

National

China does not have a comprehensive, dedicated AI regulation in place at the national level. However, the Administration of Deep Synthesis of Internet-based Information Services contains provisions that strictly punish the misuse of deep synthesis technologies, such as deepfakes and other forms of AI-generated media.

The Regulations also require all AI-generated content to be appropriately labeled as such.
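
In practice, such a labeling requirement amounts to attaching an explicit marker to anything a generative system produces. The sketch below shows one hypothetical way to do this; the metadata keys and label text are assumptions for illustration, not wording mandated by the Regulations.

```python
# Hypothetical sketch of marking generated content as AI-generated,
# in the spirit of the Regulations' labeling requirement.
import json

def label_generated(content: str) -> dict:
    """Wrap generated content with an explicit AI-generated marker."""
    return {
        "content": content,
        "ai_generated": True,  # machine-readable flag
        "visible_label": "This content was generated by AI.",
    }

print(json.dumps(label_generated("A synthetic news summary..."), indent=2))
```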

The Regulations offer detailed guidance for the application of deep synthesis technology in providing Internet information services within China. They specify the responsibilities of national and local departments, highlighting the importance of information security, robust management systems, user authentication, content oversight, and effective measures against spreading rumors.

Furthermore, the regulations address the management of deep synthesis data and technology, emphasizing data security, regular evaluation of algorithms, and clear labeling of generated content. Adhering to these regulations is essential to prevent misuse, maintain transparency, and ensure responsible use of deep synthesis technology.

Moreover, the national network information department is responsible for coordinating the governance, supervision, and management of deep synthesis services nationwide.

These Regulations also come with Frequently Asked Questions (FAQs). The FAQs clarify that deep synthesis service providers have responsibilities such as establishing management systems for user registration, algorithm review, data security, and personal information protection.

State/Provincial

Regulation at the provincial level has been similarly proactive, with the Shanghai Regulations on Promoting the Development of the AI Industry and Shenzhen Special Economic Zone Artificial Intelligence Industry Promotion Regulations placing distinct obligations on subject organizations.

Shanghai Regulations

Shanghai Regulations apply to activities such as AI Science and Technology (S&T) innovation, industrial development, application empowerment, and industrial governance within the administrative region of Shanghai. These Regulations apply to all organizations in Shanghai involved in the AI industry. The regulatory authority is the municipal economic and information department, which is responsible for planning, implementing, coordinating, and promoting the development of the AI industry.

Shanghai Regulations are formulated in accordance with relevant laws and administrative regulations and based on the actual situation of the Shanghai area in order to promote the high-quality development of the AI industry. Additionally, these Regulations aim to strengthen the functions of new-generation AI S&T innovation sources, promote the deep integration of AI with the economy, everyday life, urban governance, and other fields, and create a world-class AI industrial cluster.

One of the major aims of the Shanghai AI Regulations is to facilitate the responsible and sustainable development of AI technology. The Regulations introduce grading management and "sandbox" supervision, which give companies opportunities to explore and test their technologies in a regulated environment. This approach encourages innovation while ensuring adherence to guidelines and standards.

Shenzhen Regulations

Shenzhen AI Regulations have been formulated to promote the high-quality development of the AI industry in the Shenzhen Special Economic Zone, encourage AI integration in the economy and society, and ensure orderly and standardized industry growth, in accordance with relevant laws and the actual situation of the Shenzhen area. As per these Regulations, the local government will establish a working mechanism to coordinate and promote the development of the artificial intelligence industry in the city.

This includes ensuring the industry's security, fostering its healthy and orderly growth, and harnessing the potential of AI for sustainable development in the economy, society, and ecology.

The regulatory authority under the Shenzhen AI Regulations is the municipal industrial and information technology department, which serves as the competent authority responsible for implementing, coordinating, and supervising the industry's development within the city's jurisdiction.

Shenzhen AI Regulations categorize AI activities and applications into three risk levels. High-risk AI applications require pre-assessment and risk early warning, while medium- and low-risk applications need pre-disclosure and post-tracking regulation. The Shenzhen area government will develop separate measures for classifying and supervising AI applications.

Additionally, AI services and products based in Shenzhen that are deemed to pose "low risk" can undergo testing and trials, even in the absence of local and national norms. However, adherence to international standards is a prerequisite for such testing and trials.

Guidances & Additional Resources

China has arguably been the most proactive country regarding regulating AI technologies and engaging various stakeholders to ensure the best ethical standards are adopted.

In 2017, the State Council of the People's Republic of China published A Next Generation Artificial Intelligence Development Plan. The guide contained a detailed roadmap on how various state and private institutions can help in the development, deployment, and oversight of AI technologies in a responsible manner.

Then, in 2021, the National Special Committee of New Generation Artificial Intelligence, a body established by the aforementioned guide, issued a Code of Ethics for New-Generation Artificial Intelligence to ensure any future development of AI technologies is in line with appropriate ethics and regulatory requirements. It also established six critical ethical standards that must be considered in developing such AI technologies. These include:

  • Improving human well-being,
  • Promoting fairness and justice,
  • Protecting privacy and security,
  • Ensuring controllability and credibility,
  • Strengthening responsibility,
  • Improving ethical literacy.

The following year, in March 2022, the Internet Information Service Algorithmic Recommendation Management Provisions came into effect. The Provisions require all organizations that develop, promote, or facilitate AI-based personalized recommendations on mobile devices to allow users to delete any tags about their personal characteristics that the internal recommendation model may have developed from their browsing patterns.

The Provisions also require organizations to let users disable such recommendations on their devices.

In November 2022, the Ministry of Public Security, the Cyberspace Administration of China (CAC), and the Ministry of Industry and Information Technology released a new set of regulations.

Most recently, the CAC released a set of draft measures for managing generative AI services. These include requiring all organizations offering such services to submit to an independent security assessment before their tools can be commercially deployed.

United Kingdom

Legislation

In March 2023, the Department for Science, Innovation and Technology announced the introduction of the Data Protection and Digital Information (No. 2) Bill (‘the Bill’). The Bill, amongst other objectives, aims to address the risks associated with AI-powered automated decision-making and determine the data protection controls required for such processes.

The Bill is expected to provide clarity on how the right to not be subjected to automated decision-making, as granted under Article 22 of the UK GDPR, can be invoked and exercised.

Guidances

The UK Information Commissioner's Office (ICO), the body primarily responsible for overseeing all data privacy-related affairs in the UK, has released guidance on how organizations can responsibly explain their use of AI to both their employees and customers, titled Guidance on AI and Data Protection, along with an AI and Data Protection Risk Toolkit. The ICO has also recently warned organizations against using emotional analysis technologies irresponsibly.

The Guidance provides a roadmap to data protection compliance for developers and users of generative AI. The Risk Toolkit enables organizations to identify and mitigate data protection risks and contains eight significant questions that organizations developing or using generative AI should consider.

Moreover, other public entities in the UK have also issued guidance in relation to the use of artificial intelligence.

Additional Resources

The UK government issued a white paper in March 2023 titled "A pro-innovation approach to AI regulation." Within the whitepaper, the UK government highlighted the role of its existing legal framework in regulating the use of AI and underscored its “reputation for high-quality regulators and [UK’s] robust approach to the rule of law, supported by [its] technology-neutral legislation and regulations. UK laws, regulators, and courts already address some of the emerging risks AI technologies pose.”

The white paper further stated that “this strong legal foundation encourages investment in new technologies, enabling AI innovation to thrive and high-quality jobs to flourish."

Additionally, the whitepaper outlined five essential principles that all regulatory bodies must consider when evaluating the use of AI within their scope. These include:

  • Safety, security, and robustness;
  • Transparency and explainability;
  • Fairness;
  • Accountability and governance;
  • Contestability and redress.

In the same 2023 whitepaper, the UK government also laid out its plan for curating both the national development and the regulation of AI technologies. Partly inspired by its 2021 10-Year National AI Strategy, the UK government categorically ruled out establishing a new regulatory body or commission to oversee AI-related regulation.

Instead, existing regulatory bodies such as the Health & Safety Executive, the Equality & Human Rights Commission, and the Competition & Markets Authority will expand their powers and jurisdictions to ensure effective oversight of AI-related technologies within their sectors.

In March 2023, the UK government's Department for Education released another whitepaper, titled "Generative Artificial Intelligence in Education," in response to the alarming growth in the use of ChatGPT by students nationwide.

Singapore

Legislation

Singapore is one of the few countries in the world with a dedicated government body tasked with curating the nation's digitalization journey and nurturing a vibrant digital economy fuelled by technological innovation. The Advisory Council on the Ethical Use of AI and Data was established in 2018 to help Singapore identify and address the ethical questions and dilemmas that may arise. The Council's primary responsibility is to advise the government on ethical, policy, and governance issues related to the use of AI technologies.

On the Council's recommendation, the country's first National Artificial Intelligence Strategy was published in 2019. It aimed to identify strategic areas where resources needed to be deployed most urgently, while also addressing the emerging risk of AI expanding beyond control.

Additionally, various existing regulations have been amended to bring AI systems and technologies within their scope.

Guidances

The Infocomm Media Development Authority (IMDA) has identified four distinct "pillar" technologies to drive Singapore's digitalization journey. These include:

  • Cybersecurity;
  • Immersive Media;
  • The Internet of Things;
  • Artificial Intelligence.

Each pillar has its own dedicated development program, with the AI pillar driven by AI Singapore, which was launched to build and grow Singapore's AI ecosystem, including research institutions, startups, and the broader tech industry.

Two distinct AI programs, the National AI Program in Government and the National AI Program in Finance, were established to guide various regulatory agencies via guidance and policy papers. Additionally, other government bodies have issued guides related to the use of AI within their own sectors.

Japan

Legislation

To date, Japan does not have dedicated AI legislation. However, the Japanese government has been one of the more proactive ones in publishing frequent guidelines for better and more coherent use of AI in professional and social settings.

The Japanese government has amended several of its existing laws to accommodate AI systems' rapidly developing capabilities and functionalities.

Guidances

In 2019, the Japanese government published the Social Principles of Human-Centric AI, which laid down distinct principles that would help individuals and enterprises implement AI systems within society. These principles include:

  • Fair competition;
  • Accountability;
  • Innovation;
  • Principle of Literacy;
  • Human-centric principle;
  • Transparency;
  • Privacy Protection.

Since then, the Social Principles of Human-Centric AI have led to further guidance strategically focused on AI's applications across government, education, defense, and corporate governance.

Canada

Legislation

Federal

To date, there is no comprehensive federal legislation regulating the use of AI in Canada.

However, in June 2022, the Government of Canada tabled the landmark Artificial Intelligence and Data Act (AIDA) as part of the omnibus Bill C-27, Digital Charter Implementation Act 2022. The AIDA aims to set out new measures to regulate international and inter-provincial trade and commerce in AI systems and establish common requirements for the design, development, and use of AI systems.

Once enacted, the law would also prohibit specific practices with data and artificial intelligence systems that may result in serious harm to individuals or their interests.

In March 2023, the Government of Canada issued the AIDA Companion document, which highlights Canada’s approach towards the regulation of AI and how AIDA will contribute to that approach once enacted. The Companion document also identified a number of existing frameworks for consumer protection, human rights, and criminal law that already apply to the use of AI, including the following:

  • The Canada Consumer Product Safety Act;
  • The Food and Drugs Act;
  • The Motor Vehicle Safety Act;
  • The Bank Act;
  • The Canadian Human Rights Act and provincial human rights laws; and
  • The Criminal Code.

As per the consultation timeline provided in the Companion document, AIDA would come into force no sooner than 2025.

Provincial

To date, no province in Canada has enacted comprehensive legislation regulating the use of AI. However, provincial human rights laws apply to the use of AI and afford consumers some protection.

Guidances

In November 2020, the Office of the Privacy Commissioner of Canada (OPC) issued A Regulatory Framework for AI: Recommendations for PIPEDA Reform, containing the OPC’s final recommendations after its public consultation on proposals for ensuring the appropriate regulation of AI in the Personal Information Protection and Electronic Documents Act (PIPEDA). The recommendations included, among others, the recognition of privacy as a human right, specific provisions for automated decision-making, and demonstrable accountability from the business community.

In May 2021, the Government of Ontario published its report on Consultation: Ontario’s Trustworthy Artificial Intelligence (AI) Framework. The report provided an overview of the potential actions the government could take to ensure the responsible and safe use of AI, along with feedback from consumers on those actions.

In April 2023, the Government of Canada issued a report on the Responsible use of artificial intelligence (AI), which describes how the government makes sure that the use of AI by its departments and agencies is responsible and accountable. Among other things, the document outlines the following actions that need to be taken and monitored so governments may use AI responsibly:

  • Understand and measure the impact of using AI by developing and sharing tools and approaches.
  • Be transparent about how and when we are using AI, starting with a clear user need and public benefit.
  • Provide meaningful explanations about AI decision-making, while also offering opportunities to review results and challenge these decisions.
  • Be as open as we can by sharing source code, training data, and other relevant information, all while protecting personal information, system integration, and national security and defense.
  • Provide sufficient training so that government employees developing and using AI solutions have the responsible design, function, and implementation skills needed to make AI-based public services better.

One practical effect of this approach can be seen in the launch of the Directive on Automated Decision-Making, a policy directive by the federal Government of Canada on how to responsibly incorporate AI decision-making within the public sphere.

Israel

Legislation

Currently, there is no major comprehensive AI-related legislation in effect in Israel.

However, considering how Israel has carved out a remarkable reputation for itself as a hub of AI innovation, a draft policy has been adopted that is intended to act as the basis for future regulatory considerations and ethical framework designs related to AI.

There have been discussions within the legislative circles of the country on what kind of a regulatory setup would be ideal for Israel to ensure it can guarantee the protection of its citizens' digital rights without stifling innovation and creativity within the sector.

Rather than a linear regulatory approach, Israel will likely rely on amendments to its existing regulations and the development of policy guidance that will allow for both effective self-regulation and standardization.

The government of Israel has released the following resources to act as official policy guidances related to AI. Future such guidances will likely follow a similar pattern:

  • Israeli AI Regulation and Policy White Paper: A First Glance
  • Harnessing Innovation: Israeli Perspectives on AI Ethics and Governance

Additional Resources

As stated earlier, in the absence of a dedicated AI regulation in Israel, various existing laws and regulations provide the necessary guidance on how users' data and information should be used concerning their digital rights.

The Copyright Act of Israel protects intellectual property rights, including copyright, in creative works such as literary, artistic, and musical works. Organizations and individuals have turned to this law in matters related to AI-generated content and copyright ownership.

Further, Israel's Basic Law: Human Dignity and Liberty may govern consumer rights-related matters in relation to the overall development of AI governance frameworks in Israel. The law requires that the universal and constitutional rights of Israeli citizens be respected, and any practice or policy involving AI that may conflict with these rights can be challenged under the Basic Law.

United Arab Emirates

Legislation

The UAE currently does not have a dedicated AI regulation in effect.

However, it is one of the few nations at the forefront of both AI adoption and policies to promote a collaborative relationship between AI capabilities and responsible usage.

The UAE was the first country to establish an AI ministry to ensure that anybody responsible for overseeing the burgeoning sector had the appropriate resources and knowledge to make informed decisions.

Under the Ministry, the Council for AI and Blockchain, a dedicated government body, was also established to oversee all major policy considerations related to the wider use of AI tools and mechanisms within government infrastructures.

The body has taken a proactive, advisory approach, issuing regular toolkits and manifestos to help public and private bodies make responsible decisions related to the use of AI, especially where the data of UAE residents is involved.

Additional Resources

The following are the main guides, tools, and incentives offered by the UAE government to govern the development and deployment of AI capabilities and systems within the country:

  • The National Artificial Intelligence Strategy 2031: The document aims to create a single homogenous framework for the general adoption of AI across various economic sectors. The program contains extensive details related to policies, initiatives, and investments being undertaken by the government to ensure AI is leveraged to its maximum potential within government services such as healthcare, education, and transportation;
  • AI Ethics Principles and Guidelines: A set of guidelines and principles that act as the primary standards to be followed by organizations when developing and deploying AI technologies. In essence, the guidelines aim to ensure that all developed tools and systems follow appropriate considerations for fairness, transparency, accountability, privacy, and security;
  • AI Coding Licence: The AI Coding Licence is a special license for coders who wish to develop AI tools and code for UAE-based organizations. Launched by the Dubai International Financial Centre (DIFC), in coordination with the UAE Artificial Intelligence Office, it aims to make the UAE a regional and global hub of AI innovation;
  • AI Systems Ethics Self-Assessment Tool: Based on the aforementioned AI Ethics Principles and Guidelines, the UAE government's self-assessment tool gives organizations a thorough assessment of whether their services or systems follow the official AI guidelines.

South Korea

Legislation

South Korea currently has no dedicated AI regulation.

However, the nation is in the process of enacting comprehensive legislation, titled the "AI Act," that would introduce a degree of efficiency into the process of developing AI technology.

Under the Act, private firms and organizations would be able to develop and deploy new AI technologies and systems without rigorous case-by-case government oversight, relying instead on a strict set of standards that lets them design each new AI system in a regulatory-compliant manner from the outset.

The draft law will also contain detailed provisions on the copyright and intellectual property rights users may have over AI-generated content.

Additional Resources

Over the years, the South Korean government has issued several tools, strategies, and guidelines to provide private firms with a roadmap for where best to focus their resources on maximizing AI's potential. These resources include:

  • Korean New Deal: The Korean New Deal was released in the immediate aftermath of COVID-19 as a means to boost the local economy and provide a vision to accelerate growth. One of the key policy areas the new strategy covered was sustainable AI research to create sustainable career opportunities for future generations;
  • AI Innovation Hub: The Ministry of Science and ICT designed this initiative specifically to provide AI technology development infrastructure, equipment, software, and data for SMEs. As a result, local firms can leverage high-performance computing power without having to compromise on the robustness, security, or overall safety of the designed system;
  • Data & AI-Driven Economy Promotion Plan: Devised and implemented in 2019, the five-year Data & AI-Driven Economy Promotion Plan aimed to prepare the local South Korean market for the data value chain by fostering a world-class AI innovation ecosystem;
  • AI R&D Strategy: The AI R&D Strategy aims to create an innovative and proactive AI ecosystem via a comprehensive analysis of the current state of AI technology, human resources, and infrastructure. Gained insights can be leveraged to ensure resources can be distributed and devoted towards projects that promise the most effective results.

Saudi Arabia

Legislation

Saudi Arabia does not currently have a comprehensive AI regulation.

However, similar to several other countries taking the lead within AI, the country plans to take a more laissez-faire approach towards regulating the fledgling industry as a means to promote international investment and collaboration.

The National Strategy for Data & AI highlights the nation's ambition to transition from its traditional economic sectors towards emerging technologies, namely data and AI. With projects like NEOM well-placed to complement the development of AI technologies, Saudi Arabia's flexible regulatory framework for AI, along with its several incentive schemes, is likely to prove a significant draw for both AI companies and investors.

Secondly, Saudi Vision 2030, the Kingdom's outline for a future vibrant society, economy, and nation, aims to leverage AI towards the multifaceted needs of its citizens. The roadmap elaborates on how AI capabilities can be deployed across multiple sectors, from safer and more effective infrastructure development to facility management, environmental monitoring, traffic control management, and cybersecurity.

Additional Resources

In addition to the National Strategy for Data & AI and the overall Vision 2030, the following resources are a great help in understanding Saudi Arabia's regulatory attitude towards AI:

  • Open Data Policy: Saudi Arabia's Open Data Policy aims to create a strict framework for the use of data, stating that any data collected for a commercial purpose must not be used for political purposes, to support illegal or criminal activity, or in racist or discriminatory expressions. This applies to any datasets being used to train AI systems and mechanisms;
  • Personal Data Protection Law: The PDPL is Saudi Arabia's primary data privacy regulation, governing how any collected personal data is to be used. Certain provisions within this regulation require users' explicit permission and consent before their data can be used in AI training.

New Zealand

Legislation

New Zealand does not have a comprehensive AI regulation in place.

Notably, there are no immediate plans within the country's legislative bodies to draft such a regulation either. Instead, government agencies and private organizations can draw insights and guidance from the Algorithm Charter.

The Algorithm Charter is a tool designed to act as a risk matrix, allowing for real-time assessment of the relevant risks associated with an AI system or tool. Adopting such a matrix gives any organization developing AI tools an elementary set of considerations to work from, while leaving ample room for consistent innovation and creativity within the field without regulatory red tape.

Additional Resources

Some other critical resources and considerations to take into account when looking at AI developments within New Zealand include the following:

  • Māori Data Sovereignty Principles: Māori Data Sovereignty refers to the inherent rights and interests that Māori have concerning the collection, ownership, and application of Māori data. Such rights and interests would also extend to any AI models hoping to leverage datasets that may lead to the development of principles, structures, accountability mechanisms, legal instruments, and policies that affect the Māori people;
  • AI Cornerstones: Released by the national government, the AI Cornerstones will likely form the foundational basis for the overall national AI Strategy in the long run. The document aims to build a thriving, human-centric AI ecosystem in New Zealand on a solid foundation of trust, equity, and accessibility, and provides a roadmap of key priority areas, actions, and timelines for creating a national strategy.

The European Union

As with data privacy regulations in the form of the General Data Protection Regulation, the European Union (EU) looks increasingly likely to provide the rest of the world with an appropriate blueprint on how to proceed with AI regulation.

Legislation

The AI Act

The proposed AI Act provides a standardized definition of what constitutes an AI system and contains provisions that protect the rights of individuals in relation to the use of AI systems. One of the key highlights of the Act is that it classifies AI systems into four distinct categories based on the level of risk they pose:

  • Unacceptable-risk AI systems: AI systems that pose a clear threat to people's safety, livelihoods, and fundamental rights; the use of such AI systems is prohibited.
  • High-risk AI systems: AI systems that create a high risk to the health and safety or fundamental rights of natural persons; the use of such AI systems is permitted subject to compliance with certain requirements, including an ex-ante conformity assessment.
  • Limited-risk AI systems: AI systems subject to specific transparency obligations.
  • Minimal-risk AI systems: AI systems posing minimal or no risk to citizens' rights or safety, such as AI-enabled video games or spam filters.

Depending on their classification, different legal provisions, obligations, and penalties will apply to AI technologies. For example, an automated service or technology that alters human behavior in ways that lead to potential or actual physical or psychological harm falls into the prohibited category and carries fines of up to €30,000,000 per offense or 6% of a company’s annual turnover for the preceding financial year, whichever is higher.
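
As a quick illustration of how the "whichever is higher" penalty cap works, here is a minimal sketch using the figures quoted above; the turnover values are hypothetical.

```python
# Sketch of the AI Act's penalty cap: EUR 30,000,000 per offense or 6%
# of the preceding year's annual turnover, whichever is higher.
# Turnover figures below are hypothetical.

FIXED_CAP_EUR = 30_000_000
TURNOVER_RATE = 0.06

def max_fine(annual_turnover_eur: float) -> float:
    """Upper bound of the fine for a single offense."""
    return max(FIXED_CAP_EUR, TURNOVER_RATE * annual_turnover_eur)

print(max_fine(100_000_000))    # 30000000   (fixed cap dominates)
print(max_fine(1_000_000_000))  # 60000000.0 (6% of turnover dominates)
```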

The Act elaborates at great length on how different technologies are categorized. An AI system is deemed to pose an Unacceptable Risk if it clearly endangers people's safety, livelihoods, or fundamental rights; such systems are completely prohibited. Since the mechanism that will be used to categorize systems as either high risk or low risk is still being debated, the criteria above are likely to be adjusted in the future.

The AI Act is expected to be adopted by the end of 2023 after due consideration, discussion, and necessary adjustments in light of dynamic AI developments.

Additional Resources

In June 2023, the Confederation of European Data Protection Organisations (CEDPO) published an AI and Personal Data guidance for Data Protection Officers. In the guidance, the CEDPO answered some fundamental questions that arise in relation to the intersection of the data protection legislative framework and the use of artificial intelligence and machine learning.

The guidance delves into matters such as the need for the AI Act, whether the GDPR regulates artificial intelligence and machine learning and which core data protection principles apply thereto, and the role of DPOs in the ever-evolving digital and technological landscape. The European Commission has also released guidelines on the Ethical Use of Artificial Intelligence in educational settings.

Regulatory Actions

EU member countries have been in the headlines for their regulatory actions against emerging AI technologies. Italy became the first European country to temporarily ban the use of ChatGPT after its data protection authority, Garante, raised serious suspicions about ChatGPT's collection, use, and maintenance of users' personal data. The ban led other regulatory bodies in EU countries, such as France and Spain, to review the use of the famous chatbot in their own jurisdictions.

The Italian DPA also issued a provisional limitation on further processing of data by Replika, an AI-based chatbot with a written and vocal interface that generates a "virtual friend," highlighting violations of the GDPR.

Recently, the French DPA imposed a €20 million fine on Clearview AI for processing biometric data without an appropriate legal basis and for failing to honor data subjects’ rights, including requests to erase their data. The Austrian DPA has also ruled that Clearview AI cannot process biometric data and must delete complainants' existing personal data.

The Finnish DPA has warned healthcare providers that automated decisions for detecting patients’ healthcare needs can fail to meet data protection requirements.

Finally, in April 2023, the European Data Protection Board set up a taskforce dedicated to cooperation and the exchange of information on possible enforcement actions by data protection agencies across the EU. The EU Advocate General has also issued an opinion on the lawfulness of processing and automated decision-making under the GDPR, noting that AI systems require an appropriate legal basis for data processing to ensure compliance with the requirements of the GDPR.

Key Data Protection Obligations In Relation To The Use of AI

Let’s look into some of the key data protection obligations and best practices in relation to the use of AI that have emerged from upcoming AI regulations around the world. These obligations and best practices can give organizations deploying AI systems a starting point for compliance with data protection principles.

  • An appropriate legal basis, aligned with applicable privacy laws, must be established for the processing of personal data by artificial intelligence systems. Where consent is required or used as the legal basis, it must be freely given, informed, specific, and unambiguous, as most privacy laws require, and it must be documented.
  • User transparency must be ensured. Users must be informed if they are interacting with an AI system (unless it is clearly evident from the context and circumstances of use) and if their personal data will be used by the AI system.
  • If an AI system uses a user's personal data for any decision-making, the user must be informed of the logic behind that decision-making. The user must also be able to obtain human intervention, opt out of or object to data processing for automated decision-making, and contest the decision.
  • Privacy risk assessments must be conducted before the implementation of AI systems.
  • Data security measures must be adopted depending on the risks to individuals caused by the AI system.
  • Data protection principles of data minimization and purpose limitation must be ensured.
  • Data accuracy must be maintained.
  • Certain AI systems must not be allowed to process the personal data of data subjects and produce results, due to the high risk they pose to individuals and to privacy. This includes AI systems that exploit specific groups of persons (e.g., children or persons with mental disabilities) and AI systems that pose a high risk to the health and safety or fundamental rights of natural persons.
  • Real-time biometric identification systems and other sensitive personal data must be used with caution and in line with applicable data protection requirements. For generative AI, it must be ensured that systems do not return personal data or sensitive personal data of individuals in response to queries (a minimal output-filtering sketch follows this list).
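
On that last point, one common guardrail is to screen a generative model's output for obvious personal data before it is returned to the user. The sketch below is a deliberately simplified illustration; the regex patterns and redact() helper are assumptions, and production systems rely on far more robust PII detection.

```python
# Simplified sketch of filtering generative AI output for obvious
# personal data before returning it. Patterns here are illustrative;
# real deployments need much more robust PII detection.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(model_output: str) -> str:
    """Replace matched PII spans with a category placeholder."""
    for label, pattern in PII_PATTERNS.items():
        model_output = pattern.sub(f"[REDACTED {label.upper()}]", model_output)
    return model_output

print(redact("Contact Jane at jane.doe@example.com or +1 555-123-4567."))
```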

How Securiti Can Help

Securiti is a global leader in providing enterprise data privacy, security, compliance, and governance solutions.

For organizations that understand just how important it is to comply with the existing and upcoming AI-related regulations, Securiti offers a proactive way of doing so.

Securiti's Data Command Center™ is an enterprise solution that allows organizations to optimize their oversight of, and compliance with, various data regulatory obligations.

Similarly, numerous other modules, such as data mapping and lineage, allow for real-time tracking of all data in motion across different AI models and systems, helping organizations understand data transformation over time with full transparency.

Request a demo today and learn more about how Securiti can help your organization comply with any AI-specific regulation you may be subject to.


Key Takeaways:

  1. Rapid AI Advancements: AI has made significant leaps in operational capabilities, marking the onset of the Fourth Industrial Revolution. An MIT study suggests even minimal AI use can increase worker productivity by as much as 14%.
  2. Ethical Concerns and Misuse: There has been a surge in ethical misuse of AI, including deepfake technologies, raising concerns over its potential to exacerbate fake news and propaganda issues.
  3. Global Legislative Response: In response to AI's ethical challenges, there has been a legislative push worldwide, with 37 AI-related bills passed in 2022 alone, aiming to regulate AI use and hold developers accountable.
  4. AI as a "Black Box": Despite its potential, AI's operations remain largely unexplained, posing challenges in ensuring trustworthiness and interpretability.
  5. Need for Flexible, Future-Proof Regulation: Effective AI regulation requires flexibility to adapt to future advancements and to be comprehensive enough to address the wide range of AI capabilities.
  6. Data Privacy and Bias Concerns: Regulations like the European Union's AI Act emphasize the need for error-free, representative training datasets, highlighting challenges organizations face in meeting these standards.
  7. Global AI Regulation Landscape: Various countries, including Australia, Brazil, the United States, China, the United Kingdom, Singapore, Japan, Canada, Israel, the UAE, South Korea, Saudi Arabia, New Zealand, and the European Union, are developing frameworks, guidelines, and legislation to govern AI use responsibly.
  8. Key AI Regulation Themes:
    - The importance of ensuring AI systems' safety, security, and ethical use.
    - The need for transparency in AI operations and decisions.
    - The challenge of addressing bias and ensuring fairness in AI systems.
    - The role of international cooperation in harmonizing AI regulations.
  9. Securiti's Role in AI Compliance: Securiti offers solutions like the Data Command Center™ to help organizations comply with AI regulations, emphasizing data privacy, security, and governance. Tools for data mapping and lineage provide transparency in data transformation, essential for complying with AI-specific regulations.
