Writing, art, backend engineering, marketing, legal analysis, political strategizing - AI can do it all. Over the past few months, AI has seen tremendous leaps in its operational capabilities. Many industry experts now consider this the beginning of the Fourth Industrial Revolution, with an MIT study stating that even minimal use of AI can raise a worker's productivity by as much as 14%. While this promises tremendous benefits, it also comes with serious drawbacks.
The past year saw incidents of ethical misuse of AI skyrocket. Some caused international outrage, such as the deepfake video of Ukrainian President Volodymyr Zelenskyy appearing to surrender. In an era where fake news and propaganda are already seen as existential threats to the social fabric, it is easy to see how the introduction of such AI capabilities is the perfect recipe for disaster.
One way to avoid such instances of AI misuse and abuse is elaborate regulation that protects the rights of individuals. Lawmakers and legislators globally share this sentiment, as evident in Stanford University's 2023 AI Index, which states that 37 different AI-related bills were passed in 2022 alone. Most of these regulations call not only for better analysis and understanding of AI and its potential risks but also for holding the developers behind AI tools accountable for the actions of their inventions.
AI remains a black box in many aspects. Research related to AI interpretability, trustworthiness, and operability is still in its relative infancy as far as our overall understanding of AI's limitless potential is concerned.
AI needs to be managed appropriately and curated to ensure it does not infringe on users' rights or chaotically disrupt business. For that to be the case, AI regulations need to be effective, flexible, and future-proof enough to adequately cover any tangents in AI capability that may come to the fore in the short and long term.
The need for these regulations to be flexible and future-proof becomes even more critical considering that AI's computations are often not "explainable."
Additionally, there are a multitude of other problems to consider. For example, the European Union's AI Act states that all "training, validation, and testing datasets shall be relevant, representative, free of errors, and complete." While on paper that does seem an appropriate obligation to place on organizations, in reality, the scale of data required to properly train a machine learning algorithm, with the stipulation of it needing to be "free of errors and complete," sets an extremely high standard that numerous organizations simply may not find tenable.
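To see how demanding that stipulation is in practice, consider a minimal data-audit sketch in Python (the pandas usage, column names, and thresholds here are illustrative assumptions, not anything the AI Act prescribes). Even a simple, strict audit like this rarely reports zero defects on training-scale datasets:

```python
# A minimal sketch of the kind of dataset audit the AI Act's data quality
# clause implies. Column names and pandas usage are illustrative
# assumptions, not anything the Act prescribes.
import pandas as pd

def audit_training_data(df: pd.DataFrame) -> dict:
    """Report basic completeness and error indicators for a training set."""
    report = {
        "rows": len(df),
        "missing_cells": int(df.isna().sum().sum()),
        "duplicate_rows": int(df.duplicated().sum()),
        # Share of rows with at least one missing value.
        "incomplete_row_ratio": float(df.isna().any(axis=1).mean()),
    }
    # "Free of errors and complete" would require all defect counts to be
    # zero: a bar that training-scale datasets rarely clear.
    report["meets_strict_standard"] = (
        report["missing_cells"] == 0 and report["duplicate_rows"] == 0
    )
    return report

if __name__ == "__main__":
    sample = pd.DataFrame({"age": [34, None, 29], "label": [1, 0, 0]})
    print(audit_training_data(sample))
```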
An example would be Amazon, which had to scrap its AI recruiting tool entirely. Amazon wanted the tool to help it hire more female candidates, yet it kept shortlisting male candidates because the available training sets were all heavily biased toward men. Since no unbiased training sets were available and creating a new one would have come at a tremendous financial cost, Amazon abandoned the project altogether.
Amazon could afford to do that because it's Amazon. A startup or an SME in a similar position may not enjoy such a luxury of options.
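The kind of bias that sank Amazon's tool can be surfaced with a simple group-rate comparison. Below is a minimal Python sketch on made-up numbers; the 0.8 ("four-fifths") threshold is a common fairness heuristic, not a legal standard drawn from any regulation discussed here:

```python
# A minimal sketch of surfacing hiring bias: compare shortlisting rates
# across groups and flag a low disparate-impact ratio. Data is made up.
from collections import Counter

def selection_rates(candidates: list[tuple[str, bool]]) -> dict[str, float]:
    """candidates: (group, was_shortlisted) pairs."""
    totals, selected = Counter(), Counter()
    for group, shortlisted in candidates:
        totals[group] += 1
        if shortlisted:
            selected[group] += 1
    return {group: selected[group] / totals[group] for group in totals}

def disparate_impact(rates: dict[str, float]) -> float:
    """Ratio of the lowest group selection rate to the highest."""
    return min(rates.values()) / max(rates.values())

rates = selection_rates(
    [("male", True)] * 80 + [("male", False)] * 20
    + [("female", True)] * 30 + [("female", False)] * 70
)
print(rates)                    # {'male': 0.8, 'female': 0.3}
print(disparate_impact(rates))  # 0.375, well below the 0.8 heuristic
```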
Lawmakers in the EU have already called for a global meeting of leaders to address the threats posed by "very powerful" AI to human rights and humanity itself.
Several countries have adopted a proactive approach toward AI regulation. In the absence of comprehensive legislation, governments have published frameworks, guidelines, and roadmaps that illustrate the future of possible AI regulation in these countries and help organizations manage their AI usage and tools responsibly.
Australia does not yet have a dedicated AI regulation. Most of its regulatory actions related to AI either come through existing laws or regular government policy papers guiding various regulatory bodies on how to approach different challenges posed by AI.
The New South Wales (NSW) Government was the first in Australia to publish a dedicated AI strategy, recognizing the challenges that come with the use of AI and charting a course for AI to be used safely across the government with the right safeguards in place.
For this purpose, the NSW Government published the AI Assurance Framework to assist agencies in designing, building, and using AI-enabled products and solutions. It is mandatory for all projects that incorporate an AI component or utilize AI-driven tools; large language models and generative AI explicitly fall within the framework's scope. The framework is intended to be used by:
However, a project is not expected to use the framework if it meets the following criteria:
The AI Assurance Framework became effective in March 2022. The State Government also established the NSW AI Review Committee to provide expert guidance and oversight on the use of AI within the government. As the first of its kind in Australia, this committee plays a vital role in fostering community trust and ensuring transparency in the government's AI initiatives.
In March 2022, the government issued a call for papers on the regulation of AI, inviting various stakeholders to weigh in on how the government should approach AI regulation in a manner that enables a harmonious legislative framework without jeopardizing the use of AI to its maximum potential.
In the paper, the government referred to several of its own reports and guides as examples of what kind of ideas it hoped to receive. These include the following:
Various other regulatory bodies in Australia have also taken steps of their own to promote the responsible use of AI under their jurisdiction. For example, since the Online Safety Act 2021 came into effect, the National eSafety Commissioner has required all organizations to appropriately inform their users of the use of automated recommendation systems.
Similarly, the Commonwealth Scientific and Industrial Research Organisation (CSIRO) launched its own independent Responsible AI Network to promote collaboration between various Australian firms and create ethically safe and viable AI technologies.
While the government has so far not published any of the submissions it received in response to its March 2022 call for papers, apart from those sent in by KPMG and the Law Council of Australia, there is growing consensus that most submissions contain similar recommendations, such as the creation of a dedicated AI regulatory body and a federal guideline on the responsible use of AI, both commercially and individually.
To date, there is no comprehensive federal legislation regulating the use of AI in Brazil.
However, in May 2023, Bill of Law 2338/2023, which provides for the use of Artificial Intelligence (AI) in Brazil, was introduced in the Brazilian Federal Senate. It replaces three bills, Bill of Law 5.051/2019, Bill of Law 21/2020, and Bill of Law 872/2021, which had been pending before the legislature for the past four years.
Along with imposing various obligations on businesses using AI systems, the bill provides consumers the following rights:
To date, there is no comprehensive state-level legislation regulating the use of AI in Brazil.
To date, there is no comprehensive federal legislation regulating the use of AI in the United States (US).
However, on June 20, 2023, US lawmakers introduced a bill, the National AI Commission Act, to create a blue-ribbon commission that would review the United States' current approach to AI regulation, make recommendations on any new office or governmental structure that may be necessary, and develop a comprehensive framework for AI regulation.
Following are a few AI regulations that are in force at the state level in the US:
In 2020, the White House issued the Guidance for Regulation of Artificial Intelligence Applications, the purpose of which was to establish an appropriate framework for all relevant federal agencies that may have to regulate various emerging AI technologies, in addition to the ethical and legal issues that would arise in tandem.
The aforementioned Guidance has helped various US agencies formulate, from time to time, different guidelines, recommendations, and plans of their own. These include:
In October 2022, the White House, at President Biden's direction, issued a Blueprint for an AI Bill of Rights that lays down critical protections all US citizens must have as AI continues to expand in capabilities and functionalities. These include:
In January 2023, the National Institute of Standards & Technology issued its AI Risk Management Framework (AI RMF), which offers a resource to organizations designing, developing, deploying, or using AI systems, helping them manage the many risks of AI and promoting trustworthy and responsible development and use of AI systems. The AI RMF is intended to be voluntary, rights-preserving, non-sector-specific, and use-case agnostic.
Most recently, in May 2023, the U.S. Congressional Research Service published its Generative Artificial Intelligence and Data Privacy: A Primer focusing on privacy issues and policy considerations for the U.S. Congress. The report sheds light on the collection and use of data by AI developers and the role data privacy legislation can play in regulating such use. The report proposes the following three requirements/mechanisms that may be considered in privacy regulations to govern the use of data by AI developers:
China does not have a comprehensive, dedicated AI regulation in place at the national level. However, the Provisions on the Administration of Deep Synthesis of Internet-based Information Services strictly regulate deep synthesis technologies, such as deepfakes and other forms of AI-generated media, and punish their misuse.
The Regulations also require all AI-generated content to be appropriately labeled as such.
The Regulations offer detailed guidance for the application of deep synthesis technology in providing Internet information services within China. They specify the responsibilities of national and local departments, highlighting the importance of information security, robust management systems, user authentication, content oversight, and effective measures against spreading rumors.
Furthermore, the regulations address the management of deep synthesis data and technology, emphasizing data security, regular evaluation of algorithms, and clear labeling of generated content. Adhering to these regulations is essential to prevent misuse, maintain transparency, and ensure responsible use of deep synthesis technology.
Moreover, the national network information department is responsible for coordinating the governance and related supervision and management of deep synthesis services nationwide.
These Regulations also come with Frequently Asked Questions (FAQs). The FAQs clarify that deep synthesis service providers have responsibilities such as establishing management systems for user registration, algorithm review, data security, and personal information protection.
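As a rough illustration of the labeling duty described above, the sketch below attaches a conspicuous notice and a machine-readable provenance record to generated text. The label wording and metadata fields are assumptions for illustration; the Regulations do not prescribe a specific format here:

```python
# A rough illustration of labeling AI-generated content: prepend a visible
# notice and embed provenance metadata. Wording and fields are assumed.
import json

def label_generated_content(text: str, generator: str) -> str:
    """Prepend a visible notice and append embedded provenance metadata."""
    notice = "[AI-generated content]"
    provenance = json.dumps({"synthetic": True, "generator": generator})
    # In practice, provenance would usually travel as file or stream
    # metadata; it is appended inline here purely for illustration.
    return f"{notice} {text}\n<!-- provenance: {provenance} -->"

print(label_generated_content("Quarterly market summary ...", "demo-model"))
```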
Regulation at the provincial level has been similarly proactive, with the Shanghai Regulations on Promoting the Development of the AI Industry and Shenzhen Special Economic Zone Artificial Intelligence Industry Promotion Regulations placing distinct obligations on subject organizations.
The Shanghai Regulations apply to activities such as AI Science and Technology (S&T) innovation, industrial development, application empowerment, and industrial governance within the administrative region of Shanghai, covering all organizations in Shanghai involved in the AI industry. The regulatory authorities are the municipal economic and information departments, which are responsible for planning, implementing, coordinating, and promoting the development of the AI industry.
The Shanghai Regulations are formulated in accordance with relevant laws and administrative regulations, and based on the actual situation of the Shanghai area, in order to promote the high-quality development of the AI industry. Additionally, these Regulations aim to strengthen the functions of new-generation AI S&T innovation sources, promote the deep integration of AI with the economy, everyday life, urban governance, and other fields, and create a world-class AI industrial cluster.
One of the major aims of the Shanghai AI Regulations is to facilitate the responsible and sustainable development of AI technology. They introduce graded management and "sandbox" supervision, which give companies opportunities to explore and test their technologies in a regulated environment. This approach encourages innovation while ensuring adherence to guidelines and standards.
The Shenzhen AI Regulations have been formulated to promote the high-quality development of the AI industry in the Shenzhen Special Economic Zone, encourage AI integration into the economy and society, and ensure orderly and standardized industry growth, in accordance with relevant laws and the actual situation of the Shenzhen area. Under these Regulations, the local government will establish a working mechanism to coordinate and promote the development of the artificial intelligence industry in the city.
This includes ensuring the industry's security, fostering its healthy and orderly growth, and harnessing the potential of AI for sustainable development in the economy, society, and ecology.
The regulatory authority under the Shenzhen AI Regulations is the municipal industrial and information technology department, which serves as the competent authority responsible for implementing, coordinating, and supervising the industry's development within the city's jurisdiction.
The Shenzhen AI Regulations categorize AI activities and applications into three risk levels. High-risk AI applications require pre-assessment and early risk warning, while medium- and low-risk applications need pre-disclosure and post-tracking regulation. The Shenzhen government will develop separate measures for classifying and supervising AI applications.
Additionally, AI services and products based in Shenzhen that are deemed to pose "low risk" can undergo testing and trials, even in the absence of local and national norms. However, adherence to international standards is a prerequisite for such testing and trials.
China has arguably been the most proactive country regarding regulating AI technologies and engaging various stakeholders to ensure the best ethical standards are adopted.
In 2017, the State Council of the People's Republic of China published A Next Generation Artificial Intelligence Development Plan. The guide contained a detailed roadmap on how various state and private institutions can help in the development, deployment, and oversight of AI technologies in a responsible manner.
Then, in 2021, the National Special Committee of New Generation Artificial Intelligence, a body established by the aforementioned guide, issued a Code of Ethics for New-Generation Artificial Intelligence to ensure any future development of AI technologies is in line with appropriate ethics and regulatory requirements. It also established six critical ethical standards that must be considered in developing such AI technologies. These include:
The following year, in March 2022, the Internet Information Service Algorithmic Recommendation Management Provisions came into effect, requiring all organizations that develop, promote, or facilitate AI-based personalized recommendations on mobile devices to allow users to delete any tags about their personal characteristics that the internal AI recommendation model may have built based on their browsing patterns.
The Provisions also require these organizations to let users disable such recommendations on their devices.
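A minimal sketch of how a provider might expose these two user controls follows. The data structures and function names are assumptions for illustration; no real platform API is implied:

```python
# A minimal sketch of the two user controls described above: deleting
# inferred characteristic tags and disabling personalized recommendations.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    user_id: str
    tags: set[str] = field(default_factory=set)  # inferred characteristics
    personalization_enabled: bool = True

    def delete_tag(self, tag: str) -> None:
        """Let the user remove a tag the recommendation model inferred."""
        self.tags.discard(tag)

    def disable_personalization(self) -> None:
        """Let the user opt out of algorithmic recommendations entirely."""
        self.personalization_enabled = False

def recommend(profile: UserProfile, catalog: list[str]) -> list[str]:
    if not profile.personalization_enabled:
        return catalog[:10]  # fall back to a non-personalized feed
    return [item for item in catalog
            if any(tag in item for tag in profile.tags)][:10]

user = UserProfile("u42", tags={"sports", "travel"})
user.delete_tag("travel")        # user removes an inferred tag
user.disable_personalization()   # user opts out entirely
print(recommend(user, ["sports news", "travel deals", "weather"]))
```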
In November 2022, the Ministry of Public Security, the Cyberspace Administration of China (CAC), and the Ministry of Industry and Information Technology released a new set of regulations.
Most recently, the CAC released a set of draft measures for managing generative AI services. These include requiring all organizations offering such services to submit to an independent security assessment before their tools can be commercially deployed.
In March 2023, the Department for Science, Innovation and Technology announced the introduction of the Data Protection and Digital Information (No. 2) Bill (‘the Bill’). The Bill, amongst other objectives, aims to address the risks associated with AI-powered automated decision-making and determine the data protection controls required for such processes.
The Bill is expected to provide clarity on how the right to not be subjected to automated decision-making, as granted under Article 22 of the UK GDPR, can be invoked and exercised.
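One way an Article 22-style right is commonly operationalized is a human-in-the-loop gate: solely automated decisions with legal or similarly significant effects are routed to human review once the data subject invokes the right. The sketch below is a minimal illustration under assumed field and function names, not a statement of what the Bill will require:

```python
# A minimal human-in-the-loop gate for an Article 22-style right. Field
# and function names are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    outcome: str               # e.g. "loan_denied"
    significant_effect: bool   # legal or similarly significant effect?

def route_decision(decision: Decision, right_invoked: bool) -> str:
    """Return how an automated outcome should be handled."""
    if decision.significant_effect and right_invoked:
        return "queue_for_human_review"
    return "apply_automated_outcome"

print(route_decision(Decision("u1", "loan_denied", True), right_invoked=True))
# -> queue_for_human_review
```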
The UK Information Commissioner's Office (ICO), the body primarily responsible for overseeing all data privacy-related affairs in the UK, has released guidance on how organizations can responsibly explain their use of AI to both their own employees and customers, titled Guidance on AI and Data Protection, along with an AI and Data Protection Risk Toolkit. The ICO has also recently warned organizations against using emotion analysis technologies irresponsibly.
The Guidance provides a roadmap to data protection compliance for developers and users of generative AI. The Risk Toolkit enables organizations to identify and mitigate data protection risks and contains eight significant questions that organizations developing or using generative AI should consider.
Moreover, other guidance documents in relation to the use of artificial intelligence have been issued by different public entities in the UK. These include the following:
The UK government issued a white paper in March 2023 titled "A pro-innovation approach to AI regulation." Within the whitepaper, the UK government highlighted the role of its existing legal framework in regulating the use of AI and underscored its “reputation for high-quality regulators and [UK’s] robust approach to the rule of law, supported by [its] technology-neutral legislation and regulations. UK laws, regulators, and courts already address some of the emerging risks AI technologies pose.”
The white paper further stated that “this strong legal foundation encourages investment in new technologies, enabling AI innovation to thrive and high-quality jobs to flourish."
Additionally, the whitepaper outlined five essential principles that all regulatory bodies must consider when evaluating the use of AI within their scope. These include:
In the same 2023 white paper, the UK government also laid out its plan for curating both the national development and the regulation of AI technologies. Partly inspired by its 2021 10-Year National AI Strategy, the UK government categorically ruled out establishing a new regulatory body or commission to oversee AI-related regulation.
Instead, existing regulatory bodies such as the Health & Safety Executive, the Equality & Human Rights Commission, and the Competition & Markets Authority will expand their powers and jurisdictions to ensure effective oversight of AI-related technologies within their sectors.
In March 2023, the UK's Department for Education released another paper, titled "Generative Artificial Intelligence in Education," in response to the rapid growth in the use of ChatGPT by students nationwide.
Singapore is one of the few countries in the world with a dedicated government body tasked with curating its digitalization journey and nurturing a vibrant digital economy fuelled by technological innovation. The Advisory Council on the Ethical Use of AI and Data was established in 2018 to help Singapore identify and address the ethical questions and dilemmas that may arise; its primary responsibility is to advise the government on ethical, policy, and governance issues related to the use of AI technologies.
As a result of the body's recommendations, the country's first National Artificial Intelligence Strategy was published in 2019. It aimed both to identify strategic areas where resources need to be deployed most urgently and to address the emerging risk of AI expanding beyond control.
Additionally, various existing regulations have been amended to include AI systems and technologies, such as:
The Infocomm Media Development Authority (IMDA) has identified four distinct "pillar" technologies to drive Singapore's digitalization journey. These include:
Each pillar has its own dedicated development program, with the AI pillar driven by AI Singapore, launched to build and grow Singapore's AI ecosystem, including research institutions, startups, and tech companies.
Two distinct AI programs, the National AI Program in Government and National AI Program in Finance, were established to guide various regulatory agencies via guidance and policy papers. Additionally, other government bodies have issued various other guides related to the use of AI within their sectors. These include the following:
To date, Japan does not have dedicated AI legislation. However, the Japanese government has been one of the more proactive ones in publishing frequent guidelines for better and more coherent use of AI in professional and social settings.
The Japanese government has amended several of its existing laws to account for AI systems' rapidly developing capabilities and functionalities. Some examples include:
In 2019, the Japanese government published the Social Principles of Human-Centric AI, which laid down distinct principles that would help individuals and enterprises implement AI systems within society. These principles include:
Since then, the Social Principles of Human-Centric AI have led to further guidance documents strategically focused on AI's applications across government, education, defense, and corporate governance. Some of these include:
To date, there is no comprehensive federal legislation regulating the use of AI in Canada.
However, in June 2022, the Government of Canada tabled the landmark Artificial Intelligence and Data Act (AIDA) as part of the omnibus Bill C-27, Digital Charter Implementation Act 2022. The AIDA aims to set out new measures to regulate international and inter-provincial trade and commerce in AI systems and establish common requirements for the design, development, and use of AI systems.
The proposed law would also prohibit specific practices with data and artificial intelligence systems that may result in serious harm to individuals or their interests.
In March 2023, the Government of Canada issued the AIDA Companion document aimed at highlighting Canada’s approach towards the regulation of AI and how AIDA shall contribute to that approach once enacted. The Companion document also identified a number of existing frameworks for consumer protection, human rights, and criminal law that apply to the use of AI, including the following:
As per the consultation timeline provided in the Companion document, AIDA would come into force no sooner than 2025.
To date, no province in Canada has enacted comprehensive legislation regulating the use of AI. However, provincial human rights laws apply to the use of AI and afford consumers some protections.
In November 2020, the Office of the Privacy Commissioner of Canada (OPC) issued A Regulatory Framework for AI: Recommendations for PIPEDA Reform, containing the OPC's final recommendations following the public consultation on proposals for ensuring the appropriate regulation of AI in the Personal Information Protection and Electronic Documents Act (PIPEDA). The recommendations included, among others, the recognition of privacy as a human right, specific provisions on automated decision-making, and demonstrable accountability for the business community.
In May 2021, the Government of Ontario published its report on Consultation: Ontario's Trustworthy Artificial Intelligence (AI) Framework. The report provided an overview of the potential actions the government could take to ensure the responsible and safe use of AI, along with feedback from consumers on those actions.
In April 2023, the Government of Canada issued a report on the Responsible use of artificial intelligence (AI), which describes how the government ensures that the use of AI by its departments and agencies is responsible and accountable. Among other things, the document outlines the following actions that need to be taken and monitored so governments may use AI responsibly:
One practical effect of this approach can be seen in the Directive on Automated Decision-Making, a policy directive by the federal Government of Canada on how to responsibly incorporate AI decision-making within the public sphere.
As with data privacy regulations in the form of the General Data Protection Regulation, the European Union (EU) looks increasingly likely to provide the rest of the world with an appropriate blueprint on how to proceed with AI regulation.
The proposed AI Act provides a standardized definition of what constitutes an AI system and contains provisions that protect the rights of individuals in relation to the use of AI systems. One of the key highlights of the Act is that it classifies AI systems into four distinct categories based on their level of risk:
Depending on their classification, different legal provisions, obligations, and penalties will apply to AI technologies. For example, any automated service or technology that can alter a human's behavior, leading to potential or actual physical or psychological harm, constitutes a high-risk AI system and carries fines of up to €30,000,000 per offense or 6% of a company's annual turnover for the preceding financial year, whichever is higher.
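The penalty arithmetic is simple enough to state as a one-liner. The sketch below only restates the "whichever is higher" rule above; the function name and example turnover figures are illustrative:

```python
# The "whichever is higher" penalty rule above, restated as arithmetic.
def max_ai_act_fine(annual_turnover_eur: float) -> float:
    return max(30_000_000, 0.06 * annual_turnover_eur)

print(max_ai_act_fine(200_000_000))    # 30,000,000: the fixed floor applies
print(max_ai_act_fine(1_000_000_000))  # 60,000,000: 6% of turnover applies
```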
The Act elaborates at great length on how different technologies are categorized.
An AI system is deemed as having an Unacceptable Risk if it clearly endangers people's safety, livelihood, and fundamental rights. Such AI systems are completely prohibited.
Since the mechanism that will be used to categorize systems as either High Risk or Low Risk is still being debated, the aforementioned criteria are likely to be adjusted in the future.
The AI Act is expected to be adopted by the end of 2023 after due consideration, discussion, and necessary adjustments due to dynamic AI developments.
In June 2023, the Confederation of European Data Protection Organisations (CEDPO) published an AI and Personal Data guidance for Data Protection Officers. In the guidance, the CEDPO answered some fundamental questions that arise in relation to the intersection of the data protection legislative framework and the use of artificial intelligence and machine learning.
The guidance delves into matters such as the need for the AI Act, whether the GDPR regulates artificial intelligence and machine learning and which core data protection principles apply thereto, and the role of DPOs in the ever-evolving digital and technological landscape. The European Commission has also released guidelines on the Ethical Use of Artificial Intelligence in educational settings.
EU member countries have been in the headlines for their regulatory actions against emerging AI technologies. Italy became the first European country to temporarily ban the use of ChatGPT after its data protection authority, the Garante, raised serious concerns about ChatGPT's collection, use, and maintenance of users' personal data. The ban led other regulatory bodies in EU countries, such as France and Spain, to review the use of the famous chatbot in their own jurisdictions.
The Italian DPA also issued a provisional limitation on further processing of data by Replika - a chatbot with a written and vocal interface based on AI that generates a “virtual friend” - highlighting violations of the GDPR.
Recently, the French DPA issued a 20 million euro fine against Clearview AI for processing biometric data without an appropriate legal basis and for failing to honor data subjects' rights, including requests to erase their data. The Austrian DPA has also ruled that Clearview AI cannot process biometric data and must delete complainants' existing personal data.
The Finnish DPA has warned healthcare providers that automated decisions for detecting patients’ healthcare needs can fail to meet data protection requirements.
Finally, in April 2023, the European Data Protection Board set up a taskforce dedicated to cooperation and the exchange of information on possible enforcement actions by various data protection agencies across the EU. The EU Advocate General has also issued an opinion on the lawfulness of processing and automated decision-making under the GDPR, noting that an appropriate legal basis is required for data processing by AI systems to ensure compliance with the requirements of the GDPR.
Let's look at some of the key data protection obligations and best practices in relation to the use of AI that have emerged as a result of upcoming AI regulations around the world. These obligations and best practices can give organizations deploying AI systems a starting point for compliance with data protection principles.
Securiti is a global leader in providing enterprise data privacy, security, compliance, and governance solutions.
For organizations that understand just how important it is to comply with the existing and upcoming AI-related regulations, Securiti offers a proactive way of doing so.
Securiti's DataControls Cloud™ is an enterprise solution based on a Unified Data Controls framework that allows organizations to optimize their oversight and compliance with various data regulatory obligations.
Similarly, numerous other modules, such as data mapping and lineage, allow for real-time tracking of all data in motion across different AI models or systems. Doing so helps in understanding data transformation over time with absolute transparency.
Request a demo today and learn more about how Securiti can help your organization comply with any AI-specific regulation you may be subject to.