Charting The Future: White House Rolls Out a Landmark AI Executive Order

By Anas Baig | Reviewed By Omer Imran Malik
Published November 24, 2023


Introduction

Artificial Intelligence (AI) has shown immense potential to transform how businesses and industries operate, especially since the introduction of Generative AI (GenAI) and its applicability to a wide range of tasks. However, it is essential to recognize the risks this groundbreaking technology poses if it is not properly regulated.

Hence, the US President has issued a landmark Executive Order (EO) on AI governance and regulation to mitigate those risks. The EO aims to shape the future of AI and its associated technologies in the US. It goes beyond outlining provisions for governing the technology itself, extending to the safety and security of national interests, the economy, and the well-being of US residents and workers as American businesses develop and use AI.

Significance of the Executive Order on Artificial Intelligence

Released on October 30th, 2023, the EO provides extensive directives that set the course for the federal government’s efforts to regulate the development and use of Artificial Intelligence in the US.

The EO introduces a wide range of critical guidelines aimed at the privacy, security, and safety of AI technologies. The White House Deputy Chief of Staff, Bruce Reed, called the EO the “strongest set of actions any government in the world has ever taken on AI safety, security and trust.” Reed further emphasized the EO's significance as a tool designed to cover the security, privacy, safety, and ethical aspects of AI on all fronts, maximizing the technology's potential while mitigating its risks.

The move mirrors initiatives by other global leaders, such as China and the European Union, which have set out their own guidelines for regulating Artificial Intelligence.

Who Does the New AI Executive Order Apply to?

AI Systems

The EO broadly defines AI as any “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.”

Federal Agencies and Departments

The EO primarily applies to federal government agencies and departments. However, some of its requirements extend to US Government contractors who work with those agencies and departments.

Coverage of Private Actors

Notably, using the Defense Production Act, the EO also extends coverage to:

(1) Developers of Dual-Use AI Models (AI models trained on broad data and applicable in a wide range of contexts), who are required to share their safety test results and other important information with the U.S. government. Until the technical regulations are prepared and released by the Department of Commerce, the following AI systems are considered to fall within this definition under the EO:

  • Any model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations, or trained primarily on biological sequence data using a quantity of computing power greater than 10^23 integer or floating-point operations; and
  • Any computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum computing capacity of 10^20 integer or floating-point operations per second for training AI.

(2) United States IaaS Providers which train a large AI model with potential capabilities that could be used in malicious cyber-enabled activity (a “training run”). Until further regulations are provided:

  • A model shall be considered to have potential capabilities that could be used in malicious cyber-enabled activity if it requires a quantity of computing power greater than 10^26 integer or floating-point operations and is trained on a computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum compute capacity of 10^20 integer or floating-point operations per second for training AI.
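Taken together, the interim thresholds above reduce to a few numeric cutoffs. As an illustrative sketch only (the constants are the EO's own interim figures; the function names are hypothetical and not part of any official tooling), the check looks like:

```python
# Interim reporting thresholds from the EO's text (subject to revision
# by the Department of Commerce). Illustrative sketch, not legal advice.

DUAL_USE_FLOPS = 1e26         # total training compute, general models
BIO_SEQUENCE_FLOPS = 1e23     # lower bar for models trained primarily on biological sequence data
CLUSTER_FLOPS_PER_SEC = 1e20  # theoretical peak of a single co-located cluster

def meets_reporting_threshold(training_flops: float,
                              primarily_bio_data: bool = False) -> bool:
    """True if a model's training compute crosses the EO's interim bar."""
    limit = BIO_SEQUENCE_FLOPS if primarily_bio_data else DUAL_USE_FLOPS
    return training_flops > limit

def cluster_reportable(peak_flops_per_sec: float) -> bool:
    """True if a computing cluster's peak capacity crosses the EO's interim bar."""
    return peak_flops_per_sec > CLUSTER_FLOPS_PER_SEC

# A 5e25-FLOP run on general data stays under the threshold,
# but the same run on primarily biological sequence data is covered.
print(meets_reporting_threshold(5e25))                           # False
print(meets_reporting_threshold(5e25, primarily_bio_data=True))  # True
```

Note how the biological-data carve-out lowers the bar by three orders of magnitude, reflecting the EO's particular concern with biosecurity risks.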

In addition, enterprises that develop AI models that could pose significant risks to critical infrastructure sectors will have to comply with federal regulations issued by the appropriate federal agency or regulator.

The EO also directs many federal regulators to prepare guidelines, rules, or regulations championing safety and security, data privacy, civil rights, and fairness in the use of AI in their specific sectors, all of which will arguably impact many private businesses developing or using AI technologies in the US.

Conclusion

Overall, determining the extent to which the EO and its resultant rules, guidance, and regulations will affect an organization requires a careful assessment of the following:

  • The type of AI technology it is developing, using, or supplying;
  • The computing power/capacity of its AI technology;
  • To whom it is provisioning AI technology, and for what purpose (e.g., use of the AI technology for work arising from federal contracts);
  • The industry within which it is operating or making use of or developing AI technology for; and
  • The risks (privacy, cybersecurity, fairness, and others) posed by the AI technology.

Crucial Mandates Under the New AI Executive Order

Let’s take a closer look at the key areas of concern discussed in the EO.

1. Elevating AI Safety & Security

The EO directs the following actions to protect individuals from the potential risks of AI systems.

(a) Developing Guidelines, Standards, and Best Practices for AI Safety

The EO focuses on developing guidelines, standards, and best practices for the safety and security of AI systems.

  • Within 270 days, the Secretary of Commerce, in collaboration with relevant agencies, including the National Institute of Standards and Technology (NIST), is required to establish guidelines for the development and deployment of safe AI systems. This includes creating companion resources for the AI Risk Management Framework and Secure Software Development Framework, specifically addressing generative AI and dual-use foundation models.
  • Additionally, an initiative is to be launched for guidance and benchmarks for evaluating AI capabilities in areas like cybersecurity and biosecurity.
  • Guidelines and processes for AI red-teaming tests are also to be established to facilitate the deployment of safe AI systems. This involves coordinating efforts related to assessing the safety and security of dual-use foundation models and working with the Secretary of Energy and the Director of the National Science Foundation to create testing environments.
  • Finally, the EO mandates the Secretary of Energy to create, within 270 days, a plan for AI model evaluation tools and testbeds to assess near-term extrapolations of AI capabilities, particularly in security-critical domains like nuclear, biological, and energy security.

(b) Ensuring Safe and Reliable AI

To ensure the continuous availability of safe and reliable AI, the EO uses the powers granted to the Federal Government under the Defense Production Act to direct the Secretary of Commerce to establish requirements (within 90 days) for “companies developing or demonstrating an intent to develop potential dual-use foundation models” to provide the Federal Government with ongoing information, reports, or records regarding:

  • activities related to training, developing, or producing dual-use foundation models (including the physical and cybersecurity protections taken to assure the integrity of that training process against sophisticated threats);
  • ownership and possession of model weights (and the physical and cybersecurity measures taken to protect those model weights); and
  • results of dual-use foundation models' performance in AI red-team testing and a description of any associated measures the company has taken to meet safety objectives, such as mitigations to improve performance on these red-team tests and strengthen overall model security as developed by NIST as per the direction of this EO (Prior to the development of guidance on red-team testing standards by NIST - this description shall include the results of any red-team testing that the company has conducted relating to lowering the barrier to entry for the development, acquisition, and use of biological weapons by non-state actors; the discovery of software vulnerabilities and development of associated exploits; the use of software or tools to influence real or virtual events; the possibility for self-replication or propagation; and associated measures to meet safety objectives);
  • Any acquisition, development, or possession of large-scale computing clusters, including the existence and location of these clusters and the amount of total computing power available in each cluster.

These regulations will also require US-based IaaS providers to report transactions in which foreign persons train large AI models for potential malicious cyber use. Foreign resellers of US IaaS products will likewise be required to verify the identity of foreign persons obtaining IaaS accounts. At a minimum, US-based IaaS providers (and their foreign resellers) must collect, maintain, and provide the US Federal Government the following information whenever a foreign person transacts with them to train a large AI model with potential capabilities that could be used in malicious cyber-enabled activity (a “training run”):

  • the identity of such foreign person, including name and address;
  • the means and source of payment (including any associated financial institution and other identifiers such as credit card number, account number, customer identifier, transaction identifiers, or virtual currency wallet or wallet address identifier);
  • the electronic mail address and telephonic contact information used to verify a foreign person’s identity; and
  • the Internet Protocol addresses used for access or administration and the date and time of each such access or administrative action related to ongoing verification of such foreign person’s ownership of such an account.

Furthermore, the EO directs the Secretary of Commerce to set technical conditions and reporting requirements for AI models utilizing large-scale computing clusters. With regard to dual-use foundation models with widely available model weights (e.g., open-source large language models), the Secretary must, within 270 days, solicit input from the private sector, academia, civil society, and other stakeholders through a public consultation process on the potential risks, benefits, and other implications of such models, and, based on those consultations, submit a report to the President on those risks, benefits, and implications along with policy and regulatory recommendations.

(c) Federal Procurement of AI Systems

The EO requires the Office of Management and Budget (OMB) to set up an interagency council on the use of AI in federal government operations (outside of national security systems). OMB’s director must issue, within 150 days, “guidance to agencies to strengthen the effective and appropriate use of AI, advance AI innovation, and manage risks from AI in the Federal Government.” Among other provisions, that guidance must include (in part):

  • Requiring the designation of a Chief Artificial Intelligence Officer at each agency;
  • At some agencies, the creation of an internal Artificial Intelligence Governance Board;
  • Risk management practices for government use of AI that impacts “people’s rights or safety” (utilizing, as appropriate, the NIST AI Risk Management Framework);
  • Recommendations regarding AI testing, labeling AI output, and the “independent evaluation of vendors’ claims concerning both the effectiveness and risk mitigation of their AI offerings.”

Furthermore, OMB has been tasked with establishing systems to ensure agency compliance with guidance on AI technologies, including ensuring that agency contracts for purchasing AI systems align with all legal and regulatory requirements, and with cataloging agency AI use cases annually.

(d) Managing AI in Critical Infrastructure and Cybersecurity

To safeguard critical infrastructure and enhance cybersecurity, the EO requires:

  • The regulatory authorities and Sector Risk Management Agencies to assess AI-related risks in critical infrastructure within 90 days.
  • The Treasury Secretary to issue cybersecurity best practices for financial institutions within 150 days.
  • The Secretary of Homeland Security, in coordination with relevant entities, to integrate an AI Risk Management Framework into safety guidelines for critical infrastructure within 180 days.
  • An AI Safety and Security Board will be established by the Secretary of Homeland Security, advising on AI security in critical infrastructure.

In addition, to strengthen cybersecurity, the EO directs the Secretaries of Defense and Homeland Security to develop plans and conduct pilot projects within 180 days deploying AI capabilities to identify and fix vulnerabilities in US government systems. Reports on results and lessons learned must be submitted to the US President within 270 days.

(e) Reducing Risks at the Intersection of AI and CBRN Threats

The EO addresses AI risks related to chemical, biological, radiological, and nuclear (CBRN) threats, particularly biological weapons. Within 180 days, the Secretary of Homeland Security, with the Secretary of Energy and the Director of the Office of Science and Technology Policy (OSTP), is required to assess AI misuse risks by consulting experts and to provide a report to the President with regulatory recommendations.

Simultaneously, within 120 days, the Secretary of Defense, in consultation with relevant entities, is obliged to contract with the National Academies for a study on AI's biosecurity risks and recommendations.

The EO also directs the Director of OSTP to establish, within 180 days, a framework for effective screening mechanisms to reduce the risks posed by synthetic nucleic acids. This includes:

  • Criteria for identifying sequences posing national security risks.
  • Standardized methodologies for screening and reporting.

In addition, the EO requires the Secretary of Commerce and NIST to engage with industry and stakeholders to develop specifications and best practices for nucleic acid synthesis procurement screening, informed by the framework. The EO mandates that:

  • All agencies funding life sciences research to adhere, within 180 days of the framework's establishment, to the framework for synthetic nucleic acid procurement.
  • The Secretary of Homeland Security to develop a framework for evaluating and stress-testing nucleic acid synthesis procurement screening, and to submit an annual report on the results with recommendations for strengthening screening measures.

(f) Reducing Risks of Synthetic Content

The EO outlines measures to reduce the risks associated with synthetic content. It directs the following:

  • Within 240 days, the Secretary of Commerce is required to submit a report identifying science-backed standards, tools, and practices for authenticating, tracking, labeling, and detecting synthetic content (e.g., through watermarking), and for preventing generative AI from producing child sexual abuse material or non-consensual intimate imagery of real individuals.
  • Following the report, within 180 days, the Secretary of Commerce, with the Director of OMB, is required to develop guidance on tools for digital content authentication and synthetic content detection.
  • Subsequently, within 180 days of guidance development, OMB should issue guidance to agencies focusing on labeling and authenticating official U.S. Government digital content. The Federal Acquisition Regulatory Council is required to amend the Federal Acquisition Regulation to align with this guidance.

(g) National Security Memorandum Development for AI Governance

To establish a unified approach for managing AI-related security risks in the executive branch, the EO directs the development of the National Security Memorandum that establishes further actions on AI and security. The memorandum will:

  • guide the Department of Defense, relevant agencies, and the Intelligence Community;
  • promote AI adoption for enhanced U.S. national security missions;
  • direct AI assurance and risk-management practices; and
  • consider impacts on the rights and safety of U.S. and, where relevant, non-U.S. individuals.

Additionally, the memorandum will direct actions to counter potential threats from adversaries and foreign actors using AI systems that may jeopardize U.S. security.

2.  Promoting Innovation and Competition

The EO also focuses on promoting innovation and competition in the field of AI. It aims to attract AI talent to the United States, including streamlining visa processing and developing immigration pathways for AI experts. Moreover, it emphasizes advancing innovation through partnerships and research initiatives. This involves:

  • the initiation of a National AI Research Resource (NAIRR) pilot program;
  • the establishment of NSF Regional Innovation Engines; and
  • the expansion of National AI Research Institutes.

Furthermore, it includes efforts to enhance training programs in high-performance computing, provide intellectual property (IP) and patent guidance for AI-related inventions, and mitigate AI-related IP risks. It prioritizes responsible AI innovation in healthcare, hosts AI Tech Sprint competitions for veterans' healthcare, explores AI's role in strengthening climate resilience, and mandates a report on AI's potential role in scientific research.

Additionally, the EO addresses the importance of ensuring fair competition in AI markets. Agency heads are tasked with utilizing their authority to promote competition and prevent anti-competitive practices.

The EO provides additional guidance on fostering competition amongst AI technology companies by:

  • Directing the Small Business Administration (SBA) to allocate funding that helps small businesses commercialize AI breakthroughs;
  • Directing all federal agencies to work towards avoiding the concentration of key inputs in the hands of a few players and to break up unlawful collusion among dominant market players;
  • Providing researchers and students a platform to access AI resources, fostering job opportunities for skilled AI professionals, and streamlining visa applications for students and researchers wanting to work in AI;
  • Directing the Federal Trade Commission (FTC) to use its rulemaking authority to ensure fairness in the AI marketplace and to safeguard consumers and workers from potential harms; and
  • Directing the Department of Commerce to take measures to promote competition in the semiconductor industry (crucial hardware for developing AI technology).

3.  Supporting Workers

Recognizing the risk AI adoption poses to the future of labor, the EO also focuses on supporting workers in the context of AI implementation. To mitigate the risks AI poses to workers, the President directs the following actions:

  • The Chairman of the Council of Economic Advisers is directed to submit a report within 180 days detailing the labor-market effects of AI.
  • The Secretary of Labor is tasked with presenting a report to the President within the same timeframe, evaluating federal agencies' abilities to support workers facing disruptions due to AI.
  • The Secretary of Labor will issue guidance to ensure that AI-monitored or augmented employees are appropriately compensated for their work time, complying with labor standards.

Additionally, the EO emphasizes prioritizing resources for AI-related education and workforce development through existing programs and collaboration with agencies to build a diverse AI-ready workforce.

4.  Championing Individuals’ Privacy Protection

Addressing the potential privacy risks posed by the wide adoption of AI technology, the EO directs that the Director of OMB will:

  • Assess and identify commercially available information (CAI) procured by agencies, particularly CAI containing personally identifiable information (PII).
  • Examine agency standards and procedures associated with CAI containing PII to inform potential guidance on mitigating privacy and confidentiality risks.
  • Within 180 days, issue an RFI in consultation with key stakeholders to improve guidance on implementing the privacy provisions of the E-Government Act, seeking feedback on enhancing privacy impact assessments, especially in the context of AI.
  • Take necessary steps to support near-term actions and a long-term strategy identified through the RFI process, including issuing updated guidance or consulting with relevant entities.

Additionally, within 365 days, the Secretary of Commerce, through the Director of NIST, is tasked with creating guidelines for agencies to evaluate the efficacy of differential-privacy-guarantee protections, including those related to AI.

Furthermore, to advance privacy research and the development of Privacy-Enhancing Technologies (PETs), the EO requires that:

  • The Director of the National Science Foundation (NSF) will collaborate with the Secretary of Energy to fund the creation of a Research Coordination Network (RCN) dedicated to advancing privacy research.
  • The Director of NSF will engage with agencies to identify opportunities for incorporating PETs into their operations and prioritize research that encourages the adoption of cutting-edge PETs solutions.
  • The results of the United States-United Kingdom PETs Prize Challenge will inform approaches and opportunities for PETs research and adoption.

5.  Protecting Equitable Outcomes and Civil Rights

The EO focuses on advancing equity and civil rights in the context of AI applications. It outlines measures to address discrimination and violations in the criminal justice system. The EO directs:

  • The Attorney General to coordinate with agencies to enforce laws against civil rights violations related to AI and provide a report on AI's impact on law enforcement.
  • Agencies to prevent AI-driven discrimination in federal programs and benefits:
    • The Secretary of HHS will ensure fairness in public benefits;
    • The Secretary of Agriculture will issue guidance on AI usage by administrators who allocate public benefits;
    • The Department of Labor (DoL) will publish guidance for federal contractors regarding nondiscrimination in hiring;
    • The Federal Housing Finance Agency and the Consumer Financial Protection Bureau (CFPB) will work to prevent bias in the housing and consumer financial markets, including in underwriting and appraisals, and, together with the Department of Housing and Urban Development (HUD), to prevent bias in the rental housing market where AI systems are used for tenant screening; and
    • The Architectural and Transportation Barriers Compliance Board is directed to ensure that people with disabilities are not subject to unequal treatment by AI systems that use biometric data.
  • To address algorithmic discrimination through training, technical assistance, and coordination between the Department of Justice and Federal civil rights offices on best practices for investigating and prosecuting civil rights violations related to AI.
  • The development of best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis to ensure fairness throughout the criminal justice system.

6.  Sector-Specific AI Protections

The EO emphasizes the role of independent regulatory agencies in safeguarding consumers from potential risks associated with AI, such as fraud and privacy threats. It encourages these agencies to engage in rulemaking and provide clarity on existing regulations. The EO directs:

(a) Safe Deployment of AI in Healthcare

  • Responsible AI use and development of affordable and life-saving drugs
  • Establishment of an HHS AI Task Force for strategic planning in healthcare within 90 days.
  • Promoting compliance with nondiscrimination laws related to AI, and establishing AI safety programs to track clinical errors, analyze data, and disseminate recommendations.

(b) Safe and Responsible AI in Transportation

  • In transportation, the Secretary of Transportation is instructed to assess AI-related needs, while advisory committees provide guidance on safe AI use.

(c) Responsible AI in Education

  • Within 365 days, the Secretary of Education will develop resources, policies, and guidance for the safe and nondiscriminatory use of AI in education. This includes an "AI toolkit" for education leaders.

(d) Federal Communications Commission (FCC) Actions

  • The FCC is encouraged to address AI's impact on communications networks and consumers, including improving spectrum management, network security, and combating unwanted robocalls and robotexts facilitated by AI.

7. Advancing Federal Government Use of AI

To ensure the responsible government deployment of AI and modernize federal AI infrastructure, the following actions are directed:

  • Focus on increasing AI talent in the government, involving identifying priority areas, establishing an AI and Technology Talent Task Force, and coordinating among various agencies for rapid recruitment. Special authorities for AI hiring and retention are encouraged, and efforts will address gaps in AI talent for national defense.
  • AI training programs to be introduced for the federal workforce.
  • Blanket bans by government agencies on the use of Generative AI are discouraged. Instead, the EO encourages federal agencies and departments to develop guardrails so they can utilize generative AI “at least for the purposes of experimentation and routine tasks that carry a low risk of impacting Americans’ rights.”

8. Advancing American Leadership of AI Abroad

The EO focuses on strengthening U.S. leadership in AI globally. This includes:

  • The Secretary of State to lead efforts to engage with international allies and establish a framework for managing AI risks and benefits.
  • The Secretary of Commerce to coordinate a global initiative for AI standards and develop a plan for global engagement within 270 days.
  • To foster responsible AI development abroad, the Secretary of State and Administrator of the United States Agency for International Development, with the Secretary of Commerce, are also directed to create an ‘AI in Global Development Playbook’ and a ‘Global AI Research Agenda’ within international contexts.
  • To address AI risks to critical infrastructure globally, the Secretary of Homeland Security will lead efforts to enhance international cooperation and develop a multilateral engagement plan within 270 days, reporting priority actions within 180 days.

Final Thoughts

It is safe to say that the Executive Order issued by the Biden administration is indeed one of the most comprehensive directives ever introduced for AI governance, development, and regulation by any government in the world.

In the absence of federal legislation by Congress on AI development and use, the Biden EO attempts to fill the gap in the most comprehensive manner possible while also calling on Congress to play its part and pass bipartisan legislation on privacy and AI technology.
