An Overview of Virginia’s High-Risk Artificial Intelligence Developer and Deployer Act (HB 2094)

Contributors

Anas Baig

Product Marketing Manager at Securiti

Adeel Hasan

Sr. Data Privacy Analyst at Securiti

CIPM, CIPP/Canada

Aswah Javed

Associate Data Privacy Analyst at Securiti

Virginia Governor Vetoes the High-Risk AI Developer and Deployer Act

On March 24, 2025, Virginia Governor Glenn Youngkin vetoed the High-Risk Artificial Intelligence Developer and Deployer Act (HB 2094), stating that enacting it would establish a burdensome AI regulatory framework and stifle the AI industry in the state. The legislature passed the bill on February 20, 2025; had it not been vetoed, it would have been only the second comprehensive state-level AI law, following Colorado's AI Act.

In his veto statement, the Governor stressed that the bill failed to account for the rapidly evolving nature of the AI industry and would place an especially heavy burden on smaller firms and startups. Notably, earlier in February, the US Chamber of Commerce had also urged the Governor to veto the bill, citing in particular the adverse impact it could have on small businesses.

What Was in the Bill?

Let’s have a quick look at what would have been Virginia’s first comprehensive AI regulatory framework had it been enacted.

Scope

The bill would have applied primarily to developers and deployers of high-risk AI systems. It categorized an AI system as high-risk if the system was specifically intended to autonomously make, or be a substantial factor in making, consequential decisions about consumers. A decision was consequential under the bill if it had a material legal, or similarly significant, effect on the provision or denial of essential services such as employment, housing, insurance, or education.
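
To make this definition concrete, here is a minimal, hypothetical Python sketch of the bill's two-part "high-risk" test. HB 2094 expressed this test in legal text, not code; the class names, fields, and example service areas below are illustrative assumptions drawn only from the summary above.

```python
# Hypothetical sketch of the bill's two-part "high-risk" test; nothing in
# HB 2094 prescribes code or a schema, so names and fields here are examples.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class ConsequentialArea(Enum):
    """Essential services mentioned in the summary above."""
    EMPLOYMENT = auto()
    HOUSING = auto()
    INSURANCE = auto()
    EDUCATION = auto()


@dataclass
class AISystemProfile:
    # True if the system autonomously makes, or is a substantial factor in
    # making, a decision about a consumer.
    substantial_factor_in_decision: bool
    # The essential service the decision materially affects, if any.
    affected_area: Optional[ConsequentialArea] = None


def is_high_risk(profile: AISystemProfile) -> bool:
    """High-risk only if both conditions hold: a substantial role in the
    decision, and a material effect on an essential service."""
    return profile.substantial_factor_in_decision and profile.affected_area is not None
```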

Obligations of Developers and Deployers

The bill would have required the developers of high-risk AI systems to:

  • Disclose the proposed applications of a high-risk AI system;
  • Maintain comprehensive documentation, including performance evaluations of the AI system, data governance measures, impact assessments, model cards, etc.;
  • Comply with recognized AI risk management frameworks, such as the NIST AI Risk Management Framework or ISO/IEC 42001; and
  • Label synthetic digital content so that consumers can recognize it as AI-generated (a hedged sketch of one possible labeling approach follows this list).
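
On the synthetic-content labeling obligation, the sketch below shows one possible way a developer could attach a machine-readable disclosure to generated output. The bill did not prescribe a labeling format, so the label fields, function names, and example values here are assumptions for illustration only.

```python
# Hypothetical labeling sketch: HB 2094 did not define a disclosure format;
# the fields and example strings below are illustrative assumptions.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class SyntheticContentLabel:
    generator: str          # name/version of the generating AI system
    generated_at: str       # ISO 8601 timestamp of generation
    content_sha256: str     # hash binding the label to the exact content
    disclosure: str = "This content was generated by an AI system."


def label_synthetic_content(content: bytes, generator: str) -> dict:
    """Return generated content alongside a consumer-readable provenance label."""
    label = SyntheticContentLabel(
        generator=generator,
        generated_at=datetime.now(timezone.utc).isoformat(),
        content_sha256=hashlib.sha256(content).hexdigest(),
    )
    return {"content": content.decode("utf-8"), "label": asdict(label)}


if __name__ == "__main__":
    output = label_synthetic_content(b"An AI-written product description.", "example-model-1.0")
    print(json.dumps(output, indent=2))
```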

Similarly, the deployers of high-risk AI systems would have been required to:

  • Safeguard consumers from known or foreseeable risks of algorithmic discrimination;
  • Conduct impact assessments before deploying a high-risk AI system;
  • Notify consumers that they are interacting with an AI system;
  • Inform consumers of the reasons behind an adverse decision, the data sources involved, and their rights to correction and appeal (an illustrative sketch of such a notice follows this list); and
  • Publish and maintain clear statements on how discrimination risks are managed.
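
For the adverse-decision disclosure, the sketch below captures the items listed above (reasons, data sources, and correction and appeal rights) as a simple notice structure. This is a hypothetical illustration based on the article's summary, not a format the bill defined; all field names and example values are assumptions.

```python
# Hypothetical adverse-decision notice: HB 2094 defined no notice schema;
# the fields and example values below are illustrative assumptions.
from dataclasses import dataclass
from typing import List


@dataclass
class AdverseDecisionNotice:
    decision: str                    # e.g., "rental application denied"
    principal_reasons: List[str]     # plain-language reasons for the outcome
    data_sources: List[str]          # categories of data the system relied on
    correction_instructions: str     # how to correct inaccurate personal data
    appeal_instructions: str         # how to appeal to a human reviewer
    ai_disclosure: str = "An AI system was a substantial factor in this decision."

    def render(self) -> str:
        """Format the notice as plain text for delivery to the consumer."""
        return "\n".join([
            self.ai_disclosure,
            f"Decision: {self.decision}",
            "Principal reasons: " + "; ".join(self.principal_reasons),
            "Data sources: " + "; ".join(self.data_sources),
            f"To correct your data: {self.correction_instructions}",
            f"To appeal: {self.appeal_instructions}",
        ])


if __name__ == "__main__":
    notice = AdverseDecisionNotice(
        decision="rental application denied",
        principal_reasons=["income-to-rent ratio below the configured threshold"],
        data_sources=["application form", "third-party credit report"],
        correction_instructions="Contact privacy@example.com within 30 days.",
        appeal_instructions="Request human review through the same address.",
    )
    print(notice.render())
```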

Takeaways

Virginia’s veto of HB 2094 aligns with the broader approach of the US federal government, which prioritizes promoting AI innovation over the safety-first posture taken by the European Union. While similar bills are under consideration in other state legislatures, the prevailing tilt in the US is toward relying on existing regulatory frameworks rather than enacting new laws to regulate emerging technologies, especially AI.

How Securiti Can Help

Securiti is the pioneer of the Data + AI Command Center, a centralized platform that enables the safe use of data and GenAI. It provides unified data intelligence, controls and orchestration across hybrid multicloud environments. Large global enterprises rely on Securiti's Data Command Center for data security, privacy, governance, and compliance.

Securiti Gencore AI enables organizations to safely connect to hundreds of data systems while preserving data controls and governance as data flows into modern GenAI systems. It is powered by a unique knowledge graph that maintains granular contextual insights about data and AI systems.

Gencore AI provides robust controls throughout the AI system to align with corporate policies and entitlements, safeguard against malicious attacks, and protect sensitive data. This helps organizations comply with applicable AI regulations.

Request a demo to learn more.
