Virginia Governor Vetoes the High-Risk AI Developer and Deployer Act
On March 24, 2025, Virginia Governor Glenn Youngkin vetoed the High-Risk Artificial Intelligence Developer and Deployer Act (HB 2094), stating that it would establish a burdensome AI regulatory framework and stifle the AI industry in the state. The legislature had passed the bill on February 20, 2025; had it been signed, it would have become only the second comprehensive state-level AI law, following Colorado's AI Act.
In his veto statement, the Governor stressed that the bill failed to account for the rapidly evolving nature of the AI industry and would place an especially heavy burden on smaller firms and startups. Notably, earlier in February, the US Chamber of Commerce had also urged the Governor to veto the bill, citing, among other concerns, the adverse impact it could have on small businesses.
What was in the Bill?
Let’s have a quick look at what would have been Virginia’s first comprehensive AI regulatory framework had it been enacted.
Scope
The bill would have applied primarily to developers and deployers of high-risk AI systems. An AI system was categorized as high-risk if it was specifically intended to autonomously make, or be a substantial factor in making, consequential decisions about consumers. A decision was consequential under the bill if it had a material legal, or similarly significant, effect on the provision or denial of essential services such as employment, housing, insurance, and education.
Obligations of Developers and Deployers
The bill would have required the developers of high-risk AI systems to:
- disclose the proposed applications of a high-risk AI system;
- maintain comprehensive documentation, including performance evaluations of the AI system, data governance measures, impact assessments, and model cards;
- comply with recognized AI risk management frameworks, such as the NIST AI Risk Management Framework and ISO/IEC 42001; and
- label synthetic digital content so that consumers can recognize it.
Similarly, the deployers of high-risk AI systems would have been required to:
- safeguard consumers from known or foreseeable risks of algorithmic discrimination;
- conduct impact assessments before deployment of a high-risk AI system;
- notify consumers that they are interacting with an AI system;
- inform consumers of the reasons behind an adverse decision, the data sources used, and their rights to correction and appeal; and
- publish and maintain a clear statement on how they manage the risks of algorithmic discrimination.
Takeaways
Virginia’s veto of HB 2094 aligns with the broader approach of the US federal government, which is focused more on promoting AI innovation than on safety, in contrast to the European Union. While similar bills are under consideration in other state legislatures, the tilt in the US is toward relying on existing regulatory frameworks rather than enacting new laws to regulate emerging technologies, especially AI.
How Securiti Can Help
Securiti is the pioneer of the Data + AI Command Center, a centralized platform that enables the safe use of data and GenAI. It provides unified data intelligence, controls, and orchestration across hybrid multicloud environments. Large global enterprises rely on Securiti's Data Command Center for data security, privacy, governance, and compliance.
Securiti Gencore AI enables organizations to safely connect to hundreds of data systems while preserving data controls and governance as data flows into modern GenAI systems. It is powered by a unique knowledge graph that maintains granular contextual insights about data and AI systems.
Gencore AI provides robust controls throughout the AI system to align with corporate policies and entitlements, safeguard against malicious attacks, and protect sensitive data, enabling organizations to comply with emerging AI regulations.
Request a demo to learn more.