What to Know About the Utah Artificial Intelligence Policy Act (UAIPA)

Contributors

Anas Baig

Product Marketing Manager at Securiti

Sadaf Ayub Choudary

Data Privacy Analyst at Securiti

CIPP/US

Aswah Javed

Associate Data Privacy Analyst at Securiti


I. Introduction

On March 13, 2024, Utah’s Governor Spencer Cox officially signed the Utah Artificial Intelligence Policy Act (“UAIPA” or “Act”) into law, making Utah the first US state to enact a law that imposes transparency obligations on organizations using Generative AI (GenAI) technologies. The law came into effect on May 1, 2024.

The UAIPA specifically addresses GenAI, defined as any AI system trained on data that can interact with humans via text, audio, or visual communication and produce unscripted outputs similar to human responses with little or no human oversight. This includes chatbots, content generation tools, and other AI-driven communication mechanisms.

A key component of the UAIPA is its strict disclosure requirements, which obligate organizations to ensure consumers are aware when interacting with AI systems.

The UAIPA also establishes the Office of Artificial Intelligence Policy within the Department of Commerce to oversee AI regulations and consult with businesses and other stakeholders to ensure innovation can continue seamlessly and responsibly.

Other US states are following Utah's lead with AI regulations that contain similar disclosure and usage requirements. A thorough understanding of, and compliance with, the UAIPA can therefore help organizations prepare for these forthcoming regulations as well.

Read on to learn more about UAIPA and the best approach to ensure compliance with this regulation.

II. Definitions of Key Terms

a. Department

The Department of Commerce.

b. Generative Artificial Intelligence

An artificial intelligence system that is:

  • Trained on data;
  • Interacts with a person via textual, audio, or visual communication; and
  • Generates non-scripted outputs similar to outputs created by a human, with limited or no human oversight.

c. Learning Laboratory

The Artificial Intelligence Analysis and Research program created under this Act.

d. License

A state-granted authorization that allows a person to engage in a specific occupation based on the person meeting the personal qualifications established under state law and where state law requires the authorization before the person may lawfully engage in the occupation for compensation.

e. Office

The Office of Artificial Intelligence Policy created by the Act.

f. Regulated Occupation

An occupation regulated by the Department of Commerce that requires a person to obtain a license or state certification to practice.

g. Regulatory Mitigation Agreement

An agreement between a participant of the Learning Laboratory Program, the Office, and other relevant state agencies.

h. Regulatory Mitigation

Regulatory Mitigation refers to:

  • When restitution to users may be required;
  • Terms and conditions related to any cure period before penalties may be assessed;
  • Any reduced civil fines during the participation period; and
  • Other terms that are tailored to identified issues of AI technology.

i. State Certification

State Certification refers to a state-granted authorization given to a person to use the term "state certified" as part of a designated title related to engaging in a specified occupation based on the person meeting personal qualifications established under state law and where state law prohibits a non-certified person from using the term "state certified" as part of a designated title but does not otherwise prohibit a non-certified person from engaging in the occupation for compensation.

III. GenAI Disclosure Requirement

A person who uses, prompts, or otherwise causes GenAI to interact with an individual in any manner subject to this Act must clearly and conspicuously disclose that interaction when asked or prompted by the individual with whom the GenAI interacts. The disclosure must make clear that the individual is interacting with GenAI and not an actual human.

Similarly, when GenAI is used to provide services within regulated occupations, i.e., those requiring a license or state certification (e.g., accountants, financial advisors, physicians, dentists, and nurses), a clear and prominent disclosure is mandatory. The disclosure to a person interacting with GenAI in the provision of regulated services must be provided:

  • Verbally at the beginning of an exchange or conversation; and
  • Via electronic means before a written interaction begins.

Moreover, the Utah Attorney General can pursue penalties of $5,000 per violation against anyone who breaches an existing administrative or judicial order.

AI cannot replace the requirements for practicing in a regulated occupation (e.g., a licensed professional cannot solely rely on AI to fulfill occupational duties).
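
To illustrate how these two disclosure modes might look in practice, the sketch below shows a minimal, hypothetical GenAI chat session that discloses up front when it supports a regulated service and on request otherwise. It is an illustration only, not legal guidance; the disclosure wording, the regulated_service flag, and the keyword check for "asked or prompted" are all assumptions, not language drawn from the Act.

```python
# Minimal, hypothetical sketch of the UAIPA's two GenAI disclosure modes.
# Identifier names and disclosure wording are illustrative assumptions only.
import re
from dataclasses import dataclass, field

DISCLOSURE_TEXT = "You are interacting with generative AI, not a human."

# Naive heuristic for detecting that the user "asked or prompted" whether
# they are talking to AI; a real system would use something more robust.
AI_INQUIRY = re.compile(r"\b(are you (a |an )?(human|bot|ai)|is this a (bot|human))\b", re.IGNORECASE)


@dataclass
class ChatSession:
    regulated_service: bool                       # session provides licensed/certified services
    transcript: list[str] = field(default_factory=list)

    def start(self) -> None:
        # Regulated occupations: disclose prominently before the interaction begins.
        if self.regulated_service:
            self.transcript.append(f"[DISCLOSURE] {DISCLOSURE_TEXT}")

    def respond(self, user_message: str, model_reply: str) -> str:
        # General case: disclose clearly and conspicuously when asked or prompted.
        if AI_INQUIRY.search(user_message):
            model_reply = f"{DISCLOSURE_TEXT} {model_reply}"
        self.transcript.append(model_reply)
        return model_reply


if __name__ == "__main__":
    session = ChatSession(regulated_service=True)
    session.start()
    print(session.transcript[0])                                  # up-front disclosure
    print(session.respond("Are you a human?", "Happy to help."))  # disclosure on request
```

In a production system, the up-front disclosure for regulated services would typically be rendered in the user interface before the first message is accepted, rather than appended to a transcript as shown here.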

IV. Penalties for Non-compliance

The Director of the Utah Division of Consumer Protection can impose an administrative fine of up to $2,500 for each violation of the GenAI disclosure requirements and can bring an action in court to enforce those provisions.

In a court action by the Division, the court may:

  • Declare that an act or practice violates the provisions regarding GenAI’s disclosure requirements;
  • Issue an injunction for such a violation;
  • Order disgorgement of money received in violation of these provisions;
  • Order payment of disgorged money to a person affected by such a violation;
  • Impose an additional fine of up to $2,500 for each violation; or
  • Award any other relief the court deems reasonable and necessary.

If a court grants a judgment or injunctive relief to the Division, the court will award the Division:

  • Reasonable attorney’s fees;
  • Court costs; and
  • Investigative costs.

A company or individual cannot use the fact that generative AI made a violative statement or acted in violation of consumer protection laws as a defense.

V. The Office of Artificial Intelligence Policy

The Act mandates the formation of the Office of Artificial Intelligence Policy (the Office) within the Department of Commerce. It is overseen by a director appointed by the Department's executive director.

The Office of AI Policy has a range of responsibilities, including:

  • Creating and administering an artificial intelligence learning laboratory program;
  • Consulting with businesses and other stakeholders in the state on potential regulatory proposals;
  • Developing rules in accordance with the Utah Administrative Rulemaking Act. These rules will govern various aspects of the learning laboratory program, such as:
    • Procedures and requirements for participating;
    • Criteria for invitation, acceptance, denial, or removal of participants;
    • Data usage limitations and cybersecurity criteria for participants;
    • Required participant disclosures towards consumers;
    • Reporting requirements for participants to the Office;
    • Criteria for limited extension of the participation period; and
    • Other requirements necessary to administer the learning laboratory.

The Office of AI Policy is also required to report annually (by November 30) to the Business and Labor Interim Committee. This report will cover:

  • The proposed learning agenda for the learning laboratory;
  • The findings, participation, and outcomes of the learning laboratory; and
  • Any legislation recommended based on the findings of the learning laboratory.

The Artificial Intelligence Learning Laboratory Program

The law establishes the Artificial Intelligence Learning Laboratory Program that will be administered directly by the Office. The primary purposes of the laboratory will be to:

  • Analyze and research all the relevant risks, benefits, impacts, and policy implications of artificial intelligence technologies and create the state regulatory framework accordingly;
  • Encourage the development of AI technologies in the state;
  • Evaluate the effectiveness and viability of current, potential, and proposed regulations on AI with leading AI organizations; and
  • Develop findings and recommendations for legislation and regulation of AI.

The Office will regularly set a learning agenda for the learning laboratory, establishing the specific areas of AI policy that the Office aims to study. When establishing the learning agenda, the Office may consult with entities including:

  • Relevant agencies;
  • Industry leaders;
  • State academic institutions; and
  • Other key stakeholders with relevant knowledge, experience, and expertise within the field of AI.

The Office may also invite participants or receive applications from persons seeking to participate in the learning laboratory and will establish the procedures and requirements for issuing and receiving such invitations and applications. When selecting participants for the learning laboratory, the Office must consider the following:

  • The relevance of an invitee's or applicant's AI technology to the learning agenda;
  • The invitee or applicant’s expertise and knowledge relevant to the learning agenda; and
  • Other important factors identified by the Office as relevant to the participation in the learning laboratory.

The Office will work closely with all the eventual participants to establish appropriate benchmarks and assess the outcomes of their participation in the learning laboratory.

Regulatory Mitigation Agreements

The Act also introduces the concept of “regulatory mitigation,” which allows program participants to develop and test AI technologies with limited liability.

To be eligible for regulatory mitigation, a participant must be able to demonstrate the following to the Office:

  • Their technical expertise and capability to develop and test the proposed AI technology responsibly;
  • Sufficient financial resources to meet all relevant obligations during testing;
  • AI technology that provides potential consumer benefits that outweigh risks from mitigated enforcement of regulations;
  • An effective plan to monitor and minimize any identified risks from testing; and
  • That the scale, scope, and duration of the proposed testing are appropriately limited based on the risk assessment.

The Office may freely consult with other relevant agencies and experts to determine whether an applicant meets the eligibility criteria.

The Office may temporarily grant a participant regulatory mitigation by having the participant enter into a regulatory mitigation agreement with the Office and other relevant agencies. A participant is only eligible to receive regulatory mitigation if it demonstrates that it meets the established eligibility criteria.

The regulatory mitigation agreement should include the following:

  • The scope of AI technology use, including the number and types of users, geographic limitations, and other restrictions.
  • Safeguards to be implemented.
  • Any other regulatory mitigation granted to the applicant.

The Office will work closely with relevant agencies to develop appropriate terms for the regulatory mitigation agreement. A participant will remain subject to all legal and regulatory requirements not expressly waived or modified by the terms of the agreement.

Additionally, the Office holds the power to remove a participant at any time for any reason. Participation in the learning laboratory does not grant any property rights or licenses.

Furthermore, if a participant violates a legal or regulatory requirement or the terms of their participation agreement, they may be removed and subject to all applicable civil and criminal penalties. Moreover, participation in the learning laboratory does not imply endorsement or approval by the state. Hence, the state is not responsible for any claims, liabilities, damages, losses, or expenses arising from a participant's involvement in the learning laboratory.

Participation in the Artificial Intelligence Learning Laboratory

The Office may approve an applicant’s participation in the program. Upon approval, the applicant officially becomes a participant by entering into a participation agreement with the Office and relevant state agencies. In such a case, the participant must:

  • Provide required information to state agencies per the provisions of the participation agreement; and
  • Report to the Office according to the terms of the agreement.

The Office may establish additional cybersecurity auditing procedures for participants it considers high risk owing to their AI technology. Participants must report any incidents that result in harm, privacy breaches, or unauthorized data usage; such incidents may lead to a participant's removal from the laboratory. Furthermore, participants must retain detailed records as required by the Office’s rules or the participation agreement.
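
As an illustration of the kind of record keeping this implies, the sketch below shows a hypothetical internal incident record a participant might retain and serialize for reporting. The field names and category values are assumptions made for this example; the Act and the Office's rules do not prescribe a specific format.

```python
# Hypothetical internal incident record for a learning-laboratory participant.
# Field names and category values are illustrative assumptions, not a format
# prescribed by the Act or the Office.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class IncidentRecord:
    incident_id: str
    occurred_at: str          # ISO 8601 timestamp
    category: str             # e.g., "harm", "privacy_breach", "unauthorized_data_use"
    description: str
    affected_users: int
    mitigation_steps: str
    reported_to_office: bool = False

    def to_report(self) -> str:
        # Serialize the record for submission or long-term retention.
        return json.dumps(asdict(self), indent=2)


if __name__ == "__main__":
    record = IncidentRecord(
        incident_id="INC-0001",
        occurred_at=datetime.now(timezone.utc).isoformat(),
        category="privacy_breach",
        description="Chat logs containing personal data were exposed to an unauthorized tester.",
        affected_users=3,
        mitigation_steps="Access revoked; logs purged; affected individuals notified.",
    )
    print(record.to_report())
```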

Duration and Extension

The initial regulatory mitigation agreement will be in effect for up to 12 months. The participant may request a single 12-month extension at least 30 days before the end of the initial 12-month period.

The Office will then either grant or deny the extension request before the initial demonstration period expires.

VI. How Securiti Can Help

Thanks to its Data Command Center, Securiti has established itself as a market leader in data security, privacy, governance, and compliance. This centralized platform enables the safe use of data and GenAI and provides unified data intelligence, controls, and orchestration across hybrid multi-cloud environments.

Not only is it easy to deploy and use, but it also provides users access to numerous individual modules and solutions, such as People Data Graph, Assessment Automation, Sensitive Data Catalog, and, most importantly, AI Security & Governance, all designed to ensure compliance with the major obligations an organization may be subject to. The centralized dashboard allows for real-time monitoring, enabling proactive measures to be taken whenever necessary and elevating an organization’s efforts toward effective compliance.

Request a demo today and learn more about how Securiti can help you comply with Utah’s AI Act and all other similar AI and data privacy-related regulations within the United States.
