On 24 December 2024, the Oregon Department of Justice (DOJ) released its AI Guidance, emphasizing how the state's existing legal framework covers AI systems. While acknowledging AI's potential to improve efficiency and contribute to economic growth, the Guidance also highlights risks, including privacy, bias, and transparency concerns.
Although Oregon has no AI-specific law, the Guidance makes clear that AI is not unregulated: existing laws apply to the use of AI, addressing issues such as fairness, accountability, and consumer protection. To illustrate this, the Guidance briefly examines several laws, including the Unlawful Trade Practices Act and the Oregon Consumer Privacy Act, and their application to different AI use cases.
1. Unlawful Trade Practices Act (UTPA)
The Unlawful Trade Practices Act (UTPA), ORS 646.605 to 646.656, enacted in 1971, protects consumers from deceptive and misleading business practices. The Guidance clarifies that companies developing, deploying, or selling AI systems are fully subject to the UTPA when marketing, selling, or using AI. It also provides several examples of conduct by AI developers and deployers that can violate the UTPA. These include:
- Misrepresentations in consumer transactions, including failure to disclose any fact. (ORS 646.608(2))
- Failure to disclose known material defects. (ORS 646.608(1)(t))
- Misrepresenting an AI product's features, uses, or qualities, or representing that "real estate, goods or services" have sponsorship or approval that they do not have; for example, AI-generated fake reviews or videos falsely portraying celebrity endorsements. (ORS 646.608(1)(e))
- Use of AI to create deceptive price reductions, such as false “flash sales”. (ORS 646.608(1)(j))
- Use of AI to set unconscionably excessive prices during emergencies. (ORS 646.607(3))
- Use of an AI-produced voice in robocalls to misrepresent the caller's identity. (ORS 646.608(1)(ff))
- Use of AI to carry out any unconscionable practice related to the sale, rental, or disposal of products, services, or real estate, or the collection or enforcement of an obligation. This includes exploiting consumer ignorance or knowingly permitting a customer to enter a transaction from which the consumer receives no material benefit. (ORS 646.607(1))
The Guidance notes that these examples are only a few of the many ways the UTPA applies to AI.
2. Oregon Consumer Privacy Act (OCPA)
The Oregon Consumer Privacy Act (OCPA), ORS 646A.570 to 646A.589, came into effect in July 2024, giving consumers greater control over their personal data. Because generative AI often relies on consumers' personal data, the OCPA applies directly to this technology. The Guidance provides several instances where the OCPA applies to AI, such as:
- Privacy Notices: Businesses must clearly disclose to consumers in their privacy notices when using consumer data to train AI systems.
- Obligations for Data Purchasers: Under the OCPA, developers who purchase or use third-party data sets for model training are considered "controllers." This means they are obligated to adhere to the same standards as the original data collectors.
- Consent for Sensitive Data (ORS 646A.578(2)(b)): If sensitive data, as defined by the OCPA, is involved, businesses must obtain explicit consent from consumers before using their personal data for training or developing AI models.
- Affirmative Consent for Secondary Uses (ORS 646A.578(2)(a)): Developers and deployers of AI systems must obtain affirmative consent for any new or secondary uses of the data beyond its initial purpose.
- Consent Revocation (ORS 646A.578(1)(d)): Businesses must provide a mechanism for consumers to revoke their consent and must cease data processing within 15 days of receiving a revocation request.
- Consumer Rights: Businesses using or developing AI models must respect consumer rights under the OCPA, such as the right to opt out of profiling and the right to request erasure of the personal data consumers have provided.
- Data Protection Assessments (ORS 646A.586): Developers and deployers are required to conduct data protection assessments before processing consumers' personal data for profiling or any other activity that poses a "heightened risk of harm to consumers".
3. Oregon Consumer Information Protection Act (OCIPA)
The Oregon Consumer Information Protection Act (OCIPA), ORS 646A.600 to 646A.628, provides measures to safeguard consumers' personal data. The Guidance explains that OCIPA applies to AI developers: under the Act, they must notify affected consumers and the Oregon Attorney General in the event of a security breach. Violations of OCIPA are also enforceable under the UTPA.
4. Oregon Equality Act
The Oregon Equality Act (OEA), ORS chapter 659A, prohibits discrimination and aims to ensure equitable opportunities and access for everyone. The Guidance emphasizes that AI systems must not perpetuate biases or discriminatory practices, particularly in sensitive areas like housing, hiring, and lending, where such practices are unlawful under the OEA.
Conclusion
Oregon’s AI Guidance offers a thoughtful roadmap for the responsible and ethical use of AI, showing how existing laws continue to address the challenges of emerging technologies. By highlighting specific legislation, the Guidance serves as a reminder that existing legal frameworks remain applicable and capable of regulating artificial intelligence. Businesses using or developing AI technologies are urged to align their practices with these regulations to ensure compliance, consumer protection, and fairness. This balanced approach is crucial for encouraging accountability while allowing innovation in a rapidly evolving technological environment.
How Securiti Can Help
Securiti is the pioneer of the Data + AI Command Center, a centralized platform that enables the safe use of data and GenAI. It provides unified data intelligence, controls and orchestration across hybrid multicloud environments. Large global enterprises rely on Securiti's Data Command Center for data security, privacy, governance, and compliance.
Securiti Gencore AI enables organizations to safely connect to hundreds of data systems while preserving data controls and governance as data flows into modern GenAI systems. It is powered by a unique knowledge graph that maintains granular contextual insights about data and AI systems.
Gencore AI provides robust controls throughout the AI system to align with corporate policies and entitlements, safeguard against malicious attacks and protect sensitive data. This enables organizations to comply with Oregon’s AI Guidance.
Request a demo to learn more.