U.S. Department of Commerce Issues New AI Guidance, Tools Post Biden’s EO

Contributors

Anas Baig

Product Marketing Manager at Securiti

Sadaf Ayub Choudary

Data Privacy Analyst at Securiti

CIPP/US

Published August 22, 2024


On July 26, 2024, the United States Department of Commerce (DOC) released new National Institute of Standards and Technology (NIST) draft guidelines from the U.S. AI Safety Institute to assist AI developers in evaluating and mitigating risks associated with dual-use foundation models and generative Artificial Intelligence (AI).

Here’s more on the White House Fact Sheet on Administration-wide actions on AI.

Background of the New Guidance

On October 30, 2023, President Biden issued a groundbreaking Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, aimed at positioning the US as a leader in harnessing AI’s potential and managing evolving AI-associated risks.

Fast-forward to today. The department's National Institute of Standards and Technology (NIST) has released three final guidance documents, which were first made available for public comment in April, along with a draft guidance document from the US AI Safety Institute to help mitigate risks.

Learn more about the landmark Executive Order.

Overview of the New Guidance

The DOC has announced new guidance and tools, marking 270 days since President Biden’s EO. This initiative is part of ongoing efforts to ensure the safe and responsible development of AI technologies. Key highlights include:

NIST Publications

NIST has released new AI guidance documents covering various aspects of AI technology. Two newly published items include:

  • a draft guidance document from the U.S. AI Safety Institute to help developers mitigate risks from generative AI and dual-use foundation models and
  • a testing platform to measure AI system vulnerabilities to attacks.

The remaining three documents are final versions of previously released drafts:

  • two provide guidelines for managing risks associated with generative AI, complementing NIST’s AI Risk Management Framework (AI RMF) and Secure Software Development Framework (SSDF) and
  • the third outlines a plan for global collaboration on AI standards.

Software Tools

NIST has released a software package to evaluate the potential effects of adversarial assaults on the functionality of AI systems. This package enables AI developers to identify and mitigate vulnerabilities in AI models.

USPTO Updates

The USPTO has amended its guidelines on patent eligibility for AI and other emerging technologies. The update clarifies how AI-related inventions are examined during the review process.

Global AI Standards

A draft proposal has been made available for US stakeholders to collaborate worldwide on AI standards. The strategy’s main objectives are to improve transparency, testing, and evaluation practices and to establish globally recognized standards for AI technologies.

The two documents NIST is releasing for the first time are:

Protecting Against Misuse Risk from Dual-Use Foundation Models

  • Introduction of Guidelines: NIST’s AI Safety Institute has released the initial public draft of NIST AI 800-1.
  • Scope of Models: Guidelines focus on dual-use foundation models with potential for both beneficial and harmful applications.
  • Purpose: Aim to provide voluntary best practices for developers to prevent misuse of AI systems.
  • Key Approaches: The draft guidance outlines seven key approaches to mitigate misuse risks.
  • Implementation and Transparency: Includes recommendations for implementing best practices and ensuring transparency.
  • Specific Threats Addressed: Aims to prevent AI models from being used for creating biological weapons, conducting offensive cyber operations, and producing harmful content (e.g., child sexual abuse material, nonconsensual intimate imagery).
  • Targeted Protection: Focuses on protecting individuals, public safety, and national security from AI misuse risks.

Testing How AI Systems Respond to Attacks

  • Core Vulnerability: A core vulnerability of AI systems lies in the model itself, which makes decisions based on large volumes of training data.
  • Adversarial Threat: Adversaries can poison training data, causing models to make incorrect decisions (e.g., misidentifying stop signs as speed limit signs).
  • Introduction of Dioptra: Dioptra is a new software package designed to test AI software's resilience against adversarial attacks.
  • Open-Source Availability: Dioptra is open-source and available for free download.
  • Community Support: Supports government agencies and small to medium-sized businesses in conducting AI evaluations.
  • Verification of Performance: This enables the community to verify AI developers' performance claims.
  • Model Testing Assistance: Helps identify types of attacks that can degrade AI model performance.
  • Quantifying Performance Reduction: Dioptra quantifies performance reduction, providing insights into the frequency and circumstances of AI system failures.
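The failure mode described above, where poisoned training labels degrade a model's decisions, can be demonstrated with a minimal, self-contained sketch. Note this is purely illustrative and not Dioptra's actual API: a toy 1-nearest-neighbor classifier is trained on two well-separated clusters, a fraction of its training labels are flipped, and the resulting accuracy drop is measured.

```python
import random

def nearest_neighbor(train):
    """1-NN classifier: predict the label of the closest training point."""
    def predict(p):
        point, label = min(
            train, key=lambda t: (p[0] - t[0][0]) ** 2 + (p[1] - t[0][1]) ** 2
        )
        return label
    return predict

def accuracy(predict, data):
    return sum(predict(p) == label for p, label in data) / len(data)

def poison_labels(train, fraction, rng):
    """Flip the label on a random fraction of training points (binary labels)."""
    poisoned = list(train)
    for i in rng.sample(range(len(poisoned)), int(fraction * len(poisoned))):
        point, label = poisoned[i]
        poisoned[i] = (point, 1 - label)
    return poisoned

rng = random.Random(0)
def cluster(cx, cy, label, n):
    return [((rng.gauss(cx, 1), rng.gauss(cy, 1)), label) for _ in range(n)]

# Two well-separated classes: class 0 near (0, 0), class 1 near (5, 5).
train = cluster(0, 0, 0, 100) + cluster(5, 5, 1, 100)
test = cluster(0, 0, 0, 50) + cluster(5, 5, 1, 50)

clean_acc = accuracy(nearest_neighbor(train), test)
poisoned_acc = accuracy(nearest_neighbor(poison_labels(train, 0.4, rng)), test)
print(f"clean accuracy:    {clean_acc:.2f}")
print(f"poisoned accuracy: {poisoned_acc:.2f}")
```

Quantifying the gap between the clean and poisoned accuracy, as Dioptra does for real models and attack types, is what lets an evaluator characterize how often and under what conditions a system fails.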

In addition to the two initial releases, NIST has finalized three documents:

Mitigating the Risks of Generative AI

The first publication details the NIST AI RMF Generative AI Profile (NIST AI 600-1).

  • Purpose of AI RMF Generative AI Profile: NIST AI 600-1 helps organizations identify and manage unique risks associated with generative AI.
  • Comprehensive Risk Management: Suggests over 200 actions for managing generative AI risks.
  • Companion to AI RMF: Serves as a companion resource to NIST's AI Risk Management Framework (AI RMF).
  • Focus on Specific Risks: Concentrates on 12 specific risks, including lowered barriers to cybersecurity attacks; the production of misinformation, disinformation, hate speech, and other harmful content; and AI systems generating false or "hallucinated" outputs.
  • Alignment with Organizational Goals: Ensures risk management actions align with the organization's goals and priorities.

Reducing Threats to the Data Used to Train AI Systems

The second finalized publication, "Secure Software Development Practices for Generative AI and Dual-Use Foundation Models" (NIST Special Publication 800-218A), complements the Secure Software Development Framework (SP 800-218).

  • Focused Concern: Addresses the specific issue of generative AI systems being compromised by malicious training data.
  • Scope: Focuses on the training and use of AI systems.
  • Risk Identification: Identifies potential risk factors associated with AI training and use.
  • Mitigation Strategies: Provides strategies to mitigate identified risks.
  • Key Recommendations: Analyze training data for signs of poisoning, bias, homogeneity, and tampering and ensure the integrity and performance of AI systems.
  • Overall Goal: Enhance the security and reliability of generative AI and dual-use foundation models through robust development practices.
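One of the recommendations above, ensuring the integrity of training data so tampering can be detected, can be sketched with a hash manifest. This is an illustrative example, not taken from SP 800-218A itself: a SHA-256 digest is recorded for each record when the dataset is vetted, and later verification flags any record whose content has changed.

```python
import hashlib
import json

def manifest(records):
    """Map each record id to a SHA-256 digest of its canonical JSON form."""
    return {
        rid: hashlib.sha256(json.dumps(rec, sort_keys=True).encode()).hexdigest()
        for rid, rec in records.items()
    }

def detect_tampering(records, baseline):
    """Return ids whose current digest differs from the baseline manifest."""
    current = manifest(records)
    return sorted(rid for rid in baseline if current.get(rid) != baseline[rid])

dataset = {
    "rec-001": {"text": "stop sign", "label": "stop"},
    "rec-002": {"text": "speed limit 50", "label": "speed_limit"},
}
baseline = manifest(dataset)  # computed when the data is vetted

dataset["rec-001"]["label"] = "speed_limit"  # simulated poisoning
print(detect_tampering(dataset, baseline))   # → ['rec-001']
```

A manifest like this catches post-vetting tampering; detecting poisoning, bias, or homogeneity already present in the data requires the statistical analysis the publication also recommends.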

Global Engagement on AI Standards

The third finalized publication, "A Plan for Global Engagement on AI Standards" (NIST AI 100-5), aims to promote the worldwide development and implementation of AI-related consensus standards.

  • Focus Areas: Foster cooperation among international stakeholders and enhance coordination and information sharing across countries to ensure the safe and effective deployment of AI technologies.
  • Alignment with Existing Plans: The guideline corresponds to the National Standards Strategy for Critical and Emerging Technology and is based on priorities specified in the NIST-developed Plan for Federal Engagement in AI Standards and Related Tools.

How Securiti Helps

Securiti’s Data Command Center enables organizations to align with NIST’s AI Risk Management Framework (AI RMF) by securing the organization’s data, maximizing data value, and fulfilling obligations around data security, data privacy, AI security and governance, and compliance.

Additionally, Securiti’s Genstack AI Suite removes the complexities and risks inherent in the GenAI lifecycle, empowering organizations to swiftly and safely utilize their structured and unstructured data anywhere with any AI and LLMs. It provides features such as secure data ingestion and extraction, data masking, anonymization, and redaction, as well as indexing and retrieval capabilities. It also facilitates the configuration of LLMs for Q&A, inline data controls for governance, privacy, and security, and LLM firewalls to enable the safe adoption of GenAI.

Request a demo to witness Securiti in action.
