Navigating the Challenges in NIST AI RMF

Contributors

Anas Baig

Product Marketing Manager at Securiti

Aman Rehan

Data Privacy Analyst

Published July 31, 2024

The Artificial Intelligence Risk Management Framework (AI RMF 1.0), developed by the National Institute of Standards and Technology (NIST), offers a structured approach to handling risks related to the design, development, deployment, use, and assessment of AI systems.

Although voluntary by nature, the framework offers several benefits for organizations struggling to navigate AI adoption. At the same time, fully comprehending such a comprehensive framework and adopting it as the guidelines recommend can be overwhelming. Consequently, organizations may encounter several challenges in implementing the NIST AI RMF, and it is important to recognize and overcome them.

Common Challenges in Implementing the NIST AI RMF

Implementing the NIST AI RMF involves navigating various challenges. Here are some common challenges that organizations may encounter:

Challenge 1: Alignment with Existing Policies and Regulations

Aligning the AI RMF with current corporate policies, legal requirements, and risk management frameworks can be a significant challenge for organizations. It may require substantial amendments to existing governance frameworks and procedures.

Challenge 2: Complexity of AI Technologies

Even though the birth of AI dates back to the mid-20th century, it is still an evolving field in today’s digitally advanced, data-driven world. Because AI technologies can be broad and complex, it can be challenging for businesses to fully comprehend the workings, constraints, and potential risks of these tools. As such, there is a constant need to stay updated on recent developments and strategize how they work best for the organization.

Challenge 3: Data Management Issues

One of the biggest challenges in training and testing AI systems is ensuring high-quality, relevant, and impartial data. Poor data quality can result in incorrect AI predictions and decisions. Organizations constantly face the challenge of determining whether their data stores contain accurate data that can be mapped to the right data owner and whether that data is kept up to date.

Challenge 4: Resource Allocation

Implementing the NIST AI RMF requires substantial time, resources, and skilled personnel. Effective resource allocation may be difficult for organizations, especially in industries where AI technology is developing quickly. This is particularly a challenge for small and medium-sized organizations that lack the finances and personnel to invest in AI technologies, as they are typically heavily invested in ensuring adequate revenue streams to stay competitive in the industry.

Challenge 5: Technical Expertise

The NIST AI RMF can undoubtedly be technical and confusing for many individuals, particularly those who are new to the AI industry. Expertise in risk management concepts and AI technology is a prerequisite for the NIST AI RMF. Organizations may lack the in-house knowledge and expertise required to assess and mitigate the particular risks posed by AI systems.

Challenge 6: Regulatory Compliance and Legal Uncertainty

It may be challenging to stay abreast of, and comply with, the many constantly evolving regulatory frameworks and laws across geographies. At the same time, with AI still in its infancy and development phase, understanding and managing the legal fallout from AI outputs, especially in the event of mistakes or system malfunctions, is still a grey area for most organizations unaware of the risks and rewards of AI.

Challenge 7: Stakeholder Engagement and Communication

With multiple stakeholders involved in the adoption phase, acquiring support from all relevant parties, such as AI developers, users, senior management, and impacted parties, might be more challenging in reality than on paper. Additionally, explaining AI risks and how to manage AI transparently to stakeholders—some of whom may not have technical backgrounds—can cause significant delays.

Challenge 8: Data Privacy and Security

In a data-driven realm dominated by data protection laws such as the European Union’s General Data Protection Regulation (GDPR), California Privacy Rights Act (CPRA), and others applicable to the business, additional frameworks such as the NIST AI RMF can be all the more challenging and overwhelming.

Challenge 9: AI Governance

One of the growing challenges faced by organizations is establishing AI governance. Without governance and adequate safeguards in place, AI systems are far more exposed to malicious actors and exploitation.

Addressing the Challenges in Adopting NIST AI RMF

Understanding and addressing the challenges posed by adopting the NIST AI RMF is critical. The following approaches can help:

Establishing AI Governance

Govern

Organizations can address this challenge by using the framework’s Govern function to foster a risk-management culture. It provides a structured approach for implementing policies and practices and outlines processes that foresee, identify, and manage AI system risks. Moreover, it establishes an intersection between company principles and AI system architecture, facilitating ethical AI lifecycle activities.

Map

Organizations may utilize the Map function to understand AI risks and to foresee, assess, and deal with such challenges. It fosters comprehensive AI documentation, from testing to use cases. Organizations may use it to project the impact of AI systems, identify risks, better understand contexts, identify system malfunctions, and identify real-world application limits that may have detrimental implications.
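As one possible way to operationalize this documentation, the sketch below captures an AI system's context in a simple record and reports which fields remain undocumented. The field names are illustrative assumptions, not terms defined by the AI RMF.

```python
# Illustrative sketch: a minimal documentation record for an AI system,
# in the spirit of the Map function. All field names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    intended_use: str = ""
    deployment_context: str = ""          # where and how the system runs
    known_limitations: list = field(default_factory=list)
    identified_risks: list = field(default_factory=list)

    def gaps(self):
        """Return the names of documentation fields still left empty."""
        empty = [f for f in ("intended_use", "deployment_context")
                 if not getattr(self, f)]
        if not self.known_limitations:
            empty.append("known_limitations")
        if not self.identified_risks:
            empty.append("identified_risks")
        return empty
```

A record like this makes gaps in context, limitations, and risk identification visible early, before a system reaches real-world use.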

Measure

Organizations may utilize the Measure function, which assists in reducing the risks associated with AI. This function analyzes, assesses, benchmarks, and tracks AI-related risks and their effects using a combination of quantitative, qualitative, and mixed-method tools and methodologies. By assessing the risks associated with AI, organizations may monitor metrics pertaining to societal effects, human-AI interactions, and trustworthy attributes.
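A minimal sketch of the quantitative side of this, assuming hypothetical metric names and thresholds (the framework itself prescribes neither), might track measured values against acceptable limits and flag the ones that need review:

```python
# Illustrative sketch: comparing measured AI-risk metrics against
# acceptable thresholds, in the spirit of the Measure function.
# Metric names and limits are assumptions; real programs combine
# quantitative, qualitative, and mixed-method assessments.

def evaluate_metrics(measurements, thresholds):
    """Flag metrics whose measured value exceeds its threshold.

    measurements: {metric_name: value}
    thresholds:   {metric_name: maximum acceptable value}
    Returns {metric_name: {"value": ..., "limit": ...}} for breaches.
    """
    flagged = {}
    for name, value in measurements.items():
        limit = thresholds.get(name)
        if limit is not None and value > limit:
            flagged[name] = {"value": value, "limit": limit}
    return flagged
```

Tracking such metrics over time is what lets organizations monitor societal effects, human-AI interactions, and trustworthiness attributes rather than assessing them once and moving on.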

Interdepartmental Collaboration

Proper implementation of the AI RMF helps improve interdepartmental collaboration, which can assist in identifying certain issues and ensuring that organizations meet evolving regulatory requirements and frameworks. This can drastically minimize the risk of non-compliance penalties or other legal repercussions.

Periodic Training

Organizations must invest in training programs for their employees regarding evolving frameworks, AI risks, and RMF compliance.

How Securiti Can Help

Securiti’s Data Command Center enables organizations to comply with the NIST AI RMF by securing their data, maximizing its value, and fulfilling obligations around data security, data privacy, data governance, and compliance.

It helps organizations overcome hyperscale data environment challenges by delivering unified intelligence and controls for data across public clouds, data clouds, and SaaS, enabling them to swiftly comply with privacy, security, governance, and compliance requirements.

Request a demo to witness Securiti in action.
