Navigating the Challenges in NIST AI RMF

Contributors

Anas Baig

Product Marketing Manager at Securiti

Aman Rehan

Data Privacy Analyst

Published July 31, 2024

The Artificial Intelligence Risk Management Framework (AI RMF 1.0), developed by the National Institute of Standards and Technology (NIST), offers a structured approach to handling risks related to the design, development, deployment, use, and assessment of AI systems.

Although the framework is voluntary by nature, adopting it offers several benefits for organizations struggling to navigate AI waters. At the same time, it can be overwhelming to fully comprehend such an extensive framework and to adopt it as the guidelines recommend. Consequently, organizations may encounter several challenges in implementing the NIST AI RMF, and it is important to recognize and overcome these challenges to realize the framework's benefits.

Common Challenges in Implementing the NIST AI RMF

Implementing the NIST AI RMF involves navigating a range of obstacles. Here are some common challenges that organizations may encounter:

Challenge 1: Alignment with Existing Policies and Regulations

Aligning the AI RMF with existing corporate policies, legal requirements, and risk management frameworks can be a significant challenge for organizations. It may require substantial amendments to current governance frameworks and procedures.

Challenge 2: Complexity of AI Technologies

Even though the birth of AI dates back to the mid-20th century, it is still an evolving field in today’s digitally advanced, data-driven world. Because AI technologies are broad and complex, it can be challenging for businesses to fully comprehend the workings, constraints, and potential risks of these tools. As such, there is a constant need to stay updated with recent developments and to determine how they work best for the organization.

Challenge 3: Data Management Issues

One of the biggest challenges in training and testing AI systems is ensuring high-quality, relevant, and impartial data. Poor data quality can result in incorrect AI predictions and decisions. Organizations constantly face the challenge of determining whether their data stores contain correct data that can be mapped to the right data owner and whether that data is kept up to date.
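To make this concrete, here is a minimal sketch of how such data checks might be automated. It assumes a hypothetical in-memory catalog of dataset records with owner, last-updated, and completeness fields; real environments would pull this metadata from their own data catalogs or governance tools.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical catalog entry for a dataset used to train or test an AI system.
@dataclass
class DatasetRecord:
    name: str
    owner: str | None    # accountable data owner, if one is assigned
    last_updated: date   # when the data was last refreshed
    completeness: float  # fraction of required fields populated (0.0 to 1.0)

def audit_dataset(record: DatasetRecord, max_age_days: int = 180,
                  min_completeness: float = 0.95) -> list[str]:
    """Return data-quality findings for one dataset (thresholds are illustrative)."""
    findings = []
    if not record.owner:
        findings.append("no data owner assigned")
    if date.today() - record.last_updated > timedelta(days=max_age_days):
        findings.append(f"data not refreshed in over {max_age_days} days")
    if record.completeness < min_completeness:
        findings.append(f"completeness {record.completeness:.0%} below threshold")
    return findings

# Flag stale or ownerless datasets before they feed an AI system.
catalog = [
    DatasetRecord("customer_profiles", owner="data-office",
                  last_updated=date(2024, 6, 1), completeness=0.99),
    DatasetRecord("support_tickets", owner=None,
                  last_updated=date(2023, 1, 15), completeness=0.80),
]
for rec in catalog:
    for issue in audit_dataset(rec):
        print(f"{rec.name}: {issue}")
```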

Challenge 4: Resource Allocation

Implementing the NIST AI RMF requires substantial time, resources, and skilled personnel. Effective resource allocation can be difficult, especially in industries where AI technology is developing quickly. This is particularly challenging for small and medium-sized organizations that lack the finances and personnel to invest in AI technologies, as they are typically focused on maintaining adequate revenue streams to stay competitive in the industry.

Challenge 5: Technical Expertise

The NIST AI RMF can undoubtedly be technical and confusing for many individuals, particularly those who are new to the AI industry. Expertise in risk management concepts and AI technology is a prerequisite for applying the framework effectively, yet organizations may lack the in-house knowledge and expertise required to assess and mitigate the particular risks posed by AI systems.

Challenge 6: Legal and Regulatory Compliance

It may be challenging to stay abreast of, and comply with, the many constantly evolving regulatory frameworks and laws across geographies. Moreover, with AI still in its infancy and development phase, understanding and managing the legal fallout from AI outputs, especially in the event of mistakes or AI system malfunctions, is still a grey area for most organizations unaware of the risks and rewards of AI.

Challenge 7: Stakeholder Engagement and Communication

With multiple stakeholders involved in the adoption phase, acquiring support from all relevant parties, such as AI developers, users, senior management, and impacted parties, might be more challenging in reality than on paper. Additionally, explaining AI risks and how to manage AI transparently to stakeholders—some of whom may not have technical backgrounds—can cause significant delays.

Challenge 8: Data Privacy and Security

In a data-driven realm dominated by data protection laws such as the European Union’s General Data Protection Regulation (GDPR), the California Privacy Rights Act (CPRA), and others applicable to the business, adopting additional frameworks such as the NIST AI RMF on top of existing privacy and security obligations can be all the more challenging and overwhelming.

Challenge 9: AI Governance

One of the growing challenges organizations face is establishing AI governance. Without governance and adequate safeguards in place, AI systems are far more exposed to malicious actors and exploitation.

Addressing the Challenges in Adopting NIST AI RMF

Understanding and addressing the challenges posed by the adoption of the NIST AI RMF is critical. Organizations can tackle them in several ways:

Establishing AI Governance

Govern

Organizations can address this challenge by using the framework’s Govern function to foster a risk-management culture. It provides a structured approach for implementing policies and practices and outlines processes that foresee, identify, and manage AI system risks. Moreover, it establishes an intersection between company principles and AI system architecture, facilitating ethical AI lifecycle activities.

Map

Organizations may utilize the Map function to understand AI risks and to foresee, assess, and deal with such challenges. It fosters comprehensive AI documentation, from testing to use cases. Organizations may use it to project the impact of AI systems, identify risks, better understand contexts, identify system malfunctions, and identify real-world application limits that may have detrimental implications.
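As a rough illustration of the documentation the Map function encourages, the sketch below defines a minimal record for an AI system's context, limitations, and identified risks. The field names and example values are assumptions made for this illustration, not a schema prescribed by the AI RMF.

```python
from dataclasses import dataclass, field

# Illustrative record for mapping an AI system's context and risks.
# Field names are assumptions for this example, not an AI RMF-defined schema.
@dataclass
class AISystemMap:
    system_name: str
    intended_use: str
    affected_parties: list[str]
    known_limitations: list[str]
    identified_risks: list[str] = field(default_factory=list)

chatbot = AISystemMap(
    system_name="customer-support-chatbot",
    intended_use="Answer routine billing questions for retail customers",
    affected_parties=["customers", "support agents"],
    known_limitations=["not trained on enterprise contracts",
                       "may produce confident but incorrect answers"],
    identified_risks=["hallucinated refund policies", "exposure of account data"],
)
print(f"{chatbot.system_name}: {len(chatbot.identified_risks)} mapped risks")
```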

Measure

Organizations may utilize the Measure function, which assists in reducing the risks associated with AI. This function analyzes, assesses, benchmarks, and tracks AI-related risks and their effects using a combination of quantitative, qualitative, and mixed-method tools and methodologies. By assessing the risks associated with AI, organizations may monitor metrics pertaining to societal effects, human-AI interactions, and trustworthy attributes.
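One simple way to combine quantitative and qualitative measurement is a likelihood-and-impact score tracked over time, as in the sketch below. The 1-to-5 scale and escalation threshold are assumptions chosen for illustration; the AI RMF leaves the choice of measurement methods to the organization.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative risk measurement: likelihood x impact on a 1-5 scale.
# The scale and threshold are assumptions, not values defined by the AI RMF.
@dataclass
class RiskMeasurement:
    risk: str
    likelihood: int   # 1 (rare) to 5 (almost certain)
    impact: int       # 1 (negligible) to 5 (severe)
    measured_on: date
    notes: str = ""   # qualitative context alongside the numeric score

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

def needs_escalation(m: RiskMeasurement, threshold: int = 15) -> bool:
    """Flag measurements whose score crosses an agreed review threshold."""
    return m.score >= threshold

# Track the same risk across review cycles to see whether mitigations work.
history = [
    RiskMeasurement("hallucinated refund policies", 4, 4, date(2024, 5, 1),
                    notes="observed in 3% of sampled conversations"),
    RiskMeasurement("hallucinated refund policies", 2, 4, date(2024, 7, 1),
                    notes="improved after adding retrieval grounding"),
]
for m in history:
    flag = "escalate" if needs_escalation(m) else "monitor"
    print(f"{m.measured_on} score={m.score:>2} [{flag}] {m.notes}")
```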

Interdepartmental Collaboration

Proper implementation of the AI RMF helps improve interdepartmental collaboration, which can help surface issues early and ensure that organizations meet evolving regulatory requirements and frameworks. This can drastically reduce the risk of non-compliance penalties or other legal repercussions.

Periodic Training

Organizations must invest in training programs for their employees regarding evolving frameworks, AI risks, and RMF compliance.

How Securiti Can Help

Securiti’s Data Command Center enables organizations to comply with the NIST AI RMF by securing their data, maximizing its value, and fulfilling their obligations around data security, data privacy, data governance, and compliance.

It helps organizations overcome hyperscale data environment challenges by delivering unified intelligence and controls for data across public clouds, data clouds, and SaaS, enabling them to swiftly comply with privacy, security, governance, and compliance requirements.

Request a demo to witness Securiti in action.
