NIST AI Risk Management Framework (AI RMF 1.0) Explained

By Anas Baig | Reviewed By Omer Imran Malik
Published August 17, 2023 / Updated March 10, 2024

Executive Summary

On January 26, 2023, the National Institute of Standards and Technology (NIST) released the first version of its Artificial Intelligence Risk Management Framework (AI RMF 1.0), referred to here as the NIST AI Framework. The NIST AI Framework is designed to be used voluntarily and to equip organizations and individuals with approaches that increase the trustworthiness of AI systems and help foster the responsible design, development, deployment, and use of AI systems over time.

The NIST AI Framework defines an AI system as an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. The goal of the AI RMF is to offer a resource to organizations that develop, deploy, or use AI systems, helping them manage and mitigate the risks arising from those systems and ensure the responsible development and use of AI.

The NIST AI Framework is designed to assist organizations and individuals - referred to as AI actors. AI actors, as described by the Organization for Economic Co-operation and Development (OECD), are "those who play an active role in the AI system lifecycle, including organizations and individuals that deploy or operate AI."

As the field of AI rapidly evolves, it is revolutionizing several industries and unlocking new opportunities. However, as AI advances and becomes mainstream, it also presents unique risks and challenges, including threats to civil liberties and the rights of individuals. Addressing these concerns and emerging risks led to the creation of the Artificial Intelligence Risk Management Framework. The NIST AI Framework not only addresses negative impacts but also attempts to maximize the positive impacts of the use of AI systems.

The framework provides a structured approach to identifying, assessing, and mitigating the risks associated with AI systems. It enables organizations to navigate the complicated world of AI technology while ensuring ethical AI adoption, protecting against potential harm, encouraging accountability and transparency when implementing AI, and protecting individuals' rights.

AI RMF 1.0

The Framework is divided into Part 1 and Part 2.

Part 1: Foundational Information

Part 1 of the NIST AI Framework addresses the kinds of risks and harms that may arise due to the use of AI systems and the challenges for AI Risk Management. Potential harms of the use of AI systems can be categorized into the following:

  • Harm to people: this category includes harm to a person’s civil liberties, rights, physical or psychological safety, or economic opportunity.
  • Harm to an organization: this category includes harm to an organization’s business operations, harm to an organization from security breaches or monetary loss, or harm to an organization’s reputation.
  • Harm to an ecosystem: this category includes harm to interconnected and interdependent elements and resources, to the global financial system, supply chain, or interrelated systems, as well as harm to natural resources, the environment, and the planet.

It also highlights several challenges to consider when managing risks in pursuit of AI trustworthiness.

Risk Measurement Challenges

AI risks or failures that are not well defined or adequately understood are difficult to measure, whether quantitatively or qualitatively. The inability to measure such risks accurately does not mean that an AI system necessarily poses either a high or a low risk. Some of the risk measurement challenges identified in the NIST AI Framework include the following:

  • Risk metrics or methodologies used by the organization developing the AI system may not align with the risk metrics or methodologies used by the organization deploying or operating the system.
  • Tracking emergent risks is a challenge.
  • The lack of reliable metrics is an AI risk measurement challenge.
  • Different risks may arise at different stages of the AI life cycle.
  • Results in real-world settings are different from laboratory or controlled environments.
  • Inscrutable AI systems can lead to a lack of transparency or documentation.
  • Baseline metrics comparison is difficult since AI systems perform tasks differently than humans.

Risk Tolerance

The Framework is meant to be adaptable and to reinforce existing risk management procedures, which must comply with all applicable laws, rules, and standards. Organizations should follow the risk criteria, tolerance, and responses established by organizational, domain, discipline, sector, or professional requirements, as well as existing regulations and guidelines. Where established guidelines do not exist for specific sectors or applications, organizations must define reasonable risk tolerances themselves. Once risk tolerance has been defined, the AI RMF can be used to manage risks and to document risk management procedures.

Risk Prioritization

Organizations must give the highest priority to unacceptable levels of negative risk, such as where significant negative impacts are imminent or catastrophic risks are present. The development and deployment of AI systems should be stopped until such risks are sufficiently managed. In addition, any residual risks must be documented so that the AI system provider can inform end users about the potential negative impacts of interacting with the AI system.

Organizational Integration and Management of Risk

For risk management to be effective, organizations must establish and maintain the necessary procedures for accountability, assign roles and responsibilities, and foster a culture of risk awareness with an appropriate incentive structure. Implementing the AI RMF alone will not be enough.

Realizing effective risk management requires senior-level organizational commitment and may call for cultural change within an organization or industry. Additionally, depending on their capabilities and resources, small to medium-sized organizations may encounter different difficulties than large organizations when managing AI risks or implementing the AI RMF. AI actors belonging to diverse groups (e.g., environmental groups, civil society organizations, end users, and potentially impacted individuals and communities) can collectively provide a broader perspective to identify and mitigate existing and emergent risks.

AI Risks and Trustworthiness

For AI systems to be trusted by interested parties, they must often be responsive to a variety of essential criteria. Approaches that enhance AI trustworthiness can reduce negative AI risks. The Framework outlines the characteristics of trustworthy AI systems, along with guidance on addressing them.

As per the NIST AI Framework, characteristics of trustworthy AI systems include the following:

  • valid and reliable,
  • safe, secure and resilient,
  • accountable and transparent,
  • explainable and interpretable,
  • privacy-enhanced, and
  • fair with harmful bias managed.

If AI is to be trusted, each of these characteristics must be balanced against the others based on the context in which the AI system will be used. While these characteristics are socio-technical system properties, accountability and transparency also pertain to the internal workings of an AI system and its surrounding environment. Neglecting these characteristics can increase the likelihood and severity of unfavorable outcomes.

Part 2: Core and Profiles

Part 2 comprises the “Core” of the Framework. It outlines four distinct functions to assist organizations in addressing the risks posed by AI systems in practice. These four functions—GOVERN, MAP, MEASURE, and MANAGE—are further divided into categories and subcategories.

GOVERN applies to all phases of an organization's AI risk management processes and procedures. The MAP, MEASURE, and MANAGE functions can be utilized in settings that are specific to AI systems and during particular stages of the AI lifecycle.

The AI RMF Core provides outcomes and actions that enable dialogue, understanding, and activities to manage AI risks and responsibly develop trustworthy AI systems. Practices related to governing AI risks are described in the NIST AI RMF Playbook.

Govern

The GOVERN function:

  • develops and implements a risk management culture within organizations that design, develop, deploy, test, or acquire AI systems;
  • outlines the processes, documents, and organizational plans that anticipate, identify, and manage the risks a system can pose, especially to users and other members of society, and describes how those outcomes are to be achieved;
  • includes processes for assessing potential impacts;
  • provides a framework through which AI risk management activities can be in line with organizational principles, policies, and strategic priorities;
  • connects organizational values and principles to technical aspects of AI system design and development, enabling organizational practices and competencies for those involved in purchasing, training, deploying, and monitoring such systems;
  • considers the entire product lifecycle, related procedures, and any legal or other challenges relating to using third-party hardware or software systems and data.

GOVERN is a cross-cutting function incorporated throughout AI risk management and enables the other functions of the process. The other functions should incorporate GOVERN components, especially those dealing with compliance or evaluation.

Categories and subcategories of the GOVERN function as identified in the NIST AI Framework are as follows:

GOVERN 1: Policies, processes, procedures, and practices across the organization related to the mapping, measuring, and managing of AI risks are in place, transparent, and implemented effectively.

GOVERN 1.1: Legal and regulatory requirements involving AI are understood, managed, and documented.

GOVERN 1.2: The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices.

GOVERN 1.3: Processes, procedures, and practices are in place to determine the needed level of risk management activities based on the organization’s risk tolerance.

GOVERN 1.4: The risk management process and its outcomes are established through transparent policies, procedures, and other controls based on organizational risk priorities.

GOVERN 1.5: Ongoing monitoring and periodic review of the risk management process and its outcomes are planned and organizational roles and responsibilities clearly defined, including determining the frequency of periodic review.

GOVERN 1.6: Mechanisms are in place to inventory AI systems and are resourced according to organizational risk priorities.

GOVERN 1.7: Processes and procedures are in place for decommissioning and phasing out AI systems safely and in a manner that does not increase risks or decrease the organization’s trustworthiness.

GOVERN 2: Accountability structures are in place so that the appropriate teams and individuals are empowered, responsible, and trained for mapping, measuring, and managing AI risks.

GOVERN 2.1: Roles and responsibilities and lines of communication related to mapping, measuring, and managing AI risks are documented and are clear to individuals and teams throughout the organization.

GOVERN 2.2: The organization’s personnel and partners receive AI risk management training to enable them to perform their duties and responsibilities consistent with related policies, procedures, and agreements.

GOVERN 2.3: Executive leadership of the organization takes responsibility for decisions about risks associated with AI system development and deployment.

GOVERN 3: Workforce diversity, equity, inclusion, and accessibility processes are prioritized in the mapping, measuring, and managing of AI risks throughout the lifecycle.

GOVERN 3.1: Decision-making related to mapping, measuring, and managing AI risks throughout the lifecycle is informed by a diverse team (e.g., diversity of demographics, disciplines, experience, expertise, and backgrounds).

GOVERN 3.2: Policies and procedures are in place to define and differentiate roles and responsibilities for human-AI configurations and oversight of AI systems.

GOVERN 4: Organizational teams are committed to a culture that considers and communicates AI risk.

GOVERN 4.1: Organizational policies and practices are in place to foster a critical thinking and safety-first mindset in the design, development, deployment, and uses of AI systems to minimize potential negative impacts.

GOVERN 4.2: Organizational teams document the risks and potential impacts of the AI technology they design, develop, deploy, evaluate, and use, and they communicate about the impacts more broadly.

GOVERN 4.3: Organizational practices are in place to enable AI testing, identification of incidents, and information sharing.

GOVERN 5: Processes are in place for robust engagement with relevant AI actors.

GOVERN 5.1: Organizational policies and practices are in place to collect, consider, prioritize, and integrate feedback from those external to the team that developed or deployed the AI system regarding the potential individual and societal impacts related to AI risks.

GOVERN 5.2: Mechanisms are established to enable the team that developed or deployed AI systems to regularly incorporate adjudicated feedback from relevant AI actors into system design and implementation.

GOVERN 6: Policies and procedures are in place to address AI risks and benefits arising from third-party software and data and other supply chain issues.

GOVERN 6.1: Policies and procedures are in place that address AI risks associated with third-party entities, including risks of infringement of a third-party’s intellectual property or other rights.

GOVERN 6.2: Contingency processes are in place to handle failures or incidents in third-party data or AI systems deemed to be high-risk.

Map

The MAP function establishes the context to frame risks to an AI system. It enables organizations to proactively identify risks and risk-contributing factors, prevent negative impacts, and develop trustworthy AI systems. This function helps organizations develop trustworthy AI systems by:

  • enhancing their capacity for understanding contexts;
  • testing their usage context assumptions;
  • enabling identification of when systems are not operating properly in or outside of their intended context;
  • identifying positive and beneficial uses of their existing AI systems;
  • enhancing the understanding of AI and ML processes' limitations;
  • recognizing limitations in practical applications that can have detrimental effects;
  • identifying possible adverse effects from the intended use of AI systems that are known and foreseeable; and
  • anticipating risks of the use of AI systems beyond the intended use.

After completing the MAP function, Framework users should have enough contextual knowledge about the effects of an AI system to decide whether to design, build, or deploy it. If the decision is to move forward, organizations should use the MEASURE and MANAGE functions, together with the policies and processes established under the GOVERN function, to manage AI risk. Framework users should continue to apply the MAP function to AI systems as context, capabilities, risks, benefits, and potential impacts evolve over time.

As per the NIST AI RMF Playbook, the MAP function’s categories and subcategories are as follows:

MAP 1: Context is established and understood.

MAP 1.1: Intended purposes, potentially beneficial uses, context-specific laws, norms and expectations, and prospective settings in which the AI system will be deployed are understood and documented. Considerations include: the specific set or types of users along with their expectations; potential positive and negative impacts of system uses to individuals, communities, organizations, society, and the planet; assumptions and related limitations about AI system purposes, uses, and risks across the development or product AI lifecycle; and related TEVV and system metrics.

MAP 1.2: Interdisciplinary AI actors, competencies, skills, and capacities for establishing context reflect demographic diversity and broad domain and user experience expertise, and their participation is documented. Opportunities for interdisciplinary collaboration are prioritized.

MAP 1.3: The organization’s mission and relevant goals for AI technology are understood and documented.

MAP 1.4: The business value or context of business use has been clearly defined or – in the case of assessing existing AI systems – re-evaluated.

MAP 1.5: Organizational risk tolerances are determined and documented.

MAP 1.6: System requirements (e.g., “the system shall respect the privacy of its users”) are elicited from and understood by relevant AI actors. Design decisions take socio-technical implications into account to address AI risks.

MAP 2: Categorization of the AI system is performed.

MAP 2.1: The specific tasks and methods used to implement the tasks that the AI system will support are defined (e.g., classifiers, generative models, recommenders).

MAP 2.2: Information about the AI system’s knowledge limits and how system output may be utilized and overseen by humans is documented. Documentation provides sufficient information to assist relevant AI actors when making decisions and taking subsequent actions.

MAP 2.3: Scientific integrity and TEVV considerations are identified and documented, including those related to experimental design, data collection and selection (e.g., availability, representativeness, suitability), system trustworthiness, and construct validation.

MAP 3: AI capabilities, targeted usage, goals, and expected benefits and costs compared with appropriate benchmarks are understood.

MAP 3.1: Potential benefits of intended AI system functionality and performance are examined and documented.

MAP 3.2: Potential costs, including non-monetary costs, which result from expected or realized AI errors or system functionality and trustworthiness – as connected to organizational risk tolerance – are examined and documented.

MAP 3.3: Targeted application scope is specified and documented based on the system’s capability, established context, and AI system categorization.

MAP 3.4: Processes for operator and practitioner proficiency with AI system performance and trustworthiness – and relevant technical standards and certifications – are defined, assessed, and documented.

MAP 3.5: Processes for human oversight are defined, assessed, and documented in accordance with organizational policies from the GOVERN function.

MAP 4: Risks and benefits are mapped for all components of the AI system, including third-party software and data.

MAP 4.1: Approaches for mapping AI technology and legal risks of its components – including the use of third-party data or software – are in place, followed, and documented, as are risks of infringement of a third party’s intellectual property or other rights.

MAP 4.2: Internal risk controls for components of the AI system, including third-party AI technologies, are identified and documented.

MAP 5: Impacts to individuals, groups, communities, organizations, and society are characterized.

MAP 5.1: Likelihood and magnitude of each identified impact (both potentially beneficial and harmful) based on expected use, past uses of AI systems in similar contexts, public incident reports, feedback from those external to the team that developed or deployed the AI system, or other data are identified and documented.

MAP 5.2: Practices and personnel for supporting regular engagement with relevant AI actors and integrating feedback about positive, negative, and unanticipated impacts are in place and documented.

Measure

The MEASURE function leverages quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyze, assess, benchmark, and monitor AI risk and associated impacts. It uses information relevant to AI risks found in the MAP function and provides guidance to the MANAGE function. Before deployment and frequently after that, AI systems should be tested. AI risk measurements include documenting aspects of systems’ functionality and trustworthiness.

Measuring AI risks includes tracking metrics for trustworthy characteristics, social impact, and human-AI configurations. Rigorous software testing and performance evaluation procedures with accompanying measurements of uncertainty, comparisons to performance benchmarks, and structured reporting and documentation of findings should all be part of the processes created or implemented by the MEASURE function. Independent review processes can increase testing efficiency and reduce internal biases and potential conflicts of interest.

As per the NIST AI Framework, the categories and subcategories for the MEASURE function are as follows:

MEASURE 1: Appropriate methods and metrics are identified and applied.

MEASURE 1.1: Approaches and metrics for measurement of AI risks enumerated during the MAP function are selected for implementation starting with the most significant AI risks. The risks or trustworthiness characteristics that will not – or cannot – be measured are properly documented.

MEASURE 1.2: Appropriateness of AI metrics and effectiveness of existing controls are regularly assessed and updated, including reports of errors and potential impacts on affected communities.

MEASURE 1.3: Internal experts who did not serve as front-line developers for the system and/or independent assessors are involved in regular assessments and updates. Domain experts, users, AI actors external to the team that developed or deployed the AI system, and affected communities are consulted in support of assessments as necessary per organizational risk tolerance.

MEASURE 2: AI systems are evaluated for trustworthy characteristics.

MEASURE 2.1: Test sets, metrics, and details about the tools used during TEVV are documented.

MEASURE 2.2: Evaluations involving human subjects meet applicable requirements (including human subject protection) and are representative of the relevant population.

MEASURE 2.3: AI system performance or assurance criteria are measured qualitatively or quantitatively and demonstrated for conditions similar to deployment setting(s). Measures are documented.

MEASURE 2.4: The functionality and behavior of the AI system and its components – as identified in the MAP function – are monitored when in production.

MEASURE 2.5: The AI system to be deployed is demonstrated to be valid and reliable. Limitations of the generalizability beyond the conditions under which the technology was developed are documented.

MEASURE 2.6: The AI system is evaluated regularly for safety risks – as identified in the MAP function. The AI system to be deployed is demonstrated to be safe, its residual negative risk does not exceed the risk tolerance, and it can fail safely, particularly if made to operate beyond its knowledge limits. Safety metrics reflect system reliability and robustness, real-time monitoring, and response times for AI system failures.

MEASURE 2.7: AI system security and resilience – as identified in the MAP function – are evaluated and documented.

MEASURE 2.8: Risks associated with transparency and accountability – as identified in the MAP function – are examined and documented.

MEASURE 2.9: The AI model is explained, validated, and documented, and AI system output is interpreted within its context – as identified in the MAP function – to inform responsible use and governance.

MEASURE 2.10: Privacy risk of the AI system – as identified in the MAP function – is examined and documented.

MEASURE 2.11: Fairness and bias – as identified in the MAP function – are evaluated and results are documented.

MEASURE 2.12: Environmental impact and sustainability of AI model training and management activities – as identified in the MAP function – are assessed and documented.

MEASURE 2.13: Effectiveness of the employed TEVV metrics and processes in the MEASURE function are evaluated and documented.

MEASURE 3: Mechanisms for tracking identified AI risks over time are in place.

MEASURE 3.1: Approaches, personnel, and documentation are in place to regularly identify and track existing, unanticipated, and emergent AI risks based on factors such as intended and actual performance in deployed contexts.

MEASURE 3.2: Risk tracking approaches are considered for settings where AI risks are difficult to assess using currently available measurement techniques or where metrics are not yet available.

MEASURE 3.3: Feedback processes for end users and impacted communities to report problems and appeal system outcomes are established and integrated into AI system evaluation metrics.

MEASURE 4: Feedback about efficacy of measurement is gathered and assessed.

MEASURE 4.1: Measurement approaches for identifying AI risks are connected to deployment context(s) and informed through consultation with domain experts and other end users. Approaches are documented.

MEASURE 4.2: Measurement results regarding AI system trustworthiness in deployment context(s) and across the AI lifecycle are informed by input from domain experts and relevant AI actors to validate whether the system is performing consistently as intended. Results are documented.

MEASURE 4.3: Measurable performance improvements or declines based on consultations with relevant AI actors, including affected communities, and field data about context-relevant risks and trustworthiness characteristics are identified and documented.

Manage

The MANAGE function entails allocating risk management resources to mapped and measured risks on a regular basis, as defined by the GOVERN function. Risk treatment comprises plans for responding to, recovering from, and communicating about incidents or events.

To reduce the possibility of system failures and adverse outcomes, contextual information gathered via expert consultation and input from relevant AI actors - developed in GOVERN and carried out in MAP - is used in the MANAGE function. AI risk management initiatives are strengthened by systematic documentation procedures implemented in GOVERN and used in MAP and MEASURE, improving accountability and transparency.

After the MANAGE function has been completed, plans for prioritizing risk and for regular monitoring and improvement will be in place, giving Framework users an enhanced capacity to manage the risks of deployed AI systems and to allocate risk management resources based on assessed and prioritized risks. Framework users should continue to apply the MANAGE function to deployed AI systems as methods, contexts, risks, and the needs or expectations of relevant AI actors evolve over time.

As per the NIST AI Framework, categories and subcategories for the MANAGE function are as follows:

MANAGE 1: AI risks based on assessments and other analytical output from the MAP and MEASURE functions are prioritized, responded to, and managed.

MANAGE 1.1: A determination is made as to whether the AI system achieves its intended purposes and stated objectives and whether its development or deployment should proceed.

MANAGE 1.2: Treatment of documented AI risks is prioritized based on impact, likelihood, and available resources or methods.

MANAGE 1.3: Responses to the AI risks deemed high priority, as identified by the MAP function, are developed, planned, and documented. Risk response options can include mitigating, transferring, avoiding, or accepting.

MANAGE 1.4: Negative residual risks (defined as the sum of all unmitigated risks) to both downstream acquirers of AI systems and end users are documented.

MANAGE 2: Strategies to maximize AI benefits and minimize negative impacts are planned, prepared, implemented, documented, and informed by input from relevant AI actors.

MANAGE 2.1: Resources required to manage AI risks are taken into account – along with viable non-AI alternative systems, approaches, or methods – to reduce the magnitude or likelihood of potential impacts.

MANAGE 2.2: Mechanisms are in place and applied to sustain the value of deployed AI systems.

MANAGE 2.3: Procedures are followed to respond to and recover from a previously unknown risk when it is identified.

MANAGE 2.4: Mechanisms are in place and applied, and responsibilities are assigned and understood, to supersede, disengage, or deactivate AI systems that demonstrate performance or outcomes inconsistent with intended use.

MANAGE 3: AI risks and benefits from third-party entities are managed.

MANAGE 3.1: AI risks and benefits from third-party resources are regularly monitored, and risk controls are applied and documented.

MANAGE 3.2: Pre-trained models which are used for development are monitored as part of AI system regular monitoring and maintenance.

MANAGE 4: Risk treatments, including response and recovery, and communication plans for the identified and measured AI risks are documented and monitored regularly.

MANAGE 4.1: Post-deployment AI system monitoring plans are implemented, including mechanisms for capturing and evaluating input from users and other relevant AI actors, appeal and override, decommissioning, incident response, recovery, and change management.

MANAGE 4.2: Measurable activities for continual improvements are integrated into AI system updates and include regular engagement with interested parties, including relevant AI actors.

MANAGE 4.3: Incidents and errors are communicated to relevant AI actors, including affected communities. Processes for tracking, responding to, and recovering from incidents and errors are followed and documented.

How Can Organizations Operationalize NIST’s AI RMF?

The NIST AI Framework allows flexibility for organizations in implementation. To operationalize NIST's Artificial Intelligence Risk Management Framework, organizations should follow these steps:

Familiarize with the AI RMF

Understand the guidelines, processes, and components of the NIST AI Risk Management Framework.

Identify AI Systems

List every AI system and application in the organization, documenting each system’s objective, data inputs, outcomes, and possible risks.
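
As one way to keep such an inventory consistent, here is a minimal illustrative sketch in Python; the record fields and the example entry are assumptions for illustration, not terms or systems prescribed by the AI RMF:

from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an organization-wide AI system inventory (illustrative only)."""
    name: str                    # e.g., "resume-screening-model" (hypothetical)
    objective: str               # business purpose of the system
    owner: str                   # accountable team or individual
    lifecycle_stage: str         # e.g., "design", "development", "deployed"
    data_inputs: list[str] = field(default_factory=list)   # data sources consumed
    outputs: list[str] = field(default_factory=list)       # predictions, recommendations, decisions
    known_risks: list[str] = field(default_factory=list)   # risks identified so far

inventory = [
    AISystemRecord(
        name="resume-screening-model",
        objective="Rank job applicants for recruiter review",
        owner="Talent Acquisition / ML Platform",
        lifecycle_stage="deployed",
        data_inputs=["applicant resumes", "historical hiring outcomes"],
        outputs=["applicant ranking score"],
        known_risks=["harmful bias against protected groups", "privacy of applicant data"],
    ),
]

A record like this can then feed directly into the risk assessment and categorization steps that follow.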

Conduct Risk Assessment

Conduct a thorough risk assessment of each AI system. This involves identifying potential threats, vulnerabilities, and how AI-related risks may affect an organization's mission and objectives.

Categorize AI Systems into Risk Levels

Classify each AI system according to the risks identified for it and flag the top-priority risks.
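
Below is a minimal sketch of one way such a classification could be expressed, assuming a simple likelihood-times-impact scoring scheme; the 1-5 scales and the thresholds are illustrative assumptions, not values defined by the NIST AI Framework, and should be aligned with the organization’s documented risk tolerance:

def risk_level(likelihood: int, impact: int) -> str:
    """Map likelihood and impact ratings (each 1-5) to a coarse risk level.

    The scales and thresholds are illustrative assumptions, not values
    prescribed by the NIST AI RMF.
    """
    score = likelihood * impact
    if score >= 20:
        return "unacceptable"  # halt development/deployment until sufficiently managed
    if score >= 12:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# Example: a system with likely (4) and severe (5) negative impacts
print(risk_level(4, 5))  # -> "unacceptable"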

Implement Risk Mitigation Strategies

To address the risks that have been identified, develop risk mitigation procedures. This could entail implementing technical controls, process modifications, or governance measures.

Regular Testing and Validation

Conduct regular tests and validate AI systems to ensure they function as intended and manage any discovered risks promptly.

Comprehensive Documentation

Maintain comprehensive documentation of all steps in the risk management process, such as assessments, strategies, and test results.

Continuous Monitoring

Utilize ongoing monitoring to identify and mitigate new risks as AI systems, their data, and their contexts of use evolve.
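
As a minimal illustrative sketch of what one such monitoring check might look like, the following compares a deployed system’s measured performance against the baseline documented at release and flags degradation for review; the metric, names, and threshold are assumptions for illustration:

def check_for_degradation(baseline_accuracy: float,
                          current_accuracy: float,
                          tolerance: float = 0.05) -> bool:
    """Flag the system for review if measured accuracy drops more than
    `tolerance` below the documented baseline (illustrative threshold)."""
    return (baseline_accuracy - current_accuracy) > tolerance

# Example: baseline documented at deployment vs. latest production measurement
if check_for_degradation(baseline_accuracy=0.91, current_accuracy=0.84):
    print("Performance degradation detected - trigger incident response and review")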

Conduct Training

Provide adequate and up-to-date training to employees to understand AI risks and their roles in the AI risk management process. Assign accountability where needed.

Engagement with Stakeholders

Engage relevant stakeholders, such as legal, compliance, IT, and business units, to establish a collaborative approach to AI risk management.

Adaptation and Improvement

Continually update the risk management framework in light of feedback, lessons learned, and changes to organizational needs or AI technology.

Remember that implementing the Framework successfully necessitates dedication to continued assessment and a proactive strategy for addressing AI-related risks.


Key Takeaways:

  1. The National Institute of Standards and Technology (NIST) introduced the Artificial Intelligence Risk Management Framework (AI RMF 1.0) to guide organizations in responsibly developing, deploying, and using AI systems.
  2. Framework Purpose: NIST AI Framework aims to enhance the trustworthiness of AI systems and ensure their responsible design and use. It provides strategies to manage and mitigate risks associated with AI technologies.
  3. AI Systems Definition: AI systems are defined as engineered or machine-based systems that can influence real or virtual environments by generating outputs like predictions, recommendations, or decisions.
  4. Framework Design: It assists AI actors (organizations and individuals involved in the AI system lifecycle) in navigating AI technology's complexities while ensuring ethical adoption and safeguarding against potential harm.
  5. Framework Structure: Divided into two parts, Part 1 covers foundational information about AI risks and challenges, while Part 2, the "Core," outlines functions to address AI threats: GOVERN, MAP, MEASURE, and MANAGE.
  6. Risk Categories: Risks are categorized into harms to people, organizations, and ecosystems, highlighting the importance of addressing these risks to foster AI trustworthiness.
  7. Risk Management Challenges: The framework outlines challenges in measuring AI risks due to undefined or poorly understood failures and the inscrutable nature of AI systems, emphasizing the need for clear risk metrics and methodologies.
  8. Risk Tolerance and Prioritization: Organizations are advised to define reasonable risk tolerance levels and prioritize managing unacceptable risks, documenting residual risks for informed decision-making by AI system providers and end-users.
  9. Organizational Integration: Effective risk management requires senior-level commitment, role assignment, and fostering a risk management culture, potentially requiring cultural change within organizations or industries.
  10. Trustworthy AI Characteristics: Trustworthy AI systems are characterized by being valid, reliable, safe, secure, accountable, transparent, explainable, privacy-enhanced, and fair, with harmful bias managed.
  11. Operationalizing the Framework:
    - Familiarize with the AI RMF and its components.
    - Identify and list all AI systems within the organization.
    - Conduct thorough risk assessments for each AI system.
    - Categorize AI systems based on risk levels and prioritize top risks.
    - Develop and implement risk mitigation strategies.
    - Perform regular testing and validation of AI systems.
    - Maintain comprehensive documentation of the risk management process.
    - Engage in continuous monitoring to identify and address new risks.
    - Provide training for employees on AI risks and risk management roles.
    - Engage with stakeholders for a collaborative risk management approach.
    - Adapt and improve the risk management framework based on feedback and new developments in AI technology.
