Gartner recommends that organizations undertaking AI TRiSM efforts develop or partner to deliver AI catalogs, data maps and continuous monitoring capabilities.
First is an AI catalog. Organizations must establish a complete inventory of AI entities used in the organization, including models, agents, and applications. All AI must be accounted for, including off-the-shelf and third-party applications. Models and agents that have been built or fine-tuned with enterprise data or contextualized via Retrieval Augmented (RAG) systems must also be accounted for.
Second is an AI data map. Each of those systems needs an explicit and detailed mapping of the data they utilize and have access to, including all processing, aggregation, and transformation steps they might undergo in an AI pipeline, all the way back to the source system. An AI data map is vital to gaining a complete view of risks and deploying adequate controls.
Lastly, a real-time continuous monitoring capacity for providing continuous assurance and system evaluation is required. Measures for trust, performance, etc, must be developed and systems should be regularly tested against them both offline and in a continuous, real-time manner.
In order to properly architect a TRiSM framework, organizations need to address the following technical requirements for Information Governance, AI Runtime Inspection and Enforcement and AI governance.
The purpose of Information Governance technologies in TRiSM is to restrict AI and user access to only relevant and properly permissioned data throughout the lifecycle. In a study conducted by ISMG in partnership with Microsoft titled First Annual Generative AI Study, the top concern about the use of AI, cited by 80% of business leaders and 82% of cybersecurity professionals, was the potential leakage of sensitive data. A comprehensive approach to information governance is therefore critical in securing an organization's sensitive information and providing a solid foundation for TRiSM efforts.
Technology solutions for Information Governance must address key challenges that organizations face when trying to secure their data. First is discovery. According to a recent survey from Omdia, only 11% of organizations can account for 100% of their data. This issue is exacerbated by a patchwork of tools across hybrid and multi-cloud environments, giving fragmented views of enterprise data. Information governance solutions must be able to scan environments and bring back intelligence about the structured and unstructured data that exists throughout an organization.
The second challenge Information Governance technologies must address is the classification of sensitive data. Solutions that rely on keywords and manual tagging are insufficient for the volume and variety of unstructured data that organizations typically store. Solutions that merely “sample” data sets to determine if they contain sensitive data are also insufficient because PII or other sensitive data can sometimes be buried deep in unstructured data in unexpected places. Robust solutions conduct a deep scan of all data and automate classification with a high degree of accuracy and specific labels (such as PII, IP, password, etc) by analyzing context around potentially sensitive data.
The third challenge is overpermissioning. The Sysdig 2023 Cloud-Native Security and Usage Report found that 90% of granted permissions are not used. This suggests that users often have access to data they don’t need and possibly shouldn’t access. AI makes that data much more accessible to end users. Information Governance solutions should be able to identify overpermissioned users and data sets and enforce policies restricting access.
Lastly, as data is moved from a source system to be staged for use in AI, critical context is often lost. Entitlements, classifications, ownership and residency, and other information that is critical to managing risk and compliance can get lost. Information Governance solutions should preserve that metadata for use in AI Runtime Inspection and Enforcement as well as audits.
Beyond those common challenges, Information Governance technologies should clearly map data utilized and accessed by AI by capturing data provenance across complex pipelines that can include aggregations, processing and movement from source systems. Information Governance solutions must also establish data retention policies to aid in data minimization efforts and regulatory compliance.
A good Information Governance solution curates clean, sanitized data for AI that has security built-in from the beginning, not as an afterthought. Fine-tuning, RAG-powered solutions, or agents can leverage permissions, labels and other critical context that can be enforced at runtime. Done well, Information Governance accelerates the development of safe AI by making the right data easily available for use while making sure sensitive data is not exposed to the wrong users or systems, providing a solid foundation for AI Runtime Inspection and Enforcement.
Problem/Need |
AI Governance Tech Feature |
Outcome |
Fragmented view of enterprise data |
Deep scan across environments |
Visibility into all enterprise data |
Manual or incomplete classification |
Auto classification of complete data sets |
Accurate view of sensitive data, specific labels |
Overpermissioning |
Identification of overpermissioned users and data sets |
Tighter access controls |
Loss of context around data when moved from source system |
Preservation of critical context |
Labels to be used by AI Runtime Inspection and Enforcement, context to be used for governance/audit/visibility |
Data mapping |
Data provenance and visualization |
Data Map |
Data minimization |
Configurable data retention policies |
Reduced risk in ROT data (redundant, obsolete, trivial) |
Handling of sensitive data |
Filter, mask, redact sensitive data |
Curated sanitized data sets |
Maintenance of AI data security posture |
Periodic assessment of vulnerabilities |
Enhanced AI data security posture |
AI Runtime Inspection and Enforcement Technical Requirements
The purpose of AI Runtime Inspection and Enforcement technology is to monitor for and address risks and threats as they unfold at runtime. Organizations need assurances that AI model outputs can be trusted, risks are mitigated and that systems are secure. Prevention is the goal; discovering a cyber attack or sensitive data leakage after it has happened does little to help. Therefore, AI Runtime Inspection and Enforcement technologies must first and foremost have real-time observability into AI events. An AI event is a discrete interaction or state change occurring within an AI system or workflow, involving one or more key components: human users, autonomous or semi-autonomous AI agents, AI models, and the data being accessed, processed, or generated. Because any of these interactions can be attack points for cyber threats or failure points for data protections, they must be addressed. Merely monitoring prompts for known “jailbreak” attempts, for example, is insufficient.
A short list of AI events must be monitored:
- The user submits a prompt
- A prompt is engineered/modified
- Agent receives user request and formulates a query for a model
- Agent retrieves context data (e.g., from a vector database)
- Agent sends prompt/query and context data to a model
- Model performs inference
- Model accesses training data or external tools/APIs
- Model generates a response or output data
- Agent receives model response
- Agent formats and delivers response to the user
- Detection of sensitive data in a prompt or response
- Policy violation detected (e.g., prompt injection attempt, harmful content generation)
- Update to a model's configuration or fine-tuning data
The first challenge is visibility into these various systems. Visibility across multiple models, applications, databases, pipelines, etc requires a high degree of interoperability, lest organizations suffer from fragmented or incomplete views of their AI events. McKinsey’s 2025 Study“ The state of AI: How organizations are rewiring to capture value” revealed that only 27% of organizations deploying AI actually monitor all outputs. AI Runtime Inspection and Enforcement solutions must integrate with many different systems and tools.
The second challenge in the AI Runtime Inspection and Enforcement layer of AI TRiSM is the number and variety of AI events that must be monitored. AI systems generate vast amounts of data. This becomes more pronounced as organizations develop and deploy more AI use cases, connecting different models to different data sets and applications or potentially creating a multi-agent system where each agent executes different sub-tasks in complex end-to-end business processes. Human operators simply can not inspect all events in real-time to spot risks and threats. Inspection of AI events at scale requires a high degree of automation.
The third challenge is that AIs are non-deterministic, meaning they take “fuzzy inputs” and create probabilistic outputs. From IBM, “How Observability is Adjusting to Generative AI”, "Unlike traditional software, LLMs produce probabilistic outputs, meaning identical inputs can yield different responses. This lack of interpretability—or the difficulty in tracing how inputs shape outputs—can cause problems for conventional observability tools. This "black box" phenomenon highlights a critical challenge for LLM observability. While observability tools can detect problems that have occurred, they cannot prevent those issues because they struggle with AI explainability—the ability to provide a human-understandable reason why a model made a specific decision or generated a particular output.” Explicit rules-based approaches to monitoring are likely to prove insufficient. Robust AI runtime inspection & enforcement tech should be flexible, context-aware and able to discern the meaning and intent of various actions, ie, robust AI runtime inspection & enforcement should utilize AI to inspect events for potential issues.
Lastly, is the adaptability problem. AI systems, threats, and how people use them change constantly. Controls need to adapt just as fast. Static rules quickly become obsolete, and designing controls that can reliably handle the unpredictable or novel outputs and uses of AI is an ongoing challenge. AI runtime inspection & enforcement should be easily modifiable or have some capacity to learn and improve automatically.
Effective AI runtime inspection & enforcement facilitate the deployment of safe, trusted AI by building upon the Information Governance layer, utilizing labels to protect sensitive data, detecting drift from an established baseline of “normal” activity and detecting specific risks and threats in real-time to be remediated and fed back to sentinels in order to continuously learn and improve AI safety and security posture.
AI Governance Technical Requirements
The primary purpose of AI governance technologies is to provide a unified view of AI across the enterprise to facilitate trust, risk and security management. AI governance technologies must map the relationships between all AI models, agents, applications data and relevant policies to facilitate enterprise goals for TRiSM as well as compliance with regulations. Rapid identification of vulnerabilities or policy violations for swift remediation is how organizations manage TRiSM proactively.
Here, the familiar challenge of scale presents itself in a new way. The variety of tools, data sets and models across a fragmented landscape makes governance difficult. Establishing and tracking data provenance with an explicit purpose documented for every bit and byte of data across complex environments proves to be prohibitively laborious without the right tools. Managing versioning of models with documentation about their vulnerabilities, biases, etc., adds a layer of complexity. McKinsey’s study shows that organizations are rapidly deploying models in multiple domains.