The Presidio AI Framework: Chart the Future of AI Innovation Responsibly

Contributors

Anas Baig

Product Marketing Manager at Securiti

Maria Khan

Data Privacy Legal Manager at Securiti

FIP, CIPT, CIPM, CIPP/E


Introduction

The World Economic Forum (WEF) convened for its 54th annual meeting in Davos-Klosters from 15 to 19 January 2024. The agenda covered several critical items, including medical preparedness, sustainable global economic growth, and, of course, artificial intelligence (AI).

The meeting, attended by entrepreneurs, journalists, domain experts, and hundreds of representatives of national governments and businesses, yielded a plethora of insights. In the midst of all this, the AI Governance Alliance (AIGA) presented the Presidio AI Framework.

Building on the Presidio Recommendations from the Responsible AI Leadership Summit in April 2023, AIGA developed a series of Briefing Papers that present the ideas, insights, and recommendations of the various stakeholders involved in responsible AI development.

The Presidio AI Framework is the culmination of the Safe Systems and Technology Track's work in tandem with IBM Consulting. The Framework emphasizes the need for a standardized methodology for managing the generative AI model lifecycle, while also highlighting the need for multistakeholder governance and transparent communication.

More importantly, the Presidio AI Framework is intended to be a comprehensive guide for businesses aiming to leverage AI capabilities within their existing operations. The roadmap it presents provides organizations with the tools and principles needed to navigate the ethical, regulatory, and safety challenges posed by AI.

In other words, the framework proffers not just a strategy but a vision that can help organizations prepare for an AI-driven world grounded in common ethical principles and values.

Key Tenets of the Presidio AI Framework

Here are the three critical tenets of the Presidio AI Framework:

Expanded AI Lifecycle

The Expanded AI Lifecycle proposed by the Presidio Framework recommends an approach to AI development that extends beyond the conventional stages of design, build, and deployment. The expanded lifecycle covers a broader range of activities, including an initial concept evaluation, ethical impact assessments, stakeholder identification and engagement, and rigorous post-deployment monitoring and feedback.

For organizations involved in the development of AI models and systems, adopting the aforementioned lifecycle approach would require a combination of comprehensive planning and visionary foresight.

Concept & Design

The initial stages require a thorough evaluation of the direct and indirect impact of the AI models and systems being developed, including their immediate effect on the various stakeholders involved. This helps assess whether such developments align with the ethical guidelines, regulatory requirements, and operational aspirations of all stakeholders.

Data Collection & Curation

All data collected to train AI models and systems in the development phase should be screened via an extensive process to ensure its quality, diversity, and appropriateness in terms of representation. By doing so, an organization can avoid issues such as bias and unfair outcomes that stem from deficiencies in the data collection process.
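As an illustration of the kind of representation screening described above, here is a minimal Python sketch. The `min_share` threshold, the group attribute, and the sample records are hypothetical choices for demonstration, not values prescribed by the Presidio AI Framework:

```python
from collections import Counter

def representation_report(records, group_key, min_share=0.10):
    """Summarize how each group is represented in a training set and
    flag groups whose share falls below `min_share` (an illustrative
    threshold, not a framework-prescribed value)."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {
        group: {
            "share": round(n / total, 3),
            "underrepresented": n / total < min_share,
        }
        for group, n in counts.items()
    }

# Hypothetical training records tagged with a demographic attribute.
data = [
    {"region": "EU"}, {"region": "EU"},
    {"region": "US"}, {"region": "APAC"},
]
report = representation_report(data, "region", min_share=0.30)
```

In practice, a curation pipeline would run such a report on every dataset refresh and block training until flagged groups are rebalanced or the gap is documented and accepted.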

Testing & Validation

Once development begins, organizations will need to implement mechanisms that enable a cycle of continuous risk assessment and appropriate mitigation. These mechanisms must be built on robust data governance practices that maintain the privacy and quality of the data used to train the AI models.

Model Development & Training

For organizations to truly adopt and, more importantly, to truly benefit from the expanded lifecycle approach, resolute investment in appropriate training, resources, and processes will be crucial. This involves the adoption of appropriate techniques that lend greater explainability to how particular AI models and systems generate specific outputs.

It is important to note that, in most cases, this will translate into higher operational costs and greater overall project complexity. However, such an approach yields significant long-term benefits, including enhanced customer trust, a lower risk of regulatory non-compliance and the penalties that follow, and better alignment of the organization's AI development efforts with societal values.

Deployment & Monitoring

The post-deployment phase is just as critical, requiring a similarly rigorous mechanism to proactively monitor deployed AI models and systems for any biases that may emerge. The Framework recommends an open channel of communication between users and all major stakeholders at this juncture to ensure timely feedback that facilitates proactive interventions and immediate mitigation of identified issues.
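One way such post-deployment monitoring could be implemented is a periodic comparison of per-group outcome rates in production against a baseline recorded at validation time. The sketch below is a hedged illustration; the `tolerance` threshold, group names, and data are assumptions chosen purely for demonstration:

```python
def drift_alerts(baseline_rates, live_outcomes, tolerance=0.05):
    """Compare per-group positive-outcome rates observed in production
    against validation-time baseline rates, and return the groups whose
    rate has drifted beyond `tolerance` (an illustrative value)."""
    alerts = {}
    for group, outcomes in live_outcomes.items():
        live_rate = sum(outcomes) / len(outcomes)
        gap = abs(live_rate - baseline_rates[group])
        if gap > tolerance:
            alerts[group] = round(gap, 3)
    return alerts

# Hypothetical baseline rates and a recent window of binary outcomes.
baseline = {"group_a": 0.50, "group_b": 0.50}
live = {"group_a": [1, 0, 1, 0], "group_b": [1, 1, 1, 0]}
alerts = drift_alerts(baseline, live)
```

A monitoring job could run this check on a rolling window and route any alerts to the stakeholder feedback channel the Framework recommends.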

More importantly, organizations that successfully embrace this expanded lifecycle approach will further solidify and consolidate their claims as leaders in responsible AI development. As a result, such organizations can gain a competitive advantage in a market increasingly concerned about the various ethical implications of AI development.

The Presidio AI Framework encourages organizations to move towards a more ethical, transparent, and accountable AI development process that emphasizes proactive engagement with multiple stakeholders within the context of social and ethical considerations. Organizations that adopt such a shift can expect not only mitigated risks and a higher likelihood of regulatory compliance, but also an opportunity to lead in the development and implementation of responsible AI.

Comprehensive Risk-Guardrails

Through its comprehensive risk guardrails, the Presidio Framework aims to ensure that all AI models and systems are designed to be safe, ethical, and aligned with societal values by instilling these qualities from the development phase onward.

It does so by promoting the establishment of effective and robust mechanisms that appropriately identify, assess, and mitigate all relevant risks across major considerations such as privacy, security, and transparency. These mechanisms act as the guardrails that enable the adoption of a systemic approach to risk management as well as its effective integration across various stages of the aforementioned expanded AI lifecycle.

These comprehensive guardrails can include but are not limited to the following:

  1. Technical Safeguards: These will be deployed to ensure the appropriate security and integrity of all AI models and systems being developed. These can include but are not limited to the use of adversarial training, which enables models to thwart and resist manipulation attempts, as well as data augmentation, which diversifies the training data to deliver better model generalization outputs.
  2. Algorithmic Bias Detection & Mitigation: This will focus on organizations deploying measures to appropriately identify and mitigate any identified biases within the developed AI models and systems. Exact measures can include but are not limited to the use of various fairness metrics and bias detection tools and techniques, which can be leveraged depending on organizational needs.
  3. Human Oversight & Control Mechanisms: This would involve organizations leveraging some form of a Human-in-the-loop (HITL) methodology, which ensures that any developed AI models and systems do not operate with absolute autonomy and allows for human intervention, especially in instances that involve critical decision-making processes. Further steps can be taken to clearly identify the roles and responsibilities related to the monitoring, management, and intervention in AI operations.
  4. Transparency & Explainability: This is more of an organizational philosophy driving the internal culture of the organization rather than an exact set of measures and safeguards. The purpose is to ensure that any developed AI models and systems are sufficiently understandable to humans. Hence, various mechanisms and measures can be leveraged that allow for an appropriate explanation of how the AI models and systems in use within the organizations work. These mechanisms and measures are vital for organizations in terms of building trust, fostering a culture of accountability, and facilitating the interpretation of AI outputs accurately.
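To make the human-in-the-loop (HITL) idea from item 3 concrete, a minimal confidence-based gate might look like the following sketch. The `threshold` value, decision labels, and return shape are illustrative assumptions, not part of the Framework itself:

```python
def route_decision(prediction, confidence, threshold=0.90):
    """Human-in-the-loop gate: only predictions at or above the
    confidence threshold are acted on automatically; everything else
    is escalated to a human reviewer. The 0.90 default is illustrative."""
    if confidence >= threshold:
        return {"action": "auto", "decision": prediction}
    return {"action": "human_review", "decision": None}

# A high-confidence prediction proceeds; a low-confidence one is escalated.
auto = route_decision("approve_loan", 0.97)
escalated = route_decision("approve_loan", 0.62)
```

For critical decision-making processes, the threshold could be set so high that effectively every decision is escalated, with the gate serving mainly to log the model's suggestion for the reviewer.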

Naturally, the guardrails themselves will have to go above and beyond the traditional safeguards utilized by organizations. Done properly, this can empower organizations to effectively and accurately anticipate potential risks and issues, both technical and non-technical, at the development stage.

Furthermore, the adoption of comprehensive risk guardrails symbolizes a grander commitment toward responsible AI development. Organizations will need to invest in the appropriate tools, capabilities, human resources, and infrastructure to facilitate the adoption of such guardrails, followed by rigorous monitoring to ensure efficient risk mitigation and management.

Additionally, transparent reporting mechanisms must be developed simultaneously to enable timely demonstration of an organization’s adherence to ethical expectations and regulatory requirements while also instilling a reliable accountability structure within the organization.

Shift-Left Methodology

The Shift-Left Methodology within the Presidio AI Framework represents a proactive approach to AI development, emphasizing early integration of ethical, safety, and compliance considerations.

Originating from software engineering, "shift left" refers to the practice of incorporating tests and quality assurance (QA) early in the development cycle to identify and fix issues promptly, thus saving time and resources. This methodology necessitates a fundamental change in how AI development is traditionally conducted, advocating for a proactive rather than reactive risk management strategy. It encourages early and close collaboration among developers, testers, and operations teams and requires the inclusion of a diverse group of stakeholders, including ethicists and compliance experts, in AI project planning.

The adoption of the Shift-Left Methodology necessitates significant planning and execution changes, aiming to develop AI solutions that are technologically advanced while being socially responsible. Importantly, it addresses the increasing need for regulatory compliance in the face of stricter AI regulations worldwide, enabling organizations to mitigate non-compliance risks, stay ahead of regulatory changes, and enhance public trust in their AI development processes.
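Borrowing from the software-engineering origin of "shift left", an ethical or compliance check can be expressed as an ordinary automated test that runs early in the pipeline rather than after deployment. The sketch below uses a demographic parity gap with an illustrative budget of 0.30; the metric choice, budget, and sample predictions are all assumptions for demonstration:

```python
def demographic_parity_gap(outcomes_by_group):
    """Largest difference in positive-outcome rate between any two groups."""
    rates = [sum(v) / len(v) for v in outcomes_by_group.values()]
    return max(rates) - min(rates)

def test_parity_gate():
    # In a shift-left pipeline this check runs in CI on validation-set
    # predictions, failing the build if the gap exceeds the agreed budget.
    preds = {"group_a": [1, 0, 1, 1], "group_b": [1, 0, 1, 0]}
    assert demographic_parity_gap(preds) <= 0.30

test_parity_gate()
```

Treating such a check like any other unit test is what moves fairness from a post-deployment audit item to a build-blocking quality gate.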

The path toward the development and wide-scale adoption of responsible AI will be a long and arduous journey. It will require a comprehensive, collaborative effort across multiple sectors and disciplines. This need for greater collaboration will gain extensive importance as AI technologies and capabilities continue to evolve at the current breakneck pace.

Compounding the speed of this evolution is the increasingly evident integration of AI into both quotidian tasks and critical infrastructure. This further highlights the urgent need for a robust, ethical, and reliable framework that can help organizations adopt emerging technologies with clear consideration for their impact on society.

In the face of this, the Presidio AI Framework's emphasis on early adoption of risk guardrails and a comprehensive AI lifecycle approach empowers organizations to anticipate and mitigate possible threats before they pose significant risks to the organization. More importantly, the Framework establishes a clear precedent for a proactive, rather than reactive, approach to AI governance.

Furthermore, owing to the complexity of the AI models and systems being developed, organizations must look to increasingly nuanced approaches that provide a broader platform for stakeholders such as industry leaders, policymakers, and regulators. Such an approach underscores the relevance of the Presidio Framework as a shared responsibility model that takes diverse perspectives and expertise into account, making it more adaptable to the intricacies of emerging AI technologies as well as their social implications.

How Securiti Can Help

Securiti is the pioneer of the Data Command Center, a centralized platform that enables the safe use of data and AI. It provides unified data intelligence, controls, and orchestration across hybrid multi-cloud environments. Large global enterprises rely on Securiti's Data Command Center for AI governance, data security, privacy, data governance, and compliance.

The AI Security & Governance solution empowers organizations to discover shadow AI and institute privacy, security, and governance guardrails to drive the safe adoption of AI.

Request a demo today and learn more about how Securiti can empower your organization to go from AI anxiety to its methodical and safe adoption.
