The Dangers of Uncontrolled AI: Shadow AI and Ethical Risks

Author

Ankur Gupta

Director for Data Governance and AI Products at Securiti


We're in the midst of a generative AI revolution that's reshaping how we approach business. As we seek to harness its benefits, it's imperative to recognize the potential hazards of uncontrolled AI. To do so, businesses must prioritize transparency, safety, and security in their AI models, ensuring ethical and legal compliance and fostering customer trust.

Why Uncontrolled AI is a Recipe for Trouble

Integrating AI services into enterprise data models requires careful control and oversight across the entire AI lifecycle, from creation to deployment. This is essential to reduce the risk of security breaches, compromised data privacy, legal violations, and damaged brand trust. Yet an alarming gap exists between adoption and governance: a September 2023 survey from The Conference Board found that over half (56%) of US workers use generative AI technologies on the job, while a survey by ISACA indicates that only 10% of organizations have a formal generative AI policy in place.

We have thus entered the era of uncontrolled AI, in which AI governance becomes an increasingly vital priority for businesses that want to integrate AI models safely and transparently while driving positive business impact and meeting legal and ethical requirements. Without the right controls and oversight in place, enterprises face risks that can turn their quest for innovation and efficiency into a compliance and security calamity. Here are just a few of those dangers.

Shadow AI is Already Here

In this rapidly changing environment, the race to innovate is more competitive than ever, and privacy and security risks have never been more pressing. As companies rush to achieve business goals through the rapid incorporation of AI, many of those same organizations are still figuring out what their AI posture will be.

Without complete visibility into all AI systems, whether deployed internally or through SaaS, hidden models operate with unknown risks that can lead to astronomical costs down the line. Compounding the problem, shadow AI shows signs of proliferating even faster than shadow IT, the parallel challenge that has beset security and governance teams for decades and continues to do so.
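One basic form of the visibility described above is scanning outbound traffic for calls to known GenAI services. The sketch below is purely illustrative: the domain list, log format, and `find_shadow_ai` function are hypothetical stand-ins, not any real product's detection logic.

```python
# Hypothetical shadow-AI sweep: flag outbound requests to known GenAI
# endpoints in a network egress log. Domains and log schema are
# illustrative assumptions, not a complete or authoritative list.
KNOWN_AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def find_shadow_ai(egress_log: list[dict]) -> set[str]:
    """Return destination hosts in the log that match known GenAI domains."""
    return {entry["dest_host"] for entry in egress_log
            if entry["dest_host"] in KNOWN_AI_DOMAINS}

log = [
    {"src": "10.0.4.12", "dest_host": "api.openai.com"},
    {"src": "10.0.4.99", "dest_host": "example.com"},
]
print(find_shadow_ai(log))  # flags the unsanctioned GenAI traffic
```

A real deployment would correlate such hits with an asset inventory to distinguish sanctioned from shadow usage, rather than relying on a static domain list.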

Unidentified Risks Pave the Way for Unwanted Consequences

A lot of questions still exist around the use of AI — and many of them involve challenges that enterprise security teams have never encountered before.

Because AI models are a function not just of the model's code or the data that flows through it, but also of the logic that "learns" from that data, their output is more of a moving target than many organizations are used to. Blindness to potential risks like bias, discrimination, and "hallucinated" responses can cause serious setbacks for security, data, and compliance teams, increasing the likelihood of ethical violations and reputational damage. For example, Navy Federal Credit Union was recently hit with a lawsuit over allegations of racial bias in its mortgage lending practices.

Data Opacity Fuels Privacy Concerns

Organizations have long grappled with data transparency challenges in the world of privacy and security — and the same issues arise in the use of artificial intelligence.

Data generated by AI models is often cloaked in obscurity, raising questions about its origin, use, and accuracy. This unclear data usage lurking in AI models and pipelines raises doubt around entitlements and exposes sensitive information to potential leaks, derailing compliance efforts and exposing enterprises to a world of uncharted vulnerabilities. For example, a leading consumer electronics company banned ChatGPT among its employees after sensitive code was leaked.

Unsecured Models Create Vulnerabilities

As the use of AI expands, the need to implement data controls on model inputs and outputs also increases.

Sensitive information that flows into and out of AI models must meet data protection and privacy standards. A lack of security controls leaves AI models open to manipulation, data leakage, and malicious attacks. Organizations that want to avoid data breach incidents do not have the luxury of making AI security an afterthought; doing so threatens the integrity of the enterprise and the reliability of the brand.
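The simplest input/output control is masking sensitive values before a prompt reaches a model (and again on the response). The sketch below is a minimal illustration under stated assumptions: the regex patterns and the `redact` helper are hypothetical, and a production system would use a proper data classification engine rather than a handful of regular expressions.

```python
import re

# Illustrative patterns only: real deployments need far more robust
# detection (classification engines, validators, context awareness).
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Mask sensitive values in text before it reaches (or leaves) a model."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

prompt = "Summarize the account notes for jane.doe@example.com, SSN 123-45-6789."
print(redact(prompt))
```

The same filter can sit on the model's output path, so that a response assembled from retrieved documents cannot leak identifiers the caller is not entitled to see.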

Uncontrolled Interactions Invite Abuse

Unguarded prompts, agents, and assistants open the door to harmful interactions, threatening user safety and ethical principles.

It's crucial to understand how the data generated by these models is being utilized — whether it's being shared in a Slack channel, integrated into a website as a chatbot, disseminated through an API, or embedded in an app. Moreover, these agents, while serving as channels for legitimate queries, also become potential pathways for new types of attacks on AI systems.

Globally, policymakers are paying attention and taking action on the safe, secure, and trustworthy use of AI. The EU was the first to put a comprehensive AI law on the books with the aptly named AI Act, and several other nations promptly followed suit, with China, the UK, and Canada proposing or enacting AI legislation. The Biden-Harris administration also issued an executive order on the matter in late 2023.

Failure to keep pace with regulations like the EU AI Act and frameworks like the NIST AI RMF (AI Risk Management Framework) puts organizations at odds with responsible and ethical AI development, and exposes them to the substantial financial penalties and damaged brand reputation that can come from non-compliance.

5 Steps to AI Governance

Fortunately, there are ways that enterprises looking to enable the safe use of AI can integrate AI models into their data landscape while meeting legal requirements, upholding ethical standards, and driving positive business outcomes. Here’s how incorporating AI governance into a central Data Command Center enables the safe use of AI:

1. Discover AI Models

The first step is to discover and catalog AI models in use across public clouds, private clouds, and SaaS applications.

2. Assess Risks and Classify AI Models

Evaluate risks related to data and AI models, and classify AI models according to global regulatory requirements.

3. Map and Monitor Data + AI Flows

Connect models to data sources, data processing paths, vendors, potential risks, and compliance obligations — and continuously monitor data flow.

4. Implement Data + AI Controls for Privacy, Security, and Compliance

Establish data controls on model inputs and outputs, securing AI systems from unauthorized access or manipulation.

5. Comply with Regulations

Conduct assessments to comply with standards such as the NIST AI RMF and generate AI ROPA reports and AI system event logs.
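The five steps above can be tied together in a central catalog where each discovered model accumulates its risk class, mapped data flows, controls, and compliance evidence. The record structure below is a hypothetical sketch, not a Securiti schema; all field and function names are illustrative.

```python
from dataclasses import dataclass, field

# Hypothetical inventory record mirroring the five governance steps:
# discovery, risk classification, data mapping, controls, compliance.
@dataclass
class AIModelRecord:
    name: str                                  # Step 1: discovered model
    environment: str                           # e.g. "public-cloud", "saas"
    risk_tier: str = "unclassified"            # Step 2: regulatory risk class
    data_sources: list[str] = field(default_factory=list)  # Step 3: mapped flows
    controls: list[str] = field(default_factory=list)       # Step 4: applied controls
    assessments: list[str] = field(default_factory=list)    # Step 5: evidence

inventory: list[AIModelRecord] = []

def register_model(name: str, environment: str) -> AIModelRecord:
    """Step 1: add a newly discovered model to the central catalog."""
    record = AIModelRecord(name=name, environment=environment)
    inventory.append(record)
    return record

m = register_model("support-chatbot", "saas")
m.risk_tier = "high"                           # Step 2
m.data_sources.append("crm.customer_notes")    # Step 3
m.controls.append("input-output-redaction")    # Step 4
m.assessments.append("nist-ai-rmf-assessment") # Step 5
```

The point of the single record type is that compliance reporting (step 5) can be generated from the same catalog that discovery (step 1) populates, rather than from a separate spreadsheet that drifts out of date.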

Beyond merely “controlling” data, forward-thinking businesses that get ahead of the risk posed by uncontrolled AI will not only enable the safe use of AI through better governance that upholds ethical and legal standards, but will unlock untold value in business performance, insight, innovation, and brand reputation.

Read the whitepaper “5 Steps to AI Governance” to learn more about each of the actions above that you can take to start ensuring the safe, secure, trustworthy, and compliant use of AI.

5 Steps to AI Governance: Ensuring Safe, Trustworthy, and Compliant Artificial Intelligence

Download Whitepaper