How Unstructured Data Governance Can Prevent Costly Mishaps in GenAI

Author

Jack Berkowitz

Chief Data Officer at Securiti


In the past several months, we’ve seen a flurry of embedded generative AI applications released, from Microsoft Copilot to Adobe Firefly. (The image of me below, with its generous interpretation of my beard, is courtesy of the latter.) The revolution is upon us, with foundation models like OpenAI GPT, Anthropic Claude, Mistral, and others rapidly expanding in capability and coverage.

However, many companies are still struggling to get their initial work into production. The industry is changing quickly, and the technical pace can feel overwhelming. In March, the Silicon Valley venture capital firm Andreessen Horowitz published the eye-opening results of its survey on enterprise leaders’ changing opinions of generative AI. The findings show that, although budgets for generative AI are “skyrocketing,” many companies remain concerned about the security of their sensitive data. And while enterprises are building their own apps, they’re far more excited about GenAI for internal use cases than for external-facing ones. February’s Air Canada chatbot lawsuit may help shed some light on why.

The cautionary tale of Air Canada

Air Canada’s blunder was pretty straightforward. The company’s chatbot gave a customer information about bereavement fares that conflicted with its published policy, and the company later refused to honor the chatbot’s response. Air Canada cited the conflicting information on its website, claimed that the website was the correct source, and asserted that it did not have control over the chatbot’s output.

The Canadian court didn’t agree, ruling that Air Canada is responsible for the information its technology provides and should stand behind it. Although the financial stakes were modest ($812 CAD), the publicity was less than ideal. And while I have no inside knowledge of how Air Canada operates or built its chatbot, it seems to me the company missed two key considerations in its adoption of generative AI, considerations that, with a little preparation and forward thinking, other companies can get ahead of to avoid a similar issue.

  1. Make sure you know where your content is coming from and how it is being used.
    One of the points in the lawsuit was that the chatbot cited different content than the website. Was this an AI-induced hallucination, like the one that tripped up a New York lawyer last year? Or was it the result of using two different sets of content: one to train or manage the chatbot and one to live on the website? In either case, companies can get ahead of these issues by adopting technology that restricts GenAI applications to known, approved content with clearly traceable sources and responses, and by building unstructured data governance into their GenAI programs. (A minimal sketch of this pattern appears after this list.)
  2. Look beyond technical tools to prioritize the responsible use of AI.
    The first steps of building and implementing a responsible AI program are technical: ensuring that the data used by the AI is clean and licensed, measuring and controlling for algorithmic bias, and testing and understanding the results. But ethical use extends beyond the technical. Much of what constitutes a responsible AI program is driven by the business and by the ethical judgment of people who come together to discuss, decide, and act on a company’s unique imperatives. Whether through an AI ethics committee or another formal organizational construct, people throughout a company’s operations should be involved in understanding and fielding the technology. A good way to support that is a centralized, unified Data Command Center that everyone in the decision loop can see, so they can understand and agree on how systems are implemented.
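
To make the first point concrete, here is a minimal sketch of what “known and approved content with traceable sources” can look like in practice. It is an illustration only, not Securiti’s implementation: the corpus, the document fields, the keyword-overlap scoring, the example URL, and the fallback message are all hypothetical placeholders, and a production system would use a governed content index and an embedding-based retriever rather than an in-memory list.

```python
"""Minimal sketch: answer only from approved, source-tagged content.

Everything here (ApprovedDocument, APPROVED_CORPUS, the scoring, the URLs)
is a hypothetical placeholder used to illustrate the governance pattern.
"""

from dataclasses import dataclass


@dataclass
class ApprovedDocument:
    doc_id: str      # stable identifier, useful for audit trails
    source_url: str  # where the reviewed content is published
    text: str        # the approved policy text itself


# Hypothetical approved corpus: the same content that appears on the website.
APPROVED_CORPUS = [
    ApprovedDocument(
        doc_id="policy-bereavement-001",
        source_url="https://example.com/travel-info/bereavement-fares",
        text="Bereavement fare requests must be submitted before travel; "
             "refunds cannot be claimed retroactively after the flight.",
    ),
]


def retrieve(question: str, corpus: list[ApprovedDocument], min_overlap: int = 2):
    """Return the best-matching approved document, or None if nothing matches.

    Scoring is naive keyword overlap purely for illustration; the governance
    point is that answers must trace back to a reviewed document.
    """
    q_terms = set(question.lower().split())
    best, best_score = None, 0
    for doc in corpus:
        score = len(q_terms & set(doc.text.lower().split()))
        if score > best_score:
            best, best_score = doc, score
    return best if best_score >= min_overlap else None


def answer_with_provenance(question: str) -> dict:
    """Answer only when an approved source supports it; otherwise escalate."""
    doc = retrieve(question, APPROVED_CORPUS)
    if doc is None:
        return {"answer": "I can't confirm that. Let me connect you with an agent.",
                "source": None}
    # The cited source travels with every response, so a reviewer (or a
    # customer) can check the chatbot against the website it was drawn from.
    return {"answer": doc.text,
            "source": {"doc_id": doc.doc_id, "url": doc.source_url}}


if __name__ == "__main__":
    print(answer_with_provenance("Can I get a bereavement fare refund after my flight?"))
```

The governance point is not the retrieval mechanics but the contract: every answer either traces back to a reviewed document that matches the website, or the assistant declines and escalates. That traceability is exactly what appears to have been missing in the Air Canada case.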

It stands to reason that Air Canada might have avoided the damaging publicity of the chatbot lawsuit had it put active programs in place to address both points above: knowing and understanding the origin of its content, and relying on the direction of an informed AI ethics committee. Either way, getting your AI program implemented, deployed, and operational can be a straightforward process. Taking a Data Command Center approach to GenAI, and establishing your own AI ethics committee, will set you up with responsible, ethical AI practices that will carry your company far into the future. Reach out for a demo to see how Securiti can help.
