How Unstructured Data Governance Can Prevent Costly Mishaps in GenAI

Published May 6, 2024

In the past several months, we've seen a flurry of embedded generative AI applications released, from Microsoft Copilot to Adobe Firefly. (The image of me below, with its generous interpretation of my beard, is courtesy of the latter.) The revolution is upon us, with foundation models like OpenAI GPT, Anthropic Claude, Mistral, and others rapidly expanding in capability and completeness.

However, many companies are still struggling to get their initial work into production. The industry is changing quickly, and the technical pace is almost overwhelming. In March, the Silicon Valley venture capital firm Andreessen Horowitz published the eye-opening results of its survey on enterprise leaders' changing opinions of generative AI. The findings show that, although budgets for generative AI are "skyrocketing," many companies remain concerned about the security of their sensitive data. And while enterprises are building their own apps, they're far more excited about GenAI for internal use cases than external-facing ones. February's Air Canada chatbot lawsuit may help shed some light on why.

The cautionary tale of Air Canada

Air Canada's blunder was pretty straightforward. The company's chatbot gave a customer information about bereavement fares that contradicted the airline's actual policy, and the company later refused to honor the chatbot's response. Air Canada cited conflicting information on its website, claimed that the website was the correct source, and asserted that it did not have control over the chatbot's output.

The Canadian court didn't agree, ruling that Air Canada was responsible for the tools it deploys and must stand behind its technology. Although the financials at stake were far from significant ($812 CAD), the publicity was less than ideal. And while I have no special information about how Air Canada operates internally or how it built its chatbot, it seems to me that Air Canada was missing two key considerations in its adoption of generative AI technologies, considerations that, with a little preparation and forward thinking, companies can get ahead of to avoid a similar issue.

  1. Make sure you know where your content is coming from and how it is being used.
    One of the points in the lawsuit was that the chatbot cited different content than the website. Was this an AI-induced hallucination like the one that tripped up a New York lawyer last year? Or was it the result of using two different sets of content: one to train or manage the chatbot and one to live on the website? In either case, companies can get ahead of these issues by adopting technology that enables the use of known and approved content with clearly traceable sources and responses, and by building unstructured data governance into their GenAI programs (a minimal sketch of this pattern follows this list).
  2. Look beyond technical tools to prioritize the responsible use of AI.
    The first steps of building and implementing a responsible AI program involve setting up technical controls and guarantees: ensuring that the data used by the AI is clean and licensed, measuring and controlling for algorithmic bias, and testing and understanding the results. But ethical use extends beyond the technical. Much of what constitutes a responsible AI program is driven by the business, and by the ethical considerations of panels of people who come together to discuss, decide, and act on a company's unique imperatives. Whether through an AI ethics committee or another formal organizational construct, people throughout a company's operations should be involved in understanding and fielding the technology. A good way to do that is to implement a centralized, unified Data Command Center that everyone in the decision loop can visualize, so they can understand and agree on how systems are implemented.

It stands to reason that Air Canada might have avoided the damaging publicity of the lawsuit over its chatbot if it had put active programs in place to address both points above: knowing and understanding the origin of its training data, and relying on the direction of an informed AI ethics committee. The good news is that getting your AI program implemented, deployed, and operational is a straightforward process. Taking a Data Command Center approach to GenAI, and establishing your own AI ethics committee, will set you up with responsible, ethical AI practices that will take your company far into the future. Reach out for a demo to see how Securiti can help.
