The cautionary tale of Air Canada
Air Canada’s blunder was straightforward. The company’s chatbot told a customer that he could book a flight at full price and apply for a bereavement fare refund after the fact; when he tried to collect, the company refused to honor the chatbot’s answer. Air Canada pointed to conflicting information on its website, claimed that the website was the correct source, and asserted that it did not have control over the chatbot’s output.
The Canadian tribunal that heard the case didn’t agree, ruling that Air Canada is responsible for all the information it publishes, chatbot included, and must stand behind its technology. Although the amount at stake was trivial ($812 CAD), the publicity was anything but. I have no inside knowledge of how Air Canada operates internally or built its chatbot, but it appears the company overlooked two key considerations in its adoption of generative AI, considerations that, with a little preparation and forethought, other companies can get ahead of to avoid a similar outcome.
- Make sure you know where your content is coming from and how it is being used.
One point in the lawsuit was that the chatbot’s answer contradicted the content on the website. Was this an AI hallucination, like the one that led a New York lawyer to file a brief full of invented case citations last year? Or did it stem from maintaining two different sets of content: one used to train or drive the chatbot, and another published on the website? Either way, companies can get ahead of these issues by adopting technology that restricts generation to known, approved content with clearly traceable sources and responses, and by building unstructured data governance into their GenAI programs.
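To make the idea concrete, here is a minimal sketch of what "approved content with traceable sources" can look like in code. This is an illustration, not any vendor’s actual implementation: the `ApprovedPassage` structure, the keyword-overlap scoring, and the policy text are all assumptions standing in for a real retrieval system.

```python
# Minimal sketch: answer only from an approved content store, and attach
# the source of every answer so responses are traceable. The passages,
# scoring, and threshold below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ApprovedPassage:
    text: str
    source_url: str      # the single source of truth this passage came from
    last_reviewed: str   # when a human last verified it against policy

APPROVED_CONTENT = [
    ApprovedPassage(
        text="Bereavement fares must be requested before travel. A bereavement "
             "fare refund cannot be applied retroactively after travel.",
        source_url="https://example.com/policies/bereavement",
        last_reviewed="2024-01-15",
    ),
]

def overlap_score(query: str, passage: ApprovedPassage) -> int:
    """Crude keyword overlap; a real system would use embeddings."""
    return len(set(query.lower().split()) & set(passage.text.lower().split()))

def answer(query: str, min_score: int = 2):
    """Return (answer, source) from approved content, or decline."""
    best = max(APPROVED_CONTENT, key=lambda p: overlap_score(query, p))
    if overlap_score(query, best) < min_score:
        # Refusing beats improvising: never let the bot invent policy.
        return None, None
    return best.text, best.source_url

text, source = answer("Can I get a bereavement fare refund after my trip?")
print(text)
print(f"Source: {source}")
```

Because every response carries a `source_url`, a dispute like Air Canada’s becomes answerable: either the cited page says what the bot said, or the bot declines to answer.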
- Look beyond technical tools to prioritize the responsible use of AI.
The first steps of building and implementing a responsible AI program are technical: ensuring that the data used by the AI is clean and licensed, measuring and controlling for algorithmic bias, and testing and understanding the results. Ethical use, however, extends beyond technical controls. Much of a responsible AI program is driven by the business, by panels of people who come together to discuss, decide, and act on a company’s unique imperatives. Whether through an AI ethics committee or another formal organizational construct, people across a company’s operations should be involved in understanding and fielding the technology. A practical way to support that is a centralized, unified Data Command Center that everyone in the decision loop can see, so they can understand and agree on how systems are implemented.
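One of the technical controls mentioned above, measuring algorithmic bias, can be surprisingly simple to start. Below is a rough sketch using demographic parity difference, one common fairness metric; the groups, outcomes, and any threshold an ethics committee might set are hypothetical.

```python
# Minimal sketch of one pre-deployment bias check: demographic parity
# difference, the gap in positive-outcome rates between groups.
# The groups and model decisions below are made-up illustrative data.
from collections import defaultdict

def demographic_parity_gap(groups, outcomes):
    """Largest gap in positive-outcome rate between any two groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for g, y in zip(groups, outcomes):
        totals[g] += 1
        positives[g] += y
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model decisions (1 = approved) for two customer groups.
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
outcomes = [ 1,   1,   1,   0,   1,   0,   0,   0 ]

gap, rates = demographic_parity_gap(groups, outcomes)
print(rates)                      # {'A': 0.75, 'B': 0.25}
print(f"parity gap: {gap:.2f}")   # a committee might set a maximum here
```

The metric itself is the easy part; deciding what gap is acceptable, and what to do when it’s exceeded, is exactly the kind of judgment an ethics committee exists to make.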
It stands to reason that Air Canada might have avoided the damaging publicity of the chatbot lawsuit had it put active programs in place to address both points above: knowing and understanding the origin of its training data, and relying on the direction of an informed AI ethics committee. The good news is that getting your AI program implemented, deployed, and operational is a straightforward process. Taking a Data Command Center approach to GenAI and establishing your own AI ethics committee will set you up with responsible, ethical AI practices that will carry your company far into the future. Reach out for a demo to see how Securiti can help.