Securiti leads GigaOm's DSPM Vendor Evaluation with top ratings across technical capabilities & business value.

View

An Overview of Canadian Guardrails for Generative AI

Contributors

Anas Baig

Product Marketing Manager at Securiti

Omer Imran Malik

Senior Data Privacy Consultant at Securiti

FIP, CIPT, CIPM, CIPP/US

Listen to the content

Canada has become a leading force in Generative AI's responsible and ethical development in a rapidly evolving field of artificial intelligence. The nation has paved the way for developing comprehensive AI guardrails with a persistent commitment to fostering innovation while bolstering the principles of transparency, fairness, and accountability.

While Generative AI boasts several benefits, it’s a powerful tool that attracts malicious actors to use it maliciously or inappropriately, raising serious concerns across the private and public sectors and among leading AI industry experts. Consequently, the Canadian government is making more than just voluntary guardrails in the field of Generative AI, such as the Artificial Intelligence and Data Act (AIDA), which establishes standards for "high-impact systems."

Additionally, Canadian Guardrails for Generative AI – Code of Practice has been issued by Innovation, Science and Economic Development Canada (ISED), a federal department of the Government of Canada responsible for several duties, including regulating industry and commerce, fostering innovation and science, and assisting in economic development.

Under the proposed document, the Government of Canada seeks comments on the proposed code of practice elements for generative AI systems. The Government of Canada's AI Advisory Council will host a series of virtual and hybrid roundtables and expert evaluations as part of this process.

In this blog, we will embark on a journey to explore the multifaceted landscape of Canadian guardrails for generative AI, particularly the recently introduced Canadian Guardrails for Generative AI – Code of Practice.

Code of Practice – Elements

Since the introduction of Bill C-27 in June 2022, the Government of Canada has actively engaged with stakeholders regarding AIDA.

The following proposed elements of a code of practice for generative AI systems are being considered by the Government of Canada for comments based on the inputs received thus far from a wide range of stakeholders.

Safety

Throughout the AI system's lifecycle, safety must be considered holistically with a broad view of potential implications, especially regarding misuse. Many generative AI systems have diverse applications; therefore, their safety risks must be evaluated more extensively than the systems with limited applications.

Developers and deployers of generative AI systems should recognize the potential for harmful use of the system, such as using it to impersonate real individuals or launch spearfishing assaults, and take measures to prevent it from occurring.

Developers, deployers, and operators of generative AI systems should be aware of the potential risks associated with the system, such as using a large language model (LLM) for providing legal or medical advice and taking precautions to avoid them. One such measure would be informing users of the system's capabilities and limitations.

Fairness and Equity

Generative AI systems can negatively affect societal fairness and equity through, for example, the perpetuation of biases and damaging preconceptions due to the large datasets on which they are trained and the scale at which they are implemented. Ensuring that models are trained on relevant and representative data and producing accurate, unbiased, and relevant outputs is crucial.

Generative AI system developers should evaluate and curate the data to avoid biased or low-quality datasets. On the other hand, developers, deployers, and operators of generative AI systems should implement measures to assess and mitigate the risk of biased output (e.g., fine-tuning).

Transparency

When it comes to Generative AI systems, transparency is a huge challenge. Their training data and source code might not be made available to the general public, and their output might be challenging to understand or justify. It is crucial to ensure that individuals are aware when dealing with an AI system or the output produced by AI tools as generative AI systems evolve and become increasingly advanced.

Developers and deployers of generative AI systems should provide an impartial and publicly accessible method to identify content produced by the AI system (for example, watermarking) and provide a comprehensive account of the development process, including the source of training data and steps taken to identify and mitigate risks. Additionally, to prevent systems from being confused for humans, operators of generative AI systems should make sure that they are clearly and conspicuously labeled as such.

Human Oversight and Monitoring

Human oversight and monitoring are essential to ensure that AI systems are developed, implemented, and used responsibly. Before making generative AI systems widely accessible, developers, deployers, and operators must take particular measures to ensure adequate human oversight and mechanisms to identify and report negative effects. This is due to the scale of deployment and the wide range of potential uses and misuse of these systems.

Given the deployment scope, the method in which the system is made accessible for usage, and its user base, deployers, and operators of generative AI systems should provide human oversight in the deployment and operations of their system.

Developers, deployers, and operators of generative AI systems should put in place procedures to enable the identification and reporting of negative effects once the system is made public (for example, maintaining an incident database), and they should commit to routine model enhancements based on results (for example, fine-tuning).

Validity and Robustness

Relying on AI systems requires ensuring they function as intended and are resilient across the situations to which they are likely to be exposed. Since they can be employed in various contexts and may be more vulnerable to misuse and attacks, trusting a Generative AI model has proved to be an increasing challenge. While AI’s agility makes it promising, stringent controls and testing must be implemented to prevent abuse and unforeseen consequences.

To assess the performance and identify vulnerabilities, developers of generative AI systems should employ a wide range of testing techniques across various activities and situations, including adversarial testing (such as red-teaming). Moreover, to prevent or identify adversarial attacks on the system (such as data poisoning), developers, deployers, and operators of generative AI systems should use the appropriate cybersecurity measures.

Accountability

The risk profiles of generative AI systems are extensive and complex. While internal governance mechanisms are crucial for any organization developing, deploying, or operating AI systems, special attention and care must be given to generative AI systems to ensure that a thorough and multifaceted risk management process is followed and that employees throughout the AI value chain know their responsibilities.

For the safety of their system, developers, deployers, and operators of generative AI systems should put multiple lines of safeguards in place, such as engaging in internal and external (independent) audits both before and after the system is put into operation and developing policies, procedures, and training to ensure that roles and responsibilities are clearly defined and that employees are familiar with their duties and the organization's risk management practices.

Conclusion

Organizations should adhere to these guidelines and other global AI best practices to prevent their AI systems from operating in ways that can endanger people, such as impersonation or giving incorrect outputs. Additionally, organizations must use approaches like 'red teaming' to identify and fix system problems and train their AI systems on sample datasets to reduce biased outputs.

Organizations should explicitly label AI-generated content to prevent conflict with human content and give consumers the information they need to make decisions. Organizations are also urged to share important details about the inner workings of their AI systems to increase user confidence and understanding.

Join Our Newsletter

Get all the latest information, law updates and more delivered to your inbox


Share


More Stories that May Interest You

Videos

View More

Mitigating OWASP Top 10 for LLM Applications 2025

Generative AI (GenAI) has transformed how enterprises operate, scale, and grow. There’s an AI application for every purpose, from increasing employee productivity to streamlining...

View More

DSPM vs. CSPM – What’s the Difference?

While the cloud has offered the world immense growth opportunities, it has also introduced unprecedented challenges and risks. Solutions like Cloud Security Posture Management...

View More

Top 6 DSPM Use Cases

With the advent of Generative AI (GenAI), data has become more dynamic. New data is generated faster than ever, transmitted to various systems, applications,...

View More

Colorado Privacy Act (CPA)

What is the Colorado Privacy Act? The CPA is a comprehensive privacy law signed on July 7, 2021. It established new standards for personal...

View More

Securiti for Copilot in SaaS

Accelerate Copilot Adoption Securely & Confidently Organizations are eager to adopt Microsoft 365 Copilot for increased productivity and efficiency. However, security concerns like data...

View More

Top 10 Considerations for Safely Using Unstructured Data with GenAI

A staggering 90% of an organization's data is unstructured. This data is rapidly being used to fuel GenAI applications like chatbots and AI search....

View More

Gencore AI: Building Safe, Enterprise-grade AI Systems in Minutes

As enterprises adopt generative AI, data and AI teams face numerous hurdles: securely connecting unstructured and structured data sources, maintaining proper controls and governance,...

View More

Navigating CPRA: Key Insights for Businesses

What is CPRA? The California Privacy Rights Act (CPRA) is California's state legislation aimed at protecting residents' digital privacy. It became effective on January...

View More

Navigating the Shift: Transitioning to PCI DSS v4.0

What is PCI DSS? PCI DSS (Payment Card Industry Data Security Standard) is a set of security standards to ensure safe processing, storage, and...

View More

Securing Data+AI : Playbook for Trust, Risk, and Security Management (TRiSM)

AI's growing security risks have 48% of global CISOs alarmed. Join this keynote to learn about a practical playbook for enabling AI Trust, Risk,...

Spotlight Talks

Spotlight 14:21

AI Governance Is Much More than Technology Risk Mitigation

AI Governance Is Much More than Technology Risk Mitigation
Watch Now View
Spotlight 12:!3

You Can’t Build Pipelines, Warehouses, or AI Platforms Without Business Knowledge

Watch Now View
Spotlight 47:42

Cybersecurity – Where Leaders are Buying, Building, and Partnering

Rehan Jalil
Watch Now View
Spotlight 27:29

Building Safe AI with Databricks and Gencore

Rehan Jalil
Watch Now View
Spotlight 46:02

Building Safe Enterprise AI: A Practical Roadmap

Watch Now View
Spotlight 13:32

Ensuring Solid Governance Is Like Squeezing Jello

Watch Now View
Spotlight 40:46

Securing Embedded AI: Accelerate SaaS AI Copilot Adoption Safely

Watch Now View
Spotlight 10:05

Unstructured Data: Analytics Goldmine or a Governance Minefield?

Viral Kamdar
Watch Now View
Spotlight 21:30

Companies Cannot Grow If CISOs Don’t Allow Experimentation

Watch Now View
Spotlight 2:48

Unlocking Gen AI For Enterprise With Rehan Jalil

Rehan Jalil
Watch Now View

Latest

View More

From Trial to Trusted: Securely Scaling Microsoft Copilot in the Enterprise

AI copilots and agents embedded in SaaS are rapidly reshaping how enterprises work. Business leaders and IT teams see them as a gateway to...

The ROI of Safe Enterprise AI View More

The ROI of Safe Enterprise AI: A Business Leader’s Guide

The fundamental truth of today’s competitive landscape is that businesses harnessing data through AI will outperform those that don’t. Especially with 90% of enterprise...

Data Security Governance View More

Data Security Governance: Key Principles and Best Practices for Protection

Learn about Data Security Governance, its importance in protecting sensitive data, ensuring compliance, and managing risks. Best practices for securing data.

AI TRiSM View More

What is AI TRiSM and Why It’s Essential in the Era of GenAI

The launch of ChatGPT in late 2022 was a watershed moment for AI, introducing the world to the possibilities of GenAI. After OpenAI made...

Managing Privacy Risks in Large Language Models (LLMs) View More

Managing Privacy Risks in Large Language Models (LLMs)

Download the whitepaper to learn how to manage privacy risks in large language models (LLMs). Gain comprehensive insights to avoid violations.

View More

Top 10 Privacy Milestones That Defined 2024

Discover the top 10 privacy milestones that defined 2024. Learn how privacy evolved in 2024, including key legislations enacted, data breaches, and AI milestones.

Comparison of RoPA Field Requirements Across Jurisdictions View More

Comparison of RoPA Field Requirements Across Jurisdictions

Download the infographic to compare Records of Processing Activities (RoPA) field requirements across jurisdictions. Learn its importance, penalties, and how to navigate RoPA.

Navigating Kenya’s Data Protection Act View More

Navigating Kenya’s Data Protection Act: What Organizations Need To Know

Download the infographic to discover key details about navigating Kenya’s Data Protection Act and simplify your compliance journey.

Gencore AI and Amazon Bedrock View More

Building Enterprise-Grade AI with Gencore AI and Amazon Bedrock

Learn how to build secure enterprise AI copilots with Amazon Bedrock models, protect AI interactions with LLM Firewalls, and apply OWASP Top 10 LLM...

DSPM Vendor Due Diligence View More

DSPM Vendor Due Diligence

DSPM’s Buyer Guide ebook is designed to help CISOs and their teams ask the right questions and consider the right capabilities when looking for...

What's
New