Securiti AI Launches Context-Aware LLM Firewalls to Secure GenAI Applications

View

An Overview of Canadian Guardrails for Generative AI

By Anas Baig | Reviewed By Omer Imran Malik
Published September 22, 2023

Listen to the content

Canada has become a leading force in Generative AI's responsible and ethical development in a rapidly evolving field of artificial intelligence. The nation has paved the way for developing comprehensive AI guardrails with a persistent commitment to fostering innovation while bolstering the principles of transparency, fairness, and accountability.

While Generative AI boasts several benefits, it’s a powerful tool that attracts malicious actors to use it maliciously or inappropriately, raising serious concerns across the private and public sectors and among leading AI industry experts. Consequently, the Canadian government is making more than just voluntary guardrails in the field of Generative AI, such as the Artificial Intelligence and Data Act (AIDA), which establishes standards for "high-impact systems."

Additionally, Canadian Guardrails for Generative AI – Code of Practice has been issued by Innovation, Science and Economic Development Canada (ISED), a federal department of the Government of Canada responsible for several duties, including regulating industry and commerce, fostering innovation and science, and assisting in economic development.

Under the proposed document, the Government of Canada seeks comments on the proposed code of practice elements for generative AI systems. The Government of Canada's AI Advisory Council will host a series of virtual and hybrid roundtables and expert evaluations as part of this process.

In this blog, we will embark on a journey to explore the multifaceted landscape of Canadian guardrails for generative AI, particularly the recently introduced Canadian Guardrails for Generative AI – Code of Practice.

Code of Practice – Elements

Since the introduction of Bill C-27 in June 2022, the Government of Canada has actively engaged with stakeholders regarding AIDA.

The following proposed elements of a code of practice for generative AI systems are being considered by the Government of Canada for comments based on the inputs received thus far from a wide range of stakeholders.

Safety

Throughout the AI system's lifecycle, safety must be considered holistically with a broad view of potential implications, especially regarding misuse. Many generative AI systems have diverse applications; therefore, their safety risks must be evaluated more extensively than the systems with limited applications.

Developers and deployers of generative AI systems should recognize the potential for harmful use of the system, such as using it to impersonate real individuals or launch spearfishing assaults, and take measures to prevent it from occurring.

Developers, deployers, and operators of generative AI systems should be aware of the potential risks associated with the system, such as using a large language model (LLM) for providing legal or medical advice and taking precautions to avoid them. One such measure would be informing users of the system's capabilities and limitations.

Fairness and Equity

Generative AI systems can negatively affect societal fairness and equity through, for example, the perpetuation of biases and damaging preconceptions due to the large datasets on which they are trained and the scale at which they are implemented. Ensuring that models are trained on relevant and representative data and producing accurate, unbiased, and relevant outputs is crucial.

Generative AI system developers should evaluate and curate the data to avoid biased or low-quality datasets. On the other hand, developers, deployers, and operators of generative AI systems should implement measures to assess and mitigate the risk of biased output (e.g., fine-tuning).

Transparency

When it comes to Generative AI systems, transparency is a huge challenge. Their training data and source code might not be made available to the general public, and their output might be challenging to understand or justify. It is crucial to ensure that individuals are aware when dealing with an AI system or the output produced by AI tools as generative AI systems evolve and become increasingly advanced.

Developers and deployers of generative AI systems should provide an impartial and publicly accessible method to identify content produced by the AI system (for example, watermarking) and provide a comprehensive account of the development process, including the source of training data and steps taken to identify and mitigate risks. Additionally, to prevent systems from being confused for humans, operators of generative AI systems should make sure that they are clearly and conspicuously labeled as such.

Human Oversight and Monitoring

Human oversight and monitoring are essential to ensure that AI systems are developed, implemented, and used responsibly. Before making generative AI systems widely accessible, developers, deployers, and operators must take particular measures to ensure adequate human oversight and mechanisms to identify and report negative effects. This is due to the scale of deployment and the wide range of potential uses and misuse of these systems.

Given the deployment scope, the method in which the system is made accessible for usage, and its user base, deployers, and operators of generative AI systems should provide human oversight in the deployment and operations of their system.

Developers, deployers, and operators of generative AI systems should put in place procedures to enable the identification and reporting of negative effects once the system is made public (for example, maintaining an incident database), and they should commit to routine model enhancements based on results (for example, fine-tuning).

Validity and Robustness

Relying on AI systems requires ensuring they function as intended and are resilient across the situations to which they are likely to be exposed. Since they can be employed in various contexts and may be more vulnerable to misuse and attacks, trusting a Generative AI model has proved to be an increasing challenge. While AI’s agility makes it promising, stringent controls and testing must be implemented to prevent abuse and unforeseen consequences.

To assess the performance and identify vulnerabilities, developers of generative AI systems should employ a wide range of testing techniques across various activities and situations, including adversarial testing (such as red-teaming). Moreover, to prevent or identify adversarial attacks on the system (such as data poisoning), developers, deployers, and operators of generative AI systems should use the appropriate cybersecurity measures.

Accountability

The risk profiles of generative AI systems are extensive and complex. While internal governance mechanisms are crucial for any organization developing, deploying, or operating AI systems, special attention and care must be given to generative AI systems to ensure that a thorough and multifaceted risk management process is followed and that employees throughout the AI value chain know their responsibilities.

For the safety of their system, developers, deployers, and operators of generative AI systems should put multiple lines of safeguards in place, such as engaging in internal and external (independent) audits both before and after the system is put into operation and developing policies, procedures, and training to ensure that roles and responsibilities are clearly defined and that employees are familiar with their duties and the organization's risk management practices.

Conclusion

Organizations should adhere to these guidelines and other global AI best practices to prevent their AI systems from operating in ways that can endanger people, such as impersonation or giving incorrect outputs. Additionally, organizations must use approaches like 'red teaming' to identify and fix system problems and train their AI systems on sample datasets to reduce biased outputs.

Organizations should explicitly label AI-generated content to prevent conflict with human content and give consumers the information they need to make decisions. Organizations are also urged to share important details about the inner workings of their AI systems to increase user confidence and understanding.

Join Our Newsletter

Get all the latest information, law updates and more delivered to your inbox

Share


More Stories that May Interest You

What's
New