What to Know About the Canadian Centre for Cyber Security’s Guidance on Generative AI

Contributors

Anas Baig

Product Marketing Manager at Securiti

Omer Imran Malik

Data Privacy Legal Manager, Securiti

FIP, CIPT, CIPM, CIPP/US

Published September 18, 2023

Generative AI promises dramatic gains in organizational productivity. Its combination of quality and sheer volume of content generation allows organizations to achieve greater efficiency than ever before. Organizations across industries such as healthcare, software development, media and publishing, academia, and cybersecurity have leveraged generative AI tools to aid their operations in various capacities.

However transformative and disruptive generative AI may be, its immense potential can just as easily be leveraged for malicious ends by cybercriminals and attackers.

In the face of this, the Canadian Centre for Cyber Security recently published a guidance document identifying the major risks posed by generative AI and the best practices to mitigate them. For organizations still grappling with how best to integrate generative AI into their daily operations, this guidance offers a chance to do so with minimal risk.

Major Risks Identified

The guidance is meticulous in identifying potential threats and risks businesses may face when deploying generative AI within their products and services. These include the following:

Misinformation

Misinformation has been a rampant issue for tech companies globally, but it could evolve to far more catastrophic levels via generative AI tools. With generative AI, malicious actors can produce deceptive and false information en masse, with language explicitly designed to influence and persuade the public more convincingly.

Phishing

Phishing has been a major cyber threat for decades, but generative AI can enable far more sophisticated and frequent phishing attacks, raising their likelihood of success. As with misinformation, phishing emails can be generated with terrifyingly precise language, leading to potential identity theft, financial fraud, and other forms of cybercrime.
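To make the lookalike-domain angle concrete, here is a minimal sketch of one common countermeasure: flagging sender domains that closely resemble, but do not match, a trusted domain. The allowlist and similarity threshold are illustrative assumptions, not anything drawn from the guidance itself.

```python
from difflib import SequenceMatcher

# Hypothetical allowlist of domains the organization trusts.
TRUSTED_DOMAINS = {"example.com", "examplebank.com"}

def is_suspicious(sender: str, threshold: float = 0.85) -> bool:
    """Flag senders whose domain is not trusted but closely resembles one."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False
    # A near-match to a trusted domain suggests a lookalike (homoglyph) attack.
    return any(
        SequenceMatcher(None, domain, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )

print(is_suspicious("alerts@examplebank.com"))   # trusted → False
print(is_suspicious("alerts@examp1ebank.com"))   # lookalike ("1" for "l") → True
```

Such checks are only one heuristic layer; they complement, rather than replace, mail-authentication standards like SPF, DKIM, and DMARC.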

Data Privacy

Generative AI tools are still in their relative infancy. As they mature, so will our ability to leverage them properly and responsibly. Until then, users may unintentionally expose their personally identifiable information (PII) or their employer’s sensitive data to these tools. Malicious actors may then leverage various techniques to access this data and impersonate individuals.
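One practical safeguard against accidental exposure is to redact PII before a prompt ever leaves the organization. Below is a minimal sketch using hand-rolled patterns; the patterns and function names are illustrative assumptions, and a real deployment would rely on a dedicated PII-detection service rather than regexes alone.

```python
import re

# Illustrative patterns only; production systems need far broader coverage.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace detected PII with type tags before the prompt is sent out."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

prompt = "Contact jane.doe@example.com or 555-867-5309 about SSN 123-45-6789."
print(redact_pii(prompt))
# → Contact [EMAIL] or [PHONE] about SSN [SSN].
```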

AI Poisoning

AI poisoning is a relatively new threat that can compromise entire models. Instead of targeting a model itself, a malicious actor may opt to compromise the dataset the model is trained on. Doing so can not only severely compromise the accuracy, quality, and transparency of the generated output but may also be combined with some of the other threats identified here in large-scale, coordinated attacks on digital enterprises.
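Because poisoning often means silently altering training records, one simple baseline defense is to fingerprint the dataset and verify that fingerprint before each training run. A minimal sketch; the records and workflow here are purely illustrative.

```python
import hashlib
import json

def fingerprint(records: list[dict]) -> str:
    """Hash a canonical serialization of the training set."""
    canonical = json.dumps(records, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

# Fingerprint the vetted dataset once and store the hash out-of-band.
baseline = [{"text": "benign sample", "label": 0}]
trusted_hash = fingerprint(baseline)

# Before each training run, recompute and compare; any silent edit
# (e.g., an attacker flipping a label) changes the hash.
tampered = [{"text": "benign sample", "label": 1}]
print(fingerprint(baseline) == trusted_hash)  # → True
print(fingerprint(tampered) == trusted_hash)  # → False
```

Hashing only detects tampering after ingestion; it does not vet the trustworthiness of the sources the data was scraped from in the first place.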

Model Bias

It’s one thing for a generative AI model to be compromised due to a well-choreographed AI poisoning attack, but these models are just as vulnerable to unintentional inaccuracies or biases within the training datasets. Most models are trained on limited datasets scraped from open-source Internet sources. The bias in these sources may prejudice the training data, thereby influencing the model.

Intellectual Property Theft

Intellectual property (IP) rights are already a bone of contention within the generative AI sphere. Beyond the open questions around ownership of content generated via generative AI tools, malicious actors may leverage these tools to steal large volumes of confidential corporate IP data at an accelerated speed. This can pose serious existential threats to an organization’s finances and reputation.

Recommended Countermeasures

The guidance states quite plainly that it may not always be possible to identify generative AI-assisted cyberattacks. However, it outlines several countermeasures that can be leveraged at both the organizational and individual level to reduce these attacks’ chances of success:

Organization Level

Access Governance

The guidance recommends that only relevant individuals have access to critical organizational assets. To that end, organizations are advised to adopt a practical access control framework that prevents unauthorized access to high-value resources.
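Such an access control framework can be as simple as mapping roles to permissions and checking every request against that map. Here is a minimal role-based sketch; the role and permission names are illustrative assumptions, not anything prescribed by the guidance.

```python
# Illustrative role-to-permission map; real systems would back this
# with a policy engine and an identity provider.
ROLE_PERMISSIONS = {
    "analyst": {"read:reports"},
    "admin": {"read:reports", "write:reports", "manage:models"},
}

def can_access(role: str, permission: str) -> bool:
    """Allow an action only if the role explicitly grants the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(can_access("analyst", "manage:models"))  # → False
print(can_access("admin", "manage:models"))    # → True
```

Defaulting unknown roles to an empty permission set reflects the deny-by-default posture the guidance's access-governance advice implies.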

Consistent Security Updates & Patches

Malicious actors have several tools that aid them in carrying out their attacks. More importantly, these tools are consistently being improved to raise their overall effectiveness. Hence, it is just as critical for organizations to adopt a similarly rigorous and proactive approach towards their security updates and patches as these are often the first and most important lines of defense against any cyberattack.

Network Security

An organization must adopt proactive and thorough network detection tools to ensure it can identify and address potential threats on its network before they cause any major disruption or damage. While generative AI tools do promise effectiveness, one drawback is the tremendous strain they can place on network resources; a reliable network detection tool makes such spikes easy to identify.

The guidance links to additional resources on network security.
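One simple way a network detection tool might surface the resource strain described above is to flag hosts whose traffic far exceeds the fleet’s typical level. A minimal sketch follows; the median-based threshold and hostnames are illustrative choices, not a production detection algorithm.

```python
from statistics import median

def flag_outliers(bytes_per_host: dict[str, int], factor: float = 10.0) -> list[str]:
    """Flag hosts whose outbound traffic exceeds a multiple of the fleet median."""
    baseline = median(bytes_per_host.values())
    return [host for host, sent in bytes_per_host.items() if sent > factor * baseline]

# Outbound bytes per host in some monitoring window (illustrative numbers).
traffic = {"host-a": 1_200, "host-b": 1_350, "host-c": 1_100, "host-d": 98_000}
print(flag_outliers(traffic))  # → ['host-d']
```

A median baseline is deliberately robust here: a single compromised host inflating the numbers barely shifts the median, whereas it would drag a mean-based threshold upward.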

Employee Training

An organization may have the best mechanisms and policies to prevent cyberattacks. However, these mean nothing if its employees do not understand or follow them. Hence, regular training sessions covering the adopted countermeasures and cybersecurity best practices can go a long way toward reducing the chances of a successful cyberattack.

Individual Level

Content Verification

Misinformation has already been identified as one of the most immediate dangers posed by generative AI, owing to the quantity and quality of misleading content such tools can generate. Hence, employees must critically verify all content they interact with to ensure they are not falling victim to social engineering or phishing attempts.

The guidance links to helpful resources on content verification.

Beware of Social Engineering

It’s not the latest trick up cyber attackers’ sleeves, but it remains one of the most effective, and with generative AI it is likely to become even more so. Hence, individuals must implement basic digital safety practices such as minimizing the amount of personal information available online, avoiding email attachments from unknown sources, and steering clear of unverified or alternative communication channels.

The guidance links to helpful resources on recognizing social engineering.

Sound Cybersecurity Hygiene

Simple measures such as strong passwords, multi-factor authentication (MFA), and a reliable anti-virus can prove vital in an organization’s cybersecurity countermeasure strategy as they minimize the likelihood of any weakness within its internal security framework.

How Can Securiti Help

If used responsibly, generative AI promises to elevate an organization’s performance, productivity, and revenues on an unprecedented scale. At the same time, owing to its relative infancy, the scale of the various risks associated with generative AI isn’t clear yet.

As a result, at least for now, organizations must walk a tightrope, balancing the risks and rewards of generative AI usage.

Securiti’s Data Command Center™ is an enterprise solution that allows organizations to implement various modules, solutions, and mechanisms to address the security challenges posed by generative AI, spanning data privacy, regulatory compliance, and data security management.

Furthermore, it allows organizations to leverage modules and solutions such as data access controls, data lineage, sensitive data intelligence, and others in line with this guidance’s recommendations.

Request a demo today and learn more about how Securiti can help you mitigate the challenges and risks posed by generative AI usage.

Frequently Asked Questions

What is the Canadian Centre for Cyber Security’s guidance on generative AI?

It’s a best-practice guide issued by Canada’s national cyber centre to help organizations use generative AI safely. The guide explains key risks, rights, responsibilities, and the steps companies should take before, during, and after adopting generative AI tools.

How does the guidance fit into broader risk management?

It shows how to manage generative AI as part of a broader risk management approach by classifying data, tracking how it flows, enforcing privacy and security controls, and maintaining oversight. This helps organizations support innovation while staying compliant.

What kind of framework does the guidance provide?

It’s a national framework designed to help organizations understand and manage the cybersecurity risks associated with generative AI. The guidance explains how to adopt AI tools safely while protecting sensitive data, maintaining trust, and upholding ethical standards.

Why was this guidance released?

Generative AI is powerful, but it also introduces risks such as data leaks, misinformation, and malicious content creation. Canada’s cyber authority released this guidance to help businesses, government agencies, and individuals use AI responsibly while protecting privacy and security.
