What to Know About the Canadian Centre for Cyber Security’s Guidance on Generative AI

Published September 18, 2023
Contributors

Anas Baig

Product Marketing Manager at Securiti

Omer Imran Malik

Data Privacy Legal Manager, Securiti

FIP, CIPT, CIPM, CIPP/US

Generative AI promises substantial gains in organizational productivity. Its remarkable combination of quality and quantity of content generation allows organizations to achieve greater efficiency than ever before. Organizations across industries such as healthcare, software development, media and publishing, academia, and cybersecurity have leveraged generative AI tools to aid their operations in various capacities.

As transformative and disruptive as generative AI may be, its immense potential can just as easily be turned to malicious ends by cybercriminals and attackers.

In the face of this, the Canadian Centre for Cyber Security has recently published a guidance document identifying the major risks generative AI poses and best practices to mitigate them. For organizations still grappling with how best to integrate generative AI into their daily operations, this guidance offers a chance to do so with minimal risk.

Major Risks Identified

The guidance is meticulous in identifying potential threats and risks businesses may face when deploying generative AI within their products and services. These include the following:

Misinformation

Misinformation has long been a rampant issue for tech companies globally, but generative AI could escalate it to a far more dangerous scale. With generative AI, malicious actors can produce deceptive and false information en masse, with language explicitly designed to influence and convince the public.

Phishing

Phishing has been a major cyber threat for decades, but generative AI can enable far more sophisticated and frequent phishing attacks, raising their likelihood of success. As with misinformation, phishing emails can be generated with terrifyingly precise language, leading to identity theft, financial fraud, and other forms of cybercrime.

Data Privacy

Generative AI tools are still in their relative infancy. As they mature, so will our ability to leverage them properly and responsibly. Until then, however, users may unintentionally expose their personally identifiable information (PII) or their employer’s sensitive data to these tools. Malicious actors may then use various techniques to access this data and impersonate individuals.
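One practical safeguard is to scrub obvious PII from prompts before they leave the organization’s boundary. The sketch below illustrates the idea in Python; the patterns and function name are illustrative assumptions, not part of the guidance, and a real deployment would rely on a vetted PII detection solution rather than hand-rolled regexes.

```python
import re

# Illustrative patterns only; production systems need far broader coverage.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact_pii(prompt: str) -> str:
    """Replace recognizable PII with placeholders before the prompt
    is submitted to an external generative AI tool."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

print(redact_pii("Contact jane.doe@example.com or 555-867-5309."))
# -> Contact [EMAIL REDACTED] or [PHONE REDACTED].
```

Such a filter can sit in a proxy layer between employees and third-party AI services, so redaction happens regardless of which tool is used.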

AI Poisoning

AI poisoning is a relatively new threat that can compromise entire models. Instead of targeting a model itself, a malicious actor may opt to compromise the dataset the model is trained on. Doing so can not only severely compromise the accuracy, quality, and transparency of the generated output but may also be combined with some of the other threats identified here in large-scale, coordinated attacks on digital enterprises.
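One basic defense against dataset tampering is to fingerprint the trusted training snapshot at curation time and verify it before every training run. A minimal sketch, under the assumption that the dataset can be serialized as an ordered list of text records (the record format here is invented for illustration):

```python
import hashlib

def fingerprint_dataset(records: list[str]) -> str:
    """Compute a stable SHA-256 digest over the full training set.
    Record order matters; sort first if order is not fixed."""
    h = hashlib.sha256()
    for rec in records:
        h.update(rec.encode("utf-8"))
    return h.hexdigest()

def verify_before_training(records: list[str], expected_digest: str) -> None:
    """Refuse to proceed if the data has drifted from the trusted snapshot."""
    if fingerprint_dataset(records) != expected_digest:
        raise ValueError("training data does not match trusted snapshot")

# At curation time, record the digest alongside the dataset.
trusted = ["label:spam\tclick here", "label:ham\tmeeting at 3pm"]
expected = fingerprint_dataset(trusted)

verify_before_training(trusted, expected)       # passes silently
tampered = trusted + ["label:ham\tclick here"]  # injected poisoned record
# verify_before_training(tampered, expected)    # would raise ValueError
```

Fingerprinting catches tampering after curation; it does not catch poisoned data that was already present when the snapshot was taken, so it complements rather than replaces data vetting.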

Model Bias

It’s one thing for a generative AI model to be compromised due to a well-choreographed AI poisoning attack, but these models are just as vulnerable to unintentional inaccuracies or biases within the training datasets. Most models are trained on limited datasets scraped from open-source Internet sources. The bias in these sources may prejudice the training data, thereby influencing the model.

Intellectual Property Theft

Intellectual property (IP) rights are already a bone of contention within the generative AI sphere. Beyond the open questions around ownership of content generated via generative AI tools, malicious actors may leverage these tools to steal large volumes of confidential corporate IP data at an accelerated pace. This poses serious existential threats to an organization’s finances and reputation.

Recommended Countermeasures

The guidance states quite plainly that it may not always be possible to identify generative AI-assisted cyberattacks. However, it outlines several countermeasures that can be adopted at both the organizational and individual level to reduce these attacks' chances of success:

Organization Level

Access Governance

The guidance recommends that only individuals with a genuine need should be able to access critical organizational assets. To that end, organizations are advised to adopt a practical access control framework that prevents unauthorized access to high-value resources.
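At its core, such a framework reduces to a default-deny mapping from roles to permitted actions. The sketch below shows the shape of this check; the role and permission names are invented for illustration and are not from the guidance.

```python
# Minimal role-based access control (RBAC) lookup.
# Roles and actions here are hypothetical examples.
ROLE_PERMISSIONS: dict[str, set[str]] = {
    "analyst": {"read:reports"},
    "admin": {"read:reports", "write:reports", "manage:users"},
}

def is_authorized(role: str, action: str) -> bool:
    """Default-deny: unknown roles or unlisted actions get no access."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_authorized("admin", "manage:users"))    # True
print(is_authorized("analyst", "write:reports")) # False
print(is_authorized("intern", "read:reports"))   # False: unknown role
```

The essential property is the default-deny stance: any role or action the framework has not explicitly granted is refused, which is what prevents unauthorized access to high-value resources.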

Consistent Security Updates & Patches

Malicious actors have many tools at their disposal, and those tools are consistently being improved to raise their effectiveness. Hence, it is just as critical for organizations to adopt a similarly rigorous and proactive approach towards their security updates and patches, as these are often the first and most important lines of defense against any cyberattack.

Network Security

An organization must adopt proactive and thorough network detection tools so it can identify and address potential threats on its network before they cause major disruption or damage. While generative AI tools promise efficiency gains, they can also place tremendous strain on network resources, and a reliable network detection tool makes such anomalies easy to spot.

The guidance provides additional information related to network security.
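Network detection tools of the kind described typically start from baseline statistics: flag any traffic sample that deviates sharply from recent history. A toy sketch of that idea follows; the window size, threshold, and sample data are arbitrary assumptions, not parameters from the guidance.

```python
from statistics import mean, stdev

def flag_anomalies(samples: list[float], window: int = 10, z: float = 3.0) -> list[int]:
    """Return indices of samples deviating more than `z` standard
    deviations from the trailing window's mean (toy thresholds)."""
    flagged = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(samples[i] - mu) > z * sigma:
            flagged.append(i)
    return flagged

# Steady traffic (~100 MB/min) with one sudden spike at index 12.
traffic = [100, 102, 98, 101, 99, 103, 97, 100, 101, 99, 100, 98, 900]
print(flag_anomalies(traffic))  # -> [12]
```

Production systems use far richer signals (flows, destinations, protocols), but the principle is the same: establish a baseline, then alert on sharp deviations such as a generative AI workload suddenly saturating a link.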

Employee Training

An organization may have the best mechanisms and policies to prevent cyberattacks, but they mean little if employees do not understand or follow them. Regular training sessions covering the adopted countermeasures and cybersecurity best practices can go a long way toward reducing the success rate of cyberattacks.

Individual Level

Content Verification

Misinformation has already been identified as one of the most immediate dangers posed by generative AI, owing to the quantity and quality of misleading content such tools can generate. Employees must therefore apply critical judgment and verify all content they interact with to ensure they are not subjected to social engineering or phishing attempts.

The guidance provides helpful resources in this regard.

Beware of Social Engineering

Social engineering is not the latest trick up cyber attackers’ sleeves, but it remains one of the most effective, and with generative AI it is likely to become even more so. Individuals must therefore implement basic digital safety practices, such as minimizing the amount of personal information available online, avoiding email attachments from unknown sources, and declining to conduct communications via unverified or alternative channels.

The guidance provides helpful resources in this regard.

Sound Cybersecurity Hygiene

Simple measures such as strong passwords, multi-factor authentication (MFA), and reliable anti-virus software can prove vital in an organization’s cybersecurity strategy, as they reduce exploitable weaknesses in its internal security framework.

How Can Securiti Help

If used responsibly, generative AI promises to elevate an organization’s performance, productivity, and revenues on an unprecedented scale. At the same time, owing to its relative infancy, the scale of the various risks associated with generative AI isn’t clear yet.

As a result, at least for now, organizations must walk a tightrope, balancing the risks and rewards of generative AI usage.

Securiti’s Data Command Center™ is an enterprise solution that allows organizations to implement the various modules, solutions, and mechanisms needed to address the security challenges posed by generative AI.

These include data privacy, regulatory compliance, and data security management.

Furthermore, it allows organizations to leverage various modules and solutions such as data access controls, data lineage, sensitive data intelligence, and others in line with this guidance’s recommendations.

Request a demo today and learn more about how Securiti can help you mitigate the challenges and risks posed by generative AI usage.
