An Overview of Malaysia’s National Guidelines on AI Governance and Ethics

Contributors

Anas Baig

Product Marketing Manager at Securiti

Syeda Eimaan Gardezi

Associate Data Privacy Analyst at Securiti

Salma Khan

Data Privacy Analyst

CIPP/Asia

Introduction

Malaysia, via the Ministry of Science, Technology, and Innovation (MOSTI), introduced the National Guidelines on AI Governance and Ethics (AI Guidelines) in September 2024. These AI Guidelines respond to the rapidly evolving field of artificial intelligence (AI) and its potential to revolutionize several industries. Moreover, they intend to support the implementation of the previously published National Artificial Intelligence Roadmap (AI-Roadmap) 2021-2025 to promote Malaysia as a high-tech, AI-driven economy.

Thus, the AI Guidelines provide a foundational framework and ensure that AI technologies are developed and deployed in a manner that aligns with ethical principles and prioritizes public interest, safety, and fairness.

Objectives of the AI Guidelines

In essence, the AI Guidelines aim to:

  • support and facilitate the implementation of the AI-Roadmap;
  • promote the reliability and trustworthiness of AI systems;
  • address the potential risks of developing and deploying AI systems; and
  • enhance economic development, competitiveness, and productivity by leveraging AI.

Scope of the AI Guidelines

Currently, Malaysia has no specific legislation governing the use of AI. While the seven AI principles are not legally binding, the AI Guidelines urge AI developers and deployers to adopt them as industry best practice. They provide tailored recommendations for:

  • end users of AI to educate the public on responsible AI usage;
  • policymakers and government organizations in formulating AI-related policies; and
  • developers and designers to guide the ethical design and implementation of AI systems.

Seven Key AI Principles

The AI Guidelines propose seven core principles to ensure that AI technologies are developed and deployed in an ethically and legally compliant manner. These principles include the following:

  • Fairness: Ensure AI systems are developed and deployed free from bias and discrimination.
  • Reliability, Safety, and Control: Enable security measures to ensure AI systems perform as intended.
  • Privacy and Security: AI systems must undergo rigorous testing and risk assessments to confirm they secure personal data and maintain user privacy.
  • Inclusiveness: Ensure AI systems are accessible and beneficial to all societal segments.
  • Transparency: Ensure transparency by clearly explaining AI capabilities, disclosing relevant information, and making AI algorithms understandable so that evolving risks can be assessed. Additionally, ensure clarity in AI operations and decision-making processes.
  • Accountability: Ensure AI system developers and deployers are held accountable for AI systems’ performance and AI outcomes.
  • Pursuit of Human Benefit and Happiness: Ensure AI system developers and deployers leverage AI technologies to enhance human well-being and respect individual rights.

Obligations of Stakeholders

The AI Guidelines outline the obligations of the three main stakeholder groups (end users, policymakers, and developers) within a shared responsibility framework.

1. End Users

End users are individuals or organizations that use AI products in various ways, such as virtual assistants and smart home appliances. AI is also utilized for content creation, fraud detection, and security.

Consumer Protection Rights

AI developers and deployers must establish ethical systems to ensure various rights to end users, including:

  • the right to be respected at all times concerning AI products and services;
  • the right to be informed when an algorithm reports their personal data to third parties, uses it to make decisions or uses it to provide offers for goods and services;
  • the right to object and to be given an explanation;
  • the right to be forgotten and have personal data deleted;
  • the right to interact with a human instead of an AI;
  • the right to redress and compensation for damages (if any);
  • the right to collective redress if a business violates the rights of end users; and
  • the right to complain to a supervisory authority or take legal action.

AI developers and deployers must establish systems to ensure these rights are available.

Accountability

End users must be cautious when utilizing AI tools, ensuring the technology is used in a sustainable, responsible and ethical manner.

Stakeholders, model owners, and AI developers must accept accountability for AI solutions, as doing so helps ensure that AI systems operate in a compliant manner. Thus, AI developers should consider the system's intended use, technical capabilities, reliability and quality, and possible effects on individuals with special needs to avoid harm.

Key Consumer Protection Measures for AI

To further protect end users, the following steps can be taken:

  • defining generative AI and clearly outlining its scope and applications;
  • ensuring companies disclose AI-generated content;
  • requiring explicit user consent for data usage;
  • establishing guidelines for accuracy and fairness;
  • holding companies accountable for harmful outputs;
  • regularly auditing and enforcing compliance;
  • educating the public about AI risks and benefits; and
  • engaging stakeholders to create balanced policies.

2. Policy Makers of Government, Agencies, Organizations and Institutions

The AI Guidelines also target policymakers, planners, and managers overseeing AI workforce policy and planning. They offer a structured approach to ensure AI's ethical and responsible application, assisting policymakers and regulatory bodies in enforcing regulations, protecting consumers’ rights, and encouraging fair competition across industries. Key obligations for policymakers include:

Policy Formulation and Enforcement

Policymakers must establish and enforce regulations that balance innovation with the general welfare, encourage the development and application of ethical AI across industries, and ensure compliance with AI laws and regulations.

Consumer Protection

They must protect individuals from harm caused by AI-related decisions, ensure fairness in AI interactions, and protect consumer rights.

Ensuring Transparency

Policymakers should enforce policies requiring transparent, accountable, and unbiased AI systems, ensuring stakeholders understand how decisions are made. Transparency principles primarily apply where AI is used in decision-making. The following five requirements must be met, as illustrated in the sketch after this list:

  • complete disclosure of the information that an AI system uses to make decisions;
  • the AI system's intended use;
  • the training data (including a description of the data used for training, any historical or social biases in the data, and the methods used to ensure the data's quality);
  • AI system maintenance and assessment; and
  • the ability to contest the AI system's decisions.
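
To make these disclosures concrete, they can be captured in a structured record that accompanies each deployed system. The Python sketch below is a minimal illustration; the class and field names are assumptions and are not prescribed by the AI Guidelines.

from dataclasses import dataclass
from typing import List

@dataclass
class AITransparencyRecord:
    """Illustrative record covering the five disclosure requirements listed above."""
    decision_inputs: List[str]       # information the system uses to make decisions
    intended_use: str                # the AI system's intended use
    training_data_description: str   # data sources, known biases, quality controls
    maintenance_and_assessment: str  # how the system is maintained and evaluated
    contest_procedure: str           # how affected individuals can contest decisions

record = AITransparencyRecord(
    decision_inputs=["applicant income", "repayment history"],
    intended_use="Rank consumer loan applications for manual review",
    training_data_description="2019-2023 loan outcomes; reviewed for gender and age skew",
    maintenance_and_assessment="Quarterly accuracy and fairness review",
    contest_procedure="Affected applicants may request human re-evaluation",
)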

Ethics and Inclusivity

Policies must ensure nondiscrimination, fairness, and inclusivity in AI systems, particularly in high-impact industries like public administration, healthcare, and finance.

Capacity Building

Governments are responsible for establishing AI literacy initiatives and raising public knowledge of AI governance, ethics, and rights.

3. Developers, Designers, Technology Providers and Suppliers

The AI Guidelines also target developers and designers who create AI products for various industries. They include technological benchmarks, ethical standards, and best practices to ensure ethical AI development and deployment, improved outcomes, and fewer ethical concerns. Key obligations of developers and designers include:

Consent

Developers should obtain individual consent before processing or sharing personal information for AI research and implementation, where required.
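
As a simple illustration of this obligation, the Python sketch below checks a hypothetical consent registry before personal data is processed or shared for AI research; the registry structure and the "ai_research" purpose label are assumptions, not terms defined in the AI Guidelines.

class ConsentError(Exception):
    """Raised when a required consent record is missing."""

def process_for_ai_research(user_id: str, consent_registry: dict, payload: dict) -> dict:
    """Process personal data only if the user has granted 'ai_research' consent."""
    granted = consent_registry.get(user_id, set())
    if "ai_research" not in granted:
        raise ConsentError(f"No AI-research consent on record for user {user_id}")
    # ...processing or sharing would happen here...
    return {"user_id": user_id, "status": "processed"}

# Usage: consent is checked before any processing takes place.
registry = {"u-101": {"ai_research", "marketing"}}
process_for_ai_research("u-101", registry, {"feature": 0.42})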

Ethical Development

From design to delivery, AI developers must follow ethical guidelines and ensure that their systems are unbiased, fair, and secure.

Technical Standards

Developers must comply with local standards and internationally accepted technical benchmarks to ensure the system’s reliability and safety. AI systems must also provide individuals with robust data protection and privacy throughout their life cycle.

Bias Mitigation

Developers must proactively detect and correct any potential biases in AI systems to ensure fair results and refrain from using user data and information without a legal basis.
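
One common way to surface such bias is to compare outcome rates across demographic groups. The plain-Python sketch below computes a demographic parity gap; the 0.2 threshold is an illustrative assumption and should be set by the organization's own risk assessment.

from collections import defaultdict

def demographic_parity_gap(outcomes, groups):
    """Return the spread between the highest and lowest positive-outcome rates across groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for outcome, group in zip(outcomes, groups):
        totals[group] += 1
        positives[group] += int(outcome)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

gap, rates = demographic_parity_gap(
    outcomes=[1, 0, 1, 1, 0, 1, 0, 0],
    groups=["A", "A", "A", "B", "B", "B", "B", "B"],
)
if gap > 0.2:  # illustrative threshold
    print(f"Potential bias detected: positive-outcome rates {rates}")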

Accountability Mechanisms

Developers must adopt state-of-the-art, robust features that enable traceability and auditability, ensuring accountability for AI systems' decisions.
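
Traceability usually starts with decision-level audit logs. The sketch below appends one JSON record per AI decision and hashes the inputs so records can be correlated without storing raw personal data; the field names and file-based storage are illustrative assumptions.

import datetime
import hashlib
import json

def log_ai_decision(model_id: str, model_version: str, inputs: dict, output, log_path: str = "ai_audit.log"):
    """Append an audit record so each AI decision can be traced and reviewed later."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "input_hash": hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
    }
    with open(log_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")
    return record

log_ai_decision("credit-scorer", "1.4.2", {"income": 52000, "tenure": 3}, output="refer_to_human")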

Risk Assessment

Developers must actively conduct risk assessments and monitoring, and adopt risk mitigation steps to address unforeseen issues that arise during AI development and deployment.
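
In practice, such assessments often take the form of a risk register in which each risk is scored by likelihood and impact. The sketch below shows one hedged way to tier risks; the scoring bands and example entries are assumptions for illustration only.

def risk_tier(likelihood: int, impact: int) -> str:
    """Combine 1-5 likelihood and impact ratings into a simple risk tier."""
    score = likelihood * impact
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Illustrative register of AI deployment risks with mitigation owners.
register = [
    {"risk": "biased training data", "likelihood": 3, "impact": 5, "owner": "data team"},
    {"risk": "prompt injection", "likelihood": 4, "impact": 4, "owner": "security team"},
]
for entry in register:
    entry["tier"] = risk_tier(entry["likelihood"], entry["impact"])
    print(entry["risk"], "->", entry["tier"])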

Security Measures

AI systems must undergo robust testing to ensure reliability, safety, and fail-safe performance. They must also function reliably, efficiently manage common and uncommon circumstances, and provide safeguards against or minimize negative consequences. Developers must conduct comprehensive testing, certification, and risk assessments to reduce risk.
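
A basic fail-safe pattern is to defer to human review whenever the model errors or returns a low-confidence result. The sketch below assumes a model callable that returns a label and a confidence score; the callable and the 0.7 threshold are illustrative assumptions.

def classify_with_failsafe(model, features, min_confidence: float = 0.7):
    """Return the model's decision only when it is confident; otherwise defer to a human."""
    try:
        label, confidence = model(features)
    except Exception:
        return {"decision": "defer_to_human", "reason": "model_error"}
    if confidence < min_confidence:
        return {"decision": "defer_to_human", "reason": "low_confidence"}
    return {"decision": label, "confidence": confidence}

# Usage with a stand-in model that returns (label, confidence).
result = classify_with_failsafe(lambda features: ("approve", 0.62), {"income": 52000})
print(result)  # low confidence, so the decision is deferred to a human reviewer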

Privacy by Design

When putting an AI system into practice, developers should also apply security-by-design and privacy-by-design principles and take account of international information security and privacy regulations.
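
Privacy by design often begins with minimizing the personal data an AI system ever sees. The sketch below masks obvious identifiers (email addresses and phone numbers) before text reaches a model; the regular expressions are deliberately simplistic and illustrate the idea rather than provide a complete redaction solution.

import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def redact_personal_data(text: str) -> str:
    """Replace email addresses and phone numbers with placeholders before AI processing."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(redact_personal_data("Contact Aisha at aisha@example.com or +60 12-345 6789."))
# -> Contact Aisha at [EMAIL] or [PHONE].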

Continuous Monitoring and Evaluation

To assess AI systems' impact on privacy and security, they must be monitored in real-time and continuously updated. This means assessing the effectiveness of established protections and updating them to address evolving threats. Additionally, organizations must proactively determine and address drift or changes in data distribution to assess any biases in the AI system and make any required adjustments.
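
Drift in an input feature can be flagged by comparing its live distribution against the training-time distribution. The sketch below uses SciPy's two-sample Kolmogorov-Smirnov test; the 0.05 significance threshold and the sample values are illustrative assumptions.

from scipy.stats import ks_2samp

def detect_drift(reference, current, p_threshold: float = 0.05) -> bool:
    """Flag drift when the live feature distribution differs significantly from the reference."""
    statistic, p_value = ks_2samp(reference, current)
    return p_value < p_threshold

# Usage: compare this week's feature values against the training-time distribution.
training_ages = [23, 31, 35, 40, 41, 44, 52, 58, 60, 62]
live_ages = [19, 21, 22, 24, 25, 26, 27, 28, 29, 30]
if detect_drift(training_ages, live_ages):
    print("Data drift detected: review the model for bias and retrain if needed.")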

4. Shared Responsibilities

Transparency and Trust

All stakeholders are responsible for promoting a transparent culture in AI development, deployment, operations, and decisions to ensure user trust.

Collaboration

Stakeholders must collaborate to address multifaceted issues such as privacy, security, and bias reduction.

Ethical Leadership

Stakeholders must actively embrace a leadership role to promote the ethical application of AI and ensure that it serves the public interest without harming individuals’ rights or principles.

How Securiti Can Help

Securiti is the pioneer of the Data + AI Command Center, a centralized platform that enables the safe use of data and GenAI. It provides unified data intelligence, controls and orchestration across hybrid multicloud environments. Large global enterprises rely on Securiti's Data Command Center for data security, privacy, governance, and compliance.

Securiti Gencore AI enables organizations to safely connect to hundreds of data systems while preserving data controls and governance as data flows into modern GenAI systems. It is powered by a unique knowledge graph that maintains granular contextual insights about data and AI systems.

Gencore AI provides robust controls throughout the AI system to align with corporate policies and entitlements, safeguard against malicious attacks, and protect sensitive data. This helps organizations align with Malaysia's National Guidelines on AI Governance and Ethics.

Request a demo to learn more.
