Introduction
Malaysia, via the Ministry of Science, Technology and Innovation (MOSTI), introduced the National Guidelines on AI Governance and Ethics (AI Guidelines) in September 2024. The AI Guidelines respond to the rapidly evolving field of artificial intelligence (AI) and its potential to revolutionize several industries. They are also intended to support the implementation of the previously published National Artificial Intelligence Roadmap (AI-Roadmap) 2021-2025 and to promote Malaysia as a high-tech, AI-driven economy.
Thus, the AI Guidelines provide a foundational framework to ensure that AI technologies are developed and deployed in a manner that aligns with ethical principles and prioritizes public interest, safety, and fairness.
Objectives of the AI Guidelines
In essence, the AI Guidelines aim to:
- support and facilitate the implementation of the AI-Roadmap;
- promote the reliability and trustworthiness of AI systems;
- address the potential risks of developing and deploying AI systems; and
- enhance economic development, competitiveness, and productivity by leveraging AI.
Scope of the AI Guidelines
Currently, Malaysia has no specific legislation governing the use of AI. While the AI-Roadmap's seven AI principles are not legally binding, the AI Guidelines urge AI developers and deployers to embrace them as industry best practice. They provide tailored recommendations for:
- end users of AI to educate the public on responsible AI usage;
- policymakers and government organizations in formulating AI-related policies; and
- developers and designers to guide the ethical design and implementation of AI systems.
Seven Key AI Principles
The AI Guidelines propose seven core principles to ensure that AI technologies are developed and deployed in an ethically sound and legally compliant manner. These principles include the following:
- Fairness: Ensure AI systems are developed and deployed free from bias and discrimination.
- Reliability, Safety, and Control: Enable security measures to ensure AI systems perform as intended.
- Privacy and Security: AI systems must undergo rigorous testing and risk assessments to confirm they secure personal data and maintain user privacy.
- Inclusiveness: Ensure AI systems are accessible and beneficial to all societal segments.
- Transparency: Ensure transparency by clearly explaining AI capabilities, disclosing relevant information, and making AI algorithms easier to understand so that evolving risks can be assessed. Additionally, ensure clarity in AI operations and decision-making processes.
- Accountability: Ensure AI system developers and deployers are held accountable for their systems' performance and outcomes.
- Pursuit of Human Benefit and Happiness: Ensure AI system developers and deployers leverage AI technologies to enhance human well-being and respect individual rights.
Obligations of Stakeholders
The AI Guidelines outline the obligations of the three main stakeholder groups (end users, policymakers, and developers) within a shared responsibility framework.
1. End Users
End users, whether individuals or organizations, use AI products in various ways, from virtual assistants to smart home appliances. AI is also utilized for content creation, fraud detection, and security.
Consumer Protection Rights
AI developers and deployers must establish ethical systems that afford end users various rights, including:
- the right to be treated with respect at all times in connection with AI products and services;
- the right to be informed when an algorithm reports their personal data to third parties, uses it to make decisions, or uses it to provide offers for goods and services;
- the right to object and to be given an explanation;
- the right to be forgotten and have personal data deleted;
- the right to interact with a human instead of an AI;
- the right to redress and compensation for damages (if any);
- the right to collective redress if a business violates the rights of end users; and
- the right to complain to a supervisory authority or take legal action.
Accountability
End users must be cautious when utilizing AI tools, ensuring the technology is used in a sustainable, responsible, and ethical manner.
Stakeholders, model owners, and AI developers must accept accountability for AI solutions, as this helps ensure that AI systems operate in a compliant manner. Thus, AI developers should consider the system's intended use, technical capabilities, reliability and quality, and possible effects on individuals with special needs to avoid harm.
Key Consumer Protection Measures for AI
To further protect end users, the following steps can be taken:
- defining generative AI and clearly outlining its scope and applications;
- ensuring companies disclose AI-generated content (see the labeling sketch after this list);
- requiring explicit user consent for data usage;
- establishing guidelines for accuracy and fairness;
- holding companies accountable for harmful outputs;
- regularly auditing and enforcing compliance;
- educating the public about AI risks and benefits; and
- engaging stakeholders to create balanced policies.
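To illustrate the disclosure measure above, the sketch below shows one minimal way a deployer might attach a machine-readable "AI-generated" label to model output. This is purely illustrative; the `DisclosedContent` record, the `label_ai_content` helper, and their fields are assumptions, not a format prescribed by the AI Guidelines.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DisclosedContent:
    """Model output bundled with an explicit AI-generation disclosure."""
    text: str
    ai_generated: bool
    model_name: str   # hypothetical identifier for the generating model
    generated_at: str

def label_ai_content(text: str, model_name: str) -> DisclosedContent:
    """Attach a machine-readable 'AI-generated' disclosure to model output."""
    return DisclosedContent(
        text=text,
        ai_generated=True,
        model_name=model_name,
        generated_at=datetime.now(timezone.utc).isoformat(),
    )

# A deployer would surface the flag to the end user alongside the content.
labeled = label_ai_content("Your claim summary ...", model_name="support-llm-v1")
print(f"AI-generated: {labeled.ai_generated} (model: {labeled.model_name})")
```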
2. Policymakers in Government, Agencies, Organizations, and Institutions
The AI Guidelines also target policymakers, planners, and managers overseeing AI workforce policy and planning. They offer a structured approach to ensure AI's ethical and responsible application, assisting policymakers and regulatory bodies in enforcing regulations, protecting consumers’ rights, and encouraging fair competition across industries. Key obligations for policymakers include:
Regulation and Enforcement
Policymakers must establish and enforce regulations that balance innovation with the general welfare, encourage the development and application of ethical AI across industries, and enforce compliance with AI laws and regulations.
Consumer Protection
They must protect individuals from harm caused by AI-related decisions, ensure fairness in AI interactions, and protect consumer rights.
Ensuring Transparency
Policymakers should enforce policies requiring transparent, accountable, and unbiased AI systems, ensuring stakeholders understand how decisions are made. Transparency principles primarily apply where AI is used in decision-making, and the following five requirements must be met (a machine-readable sketch follows the list):
- complete disclosure of the information that an AI system uses to make decisions;
- disclosure of the AI system's intended use;
- disclosure of the training data, including a description of the data used for training, any historical or social biases in the data, and the methods used to ensure the data's quality;
- disclosure of AI system maintenance and assessment; and
- the ability to contest the AI system's decisions.
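As a concrete, entirely hypothetical illustration, the five requirements above could be captured as a machine-readable disclosure record that accompanies each AI-assisted decision system. The field names below are assumptions for the sketch, not terminology from the AI Guidelines.

```python
from dataclasses import dataclass

@dataclass
class TransparencyDisclosure:
    """One record per AI decision system, mirroring the five transparency
    requirements; all field names are illustrative."""
    decision_inputs: list[str]       # information the system uses to decide
    intended_use: str                # the system's stated purpose
    training_data_summary: str       # data description, known biases, quality checks
    maintenance_and_assessment: str  # how the system is maintained and evaluated
    contest_procedure: str           # how individuals can contest a decision

disclosure = TransparencyDisclosure(
    decision_inputs=["income", "repayment_history"],
    intended_use="Pre-screening of consumer loan applications",
    training_data_summary="2019-2023 loan book; audited for historical bias",
    maintenance_and_assessment="Quarterly revalidation against holdout data",
    contest_procedure="Appeal to a human reviewer via the lender's portal",
)
```

Publishing such a record alongside each deployed system would give regulators and end users a single artifact to check against the five requirements.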
Ethics and Inclusivity
Policies must ensure nondiscrimination, fairness, and inclusivity in AI systems, particularly in high-impact industries like public administration, healthcare, and finance.
Capacity Building
Governments are responsible for establishing AI literacy initiatives and raising public knowledge of AI governance, ethics, and rights.
3. Developers, Designers, Technology Providers and Suppliers
The AI Guidelines also target developers and designers who create AI products for various industries. For this group, they set out technological benchmarks, ethical standards, and best practices to ensure ethical AI development and deployment, improved outcomes, and fewer ethical concerns. Key obligations of developers and designers include:
Obtain Consent
Developers should obtain individual consent before processing or sharing personal information for AI research and implementation where required.
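As a minimal sketch of what such a consent gate might look like, the snippet below checks a recorded consent purpose before any processing. The `consent_store` dictionary and the purpose strings are assumptions; a production system would rely on an auditable consent-management service with granular, revocable records.

```python
# Hypothetical in-memory consent store mapping user IDs to consented purposes.
consent_store: dict[str, set[str]] = {
    "user-123": {"model_training"},
}

class ConsentError(PermissionError):
    """Raised when processing is attempted without recorded consent."""

def require_consent(user_id: str, purpose: str) -> None:
    if purpose not in consent_store.get(user_id, set()):
        raise ConsentError(f"No recorded consent from {user_id} for '{purpose}'")

def add_to_training_set(user_id: str, record: dict) -> None:
    require_consent(user_id, "model_training")  # gate before any processing
    ...  # append the record to the training corpus only after the check passes
```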
Ethical Development
From design to delivery, AI developers must follow ethical guidelines and ensure that their systems are unbiased, fair, and secure.
Technical Standards
Developers must comply with local standards and internationally accepted technical benchmarks to ensure the system’s reliability and safety. AI systems must also provide individuals with robust data protection and privacy throughout their life cycle.
Bias Mitigation
Developers must proactively detect and correct potential biases in AI systems to ensure fair results, and must refrain from using user data and information without a legal basis.
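One common way to surface such biases, though by no means the only one, is to compare outcome rates across groups. The sketch below computes a demographic parity gap on made-up decision data; the 0.2 threshold is an assumption for illustration, as the AI Guidelines set no numeric bar.

```python
from collections import defaultdict

def positive_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Approval rate per group from (group, approved) pairs."""
    totals: dict[str, int] = defaultdict(int)
    approvals: dict[str, int] = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

# Made-up decisions: (protected group, approved?)
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = positive_rates(decisions)
gap = max(rates.values()) - min(rates.values())

THRESHOLD = 0.2  # assumed tolerance; not a figure from the AI Guidelines
if gap > THRESHOLD:
    print(f"Potential bias: approval rates differ by {gap:.0%} across groups")
```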
Accountability Mechanisms
Developers must adopt state-of-the-art, robust features that enable traceability and auditability, ensuring accountability for AI systems' decisions.
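Traceability is often implemented as an append-only decision log. A minimal sketch follows, using only the Python standard library; the record fields and the hash chaining for tamper evidence are illustrative assumptions, not mechanisms mandated by the AI Guidelines.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai_audit")

def log_decision(model_id: str, inputs: dict, output: str, prev_hash: str) -> str:
    """Append one traceable decision record, chaining hashes for tamper evidence."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "inputs": inputs,
        "output": output,
        "prev_hash": prev_hash,
    }
    record_hash = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    audit_log.info(json.dumps({**record, "hash": record_hash}))
    return record_hash  # feed into the next call to extend the chain

h = log_decision("credit-model-v2", {"income": 52000}, "approved", prev_hash="genesis")
```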
Risk Assessment
Developers must actively conduct risk assessments and monitoring and adopt risk mitigation steps to address unforeseen issues in AI development and deployment.
Security Measures
AI systems must undergo robust testing to ensure reliability, safety, and fail-safe performance. They must also function reliably, efficiently manage common and uncommon circumstances, and provide safeguards against, or at least minimize, negative consequences. Developers must conduct comprehensive testing, certification, and risk assessments to reduce these risks.
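Fail-safe behavior can be exercised with ordinary unit tests. The sketch below assumes a hypothetical `safe_predict` wrapper that refers malformed inputs and inference errors to a human instead of failing open; the function and its fallback string are illustrative only.

```python
def safe_predict(model, features: dict) -> str:
    """Validate inputs and fall back to human review rather than failing open."""
    age = features.get("age")
    if age is None or not (0 <= age <= 130):
        return "REFER_TO_HUMAN"  # fail safe on malformed input
    try:
        return model(features)
    except Exception:
        return "REFER_TO_HUMAN"  # fail safe on model error

def test_out_of_range_input_is_referred():
    assert safe_predict(lambda f: "approved", {"age": -5}) == "REFER_TO_HUMAN"

def test_model_error_is_contained():
    def broken(_features):
        raise RuntimeError("inference failure")
    assert safe_predict(broken, {"age": 40}) == "REFER_TO_HUMAN"
```

Tests like these, run with a framework such as pytest, make the fail-safe guarantee part of the system's regression suite rather than a one-off check.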
Privacy by Design
When putting AI systems into practice, developers should also consider security-by-design and privacy-by-design principles and take international information security and privacy regulations into account.
Continuous Monitoring and Evaluation
AI systems must be monitored in real time and continuously updated to assess their impact on privacy and security. This means evaluating the effectiveness of established protections and updating them to address evolving threats. Additionally, organizations must proactively detect and address drift, or changes in data distribution, reassess the AI system for any resulting biases, and make any required adjustments.
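Drift in data distribution is often flagged with a two-sample statistical test between the training data and recent production inputs. The sketch below uses SciPy's Kolmogorov-Smirnov test on synthetic data; the feature, the sample sizes, and the 0.01 alert threshold are all assumptions for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_income = rng.normal(50_000, 10_000, size=5_000)  # reference distribution
live_income = rng.normal(58_000, 12_000, size=1_000)      # recent production inputs

stat, p_value = ks_2samp(training_income, live_income)
ALPHA = 0.01  # assumed alert threshold
if p_value < ALPHA:
    print(f"Drift detected in 'income' (KS={stat:.3f}, p={p_value:.2e}); "
          "re-assess the model for bias and consider retraining.")
```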
4. Shared Responsibilities
Transparency and Trust
All stakeholders are responsible for promoting a transparent culture in AI development, deployment, operations, and decisions to ensure user trust.
Collaboration
Stakeholders must collaborate to address multifaceted issues such as privacy, security, and bias reduction.
Ethical Leadership
Stakeholders must actively embrace a leadership role to promote the ethical application of AI and ensure that it serves the public interest without harming individuals’ rights or principles.
How Securiti Can Help
Securiti is the pioneer of the Data + AI Command Center, a centralized platform that enables the safe use of data and GenAI. It provides unified data intelligence, controls, and orchestration across hybrid multicloud environments. Large global enterprises rely on Securiti's Data Command Center for data security, privacy, governance, and compliance.
Securiti Gencore AI enables organizations to safely connect to hundreds of data systems while preserving data controls and governance as data flows into modern GenAI systems. It is powered by a unique knowledge graph that maintains granular contextual insights about data and AI systems.
Gencore AI provides robust controls throughout the AI system to align with corporate policies and entitlements, safeguard against malicious attacks, and protect sensitive data. This enables organizations to comply with Malaysia's National Guidelines on AI Governance and Ethics.
Request a demo to learn more.