
Singapore proposes framework to foster trusted Generative AI development

  • Framework furthers international consensus on governance of Generative AI
  • Builds on previous AI Model Governance Framework 


1. The AI Verify Foundation (AIVF) and Infocomm Media Development Authority (IMDA) have developed a draft Model AI Governance Framework for Generative AI. This framework expands on the existing Model Governance Framework that covers Traditional AI, last updated in 2020 [1].

2. Generative AI has significant transformative potential - above and beyond what Traditional AI has been able to achieve. This also comes with risks. While it remains a dynamically developing space, there is growing global consensus that consistent principles are needed to create a trusted environment - one that enables end-users to use AI confidently and safely, while allowing space for cutting-edge innovation. The use and impact of AI are not limited to individual countries. Hence this proposed framework aims to facilitate international conversations among policymakers, industry, and the research community, to enable trusted development globally.

3. With Generative AI, there is a need to update the earlier model governance framework to holistically address new issues that have emerged. The proposed framework integrates ideas from our earlier discussion paper on Generative AI [2], which put forward a conceptual foundation. It also draws on earlier technical work to provide an initial catalogue and guidance on suggested practices for the safety evaluation of Generative AI models [3]. On top of this, it draws on practical insights from ongoing evaluation tests conducted within our Generative AI Evaluation Sandbox [4].

4. The framework looks at nine proposed dimensions to support a comprehensive and trusted AI ecosystem. The core elements are based on the principles that decisions made by AI should be explainable, transparent, and fair. Beyond principles, it offers practical suggestions that model developers and policymakers can apply as initial steps (see Annex for more information).

5. AI governance remains a nascent space. Building international consensus is key, as demonstrated by the successful mapping and interoperability of national AI governance frameworks between Singapore and the US, through the crosswalk between IMDA and the US National Institute of Standards and Technology (NIST) [5]. The proposed Model Governance Framework for Generative AI takes this one step further by covering the latest developments in Generative AI. In turn, this will inform Singapore’s next steps, as we adopt a practical approach to maximise both trust and innovation. This framework will evolve as techniques and technologies develop.

6. For more details, please refer to the proposed draft framework. We welcome views from the international community; this feedback will support finalisation of the Model AI Governance Framework in mid-2024.


Issued by Infocomm Media Development Authority

About Infocomm Media Development Authority (IMDA)

The Infocomm Media Development Authority (IMDA) leads Singapore’s digital transformation with infocomm media. To do this, IMDA will develop a dynamic digital economy and a cohesive digital society, driven by an exceptional infocomm media (ICM) ecosystem – by developing talent, strengthening business capabilities, and enhancing Singapore's ICM infrastructure. IMDA also regulates the telecommunications and media sectors to safeguard consumer interests while fostering a pro-business environment, and enhances Singapore’s data protection regime through the Personal Data Protection Commission.

For more news and information, follow IMDA on LinkedIn (@IMDA), Facebook (@IMDAsg) and Instagram (@IMDAsg).

About AI Verify Foundation

The AI Verify Foundation (AIVF) harnesses the collective power and contributions of the global open-source community to develop AI testing tools enabling responsible AI. The Foundation promotes best practices and standards for AI and seeks to build trust through ethical AI.


For media clarifications, please contact:

(Ms) Brenda Tan
(Communications and Marketing)
DID: (65) 8180 0228
