This speech was delivered by Denise Wong, Assistant Chief Executive, Data Innovation and Protection Group, IMDA, at the PECC 2025 Conference on “Asia-Pacific AI Governance Accelerator and Preparing for AI Transformation” on 11 July 2025.
SINGAPORE – 11 JUL 2025
Introduction
Distinguished guests, fellow policymakers, business leaders, ladies and gentlemen, good morning. To all our guests from overseas, a very warm welcome to Singapore.
1. I would like to thank the Singapore National Committee for Pacific Economic Cooperation for inviting us to be part of this Conference, to support the global discussion on what matters to us – Accelerating AI Governance and Preparing for AI Transformation.
2. Across the Asia-Pacific, we are seeing rapid growth in the use of AI, generating value for our economies and addressing real societal needs. Many of these use cases are applicable across the region. For example:
- iHub, a logistics company operating in five countries, used AI in route planning and space optimisation, demonstrating how AI implementation can improve supply chain efficiency.
- In healthcare, AI is used in many hospitals to assist doctors in medical report interpretation.
3. At the same time, we are all aware of the need to ensure that AI is used responsibly and equitably. If not, it can result in unreliable and unsafe outcomes, and we will not be able to maximise the benefits of this technology.
4. Today is a rare but important opportunity for APEC economies to gather, discuss shared challenges and collectively shape policies that will influence the global development and adoption of AI. With the rapid growth of AI capabilities, it is critical that we work together so that we can all harness AI successfully.
5. There are two areas for collaboration that I would like to touch on:
- How we can enable AI adoption; and
- How we can mitigate AI risks and harms.
How we can enable AI adoption
6. Our region is home to many cultures, with diverse societal values. For AI to be truly useful, the underlying AI models, which provide the foundation for many AI applications, need to be contextualised to local languages and social norms. This means that:
- The models must be trained on local context, such as local laws, phrases, and places, so that they generate accurate and relevant output when prompted.
- The models should also not generate undesirable output. Last year, Singapore organised a red teaming challenge involving eight other Asia-Pacific countries, including Japan, China, Korea, Indonesia and Thailand. Together, we found that even the most advanced models would produce harmful output, such as biased stereotypes around gender, race and religion, when given simple prompts.
7. Hence, we need to develop models that are aligned to our context and values. In Southeast Asia, we have the SEA‑LION (or Southeast Asian Languages in One Network) initiative, a set of open-source LLMs tailored for Southeast Asian languages and contexts, built by further training existing models like Llama. The AI Singapore team learnt many useful techniques when developing SEA-LION, and they have in turn shared this experience with others, for example Indonesia's GoTo team, which then developed "Sahabat AI", an LLM designed for Bahasa Indonesia and the Indonesian context.
8. Across APAC, countries like Singapore and Indonesia are training and fine-tuning local models. If we can all come together to share knowledge and experiences, we can help advance capabilities and accelerate AI development for everyone.
9. Besides models, applications are also important. This is the layer that consumers and businesses directly engage with. Apps are typically built by local enterprises and tech solution providers. Relevant and accurate data, such as customer data and data on business policies, is critical to ensure that these apps are tailored to use-case-specific needs. Yet having good-quality data to build AI applications remains a gap: in a recent IBM global survey, 42% of respondents cited this as one of their biggest challenges to AI adoption.
10. This has been a busy week for us. It is Personal Data Protection Week, and Singapore has played host to many events on data protection and use, where this issue of data for AI was discussed extensively. Policymakers and regulators in the data space have a crucial role to play in developing mechanisms that facilitate data sharing between companies and unlock data for AI use.
11. First, we can work together to establish common frameworks that allow for more cross border data sharing.
- The Global Cross-Border Privacy Rules (or GCBPR) certification system is a good example. GCBPR brings multiple national data protection regimes closer together to improve interoperability, allowing GCBPR-certified businesses to share and pool data from different countries while complying with their national data regulations.
- The ASEAN Model Contractual Clauses (MCC) are boilerplate contract terms that companies can use to meet their national and local regulations. This is another example of how a group of countries, in this case Southeast Asian nations, can come together to define common contractual requirements.
12. Second, we can promote the adoption of technical solutions such as Privacy Enhancing Technologies, or PETs. These are advanced techniques that allow businesses to use data without exposing sensitive or personal information – a win-win for innovation and protection. Singapore has been working with companies on the use of these technologies in our PETs Sandbox. Drawing on this experience, we published an adoption guide for PETs this week, and we are also actively partnering with the OECD to drive greater use. We will be happy to share these resources and our experience with APAC colleagues.
How we can mitigate AI Risks and Harms
13. As AI capabilities improve, the associated risks are also evolving. Harmful content, missed opportunities due to unfair automated decisions, and inaccurate output – these are just some examples of AI-generated problems that we are aware of. If not managed carefully, they will erode the public's confidence and trust. Without trust, consumers will not adopt AI. Trustworthy, secure and reliable AI is therefore important to facilitate AI adoption. Given the rapid rate of change, it is critical that we take a globally collaborative approach to tackling AI risks.
14. In the area of AI safety research, there is still much to be done. How do we more effectively assess and mitigate risks as model performance advances? For example, Anthropic's Claude 4, released in May this year, has shown improved agentic abilities. Working together to grow the science of AI safety, improve our methods for testing AI and develop solutions in areas of concern is perhaps the only way we can keep pace with AI capability development.
15. This is why Singapore brought together more than 100 technical experts from around the world in the field of AI safety research, to discuss and agree on AI safety research priorities. These important priorities are documented in the Singapore Consensus. Through the Singapore Consensus, governments, research institutions and AI developers can identify and support impactful R&D in AI safety. I hope that APAC countries will also find these priorities useful as you set your research agenda and investment focus.
16. Besides research, we also need coherent AI governance frameworks and standards to reduce the complexity and cost of compliance for businesses. A common approach to responsible and trustworthy AI deployment will make it easier, and thus more likely, for businesses to adopt AI. This can be done in two ways.
17. First, we can jointly develop a set of requirements based on shared principles and norms, aligning on foundational values such as fairness, transparency, accountability and security.
- We could take a look, for example, at what ASEAN has done. Singapore chairs the workgroup on AI in ASEAN and, together with our Southeast Asian neighbours, developed the ASEAN Guide on AI Governance and Ethics. The guide sets out clear expectations on what enterprises need to do in oversight, risk assessment and operations management, when implementing AI systems.
- The G7 is another example – through the Hiroshima AI Process, a Code of Conduct and Reporting Framework have been developed to provide guidance for AI system developers.
18. These frameworks help to define globally acceptable norms for safe and reliable AI development.
19. Second, in nascent fields like AI application testing, we can collaborate on developing good practices that could eventually become standards for businesses to follow. Singapore recently launched the Global AI Assurance Sandbox, a way for companies to work with third-party testers and government agencies to experiment and develop new ways to test AI risks in their applications. We piloted this at the Paris AI Summit earlier this year and received very good participation and results – more than 30 companies from around the world working on 17 use cases. I hope all of you will encourage your companies to participate in the Sandbox. This way, we can collectively develop and coalesce around common standards for evaluating AI systems for safety and reliability.
20. Before I conclude, I would like to bring our attention to perhaps the most crucial aspect of our AI collaboration journey – addressing AI’s impact on our societies.
- AI-generated deepfakes can pose a serious threat to our social fabric. Not only can they spread false narratives that damage businesses and harm individuals, they also have the power to erode trust in our media, institutions and our personal relationships.
- We also anticipate unprecedented changes in our workforce. AI is fundamentally transforming how we work – changing existing jobs and creating new ones. Our workers will need new skill sets, which means that retraining and reskilling are critical.
- Looking further ahead, we are also starting to think about how we can ensure the well-being and education of our children, as they interact more with AI and become increasingly dependent on it.
21. These challenges are complex and interconnected, and there are no easy answers. They also demand more than individual solutions – they require collective understanding and action. Gatherings like today's are invaluable platforms for exchanging ideas and forging partnerships to help tackle these challenges.
Closing
22. The future of AI is about ensuring that as we advance technologically, everyone benefits. We want to build a future where AI innovation and trust can flourish together. A future where we can build consensus around practical solutions despite our different starting points. Today, let us establish new connections, learn from each other, and inspire collaborations that will shape the future of AI and society as a whole. Thank you.