
Responsible AI boosts consumer trust and business growth in Singapore

As Artificial Intelligence (AI) creates new business opportunities and transforms industries, public concern about its risks is growing. Even as many companies adopt the technology to reduce human effort, make faster decisions, increase accuracy, and deliver a more personalised consumer experience, only 35 per cent of global consumers trust how AI is being implemented1. Companies that want to unlock AI’s transformative opportunities must do more to ensure that the AI systems they implement not only make accurate, bias-aware decisions without violating personal data privacy, but are also used in a responsible manner.

AI governance helps to achieve this balance, introducing guidelines and frameworks that can help companies stay accountable and ethical in this process. Risks associated with AI can be addressed through a structured approach that involves sound policy and regulation. On that front, Singapore has contributed to the international discourse on AI ethics and governance, with IMDA championing efforts to establish Singapore as a hub for responsible AI deployment and innovation, through initiatives such as the Model AI Governance Framework, AI Verify and the Generative AI Evaluation Sandbox.

To monitor and scale with responsible AI, businesses must ensure appropriate governance is in place.

IMDA leading responsible AI efforts

Since 2018, IMDA has played a crucial role in helping Singapore adopt a practical, risk-based AI governance approach that facilitates innovation, safeguards consumer interests, and serves as a common global reference point. This includes investing in AI governance initiatives, introducing legal frameworks that allow AI to thrive safely and benefit users, and positioning the country as a heavyweight in this space. IMDA also ensures that AI regulations undergo regular review to keep pace with AI’s rapid development, assessing and refining current models without stifling their advancement.

The Centre for AI and Data Governance

Under the administration of IMDA, the Centre for AI and Data Governance was set up at the Singapore Management University School of Law in 2019 to develop data governance research and inform policy formation in Singapore. It also looks at the responsible development and deployment of AI by the industry.

The Model AI Governance Framework

To further promote the responsible use of AI, IMDA and the Personal Data Protection Commission (PDPC) launched the first edition of the Model AI Governance Framework (Model Framework), which converts high-level AI ethics principles into implementable measures for organisations to deploy AI responsibly. With a diverse range of organisations taking up these measures, IMDA then released a second edition of the Model Framework, incorporating feedback from industry organisations and companies that have adopted AI. IMDA and PDPC also partnered the World Economic Forum Centre for the Fourth Industrial Revolution to develop an Implementation and Self-Assessment Guide for Organisations (ISAGO). Using ISAGO, organisations can assess the alignment of their AI governance practices with the Model Framework while learning from industry best practices.

More recently, in 2024, a new draft Model AI Governance Framework for Generative AI was developed and shared for international feedback. Expanding on the existing Model Framework that covers Traditional AI, the new framework addresses emerging issues from Generative AI (Gen AI) and will help facilitate international efforts to build a trusted AI ecosystem.

AI Verify

Beyond the Model Framework, IMDA and PDPC launched AI Verify at the 2022 World Economic Forum’s annual meeting in Davos. The world’s first voluntary AI Governance Testing Framework and Toolkit, AI Verify helps businesses demonstrate their deployment of responsible AI through technical tests and process checks. First made available as a Minimum Viable Product, AI Verify brings together the disparate ecosystem of testing sciences, algorithms, and technical tools to enable companies to assess their AI models holistically in a user-friendly way. AI Verify also facilitates the interoperability of AI governance frameworks in multiple markets and contributes to the development of international AI standards. It encompasses a testing framework that is aligned with internationally accepted AI ethics principles such as those from the EU, OECD, and Singapore.
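To illustrate the kind of technical test that governance toolkits like AI Verify automate (this is an illustrative sketch only, not AI Verify’s actual implementation), consider a simple fairness check. Demographic parity difference compares the rate of positive predictions a model gives to two groups; a gap near zero suggests the groups are treated similarly. The function and sample data below are hypothetical.

```python
# Illustrative fairness check of the kind AI governance toolkits run.
# Not AI Verify's actual code; the names and data here are hypothetical.

def demographic_parity_difference(predictions, groups):
    """Return the gap in positive-prediction rates between groups A and B.

    predictions: list of 0/1 model outputs
    groups: list of group labels ("A" or "B"), same length as predictions
    """
    rates = {}
    for label in ("A", "B"):
        outcomes = [p for p, g in zip(predictions, groups) if g == label]
        rates[label] = sum(outcomes) / len(outcomes)
    return abs(rates["A"] - rates["B"])

# Group A receives positive predictions 75% of the time, group B only 25%.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
grps = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(preds, grps)
print(f"Demographic parity difference: {gap:.2f}")  # prints 0.50
```

A large gap like this would prompt a closer look at training data and model features; toolkits typically run many such metrics alongside process checks on documentation and oversight.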

The AI Verify Foundation

While guidelines are critical to safeguarding responsible AI use, it is important to ensure that these guidelines do not inadvertently restrict innovation. To this end, Singapore is leading discussions and action for responsible AI governance and contributing to the development of international standards and best practices.

IMDA unveiled the AI Verify Foundation in June 2023. Comprising industry-leading members, the Foundation harnesses the collective power and contributions of the global open-source community to develop AI testing tools, promote best practices and standards, and enable responsible AI. Working together with AI owners, solution providers, users, and policymakers, the Foundation also supports the development and use of AI Verify to address risks of AI. Companies including AWS, DBS Bank, Google, Meta, Microsoft, Singapore Airlines, the Land Transport Authority, and Standard Chartered Bank have tested AI Verify and provided IMDA with valuable feedback on the framework. Such industry feedback is consistently channelled into the development of the framework to strengthen AI governance testing and evaluation.

The AI Verify Foundation has also contributed to efforts that support the responsible adoption of Gen AI. A first of its kind, the Gen AI Evaluation Sandbox utilises a new Evaluation Catalogue that sets out common baseline methods and recommendations for large language models (LLMs). Industry partners are invited to collaborate and develop evaluation tools and capabilities in the Sandbox to assess Gen AI and tackle potential harms from the technology.
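A minimal sketch of what a baseline LLM evaluation can look like in practice, assuming a keyword-based harm screen (real evaluations in the Evaluation Catalogue are far more sophisticated; the keyword list, function names, and sample responses below are hypothetical):

```python
# Illustrative baseline evaluation: score a batch of model responses
# against a harmful-content phrase list. This is a toy sketch, not a
# method from the Evaluation Catalogue.

HARM_PHRASES = {"bypass security", "steal credentials"}

def flag_harmful(response: str) -> bool:
    """Flag a response that contains any listed harmful phrase."""
    text = response.lower()
    return any(phrase in text for phrase in HARM_PHRASES)

def harmful_rate(responses) -> float:
    """Return the fraction of responses flagged as potentially harmful."""
    flagged = [r for r in responses if flag_harmful(r)]
    return len(flagged) / len(responses)

sample = [
    "Here is how to bypass security controls on the server.",
    "I cannot help with that request.",
    "Use strong passwords and enable two-factor authentication.",
    "This script can steal credentials from a browser.",
]
print(f"Flagged rate: {harmful_rate(sample):.2f}")  # prints 0.50
```

Running such checks over a large, curated prompt set gives developers a repeatable baseline number to compare models and releases against, which is the kind of common yardstick the Sandbox aims to establish.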

Leveraging opportunities in AI

AI is poised to disrupt established practices, drive innovation, and accelerate business growth across a wide range of industries. Be it in healthcare, retail or even banking, there are opportunities for AI to make a significant difference. Understanding and addressing the risks of nascent technologies such as AI will spur innovation and allow users to scale these technologies in a safe, trustworthy, and ethical manner. Be part of this exciting shift and find out how your business can transform through responsible AI.

Footnote

1Accenture, Technology Vision 2022, Meet Me in the Metaverse.

LAST UPDATED: 02 MAY 2024
