About Artificial Intelligence Singapore (AI SG)
Artificial Intelligence (AI) refers to the study and use of intelligent machines to mimic human action and thought. With the availability of Big Data, advances in computing, and the invention of new algorithms, AI has risen as a disruptive technology in recent years, not only in Singapore but globally.
Leveraging opportunities in AI to improve lives and transform businesses
Whether it’s rendering better healthcare or servicing the banking needs of our seniors, AI can be applied to a variety of industries to create better living, stronger communities, and more opportunities for all.
With much to gain, Singapore is looking to develop niches within AI R&D, building local digital capabilities and fostering partnerships across relevant parties.
AI SG is a national programme to catalyse, synergise and boost Singapore’s AI capabilities. It is driven by a partnership between the National Research Foundation (NRF), the Smart Nation and Digital Government Office (SNDGO), the Economic Development Board (EDB), the Infocomm Media Development Authority (IMDA), SGInnovate, and the Integrated Health Information Systems (IHiS). The NRF will invest up to $150 million in the programme over five years.
Using AI and Data responsibly
As Singapore develops its digital economy, a trusted ecosystem is key — one where organisations can benefit from tech innovations while consumers are confident to adopt and use AI. In the global discourse on AI ethics and governance issues, Singapore believes that its balanced approach can facilitate innovation, safeguard consumer interests, and serve as a common global reference point.
On 25 May 2022, IMDA/PDPC launched A.I. Verify, the world’s first AI Governance Testing Framework and Toolkit for companies in Singapore that wish to demonstrate responsible AI in an objective and verifiable manner. A.I. Verify, currently a Minimum Viable Product (MVP), aims to promote transparency between companies and their stakeholders.
Developers and owners can verify the claimed performance of their AI systems against a set of principles through standardised tests. A.I. Verify packages open-source testing solutions, together with a set of process checks, into a Toolkit for convenient self-assessment. The Toolkit will generate reports for developers, management, and business partners, covering major areas affecting AI performance.
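To illustrate the idea of verifying claimed performance through standardised tests, the sketch below checks a model's predictions against a claimed accuracy and a simple fairness metric (demographic parity difference) and emits a small report. All names here (`run_standard_tests`, the thresholds) are hypothetical stand-ins for illustration, not the actual A.I. Verify API.

```python
# Hypothetical sketch of a standardised test run: verify claimed accuracy
# and a simple group-fairness metric, then emit a report dictionary.

def demographic_parity_difference(preds, groups):
    """Largest gap in positive-prediction rates between any two groups."""
    rates = {}
    for g in set(groups):
        selected = [p for p, gg in zip(preds, groups) if gg == g]
        rates[g] = sum(selected) / len(selected)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

def run_standard_tests(preds, labels, groups, claimed_accuracy,
                       parity_threshold=0.1):
    # Accuracy against ground-truth labels.
    accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
    parity_gap = demographic_parity_difference(preds, groups)
    return {
        "accuracy": round(accuracy, 3),
        "meets_claimed_accuracy": accuracy >= claimed_accuracy,
        "demographic_parity_gap": round(parity_gap, 3),
        "fairness_ok": parity_gap <= parity_threshold,
    }

# Toy data: eight predictions, two demographic groups.
report = run_standard_tests(
    preds=[1, 0, 1, 1, 0, 1, 0, 0],
    labels=[1, 0, 1, 0, 0, 1, 0, 1],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
    claimed_accuracy=0.7,
)
print(report)
```

A real toolkit would of course bundle many such tests, plus process checks, into one report; this shows only the shape of a single standardised check.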
Ten companies from different sectors and of different scale have already tested the MVP and/or provided feedback: AWS, DBS Bank, Google, Meta, Microsoft, Singapore Airlines, NCS (part of Singtel Group)/Land Transport Authority, Standard Chartered Bank, UCARE.AI, and X0PA.AI.
We welcome organisations to pilot the MVP. Companies participating in the pilot will have the unique opportunity to:
- Gain early access to the MVP and use it to conduct self-testing on their AI systems/models;
- Use MVP-generated reports to demonstrate transparency and build trust with their stakeholders; and
- Help shape an internationally applicable MVP to reflect industry needs and contribute to international standards development.
For background on the development of the MVP, please click here.
Model AI Governance Framework
On 23 January 2019, Singapore released its first edition of the Model AI Governance Framework (“Model Framework”) for broader consultation, adoption and feedback. The framework provides detailed and readily implementable guidance to private sector organisations to address key ethical and governance issues when deploying AI solutions. By explaining how AI systems work, building good data accountability practices, and creating open and transparent communication, the framework aims to promote public understanding and trust in technologies.
On 21 January 2020, Singapore released the second edition of the Model Framework.
Decisions made by AI should be
EXPLAINABLE, TRANSPARENT & FAIR
AI systems should be
HUMAN-CENTRIC
Internal Governance Structures and Measures
- Clear roles and responsibilities in your organisation
- SOPs to monitor and manage risks
- Staff training
Determining the Level of Human Involvement in AI-augmented Decision-making
- Appropriate degree of human involvement
- Minimise the risk of harm to individuals
- Minimise bias in data and model
- Risk-based approach to measures such as explainability, robustness, and regular tuning
Stakeholder Interaction and Communication
- Make AI policies known to users
- Allow users to provide feedback, if possible
- Make communications easy to understand
The second edition includes additional considerations (such as robustness and reproducibility) and refines the original Model Framework for greater relevance and usability. For instance, the section on customer relationship management has been expanded to include considerations on interactions and communications with a broader network of stakeholders. The second edition of the Model Framework continues to take a sector- and technology-agnostic approach that can complement sector-specific requirements and guidelines.
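The robustness consideration mentioned above can be made concrete with a simple probe: perturb each input slightly and measure how often a model's decision flips. The model, tolerance, and function names below are illustrative assumptions, not anything prescribed by the Model Framework.

```python
# Illustrative robustness probe: how often does a small input
# perturbation flip a (stand-in) model's decision?
import random

def model(x):
    # Stand-in scoring model: approve when the score crosses a threshold.
    return 1 if x >= 0.5 else 0

def flip_rate(inputs, epsilon=0.05, trials=100, seed=0):
    """Fraction of perturbed predictions that differ from the original."""
    rng = random.Random(seed)
    flips = total = 0
    for x in inputs:
        base = model(x)
        for _ in range(trials):
            noisy = x + rng.uniform(-epsilon, epsilon)
            flips += (model(noisy) != base)
            total += 1
    return flips / total

# Inputs near the 0.5 decision boundary flip often; distant ones rarely do.
print(flip_rate([0.51, 0.9, 0.1]))
```

A high flip rate on realistic inputs would suggest the system is fragile near its decision boundary and warrants the kind of regular tuning the framework recommends.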
Implementation and Self-Assessment Guide for Organisations (ISAGO)
Intended as a companion guide to the Model Framework, ISAGO aims to help organisations assess the alignment of their AI governance practices with the Model Framework. It also provides an extensive list of useful industry examples and practices to help organisations implement the Model Framework.
ISAGO is the result of a collaboration with the World Economic Forum Centre for the Fourth Industrial Revolution to drive further AI and data innovation. The guide was developed in close consultation with industry, with contributions from over 60 organisations.
Access the ISAGO here.
A Compendium of Use Cases
Complementing the Model Framework and ISAGO is a Compendium of Use Cases (Compendium) that demonstrates how local and international organisations of different sectors and sizes have implemented or aligned their AI governance practices with all sections of the Model Framework. The Compendium also illustrates how the featured organisations have effectively put in place accountable AI governance practices and benefitted from the use of AI in their line of business. We hope these real-world use cases will inspire other companies to do the same.
Volume 1 features use cases from Callsign, DBS Bank, HSBC, MSD, Ngee Ann Polytechnic, Omada Health, UCARE.AI and Visa Asia Pacific. Access Volume 1 here.
Volume 2 contains use cases from the City of Darwin (Australia), Google, Microsoft, Taiger as well as a special section on how AI Singapore implemented our Model Framework in its 100 Experiments projects with IBM, RenalTeam, Sompo Asia Holdings and VersaFleet. Access Volume 2 here.
A Guide to Job Redesign in the Age of AI (Guide)
Under the guidance of the Advisory Council on the Ethical Use of AI and Data, IMDA/PDPC has collaborated with the Lee Kuan Yew Centre for Innovative Cities (LKYCIC), Singapore University of Technology and Design, to launch Singapore’s first guide that helps organisations and employees understand how existing job roles can be redesigned to harness the potential of AI, increasing the value of their work.
The adoption of AI has gained significant momentum in Singapore in recent years, with the government and various industries leveraging AI to drive innovation and transformation. Launched on 4 December 2020, this Guide provides an industry-agnostic and practical approach to help companies manage AI's impact on employees and to prepare organisations adopting AI for the digital future.
This Guide provides guidance on practical steps in four areas of job redesign:
Assessing the impact of AI on tasks
Assess whether each task can be automated or augmented by AI or should remain in human hands, and decide which jobs can be transformed within an appropriate time frame.
Charting clear pathways between jobs
Chart task pathways between jobs within an organisation and identify the tasks employees would need to learn to transition from one job to another.
Clearing barriers to Digital Transformation
Suggest ways to address potential challenges and support employees when implementing AI.
Enabling effective communication between employers and employees
Build a shared understanding within the organisation of “why”, “what”, and “how” AI will augment human capabilities and empower employees in their careers.
The Guide supports IMDA’s efforts to build a trusted and progressive AI environment that benefits businesses, employees and consumers. For example, the Model Framework guides organisations to deploy AI responsibly and address consumer concerns. Likewise, the Guide encourages organisations to take a human-centric approach to manage the impact of AI adoption by investing in redesigning jobs and reskilling employees.
Access the Guide here and the primer here.
Adoption and Feedback
We encourage organisations to use the Framework, ISAGO and Guide for internal discussion and implementation. Trade associations and chambers, professional bodies, and interest groups are welcome to use this document for their discussions and adapt it for their own use. The way in which businesses employ AI continues to evolve and so will this living document in the form of future editions.
To this end, we welcome organisations to share with us:
• Practical examples that would aid in illustrating section(s) of the Model Framework and Guide; and/or
• Experiences in using the Model Framework, ISAGO and Guide, e.g. how easy the measures are to implement, how the framework can be improved, or a helpful implementation that we may publish as a use case. Your use cases will continue to inspire more companies to implement AI responsibly.
Please email us at firstname.lastname@example.org
Trusted Data Sharing Framework
The trusted use of data is the foundation of a vibrant Digital Economy, and trusted data flows have the potential to deliver tremendous benefits to both organisations and consumers. IMDA has released the Trusted Data Sharing Framework to help companies address trust challenges between data providers and data consumers and develop “trusted data” practices. The framework establishes a baseline “common data sharing language” and a systematic approach to the broad considerations involved in trusted data-sharing partnerships.
Find out more about the framework on the Data Collaboratives Programme page.
AI for Everyone
This programme will introduce AI to 10,000 students and working adults and is conducted free of charge by AI SG. Co-supported by IMDA, Microsoft and Intel, it aims to demonstrate how AI can improve the way one lives, works, and plays.
AI for You
This programme seeks to enhance the competitiveness of 2,000 working professionals and students in a digital economy by equipping them with basic AI and data competency skills. The three-month online-offline hybrid curriculum is developed by AI SG and supported by IMDA, Microsoft, Intel and DataCamp, an online learning platform for Data Science and AI.
AI Apprenticeship Programme (AIAP)
Aimed at training fresh graduates with programming experience to become AI professionals, AIAP is a full-time, nine-month programme. It blends classroom, online, and hands-on project work, with mentorship from AI, Big Data and High-Performance Computing practitioners.
“AIAP mentors are professional and experienced. My knowledge has increased tremendously under their mentorship."
Singapore’s Advisory Council on the Ethical Use of AI and Data
To drive awareness of the benefits of AI and understanding of its challenges (such as ethical and legal issues), IMDA is engaging key stakeholders including the government, industry, consumers, and academia to collaboratively shape the government’s plans for the AI ecosystem.
This discourse will inform the government’s plans to support Singapore as a hub for AI development and innovation, and help Singapore respond effectively to global developments.
The council is made up of members from diverse backgrounds, including international leaders in AI, advocates of social and consumer interests, and leaders of Singapore companies who are keen to make use of AI.