
Building a better world with trustworthy AI
From environmental to humanitarian issues, AI has the potential to support social good, but ethical matters like accessibility and privacy must be tackled at the beginning of the AI development process—and not at the end as an afterthought.

By Jill Arul

Since the term was coined in 1956, artificial intelligence (AI) has progressed from a field exclusive to scientists and researchers to an integral part of our everyday lives. More than a tool for convenience, profit and tailored online shopping, AI has the power to address significant societal challenges, from environmental to humanitarian issues. To this end, companies like Gringgo in Indonesia and Wadhwani AI in India are developing tools that improve waste management, health systems and access to education.

Despite its vast potential for good, AI is not a one-size-fits-all solution. Amid its rapid rise, a process hastened by the pandemic, less privileged communities can easily be left behind.

As part of the ATxAI Conference on 14 July 2021, speakers at the ‘AI for Social Good’ panel examined the fine line between AI’s benefits and its potential downsides. Representatives from government, industry and research institutes proposed possible solutions and explored ethical regulation, inequality and AI governance.

The guiding principles of responsible AI

Kicking off the discussion, Ms Gabriela Ramos, Assistant Director-General for Social and Human Sciences at UNESCO, shared UNESCO’s efforts to provide much-needed guidelines on ethical AI. She pointed to the case of facial recognition systems that fail to identify individuals of certain ethnicities because of hidden biases embedded in their code.

In light of the risks posed by the lack of diversity and biases in AI, UNESCO member states have come together to develop a blueprint outlining ethical recommendations. This framework covers issues of transparency and privacy, and the regulation of data related to education, health, environment and gender.

Efforts to ensure ethical AI and reduce negative outcomes are not new, as His Excellency Omar Sultan Al Olama, the United Arab Emirates (UAE) Minister of State for Artificial Intelligence and Digital Economy, explained. As early as 2017, the UAE launched a national strategy to analyse the technology’s use and manage its challenges.

Speaking from a policymaker’s perspective, H.E. Omar believes an eye on the future is crucial.

The government needs to perform a balancing act here. They need to ensure that every portion of society can use and develop AI and progress without affecting other portions of society with a Pandora’s Box scenario that would have to be dealt with in the future.

His Excellency Omar Sultan Al Olama

Minister of State for Artificial Intelligence and Digital Economy, United Arab Emirates (UAE)

Making AI accessible for all

Even with guidelines available, the mere presence of AI in our lives can exacerbate existing social issues if left unaddressed. Two major concerns raised during the panel were accessibility and privacy.

When it comes to accessibility, technology tends to widen the gaps and inequalities between communities. To address this, H.E. Omar recommends giving all members of the community, starting with students, a better understanding of AI so that it can continue to be developed as a powerful tool for social good.

On the privacy front, users and their personal data must be protected from cybersecurity breaches and questionable uses. Mr Raymund Enriquez Liboro, Commissioner and Chairman of the National Privacy Commission of the Philippines, explained that laws must be enforced to ensure responsible AI use.

Facial recognition technology, if used responsibly, provides benefits to society. However, when used unlawfully, it can result in unbridled surveillance and biased decision making, and can potentially violate human rights, especially when used to crack down on legitimate dissent.

Mr Raymund Enriquez Liboro

Commissioner and Chairman of the National Privacy Commission of the Philippines

Getting everyone involved in ensuring AI for social good

With panel members offering solutions from the perspective of government, industry and international organisations, it was clear that work still needs to be done. But the question remains—who will lead the charge?

Internationally, steps have been taken to protect the privacy of users worldwide through the Global Privacy Assembly’s working group. For over four decades, the assembly has brought together data protection and privacy authorities to share strategies. Mr Liboro stressed the assembly’s collaboration with local policymakers, individual users, academia and businesses to push for accountability and transparency.

Mr Royce Wee, Director and Head of Global Public Policy at e-commerce giant Alibaba Group added that in 2012, China passed a resolution during its 18th National Congress that emphasised the importance of upholding personal data protection. This direction of AI governance came into full force in July 2017 when China's State Council released a comprehensive plan to make China the leading AI power by 2030, intending to use the technology to simultaneously solve societal and economic issues.

As laid out in this mind map, achieving socially beneficial AI is a whole-of-world and whole-of-community effort, as all stakeholders must be involved in the technology’s regulation, education and innovation.

While it might seem like governments and international organisations are spearheading efforts in AI for social good, industry also has a huge role to play. For example, Alibaba’s Smart Touch system uses AI to increase accessibility and allow those with disabilities to obtain goods and services more easily. In response to the COVID-19 pandemic, Alibaba has also invested heavily in improving AI-based diagnosis tools for the disease.

However, to truly integrate ethics into AI development, players from industry, government and academia must work together to decide how ethics can be included in the process as early as possible. Associate Professor Jennifer Mei Sze Ang, Director of the Centre for University Core at the Singapore University of Social Sciences, believes that while communicating across disciplines may be difficult, it is essential.

Ethics should feature in the design and development process—not as an afterthought. If we are going to start right at the beginning, developers need to have a lot more conversations with ethicists and social scientists to ensure ethics features in every part of the process, not just at the end.

Associate Professor Jennifer Mei Sze Ang

Director of the Centre for University Core, Singapore University of Social Sciences

From these discussions, it is apparent that AI, when wielded responsibly, has huge potential to be a ‘tech for good’. With players worldwide working hand in hand to make the technology more accessible and accountable, it is only a matter of time before the positive societal impacts of AI are clearly felt by anyone, anywhere.

Held from 13 to 16 July 2021, Asia Tech x Singapore brought together thought leaders in business, tech and government to discuss the trends, challenges and growth opportunities of the digital economy, and how to shape the digital future.
