
Keynote Speech by Deputy Chief Executive, Mr Leong Keng Thai, at Tsinghua University School of Law on Thursday, 17 January 2019 in Beijing, China

SINGAPORE – 17 JAN 2019

Data Protection in Singapore: From Compliance to Accountability

Deans
Professors
Distinguished Guests
Ladies and Gentlemen

  1. I am very honoured to have been invited to give a keynote speech at the International Exchanges Symposium on Data Security and Personal Information Protection today. Thank you, Prof Shen, for putting together this Symposium at Tsinghua University School of Law to bring together leaders, officials, academics and experts to exchange perspectives on different approaches to data protection. This is a great opportunity for me to share Singapore’s humble experience and speak about our approach. I describe it as an Asian approach to data protection, and there is no better place to share our perspectives than right here in the heart of Asia, Beijing, the capital of the People’s Republic of China.

  2. With the proliferation of Internet of Things (IoT) devices, the velocity and volume of data generated and collected each day have seen unprecedented growth. On the back of Big Data analytics, we have been able to aggregate an unimaginable variety of data from many sources, and to manage this expansion of data types, both structured and unstructured. Artificial Intelligence (AI) is an enabling technology that will augment our human ability to transform data into features that we have come to depend on in our daily lives, such as recommendation engines, image recognition, translation and predictions. Data has created new opportunities for entrepreneurship and innovation that are disrupting established business models. Its impact would have been unimaginable just a decade ago. The mass deployment of AI has benefited both China and Singapore. Facial recognition technology powered by AI and developed by Chinese companies is sold worldwide. We use such technology back home. I no longer have to carry my office pass with me because facial recognition technology literally opens doors for me.

  3. The ubiquity of sensors in our mobile devices and different types of video cameras has led to the collection of vast amounts of personal data. When Singapore enacted our Personal Data Protection Act, user-provided personal data formed the majority of the data that we thought would be collected. Since then, IoT has dramatically changed the way personal data is generated and collected. The volume of user activity or observable data captured and collected through IoT devices today eclipses the more traditional declared or user-provided data. IoT devices on our persons, for example mobile phones or wearables like activity trackers, are constantly collecting data, whether we are moving about, relaxing or even asleep. The data generated by a single sensor on a smart device like a mobile phone is often captured and collected by multiple apps that we have installed. IoT sensors in our environment, like the increasingly common webcams that we install to keep an eye on our children or parents at home while we are in the office, collect personal data whenever a person steps within range. Cloud storage allows vast volumes of data to be stored for retrieval anywhere in the world. I can keep an eye on what’s going on at home even when I am travelling to China on official business! This fundamental shift in how data is collected challenges the consent-based foundation of data protection law in several ways. For example, it is difficult to identify the moment when the purpose for collection is effectively brought to the attention of an individual. Similarly, it is difficult to find a clear indication of consent from an individual who does not have to interact with the IoT device, or who may not even know that it is observing him.

  4. The ways that companies can make use of our personal data have also changed. Data analytics and AI offer new possibilities for companies to make better use of data that they collected in the past. In data protection language, we refer to this as re-purposing or secondary use of data. The possibility of re-purposing places additional stress on the traditional consent-based regime. Consent obtained in the past was for a set of purposes contemplated at that point in time. Technological advances today offer new possibilities which could not have been anticipated before, and we can be certain that there will be even more in the future. From the perspective of a company looking to employ new technology to better harness its store of data, contacting each individual to obtain fresh consent is often not viable, if not impossible. Contact details become outdated. Even when contacted, we can only realistically expect that a small fraction of individuals will bother to reply. On the flip side, leaving past data sitting in storage and not making use of it is economically wasteful. The company will not be maximising the value of its data to sharpen its competitive edge and benefit its customers.

  5. We are familiar with these challenges. Even though Singapore’s Personal Data Protection Act has been in operation for barely five years, we have decided that it needs to be updated. A decade ago, when we started thinking about enacting a data protection law, we looked to the OECD guidelines and the Canadian data protection law as models. As we review our data protection law today, we have decided to craft a set of customised rules. This is the only way to strike a three-way balance between consumer protection, business efficiency and technology innovation. I should emphasise that we do not believe these are competing interests. They are complementary. Businesses will deploy new technologies as fast and as far as their customers are willing to adopt them. Establishing a conducive and consistent regulatory environment that encourages innovation through the building of public trust will spur the adoption of emerging technologies. Maintain a high level of public trust, and consumers will be all the more willing to participate in the digital economy. When the balance is struck well, these form a reinforcing loop that will propel society and the economy forward.

From Compliance to Accountability

  1. The history of accountability can be traced to the very beginnings of data protection. It began as a way to ensure that an organisation transfers personal data only to other organisations that have equivalent data protection practices in place. This principle has evolved, but one key aspect of accountability today is still ensuring that personal data is transferred to receiving organisations with comparable practices. Today, this principle operates at different levels. Some countries mutually recognise that each other’s data protection laws are comparable, making life easy for businesses that transact between them. Other regimes require that companies put in place contracts to ensure that their counterparts have data protection practices in place. A new area that we think will grow is the recognition of data protection marks as a visible indication that data protection practices are in place.

  2. Let me now share Singapore’s approach to building business and consumer trust in the digital economy, in order to create an environment of trust that is conducive to innovation. Two years ago, we embarked on a transformation journey. In this transformation, we brought to the surface the core principle of accountability that is embedded in Singapore’s Personal Data Protection Act. The shift away from a checklist approach of compliance towards a principled and more adaptable implementation of data protection is what we describe as a shift from compliance to accountability. Accountability is about organisations being able to demonstrate to their customers that they have put measures in place to pre-emptively identify and address the risks that come with data use. It is about providing the regulator with the assurance that policies and practices are robust, governance and monitoring structures are in place, drawer plans are ready and can be put into action, and communication lines with customers are prepared and ready.

  3. In this modern world, with transborder commerce taking place all the time, we need to present options for the business community so that consumers at home have access to a worldwide choice of goods and services and reap the benefits of e-commerce. The principle of accountability will enable us to construct secure bridges between countries with comparable data protection practices, so that data can flow in support of transborder trade and e-commerce. I believe that conferences like this, where we share practices in data protection, will increase our knowledge of each other’s data protection systems. Increased knowledge and familiarity will enable us to find the best way to facilitate the cross-border flow of data in support of modern digital trade, with the ultimate goal of benefiting our consumers at home.

The Data Protection Trustmark certification system

  1. We are conscious that we live in a connected world today. Having a Data Protection Trustmark is helpful for the domestic market, but so much of our trade takes place across borders. We believe that facilitating cross-border data flows while maintaining high standards of personal data protection is necessary for international trade. This allows our companies to reach global markets. It also benefits our consumers, who will have greater access to a wider variety of goods and services. For this to happen, personal data protection has to be in place to gain consumers’ trust in transferring their personal data in an increasingly global digital economy.

  2. Last week, Singapore officially launched the Data Protection Trustmark certification scheme. We think this is an important development. The Commission’s 2018 surveys showed that 2 in 3 consumers would rather buy from a brand that is certified to protect their personal data. In addition, 4 in 5 organisations would prefer to work with businesses that properly manage personal data.

  3. The Trustmark enhances and promotes consistency in data protection standards by establishing and recognising robust data governance standards. It helps organisations increase their competitive advantage and build trust with their clients. As certification entails regular independent review of work processes, the Trustmark serves as a visible badge recognising organisations that demonstrate accountability and responsibility in their data protection policies and practices. This visible badge makes it easy for customers to identify accountable companies, giving business partners and consumers the assurance that they are looking for.

  4. The framework was developed by aligning with the PDPA and incorporating elements of international benchmarks such as the APEC CBPR and PRP systems, as well as global best practices. It has been designed to have both domestic and global relevance. We want Singapore to be part of a global network that connects accountable companies, so that personal data is passed from one set of safe hands to another. Certified organisations will find it easier to move personal data across borders in participating APEC economies. That is why Singapore is participating in the APEC CBPR and PRP systems. We have announced plans for the application for our Data Protection Trustmark and registration for APEC CBPR or PRP to be an integrated process for companies based in Singapore. With this, we look forward to many organisations coming on board.

Enhancing the consent regime

  1. To ensure that the regulatory environment keeps pace with evolving technology in enabling innovation, we are reviewing the Personal Data Protection Act. We started this two years ago and have conducted two rounds of public consultations, sharing our proposed policy positions and gathering feedback from stakeholders.

  2. As I have discussed earlier, we cannot rely on consent as the sole means of controlling how personal data is collected and used. We intend to enhance our consent regime by introducing parallel bases for processing personal data. We have identified two such enhancements. First, we intend to introduce a notification and opt-out system to allow organisations to notify consumers of their intended secondary use of personal data collected in the past. The notification can take place by any means that the organisation thinks is effective in reaching its customer base. The notification will have to specify a period for the customer to opt out. At the expiry of the notification period, customers who have not informed the organisation that they wish to opt out will be deemed to have consented, and the organisation can proceed with the intended secondary use. Of course, customers still have the right to opt out at any time subsequently. This provides a pragmatic way for organisations to make better use of data that they have already collected as and when technological advances offer new opportunities.

  3. At other times, it may be necessary for the larger interest of systemic benefits to override individual preferences. One often-cited example is monitoring payment transactions for fraud attempts. The need to maintain integrity and trust in the payment, banking or financial system must trump individual preferences. We will therefore be introducing legitimate interests as a way to allow organisations to make use of data without having to obtain consent when there is a larger benefit to society.

  4. We introduced a regulatory sandbox to pilot test these concepts before we amend our law. There has been strong interest from the private sector. We look forward to working with the private sector to pilot these concepts and fine-tune the details before we introduce legislative amendments.

Empowering the consumer

  1. Consumers can also look forward to greater accountability from organisations. A mandatory data breach notification regime will be introduced. Under this system, organisations will have to notify the Commission and affected individuals when there is a significant data breach. We are not introducing this lightly, as we expect that it will increase the Commission’s workload. But we think that it is necessary for several reasons.

  2. First, notifying individuals early will allow them to take steps to protect themselves, for example, changing passwords and cancelling credit cards, depending on the context. Second, notifying the Commission gives us the opportunity to provide guidance to organisations on managing the breach and containing its potential harm. Third, breach notification allows us to spot trends and problem hotspots. This will enable us to identify prevailing problems and craft industry interventions to address them. Even with the voluntary breach notification system we have in place today, we have been able to put this intention into practice. Last year, we completed an intervention at a healthcare institution to help it improve its data protection practices.

Accountable Use of AI and Data

  1. I have spoken about our shift from compliance to accountability in personal data protection. Let me now share how we are putting accountability into practice in the area of AI governance. The diffusion of AI technology into the marketplace is a good example of how we are using accountability as the guard rails for guiding the broad adoption of AI in Singapore.

  2. In order to create a trusted ecosystem for AI adoption, we announced three initiatives last year to engage key stakeholders in shaping the AI ecosystem. These are: the formation of an Advisory Council on the Ethical Use of AI and Data; the establishment of a Research Programme on the Governance of AI and Data Use; and the publication of a discussion paper on the responsible development and adoption of AI.

Advisory Council on the Ethical Use of AI and Data

  1. Our foremost initiative in shaping the AI ecosystem was the formation of an Advisory Council on the Ethical Use of AI and Data last August. It comprises international leaders in AI such as Google, Microsoft and Alibaba; advocates of social and consumer interests; and leaders of local companies who are keen to make use of AI. These eleven members, who come from diverse backgrounds, have been selected for their ability to contribute to the Advisory Council’s objectives.

  2. The Advisory Council will assist the Government in developing ethics standards and reference governance frameworks, and in publishing advisory guidelines, practical guidance and codes of practice for voluntary adoption by industry.

  3. They do this by engaging stakeholders such as the ethics boards of commercial enterprises on private sector use of AI and data, and consumer representatives on consumer expectations and acceptance. The Council also engages the private capital community on the need to incorporate ethical considerations when investing in businesses that develop or adopt AI.

Research Programme on the Governance of AI and Data Use

  1. As part of the second initiative in shaping the AI ecosystem, we have appointed the Singapore Management University School of Law to establish a five-year research programme on the governance of AI and data use. It has set up the Centre for AI and Data Governance. This independent research centre has the following objectives: (1) to promote cutting-edge thinking and practices in AI and data policies and regulations; (2) to inform AI and data policy and regulation formulation in Singapore through research publications and stakeholder engagement activities; and (3) to establish Singapore as a global thought leader in AI and data policies and regulations.

Model AI Governance Framework

  1. To spearhead Singapore’s discussions on the legal, ethical and governance issues that may arise from the use of AI and other data-driven technologies, we developed a Model AI Governance Framework. It is an accountability-based framework for the responsible development and adoption of AI. It sets out the sorts of questions that we think should be asked by any company before embarking on or implementing AI in its products.

  2. By adopting the model framework, companies will be putting into practice fair, transparent and accountable AI that is human-centric. The framework seeks to do three things:

  3. First, it seeks to identify how ethics and risks associated with AI can be integrated into and managed by existing corporate governance structures. When AI is used to augment human decisions or even to make autonomous decisions, the framework seeks to provide guidance on how to select the most appropriate decision-making model. By doing so, it hopes to ensure that decisions are taken at the right levels, with proper oversight by the Board of Directors, without introducing new structures.

  4. Second, the model framework examines operational considerations when data from different internal or external sources are prepared and used to train machine learning models. By identifying the people involved in ensuring data quality, model training, model selection, and monitoring the output of selected models, companies will be able to allocate the right responsibilities to the right departments.

  5. Finally, how does a company communicate with its customers when it makes use of AI in its services or products? The model framework aims to provide guidance on the level of disclosure, identifying effective means of communication and providing customers with the means to reach out to the organisation.

The importance of data for AI

  1. Having talked about our shift to accountability and how we are applying accountability in our AI strategy, let me speak about the importance of data for successful AI implementation. AI is dependent on access to data, and for many corporations today it is necessary to be able to access data from around the world to make full use of AI. Regardless of how huge national datasets may be, they suffer from one issue: selection bias. The patterns and trends only reflect the behaviours and preferences of customers or users in that country. Biased datasets used to train AI models will result in features that are useful only for that country. A feature implemented this way may not be applicable in a different country or culture, and it will be difficult to deploy in another market. Hence, it is important that companies have the option of localising their feature set when it is necessary, but also the ability to globalise their feature set when they need to.

  2. Let me illustrate using a multi-national company (MNC) with global operations and a worldwide customer base. Imagine that it is a logistics company, integrating land, sea and air transportation for the delivery of goods worldwide. Its data is generated in many countries. For example, land transportation data is generated where its trucks run; customer data is generated in the countries where its customers are from; and data to improve the supply chain or the delivery network is generated in every country the supply chain or delivery network touches. Hence data is generated and initially stored in multiple jurisdictions. As I have just explained, the benefit from analysing data is limited if it is stored in silos and analysed where it is stored. We cannot get a complete picture. It will always only provide us with a biased set of insights.

  3. The ideal is to pull data into data centres around the world to facilitate processing. Just as data is generated and initially stored in different countries, where it is processed can also be distributed around the world. This very much depends on how the MNC has structured its corporate functions and operations. Human resource matters may be run out of one country, while improvements to operations are driven by another, customer behaviour analysis is done in a third country, and research and development is spread out across multiple corporate laboratories in different continents. Data from the MNC’s global operations will need to be pulled into each of these different centres for different types of analysis and machine learning model training. Data scientists and AI experts will then have access to the right datasets, on which they can apply their skills.

  4. What about smaller companies? They do not have the global reach or muscle of MNCs. For them, it is all the more important to be able to form partnerships and pool data for common commercial goals. In order to compete effectively, smaller companies have to share data with their suppliers and business customers. This allows them to approximate the breadth of data that MNCs have access to. Pooling datasets allows them to enrich their observations. This mutual exchange enables greater collaboration and the generation of new insights using AI tools. Take the example of logistics startups providing local delivery services in ASEAN countries tying up to offer a Chinese online mall a delivery network that covers all of ASEAN. They will need to share data.

  5. At the same time, these startups may not have the requisite expertise and need to tap on external AI processing services. If the service provider is located in a country outside ASEAN, it is less costly to move the data to the service provider than to have them send employees over for weeks to work on a project. Modern communications technologies allow us to collaborate across the globe. Access to a global market of suppliers of AI processing services will lower the barrier of entry for smaller companies to adopt AI.

Conclusion

  1. We believe in promoting responsible data use and sharing in order to engender a high level of consumer trust. Striking the right balance between organisations’ interests and consumers’ interests (and perhaps national interests) is key. Privacy and AI can leverage each other to enhance overall output while at the same time providing consumers with the best user experience. We want companies to build in good data accountability practices, to reassure consumers that decisions or suggestions made through AI are working towards their best interests. In order to build public understanding and trust in AI technologies, it is important for companies and consumers to understand the benefits and challenges of these technologies, such as challenges related to ethics and the law. The ultimate goal of our shift from compliance to accountability is to establish a high level of consumer trust as the bedrock of our data protection regime, thereby enabling data innovation in a Digital Economy.

  2. Let us think together, share experiences and learn best practices for harnessing this new wave of challenges, and work together to continue building a robust, trusted data ecosystem that will drive the digital economy. With this, I would like to thank Tsinghua University School of Law for the opportunity to speak, and I wish you a very happy Lunar New Year ahead.

Thank you

 
