With more products and services employing AI to deliver greater personalisation or to make autonomous predictions, the public needs assurance that AI systems are fair, explainable, and safe, and that the companies deploying them are transparent and accountable.
IMDA and PDPC have developed A.I. Verify, an AI Governance Testing Framework and Toolkit, to enable industry to demonstrate responsible deployment of AI. It is currently available as a Minimum Viable Product (MVP) for system developers and owners who want to be more transparent about the performance of their AI systems through a combination of technical tests and process checks.
IMDA is now inviting the broader industry to participate in this pilot phase of the MVP. For more information, refer to the detailed invitation to pilot here and the primer here.
Who can apply?
IMDA is inviting the following to participate in the pilot of the MVP:
- AI system owners and developers who wish to verify their AI systems* against internationally accepted AI ethics principles;
- Technology solution providers who wish to contribute to the development of AI governance implementation and testing tools; and
- Other testing framework owners and developers who wish to have early discussions on compatibility and interoperability with Singapore’s AI Governance Testing Framework and Toolkit.
*As A.I. Verify is still an MVP, the current version supports only binary classification and regression models.