
Singapore Unveils New AI Safety Initiatives

Understanding AI Safety

AI safety, broadly speaking, refers to the practices and principles that help ensure AI technologies are developed and used ethically, for the benefit of humanity, while minimizing potential harms and negative impacts [1]. As AI systems become increasingly prevalent and impactful across various sectors, prioritizing AI safety is crucial to protecting individuals and society. Building and maintaining safe AI systems requires identifying and addressing the potential risks that could arise from their use, including algorithmic bias, data security breaches, and vulnerabilities to external threats that could compromise the integrity of AI systems. By proactively addressing these concerns, developers and organizations can create processes and frameworks that safeguard against unintended negative outcomes. Robust AI safety measures not only prevent harm but also promote trust in these technologies, ensuring that their integration into everyday life is both secure and beneficial for all. In this article, we'll explore the importance of AI safety and take a closer look at Singapore's new AI initiatives.

What Makes AI Safety Important?

As AI systems continue to advance, they become more integrated into our daily lives, and their influence extends into critical sectors such as infrastructure, finance, and national security [1]. This growing reliance on AI technologies brings both opportunities and challenges, as these systems can have profound effects, positive and negative, on the organizations that deploy them and on society at large. For society as a whole, ensuring AI safety is crucial to safeguarding public welfare, privacy, and fundamental rights. AI systems that are biased, opaque, or misaligned with human values risk perpetuating or even exacerbating existing societal inequalities, making it imperative that safety protocols be in place [1]. In addition, these technologies must be carefully regulated to prevent harm and to ensure they serve the public good without undermining core democratic principles.

From a business standpoint, prioritizing AI safety is equally important for establishing consumer trust and mitigating legal risk [1]. Organizations that adopt responsible AI practices are better positioned to protect themselves against potential legal liabilities, build consumer trust, and avoid poor decision-making [1]. By ensuring that AI systems operate in alignment with the company's ethical standards and values, businesses can avoid negative consequences for themselves and their clients or customers [1]. Such proactive measures not only protect the organization's bottom line but also help ensure that AI technologies benefit both the company and society at large, without compromising public trust and safety. By prioritizing AI safety, we can harness the full potential of AI and automation while maintaining ethical standards and minimizing associated risks [1].

Singapore's New AI Safety Initiatives

Singapore has introduced a series of new AI governance initiatives designed to improve the safety of AI for both Singaporeans and global citizens, given the transboundary nature of AI products and services [2][3]. The announcement was made by Singapore's Minister for Digital Development and Information, Josephine Teo, at the AI Action Summit (AIAS) in Paris, France [3]. Here's a closer look at each of the three initiatives:

- Launch of the Global AI Assurance Pilot: This initiative, launched by the AI Verify Foundation and the Infocomm Media Development Authority (IMDA), aims to establish global best practices for the technical testing of generative AI applications [3]. The Pilot will pair AI assurance and testing vendors with companies deploying real-life GenAI applications [3].
- Release of a Joint Testing Report with Japan under the AI Safety Institute (AISI) Network: The joint report aims to improve the safety of large language models (LLMs) across different linguistic settings by assessing whether guardrails hold up in non-English contexts [3]. It evaluates the performance of AI safeguards across ten languages (Cantonese, English, Farsi, French, Japanese, Kiswahili, Korean, Malay, Mandarin Chinese, and Telugu) and five harm categories (violent crime, non-violent crime, intellectual property, privacy, and jailbreaking) [2][3].
- Publication of the Singapore AI Safety Red Teaming Challenge Evaluation Report 2025: The report assesses LLMs for cultural bias in non-English languages and sets out a consistent methodology for testing across different linguistic and cultural contexts to tackle regional AI safety issues [2]. It is based on findings from the AI Safety Red Teaming Challenge, organized by the IMDA and Humane Intelligence in November 2024 [3].

These new initiatives reflect Singapore's commitment to rallying industry and global partners towards concrete actions that advance AI safety [3]. By promoting best practices in AI testing and evaluation, the country contributes to the development of a safer and more responsible global AI ecosystem [4].

Conclusion

AI safety is essential to the responsible development and deployment of AI technologies. As we move forward, it is crucial that policymakers and developers continue to work hand in hand so that the integration of AI into various sectors is ethical, transparent, and above all, safe. The latest AI safety initiatives launched by Singapore set a valuable example of how countries can take concrete steps toward improving AI safety.
Notes and References
- McGrath, A., & Jonker, A. (2024, November 15). What is AI safety? IBM. https://www.ibm.com/think/topics/ai-safety
- Singapore Announces New AI Safety Initiatives. (2025, February 13). Singapore Business Review. https://sbr.com.sg/information-technology/news/singapore-announces-new-ai-safety-initiatives
- SG announces new AI Safety initiatives at the global AI Action Summit in France. (2025, February 11). Infocomm Media Development Authority. https://www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/press-releases/2025/singapore-ai-safety-initiatives-global-ai-summit-france
- Singapore's New AI Safety Initiatives: A Global Commitment to Responsible AI. (2025, February 13). Ainvest. https://www.ainvest.com/news/singapore-s-new-ai-safety-initiatives-a-global-commitment-to-responsible-ai-25021010f598c2120383f6d3/