APAC is becoming a powerhouse for AI innovation. With markets such as China, India, Singapore, South Korea, and Japan leading the charge, businesses across industries are eager to harness AI capabilities that promise greater efficiency, growth, and innovation. As AI adoption increases, significant challenges arise in parallel, particularly around data privacy and compliance with regional regulations. Navigating the maze of rules governing data sovereignty in these markets can feel overwhelming, especially as they continue to evolve. Yet for firms looking to expand into APAC or optimize their use of AI in those markets, compliance is non-negotiable.
In this article, we’ll examine the AI and data privacy regulations across key APAC markets and explore potential solutions for smoother compliance.
China: the region’s strictest AI regulations?
China has some of the strictest AI and data privacy laws in the APAC region. Laws such as the Data Security Law (DSL) and the Personal Information Protection Law (PIPL) require the personal data of Chinese citizens to be stored and processed within the country. AI systems can be deployed only after passing strict cybersecurity assessments, and recently issued regulations, such as the Generative AI Measures, have raised transparency and security requirements for both domestic and foreign companies operating generative AI services in China. Companies must also meet transparency obligations, such as labeling AI-generated content and using algorithms ethically.
India: balancing innovation and privacy
India’s Digital Personal Data Protection Bill (DPDPB) is set to introduce comprehensive data privacy regulations, similar to Europe’s GDPR. One of the key provisions of the DPDPB is the requirement for data localization, meaning sensitive personal data must be stored and processed within India’s borders. While the legislation hasn’t yet been finalized, India’s approach to AI regulation emphasizes innovation within a framework that prioritizes ethical and responsible AI use. In March 2024, the government also released an advisory urging platforms and intermediaries to label AI-generated content.
India’s large AI startup ecosystem can be expected to shape the nuances of its regulatory approach, and may differentiate its legal framework from those of other countries in the years to come.
Singapore: a voluntary ethical AI benchmark
Singapore has positioned itself as a thought leader in responsible AI development, with a strong emphasis on transparency and accountability. The country’s Personal Data Protection Act (PDPA) governs how businesses handle personal data, and in May 2024, Singapore introduced the Model AI Governance Framework for Generative AI, a set of best practices aimed at bringing ethical AI to all industries.
Singapore’s government has also launched AI Verify, an AI governance framework that helps businesses benchmark their AI systems against internationally recognized governance principles. This proactive approach allows businesses to innovate while empowering them to check that their AI systems are transparent, accountable, and free from harmful biases.
Interestingly, Singapore has not enacted AI-specific legislation to date. Instead, the country has chosen to emphasize the voluntary, responsible use of AI by businesses operating in Singapore.
South Korea: new compliance requirements
With AI development moving rapidly in South Korea, the country is balancing innovation against regulatory oversight. Its AI regulatory framework will soon be further advanced by the Act on the Promotion of the AI Industry and Framework for Establishing Trustworthy AI. The act was passed by the Science, ICT, Broadcasting, and Communications Committee of the Korean National Assembly in early 2023 and is expected to take effect soon.
The AI Act encourages innovation by allowing the development of new AI technologies without pre-approval from the government while imposing stringent requirements for high-risk AI systems, particularly those that impact public health, safety, or fundamental rights. These high-risk AI systems will be subject to strict notification and trustworthiness standards, reflecting the government’s commitment to both innovation and safety. South Korea’s Personal Information Protection Act (PIPA) also plays a key role, giving individuals the right to contest automated decisions. PIPA is regularly revised to address emerging technologies including AI, most recently in 2023.
Japan: preparing for responsible AI legislation
Japan, widely regarded as a world leader in AI and robotics, is similarly considering more formal regulation. Today it operates on a “soft law” basis, built around the AI Guidelines for Business Version 1.0 released in April 2024. The guidelines call for voluntary measures around transparency, accountability, and safety, with an agile governance model that supports continuous assessment.
Although Japan currently relies on this voluntary framework, the country is developing a binding legal approach. The proposed Basic Law for Promoting Responsible AI would impose binding obligations on AI developers and providers and is expected to be passed by the end of 2024. It includes strict reporting requirements and penalties for non-compliance, which would mark a significant shift in Japan’s current regulatory landscape.
Indonesia, Malaysia, and Vietnam: emerging AI frameworks
Other APAC countries, such as Indonesia, Malaysia, and Vietnam, are also working on AI regulations to encourage more responsible use of AI and stronger data privacy. In Indonesia, regulations are set to be implemented by the end of 2024 and will introduce sanctions for the misuse of AI technologies, mainly in relation to personal data protection. Meanwhile, Malaysia has been working on an AI ethics code that includes requirements for transparency of AI algorithms intended to eliminate bias. Vietnam’s draft Digital Technology Industry Law proposes regulatory sandboxes for AI development and lists prohibited AI activities.
Partner with Gcore, a global provider, for your AI compliance and innovation
APAC is an exciting region for businesses looking to innovate in a flexible environment. Compared to more strictly regulated regions such as the EU, much of APAC allows rapid development of AI technologies, which can make it especially appealing to startups.
As AI regulations across APAC continue to evolve, businesses need more than just awareness of these rules—they need solutions that allow them to stay compliant while fostering innovation. Working with a global AI and cloud provider that offers localized data storage and edge computing solutions is key to navigating this complex regulatory landscape.
With our sovereign cloud solutions and expansive network of data centers, we at Gcore provide the infrastructure businesses need to remain compliant with APAC’s data protection and AI regulations. By leveraging Gcore’s localized data storage and edge computing capabilities, businesses can comply with regional laws while optimizing AI performance and driving innovation across the APAC region.