
The Legal Dispatch Room

The Global AI Regulatory Landscape: Core Principles and Requirements

Introduction

As artificial intelligence (AI) transforms industries and societies worldwide, governments are moving to establish regulatory frameworks to manage its deployment and use.

While each jurisdiction brings unique considerations to AI regulation, core principles and requirements have emerged. Understanding these shared foundations is crucial for organisations navigating the complex global AI regulatory landscape.

Core principles

Transparency and explainability

Most AI laws require systems to clearly disclose when users are interacting with AI. To this end, regulators are exploring the use of technical solutions, such as:

  • digital watermarking – embedding code into digital content to provide information on its source; and

  • cryptographic provenance – the use of algorithms to indicate if the digital content has been altered by AI.

In jurisdictions such as the European Union (EU) and the People’s Republic of China, for example, AI systems that create synthetic content (e.g. deepfakes) must mark their outputs in a machine-readable format as artificially generated or manipulated.
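To make these concepts concrete, the sketch below pairs a machine-readable “AI-generated” marker with a cryptographic hash of the content, so that any later alteration of the content can be detected. This is purely illustrative: the field names and model name are hypothetical, and real deployments would rely on an established standard such as C2PA’s Content Credentials, with digital signatures rather than a bare hash.

```python
import hashlib
import json

def provenance_record(content: bytes, generator: str) -> str:
    """Return a machine-readable (JSON) record marking `content` as
    AI-generated.  Field names here are illustrative only, not drawn
    from any specific standard."""
    return json.dumps({
        "ai_generated": True,
        "generator": generator,
        "sha256": hashlib.sha256(content).hexdigest(),
    })

def is_unaltered(content: bytes, record: str) -> bool:
    """Check content against its provenance record: any change to the
    bytes changes the hash, revealing the alteration."""
    return hashlib.sha256(content).hexdigest() == json.loads(record)["sha256"]

original = b"<synthetic image bytes>"
record = provenance_record(original, "example-model-v1")
print(is_unaltered(original, record))              # True
print(is_unaltered(original + b"tamper", record))  # False
```

A downstream platform receiving both the content and the record can therefore tell its users, automatically, that the content is synthetic and whether it has been modified since generation.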

Safety and risk management

Safety requirements appear consistently across jurisdictions, with risk-based approaches becoming the dominant regulatory model. These tiered regulatory frameworks impose stricter requirements on higher-risk AI applications (e.g. those affecting health, safety or fundamental rights), requiring organisations to:

  • conduct risk assessments;

  • implement appropriate safeguards based on risk levels; and

  • establish monitoring systems to detect and address potential harms.

The EU AI Act, for instance, mandates, amongst other things, that providers of high-risk AI systems: (i) ensure datasets are relevant, representative and, to the best extent possible, error-free and complete in view of the intended purpose; (ii) draw up technical documentation to demonstrate compliance and ensure record-keeping functions; and (iii) provide instructions for use to downstream deployers to enable their compliance.

Accountability and human oversight

Regulatory frameworks consistently emphasise effective human oversight over AI decision-making processes, particularly for high-risk applications. This principle translates into requirements for:

  • human review processes – including structures to address automation bias, so that personnel remain aware of their tendency to over-rely on outputs produced by AI systems;

  • clear chains of responsibility for AI system outcomes – including maintaining records of oversight activities, decisions made and interventions performed; and

  • mechanisms for human intervention – for humans to pause, modify or override AI system operations.

Privacy and data protection

Privacy concerns form an integral part of AI regulation. Existing data protection frameworks must now address AI-specific challenges, such as the use of vast amounts of personal data for training, appropriate retention periods, and the risk that AI systems may infer sensitive information or re-identify individuals from anonymised datasets.

In the EU, the European Data Protection Board has provided specific guidance on using personal data for AI model development and deployment, emphasising that compliance with data protection regulation remains essential throughout the AI lifecycle. Technical privacy-preserving measures for AI are also increasingly recognised and encouraged for organisations seeking to comply with privacy regulations, such as:

  • differential privacy – adding mathematical noise to datasets to protect individual privacy while preserving statistical utility;

  • federated learning – training AI models across distributed datasets without centralising raw data;

  • homomorphic encryption – enabling computation on encrypted data without first decrypting it; and

  • synthetic data generation – creating artificial datasets that preserve statistical properties while removing individual identifiers.
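The first technique in the list above (known as differential privacy) can be illustrated with a short sketch. The code below is purely illustrative: the salary figures, bounds and epsilon value are hypothetical, and a production system would use a vetted privacy library rather than hand-rolled noise.

```python
import random

def laplace_noise(scale):
    # A Laplace(0, scale) draw: the difference of two independent
    # exponential variables is Laplace-distributed.
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

def private_mean(values, lower, upper, epsilon):
    """Mean of `values` with Laplace noise calibrated so the result is
    epsilon-differentially private.  Clipping each value to
    [lower, upper] bounds the influence of any one individual to
    (upper - lower) / n, the 'sensitivity' of the query."""
    clipped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clipped) / len(clipped)
    sensitivity = (upper - lower) / len(clipped)
    return true_mean + laplace_noise(sensitivity / epsilon)

# Hypothetical salary data: a smaller epsilon means more noise
# (stronger privacy) and a less accurate published statistic.
salaries = [4200, 5100, 3800, 6000, 4500]
print(private_mean(salaries, lower=0, upper=10000, epsilon=1.0))
```

The noisy mean can be published without revealing whether any one person’s data was in the dataset; stronger guarantees (smaller epsilon) come at the cost of statistical accuracy, which is the trade-off regulators expect organisations to calibrate.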

Conclusion

The emergence of shared principles provides a foundation for building robust, internationally compatible AI governance frameworks. Organisations that align their AI practices with these principles will be better positioned to navigate the evolving regulatory environment while maintaining public trust.

If you would like to understand how to effectively leverage AI amidst the current regulatory frameworks, please get in touch.