Award-winning Singapore corporate law firm specialising in M&A ECM VC PE Corporate Law

tldr

The Legal Dispatch Room

The Model AI Governance Framework For Agentic AI: Practical Governance For Deploying AI Agents

On 22 January 2026, at the World Economic Forum, the Infocomm Media Development Authority of Singapore (IMDA) unveiled and published the Model AI Governance Framework for Agentic AI (the “Framework”). The Framework is intended for organisations looking to deploy agentic AI, whether by developing agents in-house or using third-party agentic solutions.

What is the Model AI Governance Framework for Agentic AI?

The Framework builds on Singapore’s earlier model AI governance frameworks and focuses specifically on agentic AI systems – AI systems capable of autonomous planning, tool use, decision-making, and multi-agent interaction.

Unlike traditional AI models, agentic AI may:

  • Access sensitive or regulated data

  • Interact with external tools and APIs

  • Modify databases or execute financial transactions

  • Operate with varying levels of autonomy

  • Interact with other AI agents

The expanded capability of agentic AI increases regulatory, operational, and cybersecurity risks, for example, where agents have access to sensitive data, can make changes to an environment (such as updating databases or making payments), and where multi-agent interactions may lead to more unpredictable outcomes.

The Framework sets out 4 governance principles for organisations to consider when deploying agents.

Governance principle 1: Assess and bound risks upfront

The Framework recommends that organisations looking to deploy agentic AI should start with risk identification and assessment. Organisations should consider agentic-specific factors when deciding whether a particular use case is suitable for agent deployment.

These factors should be framed by reference to likelihood and impact:

  • Factors affecting impact, such as the domain and use case in which the agentic AI is deployed (including tolerance for error), the ability of the agent to access sensitive data or external systems, and the scope and reversibility of the agent’s actions.

  • Factors affecting likelihood, such as the agent’s level of autonomy, task complexity, and the potential risks of external injections and cyberattacks.

Organisations should also develop a threat model for each agent use case, and (where relevant) use approaches such as taint tracing (tracking the flow of untrusted or “tainted” data) to understand how harmful content can propagate through an agent’s workflow.

Beyond assessing risk, the Framework emphasises bounding risk through design, in particular by:

  • applying limits on the agent’s access to tools and systems (especially where those tools can affect external environments); and

  • defining workflows that agents are constrained to follow and limiting the agent’s potential scope of impact when they malfunction.

In the interim, pending broader standardisation in the industry, the Framework suggests best practices to enable agent control and traceability, including:

  • Identification: issuing unique identities to agents, and maintaining records of agent attributes (for example, who created or deployed the agent, and what permissions were assigned).

  • Authorisation: adopting a least-privilege approach so agents are only authorised to perform actions strictly required for their intended function, with periodic review of permissions (particularly as agents evolve).

Governance principle 2: Make humans meaningfully accountable

Organisations should ensure human accountability notwithstanding agent autonomy and multi-actor lifecycles. The Framework highlights the importance of clearly allocating responsibilities across stakeholders, both within the organisation and with external vendors, while maintaining adaptive governance so organisations can respond quickly as the technology and risk landscape evolves.

The Framework suggests actions to ensure human accountability:

  • Clear allocation of responsibilities: Multiple actors may contribute to agent design, development, deployment and monitoring, which could potentially diffuse accountability. Organisations should clearly define who is responsible for governance across the agent lifecycle.

  • Meaningful human oversight (“human-in-the-loop”): More capable agents may increase automation bias (the tendency for humans to over-trust system outputs). Organisations should adapt “human-in-the-loop” measures, by:

    • defining significant checkpoints where human approval is required (for example, high-stakes or irreversible actions); and

    • regularly auditing whether human oversight remains effective over time.

Organisations should also implement practical measures to ensure approvals are effective. These include measures such as:

  • keeping approval requests contextual and digestible;

  • training reviewers on common failure modes of agentic AI (for example, hallucinated tool outputs, inconsistent reasoning, use of outdated material);

  • auditing oversight processes to mitigate risks like alert fatigue; and

  • complementing human oversight with automated, real-time monitoring to escalate unexpected or anomalous behaviour of the agentic AI.

Governance principle 3: Implement technical controls and processes

Organisations should impose technical controls and processes across three stages of the agentic AI’s lifecycle.

During the design and development stage, organisations should implement agent-specific controls such as:

  • Planning controls: (a) prompting agents to reflect on whether plans adhere to user instructions; (b) requiring clarification before proceeding; and (c) logging plans and reasoning for user verification.

  • Tool controls: (a) strict input formats; (b) least-privilege tool access; and (c) specific safeguards for data-related tools (for example, avoiding granting agents write access to sensitive tables, and requiring user takeover for entering sensitive secrets like passwords or API keys).

  • Protocol controls: (a) using standardised protocols (including in financial transactions handled by agents); and (b) (for Model Context Protocol servers) whitelisting trusted servers and sandboxing code execution.

Before deployment of the agentic AI, organisations should evaluate the agents for safety and security, including testing overall task execution, policy compliance and tool-use accuracy, testing responses to errors and edge cases, and testing at scale repeatedly and across varied datasets.

When deploying the agentic AI, organisations should gradually roll out the agent and continuously monitor and log agent behaviour post-deployment (for example, anomaly detection, alerts for certain logged events, using agents to monitor other agents).

Governance principle 4: Enable end-user responsibility

 Responsibility for the deployment of agentic AI does not rest solely on developers. Organisations are responsible for enabling end-user responsibility by equipping users with essential information to use agents appropriately and to exercise effective oversight.

  •  Users interacting with agentic AI should be equipped with information that allows them to appropriately supervise and verify agent behaviour (including understanding what the agent can do, what it has access to, and what checks the user should perform).

  • Users integrating agentic AI into work processes should maintain foundational skills and avoid over-delegation, so that human oversight remains meaningful, and organisations do not inadvertently hollow out the capability needed to supervise agents.

Next steps

The Framework is a living document, and IMDA continues to welcome feedback, suggestions on updates or refinements, and case studies demonstrating how the Framework can be applied in practice. IMDA is also developing further guidelines for testing agentic AI applications. These recent developments highlight the growing prevalence of agentic AI in day-to-day life, and consequently the need for sound governance and controls to support a trustworthy AI ecosystem in Singapore.

If you would like to discuss how these guidelines could be relevant to you, or what you should consider for your agent deployment programmes in light of these guidelines, please get in touch.