Building the Internet for AI That Acts
As AI moves from advice to action, infrastructure matters. Sui enables autonomous systems to act safely, coherently, and with proof.
Main Takeaways
- AI is starting to do things, not just suggest them. New AI systems, or “agents,” can now book services, move resources, and complete multi-step tasks on their own. Once software takes action, it needs to be trusted in a very different way.
- The internet wasn’t built for software that acts autonomously. Today’s web assumes humans are in control: clicking buttons, retrying failed actions, and fixing mistakes.
- Sui is designed to let AI act safely and verifiably. Instead of layering AI on top, Sui treats execution as a core problem. It allows AI agents to carry out complex actions within clear parameters and settle outcomes as a single, provable result.
Overview
AI systems are moving beyond generating outputs and toward carrying out actions.
Agentic systems — software that can plan and execute multi-step workflows on a user’s behalf — are already coordinating services, managing resources, and transacting across the internet. As this shift accelerates, a structural limitation becomes clear: today’s web was built for human-driven interaction, not autonomous execution at machine speed.
This is why Sui is focusing on what’s known as agentic execution: the infrastructure that allows AI agents to operate within clear parameters, coordinate across systems, and settle outcomes as a single, verifiable result. Agents can’t be treated like any other app; they require an execution environment that supports their unique machine-led needs.
From Recommendation to Execution
For most of its recent history, AI has played an advisory role.
Models generated text, summarized information, or recommended next steps, leaving the final decision and action to a human. Agentic systems cross a different boundary. They don’t just suggest what to do; they assemble workflows and carry them out across tools and services in pursuit of a defined goal.
This transition matters because action introduces consequences. A model’s recommendation can be revised or ignored. By contrast, an executed action makes an irreversible change: a booking is made, a resource is allocated, a transaction is triggered. Once software begins to operate at this level, correctness becomes a matter of outcomes, not interpretation.
As AI systems take on responsibility, trust and coordination stop being optional. Actions must be authorized. Steps must align with intent. Outcomes must be final and auditable. The central question shifts from whether a system produced a plausible answer to whether it executed the right action, under the right constraints, with the expected result.
The challenge facing agentic systems is no longer ‘intelligence’. It’s executing actions across shared environments: multiple systems and services that no single entity controls. That exposes a deeper problem with how today’s internet is built.
Why Today’s Internet Breaks at Machine Speed
The internet was not designed for autonomous execution.
Its core patterns assume humans are present: sessions that expire, retries that require judgment, dashboards for inspection, and manual intervention when something goes wrong. APIs operate as isolated endpoints, permissions are enforced inside applications, and state (the shared facts about what has happened) is fragmented across services that do not share a common source of truth.
These assumptions break down when software operates autonomously.
When an AI agent operates on its own, partial success or ambiguous failure becomes dangerous. Without a shared source of truth, reconciling outcomes across systems risks duplication or inconsistency. What feels like flexibility to a human becomes fragility at machine speed.
As agentic workflows span more systems, this fragility compounds. Execution turns into a chain of assumptions rather than a coordinated process. Logs may exist, but they require interpretation; they record events, not authoritative outcomes.
Agentic systems don’t need more endpoints or faster APIs. If autonomous agents are going to operate reliably, they need shared truth, enforceable rules, and outcomes that settle cleanly. They need infrastructure designed for execution.
What Agentic Systems Actually Require
When AI systems start acting on their own, small gaps in infrastructure become hard failures. The breakdowns we see in today’s web all trace back to the same issue: actions are split across systems that don’t share state, authority, or a clear sense of completion. Humans can paper over that fragmentation; software acting independently can’t.
At a minimum, agentic systems need four foundational capabilities.
1. Shared, verifiable state
When agents operate across applications or organizations, they need a common source of truth. A network’s state can’t be implied or pieced together after the fact.
It must be directly verifiable so systems can reliably determine what is current, what has changed, and what the final outcome is.
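The idea of directly verifiable state can be made concrete with a small sketch. The following Python snippet models a toy state store in which every version is identified by a content hash, so any party holding the latest digest can check whether its view is current or stale. This is an illustrative sketch of the concept only; the class, naming, and hashing scheme are assumptions for the example, not Sui's actual object or state model.

```python
import hashlib
import json

class VerifiableState:
    """Toy key-value state where every version has a content hash.

    Illustrative sketch of 'directly verifiable state'; not Sui's
    actual state model.
    """

    def __init__(self):
        self.data = {}
        self.digest = self._hash()

    def _hash(self):
        # Canonical serialization so identical state always hashes identically.
        blob = json.dumps(self.data, sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()

    def update(self, key, value):
        self.data[key] = value
        self.digest = self._hash()
        return self.digest  # new authoritative version identifier

    def verify(self, expected_digest):
        # Any party holding a digest can check whether its view is current.
        return self._hash() == expected_digest

state = VerifiableState()
v1 = state.update("booking", "confirmed")
assert state.verify(v1)          # view is current
state.update("booking", "cancelled")
assert not state.verify(v1)      # stale view is detectable
```

The point of the sketch is that currency is checked against the state itself, not inferred from logs or reconciled after the fact.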
2. Rules and permissions that move with data
Authority can’t be redefined at every boundary. Access rules and constraints need to travel with the data and actions they govern, so an agent remains authorized as it operates across systems or coordinates with other agents, rather than relying on ad-hoc checks at each step.
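One way to picture rules that travel with data is capability-style authorization, where the permission is attached to the resource itself rather than re-checked by each service. The sketch below is a minimal Python illustration of that pattern; the `Capability` and `Resource` types and the agent names are hypothetical, chosen for the example rather than drawn from Sui's APIs.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Capability:
    """A permission bound to the resource it governs (capability style)."""
    holder: str
    allowed_actions: frozenset

@dataclass
class Resource:
    owner: str
    value: dict
    capabilities: list = field(default_factory=list)

def authorize(resource, agent, action):
    # The rule travels with the resource: any system holding the object
    # evaluates the same policy, with no ad-hoc per-service checks.
    if agent == resource.owner:
        return True
    return any(cap.holder == agent and action in cap.allowed_actions
               for cap in resource.capabilities)

budget = Resource(owner="alice", value={"limit": 500},
                  capabilities=[Capability("travel-agent",
                                           frozenset({"read", "spend"}))])
assert authorize(budget, "travel-agent", "spend")
assert not authorize(budget, "travel-agent", "transfer_ownership")
```

Because the policy is part of the object, an agent's authority stays consistent as the object moves between systems.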
3. Atomic execution across workflows
Agentic actions rarely happen in a single step. They span multiple resources, services, and state changes. These workflows need to execute as a unit, either fully completing everywhere or failing cleanly, without leaving systems in partially completed states that require manual cleanup.
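The all-or-nothing requirement can be sketched as a saga-style workflow: each step has an apply action and an undo action, and any failure unwinds the work already done. This is a simplified application-level illustration of the property; a settlement layer like Sui enforces atomicity at the protocol level rather than with compensating rollbacks, and the booking steps below are hypothetical.

```python
class AtomicWorkflow:
    """Run multi-step actions as a unit: all steps commit, or all are undone.

    Saga-style sketch for illustration; a real execution layer enforces
    atomicity natively rather than via application rollbacks.
    """

    def __init__(self):
        self.steps = []  # (apply_fn, undo_fn) pairs

    def add(self, apply_fn, undo_fn):
        self.steps.append((apply_fn, undo_fn))

    def run(self):
        done = []
        try:
            for apply_fn, undo_fn in self.steps:
                apply_fn()
                done.append(undo_fn)
        except Exception:
            for undo_fn in reversed(done):  # fail cleanly: unwind partial work
                undo_fn()
            return False
        return True

def hotel_full():
    raise RuntimeError("hotel full")

log = []
wf = AtomicWorkflow()
wf.add(lambda: log.append("flight reserved"), lambda: log.append("flight released"))
wf.add(hotel_full, lambda: None)
wf.add(lambda: log.append("payment sent"), lambda: log.append("payment refunded"))
assert wf.run() is False
assert log == ["flight reserved", "flight released"]  # no partial state survives
```

The payment step never executes, and the flight reservation is released, leaving nothing to reconcile by hand.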
4. Proof of what happened
Shared state tells systems what is true now. Proof establishes why that state can be trusted.
Logs and best-effort traces aren’t enough. Agents, users, and auditors need certainty about how an action was executed, under what permissions, and whether it followed the intended rules. Execution should resolve into a definitive outcome with verifiable evidence, not require reconstruction or interpretation after the fact.
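The difference between a log and verifiable evidence can be sketched with a hash-chained audit record, where each entry commits to the one before it, so tampering is detectable rather than merely unlikely. This is a single-process illustration of the idea under assumed names; a real network would rely on consensus and signatures, not an in-memory list.

```python
import hashlib
import json

class AuditChain:
    """Tamper-evident record of executed actions (hash-chain sketch).

    Illustrates 'verifiable evidence'; a real network uses consensus
    and signatures, not a single in-process list.
    """

    def __init__(self):
        self.records = []
        self._prev = "genesis"

    def record(self, actor, action, outcome):
        entry = {"actor": actor, "action": action,
                 "outcome": outcome, "prev": self._prev}
        entry["digest"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self._prev = entry["digest"]
        self.records.append(entry)

    def verify(self):
        prev = "genesis"
        for entry in self.records:
            body = {k: v for k, v in entry.items() if k != "digest"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if body["prev"] != prev or recomputed != entry["digest"]:
                return False
            prev = entry["digest"]
        return True

chain = AuditChain()
chain.record("agent-7", "book_flight", "confirmed")
chain.record("agent-7", "pay", "settled")
assert chain.verify()
chain.records[0]["outcome"] = "cancelled"   # tampering...
assert not chain.verify()                   # ...is detectable
```

An auditor checking such a chain verifies what happened directly, instead of interpreting best-effort traces after the fact.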
Taken together, these requirements point to a clear conclusion. Agentic systems don’t need another layer of services or orchestration tooling. They need an execution layer: infrastructure that can coordinate intent, enforce rules, and settle outcomes by default, making autonomous action possible without constant human oversight.
How the Sui Stack Approaches Agentic Execution
Sui was designed as a full-stack platform where execution is native to the network.
Instead of stitching together actions across applications and coordinating intent after the fact, Sui allows complex tasks to be executed directly and settled as a single, final outcome.
On Sui, actions are designed to be self-contained. Instead of spreading data, permissions, and history across different systems, the network groups them together so it’s always clear what an action can touch, who’s allowed to perform it, and what’s already happened.
That structure makes it possible to execute multi-step actions as a single operation. A workflow that spans several resources can be submitted once and either completes fully or doesn’t happen at all. For example, an agent booking travel can reserve a flight, confirm a hotel, and make the payment as one operation—so it either succeeds end-to-end or nothing is committed. There’s no partial execution to reconcile and no ambiguity.
When execution finishes, the result is final and verifiable. The network records a clear state change showing what happened, under which authority, and with what effect. Outcomes don’t need to be reconstructed from logs.
The result is an execution layer where agents can act with bounded authority, coordinate across systems, and rely on final outcomes without constant human oversight.
From Architecture to Practice
This shift toward agentic systems isn’t theoretical. As AI workflows move into production, builders are running into the limits of today’s infrastructure and looking for ways to execute actions safely, coordinate across services, and verify outcomes by default.
On Sui, these execution-first ideas are already reflected in the developer stack itself. Rather than remaining abstractions, they appear as concrete components designed to support verifiable data, accountable execution, and programmatic value exchange.
The following deep dives show what agentic execution looks like in practice on Sui.
How These Ideas Are Being Applied
- Verifiable Inputs: How data provenance, integrity, and policy-aware access are handled for AI systems and agents → Better AI Starts with Verifiable Data.
- Verifiable Execution: How policies, permissions, and accountable execution are enforced across agentic workflows → Making AI Accountable by Design.
- Verifiable Value Exchange: How licensing, payments, and agentic commerce can be handled safely and programmatically → Data with Rights and When Agents Pay.
- End-to-End Systems: How these layers come together to form a coherent, verifiable AI economy → From Data to Decisions.
As AI agents take on more responsibility, the infrastructure beneath them matters more than ever. The differentiator won’t be ‘intelligence’ alone, but whether systems can turn intent into outcomes that are final, verifiable, and shared.
That’s the execution problem Sui is designed to solve.
For a detailed perspective on the shift toward agentic systems and the infrastructure required to support them, see “The Sui Developer Stack: Powering the Agentic Web,” a thread by Adeniyi Abiodun, Co-Founder at Mysten Labs, the original contributors to Sui.