Baidu DAA puts a new metric on the agent era
Baidu proposed Daily Active Agents at Create 2026. The platform race is moving from token consumption toward agents that actually complete work.
- What happened: Baidu used Create 2026 to propose Daily Active Agents as a platform metric for the agent era.
- Robin Li argued that tokens measure cost and input, while agent platforms should be judged by the number of agents that actually work and produce outcomes.
- Product map: Baidu bundled DuMate, Miaoda/MeDo, Baidu Yijing, Famou Agent 2.0, and a full-stack AI Cloud into one agent portfolio.
- Builder angle: Coding-agent competition is moving beyond IDE assistance into disposable software and agent infrastructure.
- Numbers: Baidu said 90% of the Miaoda app code was generated by Miaoda itself, and that Famou improved an automated port optimization case by 10.21%.
- Watch: DAA is still Baidu's framing. It needs shared definitions for "active" and "outcome" before it can become an industry metric.
Baidu introduced an interesting term at Baidu Create 2026 on May 13, 2026: Daily Active Agents, or DAA. If Daily Active Users became the signature metric of the mobile internet, Baidu's claim is that the agent era needs a different unit: the number of agents that actually work for people every day and deliver results.
This is not just a branding exercise, because Baidu paired the metric with a broad product slate. On the same stage it presented DuMate, a general-purpose agent; Miaoda and MeDo, its coding-agent products; Baidu Yijing, a digital human platform; Famou Agent 2.0, a self-evolving industrial agent; and a full-stack AI Cloud for large-scale agent applications. Baidu wants the conversation to move away from "how many model calls happened" and toward "how many agents completed useful work."
Until now, the AI platform race has usually been described through three kinds of numbers. The first is model performance: benchmarks such as MMLU, SWE-Bench, LMArena, and GPQA help rank frontier systems. The second is audience scale: weekly active ChatGPT users, Claude subscribers, Gemini app usage, and similar adoption signals explain consumer reach. The third is token throughput: for API businesses, tokens remain the core unit for usage, cost, margin, and infrastructure demand.
DAA is different from all three. It is not a capability score, not a human visitor count, and not an input volume metric. Baidu CEO Robin Li said tokens do not measure value; they measure cost. The same one million tokens can yield a rough report draft, or they can run a sales campaign or reschedule production in a way that saves real money. DAA tries to turn that gap into an output-oriented question: did an agent do useful work?
Why DAA now
Baidu is responding to a real measurement problem in AI applications. After generative AI entered the enterprise, many organizations started with usage metrics. How many people used the chatbot? How many prompts did they send? How quickly did monthly token volume grow? Those numbers are easy to capture, but they do not directly prove productivity. A rise in token usage can also mean a rise in cost.
User counts have the same weakness. A person opening an AI tool and asking a question is not the same as a workflow being completed. This is especially true for agent products, where longer user attention is not always a success signal. A good agent may reduce time in the product, finish work in the background, and call the human back only at approval points. In that world, DAU can understate or misread value.
Baidu's proposal is to judge the health of an AI platform by how many agents produced outcomes, not by how many people showed up. According to the official announcement, Robin Li predicted that global DAA could eventually exceed 10 billion. That number assumes a world with more agents than people: one person may operate several agents, business units may run hundreds or thousands, and disposable workflows may be generated continuously.
That does not make DAA a standard yet. There is no shared definition of an "active agent," no common test for an "outcome," and no agreed accounting method when one orchestrator calls several sub-agents. But that is exactly why the announcement matters. Baidu is trying to define the unit of the agent era early. A company that shapes the metric can also shape the product category around it.
DuMate is a portal agent
The first product layer is DuMate. Baidu announced a mobile edition at Create 2026. The official release says DuMate syncs between PC and mobile in real time, lets users start work anywhere, reads screens, operates software, processes files, and connects business systems end to end. Baidu Cloud's DuMate documentation points in the same direction: a desktop-grade AI agent that can see screens, use software, and connect files with work systems.
The key phrase is "unified gateway." Robin Li described DuMate as integrating Baidu Search AI API, the Miaoda coding agent, and Famou Agent. In other words, DuMate is less a single agent app than the entry point for Baidu's agent portfolio. Search, coding, deep research, data analysis, and app generation are meant to sit behind one upper-layer agent experience.
That resembles the direction of ChatGPT, Claude, and Gemini. Frontier AI apps are no longer just chat windows. They are becoming work surfaces connected to files, browsers, business apps, code execution, payments, documents, spreadsheets, and calendars. DuMate's differentiator is Baidu's domestic search, cloud, enterprise, and local app ecosystem. It may be less familiar to global developers, but inside China it can be positioned as an agent portal backed by search and cloud infrastructure.
Miaoda and MeDo push the disposable software thesis
The second layer is Miaoda, Baidu's coding agent. Baidu announced both a Miaoda app and an enterprise edition, saying that users can build applications without writing code. The more aggressive claim is that 90% of the Miaoda app's own code was generated by Miaoda itself. That is product marketing, but it also signals that the coding agent has entered its own product development loop.
Robin Li argued that coding agents can push development cost close to zero and make "disposable software" a reasonable option. Disposable software is software built for a specific moment and a specific purpose, without needing to survive as a long-lived product. An event operations dashboard, a one-campaign internal tool, a data app for a customer meeting, or a one-day logistics simulator can all fit this category.
For developers, that idea is uncomfortable but important. Traditional software engineering prizes maintainability, extensibility, and reuse. But when generation cost falls sharply, not every app needs to become a long-term product. It makes little sense to spend two weeks building an internal tool that saves two hours. If an agent can generate it in ten minutes and the user can validate it quickly, the economics change.
That does not mean disposable software removes the need for engineering discipline. It makes boundaries more important. A one-off app that reads only personal files carries a different risk profile from one that touches customer data, triggers payments, or calls operational systems. That is why Baidu announced an enterprise edition of Miaoda as well. Once coding agents move beyond toy apps, they immediately run into permissions, deployment, auditability, and data integration.
MeDo, the international version, is also worth watching. Baidu said MeDo will bring the Miaoda experience to overseas developers. Cursor, Replit, Lovable, Bolt, Claude Code, and Codex already make the global coding-agent market intensely competitive, so Baidu's ability to break through is still uncertain. But Baidu is not presenting Miaoda only as an app-building AI. It is placing it inside a broader story where the number of working agents can explode.
Baidu Yijing makes the agent visible
The third layer is Baidu Yijing. Rebranded from Huiboxing, it is described as a full-scenario digital human platform covering live streaming, video production, and real-time interactive experiences. Robin Li called digital humans "visible agents." When voice, expression, and gesture are attached, an agent stops being only a backend automation process and becomes an interface that customers can meet directly.
According to Baidu's announcement, Yijing includes three agent capabilities: script writing, video production, and intelligent editing. It supports 12 languages and native-level lip-sync accuracy. The developer takeaway is that digital humans are being framed less as generative video content and more as an agent deployment channel. Customer support, commerce live streams, education, marketing, internal guidance, and field operations instructions can all be packaged as agents that speak and appear on screen.
This connects to a broader channel race for AI agents. Some agents live inside IDEs. Some send approval requests through Slack or Teams. Some operate browsers. Yijing meets users through video and voice. When Baidu talks about DAA, "agent" does not only mean a code-running process. It also includes visible digital workers, content hosts, and sales assistants.
Famou is the industrial proof point
The fourth layer is Famou Agent 2.0. Baidu describes it as a self-evolving agent, with an expanded focus on high-value use cases such as production scheduling, process optimization, and logistics planning. The most concrete number in the announcement comes from an automated port case. Baidu says Famou Agent found better solutions for berth scheduling, equipment allocation, and cargo prioritization, improving performance by 10.21% over an already optimized baseline.
That number needs more detail before it can be fully evaluated. Its meaning depends on the baseline, the definition of performance, whether the improvement was throughput, cost, or delay, and whether the result came from simulation or live operations. Even so, the reason Baidu foregrounded the case is clear. Claims about agent productivity become more persuasive in industrial operations than in generic office automation, because small improvements in scheduling and logistics can translate into large cost differences.
Baidu also said Famou Agent 2.0 won nine of the hardest 15 tasks on MLE-Bench. That points in the same direction. Baidu is emphasizing self-verification and closed-loop execution rather than only conversational assistance. If an agent can generate candidates, verify them, absorb failure, and run again, productivity gains can compound.
This is where DAA becomes sharper. If DAA simply counts bots that are switched on, the metric is weak. If it counts agents like Famou that produce measurable operational improvements, it starts to look more like a productivity metric. Baidu's core message is not "many people use AI." It is "many agents solve real operational problems in closed loops."
| Product / layer | Role | Claim Baidu emphasized |
|---|---|---|
| DuMate | General-purpose agent gateway connecting search, coding, research, data analysis, and app generation | Screen reading, software operation, file processing, and business-system integration |
| Miaoda / MeDo | Coding agent and international version for building apps without coding knowledge | 90% of Miaoda app code generated by Miaoda itself |
| Baidu Yijing | Visible agent for live streaming, video creation, and interactive experiences | 12 languages and native-level lip-sync |
| Famou Agent 2.0 | Self-evolving agent for production scheduling, process optimization, and logistics planning | 10.21% improvement in an automated port case, plus nine wins among the hardest 15 MLE-Bench tasks |
| Baidu AI Cloud | Full-stack AI Cloud for large-scale agent applications | Kunlunxin clusters supporting training for ERNIE 5.1-series core models |
Why the infrastructure announcement came with it
Baidu did not announce only products. It also said it is repositioning AI Cloud as a full-stack AI Cloud for large-scale agent applications. It described upgrades across Agent Infra and AI Infra, and said dedicated clusters based on Kunlunxin AI chips supported training for ERNIE 5.1-series core models. Baidu also attached the claim that ERNIE 5.1, released in early May, ranked first among Chinese models on LMArena's text and search leaderboards.
That combination reflects the specific constraints of Chinese AI companies. U.S. model companies scale around NVIDIA GPUs, hyperscaler clouds, and global SaaS ecosystems. Baidu is presenting search, cloud, chips, models, and apps as one stack. Given geopolitical constraints and China's domestic cloud competition, an agent portfolio announcement is also an infrastructure sovereignty announcement.
Agents demand more complicated infrastructure than ordinary chatbots. They need to track long-running work, orchestrate tool calls, manage files and permissions, retry failed tasks, and keep state. In industrial environments, teams also need observability, security, data residency, latency control, and predictable cost. Baidu uses the term full-stack because it understands that an agent business cannot be reduced to a model API.
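As a rough illustration of why that is more than a model API, the sketch below shows a toy orchestrator (not Baidu's Agent Infra; all names here are invented) that checkpoints task state to disk and retries failed steps, which a stateless chat endpoint never has to do:

```python
import json
import os
import tempfile

class TaskStore:
    """Toy durable state: a long-running agent must survive restarts mid-task."""
    def __init__(self, path):
        self.path = path

    def load(self):
        if os.path.exists(self.path):
            with open(self.path) as f:
                return json.load(f)
        return {"step": 0, "attempts": 0}

    def save(self, state):
        with open(self.path, "w") as f:
            json.dump(state, f)

def run_task(store, steps, max_retries=3):
    state = store.load()
    while state["step"] < len(steps):
        try:
            steps[state["step"]]()           # a tool call, model call, etc.
            state["step"] += 1
            state["attempts"] = 0
        except Exception:
            state["attempts"] += 1
            if state["attempts"] > max_retries:
                store.save(state)            # park the task for human review
                return "needs_attention"
        store.save(state)                    # checkpoint after every step
    return "done"

# Demo: the second step fails once, then succeeds on retry.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] == 1:
        raise RuntimeError("transient tool failure")

path = os.path.join(tempfile.mkdtemp(), "task.json")
print(run_task(TaskStore(path), [lambda: None, flaky, lambda: None]))  # done
```

Everything an enterprise actually needs on top of this loop, such as audit logs, permissions, and cost accounting, is exactly the "Agent Infra" layer Baidu is selling.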
What DAA needs before it becomes a standard
DAA is an appealing frame, but it is also a risky one. The first problem is definition. If an agent sends one notification per day, is it active? If it performs background research without user approval, is it active? If one top-level agent calls ten sub-agents, is the count one or eleven? Do failed tasks count? Without answers, DAA can become another vanity metric.
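The ambiguity is easy to demonstrate. The sketch below uses a hypothetical event log and counting policies (not any real Baidu definition) to show how the same day of activity yields different DAA figures depending on whether sub-agents and failed runs are counted:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgentRun:
    agent_id: str
    parent_id: Optional[str]  # set when an orchestrator spawned this sub-agent
    succeeded: bool

# One hypothetical day: an orchestrator calls two sub-agents (one fails),
# plus an independent notification bot that just pings once.
runs = [
    AgentRun("orchestrator-1", None, True),
    AgentRun("research-sub", "orchestrator-1", True),
    AgentRun("deploy-sub", "orchestrator-1", False),
    AgentRun("notify-bot", None, True),
]

def daa(runs, count_subagents=True, count_failures=True):
    """Count distinct 'active' agents under a given counting policy."""
    active = set()
    for r in runs:
        if not count_subagents and r.parent_id is not None:
            continue
        if not count_failures and not r.succeeded:
            continue
        active.add(r.agent_id)
    return len(active)

print(daa(runs))                         # 4: everything counts
print(daa(runs, count_failures=False))   # 3: failed runs excluded
print(daa(runs, count_subagents=False))  # 2: only top-level agents
```

The same log supports a DAA of 4, 3, or 2 depending on policy, which is why the metric needs shared definitions before cross-platform comparisons mean anything.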
The second problem is outcome verification. Baidu talks about agents that work and deliver results, but result quality differs by domain. For coding agents, passing tests, successful deployments, lack of regressions, and maintainability matter. For sales agents, revenue contribution and customer experience matter. For logistics agents, throughput, cost, safety, and delay matter. If DAA is going to become a platform metric, it needs to be paired with domain-specific completion and outcome measures, not just a raw count.
The third problem is accountability. As the number of agents grows, human work and agent work blur together. If a human only approves the action but the agent made the judgment, who owns the failure? Enterprise buyers cannot avoid that question. That is why recent news around OpenAI Daybreak, Endor Labs AURI, Google Workspace AI governance, and Red Hat's Ansible-backed agent execution layer all converges on permissions, audit logs, policy, and control planes. If DAA grows, governance has to grow with it.
The fourth problem is economics. Baidu is right that tokens are an incomplete value metric because they mostly describe cost. But agents also consume cost. Model calls, search, tool execution, sandboxes, browser automation, workflow orchestration, repository access, and human approval time all add up. A good DAA metric would not reward many agents in isolation. It would reward agents that produce outcomes at a reasonable cost.
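One way to make that concrete is to weight the count by unit economics. The sketch below uses an invented daily ledger with illustrative numbers (nothing here comes from Baidu's announcement) to contrast a raw agent count with a net-value view:

```python
# Hypothetical daily ledger per agent: verified outcomes, estimated value
# per outcome (USD), and total run cost (model calls, tools, sandboxes,
# human approval time). All figures are made up for illustration.
ledger = {
    "report-drafter": {"outcomes": 40,  "value_each": 5.0,    "cost": 120.0},
    "notify-bot":     {"outcomes": 500, "value_each": 0.01,   "cost": 30.0},
    "port-scheduler": {"outcomes": 1,   "value_each": 5000.0, "cost": 800.0},
}

def net_value(entry):
    return entry["outcomes"] * entry["value_each"] - entry["cost"]

raw_daa = len(ledger)  # 3 agents were "active": the vanity version
value_positive = {name for name, e in ledger.items() if net_value(e) > 0}

print(raw_daa)                  # 3
print(sorted(value_positive))   # ['port-scheduler', 'report-drafter']
```

The chatty notify-bot inflates raw DAA while destroying value, and the single-outcome port scheduler dominates on net value, which is the asymmetry a good agent metric would have to capture.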
The question for builders
For developers and AI teams, the point of this announcement is not that everyone should immediately adopt Baidu's products. The more important signal is that the language of platform competition is changing. Teams using coding agents already ask similar questions: how many pull requests did the agent open today, how many were merged, how many were reverted, how often did it fix its own test failures, and how far could it execute without violating security policy?
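Those per-team questions already reduce to a small amount of bookkeeping. A minimal sketch, assuming a simple in-house event log rather than any specific vendor API:

```python
from collections import Counter

# Hypothetical one-day log for a coding agent: each PR it opened ends in
# exactly one terminal state. "reverted" means merged and later undone.
pr_events = ["merged", "merged", "reverted", "closed_unmerged",
             "merged", "merged", "reverted", "merged"]

counts = Counter(pr_events)
opened = len(pr_events)
landed = counts["merged"] + counts["reverted"]  # merged at least once
merge_rate = landed / opened
revert_rate = counts["reverted"] / max(landed, 1)

print(f"opened={opened} merge_rate={merge_rate:.2f} revert_rate={revert_rate:.2f}")
```

Numbers like these are the team-level ancestors of DAA: they count finished, surviving work rather than activity.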
Baidu's DAA extends that question to the platform level. Work agents, coding agents, digital humans, industrial optimization agents, and cloud infrastructure are all folded into an "active agent economy." In that framing, the winner is not the company that keeps the most users staring at an app for the longest time. It is the company with the largest ecosystem of agents that reliably finish work.
It is not clear whether the market will adopt Baidu's exact term. OpenAI and Anthropic can define different metrics around ChatGPT, Codex, Claude, Cowork, and Claude Code. Microsoft can push governance metrics through Agent 365 and the Copilot ecosystem. Google can emphasize task completion and app connectivity inside Workspace and the Gemini API. DAA may end up being adapted rather than copied.
Even so, the announcement is worth remembering. The question of 2024 and 2025 was often "how smart is the model?" In 2026, the question is shifting toward "what work did the model actually finish?" Baidu packaged that shift as DAA. The name may or may not survive, but the direction is clear: the next advantage in AI platforms is not the ability to burn tokens. It is the ability to put agents to work and measure the results.