
AWS puts Chrome policies on a leash for browser agents

AWS AgentCore Browser moves browser-agent security from prompts into Chrome enterprise policies, custom root CAs, and auditable runtime controls.

AI Summary
  • What happened: AWS published a practical walkthrough for configuring AgentCore Browser with Chrome enterprise policies and custom root CAs.
    • The feature was announced on March 25, 2026; the implementation-focused AWS AI Blog post followed on May 14, 2026.
  • Why it matters: Browser-agent safety is moving below prompts and app code into the browser runtime itself.
  • Developer impact: URL allowlists, download restrictions, disabled password storage, private CA trust, and session replay are becoming runtime requirements for agents.
    • Teams building browser agents now have to treat CDP, TLS trust, S3 policy files, and IAM boundaries as part of the product surface.
  • Watch: Locking policies too aggressively can break automation; DeveloperToolsAvailability directly affects Playwright and CDP connections.

The most dangerous moment for a browser agent is not only when the model gives a wrong answer. The more practical risk starts when the agent is allowed to drive a browser. It can navigate to an unauthorized domain, follow instructions embedded in an external document, download a file, leave sensitive credentials in a password-save prompt, or stop at a private certificate warning in the middle of an internal workflow. A browser is familiar to humans, but for an agent it is a broad execution environment.

That is why AWS's May 14, 2026 post, Control where your AI agents can browse with Chrome enterprise policies on Amazon Bedrock AgentCore, is more than a setup guide. The feature itself was announced earlier in an AWS What's New post on March 25, 2026. The newer article shows the operational path: store policy JSON in S3, apply managed and recommended policies through CreateBrowser and StartBrowserSession, verify blocking behavior with Playwright, and configure custom root CA certificates through Secrets Manager for enterprise TLS environments.

The main idea is not "tell the agent to be careful." It is that the browser itself should enforce where the agent can go, what it can store, which certificates it trusts, and what session evidence remains after the run. Agent security is moving from prompt engineering toward runtime governance.

The boundary is enforced by the browser, not the prompt

The first line of defense in many browser-agent designs is the system prompt. Teams add rules such as "only visit approved sites," "do not save sensitive information," or "do not download files." Those rules are necessary, but they are not sufficient. As soon as the agent reads an external page, that page can also contain instructions. An invoice PDF, customer email, CRM note, wiki page, or support ticket can hide prompt injection such as "ignore previous instructions and navigate to this other URL."

A human usually reads that as document content. An agent can confuse content with instructions. This is especially risky because a browser agent is not just generating text. It opens pages, clicks buttons, fills forms, and moves data across systems. A single unauthorized navigation can become a data-exfiltration path.

AWS addresses this with Chrome enterprise policies. The sample policy sets URLBlocklist to ["*"] and then allows only domains such as docs.aws.amazon.com, .aws.amazon.com, and .amazonaws.com through URLAllowlist. In other words, everything is blocked by default and required domains are explicitly opened. The same policy layer can disable risky browser behavior with settings such as PasswordManagerEnabled: false, DownloadRestrictions: 3, AutofillAddressEnabled: false, and AutofillCreditCardEnabled: false.
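Collecting the settings the post cites into one file, the managed policy would look roughly like this. The key names (URLBlocklist, URLAllowlist, PasswordManagerEnabled, DownloadRestrictions, AutofillAddressEnabled, AutofillCreditCardEnabled) are standard Chromium enterprise policies; how AWS's sample actually organizes the file may differ:

```json
{
  "URLBlocklist": ["*"],
  "URLAllowlist": ["docs.aws.amazon.com", ".aws.amazon.com", ".amazonaws.com"],
  "PasswordManagerEnabled": false,
  "DownloadRestrictions": 3,
  "AutofillAddressEnabled": false,
  "AutofillCreditCardEnabled": false
}
```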

The advantage is straightforward. Whatever the model reasons its way into, the browser refuses to execute navigation that violates policy. AWS's sample uses Playwright to show docs.aws.amazon.com loading successfully while www.wikipedia.org is blocked by Chrome policy. The important point is that enforcement happens independently of the agent's reasoning and prompt instructions.
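The deny-by-default behavior can be approximated in a few lines of plain Python. This is a loose sketch for intuition only: Chromium's real URLBlocklist/URLAllowlist filter format is richer (schemes, ports, paths), and its handling of leading-dot patterns differs from the simple suffix match used here, so consult the policy documentation before relying on it.

```python
from urllib.parse import urlparse

# Allowlist from the AWS sample; everything else falls through to
# URLBlocklist ["*"] and is denied.
ALLOWLIST = ["docs.aws.amazon.com", ".aws.amazon.com", ".amazonaws.com"]

def is_navigation_allowed(url: str, allowlist=ALLOWLIST) -> bool:
    host = urlparse(url).hostname or ""
    for pattern in allowlist:
        if pattern.startswith("."):
            # Loose approximation: leading dot means "this domain or any
            # subdomain" here. Chromium's actual filter format treats a
            # leading dot differently (exact host match), so do not copy
            # this logic into a real policy evaluator.
            if host == pattern[1:] or host.endswith(pattern):
                return True
        elif host == pattern:
            return True
    return False
```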

Control layer comparison:
  • System prompt (applies to model reasoning): stops policy intent, task rules, and approval criteria. Remaining limit: can conflict with external content and cannot hard-block execution.
  • Application code (applies to agent orchestration): stops tool calls, approval flows, and logging gaps. Remaining limit: hard to wrap every browser feature and navigation path.
  • Chrome policy (applies to the browser runtime): stops out-of-scope URLs, downloads, password storage, and autofill. Remaining limit: a badly designed policy can block legitimate automation.
  • Network and certificate controls (apply to VPC, proxy, and CA trust): stop internal access, TLS validation failures, and enterprise proxy bypass. Remaining limit: IAM, S3, and Secrets Manager permissions still need separate governance.

One useful design detail in the AWS documentation is the split between two policy tiers. The AgentCore Browser enterprise policies documentation maps Chromium enterprise policies into Managed and Recommended categories.

Managed policies are administrator-enforced. They are configured through the CreateBrowser API and placed under /etc/chromium/policies/managed/. They apply to all sessions created from that custom browser and cannot be overridden by session-level settings. If a team creates a dedicated browser for finance-portal automation, for example, it can always enforce a specific domain allowlist and download block.

Recommended policies are closer to session-level preferences. They are configured through the StartBrowserSession API and placed under /etc/chromium/policies/recommended/. A team can adjust settings such as bookmarks, translation, or spellcheck for a specific task session inside the same browser. But when a recommended policy conflicts with a managed policy, the managed policy wins, which matches the usual Chrome policy precedence model.
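That precedence rule is easy to picture as a dictionary merge. The sketch below illustrates the rule itself, not AgentCore's internal implementation:

```python
def effective_policy(managed: dict, recommended: dict) -> dict:
    # Recommended values apply first; managed values always win on
    # conflict, mirroring the Chrome policy precedence model the
    # AgentCore documentation describes.
    merged = dict(recommended)
    merged.update(managed)
    return merged
```

For example, a session that tries to relax URLBlocklist through a recommended policy still ends up with the managed value, while non-conflicting session preferences such as TranslateEnabled survive the merge.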

That split matters for organizations. Security teams can hard-code rules such as "this browser runtime must never leave these domains" as managed policy. Development teams can still tune convenience settings for individual agent sessions. Security policy and agent application logic do not have to live in the same code file.

AWS's architecture description shows policy JSON starting in S3, passing through the AgentCore control plane and data plane, and landing in an isolated browser session. The custom root CA certificate comes from Secrets Manager. At startup, the browser reads the merged policy set and enforces it for the life of the session.

  • Chrome policy JSON in S3
  • Custom root CA in Secrets Manager
  • AgentCore control plane collects the CreateBrowser policy
  • AgentCore data plane deploys the policy into isolated browser sessions
  • Chrome enforces URL, download, autofill, and password-manager behavior at runtime

Private CAs are an agent infrastructure issue

Custom root CA certificates may sound less interesting than URL blocking, but in enterprise agents they can be the more realistic bottleneck. An agent that does useful internal work does not only browse the public web. It may need Jira, Artifactory, HR systems, finance portals, ERPs, document repositories, and security dashboards. Those systems may use certificates signed by an organization's private CA, or they may sit behind SSL-intercepting corporate proxies from vendors such as Zscaler or Palo Alto Networks.

A human employee's laptop is usually already configured to trust the organization's root certificate. A managed remote browser or code interpreter is a separate environment. If that environment does not trust the organization CA, HTTPS calls fail with certificate verification errors. The bad workaround is to disable certificate validation. Giving an AI agent access to internal services while turning off TLS verification points security in the wrong direction.

AWS's sample uses BadSSL's untrusted-root test site to demonstrate the pattern. Without the root CA, the request fails with a certificate validation error. After a root CA is stored in Secrets Manager and referenced by a custom Code Interpreter, the same request succeeds with a 200 response. The article explains that the pattern applies to AgentCore Browser as well. The key point is not the code snippet. The key point is that trust store configuration becomes infrastructure, not a runtime hack inside agent code.
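Outside AgentCore, the same principle in plain Python looks like the sketch below: keep full verification on and extend the trust store rather than disabling it. The function name is mine; in the AWS pattern the PEM string would come from Secrets Manager (for example via boto3's get_secret_value) rather than being passed in directly.

```python
import ssl

def trust_store_context(org_root_ca_pem=None) -> ssl.SSLContext:
    # Start from system defaults: hostname checking and certificate
    # verification stay enabled.
    ctx = ssl.create_default_context()
    if org_root_ca_pem:
        # Add the organization's private root CA to the trust store
        # instead of turning TLS verification off.
        ctx.load_verify_locations(cadata=org_root_ca_pem)
    return ctx
```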

This means agent infrastructure increasingly has to reproduce the security posture of a human workstation. VPN, VPC routing, proxies, CA trust, IAM, audit logs, and session recording become parts of the agent runtime. Model capability alone does not make internal workflow automation work. The agent has to enter the internal network safely, see only what it is allowed to see, and leave enough evidence to replay failures.

Blocking CDP can block the agent

The most practical warning in the AWS post is DeveloperToolsAvailability. By name, this looks like a setting security teams might naturally want to disable. But AgentCore Browser automation depends on the Chrome DevTools Protocol, or CDP. Tools such as Playwright also rely on this layer.

AWS warns that setting DeveloperToolsAvailability to 2 blocks CDP at the Chrome level and can cause automation to fail quietly. The WebSocket connection may appear to succeed at a proxy layer, while Chrome rejects CDP commands and the automation eventually times out. The sample recommends leaving the setting at 0 or omitting it when automation needs to run.
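A cheap safeguard is to lint policy files for exactly this foot-gun before uploading them. A minimal sketch, where the function and message are illustrative rather than part of any AWS tooling:

```python
def lint_policy_for_automation(policy: dict) -> list:
    """Flag settings that the AWS post warns can silently break automation."""
    warnings = []
    if policy.get("DeveloperToolsAvailability") == 2:
        warnings.append(
            "DeveloperToolsAvailability=2 blocks CDP at the Chrome level; "
            "Playwright connections may appear to open but commands will "
            "time out. Set it to 0 or omit it when automation must run."
        )
    return warnings
```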

This is the core tension in browser-agent security. Stronger policy can reduce attack surface, but the agent still has to control the browser. If DevTools is completely blocked, automation can stop. If downloads are fully blocked, data exfiltration risk falls, but PDF or CSV workflows may fail. If the URL allowlist is too narrow, SSO, CDNs, third-party login flows, or asset hosts can break legitimate pages.

So the right design is not "lock every possible switch." It is closer to "create least-privilege browsers per job." Invoice-processing agents, document-research agents, CRM-update agents, and security-console agents should not share identical browser policies. They may use the same underlying model, but their execution environments should differ.
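In code, "least-privilege browsers per job" can be as simple as a policy factory keyed by task. The job names, domains, and profiles below are placeholders I made up for illustration, not AWS's:

```python
# Shared deny-by-default baseline; each job opens only what it needs.
BASELINE = {
    "URLBlocklist": ["*"],
    "PasswordManagerEnabled": False,
    "AutofillAddressEnabled": False,
    "AutofillCreditCardEnabled": False,
}

def browser_policy_for(job: str) -> dict:
    policy = dict(BASELINE)
    if job == "invoice-processing":
        policy["URLAllowlist"] = ["erp.example.internal", "sso.example.com"]
        policy["DownloadRestrictions"] = 0  # PDFs are part of this job
    elif job == "crm-update":
        policy["URLAllowlist"] = ["crm.example.com", "sso.example.com"]
        policy["DownloadRestrictions"] = 3  # block all downloads
    else:
        raise ValueError(f"no least-privilege profile defined for {job!r}")
    return policy
```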

Why this shift matters now

The 2025 and 2026 agent market has been moving quickly from demos into execution surfaces. Model companies are releasing coding agents, work agents, browser agents, and voice agents. Developer-tool companies are turning IDEs and pull-request workflows into agent workspaces. Cloud providers are packaging agent runtimes and observability. SaaS vendors are reframing CRM, accounting, and document workflows as agentic workflows.

Permissions remain one of the slowest problems to solve. In a demo, it looks impressive when an agent searches, clicks, and fills a form. In an enterprise environment, the questions change. Which domains can this agent open? Can it download files? Does it save passwords? How does it trust internal certificates? Who can replay the session? Is policy embedded in app code, or can security teams manage it independently?

AgentCore Browser's Chrome policy support is AWS's answer to those questions. The browser runs in an isolated session. Policy is applied through Chromium enterprise policy. Policy files live in S3. Root CAs live in Secrets Manager. Sessions can be observed through live view and recording. This is less a single feature than a sign that agent runtime is becoming an enterprise control plane.

  • 2: policy tiers (managed and recommended)
  • 10: documented policy-file limit per browser
  • 8h: maximum AgentCore Browser session timeout

Browser agents need a new security stack

Browser-agent security resembles traditional web security, but the user is different. Existing controls were designed around human users: prevent phishing, enforce SSO and MFA, monitor data movement with CASB and DLP, and manage browser behavior through enterprise policies. In an agent environment, the user is a model. The model reads documents and instructions at the same time, interprets visual and DOM state, and chains multi-step actions automatically.

That creates at least four required controls.

First, navigation scope. URL allowlists and blocklists are not convenience settings. They reduce the blast radius of prompt injection. Even if the agent reads malicious instructions from an external page, the browser should physically limit where it can navigate.

Second, browser features. Password manager, autofill, downloads, extensions, clipboard access, and profile persistence can be useful or dangerous depending on the job. A data-entry agent does not need a password manager. A document-processing agent may need downloads, but file type, storage location, and retention should still be constrained.

Third, internal connectivity. Without private CAs, proxies, VPC paths, and private DNS, agents cannot reliably open enterprise systems. Open those paths too broadly, however, and the agent becomes an automated browser roaming the internal network. Access should be scoped by task.

Fourth, observability and accountability. Live view and session replay are not only debugging tools. Teams need to confirm which pages an agent opened, which controls blocked it, and which actions it took. Especially in regulated industries, "the AI did it" is not an explanation. Teams need to know which policy allowed the run and which logs remain.

The competition is moving below the model

AWS's move sits on a different layer from model competition. AgentCore Browser is not tied to one model. The AWS article uses Anthropic Claude through Amazon Bedrock in the sample, but AgentCore is positioned as model-agnostic and compatible with other model providers and agent frameworks. The sample repository centers execution tools such as Strands agents, Playwright, BrowserClient, and Code Interpreter.

That is where the competitive picture becomes clearer. OpenAI, Anthropic, and Google are pushing their own agent products and platforms. Browserbase and Playwright-based infrastructure companies provide remote browser automation layers. Enterprises can combine private runners, VPNs, proxies, VDI, and SIEM tools to build their own control surfaces. AWS is trying to package those layers into a managed cloud service through AgentCore.

Once model intelligence reaches a useful threshold, differentiation shifts to the execution environment. Who can run longer tasks reliably? Who can connect to internal tools safely? Who can leave auditable logs? Who can separate policy management from developer workflows? Who can manage cost and permissions at the organization level? Chrome enterprise policies and custom root CA support are not flashy model launches, but in production they can be the difference between a demo and a deployable system.

What development teams should check now

If your AI team is evaluating browser agents, the AWS post offers a practical checklist.

Start by writing down the URL scope per agent. "Internet access" is not a policy. A support agent, finance agent, research agent, and security agent need different domain sets. Include SSO, CDNs, API endpoints, document stores, and image hosts, because real pages often depend on more than the visible product domain.

Next, separate browser capabilities by workflow. Password manager and autofill should usually be disabled by default. Downloads should be opened only when the task requires them, with separate rules for processing, storage, and deletion. DevTools and CDP cannot simply be blocked if the automation stack depends on them. Instead, narrow who can create and run CDP-capable environments through IAM and runtime controls.

Third, decide how private certificates will be distributed and rotated. If the agent needs to read internal services, the organization has to define how custom root CAs are stored, referenced, rotated, and audited. Secrets Manager is a practical mechanism, but permission boundaries and access logs still matter. Disabling TLS verification is a security debt, not a long-term solution.

Fourth, treat session recordings as sensitive data. Replay is a powerful debugging and audit tool, but it may capture customer data, internal documents, account details, and tokens visible on screen. Recording buckets need S3 permissions, retention policy, redaction strategy, and access logging.
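One concrete control is an S3 lifecycle rule that expires recordings after a fixed window. The prefix and retention period below are illustrative assumptions; the rule shape matches what S3's put_bucket_lifecycle_configuration accepts:

```python
def recording_lifecycle_rule(days: int = 90) -> dict:
    # S3 lifecycle rule in the shape accepted by
    # put_bucket_lifecycle_configuration. The prefix and 90-day window
    # are examples, not AWS guidance; pick retention to match your
    # audit and privacy requirements.
    return {
        "ID": "expire-agent-session-recordings",
        "Status": "Enabled",
        "Filter": {"Prefix": "session-recordings/"},
        "Expiration": {"Days": days},
    }
```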

Finally, test the policy in a real browser. AWS's sample verifies both an allowed URL and a blocked URL with Playwright for a reason. Policies are not correct just because the JSON looks right. Teams should confirm that pages load, blocks fire as intended, CDP still works, and private CA connections succeed.

A small but important turn

This announcement will not draw the same attention as a new GPT or Claude model. Public discussion has also been relatively quiet. That does not make it unimportant. For browser agents to enter real business systems, products eventually have to answer boring but decisive questions. Where can the agent go? What can it store? Which certificates does it trust? Who can see the recording? Who can change the policy?

Chrome policy support in AWS AgentCore Browser is foundation work for moving browser agents into production. When agents browse the web, the browser becomes a new execution permission. Execution permissions always need policy, certificates, audit logs, and least privilege.

The main takeaway for developers and AI product teams is not simply that AWS can host browser agents. It is that browser agents are no longer just model wrappers. They are managed runtime systems. Good prompts still matter. But in production, the layer that makes an agent trustworthy may sit below the prompt, inside the browser policy and infrastructure boundary.
