Changelog

New updates and improvements at Cloudflare.

  1. This week's update

    This week’s focus highlights newly disclosed vulnerabilities in web frameworks, enterprise applications, and widely deployed CMS plugins. The vulnerabilities include SSRF, authentication bypass, arbitrary file upload, and remote code execution (RCE), exposing organizations to high-impact risks such as unauthorized access, system compromise, and potential data exposure. In addition, security rule enhancements have been deployed to cover general command injection and server-side injection attacks, further strengthening protections.

    Key Findings

    • Next.js (CVE-2025-57822): Improper handling of redirects in custom middleware can lead to server-side request forgery (SSRF) when user-supplied headers are forwarded. Attackers could exploit this to access internal services or cloud metadata endpoints. The issue has been resolved in versions 14.2.32 and 15.4.7. Developers using custom middleware should upgrade and verify proper redirect handling in next() calls.

    • ScriptCase (CVE-2025-47227, CVE-2025-47228): In the Production Environment extension in Netmake ScriptCase through 9.12.006 (23), two vulnerabilities allow attackers to reset admin accounts and execute system commands, potentially leading to full compromise of affected deployments.

    • Sar2HTML (CVE-2025-34030): In Sar2HTML version 3.2.2 and earlier, insufficient input sanitization of the plot parameter allows remote, unauthenticated attackers to execute arbitrary system commands. Exploitation could compromise the underlying server and its data.

    • Zhiyuan OA (CVE-2025-34040): An arbitrary file upload vulnerability exists in the Zhiyuan OA platform. Improper validation in the wpsAssistServlet interface allows unauthenticated attackers to upload crafted files via path traversal, which can be executed on the web server, leading to remote code execution.

    • WordPress:Plugin:InfiniteWP Client (CVE-2020-8772): A vulnerability in the InfiniteWP Client plugin allows attackers to perform restricted actions and gain administrative control of connected WordPress sites.

    Impact

    These vulnerabilities could allow attackers to gain unauthorized access, execute malicious code, or take full control of affected systems. The Next.js SSRF flaw may expose internal services or cloud metadata endpoints to attackers. Exploitations of ScriptCase and Sar2HTML could result in remote code execution, administrative takeover, and full server compromise. In Zhiyuan OA, the arbitrary file upload vulnerability allows attackers to execute malicious code on the web server, potentially exposing sensitive data and applications. The authentication bypass in WordPress InfiniteWP Client enables attackers to gain administrative access, risking data exposure and unauthorized control of connected sites.

    Administrators are strongly advised to apply vendor patches immediately, remove unsupported software, and review authentication and access controls to mitigate these risks.

    Ruleset | Rule ID | Legacy Rule ID | Description | Previous Action | New Action | Comments
    Cloudflare Managed Ruleset | | 100007D | Command Injection - Common Attack Commands Args | Log | Block | This rule has been merged into the original rule "Command Injection - Common Attack Commands" (ID: ) for New WAF customers only.
    Cloudflare Managed Ruleset | | 100617 | Next.js - SSRF - CVE:CVE-2025-57822 | Log | Block | This is a New Detection
    Cloudflare Managed Ruleset | | 100659_BETA | Common Payloads for Server-Side Template Injection - Beta | Log | Block | This rule is merged into the original rule "Common Payloads for Server-Side Template Injection" (ID: )
    Cloudflare Managed Ruleset | | 100824B | CrushFTP - Remote Code Execution - CVE:CVE-2025-54309 - 3 | Log | Disabled | This is a New Detection
    Cloudflare Managed Ruleset | | 100848 | ScriptCase - Auth Bypass - CVE:CVE-2025-47227 | Log | Disabled | This is a New Detection
    Cloudflare Managed Ruleset | | 100849 | ScriptCase - Command Injection - CVE:CVE-2025-47228 | Log | Disabled | This is a New Detection
    Cloudflare Managed Ruleset | | 100872 | WordPress:Plugin:InfiniteWP Client - Missing Authorization - CVE:CVE-2020-8772 | Log | Block | This is a New Detection
    Cloudflare Managed Ruleset | | 100873 | Sar2HTML - Command Injection - CVE:CVE-2025-34030 | Log | Block | This is a New Detection
    Cloudflare Managed Ruleset | | 100875 | Zhiyuan OA - Remote Code Execution - CVE:CVE-2025-34040 | Log | Block | This is a New Detection
  1. Announcement Date | Release Date | Release Behavior | Legacy Rule ID | Rule ID | Description | Comments
    2025-09-08 | 2025-09-15 | Log | 100646 | | Argo CD - Information Disclosure - CVE:CVE-2025-55190 | This is a New Detection
    2025-09-08 | 2025-09-15 | Log | 100874 | | DataEase - JNDI injection - CVE:CVE-2025-57773 | This is a New Detection
    2025-09-08 | 2025-09-15 | Log | 100880 | | Sitecore - Information Disclosure - CVE:CVE-2025-53694 | This is a New Detection
  1. We're excited to be a launch partner alongside Google to bring their newest embedding model, EmbeddingGemma, to Workers AI. It delivers best-in-class performance for its size, enabling RAG and semantic search use cases.

    @cf/google/embeddinggemma-300m is a 300M parameter embedding model from Google, built from Gemma 3 and the same research used to create Gemini models. This multilingual model supports 100+ languages, making it ideal for RAG systems, semantic search, content classification, and clustering tasks.
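
    If you want to call the model directly from a Worker (outside of AutoRAG), a minimal sketch is below. It assumes a Workers AI binding named AI and the same request and response shape as other Workers AI text embedding models ({ text: string[] } in, { data: number[][] } out); check the model page for the exact schema.

    export default {
      async fetch(request: Request, env: { AI: Ai }): Promise<Response> {
        // One embedding vector is returned per input string; pass several strings to batch.
        const result = await env.AI.run("@cf/google/embeddinggemma-300m", {
          text: ["How do I configure a Cloudflare Worker?"],
        });
        return Response.json({ dimensions: result.data[0].length });
      },
    };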

    Using EmbeddingGemma in AutoRAG: Now you can leverage EmbeddingGemma directly through AutoRAG for your RAG pipelines. EmbeddingGemma's multilingual capabilities make it perfect for global applications that need to understand and retrieve content across different languages with exceptional accuracy.

    To use EmbeddingGemma for your AutoRAG projects:

    1. Go to Create in the AutoRAG dashboard
    2. Follow the setup flow for your new RAG instance
    3. In the Generate Index step, open up More embedding models and select @cf/google/embeddinggemma-300m as your embedding model
    4. Complete the setup to create an AutoRAG

    Try it out and let us know what you think!

  1. This week's update

    This week, new critical vulnerabilities were disclosed in Sitecore Experience Manager (XM) and Sitecore Experience Platform (XP), specifically versions 9.0 through 9.3 and 10.0 through 10.4. These flaws are caused by unsafe data deserialization and code reflection, leaving affected systems at high risk of exploitation.

    Key Findings

    • CVE-2025-53690: Remote Code Execution through Insecure Deserialization
    • CVE-2025-53691: Remote Code Execution through Insecure Deserialization
    • CVE-2025-53693: HTML Cache Poisoning through Unsafe Reflections

    Impact

    Exploitation could allow attackers to execute arbitrary code remotely on the affected system and conduct cache poisoning attacks, potentially leading to further compromise. Applying the latest vendor-released solution without delay is strongly recommended.

    Ruleset | Rule ID | Legacy Rule ID | Description | Previous Action | New Action | Comments
    Cloudflare Managed Ruleset | | 100878 | Sitecore - Remote Code Execution - CVE:CVE-2025-53691 | N/A | Block | This is a new detection
    Cloudflare Managed Ruleset | | 100631 | Sitecore - Cache Poisoning - CVE:CVE-2025-53693 | N/A | Block | This is a new detection
    Cloudflare Managed Ruleset | | 100879 | Sitecore - Remote Code Execution - CVE:CVE-2025-53690 | N/A | Block | This is a new detection
  1. You can now upload up to 100,000 static assets per Worker version

    • Paid and Workers for Platforms users can now upload up to 100,000 static assets per Worker version, a 5x increase from the previous limit of 20,000.
    • Customers on the free plan keep the same limit as before: 20,000 static assets per Worker version.
    • The individual file size limit of 25 MiB remains unchanged for all customers.

    This increase allows you to build larger applications with more static assets without hitting limits.

    Wrangler

    To take advantage of the increased limits, you must use Wrangler version 4.34.0 or higher. Earlier versions of Wrangler will continue to enforce the previous 20,000 file limit.

    Learn more

    For more information about Workers static assets, see the Static Assets documentation and Platform Limits.

  1. You can now manage Workers, Versions, and Deployments as separate resources with a new, resource-oriented API (Beta).

    This new API is supported in the Cloudflare Terraform provider and the Cloudflare Typescript SDK, allowing platform teams to manage a Worker's infrastructure in Terraform, while development teams handle code deployments from a separate repository or workflow. We also designed this API with AI agents in mind, as a clear, predictable structure is essential for them to reliably build, test, and deploy applications.

    Try it out

    Before: Eight+ endpoints with mixed responsibilities


    The existing API was originally designed for simple, one-shot script uploads:

    Terminal window
    curl -X PUT "https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/workers/scripts/$SCRIPT_NAME" \
      -H "X-Auth-Email: $CLOUDFLARE_EMAIL" \
      -H "X-Auth-Key: $CLOUDFLARE_API_KEY" \
      -H "Content-Type: multipart/form-data" \
      -F 'metadata={
        "main_module": "worker.js",
        "compatibility_date": "$today$"
      }' \
      -F "worker.js=@worker.js;type=application/javascript+module"

    This API worked for creating a basic Worker, uploading all of its code, and deploying it immediately — but came with challenges:

    • A Worker couldn't exist without code: To create a Worker, you had to upload its code in the same API request. This meant platform teams couldn't provision Workers with the proper settings, and then hand them off to development teams to deploy the actual code.

    • Several endpoints implicitly created deployments: Simple updates like adding a secret or changing a script's content would implicitly create a new version and immediately deploy it.

    • Updating a setting was confusing: Configuration was scattered across eight endpoints with overlapping responsibilities. This ambiguity made it difficult for human developers (and even more so for AI agents) to reliably update a Worker via API.

    • Scripts used names as primary identifiers: This meant simple renames could turn into a risky migration, requiring you to create a brand new Worker and update every reference. If you were using Terraform, this could inadvertently destroy your Worker altogether.

    After: Three resources with clear boundaries


    All endpoints now use simple JSON payloads, with script content embedded as base64-encoded strings -- a more consistent and reliable approach than the previous multipart/form-data format.

    • Worker: The parent resource representing your application. It has a stable UUID and holds persistent settings like name, tags, and logpush. You can now create a Worker to establish its identity and settings before any code is uploaded.

    • Version: An immutable snapshot of your code and its specific configuration, like bindings and compatibility_date. Creating a new version is a safe action that doesn't affect live traffic.

    • Deployment: An explicit action that directs traffic to a specific version.

    Why this matters

    You can now create Workers before uploading code

    Workers are now standalone resources that can be created and configured without any code. Platform teams can provision Workers with the right settings, then hand them off to development teams for implementation.

    Example: Typescript SDK

    // Step 1: Platform team creates the Worker resource (no code needed)
    const worker = await client.workers.beta.workers.create({
      name: "payment-service",
      account_id: "...",
      observability: {
        enabled: true,
      },
    });

    // Step 2: Development team adds code and creates a version later
    const version = await client.workers.beta.workers.versions.create(worker.id, {
      account_id: "...",
      main_module: "worker.js",
      compatibility_date: "$today",
      bindings: [ /*...*/ ],
      modules: [
        {
          name: "worker.js",
          content_type: "application/javascript+module",
          content_base64: Buffer.from(scriptContent).toString("base64"),
        },
      ],
    });

    // Step 3: Deploy explicitly when ready
    const deployment = await client.workers.scripts.deployments.create(worker.name, {
      account_id: "...",
      strategy: "percentage",
      versions: [
        {
          percentage: 100,
          version_id: version.id,
        },
      ],
    });

    Example: Terraform

    If you use Terraform, you can now declare the Worker in your Terraform configuration, manage its configuration outside of Terraform in your Worker's wrangler.jsonc file, and deploy code changes using Wrangler.

    resource "cloudflare_worker" "my_worker" {
    account_id = "..."
    name = "my-important-service"
    }
    # Manage Versions and Deployments here or outside of Terraform
    # resource "cloudflare_worker_version" "my_worker_version" {}
    # resource "cloudflare_workers_deployment" "my_worker_deployment" {}

    Deployments are always explicit, never implicit

    Creating a version and deploying it are now always explicit, separate actions - never implicit side effects. To update version-specific settings (like bindings), you create a new version with those changes. The existing deployed version remains unchanged until you explicitly deploy the new one.

    Terminal window
    # Step 1: Create a new version with updated settings (doesn't affect live traffic)
    POST /workers/workers/{id}/versions
    {
      "compatibility_date": "$today",
      "bindings": [
        {
          "name": "MY_NEW_ENV_VAR",
          "text": "new_value",
          "type": "plain_text"
        }
      ],
      "modules": [...]
    }

    # Step 2: Explicitly deploy when ready (now affects live traffic)
    POST /workers/scripts/{script_name}/deployments
    {
      "strategy": "percentage",
      "versions": [
        {
          "percentage": 100,
          "version_id": "new_version_id"
        }
      ]
    }

    Settings are clearly organized by scope

    Configuration is now logically divided: Worker settings (like name and tags) persist across all versions, while Version settings (like bindings and compatibility_date) are specific to each code snapshot.

    Terminal window
    # Worker settings (the parent resource)
    PUT /workers/workers/{id}
    {
      "name": "payment-service",
      "tags": ["production"],
      "logpush": true
    }

    Terminal window
    # Version settings (the "code")
    POST /workers/workers/{id}/versions
    {
      "compatibility_date": "$today",
      "bindings": [...],
      "modules": [...]
    }

    /workers API endpoints now support UUIDs (in addition to names)

    The /workers/workers/ path now supports addressing a Worker by both its immutable UUID and its mutable name.

    Terminal window
    # Both work for the same Worker
    GET /workers/workers/29494978e03748669e8effb243cf2515 # UUID (stable for automation)
    GET /workers/workers/payment-service # Name (convenient for humans)

    This dual approach means:

    • Developers can use readable names for debugging.
    • Automation can rely on stable UUIDs to prevent errors when Workers are renamed.
    • Terraform can rename Workers without destroying and recreating them.

    Learn more

    Technical notes

    • The pre-existing Workers REST API remains fully supported. Once the new API exits beta, we'll provide a migration timeline with ample notice and comprehensive migration guides.
    • Existing Terraform resources and SDK methods will continue to be fully supported through the current major version.
    • While the Deployments API currently remains on the /scripts/ endpoint, we plan to introduce a new Deployments endpoint under /workers/ to match the new API structure.
  1. Cloudflare's API now supports rate limiting headers following the pattern developed in the IETF draft on rate limit headers. This allows API consumers to know how many more calls they can make before the rate limit is reached, as well as how long to wait until more capacity is available.

    Our SDKs automatically work with these new headers, backing off when rate limits are approached. No action is required for users of the latest Cloudflare SDKs to take advantage of this.

    As always, if you need any help with rate limits, please contact Support.

    Changes

    New Headers

    Headers that are always returned:

    • X-RateLimit-Limit: Total number of requests the caller can make
    • X-RateLimit-Remaining: Number of requests remaining before the rate limit is reached

    Returned only when a rate limit has been reached (error code 429):

    • Retry-After: Number of seconds until more capacity is available, rounded up
    • X-RateLimit-Reset: RFC 1123 formatted date indicating when more capacity is available
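
    As a rough illustration (not the exact algorithm our SDKs use), a client could honor these headers by checking X-RateLimit-Remaining and retrying once after a 429, waiting the number of seconds given in Retry-After:

    async function callWithBackoff(url: string, apiToken: string): Promise<Response> {
      const doRequest = () =>
        fetch(url, { headers: { Authorization: `Bearer ${apiToken}` } });

      let response = await doRequest();
      console.log("Requests remaining:", response.headers.get("X-RateLimit-Remaining"));

      if (response.status === 429) {
        // Wait for the advertised number of seconds, then retry once.
        const waitSeconds = Number(response.headers.get("Retry-After") ?? "1");
        await new Promise((resolve) => setTimeout(resolve, waitSeconds * 1000));
        response = await doRequest();
      }
      return response;
    }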

    SDK Backoffs

    • All SDKs will automatically respond to the headers, instituting a backoff when limits are approached.

    GraphQL and Edge APIs

    These new headers and backoffs are only available for Cloudflare REST APIs; they do not affect GraphQL or other edge APIs.

    For more information

  1. Starting December 1, 2025, list endpoints for the Cloudflare Tunnel API and Zero Trust Networks API will no longer return deleted tunnels, routes, subnets and virtual networks by default. This change makes the API behavior more intuitive by only returning active resources unless otherwise specified.

    No action is required if you already explicitly set is_deleted=false or if you only need to list active resources.

    This change affects the following API endpoints:

    What is changing?

    The default behavior of the is_deleted query parameter will be updated.

    Scenario | Previous behavior (before December 1, 2025) | New behavior (from December 1, 2025)
    is_deleted parameter is omitted | Returns active & deleted tunnels, routes, subnets and virtual networks | Returns only active tunnels, routes, subnets and virtual networks

    Action required

    If you need to retrieve deleted (or all) resources, please update your API calls to explicitly include the is_deleted parameter before December 1, 2025.

    To get a list of only deleted resources, you must now explicitly add the is_deleted=true query parameter to your request:

    Terminal window
    # Example: Get ONLY deleted Tunnels
    curl "https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/tunnels?is_deleted=true" \
      -H "Authorization: Bearer $API_TOKEN"

    # Example: Get ONLY deleted Virtual Networks
    curl "https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/teamnet/virtual_networks?is_deleted=true" \
      -H "Authorization: Bearer $API_TOKEN"

    Following this change, retrieving a complete list of both active and deleted resources will require two separate API calls: one to get active items (by omitting the parameter or using is_deleted=false) and one to get deleted items (is_deleted=true).
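
    For instance, a small TypeScript sketch that fetches both lists and merges them (the account ID, API token, and response handling are illustrative):

    async function listAllTunnels(accountId: string, apiToken: string) {
      const base = `https://api.cloudflare.com/client/v4/accounts/${accountId}/tunnels`;
      const headers = { Authorization: `Bearer ${apiToken}` };

      // Active tunnels (the new default) and deleted tunnels now require separate calls.
      const [active, deleted] = await Promise.all([
        fetch(base, { headers }).then((r) => r.json()),
        fetch(`${base}?is_deleted=true`, { headers }).then((r) => r.json()),
      ]);
      return [...active.result, ...deleted.result];
    }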

    Why we’re making this change

    This update is based on user feedback and aims to:

    • Create a more intuitive default: Aligning with common API design principles where list operations return only active resources by default.
    • Reduce unexpected results: Prevents users from accidentally operating on deleted resources that were returned unexpectedly.
    • Improve performance: For most users, the default query result will now be smaller and more relevant.

    To learn more, please visit the Cloudflare Tunnel API and Zero Trust Networks API documentation.

  1. To provide more granular controls, we refined the existing roles for Email Security and launched a new Email Security role as well.

    Email Security roles no longer have read or write access to any other Zero Trust products. The affected roles are:

    • Email Configuration Admin
    • Email Integration Admin
    • Email Security Read Only
    • Email Security Analyst
    • Email Security Policy Admin
    • Email Security Reporting

    To configure Data Loss Prevention (DLP) or Remote Browser Isolation (RBI), you now need to be an admin for the Zero Trust dashboard with the Cloudflare Zero Trust role.

    Based on customer feedback, we have also created a new additive role that allows Email Security Analysts to create, edit, and delete Email Security policies without needing access via the Email Configuration Admin role. This new role, Email Security Policy Admin, can read all settings and has write access to allow policies, trusted domains, and blocked senders.

    This feature is available across these Email Security packages:

    • Advantage
    • Enterprise
    • Enterprise + PhishGuard
  1. This week's update

    This week, a critical vulnerability was disclosed in Fortinet FortiWeb (versions 7.6.3 and below, versions 7.4.7 and below, versions 7.2.10 and below, and versions 7.0.10 and below), linked to improper parameter handling that could allow unauthorized access.

    Key Findings

    • Fortinet FortiWeb (CVE-2025-52970): A vulnerability may allow an unauthenticated remote attacker with access to non-public information to log in as any existing user on the device via a specially crafted request.

    Impact

    Exploitation could allow an unauthenticated attacker to impersonate any existing user on the device, potentially enabling them to modify system settings or exfiltrate sensitive information, posing a serious security risk. Upgrading to the latest vendor-released version is strongly recommended.

    Ruleset | Rule ID | Legacy Rule ID | Description | Previous Action | New Action | Comments
    Cloudflare Managed Ruleset | | 100586 | Fortinet FortiWeb - Auth Bypass - CVE:CVE-2025-52970 | Log | Disabled | This is a New Detection
    Cloudflare Managed Ruleset | | 100136C | XSS - JavaScript - Headers and Body | N/A | N/A | Rule metadata description refined. Detection unchanged.
  1. Digital Experience Monitoring (DEX) provides visibility into device connectivity and performance across your Cloudflare SASE deployment.

    We've released an MCP server (Model Context Protocol) for DEX.

    The DEX MCP server is an AI tool that allows customers to ask a question like, "Show me the connectivity and performance metrics for the device used by carly@acme.com", and receive an answer that contains data from the DEX API.

    Any Cloudflare One customer using a Free, PayGo, or Enterprise account can access the DEX MCP server.

    Customers can test the new DEX MCP server in less than one minute. To learn more, read the DEX MCP server documentation.

  1. This week's update

    This week, new critical vulnerabilities were disclosed in Next.js’s image optimization functionality, exposing a broad range of production environments to risks of data exposure and cache manipulation.

    Key Findings

    • CVE-2025-55173: Arbitrary file download from the server via image optimization.

    • CVE-2025-57752: Cache poisoning leading to unauthorized data disclosure.

    Impact

    Exploitation could expose sensitive files, leak user or backend data, and undermine application trust. Given Next.js’s wide use, immediate patching and cache hardening are strongly advised.

    Ruleset | Rule ID | Legacy Rule ID | Description | Previous Action | New Action | Comments
    Cloudflare Managed Ruleset | | 100613 | Next.js - Dangerous File Download - CVE:CVE-2025-55173 | N/A | Block | This is a new detection
    Cloudflare Managed Ruleset | | 100616 | Next.js - Information Disclosure - CVE:CVE-2025-57752 | N/A | Block | This is a new detection
  1. We improved AI crawler management with detailed analytics and introduced custom HTTP 402 responses for blocked crawlers. AI Audit has been renamed to AI Crawl Control and is now generally available.

    Enhanced Crawlers tab:

    • View total allowed and blocked requests for each AI crawler
    • Trend charts show crawler activity over your selected time range per crawler
    Updated AI Crawl Control table showing request counts and trend charts

    Custom block responses (paid plans): You can now return HTTP 402 "Payment Required" responses when blocking AI crawlers, enabling direct communication with crawler operators about licensing terms.

    For users on paid plans, when blocking AI crawlers you can configure:

    • Response code: Choose between 403 Forbidden or 402 Payment Required
    • Response body: Add a custom message with your licensing contact information
    AI Crawl Control block response configuration interface

    Example 402 response:

    HTTP 402 Payment Required
    Date: Mon, 24 Aug 2025 12:56:49 GMT
    Content-type: application/json
    Server: cloudflare
    Cf-Ray: 967e8da599d0c3fa-EWR
    Cf-Team: 2902f6db750000c3fa1e2ef400000001
    {
      "message": "Please contact the site owner for access."
    }
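
    On the crawler operator's side, a minimal sketch of detecting this block and surfacing the licensing message might look like the following (the URL and user agent are placeholders):

    const res = await fetch("https://example.com/article", {
      headers: { "User-Agent": "ExampleBot/1.0" },
    });

    if (res.status === 402) {
      // The JSON body carries the site owner's custom licensing message.
      const body = (await res.json()) as { message?: string };
      console.log("Blocked pending licensing:", body.message);
    }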
  1. Zero Trust has significantly upgraded its Shadow IT analytics, providing you with unprecedented visibility into your organization's use of SaaS tools. With this dashboard, you can review who is using an application and the volume of data transferred to the application.

    You can review these metrics against application type, such as Artificial Intelligence or Social Media. You can also mark applications with an approval status, including Unreviewed, In Review, Approved, and Unapproved, designating how they can be used in your organization.

    Cloudflare One Analytics Dashboards

    These application statuses can also be used in Gateway HTTP policies, so you can block, isolate, limit uploads and downloads, and more based on the application status.

    Both the analytics and policies are accessible in the Cloudflare Zero Trust dashboard, empowering organizations with better visibility and control.

  1. New state-of-the-art models have landed on Workers AI! This time, we're introducing new partner models trained by our friends at Deepgram and Leonardo, hosted on Workers AI infrastructure.

    As well, we're introducing a new turn detection model that enables you to detect when someone is done speaking — useful for building voice agents!

    Read the blog for more details and check out some of the new models on our platform:

    You can filter for the new partner models using the Partner capability on our Models page.

    As well, we're introducing WebSocket support for some of our audio models, which you can filter through the Realtime capability on our Models page. WebSockets allow you to create a bi-directional connection to our inference server with low latency — perfect for those who are building voice agents.

    An example Python snippet showing how to use WebSockets with our new Aura model:

    import json
    import os
    import asyncio
    import websockets

    ACCOUNT_ID = os.getenv("CF_ACCOUNT_ID")  # your Cloudflare account ID

    uri = f"wss://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/run/@cf/deepgram/aura-1"

    input = [
        "Line one, out of three lines that will be provided to the aura model.",
        "Line two, out of three lines that will be provided to the aura model.",
        "Line three, out of three lines that will be provided to the aura model. This is a last line.",
    ]

    async def text_to_speech():
        async with websockets.connect(uri, additional_headers={"Authorization": os.getenv("CF_TOKEN")}) as websocket:
            print("connection established")
            for line in input:
                print(f"sending `{line}`")
                await websocket.send(json.dumps({"type": "Speak", "text": line}))
                print("line was sent, flushing")
                await websocket.send(json.dumps({"type": "Flush"}))
                print("flushed, recving")
                resp = await websocket.recv()
                print(f"response received {resp}")

    if __name__ == "__main__":
        asyncio.run(text_to_speech())
  1. Cloudflare CASB now supports three of the most widely used GenAI platforms — OpenAI ChatGPT, Anthropic Claude, and Google Gemini. These API-based integrations give security teams agentless visibility into posture, data, and compliance risks across their organization’s use of generative AI.

    Cloudflare CASB showing selection of new findings for ChatGPT, Claude, and Gemini integrations.

    Key capabilities

    • Agentless connections — connect ChatGPT, Claude, and Gemini tenants via API; no endpoint software required
    • Posture management — detect insecure settings and misconfigurations that could lead to data exposure
    • DLP detection — identify sensitive data in uploaded chat attachments or files
    • GenAI-specific insights — surface risks unique to each provider’s capabilities

    Learn more

    These integrations are available to all Cloudflare One customers today.

  1. You can now control who within your organization has access to internal MCP servers, by putting internal MCP servers behind Cloudflare Access.

    Self-hosted applications in Cloudflare Access now support OAuth for MCP server authentication. This allows Cloudflare to delegate access from any self-hosted application to an MCP server via OAuth. The OAuth access token authorizes the MCP server to make requests to your self-hosted applications on behalf of the authorized user, using that user's specific permissions and scopes.

    For example, if you have an MCP server designed for internal use within your organization, you can configure Access policies to ensure that only authorized users can access it, regardless of which MCP client they use. Support for internal, self-hosted MCP servers also works with MCP server portals, allowing you to provide a single MCP endpoint for multiple MCP servers. For more on MCP server portals, read the blog post on the Cloudflare Blog.

  1. MCP server portal

    An MCP server portal centralizes multiple Model Context Protocol (MCP) servers onto a single HTTP endpoint. Key benefits include:

    • Streamlined access to multiple MCP servers: MCP server portals support both unauthenticated MCP servers as well as MCP servers secured using any third-party or custom OAuth provider. Users log in to the portal URL through Cloudflare Access and are prompted to authenticate separately to each server that requires OAuth.
    • Customized tools per portal: Admins can tailor an MCP portal to a particular use case by choosing the specific tools and prompt templates that they want to make available to users through the portal. This allows users to access a curated set of tools and prompts — the less external context exposed to the AI model, the better the AI responses tend to be.
    • Observability: Once the user's AI agent is connected to the portal, Cloudflare Access logs the individual requests made using the tools in the portal.

    This is available in an open beta for all customers across all plans! For more information, check out our blog post for this release.

  1. You can now list all vector identifiers in a Vectorize index using the new list-vectors operation. This enables bulk operations, auditing, and data migration workflows through paginated requests that maintain snapshot consistency.

    The operation is available via Wrangler CLI and REST API. Refer to the list-vectors best practices guide for detailed usage guidance.

  1. This week's update

    This week, critical vulnerabilities were disclosed that impact widely used open-source infrastructure, creating high-risk scenarios for code execution and operational disruption.

    Key Findings

    • Apache HTTP Server – Code Execution (CVE-2024-38474): A flaw in Apache HTTP Server allows attackers to achieve remote code execution, enabling full compromise of affected servers. This vulnerability threatens the confidentiality, integrity, and availability of critical web services.

    • Laravel (CVE-2024-55661): A security flaw in Laravel introduces the potential for remote code execution under specific conditions. Exploitation could provide attackers with unauthorized access to application logic and sensitive backend data.

    Impact

    These vulnerabilities pose severe risks to enterprise environments and open-source ecosystems. Remote code execution enables attackers to gain deep system access, steal data, disrupt services, and establish persistent footholds for broader intrusions. Given the widespread deployment of Apache HTTP Server and Laravel in production systems, timely patching and mitigation are critical.

    Ruleset | Rule ID | Legacy Rule ID | Description | Previous Action | New Action | Comments
    Cloudflare Managed Ruleset | | 100822_BETA | WordPress:Plugin:WPBookit - Remote Code Execution - CVE:CVE-2025-6058 | N/A | Disabled | This was merged into the original rule "WordPress:Plugin:WPBookit - Remote Code Execution - CVE:CVE-2025-6058" (ID: )
    Cloudflare Managed Ruleset | | 100831 | Apache HTTP Server - Code Execution - CVE:CVE-2024-38474 | Log | Disabled | This is a New Detection
    Cloudflare Managed Ruleset | | 100846 | Laravel - Remote Code Execution - CVE:CVE-2024-55661 | Log | Disabled | This is a New Detection
  1. JavaScript asset responses have been updated to use the text/javascript Content-Type header instead of application/javascript. While both MIME types are widely supported by browsers, the HTML Living Standard explicitly recommends text/javascript as the preferred type going forward.

    This change improves:

    • Standards alignment: Ensures consistency with the HTML spec and modern web platform guidance.
    • Interoperability: Some developer tools, validators, and proxies expect text/javascript and may warn or behave inconsistently with application/javascript.
    • Future-proofing: By following the spec-preferred MIME type, we reduce the risk of deprecation warnings or unexpected behavior in evolving browser environments.
    • Consistency: Most frameworks, CDNs, and hosting providers now default to text/javascript, so this change matches common ecosystem practice.

    Because all major browsers accept both MIME types, this update is backwards compatible and should not cause breakage.

    Users will see this change on the next deployment of their assets.
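
    To confirm the new header on one of your own assets, a quick check along these lines works (the URL is a placeholder, and the exact charset suffix may vary):

    const res = await fetch("https://example.com/app.js");
    // Previously "application/javascript"; now expected to start with "text/javascript".
    console.log(res.headers.get("Content-Type"));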

  1. Workers KV has completed rolling out performance improvements across all KV namespaces, providing a significant latency reduction on read operations for all KV users. This is due to architectural changes to KV's underlying storage infrastructure, which introduce a new metadata layer and substantially improve redundancy.

    Workers KV latency improvements showing P95 and P99 performance gains in Europe, Asia, Africa and Middle East regions as measured within KV's internal storage gateway worker.

    Performance improvements

    The new hybrid architecture delivers substantial latency reductions throughout the Europe, Asia, Middle East, and Africa regions. Over the past two weeks, we have observed the following:

    • p95 latency: Reduced from ~150ms to ~50ms (67% decrease)
    • p99 latency: Reduced from ~350ms to ~250ms (29% decrease)
  1. Audit Logs v2 dataset is now available via Logpush.

    This expands on earlier releases of Audit Logs v2 in the API and Dashboard UI.

    We recommend creating a new Logpush job for the Audit Logs v2 dataset.
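
    As a sketch, creating such a job through the Logpush jobs API could look like the following. The dataset identifier and destination below are assumptions for illustration only; confirm the exact dataset name and destination format in the Audit Logs and Logpush documentation.

    const accountId = "<ACCOUNT_ID>";
    const apiToken = "<API_TOKEN>";

    const response = await fetch(
      `https://api.cloudflare.com/client/v4/accounts/${accountId}/logpush/jobs`,
      {
        method: "POST",
        headers: {
          Authorization: `Bearer ${apiToken}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          name: "audit-logs-v2",
          dataset: "audit_logs_v2", // assumed identifier; check the Audit Logs docs
          destination_conf: "r2://my-log-bucket/audit-logs?account-id=<ACCOUNT_ID>", // illustrative destination
          enabled: true,
        }),
      },
    );
    console.log(await response.json());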

    Timelines for General Availability (GA) of Audit Logs v2 and the retirement of Audit Logs v1 will be shared in upcoming updates.

    For more details on Audit Logs v2, refer to the Audit Logs documentation.

  1. Cloudflare Logpush can now deliver logs using fixed, dedicated egress IPs. By routing Logpush traffic through a Cloudflare zone enabled with Aegis IPs, your log destination only needs to allow Aegis IPs, making setup more secure.

    Highlights:

    • Fixed egress IPs ensure your destination only accepts traffic from known addresses.
    • Works with any supported Logpush destination.
    • Recommended to use a dedicated zone as a proxy for easier management.

    To get started, work with your Cloudflare account team to provision Aegis IPs, then configure your Logpush job to deliver logs through the proxy zone. For full setup instructions, refer to the Logpush documentation.

  1. Ruleset | Rule ID | Legacy Rule ID | Description | Previous Action | New Action | Comments
    Cloudflare Managed Ruleset | | 100850 | Command Injection - Generic 2 | N/A | Disabled | This is a New Detection
    Cloudflare Managed Ruleset | | 100851 | Remote Code Execution - Java Deserialization | N/A | Disabled | This is a New Detection
    Cloudflare Managed Ruleset | | 100852 | Command Injection - Generic 3 | N/A | Disabled | This is a New Detection
    Cloudflare Managed Ruleset | | 100853 | Remote Code Execution - Common Bash Bypass Beta | N/A | Disabled | This is a New Detection
    Cloudflare Managed Ruleset | | 100854 | XSS - Generic JavaScript | N/A | Disabled | This is a New Detection
    Cloudflare Managed Ruleset | | 100855 | Command Injection - Generic 4 | N/A | Disabled | This is a New Detection
    Cloudflare Managed Ruleset | | 100856 | PHP Object Injection | N/A | Disabled | This is a New Detection
    Cloudflare Managed Ruleset | | 100857 | Generic - Parameter Fuzzing | N/A | Disabled | This is a New Detection
    Cloudflare Managed Ruleset | | 100858 | Code Injection - Generic 4 | N/A | Disabled | This is a New Detection
    Cloudflare Managed Ruleset | | 100859 | SQLi - UNION - 2 | N/A | Disabled | This is a New Detection
    Cloudflare Managed Ruleset | | 100860 | Command Injection - Generic 5 | N/A | Disabled | This is a New Detection
    Cloudflare Managed Ruleset | | 100861 | Command Execution - Generic | N/A | Disabled | This is a New Detection
    Cloudflare Managed Ruleset | | 100862 | GraphQL Injection - 2 | N/A | Disabled | This is a New Detection
    Cloudflare Managed Ruleset | | 100863 | Command Injection - Generic 6 | N/A | Disabled | This is a New Detection
    Cloudflare Managed Ruleset | | 100864 | Code Injection - Generic 2 | N/A | Disabled | This is a New Detection
    Cloudflare Managed Ruleset | | 100865 | PHP Object Injection - 2 | N/A | Disabled | This is a New Detection
    Cloudflare Managed Ruleset | | 100866 | SQLi - LIKE 2 | N/A | Disabled | This is a New Detection
    Cloudflare Managed Ruleset | | 100867 | SQLi - DROP - 2 | N/A | Disabled | This is a New Detection
    Cloudflare Managed Ruleset | | 100868 | Code Injection - Generic 3 | N/A | Disabled | This is a New Detection
    Cloudflare Managed Ruleset | | 100869 | Command Injection - Generic 7 | N/A | Disabled | This is a New Detection
    Cloudflare Managed Ruleset | | 100870 | Command Injection - Generic 8 | N/A | Disabled | This is a New Detection
    Cloudflare Managed Ruleset | | 100871 | SQLi - LIKE 3 | N/A | Disabled | This is a New Detection
  1. A new GA release for the Windows WARP client is now available on the stable releases downloads page.

    This release contains a hotfix for pre-login multi-user registrations in the 2025.6.1135.0 release.

    Changes and improvements

    • Fixes an issue where new pre-login registrations were not being properly created.

    Known issues

    • For Windows 11 24H2 users, Microsoft has confirmed a regression that may lead to performance issues like mouse lag, audio cracking, or other slowdowns. Cloudflare recommends that users experiencing these issues upgrade to Windows 11 24H2 KB5062553 or higher for resolution.

    • Devices using WARP client 2025.4.929.0 and up may experience Local Domain Fallback failures if a fallback server has not been configured. To configure a fallback server, refer to Route traffic to fallback server.

    • Devices with KB5055523 installed may receive a warning about Win32/ClickFix.ABA being present in the installer. To resolve this false positive, update Microsoft Security Intelligence to version 1.429.19.0 or later.

    • DNS resolution may be broken when the following conditions are all true:

      • WARP is in Secure Web Gateway without DNS filtering (tunnel-only) mode.
      • A custom DNS server address is configured on the primary network adapter.
      • The custom DNS server address on the primary network adapter is changed while WARP is connected.

      To work around this issue, please reconnect the WARP client by toggling off and back on.

  1. You can now create a client (a Durable Object stub) to a Durable Object with the new getByName method, removing the need to convert Durable Object names to IDs and then create a stub.

    // Before: (1) translate name to ID then (2) get a client
    const objectId = env.MY_DURABLE_OBJECT.idFromName("foo"); // or .newUniqueId()
    const stub = env.MY_DURABLE_OBJECT.get(objectId);
    // Now: retrieve client to Durable Object directly via its name
    const stub = env.MY_DURABLE_OBJECT.getByName("foo");
    // Use client to send request to the remote Durable Object
    const rpcResponse = await stub.sayHello();

    Each Durable Object has a globally-unique name, which allows you to send requests to a specific object from anywhere in the world. Thus, a Durable Object can be used to coordinate between multiple clients who need to work together. You can have billions of Durable Objects, providing isolation between application tenants.

    To learn more, visit the Durable Objects API Documentation or the getting started guide.

  1. Enterprise Gateway users can now use Bring Your Own IP (BYOIP) for dedicated egress IPs.

    Admins can now onboard and use their own IPv4 or IPv6 prefixes to egress traffic from Cloudflare, delivering greater control, flexibility, and compliance for network traffic.

    Get started by following the BYOIP onboarding process. Once your IPs are onboarded, go to Gateway > Egress policies and select or create an egress policy. In Select an egress IP, choose Use dedicated egress IPs (Cloudflare or BYOIP), then select your BYOIP address from the dropdown menu.

    Screenshot of a dropdown menu adding a BYOIP IPv4 address as a dedicated egress IP in a Gateway egress policy

    For more information, refer to BYOIP for dedicated egress IPs.

  1. A new GA release for the Windows WARP client is now available on the stable releases downloads page.

    This release contains minor fixes and improvements.

    Changes and improvements

    • Improvements to better manage multi-user pre-login registrations.
    • Fixed an issue preventing devices from reaching split-tunneled traffic even when WARP was disconnected.
    • Fix to prevent WARP from re-enabling its firewall rules after a user-initiated disconnect.
    • Improvement for faster client connectivity on high-latency captive portal networks.
    • Fixed an issue where recursive CNAME records could cause intermittent WARP connectivity issues.

    Known issues

    • For Windows 11 24H2 users, Microsoft has confirmed a regression that may lead to performance issues like mouse lag, audio cracking, or other slowdowns. Cloudflare recommends that users experiencing these issues upgrade to Windows 11 24H2 KB5062553 or higher for resolution.

    • Devices using WARP client 2025.4.929.0 and up may experience Local Domain Fallback failures if a fallback server has not been configured. To configure a fallback server, refer to Route traffic to fallback server.

    • Devices with KB5055523 installed may receive a warning about Win32/ClickFix.ABA being present in the installer. To resolve this false positive, update Microsoft Security Intelligence to version 1.429.19.0 or later.

    • DNS resolution may be broken when the following conditions are all true:

      • WARP is in Secure Web Gateway without DNS filtering (tunnel-only) mode.
      • A custom DNS server address is configured on the primary network adapter.
      • The custom DNS server address on the primary network adapter is changed while WARP is connected.

      To work around this issue, reconnect the WARP client by toggling off and back on.

  1. A new GA release for the macOS WARP client is now available on the stable releases downloads page.

    This release contains minor fixes and improvements.

    Changes and improvements

    • Fixed an issue preventing devices from reaching split-tunneled traffic even when WARP was disconnected.
    • Fix to prevent WARP from re-enabling its firewall rules after a user-initiated disconnect.
    • Improvement for faster client connectivity on high-latency captive portal networks.
    • Fixed an issue where recursive CNAME records could cause intermittent WARP connectivity issues.

    Known issues

    • macOS Sequoia: Due to changes Apple introduced in macOS 15.0.x, the WARP client may not behave as expected. Cloudflare recommends the use of macOS 15.4 or later.
    • Devices using WARP client 2025.4.929.0 and up may experience Local Domain Fallback failures if a fallback server has not been configured. To configure a fallback server, refer to Route traffic to fallback server.
  1. A new GA release for the Linux WARP client is now available on the stable releases downloads page.

    This release contains minor fixes and improvements.

    Changes and improvements

    • Fixed an issue preventing devices from reaching split-tunneled traffic even when WARP was disconnected.
    • Fix to prevent WARP from re-enabling its firewall rules after a user-initiated disconnect.
    • Improvement for faster client connectivity on high-latency captive portal networks.
    • Fixed an issue where recursive CNAME records could cause intermittent WARP connectivity issues.

    Known issues

    • Devices using WARP client 2025.4.929.0 and up may experience Local Domain Fallback failures if a fallback server has not been configured. To configure a fallback server, refer to Route traffic to fallback server.
  1. You can now subscribe to events from other Cloudflare services (for example, Workers KV, Workers AI, Workers) and consume those events via Queues, allowing you to build custom workflows, integrations, and logic in response to account activity.

    Event subscriptions architecture

    Event subscriptions allow you to receive messages when events occur across your Cloudflare account. Cloudflare products can publish structured events to a queue, which you can then consume with Workers or pull via HTTP from anywhere.

    To create a subscription, use the dashboard or Wrangler:

    Terminal window
    npx wrangler queues subscription create my-queue --source r2 --events bucket.created

    An event is a structured record of something happening in your Cloudflare account – like a Workers AI batch request being queued, a Worker build completing, or an R2 bucket being created. Events follow a consistent structure:

    Example R2 bucket created event
    {
    "type": "cf.r2.bucket.created",
    "source": {
    "type": "r2"
    },
    "payload": {
    "name": "my-bucket",
    "location": "WNAM"
    },
    "metadata": {
    "accountId": "f9f79265f388666de8122cfb508d7776",
    "eventTimestamp": "2025-07-28T10:30:00Z"
    }
    }

    Current event sources include R2, Workers KV, Workers AI, Workers Builds, Vectorize, Super Slurper, and Workflows. More sources and events are on the way.
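
    As a minimal sketch, a consumer Worker attached to the queue could handle these events like this (the event interface mirrors the example above; the handling logic is illustrative):

    interface CloudflareEvent {
      type: string;
      source: { type: string };
      payload: Record<string, unknown>;
      metadata: { accountId: string; eventTimestamp: string };
    }

    export default {
      async queue(batch: MessageBatch<CloudflareEvent>): Promise<void> {
        for (const message of batch.messages) {
          const event = message.body;
          if (event.type === "cf.r2.bucket.created") {
            console.log(`Bucket ${String(event.payload.name)} created at ${event.metadata.eventTimestamp}`);
          }
          // Acknowledge the message so it is not redelivered.
          message.ack();
        }
      },
    };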

    For more information on event subscriptions, available events, and how to get started, refer to our documentation.

  1. Wrangler's error screen has received several improvements to enhance your debugging experience!

    The error screen now features a refreshed design thanks to youch, with support for both light and dark themes, improved source map resolution logic that handles missing source files more reliably, and better error cause display.

    Before and after screenshots: old error screen, new light theme error screen, and new dark theme error screen.

    Try it out now with npx wrangler@latest dev in your Workers project.

  1. This week's update

    This week, a series of critical vulnerabilities were discovered impacting core enterprise and open-source infrastructure. These flaws present a range of risks, providing attackers with distinct pathways for remote code execution, methods to breach internal network boundaries, and opportunities for critical data exposure and operational disruption.

    Key Findings

    • SonicWall SMA (CVE-2025-32819, CVE-2025-32820, CVE-2025-32821): A remote authenticated attacker with SSLVPN user privileges can bypass path traversal protections. These vulnerabilities enable an attacker to bypass security checks to read, modify, or delete arbitrary files. An attacker with administrative privileges can escalate this further, using a command injection flaw to upload malicious files, which could ultimately force the appliance to reboot to its factory default settings.

    • Ms-Swift Project (CVE-2025-50460): An unsafe deserialization vulnerability exists in the Ms-Swift project's handling of YAML configuration files. If an attacker can control the content of a configuration file passed to the application, they can embed a malicious payload that executes arbitrary code during deserialization.

    • Apache Druid (CVE-2023-25194): This vulnerability in Apache Druid allows an attacker to cause the server to connect to a malicious LDAP server. By sending a specially crafted LDAP response, the attacker can trigger an unrestricted deserialization of untrusted data. If specific "gadgets" (classes that can be abused) are present in the server's classpath, this can be escalated to achieve Remote Code Execution (RCE).

    • Tenda AC8v4 (CVE-2025-51087, CVE-2025-51088): Vulnerabilities allow an authenticated attacker to trigger a stack-based buffer overflow. By sending malformed arguments in a request to specific endpoints, an attacker can crash the device or potentially achieve arbitrary code execution.

    • Open WebUI (CVE-2024-7959): This vulnerability allows a user to change the OpenAI URL endpoint to an arbitrary internal network address without proper validation. This flaw can be exploited to access internal services or cloud metadata endpoints, potentially leading to remote command execution if the attacker can retrieve instance secrets or access sensitive internal APIs.

    • BentoML (CVE-2025-54381): The vulnerability exists in the serialization/deserialization handlers for multipart form data and JSON requests, which automatically download files from user-provided URLs without proper validation of internal network addresses. This allows attackers to fetch from unintended internal services, including cloud metadata and localhost.

    • Adobe Experience Manager Forms (CVE-2025-54254): An Improper Restriction of XML External Entity Reference ('XXE') vulnerability that could lead to arbitrary file system read in Adobe AEM (≤6.5.23).

    Impact

    These vulnerabilities affect core infrastructure, from network security appliances like SonicWall to data platforms such as Apache Druid and ML frameworks like BentoML. The code execution and deserialization flaws are particularly severe, offering deep system access that allows attackers to steal data, disrupt services, and establish a foothold for broader intrusions. Simultaneously, SSRF and XXE vulnerabilities undermine network boundaries, exposing sensitive internal data and creating pathways for lateral movement. Beyond data-centric threats, flaws in edge devices like the Tenda router introduce the tangible risk of operational disruption, highlighting a multi-faceted threat to the security and stability of key enterprise systems.

    Ruleset | Rule ID | Legacy Rule ID | Description | Previous Action | New Action | Comments
    Cloudflare Managed Ruleset | | 100574 | SonicWall SMA - Remote Code Execution - CVE:CVE-2025-32819, CVE:CVE-2025-32820, CVE:CVE-2025-32821 | Log | Disabled | This is a New Detection
    Cloudflare Managed Ruleset | | 100576 | Ms-Swift Project - Remote Code Execution - CVE:CVE-2025-50460 | Log | Block | This is a New Detection
    Cloudflare Managed Ruleset | | 100585 | Apache Druid - Remote Code Execution - CVE:CVE-2023-25194 | Log | Block | This is a New Detection
    Cloudflare Managed Ruleset | | 100834 | Tenda AC8v4 - Auth Bypass - CVE:CVE-2025-51087, CVE:CVE-2025-51088 | Log | Block | This is a New Detection
    Cloudflare Managed Ruleset | | 100835 | Open WebUI - SSRF - CVE:CVE-2024-7959 | Log | Block | This is a New Detection
    Cloudflare Managed Ruleset | | 100837 | SQLi - OOB | Log | Block | This is a New Detection
    Cloudflare Managed Ruleset | | 100841 | BentoML - SSRF - CVE:CVE-2025-54381 | Log | Disabled | This is a New Detection
    Cloudflare Managed Ruleset | | 100841A | BentoML - SSRF - CVE:CVE-2025-54381 - 2 | Log | Disabled | This is a New Detection
    Cloudflare Managed Ruleset | | 100841B | BentoML - SSRF - CVE:CVE-2025-54381 - 3 | Log | Disabled | This is a New Detection
    Cloudflare Managed Ruleset | | 100845 | Adobe Experience Manager Forms - XSS - CVE:CVE-2025-54254 | Log | Block | This is a New Detection
    Cloudflare Managed Ruleset | | 100845A | Adobe Experience Manager Forms - XSS - CVE:CVE-2025-54254 - 2 | Log | Block | This is a New Detection
  1. SSH with Cloudflare Access for Infrastructure now supports SFTP. It is compatible with SFTP clients, such as Cyberduck.

  1. Earlier this year, we announced the launch of the new Terraform v5 Provider. We are aware of the high number of issues reported by the Cloudflare Community related to the v5 release. We have committed to releasing improvements on a two week cadence to ensure stability and reliability.

    One key change we adopted in recent weeks is a pivot to more comprehensive, test-driven development. We are still evaluating individual issues, but are also investing in much deeper testing to drive our stabilization efforts. We will subsequently be investing in comprehensive migration scripts. As a result, you will see several of the highest traffic APIs have been stabilized in the most recent release, and are supported by comprehensive acceptance tests.

    Thank you for continuing to raise issues. We triage them weekly and they help make our products stronger.

    Changes

    • Resources stabilized:
      • cloudflare_argo_smart_routing
      • cloudflare_bot_management
      • cloudflare_list
      • cloudflare_list_item
      • cloudflare_load_balancer
      • cloudflare_load_balancer_monitor
      • cloudflare_load_balancer_pool
      • cloudflare_spectrum_application
      • cloudflare_managed_transforms
      • cloudflare_url_normalization_settings
      • cloudflare_snippet
      • cloudflare_snippet_rules
      • cloudflare_zero_trust_access_application
      • cloudflare_zero_trust_access_group
      • cloudflare_zero_trust_access_identity_provider
      • cloudflare_zero_trust_access_mtls_certificate
      • cloudflare_zero_trust_access_mtls_hostname_settings
      • cloudflare_zero_trust_access_policy
      • cloudflare_zone
    • Multipart handling restored for cloudflare_snippet
    • cloudflare_bot_management diff issues resolved when running terraform plan and terraform apply
    • Other bug fixes

    For a more detailed look at all of the changes, refer to the changelog in GitHub.

    Issues Closed

    If you have an unaddressed issue with the provider, we encourage you to check the open issues and open a new one if one does not already exist for what you are experiencing.

    Upgrading

    We suggest holding off on migrating to v5 while we work on stabilization. This will help you avoid any blocking issues while the Terraform resources are actively being stabilized.

    If you'd like more information on migrating to v5, please make use of the migration guide. We have provided automated migration scripts using Grit which simplify the transition. These migration scripts do not support implementations which use Terraform modules, so customers making use of modules need to migrate manually. Please make use of terraform plan to test your changes before applying, and let us know if you encounter any additional issues by reporting to our GitHub repository.

    For more info

  1. You can now create more granular, network-aware Custom Rules in Cloudflare Load Balancing using the Autonomous System Number (ASN) of an incoming request.

    This allows you to steer traffic with greater precision based on the network source of a request. For example, you can route traffic from specific Internet Service Providers (ISPs) or enterprise customers to dedicated infrastructure, optimize performance, or enforce compliance by directing certain networks to preferred data centers.

    Create a Load Balancing Custom Rule using AS Num

    To get started, create a Custom Rule in your Load Balancer and select AS Num from the Field dropdown.

  1. Brand Protection detects domains that may be impersonating your brand — from common misspellings (cloudfalre.com) to malicious concatenations (cloudflare-okta.com). Saved search queries run continuously and alert you when suspicious domains appear.

    You can now create and save multiple queries in a single step, streamlining setup and management. Available now via the Brand Protection bulk query creation API.

  1. We've updated preview URLs for Cloudflare Workers to support long branch names.

    Previously, branch and Worker names exceeding the 63-character DNS limit would cause alias generation to fail, leaving pull requests without aliased preview URLs. This particularly impacted teams relying on descriptive branch naming.

    Now, Cloudflare automatically truncates long branch names and appends a unique hash, ensuring every pull request gets a working preview link.

    How it works

    • 63 characters or less: <branch-name>-<worker-name> → Uses actual branch name as is
    • 64 characters or more: <truncated-branch-name>--<hash>-<worker-name> → Uses truncated name with 4-character hash
    • Hash generation: The hash is derived from the full branch name to ensure uniqueness
    • Stable URLs: The same branch always generates the same hash across all commits
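    As a rough illustration of the scheme above, the alias for a branch can be approximated as follows. This is a sketch, not Wrangler's actual implementation; the exact truncation length and hash function are assumptions.

    // Illustrative sketch of the alias naming scheme described above.
    // Assumption: the 4-character hash is derived from a digest of the full branch name.
    async function previewAlias(branchName: string, workerName: string): Promise<string> {
      const plain = `${branchName}-${workerName}`;
      if (plain.length <= 63) {
        return plain; // short enough: the branch name is used as is
      }
      // Hash the full branch name so the same branch always maps to the same alias.
      const digest = await crypto.subtle.digest(
        "SHA-256",
        new TextEncoder().encode(branchName),
      );
      const hash = [...new Uint8Array(digest)]
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("")
        .slice(0, 4);
      // Truncate so "<truncated-branch-name>--<hash>-<worker-name>" fits within 63 characters.
      const budget = 63 - ("--".length + hash.length + "-".length + workerName.length);
      return `${branchName.slice(0, Math.max(budget, 0))}--${hash}-${workerName}`;
    }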

    Requirements and compatibility

    • Wrangler 4.30.0 or later: This feature requires updating to wrangler@4.30.0+
    • No configuration needed: Works automatically with existing preview URL setups
  1. Cloudflare Access logs now support the Customer Metadata Boundary (CMB). If you have configured the CMB for your account, all Access logging will respect that configuration.

  1. We are changing how Python Workers are structured by default. Previously, handlers were defined at the top-level of a module as on_fetch, on_scheduled, etc. methods, but now they live in an entrypoint class.

    Here's an example of how to now define a Worker with a fetch handler:

    from workers import Response, WorkerEntrypoint

    class Default(WorkerEntrypoint):
        async def fetch(self, request):
            return Response("Hello World!")

    To keep using the old-style handlers, you can specify the disable_python_no_global_handlers compatibility flag in your wrangler file:

    {
      "compatibility_flags": [
        "disable_python_no_global_handlers"
      ]
    }

    Consult the Python Workers documentation for more details.

  1. The recent Cloudflare Terraform Provider and SDK releases (such as cloudflare-typescript) bring significant improvements to the Workers developer experience. These updates focus on reliability, performance, and adding Python Workers support.

    Terraform Improvements

    Fixed Unwarranted Plan Diffs

    Resolved several issues with the cloudflare_workers_script resource that resulted in unwarranted plan diffs, including:

    • Using Durable Objects migrations
    • Using some bindings such as secret_text
    • Using smart placement

    A resource should never show a plan diff if there isn't an actual change. This fix reduces unnecessary noise in your Terraform plan and is available in Cloudflare Terraform Provider 5.8.0.

    Improved File Management

    You can now specify content_file and content_sha256 instead of content. This prevents the Workers script content from being stored in the state file, which greatly reduces plan diff size and noise. If your workflow syncs plans remotely, this should now be much faster since there is less data to sync. This is available in Cloudflare Terraform Provider 5.7.0.

    resource "cloudflare_workers_script" "my_worker" {
    account_id = "123456789"
    script_name = "my_worker"
    main_module = "worker.mjs"
    content_file = "worker.mjs"
    content_sha256 = filesha256("worker.mjs")
    }

    Assets Headers and Redirects Support

    Fixed the cloudflare_workers_script resource to properly support headers and redirects for Assets:

    resource "cloudflare_workers_script" "my_worker" {
    account_id = "123456789"
    script_name = "my_worker"
    main_module = "worker.mjs"
    content_file = "worker.mjs"
    content_sha256 = filesha256("worker.mjs")
    assets = {
    config = {
    headers = file("_headers")
    redirects = file("_redirects")
    }
    # Completion jwt from:
    # https://developers.cloudflare.com/api/resources/workers/subresources/assets/subresources/upload/
    jwt = "jwt"
    }
    }

    Available in Cloudflare Terraform Provider 5.8.0.

    Python Workers Support

    Added support for uploading Python Workers (beta) in Terraform. You can now deploy Python Workers with:

    resource "cloudflare_workers_script" "my_worker" {
    account_id = "123456789"
    script_name = "my_worker"
    content_file = "worker.py"
    content_sha256 = filesha256("worker.py")
    content_type = "text/x-python"
    }

    Available in Cloudflare Terraform Provider 5.8.0.

    SDK Enhancements

    Improved File Upload API

    Fixed an issue where Workers script versions in the SDK did not allow uploading files. This now works, and the file upload interface has been improved:

    const scriptContent = `
    export default {
      async fetch(request, env, ctx) {
        return new Response('Hello World!', { status: 200 });
      }
    };
    `;

    await client.workers.scripts.versions.create('my-worker', {
      account_id: '123456789',
      metadata: {
        main_module: 'my-worker.mjs',
      },
      files: [
        await toFile(Buffer.from(scriptContent), 'my-worker.mjs', {
          type: "application/javascript+module",
        }),
      ],
    });

    Will be available in cloudflare-typescript 4.6.0. A similar change will be available in cloudflare-python 4.4.0.

    Fixed updating KV values

    Previously when creating a KV value like this:

    await cf.kv.namespaces.values.update("my-kv-namespace", "key1", {
      account_id: "123456789",
      metadata: "my metadata",
      value: JSON.stringify({
        hello: "world"
      })
    });

    ...and recalling it in your Worker like this:

    const value = await c.env.KV.get<{hello: string}>("key1", "json");

    You'd get back {metadata: 'my metadata', value: "{'hello':'world'}"} instead of the correct value of {hello: 'world'}.

    This is fixed in cloudflare-typescript 4.5.0 and will be fixed in cloudflare-python 4.4.0.

  1. Cloudflare Logpush now supports IBM Cloud Logs as a native destination.

    Logs from Cloudflare can be sent to IBM Cloud Logs via Logpush. The setup can be done through the Logpush UI in the Cloudflare Dashboard or by using the Logpush API. The integration requires an IBM Cloud Logs HTTP Source Address and an IBM API Key. The feature also allows filtering events and selecting specific log fields.
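    For reference, creating such a job via the API looks like any other Logpush job; only the destination differs. The following is a minimal sketch: the destination_conf value is a placeholder to be built from your IBM Cloud Logs HTTP Source Address and IBM API Key as described in the destination documentation.

    // Sketch: create a Logpush job that ships HTTP request logs to IBM Cloud Logs.
    const zoneId = "<ZONE_ID>";

    const response = await fetch(
      `https://api.cloudflare.com/client/v4/zones/${zoneId}/logpush/jobs`,
      {
        method: "POST",
        headers: {
          Authorization: `Bearer ${process.env.CLOUDFLARE_API_TOKEN}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          name: "ibm-cloud-logs-http-requests",
          dataset: "http_requests",
          // Placeholder: use the IBM Cloud Logs destination string format from the docs.
          destination_conf: "<IBM_CLOUD_LOGS_HTTP_SOURCE_ADDRESS>?ibm_api_key=<IBM_API_KEY>",
          // Optional: restrict the job to specific fields.
          output_options: {
            field_names: ["ClientIP", "ClientRequestHost", "EdgeResponseStatus"],
          },
          enabled: true,
        }),
      },
    );
    console.log(await response.json());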

    For more information, refer to Destination Configuration documentation.

  1. A minimal implementation of the MessageChannel API is now available in Workers. This means that you can use MessageChannel to send messages between different parts of your Worker, but not across different Workers.

    The MessageChannel and MessagePort APIs will be available by default at the global scope with any worker using a compatibility date of 2025-08-15 or later. It is also available using the expose_global_message_channel compatibility flag, or can be explicitly disabled using the no_expose_global_message_channel compatibility flag.

    const { port1, port2 } = new MessageChannel();
    // Messages posted on port2 are delivered to port1, and vice versa.
    port1.onmessage = (event) => {
      console.log('Received message:', event.data);
    };
    port2.postMessage('Hello from port2!');

    Any value that can be used with the structuredClone(...) API can be sent over the port.
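    For example, structured values such as objects, Maps, and Dates can be posted directly, without manual serialization (a small illustrative snippet):

    const { port1, port2 } = new MessageChannel();
    port1.onmessage = (event) => {
      // event.data is a structured clone of the posted value.
      const { tags, createdAt } = event.data;
      console.log("Received tags:", [...tags], "created at", createdAt);
    };
    port2.postMessage({
      tags: new Map([["env", "production"]]),
      createdAt: new Date(),
    });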

    Differences

    There are a number of key limitations to the MessageChannel API in Workers:

    • Transfer lists are currently not supported. This means that you will not be able to transfer ownership of objects like ArrayBuffer or MessagePort between ports.
    • The MessagePort is not yet serializable. This means that you cannot send a MessagePort object through the postMessage method or via JSRPC calls.
    • The 'messageerror' event is only partially supported. If the 'onmessage' handler throws an error, the 'messageerror' event will be triggered; however, it will not be triggered when there are errors serializing or deserializing the message data. Instead, the error will be thrown when the postMessage method is called on the sending port.
    • The 'close' event will be emitted on both ports when one of the ports is closed, however it will not be emitted when the Worker is terminated or when one of the ports is garbage collected.
    This week's update focuses on a wide range of enterprise software, from network infrastructure and security platforms to content management systems and development frameworks. Flaws include unsafe deserialization, OS command injection, SSRF, authentication bypass, and arbitrary file upload — many of which allow unauthenticated remote code execution. Notable risks include Cisco Identity Services Engine and Ivanti EPMM, where successful exploitation could grant attackers full administrative control of core network infrastructure, and popular web services such as WordPress, SharePoint, and Ingress-Nginx, where security bypasses and arbitrary file uploads could lead to complete site or server compromise.

    Key Findings

    • Cisco Identity Services Engine (CVE-2025-20281): Insufficient input validation in a specific API of Cisco Identity Services Engine (ISE) and ISE-PIC allows an unauthenticated, remote attacker to execute arbitrary code with root privileges on an affected device.

    • Wazuh Server (CVE-2025-24016): An unsafe deserialization vulnerability in Wazuh Server (versions 4.4.0 to 4.9.0) allows for remote code execution and privilege escalation. By injecting unsanitized data, an attacker can trigger an exception to execute arbitrary code on the server.

    • CrushFTP (CVE-2025-54309): A flaw in AS2 validation within CrushFTP allows remote attackers to gain administrative access via HTTPS on systems not using the DMZ proxy feature. This flaw can lead to unauthorized file access and potential system compromise.

    • Kentico Xperience CMS (CVE-2025-2747, CVE-2025-2748): Vulnerabilities in Kentico Xperience CMS could enable cross-site scripting (XSS), allowing attackers to inject malicious scripts into web pages. Additionally, a flaw could allow unauthenticated attackers to bypass the Staging Sync Server's authentication, potentially leading to administrative control over the CMS.

    • Node.js (CVE-2025-27210): An incomplete fix for a previous vulnerability (CVE-2025-23084) in Node.js affects the path.join() API method on Windows systems. The vulnerability can be triggered using reserved Windows device names such as CON, PRN, or AUX.

    • WordPress:Plugin:Simple File List (CVE-2025-34085, CVE-2020-36847): This vulnerability in the Simple File List plugin for WordPress allows an unauthenticated remote attacker to upload arbitrary files to a vulnerable site. This can be exploited to achieve remote code execution on the server.
      (Note: CVE-2025-34085 has been rejected as a duplicate.)

    • GeoServer (CVE-2024-29198): A Server-Side Request Forgery (SSRF) vulnerability exists in GeoServer's Demo request endpoint, which can be exploited where the Proxy Base URL has not been configured.

    • Ivanti EPMM (CVE-2025-6771): An OS command injection vulnerability in Ivanti Endpoint Manager Mobile (EPMM) before versions 12.5.0.2, 12.4.0.3, and 12.3.0.3 allows a remote, authenticated attacker with high privileges to execute arbitrary code.

    • Microsoft SharePoint (CVE-2024-38018): This is a remote code execution vulnerability affecting Microsoft SharePoint Server.

    • Manager-IO (CVE-2025-54122): A critical unauthenticated full read Server-Side Request Forgery (SSRF) vulnerability is present in the proxy handler of both Manager Desktop and Server editions up to version 25.7.18.2519. This allows an unauthenticated attacker to bypass network isolation and access internal services.

    • Ingress-Nginx (CVE-2025-1974): A vulnerability in the Ingress-Nginx controller for Kubernetes allows an attacker to bypass access control rules. An unauthenticated attacker with access to the pod network can achieve arbitrary code execution in the context of the ingress-nginx controller.

    • PaperCut NG/MF (CVE-2023-2533): A Cross-Site Request Forgery (CSRF) vulnerability has been identified in PaperCut NG/MF. Under specific conditions, an attacker could exploit this to alter security settings or execute arbitrary code if they can deceive an administrator with an active login session into clicking a malicious link.

    • SonicWall SMA (CVE-2025-40598): This vulnerability could allow a remote, unauthenticated attacker to bypass security controls and potentially execute arbitrary JavaScript code.

    • WordPress (CVE-2025-5394): The "Alone – Charity Multipurpose Non-profit WordPress Theme" for WordPress is vulnerable to arbitrary file uploads. A missing capability check allows unauthenticated attackers to upload ZIP files containing webshells disguised as plugins, leading to remote code execution.

    Impact

    These vulnerabilities span a broad range of enterprise technologies, including network access control systems, monitoring platforms, web servers, CMS platforms, cloud services, and collaboration tools. Exploitation techniques range from remote code execution and command injection to authentication bypass, SQL injection, path traversal, and configuration weaknesses.

    A critical flaw in perimeter devices like Ivanti EPMM or SonicWall SMA could allow an unauthenticated attacker to gain remote code execution, completely breaching the primary network defense. A separate vulnerability within Cisco's Identity Services Engine could then be exploited to bypass network segmentation, granting an attacker widespread internal access. Insecure deserialization issues in platforms like Wazuh Server and CrushFTP could then be used to run malicious payloads or steal sensitive files from administrative consoles. Weaknesses in web delivery controllers like Ingress-Nginx or popular content management systems such as WordPress, SharePoint, and Kentico Xperience create vectors to bypass security controls, exfiltrate confidential data, or fully compromise servers.




    Ruleset | Rule ID | Legacy Rule ID | Description | Previous Action | New Action | Comments
    Cloudflare Managed Ruleset | 100538 | | GeoServer - SSRF - CVE:CVE-2024-29198 | Log | Block | This is a New Detection
    Cloudflare Managed Ruleset | 100548 | | Ivanti EPMM - Remote Code Execution - CVE:CVE-2025-6771 | Log | Block | This is a New Detection
    Cloudflare Managed Ruleset | 100550 | | Microsoft SharePoint - Remote Code Execution - CVE:CVE-2024-38018 | Log | Block | This is a New Detection
    Cloudflare Managed Ruleset | 100562 | | Manager-IO - SSRF - CVE:CVE-2025-54122 | Log | Block | This is a New Detection
    Cloudflare Managed Ruleset | 100565 | | Cisco Identity Services Engine - Remote Code Execution - CVE:CVE-2025-20281 | Log | Block | This is a New Detection
    Cloudflare Managed Ruleset | 100567 | | Ingress-Nginx - Remote Code Execution - CVE:CVE-2025-1974 | Log | Disabled | This is a New Detection
    Cloudflare Managed Ruleset | 100569 | | PaperCut NG/MF - Remote Code Execution - CVE:CVE-2023-2533 | Log | Block | This is a New Detection
    Cloudflare Managed Ruleset | 100571 | | SonicWall SMA - XSS - CVE:CVE-2025-40598 | Log | Block | This is a New Detection
    Cloudflare Managed Ruleset | 100573 | | WordPress - Dangerous File Upload - CVE:CVE-2025-5394 | Log | Block | This is a New Detection
    Cloudflare Managed Ruleset | 100806 | | Wazuh Server - Remote Code Execution - CVE:CVE-2025-24016 | Log | Block | This is a New Detection
    Cloudflare Managed Ruleset | 100824 | | CrushFTP - Remote Code Execution - CVE:CVE-2025-54309 | Log | Block | This is a New Detection
    Cloudflare Managed Ruleset | 100824A | | CrushFTP - Remote Code Execution - CVE:CVE-2025-54309 - 2 | Log | Block | This is a New Detection
    Cloudflare Managed Ruleset | 100825 | | AMI MegaRAC - Auth Bypass - CVE:CVE-2024-54085 | Log | Block | This is a New Detection
    Cloudflare Managed Ruleset | 100826 | | Kentico Xperience CMS - Auth Bypass - CVE:CVE-2025-2747 | Log | Block | This is a New Detection
    Cloudflare Managed Ruleset | 100827 | | Kentico Xperience CMS - XSS - CVE:CVE-2025-2748 | Log | Block | This is a New Detection
    Cloudflare Managed Ruleset | 100828 | | Node.js - Directory Traversal - CVE:CVE-2025-27210 | Log | Block | This is a New Detection
    Cloudflare Managed Ruleset | 100829 | | WordPress:Plugin:Simple File List - Remote Code Execution - CVE:CVE-2025-34085 | Log | Block | This is a New Detection
    Cloudflare Managed Ruleset | 100829A | | WordPress:Plugin:Simple File List - Remote Code Execution - CVE:CVE-2025-34085 - 2 | Log | Disabled | This is a New Detection
  1. Now, you can use .env files to provide secrets and override environment variables on the env object during local development with Wrangler and the Cloudflare Vite plugin.

    Previously, if you wanted to provide secrets or environment variables during local development, you had to use .dev.vars files. This is still supported, but you can now also use .env files, which are more familiar to many developers.

    Using .env files in local development

    You can create a .env file in your project root to define environment variables that will be used when running wrangler dev or vite dev. The .env file should be formatted like a dotenv file, such as KEY="VALUE":

    .env
    TITLE="My Worker"
    API_TOKEN="dev-token"

    When you run wrangler dev or vite dev, the environment variables defined in the .env file will be available in your Worker code via the env object:

    export default {
      async fetch(request, env) {
        const title = env.TITLE; // "My Worker"
        const apiToken = env.API_TOKEN; // "dev-token"
        const response = await fetch(
          `https://api.example.com/data?token=${apiToken}`,
        );
        return new Response(`Title: ${title} - ` + (await response.text()));
      },
    };

    Multiple environments with .env files

    If your Worker defines multiple environments, you can set different variables for each environment (ex: production or staging) by creating files named .env.<environment-name>.

    When you use wrangler <command> --env <environment-name> or CLOUDFLARE_ENV=<environment-name> vite dev, the corresponding environment-specific file will also be loaded and merged with the .env file.

    For example, if you want to set different environment variables for the staging environment, you can create a file named .env.staging:

    .env.staging
    API_TOKEN="staging-token"

    When you run wrangler dev --env staging or CLOUDFLARE_ENV=staging vite dev, the environment variables from .env.staging will be merged onto those from .env.

    export default {
      async fetch(request, env) {
        const title = env.TITLE; // "My Worker" (from `.env`)
        const apiToken = env.API_TOKEN; // "staging-token" (from `.env.staging`, overriding the value from `.env`)
        const response = await fetch(
          `https://api.example.com/data?token=${apiToken}`,
        );
        return new Response(`Title: ${title} - ` + (await response.text()));
      },
    };

    Find out more

    For more information on how to use .env files with Wrangler and the Cloudflare Vite plugin, see the following documentation:

  1. New information about broadcast metrics and events is now available in Cloudflare Stream in the Live Input details of the Dashboard.

    Live Input details showing metrics

    The new observability data helps you understand broadcast-side health and performance and troubleshoot common issues. It is particularly useful for new customers who are just getting started, and for platform customers who may have limited visibility into how their end users configure their encoders.

    To get started, start a live stream (see the getting started guide if you are new to Stream), then visit the Live Input details page in the dashboard.

    See our new live Troubleshooting guide to learn what these metrics mean and how to use them to address common broadcast issues.

  1. You can now import waitUntil from cloudflare:workers to extend your Worker's execution beyond the request lifecycle from anywhere in your code.

    Previously, waitUntil could only be accessed through the execution context (ctx) parameter passed to your Worker's handler functions. This meant that if you needed to schedule background tasks from deeply nested functions or utility modules, you had to pass the ctx object through multiple function calls to access waitUntil.

    Now, you can import waitUntil directly and use it anywhere in your Worker without needing to pass ctx as a parameter:

    import { waitUntil } from "cloudflare:workers";

    export function trackAnalytics(eventData) {
      const analyticsPromise = fetch("https://analytics.example.com/track", {
        method: "POST",
        body: JSON.stringify(eventData),
      });
      // Extend execution to ensure analytics tracking completes
      waitUntil(analyticsPromise);
    }

    This is particularly useful when you want to:

    • Schedule background tasks from utility functions or modules
    • Extend execution for analytics, logging, or cleanup operations
    • Avoid passing the execution context through multiple layers of function calls
    import { waitUntil } from "cloudflare:workers";

    export default {
      async fetch(request, env, ctx) {
        // Background task that should complete even after response is sent
        cleanupTempData(env.KV_NAMESPACE);
        return new Response("Hello, World!");
      }
    };

    function cleanupTempData(kvNamespace) {
      // This function can now use waitUntil without needing ctx
      const deletePromise = kvNamespace.delete("temp-key");
      waitUntil(deletePromise);
    }

    For more information, see the waitUntil documentation.

  1. When you deploy MX or Inline, not only can you apply email link isolation to suspicious links in all emails (including benign), you can now also apply email link isolation to all links of a specified disposition. This provides more flexibility in controlling user actions within emails.

    For example, you may want to deliver suspicious messages but isolate the links found within them so that users who choose to interact with the links will not accidentally expose your organization to threats. This means your end users are more secure than ever before.

    Expanded Email Link Isolation Configuration

    To isolate all links within a message based on its disposition, go to Settings > Link Actions, select View, and then select Configure. As with other links you isolate, an interstitial warns users that the site has been isolated, and the link is recrawled live to evaluate whether there are any changes in our threat intelligence. Learn more about this feature in Configure link actions.

    This feature is available across these Email Security packages:

    • Enterprise
    • Enterprise + PhishGuard
  1. This week’s highlight focuses on two critical vulnerabilities affecting key infrastructure and enterprise content management platforms. Both flaws present significant remote code execution risks that can be exploited with minimal or no user interaction.

    Key Findings

    • Squid (≤6.3) — CVE-2025-54574: A heap buffer overflow occurs when processing Uniform Resource Names (URNs). This vulnerability may allow remote attackers to execute arbitrary code on the server. The issue has been resolved in version 6.4.

    • Adobe AEM (≤6.5.23) — CVE-2025-54253: Due to a misconfiguration, attackers can achieve remote code execution without requiring any user interaction, posing a severe threat to affected deployments.

    Impact

    Both vulnerabilities expose critical attack vectors that can lead to full server compromise. The Squid heap buffer overflow allows remote code execution by crafting malicious URNs, which can lead to server takeover or denial of service. Given Squid’s widespread use as a caching proxy, this flaw could be exploited to disrupt network traffic or gain footholds inside secure environments.

    Adobe AEM’s remote code execution vulnerability enables attackers to run arbitrary code on the content management server without any user involvement. This puts sensitive content, application integrity, and the underlying infrastructure at extreme risk. Exploitation could lead to data theft, defacement, or persistent backdoor installation.

    These findings reinforce the urgency of updating to the patched versions — Squid 6.4 and Adobe AEM 6.5.24 or later — and reviewing configurations to prevent exploitation.

    Ruleset | Rule ID | Legacy Rule ID | Description | Previous Action | New Action | Comments
    Cloudflare Managed Ruleset | 100844 | | Adobe Experience Manager Forms - Remote Code Execution - CVE:CVE-2025-54253 | N/A | Block | This is a New Detection
    Cloudflare Managed Ruleset | 100840 | | Squid - Buffer Overflow - CVE:CVE-2025-54574 | N/A | Block | This is a New Detection
  1. By setting the value of the cache property to no-cache, you can force Cloudflare's cache to revalidate its contents with the origin when making subrequests from Cloudflare Workers.

    index.js
    export default {
      async fetch(req, env, ctx) {
        const request = new Request("https://cloudflare.com", {
          cache: "no-cache",
        });
        const response = await fetch(request);
        return response;
      },
    };

    When no-cache is set, the Worker request will first look for a match in Cloudflare's cache, then:

    • If there is a match, a conditional request is sent to the origin, regardless of whether the match is fresh or stale. If the resource has not changed, the cached version is returned. If the resource has changed, it is downloaded from the origin, updated in the cache, and returned.
    • If there is no match, Workers will make a standard request to the origin and cache the response.

    This increases compatibility with NPM packages and JavaScript frameworks that rely on setting the cache property, which is a cross-platform standard part of the Request interface. Previously, if you set the cache property on Request to 'no-cache', the Workers runtime threw an exception.

  1. Cloudflare Load Balancing Monitors support loading and applying settings for a specific zone to monitoring requests to origin endpoints. This feature has been migrated to new infrastructure to improve reliability, performance, and accuracy.

    All zone monitors have been tested against the new infrastructure. There should be no change to health monitoring results of currently healthy and active pools. Newly created or re-enabled pools may need validation of their monitor zone settings before being introduced to service, especially regarding correct application of mTLS.

    What you can expect:

    • More reliable application of zone settings to monitoring requests, including
      • Authenticated Origin Pulls
      • Aegis Egress IP Pools
      • Argo Smart Routing
      • HTTP/2 to Origin
    • Improved support and bug fixes for retries, redirects, and proxied origin resolution
    • Improved performance and reliability of monitoring requests within the Cloudflare network
    • Unrelated CDN or WAF configuration changes should pose no risk to pool health
  1. Radar now introduces Certificate Transparency (CT) insights, providing visibility into certificate issuance trends based on Certificate Transparency logs currently monitored by Cloudflare.

    The following API endpoints are now available:

    For the summary and timeseries_groups endpoints, the following dimensions are available (and also usable as filters):

    • ca: Certification Authority (certificate issuer)
    • ca_owner: Certification Authority Owner
    • duration: Certificate validity duration (between NotBefore and NotAfter dates)
    • entry_type: Entry type (certificate vs. pre-certificate)
    • expiration_status: Expiration status (valid vs. expired)
    • has_ips: Presence of IP addresses in certificate Subject Alternative Names (SANs)
    • has_wildcards: Presence of wildcard DNS names in certificate SANs
    • log: CT log name
    • log_api: CT log API (RFC6962 vs. Static)
    • log_operator: CT log operator
    • public_key_algorithm: Public key algorithm of certificate's key
    • signature_algorithm: Signature algorithm used by CA to sign certificate
    • tld: Top-level domain for DNS names found in certificate SANs
    • validation_level: Validation level
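    As an illustration, a summary grouped by one of these dimensions might be fetched as sketched below. The endpoint path and response shape are assumptions based on how other Radar datasets are exposed; check the Radar API reference for the authoritative routes.

    // Sketch: CT issuance summary by CA owner over the last 7 days (assumed route and shape).
    const response = await fetch(
      "https://api.cloudflare.com/client/v4/radar/ct/summary/ca_owner?dateRange=7d",
      {
        headers: {
          Authorization: `Bearer ${process.env.CLOUDFLARE_API_TOKEN}`,
        },
      },
    );
    const { result } = await response.json();
    console.log(result); // summary of certificate share per CA owner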

    Check out the new Certificate Transparency insights in the new Radar page.

    The latest releases of @cloudflare/agents bring major improvements to MCP transport protocol support and agent connectivity. Key updates include:

    MCP elicitation support

    MCP servers can now request user input during tool execution, enabling interactive workflows like confirmations, forms, and multi-step processes. This feature uses durable storage to preserve elicitation state even during agent hibernation, ensuring seamless user interactions across agent lifecycle events.

    // Request user confirmation via elicitation
    const confirmation = await this.elicitInput({
      message: `Are you sure you want to increment the counter by ${amount}?`,
      requestedSchema: {
        type: "object",
        properties: {
          confirmed: {
            type: "boolean",
            title: "Confirm increment",
            description: "Check to confirm the increment",
          },
        },
        required: ["confirmed"],
      },
    });

    Check out our demo to see elicitation in action.

    HTTP streamable transport for MCP

    MCP now supports HTTP streamable transport which is recommended over SSE. This transport type offers:

    • Better performance: More efficient data streaming and reduced overhead
    • Improved reliability: Enhanced connection stability and error recovery
    • Automatic fallback: If streamable transport is not available, it gracefully falls back to SSE

    export default MyMCP.serve("/mcp", {
      binding: "MyMCP",
    });

    The SDK automatically selects the best available transport method, gracefully falling back from streamable-http to SSE when needed.

    Enhanced MCP connectivity

    Significant improvements to MCP server connections and transport reliability:

    • Auto transport selection: Automatically determines the best transport method, falling back from streamable-http to SSE as needed
    • Improved error handling: Better connection state management and error reporting for MCP servers
    • Reliable prop updates: Centralized agent property updates ensure consistency across different contexts

    Lightweight .queue for fast task deferral

    You can use .queue() to enqueue background work — ideal for tasks like processing user messages, sending notifications etc.

    class MyAgent extends Agent {
      doSomethingExpensive(payload) {
        // a long running process that you want to run in the background
      }

      async queueSomething() {
        await this.queue("doSomethingExpensive", somePayload); // this will NOT block further execution, and runs in the background
        await this.queue("doSomethingExpensive", someOtherPayload); // the callback will NOT run until the previous callback is complete
        // ... call as many times as you want
      }
    }

    Want to try it yourself? Just define a method like processMessage in your agent, and you’re ready to scale.

    New email adapter

    Want to build an AI agent that can receive and respond to emails automatically? With the new email adapter and onEmail lifecycle method, now you can.

    // Imports assume the "agents" package exports Agent and the AgentEmail type,
    // and that postal-mime is installed for parsing raw messages.
    import { Agent, type AgentEmail } from "agents";
    import PostalMime from "postal-mime";

    export class EmailAgent extends Agent {
      async onEmail(email: AgentEmail) {
        const raw = await email.getRaw();
        const parsed = await PostalMime.parse(raw);
        // create a response based on the email contents
        // and then send a reply
        await this.replyToEmail(email, {
          fromName: "Email Agent",
          body: `Thanks for your email! You've sent us "${parsed.subject}". We'll process it shortly.`,
        });
      }
    }

    You route incoming mail like this:

    // Assumes routeAgentEmail and createAddressBasedEmailResolver are exported by the "agents" package.
    import { routeAgentEmail, createAddressBasedEmailResolver } from "agents";

    export default {
      async email(email, env) {
        await routeAgentEmail(email, env, {
          resolver: createAddressBasedEmailResolver("EmailAgent"),
        });
      },
    };

    You can find a full example here.

    Automatic context wrapping for custom methods

    Custom methods are now automatically wrapped with the agent's context, so calling getCurrentAgent() should work regardless of where in an agent's lifecycle it's called. Previously this would not work on RPC calls, but now just works out of the box.

    export class MyAgent extends Agent {
      async suggestReply(message) {
        // getCurrentAgent() now correctly works, even when called inside an RPC method
        const { agent } = getCurrentAgent()!;
        return generateText({
          prompt: `Suggest a reply to: "${message}" from "${agent.name}"`,
          tools: [replyWithEmoji],
        });
      }
    }

    Try it out and tell us what you build!

  1. We’ve shipped a major release for the @cloudflare/sandbox SDK, turning it into a full-featured, container-based execution platform that runs securely on Cloudflare Workers.

    This update adds live streaming of output, persistent Python and JavaScript code interpreters with rich output support (charts, tables, HTML, JSON), file system access, Git operations, full background process control, and the ability to expose running services via public URLs.

    This makes it ideal for building AI agents, CI runners, cloud REPLs, data analysis pipelines, or full developer tools — all without managing infrastructure.

    Code interpreter (Python, JS, TS)

    Create persistent code contexts with support for rich visual + structured outputs.

    createCodeContext(options)

    Creates a new code execution context with persistent state.

    // Create a Python context
    const pythonCtx = await sandbox.createCodeContext({ language: "python" });

    // Create a JavaScript context
    const jsCtx = await sandbox.createCodeContext({ language: "javascript" });

    Options:

    • language: Programming language ('python' | 'javascript' | 'typescript')
    • cwd: Working directory (default: /workspace)
    • envVars: Environment variables for the context

    runCode(code, options)

    Executes code with optional streaming callbacks.

    // Simple execution
    const execution = await sandbox.runCode('print("Hello World")', {
    context: pythonCtx,
    });
    // With streaming callbacks
    await sandbox.runCode(
    `
    for i in range(5):
    print(f"Step {i}")
    time.sleep(1)
    `,
    {
    context: pythonCtx,
    onStdout: (output) => console.log("Real-time:", output.text),
    onResult: (result) => console.log("Result:", result),
    },
    );

    Options:

    • language: Programming language ('python' | 'javascript' | 'typescript')
    • cwd: Working directory (default: /workspace)
    • envVars: Environment variables for the context

    Real-time streaming output

    Returns a streaming response for real-time processing.

    const stream = await sandbox.runCodeStream(
      "import time; [print(i) for i in range(10)]",
    );
    // Process the stream as needed

    Rich output handling

    Interpreter outputs are auto-formatted and returned in multiple formats:

    • text
    • html (e.g., Pandas tables)
    • png, svg (e.g., Matplotlib charts)
    • json (structured data)
    • chart (parsed visualizations)
    const result = await sandbox.runCode(
      `
    import seaborn as sns
    import matplotlib.pyplot as plt

    data = sns.load_dataset("flights")
    pivot = data.pivot(index="month", columns="year", values="passengers")
    sns.heatmap(pivot, annot=True, fmt="d")
    plt.title("Flight Passengers")
    plt.show()
    pivot.to_dict()
    `,
      { context: pythonCtx },
    );

    if (result.png) {
      console.log("Chart output:", result.png);
    }

    Preview URLs from Exposed Ports

    Start background processes and expose them with live URLs.

    await sandbox.startProcess("python -m http.server 8000");
    const preview = await sandbox.exposePort(8000);
    console.log("Live preview at:", preview.url);

    Full process lifecycle control

    Start, inspect, and terminate long-running background processes.

    const process = await sandbox.startProcess("node server.js");
    console.log(`Started process ${process.id} with PID ${process.pid}`);

    // Monitor the process
    const logStream = await sandbox.streamProcessLogs(process.id);
    for await (const log of parseSSEStream<LogEvent>(logStream)) {
      console.log(`Server: ${log.data}`);
    }
    • listProcesses() - List all running processes
    • getProcess(id) - Get detailed process status
    • killProcess(id, signal) - Terminate specific processes
    • killAllProcesses() - Kill all processes
    • streamProcessLogs(id, options) - Stream logs from running processes
    • getProcessLogs(id) - Get accumulated process output

    Git integration

    Clone Git repositories directly into the sandbox.

    await sandbox.gitCheckout("https://github.com/user/repo", {
      branch: "main",
      targetDir: "my-project",
    });

    Sandboxes are still experimental. We're using them to explore how isolated, container-like workloads might scale on Cloudflare — and to help define the developer experience around them.

  1. We're thrilled to be a Day 0 partner with OpenAI to bring their latest open models to Workers AI, including support for Responses API, Code Interpreter, and Web Search (coming soon).

    Get started with the new models at @cf/openai/gpt-oss-120b and @cf/openai/gpt-oss-20b. Check out the