Configuration
Wrangler optionally uses a configuration file to customize the development and deployment setup for a Worker.
It is best practice to treat Wrangler's configuration file as the source of truth for configuring a Worker.
{ "name": "my-worker", "main": "src/index.js", "compatibility_date": "2022-07-12", "workers_dev": false, "route": { "pattern": "example.org/*", "zone_name": "example.org" }, "kv_namespaces": [ { "binding": "<MY_NAMESPACE>", "id": "<KV_ID>" } ], "env": { "staging": { "name": "my-worker-staging", "route": { "pattern": "staging.example.org/*", "zone_name": "example.org" }, "kv_namespaces": [ { "binding": "<MY_NAMESPACE>", "id": "<STAGING_KV_ID>" } ] } }}
```toml
# Top-level configuration
name = "my-worker"
main = "src/index.js"
compatibility_date = "2022-07-12"

workers_dev = false
route = { pattern = "example.org/*", zone_name = "example.org" }

kv_namespaces = [
  { binding = "<MY_NAMESPACE>", id = "<KV_ID>" }
]

[env.staging]
name = "my-worker-staging"
route = { pattern = "staging.example.org/*", zone_name = "example.org" }

kv_namespaces = [
  { binding = "<MY_NAMESPACE>", id = "<STAGING_KV_ID>" }
]
```
You can define different configurations for a Worker using Wrangler environments. There is a default (top-level) environment, and you can create named environments that provide environment-specific configuration. These are defined under `[env.<name>]` keys, such as `[env.staging]`, which you can then preview or deploy with the `-e` / `--env` flag in `wrangler` commands, for example `npx wrangler deploy --env staging`.
The majority of keys are inheritable, meaning that top-level configuration can be used in environments. Bindings, such as `vars` or `kv_namespaces`, are not inheritable and need to be defined explicitly for each environment. Further, there are a few keys that can only appear at the top level.
Top-level keys apply to the Worker as a whole (and therefore all environments). They cannot be defined within named environments.
- `keep_vars` (boolean, optional): Whether Wrangler should keep variables configured in the dashboard on deploy. Refer to source of truth.
- `migrations` (object[], optional): When making changes to your Durable Object classes, you must perform a migration. Refer to Durable Object migrations.
- `send_metrics` (boolean, optional): Whether Wrangler should send usage data to Cloudflare for this project. Defaults to `true`. You can learn more about this in our data policy.
- `site` (object, optional, deprecated): Refer to the Workers Sites section below for more information. Cloudflare Pages and Workers Assets are preferred over this approach. This is not supported by the Cloudflare Vite plugin.
Inheritable keys are configurable at the top-level, and can be inherited (or overridden) by environment-specific configuration.
- `name` (string, required): The name of your Worker. Alphanumeric characters (`a`, `b`, `c`, etc.) and dashes (`-`) only. Do not use underscores (`_`).
- `main` (string, required): The path to the entrypoint of your Worker that will be executed, for example `./src/index.ts`.
- `compatibility_date` (string, required): A date in the form `yyyy-mm-dd`, which will be used to determine which version of the Workers runtime is used. Refer to Compatibility dates.
- `account_id` (string, optional): The ID of the account associated with your zone. You might have more than one account, so make sure to use the ID of the account associated with the zone/route you provide, if you provide one. It can also be specified through the `CLOUDFLARE_ACCOUNT_ID` environment variable.
- `compatibility_flags` (string[], optional): A list of flags that enable upcoming features of the Workers runtime, usually used together with `compatibility_date`. Refer to compatibility dates.
- `workers_dev` (boolean, optional): Enables use of the `*.workers.dev` subdomain to deploy your Worker. If you have a Worker that is only for `scheduled` events, you can set this to `false`. Defaults to `true`. Refer to types of routes.
- `preview_urls` (boolean, optional): Enables use of Preview URLs to test your Worker. Defaults to `false`. Refer to Preview URLs.
- `route` (Route, optional): A route that your Worker should be deployed to. Only one of `routes` or `route` is required. Refer to types of routes.
- `routes` (Route[], optional): An array of routes that your Worker should be deployed to. Only one of `routes` or `route` is required. Refer to types of routes.
- `tsconfig` (string, optional): Path to a custom `tsconfig`. Not applicable if you're using the Cloudflare Vite plugin.
- `triggers` (object, optional): Cron definitions to trigger a Worker's `scheduled` function. Refer to triggers.
- `rules` (Rule, optional): An ordered list of rules that define which modules to import, and what type to import them as. You will need to specify rules to use `Text`, `Data`, and `CompiledWasm` modules, or when you wish to have a `.js` file be treated as an `ESModule` instead of `CommonJS`. Not applicable if you're using the Cloudflare Vite plugin.
- `build` (Build, optional): Configures a custom build step to be run by Wrangler when building your Worker. Refer to Custom builds. Not applicable if you're using the Cloudflare Vite plugin.
- `no_bundle` (boolean, optional): Skip internal build steps and directly deploy your Worker script. You must have a plain JavaScript Worker with no dependencies. Not applicable if you're using the Cloudflare Vite plugin.
- `find_additional_modules` (boolean, optional): If `true`, Wrangler will traverse the file tree below `base_dir`; any files that match `rules` will be included in the deployed Worker. Defaults to `true` if `no_bundle` is `true`, otherwise `false`. Can only be used with Module format Workers (not Service Worker format). Not applicable if you're using the Cloudflare Vite plugin.
- `base_dir` (string, optional): The directory in which module `rules` should be evaluated when including additional files (via `find_additional_modules`) in a Worker deployment. Defaults to the directory containing the `main` entry point of the Worker if not specified. Not applicable if you're using the Cloudflare Vite plugin.
- `preserve_file_names` (boolean, optional): Determines whether Wrangler will preserve the file names of additional modules bundled with the Worker. The default is to prepend filenames with a content hash, for example `34de60b44167af5c5a709e62a4e20c4f18c9e3b6-favicon.ico`. Not applicable if you're using the Cloudflare Vite plugin.
- `minify` (boolean, optional): Minify the Worker script before uploading. If you're using the Cloudflare Vite plugin, `minify` is replaced by Vite's `build.minify`.
- `keep_names` (boolean, optional): Wrangler uses esbuild to process the Worker code for development and deployment. This option allows you to specify whether esbuild should apply its keepNames logic to the code or not. Defaults to `true`.
- `logpush` (boolean, optional): Enables Workers Trace Events Logpush for a Worker. Any scripts with this property will automatically get picked up by the Workers Logpush job configured for your account. Defaults to `false`. Refer to Workers Logpush.
- `limits` (Limits, optional): Configures limits to be imposed on execution at runtime. Refer to Limits.
- `observability` (object, optional): Configures automatic observability settings for telemetry data emitted from your Worker. Refer to Observability.
- `assets` (Assets, optional): Configures static assets that will be served. Refer to Assets for more details.
- `migrations` (object, optional): Maps a Durable Object from a class name to a runtime state. This communicates changes to the Durable Object (creation/deletion/rename/transfer) to the Workers runtime and provides the runtime with instructions on how to deal with those changes. Refer to Durable Objects migrations.
Non-inheritable keys are configurable at the top-level, but cannot be inherited by environments and must be specified for each environment.
- `define` (Record<string, string>, optional): A map of values to substitute when deploying your Worker. If you're using the Cloudflare Vite plugin, `define` is replaced by Vite's `define`.
- `vars` (object, optional): A map of environment variables to set when deploying your Worker. Refer to Environment variables.
- `durable_objects` (object, optional): A list of Durable Objects that your Worker should be bound to. Refer to Durable Objects.
- `kv_namespaces` (object, optional): A list of KV namespaces that your Worker should be bound to. Refer to KV namespaces.
- `r2_buckets` (object, optional): A list of R2 buckets that your Worker should be bound to. Refer to R2 buckets.
- `vectorize` (object, optional): A list of Vectorize indexes that your Worker should be bound to. Refer to Vectorize indexes.
- `services` (object, optional): A list of service bindings that your Worker should be bound to. Refer to service bindings.
- `tail_consumers` (object, optional): A list of the Tail Workers your Worker sends data to. Refer to Tail Workers.
There are three types of routes: Custom Domains, routes, and `workers.dev`.
Custom Domains allow you to connect your Worker to a domain or subdomain, without having to make changes to your DNS settings or perform any certificate management.
- `pattern` (string, required): The pattern that your Worker should be run on, for example `"example.com"`.
- `custom_domain` (boolean, optional): Whether the Worker should be on a Custom Domain as opposed to a route. Defaults to `false`.
Example:
{ "routes": [ { "pattern": "shop.example.com", "custom_domain": true } ]}
[[routes]]pattern = "shop.example.com"custom_domain = true
Routes allow users to map a URL pattern to a Worker. A route can be configured as a zone ID route, a zone name route, or a simple route.
- `pattern` (string, required): The pattern that your Worker can be run on, for example `"example.com/*"`.
- `zone_id` (string, required): The ID of the zone that your `pattern` is associated with. Refer to Find zone and account IDs.
Example:
{ "routes": [ { "pattern": "subdomain.example.com/*", "zone_id": "<YOUR_ZONE_ID>" } ]}
[[routes]]pattern = "subdomain.example.com/*"zone_id = "<YOUR_ZONE_ID>"
- `pattern` (string, required): The pattern that your Worker should be run on, for example `"example.com/*"`.
- `zone_name` (string, required): The name of the zone that your `pattern` is associated with. If you are using API tokens, this will require the `Account` scope.
Example:
{ "routes": [ { "pattern": "subdomain.example.com/*", "zone_name": "example.com" } ]}
[[routes]]pattern = "subdomain.example.com/*"zone_name = "example.com"
This is a simple route that only requires a pattern.
Example:
{ "route": "example.com/*"}
route = "example.com/*"
Cloudflare Workers accounts come with a `workers.dev` subdomain that is configurable in the Cloudflare dashboard.

- `workers_dev` (boolean, optional): Whether the Worker runs on a custom `workers.dev` account subdomain. Defaults to `true`.
{ "workers_dev": false}
workers_dev = false
Triggers allow you to define the `cron` expression to invoke your Worker's `scheduled` function. Refer to Supported cron expressions.

- `crons` (string[], required): An array of `cron` expressions. To disable a Cron Trigger, set `crons = []`. Commenting out the `crons` key will not disable a Cron Trigger.
Example:
{ "triggers": { "crons": [ "* * * * *" ] }}
[triggers]crons = ["* * * * *"]
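On the Worker side, a cron trigger invokes the module's `scheduled()` handler. The sketch below is illustrative: the `LAST_RUN` KV binding and the idea of persisting the last run time are assumptions, not part of the configuration above.

```javascript
// Sketch of the scheduled() handler that a cron trigger invokes.
// The LAST_RUN KV binding and handler body are illustrative assumptions.
const worker = {
  async scheduled(event, env, ctx) {
    // event.cron is the expression that fired;
    // event.scheduledTime is the scheduled time as a Unix timestamp (ms)
    await env.LAST_RUN.put("last-cron", JSON.stringify({
      cron: event.cron,
      at: event.scheduledTime,
    }));
  },
};

export default worker;
```

You can test this handler locally with `wrangler dev --test-scheduled`, which exposes a `/__scheduled` endpoint for triggering it.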
The Observability setting allows you to automatically ingest, store, filter, and analyze logging data emitted from Cloudflare Workers directly from your Cloudflare Worker's dashboard.
- `enabled` (boolean, required): When set to `true` on a Worker, logs for the Worker are persisted. Defaults to `true` for all new Workers.
- `head_sampling_rate` (number, optional): A number between 0 and 1, where 0 indicates zero out of one hundred requests are logged, and 1 indicates every request is logged. If `head_sampling_rate` is unspecified, it is configured to a default value of 1 (100%). Read more about head-based sampling.
Example:
{ "observability": { "enabled": true, "head_sampling_rate": 0.1 }}
[observability]enabled = truehead_sampling_rate = 0.1 # 10% of requests are logged
You can configure a custom build step that will be run before your Worker is deployed. Refer to Custom builds.
- `command` (string, optional): The command used to build your Worker. The command is executed in the `sh` shell on Linux and macOS, and in the `cmd` shell on Windows. The `&&` and `||` shell operators may be used.
- `cwd` (string, optional): The directory in which the command is executed.
- `watch_dir` (string | string[], optional): The directory to watch for changes while using `wrangler dev`. Defaults to the current working directory.
Example:
{ "build": { "command": "npm run build", "cwd": "build_cwd", "watch_dir": "build_watch_dir" }}
[build]command = "npm run build"cwd = "build_cwd"watch_dir = "build_watch_dir"
You can impose limits on your Worker's behavior at runtime. Limits are only supported for the Standard Usage Model. Limits are only enforced when deployed to Cloudflare's network, not in local development. The CPU limit can be set to a maximum of 300,000 milliseconds (5 minutes).
Each isolate has some built-in flexibility to allow for cases where your Worker infrequently runs over the configured limit. If your Worker starts hitting the limit consistently, its execution will be terminated according to the limit configured.
- `cpu_ms` (number, optional): The maximum CPU time allowed per invocation, in milliseconds.
Example:
{ "limits": { "cpu_ms": 100 }}
[limits]cpu_ms = 100
The Workers Browser Rendering API allows developers to programmatically control and interact with a headless browser instance and create automation flows for their applications and products.
A browser binding will provide your Worker with an authenticated endpoint to interact with a dedicated Chromium browser instance.
- `binding` (string, required): The binding name used to refer to the browser binding. The value (string) you set will be used to reference this headless browser in your Worker. The binding must be a valid JavaScript variable name. For example, `binding = "HEAD_LESS"` or `binding = "simulatedBrowser"` would both be valid names for the binding.
Example:
{ "browser": { "binding": "<BINDING_NAME>" }}
[browser]binding = "<BINDING_NAME>"
D1 is Cloudflare's serverless SQL database. A Worker can query a D1 database (or databases) by creating a binding to each database via the D1 Workers Binding API.

To bind D1 databases to your Worker, assign an array of the below object to the `[[d1_databases]]` key.
- `binding` (string, required): The binding name used to refer to the D1 database. The value (string) you set will be used to reference this database in your Worker. The binding must be a valid JavaScript variable name. For example, `binding = "MY_DB"` or `binding = "productionDB"` would both be valid names for the binding.
- `database_name` (string, required): The name of the database. This is a human-readable name that allows you to distinguish between different databases, and is set when you first create the database.
- `database_id` (string, required): The ID of the database. The database ID is available when you first use `wrangler d1 create` or when you call `wrangler d1 list`, and uniquely identifies your database.
- `preview_database_id` (string, optional): The preview ID of this D1 database. If provided, `wrangler dev` uses this ID; otherwise, it uses `database_id`. This option is required when using `wrangler dev --remote`.
- `migrations_dir` (string, optional): The migration directory containing the migration files. By default, `wrangler d1 migrations create` creates a folder named `migrations`. You can use `migrations_dir` to specify a different folder containing the migration files (for example, if you have a monorepo setup and want to use a single D1 instance across your apps/packages). For more information, refer to the D1 Wrangler `migrations` commands and D1 migrations.
Example:
{ "d1_databases": [ { "binding": "<BINDING_NAME>", "database_name": "<DATABASE_NAME>", "database_id": "<DATABASE_ID>" } ]}
[[d1_databases]]binding = "<BINDING_NAME>"database_name = "<DATABASE_NAME>"database_id = "<DATABASE_ID>"
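Inside the Worker, the D1 binding exposes a query API. A minimal sketch, assuming the binding is named `DB` and a `users` table exists (both are illustrative, not part of the configuration above):

```javascript
// Sketch of querying a D1 binding. The DB binding name, the users table,
// and the query-parameter route shape are illustrative assumptions.
const worker = {
  async fetch(request, env) {
    const id = Number(new URL(request.url).searchParams.get("id") ?? 1);
    // prepare() + bind() build a parameterized statement; all() executes it
    const { results } = await env.DB
      .prepare("SELECT id, name FROM users WHERE id = ?")
      .bind(id)
      .all();
    return new Response(JSON.stringify(results), {
      headers: { "content-type": "application/json" },
    });
  },
};

export default worker;
```

Binding parameters with `bind()` rather than string interpolation avoids SQL injection.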
Dispatch namespace bindings allow for communication between a dynamic dispatch Worker and a dispatch namespace. Dispatch namespace bindings are used in Workers for Platforms. Workers for Platforms helps you deploy serverless functions programmatically on behalf of your customers.
- `binding` (string, required): The binding name. The value (string) you set will be used to reference this namespace in your Worker. The binding must be a valid JavaScript variable name. For example, `binding = "MY_NAMESPACE"` or `binding = "productionNamespace"` would both be valid names for the binding.
- `namespace` (string, required): The name of the dispatch namespace.
- `outbound` (object, optional):
  - `service` (string, required): The name of the outbound Worker to bind to.
  - `parameters` (array, optional): A list of parameters to pass data from your dynamic dispatch Worker to the outbound Worker.
{ "dispatch_namespaces": [ { "binding": "<BINDING_NAME>", "namespace": "<NAMESPACE_NAME>", "outbound": { "service": "<WORKER_NAME>", "parameters": [ "params_object" ] } } ]}
[[dispatch_namespaces]]binding = "<BINDING_NAME>"namespace = "<NAMESPACE_NAME>"outbound = {service = "<WORKER_NAME>", parameters = ["params_object"]}
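A dynamic dispatch Worker uses the binding to look up a user Worker by name and forward the request to it. A sketch, where the `DISPATCHER` binding name and the subdomain-based lookup are illustrative assumptions:

```javascript
// Sketch of a dynamic dispatch Worker. The DISPATCHER binding name and the
// hostname-based routing scheme are illustrative assumptions.
const worker = {
  async fetch(request, env) {
    // Route by subdomain label: customer-a.example.com → user Worker "customer-a"
    const name = new URL(request.url).hostname.split(".")[0];
    const userWorker = env.DISPATCHER.get(name);
    return userWorker.fetch(request);
  },
};

export default worker;
```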
Durable Objects provide low-latency coordination and consistent storage for the Workers platform.
To bind Durable Objects to your Worker, assign an array of the below object to the `durable_objects.bindings` key.

- `name` (string, required): The name of the binding used to refer to the Durable Object.
- `class_name` (string, required): The exported class name of the Durable Object.
- `script_name` (string, optional): The name of the Worker where the Durable Object is defined, if it is external to this Worker. This option can be used in both local and remote development. In local development, you must run the external Worker in a separate process (via `wrangler dev`). In remote development, the appropriate remote binding must be used.
- `environment` (string, optional): The environment of the `script_name` to bind to.
Example:
{ "durable_objects": { "bindings": [ { "name": "<BINDING_NAME>", "class_name": "<CLASS_NAME>" } ] }}
[[durable_objects.bindings]]name = "<BINDING_NAME>"class_name = "<CLASS_NAME>"
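For orientation, a sketch of what the two sides look like in code: the exported class (matching `class_name`) and the Worker that reaches it through the binding. The `Counter` class, the `COUNTER` binding name, and the in-memory count (a real object would persist state via its storage API) are illustrative assumptions:

```javascript
// Sketch of a Durable Object class and a Worker that routes to it.
// Counter / COUNTER are illustrative; the count is kept in memory for brevity,
// whereas a real object would persist it with its storage API.
export class Counter {
  constructor(state, env) {
    this.state = state;
    this.count = 0;
  }
  async fetch(request) {
    this.count += 1;
    return new Response(String(this.count));
  }
}

const worker = {
  async fetch(request, env) {
    // idFromName() maps a stable name to the same object instance every time
    const id = env.COUNTER.idFromName("global");
    const stub = env.COUNTER.get(id);
    return stub.fetch(request);
  },
};

export default worker;
```

Because all requests for the name `"global"` reach one instance, the object serializes access to its state.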
When making changes to your Durable Object classes, you must perform a migration. Refer to Durable Object migrations.
- `tag` (string, required): A unique identifier for this migration.
- `new_sqlite_classes` (string[], optional): The new Durable Objects being defined.
- `renamed_classes` ({from: string, to: string}[], optional): The Durable Objects being renamed.
- `deleted_classes` (string[], optional): The Durable Objects being removed.
Example:
{ "migrations": [ { "tag": "v1", "new_sqlite_classes": [ "DurableObjectExample" ] }, { "tag": "v2", "renamed_classes": [ { "from": "DurableObjectExample", "to": "UpdatedName" } ], "deleted_classes": [ "DeprecatedClass" ] } ]}
[[migrations]]tag = "v1" # Should be unique for each entrynew_sqlite_classes = ["DurableObjectExample"] # Array of new classes
[[migrations]]tag = "v2"renamed_classes = [{from = "DurableObjectExample", to = "UpdatedName" }] # Array of rename directivesdeleted_classes = ["DeprecatedClass"] # Array of deleted class names
You can send an email about your Worker's activity from your Worker to an email address verified on Email Routing. This is useful when you want to be notified of certain types of events being triggered, for example.

Before you can bind an email address to your Worker, you need to enable Email Routing and have at least one verified email address. Then, assign an array of the below objects to the `send_email` key, with the type of email binding you need.

- `name` (string, required): The binding name.
- `destination_address` (string, optional): The chosen email address you send emails to.
- `allowed_destination_addresses` (string[], optional): The allowlist of email addresses you send emails to.
You can add one or more types of bindings to your Wrangler file. However, each attribute must be on its own line:
{ "send_email": [ { "name": "<NAME_FOR_BINDING1>" }, { "name": "<NAME_FOR_BINDING2>", "destination_address": "<YOUR_EMAIL>@example.com" }, { "name": "<NAME_FOR_BINDING3>", "allowed_destination_addresses": [ "<YOUR_EMAIL>@example.com", "<YOUR_EMAIL2>@example.com" ] } ]}
send_email = [ {name = "<NAME_FOR_BINDING1>"}, {name = "<NAME_FOR_BINDING2>", destination_address = "<YOUR_EMAIL>@example.com"}, {name = "<NAME_FOR_BINDING3>", allowed_destination_addresses = ["<YOUR_EMAIL>@example.com", "<YOUR_EMAIL2>@example.com"]},]
Environment variables are a type of binding that allow you to attach text strings or JSON values to your Worker.
Example:
{ "name": "my-worker-dev", "vars": { "API_HOST": "example.com", "API_ACCOUNT_ID": "example_user", "SERVICE_X_DATA": { "URL": "service-x-api.dev.example", "MY_ID": 123 } }}
name = "my-worker-dev"
[vars]API_HOST = "example.com"API_ACCOUNT_ID = "example_user"SERVICE_X_DATA = { URL = "service-x-api.dev.example", MY_ID = 123 }
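Inside the Worker, these vars appear as properties of `env`. A sketch reading the vars defined above (the response shape is illustrative):

```javascript
// Sketch of reading the vars defined above from inside a Worker.
const worker = {
  async fetch(request, env) {
    // Plain string vars arrive as strings; JSON vars (like SERVICE_X_DATA)
    // arrive as parsed objects
    const body = {
      apiUrl: `https://${env.API_HOST}/v1/${env.API_ACCOUNT_ID}`,
      serviceId: env.SERVICE_X_DATA.MY_ID,
    };
    return new Response(JSON.stringify(body), {
      headers: { "content-type": "application/json" },
    });
  },
};

export default worker;
```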
Hyperdrive bindings allow you to interact with and query any Postgres database from within a Worker.
- `binding` (string, required): The binding name.
- `id` (string, required): The ID of the Hyperdrive configuration.
Example:
{ "compatibility_flags": [ "nodejs_compat_v2" ], "hyperdrive": [ { "binding": "<BINDING_NAME>", "id": "<ID>" } ]}
# required for database drivers to functioncompatibility_flags = ["nodejs_compat_v2"]
[[hyperdrive]]binding = "<BINDING_NAME>"id = "<ID>"
Cloudflare Images lets you make transformation requests to optimize, resize, and manipulate images stored in remote sources.
To bind Images to your Worker, assign the below object to the `images` key.

- `binding` (string, required): The name of the binding used to refer to the Images API.
{ "images": { "binding": "IMAGES", // i.e. available in your Worker on env.IMAGES },}
[images]binding = "IMAGES"
Workers KV is a global, low-latency, key-value data store. It stores data in a small number of centralized data centers, then caches that data in Cloudflare’s data centers after access.
To bind KV namespaces to your Worker, assign an array of the below object to the `kv_namespaces` key.

- `binding` (string, required): The binding name used to refer to the KV namespace.
- `id` (string, required): The ID of the KV namespace.
- `preview_id` (string, optional): The preview ID of this KV namespace. If provided, `wrangler dev` will use this ID for the KV namespace; otherwise, it will use `id`. This option is required when using `wrangler dev --remote` to develop against remote resources, and optional when developing locally (without `--remote`).
Example:
{ "kv_namespaces": [ { "binding": "<BINDING_NAME1>", "id": "<NAMESPACE_ID1>" }, { "binding": "<BINDING_NAME2>", "id": "<NAMESPACE_ID2>" } ]}
[[kv_namespaces]]binding = "<BINDING_NAME1>"id = "<NAMESPACE_ID1>"
[[kv_namespaces]]binding = "<BINDING_NAME2>"id = "<NAMESPACE_ID2>"
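In the Worker, each binding exposes `get`/`put` (among other methods). A sketch of a read-through cache, where the `MY_KV` binding name, the key scheme, and the one-hour TTL are illustrative assumptions:

```javascript
// Sketch of reading and writing through a KV binding.
// The MY_KV binding name and the caching pattern are illustrative assumptions.
const worker = {
  async fetch(request, env) {
    const key = new URL(request.url).pathname.slice(1) || "home";
    let value = await env.MY_KV.get(key);
    if (value === null) {
      value = `generated:${key}`;
      // expirationTtl (in seconds) is optional; 3600 is an illustrative choice
      await env.MY_KV.put(key, value, { expirationTtl: 3600 });
    }
    return new Response(value);
  },
};

export default worker;
```

Note that KV is eventually consistent: a `put` may take time to be visible from other locations.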
Queues is Cloudflare's global message queueing service, providing guaranteed delivery and message batching. To interact with a queue with Workers, you need a producer Worker to send messages to the queue and a consumer Worker to pull batches of messages out of the Queue. A single Worker can produce to and consume from multiple Queues.
To bind Queues to your producer Worker, assign an array of the below object to the `[[queues.producers]]` key.

- `queue` (string, required): The name of the queue, used on the Cloudflare dashboard.
- `binding` (string, required): The binding name used to refer to the queue in your Worker. The binding must be a valid JavaScript variable name. For example, `binding = "MY_QUEUE"` or `binding = "productionQueue"` would both be valid names for the binding.
- `delivery_delay` (number, optional): The number of seconds to delay messages sent to a queue by default. This can be overridden on a per-message or per-batch basis.
Example:
{ "queues": { "producers": [ { "binding": "<BINDING_NAME>", "queue": "<QUEUE_NAME>", "delivery_delay": 60 } ] }}
[[queues.producers]] binding = "<BINDING_NAME>" queue = "<QUEUE_NAME>" delivery_delay = 60 # Delay messages by 60 seconds before they are delivered to a consumer
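A sketch of the producer side, using `MY_QUEUE` in place of the `<BINDING_NAME>` placeholder (the message body and the per-message 10-second delay are illustrative):

```javascript
// Sketch of a producer Worker sending to the queue binding above.
// MY_QUEUE stands in for the <BINDING_NAME> placeholder.
const worker = {
  async fetch(request, env) {
    // send() accepts any structured-cloneable value;
    // delaySeconds overrides delivery_delay for this one message
    await env.MY_QUEUE.send(
      { url: request.url, receivedAt: Date.now() },
      { delaySeconds: 10 },
    );
    return new Response("queued", { status: 202 });
  },
};

export default worker;
```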
To bind Queues to your consumer Worker, assign an array of the below object to the `[[queues.consumers]]` key.

- `queue` (string, required): The name of the queue, used on the Cloudflare dashboard.
- `max_batch_size` (number, optional): The maximum number of messages allowed in each batch.
- `max_batch_timeout` (number, optional): The maximum number of seconds to wait for messages to fill a batch before the batch is sent to the consumer Worker.
- `max_retries` (number, optional): The maximum number of retries for a message, if it fails or `retryAll()` is invoked.
- `dead_letter_queue` (string, optional): The name of another queue to send a message to if it fails processing at least `max_retries` times. If a `dead_letter_queue` is not defined, messages that repeatedly fail processing will be discarded. If there is no queue with the specified name, it will be created automatically.
- `max_concurrency` (number, optional): The maximum number of concurrent consumers allowed to run at once. Leaving this unset will mean that the number of invocations will scale to the currently supported maximum. Refer to Consumer concurrency for more information on how consumers autoscale, particularly when messages are retried.
- `retry_delay` (number, optional): The number of seconds to delay retried messages by default, before they are re-delivered to the consumer. This can be overridden on a per-message or per-batch basis when retrying messages.
Example:
{ "queues": { "consumers": [ { "queue": "my-queue", "max_batch_size": 10, "max_batch_timeout": 30, "max_retries": 10, "dead_letter_queue": "my-queue-dlq", "max_concurrency": 5, "retry_delay": 120 } ] }}
[[queues.consumers]] queue = "my-queue" max_batch_size = 10 max_batch_timeout = 30 max_retries = 10 dead_letter_queue = "my-queue-dlq" max_concurrency = 5 retry_delay = 120 # Delay retried messages by 2 minutes before re-attempting delivery
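The consumer Worker receives batches through an exported `queue()` handler. A sketch, where the per-message processing logic is an illustrative assumption:

```javascript
// Sketch of the queue() handler a consumer Worker exports.
// The per-message processing logic is an illustrative assumption.
const worker = {
  async queue(batch, env, ctx) {
    for (const message of batch.messages) {
      try {
        if (message.body == null) throw new Error("empty message");
        // ...process message.body here...
        message.ack(); // mark this message as successfully handled
      } catch (err) {
        // retry() re-delivers later, after retry_delay by default;
        // after max_retries failures the message goes to the dead_letter_queue
        message.retry();
      }
    }
  },
};

export default worker;
```

Acking and retrying per message (rather than letting the whole handler throw) avoids re-delivering messages that already succeeded.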
Cloudflare R2 Storage allows developers to store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services.
To bind R2 buckets to your Worker, assign an array of the below object to the `r2_buckets` key.

- `binding` (string, required): The binding name used to refer to the R2 bucket.
- `bucket_name` (string, required): The name of this R2 bucket.
- `jurisdiction` (string, optional): The jurisdiction where this R2 bucket is located, if a jurisdiction has been specified. Refer to Jurisdictional Restrictions.
- `preview_bucket_name` (string, optional): The preview name of this R2 bucket. If provided, `wrangler dev` will use this name for the R2 bucket; otherwise, it will use `bucket_name`. This option is required when using `wrangler dev --remote`.
Example:
{ "r2_buckets": [ { "binding": "<BINDING_NAME1>", "bucket_name": "<BUCKET_NAME1>" }, { "binding": "<BINDING_NAME2>", "bucket_name": "<BUCKET_NAME2>" } ]}
[[r2_buckets]]binding = "<BINDING_NAME1>"bucket_name = "<BUCKET_NAME1>"
[[r2_buckets]]binding = "<BINDING_NAME2>"bucket_name = "<BUCKET_NAME2>"
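A sketch of a Worker serving objects from an R2 binding, using `MY_BUCKET` in place of the `<BINDING_NAME1>` placeholder (the path-to-key mapping is an illustrative choice):

```javascript
// Sketch of a Worker serving objects from an R2 binding.
// MY_BUCKET stands in for the <BINDING_NAME1> placeholder.
const worker = {
  async fetch(request, env) {
    const key = new URL(request.url).pathname.slice(1);
    const object = await env.MY_BUCKET.get(key);
    if (object === null) {
      return new Response("object not found", { status: 404 });
    }
    // object.body is a ReadableStream of the stored object;
    // httpEtag is derived from the object's version
    return new Response(object.body, {
      headers: { etag: object.httpEtag },
    });
  },
};

export default worker;
```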