- Overview
- Install and load the library
- Usage
- Build options
- API
- Browser Support
- Limitations
- Development
- Integrations
- License
## Overview

The `web-vitals` library is a tiny (~2K, brotli'd), modular library for measuring all the Web Vitals metrics on real users, in a way that accurately matches how they're measured by Chrome and reported to other Google tools (e.g. Chrome User Experience Report, PageSpeed Insights, Search Console's Speed Report).

The library supports all of the Core Web Vitals as well as a number of other metrics that are useful in diagnosing real-user performance issues.

The `web-vitals` library uses the `buffered` flag for `PerformanceObserver`, allowing it to access performance entries that occurred before the library was loaded.

This means you do not need to load this library early in order to get accurate performance data. In general, this library should be deferred until after other user-impacting code has loaded.
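To illustrate what the `buffered` flag does (a simplified sketch, not the library's internal code), a `PerformanceObserver` created with `buffered: true` also receives entries recorded before it was registered:

```js
// Sketch only: a buffered observer receives `largest-contentful-paint`
// entries that were recorded before this code ran, as well as new ones.
const po = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('LCP candidate at', entry.startTime, entry);
  }
});
po.observe({type: 'largest-contentful-paint', buffered: true});
```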
## Install and load the library

You can install this library from npm by running:

```sh
npm install web-vitals
```

> [!NOTE]
> If you're not using npm, you can still load `web-vitals` via `<script>` tags from a CDN like unpkg.com. See the load `web-vitals` from a CDN usage example below for details.
There are a few different builds of the `web-vitals` library, and how you load the library depends on which build you want to use.

For details on the difference between the builds, see which build is right for you.

**1. The "standard" build**

To load the "standard" build, import modules from the `web-vitals` package in your application code (as you would with any npm package and node-based build tool):

```js
import {onLCP, onINP, onCLS} from 'web-vitals';
```
**2. The "attribution" build**

Measuring the Web Vitals scores for your real users is a great first step toward optimizing the user experience. But if your scores aren't good, the next step is to understand why they're not good and work to improve them.

The "attribution" build helps you do that by including additional diagnostic information with each metric to help you identify the root cause of poor performance as well as prioritize the most important things to fix.

The "attribution" build is slightly larger than the "standard" build (by about 1.5K, brotli'd), so while the code size is still small, it's only recommended if you're actually using these features.

To load the "attribution" build, change any `import` statements that reference `web-vitals` to `web-vitals/attribution`:

```diff
-import {onLCP, onINP, onCLS} from 'web-vitals';
+import {onLCP, onINP, onCLS} from 'web-vitals/attribution';
```

Usage for each of the imported functions is identical to the standard build, but when importing from the attribution build, the metric objects will contain an additional `attribution` property.

See Send attribution data for usage examples, and the `attribution` reference for details on what values are added for each metric.
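As a quick illustration (a minimal sketch, not a replacement for the examples linked above), logging the `attribution` property shows the extra diagnostic fields for each metric:

```js
import {onLCP} from 'web-vitals/attribution';

onLCP((metric) => {
  // In the attribution build, each metric object carries an extra
  // `attribution` property with diagnostic details (e.g. the element
  // associated with the LCP candidate).
  console.log(metric.name, metric.value, metric.attribution);
});
```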
**Load `web-vitals` from a CDN**

The recommended way to use the `web-vitals` package is to install it from npm and integrate it into your build process. However, if you're not using npm, it's still possible to use `web-vitals` by requesting it from a CDN that serves npm package files.

The following examples show how to load `web-vitals` from unpkg.com. It is also possible to load it from jsDelivr or cdnjs.

> [!IMPORTANT]
> The unpkg.com, jsDelivr, and cdnjs CDNs are shown here for example purposes only. They are not affiliated with Google, and there are no guarantees that loading the library from those CDNs will continue to work in the future. Self-hosting the built files rather than loading them from a CDN is better for security, reliability, and performance reasons.
**Load the "standard" build** (using a module script)

```html
<!-- Append the `?module` param to load the module version of `web-vitals` -->
<script type="module">
  import {onCLS, onINP, onLCP} from 'https://unpkg.com/web-vitals@5?module';

  onCLS(console.log);
  onINP(console.log);
  onLCP(console.log);
</script>
```

Note: when the `web-vitals` code is isolated from the application code in this way, there is less need to rely on dynamic imports, so this code uses a regular, static `import` statement.
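If you do include this snippet in your own module code and want to avoid delaying that code, one alternative (a sketch, assuming the same unpkg URL as above) is to load the library with a dynamic `import()`:

```html
<script type="module">
  // Sketch: load `web-vitals` lazily so it doesn't block other module code.
  import('https://unpkg.com/web-vitals@5?module').then(({onCLS, onINP, onLCP}) => {
    onCLS(console.log);
    onINP(console.log);
    onLCP(console.log);
  });
</script>
```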
**Load the "standard" build** (using a classic script)

```html
<script>
  (function () {
    var script = document.createElement('script');
    script.src = 'https://unpkg.com/web-vitals@5/dist/web-vitals.iife.js';
    script.onload = function () {
      // When loading `web-vitals` using a classic script, all the public
      // methods can be found on the `webVitals` global namespace.
      webVitals.onCLS(console.log);
      webVitals.onINP(console.log);
      webVitals.onLCP(console.log);
    };
    document.head.appendChild(script);
  })();
</script>
```
**Load the "attribution" build** (using a module script)

```html
<!-- Append the `?module` param to load the module version of `web-vitals` -->
<script type="module">
  import {
    onCLS,
    onINP,
    onLCP,
  } from 'https://unpkg.com/web-vitals@5/dist/web-vitals.attribution.js?module';

  onCLS(console.log);
  onINP(console.log);
  onLCP(console.log);
</script>
```
**Load the "attribution" build** (using a classic script)

```html
<script>
  (function () {
    var script = document.createElement('script');
    script.src =
      'https://unpkg.com/web-vitals@5/dist/web-vitals.attribution.iife.js';
    script.onload = function () {
      // When loading `web-vitals` using a classic script, all the public
      // methods can be found on the `webVitals` global namespace.
      webVitals.onCLS(console.log);
      webVitals.onINP(console.log);
      webVitals.onLCP(console.log);
    };
    document.head.appendChild(script);
  })();
</script>
```
## Usage

Each of the Web Vitals metrics is exposed as a single function that takes a `callback` function, which will be called any time the metric value is available and ready to be reported.

The following example measures each of the Core Web Vitals metrics and logs the result to the console once its value is ready to report.

(The examples below import the "standard" build, but they will work with the "attribution" build as well.)

```js
import {onCLS, onINP, onLCP} from 'web-vitals';

onCLS(console.log);
onINP(console.log);
onLCP(console.log);
```
Note that some of these metrics will not report until the user has interacted with the page, switched tabs, or the page starts to unload. If you don't see the values logged to the console immediately, try reloading the page (with preserve log enabled) or switching tabs and then switching back.
Also, in some cases a metric callback may never be called:
- INP is not reported if the user never interacts with the page.
- CLS, FCP, and LCP are not reported if the page was loaded in the background.
In other cases, a metric callback may be called more than once:
- CLS and INP should be reported any time the page's `visibilityState` changes to hidden.
- All metrics are reported again (with the above exceptions) after a page is restored from the back/forward cache.
> [!WARNING]
> Do not call any of the Web Vitals functions (e.g. `onCLS()`, `onINP()`, `onLCP()`) more than once per page load. Each of these functions creates a `PerformanceObserver` instance and registers event listeners for the lifetime of the page. While the overhead of calling these functions once is negligible, calling them repeatedly on the same page may eventually result in a memory leak.
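One way to avoid accidental repeat registration (a sketch; `reportMetric` is a hypothetical reporting function) is to register the listeners once from your app's startup code, guarded by a flag:

```js
import {onCLS, onINP, onLCP} from 'web-vitals';

let registered = false;

// Call this once from app startup code; repeated calls are ignored so the
// underlying observers are only ever created once per page load.
export function initWebVitals(reportMetric) {
  if (registered) return;
  registered = true;
  onCLS(reportMetric);
  onINP(reportMetric);
  onLCP(reportMetric);
}
```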
In most cases, you only want the `callback` function to be called when the metric is ready to be reported. However, it is possible to report every change (e.g. each larger layout shift as it happens) by setting `reportAllChanges` to `true` in the optional configuration object (the second parameter).

> [!IMPORTANT]
> `reportAllChanges` only reports when the metric changes, not for each input to the metric. For example, a new layout shift that does not increase the CLS metric will not be reported even with `reportAllChanges` set to `true`, because the CLS metric has not changed. Similarly, for INP, each interaction is not reported even with `reportAllChanges` set to `true`; the metric is only reported when an interaction causes an increase to INP.

This can be useful when debugging, but in general using `reportAllChanges` is not needed (or recommended) for measuring these metrics in production.

```js
import {onCLS} from 'web-vitals';

// Logs CLS as the value changes.
onCLS(console.log, {reportAllChanges: true});
```
Some analytics providers allow you to update the value of a metric, even after you've already sent it to their servers (overwriting the previously-sent value with the same `id`).

Other analytics providers, however, do not allow this, so instead of reporting the new value, you need to report only the delta (the difference between the current value and the last-reported value). You can then compute the total value by summing all metric deltas sent with the same ID.

The following example shows how to use the `id` and `delta` properties:

```js
import {onCLS, onINP, onLCP} from 'web-vitals';

function logDelta({name, id, delta}) {
  console.log(`${name} matching ID ${id} changed by ${delta}`);
}

onCLS(logDelta);
onINP(logDelta);
onLCP(logDelta);
```
> [!NOTE]
> The first time the `callback` function is called, its `value` and `delta` properties will be the same.

In addition to using the `id` field to group multiple deltas for the same metric, it can also be used to differentiate different metrics reported on the same page. For example, after a back/forward cache restore, a new metric object is created with a new `id` (since back/forward cache restores are considered separate page visits).
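If your analytics backend only stores the raw events, one way to reconstruct the final values is to sum the deltas grouped by `id` (a sketch; the shape of the stored `events` array is an assumption):

```js
// Sketch: each stored event is assumed to look like {name, id, delta}.
function aggregateMetrics(events) {
  const totals = new Map();
  for (const {name, id, delta} of events) {
    const key = `${name}:${id}`;
    totals.set(key, (totals.get(key) || 0) + delta);
  }
  return totals; // Final value per metric instance.
}
```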
The following example measures each of the Core Web Vitals metrics and reports them to a hypothetical `/analytics` endpoint, as soon as each is ready to be sent.

The `sendToAnalytics()` function uses the `navigator.sendBeacon()` method, which is widely available across browsers and supports sending data as the page is being unloaded.

```js
import {onCLS, onINP, onLCP} from 'web-vitals';

function sendToAnalytics(metric) {
  const body = JSON.stringify({
    name: metric.name,
    value: metric.value,
    id: metric.id,
    // Include additional data as needed...
  });

  // Use `navigator.sendBeacon()` to send the data, which supports
  // sending while the page is unloading.
  navigator.sendBeacon('/analytics', body);
}

onCLS(sendToAnalytics);
onINP(sendToAnalytics);
onLCP(sendToAnalytics);
```
Google Analytics does not support reporting metric distributions in any of its built-in reports. However, if you set a unique event parameter value on every metric instance you send (the `metric_id` parameter in the example below), you can build such a report yourself: first retrieve the data via the Google Analytics Data API or a BigQuery export, and then visualize it with any charting library you choose.

Google Analytics 4 introduces a new Event model that allows custom parameters instead of a fixed category, action, and label. It also supports non-integer values, making it easier to measure Web Vitals metrics compared to previous versions.

```js
import {onCLS, onINP, onLCP} from 'web-vitals';

function sendToGoogleAnalytics({name, delta, value, id}) {
  // Assumes the global `gtag()` function exists, see:
  // https://developers.google.com/analytics/devguides/collection/ga4
  gtag('event', name, {
    // Built-in params:
    value: delta, // Use `delta` so the value can be summed.
    // Custom params:
    metric_id: id, // Needed to aggregate events.
    metric_value: value, // Optional.
    metric_delta: delta, // Optional.

    // OPTIONAL: any additional params or debug info here.
    // See: https://web.dev/articles/debug-performance-in-the-field
    // metric_rating: 'good' | 'needs-improvement' | 'poor',
    // debug_info: '...',
    // ...
  });
}

onCLS(sendToGoogleAnalytics);
onINP(sendToGoogleAnalytics);
onLCP(sendToGoogleAnalytics);
```
For details on how to query this data in BigQuery, or visualize it in Looker Studio, see Measure and debug performance with Google Analytics 4 and BigQuery.

While `web-vitals` can be called directly from Google Tag Manager, using a pre-defined custom template makes this considerably easier. Some recommended templates include:

- Core Web Vitals by Simo Ahava. See Track Core Web Vitals in GA4 with Google Tag Manager for usage and installation instructions.
- Web Vitals Template for Google Tag Manager by the Google Marketing Solutions team. See the README for usage and installation instructions.
When using the attribution build, you can send additional data to help you debug why the metric values are the way they are.

This example sends an additional `debug_target` param to Google Analytics, corresponding to the element most associated with each metric.
```js
import {onCLS, onINP, onLCP} from 'web-vitals/attribution';

function sendToGoogleAnalytics({name, delta, value, id, attribution}) {
  const eventParams = {
    // Built-in params:
    value: delta, // Use `delta` so the value can be summed.
    // Custom params:
    metric_id: id, // Needed to aggregate events.
    metric_value: value, // Optional.
    metric_delta: delta, // Optional.
  };

  switch (name) {
    case 'CLS':
      eventParams.debug_target = attribution.largestShiftTarget;
      break;
    case 'INP':
      eventParams.debug_target = attribution.interactionTarget;
      break;
    case 'LCP':
      eventParams.debug_target = attribution.element;
      break;
  }

  // Assumes the global `gtag()` function exists, see:
  // https://developers.google.com/analytics/devguides/collection/ga4
  gtag('event', name, eventParams);
}

onCLS(sendToGoogleAnalytics);
onINP(sendToGoogleAnalytics);
onLCP(sendToGoogleAnalytics);
```
> [!NOTE]
> This example relies on custom event parameters in Google Analytics 4.

See Debug performance in the field for more information and examples.
Rather than reporting each individual Web Vitals metric separately, you can minimize your network usage by batching multiple metric reports together in a single network request.
However, since not all Web Vitals metrics become available at the same time, and since not all metrics are reported on every page, you cannot simply defer reporting until all metrics are available.
Instead, you should keep a queue of all metrics that were reported and flush the queue whenever the page is backgrounded or unloaded:
```js
import {onCLS, onINP, onLCP} from 'web-vitals';

const queue = new Set();

function addToQueue(metric) {
  queue.add(metric);
}

function flushQueue() {
  if (queue.size > 0) {
    // Replace with whatever serialization method you prefer.
    // Note: JSON.stringify will likely include more data than you need.
    const body = JSON.stringify([...queue]);

    // Use `navigator.sendBeacon()` to send the data, which supports
    // sending while the page is unloading.
    navigator.sendBeacon('/analytics', body);

    queue.clear();
  }
}

onCLS(addToQueue);
onINP(addToQueue);
onLCP(addToQueue);

// Report all available metrics whenever the page is backgrounded or unloaded.
addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'hidden') {
    flushQueue();
  }
});
```
> [!NOTE]
> See the Page Lifecycle guide for an explanation of why `visibilitychange` is recommended over events like `beforeunload` and `unload`.
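If you also need to cover browsers that may unload a page without firing `visibilitychange` (older versions of Safari have been known to do this), you can additionally listen for `pagehide` as a fallback; a sketch building on the example above:

```js
// Fallback: `flushQueue()` already clears the queue, so invoking it from
// both events will not cause duplicate reports.
addEventListener('pagehide', flushQueue);
```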
## Build options

The `web-vitals` package includes both "standard" and "attribution" builds, as well as different formats of each, to allow developers to choose the format that best meets their needs or integrates with their architecture.

The following table lists all the builds distributed with the `web-vitals` package on npm.
| Filename (all within `dist/*`) | Export | Description |
| --- | --- | --- |
| `web-vitals.js` | `pkg.module` | An ES module bundle of all metric functions, without any attribution features. This is the "standard" build and is the simplest way to consume this library out of the box. |
| `web-vitals.umd.cjs` | `pkg.main` | A UMD version of the `web-vitals.js` bundle (exposed on the `self.webVitals.*` namespace). |
| `web-vitals.iife.js` | -- | An IIFE version of the `web-vitals.js` bundle (exposed on the `self.webVitals.*` namespace). |
| `web-vitals.attribution.js` | -- | An ES module version of all metric functions that includes attribution features. |
| `web-vitals.attribution.umd.cjs` | -- | A UMD version of the `web-vitals.attribution.js` build (exposed on the `self.webVitals.*` namespace). |
| `web-vitals.attribution.iife.js` | -- | An IIFE version of the `web-vitals.attribution.js` build (exposed on the `self.webVitals.*` namespace). |
Most developers will generally want to use the "standard" build (via either the ES module or UMD version, depending on your bundler/build system), as it's the easiest to use out of the box and integrate into existing tools.

However, if you'd like to collect additional debug information to help you diagnose performance bottlenecks based on real-user issues, use the "attribution" build.

For guidance on how to collect and use real-user data to debug performance issues, see Debug performance in the field.
## API

### `Metric`

All metric types inherit from the following base interface:
```ts
interface Metric {
  /**
   * The name of the metric (in acronym form).
   */
  name: 'CLS' | 'FCP' | 'INP' | 'LCP' | 'TTFB';

  /**
   * The current value of the metric.
   */
  value: number;

  /**
   * The rating as to whether the metric value is within the "good",
   * "needs improvement", or "poor" thresholds of the metric.
   */
  rating: 'good' | 'needs-improvement' | 'poor';

  /**
   * The delta between the current value and the last-reported value.
   * On the first report, `delta` and `value` will always be the same.
   */
  delta: number;

  /**
   * A unique ID representing this particular metric instance. This ID can
   * be used by an analytics tool to dedupe multiple values sent for the same
   * metric instance, or to group multiple deltas together and calculate a
   * total. It can also be used to differentiate multiple different metric
   * instances sent from the same page, which can happen if the page is
   * restored from the back/forward cache (in that case new metric objects
   * get created).
   */
  id: string;

  /**
   * Any performance entries relevant to the metric value calculation.
   * The array may also be empty if the metric value was not based on any
   * entries (e.g. a CLS value of 0 given no layout shifts).
   */
  entries: PerformanceEntry[];

  /**
   * The type of navigation.
   *
   * This will be the value returned by the Navigation Timing API (or
   * `undefined` if the browser doesn't support that API), with the following
   * exceptions:
   * - 'back-forward-cache': for pages that are restored from the bfcache.
   * - 'back_forward' is renamed to 'back-forward' for consistency.
   * - 'prerender': for pages that were prerendered.
   * - 'restore': for pages that were discarded by the browser and then
   *   restored by the user.
   */
  navigationType:
    | 'navigate'
    | 'reload'
    | 'back-forward'
    | 'back-forward-cache'
    | 'prerender'
    | 'restore';
}
```
Metric-specific subclasses:
```ts
interface CLSMetric extends Metric {
  name: 'CLS';
  entries: LayoutShift[];
}

interface FCPMetric extends Metric {
  name: 'FCP';
  entries: PerformancePaintTiming[];
}

interface INPMetric extends Metric {
  name: 'INP';
  entries: PerformanceEventTiming[];
}

interface LCPMetric extends Metric {
  name: 'LCP';
  entries: LargestContentfulPaint[];
}

interface TTFBMetric extends Metric {
  name: 'TTFB';
  entries: PerformanceNavigationTiming[];
}
```
### `MetricRatingThresholds`

The thresholds of a metric's "good", "needs improvement", and "poor" ratings:

- Metric values up to and including [0] are rated "good"
- Metric values up to and including [1] are rated "needs improvement"
- Metric values above [1] are rated "poor"

| Metric value    | Rating              |
| --------------- | ------------------- |
| ≦ [0]           | "good"              |
| > [0] and ≦ [1] | "needs improvement" |
| > [1]           | "poor"              |

```ts
type MetricRatingThresholds = [number, number];
```

See also Rating Thresholds.
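For example, a rating can be derived from a value and a `MetricRatingThresholds` tuple like this (a sketch; in practice the library already computes `metric.rating` for you):

```ts
// Sketch: classify a metric value against a [good, needsImprovement] tuple.
function getRating(
  value: number,
  thresholds: MetricRatingThresholds,
): 'good' | 'needs-improvement' | 'poor' {
  if (value <= thresholds[0]) return 'good';
  if (value <= thresholds[1]) return 'needs-improvement';
  return 'poor';
}
```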
### `ReportOpts`

```ts
interface ReportOpts {
  reportAllChanges?: boolean;
}
```
Metric-specific subclasses:
```ts
interface INPReportOpts extends ReportOpts {
  durationThreshold?: number;
}
```
### `AttributionReportOpts`

A subclass of `ReportOpts` used for each metric function exported in the attribution build.

```ts
interface AttributionReportOpts extends ReportOpts {
  generateTarget?: (el: Node | null) => string | null | undefined;
}
```
Metric-specific subclasses:
```ts
interface INPAttributionReportOpts extends AttributionReportOpts {
  durationThreshold?: number;
}
```
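For example, `generateTarget` lets you control how the attribution target string is generated for an element; a minimal sketch using a hypothetical `data-testid` convention:

```js
import {onINP} from 'web-vitals/attribution';

onINP(console.log, {
  // Hypothetical convention: describe the interaction target by its
  // `data-testid` attribute when one exists, otherwise by its tag name.
  generateTarget: (el) =>
    el instanceof Element
      ? el.getAttribute('data-testid') || el.tagName.toLowerCase()
      : null,
});
```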
### `LoadState`

The `LoadState` type is used in several of the metric attribution objects.

```ts
/**
 * The loading state of the document. Note: this value is similar to
 * `document.readyState` but it subdivides the "interactive" state into the
 * time before and after the DOMContentLoaded event fires.
 *
 * State descriptions:
 * - `loading`: the initial document response has not yet been fully downloaded
 *   and parsed. This is equivalent to the corresponding `readyState` value.
 * - `dom-interactive`: the document has been fully loaded and parsed, but
 *   scripts may not have yet finished loading and executing.
 * - `dom-content-loaded`: the document is fully loaded and parsed, and all
 *   scripts (except `async` scripts) have loaded and finished executing.
 * - `complete`: the document and all of its sub-resources have finished
 *   loading. This is equivalent to the corresponding `readyState` value.
 */
type LoadState =
  | 'loading'
  | 'dom-interactive'
  | 'dom-content-loaded'
  | 'complete';
```
### `onCLS()`

```ts
function onCLS(callback: (metric: CLSMetric) => void, opts?: ReportOpts): void;
```

Calculates the CLS value for the current page and calls the `callback` function once the value is ready to be reported, along with all `layout-shift` performance entries that were used in the metric value calculation. The reported value is a double (corresponding to a layout shift score).

> [!IMPORTANT]
> CLS should be continually monitored for changes throughout the entire lifespan of a page, including if the user returns to the page after it's been hidden/backgrounded. However, since browsers often will not fire additional callbacks once the user has backgrounded a page, `callback` is always called when the page's visibility state changes to hidden. As a result, the `callback` function might be called multiple times during the same page load (see Reporting only the delta of changes for how to manage this).

If the `reportAllChanges` configuration option is set to `true`, the `callback` function will be called as soon as the value is initially determined, as well as any time the value changes throughout the page lifespan (though not necessarily for every layout shift). Note that regardless of whether `reportAllChanges` is used, the final reported value will be the same.
### `onFCP()`

```ts
function onFCP(callback: (metric: FCPMetric) => void, opts?: ReportOpts): void;
```

Calculates the FCP value for the current page and calls the `callback` function once the value is ready, along with the relevant `paint` performance entry used to determine the value. The reported value is a `DOMHighResTimeStamp`.
### `onINP()`

```ts
function onINP(
  callback: (metric: INPMetric) => void,
  opts?: INPReportOpts,
): void;
```

Calculates the INP value for the current page and calls the `callback` function once the value is ready, along with the `event` performance entries reported for that interaction. The reported value is a `DOMHighResTimeStamp`.

> [!IMPORTANT]
> INP should be continually monitored for changes throughout the entire lifespan of a page, including if the user returns to the page after it's been hidden/backgrounded. However, since browsers often will not fire additional callbacks once the user has backgrounded a page, `callback` is always called when the page's visibility state changes to hidden. As a result, the `callback` function might be called multiple times during the same page load (see Reporting only the delta of changes for how to manage this).

A custom `durationThreshold` configuration option can optionally be passed to control the minimum duration filter for `event-timing`. Events that are faster than this threshold are not reported. Note that the `first-input` entry is always observed, regardless of duration, to ensure you always have some INP score. The default threshold, after the library is initialized, is 40 milliseconds (the `event-timing` default of 104 milliseconds applies to any events emitted before the library is initialized). This default of 40 milliseconds is chosen to strike a balance between usefulness and performance: running the callback for interactions that span just one or two frames is likely not worth the insight that could be gained.
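For example, to also observe shorter interactions while debugging locally, the threshold can be lowered (a sketch; lower values increase how often the callback machinery runs):

```js
import {onINP} from 'web-vitals';

// Lower the minimum event duration considered for INP (default: 40ms).
onINP(console.log, {durationThreshold: 16});
```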
If the `reportAllChanges` configuration option is set to `true`, the `callback` function will be called as soon as the value is initially determined, as well as any time the value changes throughout the page lifespan (though not necessarily for every interaction). Note that regardless of whether `reportAllChanges` is used, the final reported value will be the same.
### `onLCP()`

```ts
function onLCP(callback: (metric: LCPMetric) => void, opts?: ReportOpts): void;
```

Calculates the LCP value for the current page and calls the `callback` function once the value is ready (along with the relevant `largest-contentful-paint` performance entry used to determine the value). The reported value is a `DOMHighResTimeStamp`.

If the `reportAllChanges` configuration option is set to `true`, the `callback` function will be called any time a new `largest-contentful-paint` performance entry is dispatched, or once the final value of the metric has been determined. Note that regardless of whether `reportAllChanges` is used, the final reported value will be the same.
### `onTTFB()`

```ts
function onTTFB(
  callback: (metric: TTFBMetric) => void,
  opts?: ReportOpts,
): void;
```
Calculates the