Open-source vector similarity search for Postgres
Store your vectors with the rest of your data. Supports:
- exact and approximate nearest neighbor search
- single-precision, half-precision, binary, and sparse vectors
- L2 distance, inner product, cosine distance, L1 distance, Hamming distance, and Jaccard distance
- any language with a Postgres client
Plus ACID compliance, point-in-time recovery, JOINs, and all of the other great features of Postgres
Compile and install the extension (supports Postgres 13+)
```sh
cd /tmp
git clone --branch v0.8.1 https://github.com/pgvector/pgvector.git
cd pgvector
make
make install # may need sudo
```
See the installation notes if you run into issues
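As a quick sanity check after installing (not part of the official steps), you can confirm the extension files are visible to the server:

```sql
-- should return one row once the extension is installed
SELECT * FROM pg_available_extensions WHERE name = 'vector';
```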
You can also install it with Docker, Homebrew, PGXN, APT, Yum, pkg, or conda-forge, and it comes preinstalled with Postgres.app and many hosted providers. There are also instructions for GitHub Actions.
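As a sketch of the Docker route: the `pgvector/pgvector` images on Docker Hub are tagged by Postgres major version and run like the standard `postgres` image (container name and password below are placeholders):

```sh
docker pull pgvector/pgvector:pg17
docker run --name pgvector-demo -e POSTGRES_PASSWORD=password -d pgvector/pgvector:pg17
```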
Ensure C++ support in Visual Studio is installed and run x64 Native Tools Command Prompt for VS [version] as administrator. Then use `nmake` to build:
set "PGROOT=C:\Program Files\PostgreSQL\17"
cd %TEMP%
git clone --branch v0.8.1 https://github.com/pgvector/pgvector.git
cd pgvector
nmake /F Makefile.win
nmake /F Makefile.win install
See the installation notes if you run into issues
You can also install it with Docker or conda-forge.
Enable the extension (do this once in each database where you want to use it)
```sql
CREATE EXTENSION vector;
```
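To see which version is installed in the current database (a standard catalog query, handy after upgrades):

```sql
SELECT extversion FROM pg_extension WHERE extname = 'vector';
```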
Create a vector column with 3 dimensions
```sql
CREATE TABLE items (id bigserial PRIMARY KEY, embedding vector(3));
```
Insert vectors
```sql
INSERT INTO items (embedding) VALUES ('[1,2,3]'), ('[4,5,6]');
```
Get the nearest neighbors by L2 distance
```sql
SELECT * FROM items ORDER BY embedding <-> '[3,1,2]' LIMIT 5;
```
Also supports inner product (`<#>`), cosine distance (`<=>`), and L1 distance (`<+>`)

Note: `<#>` returns the negative inner product since Postgres only supports `ASC` order index scans on operators
Create a new table with a vector column
```sql
CREATE TABLE items (id bigserial PRIMARY KEY, embedding vector(3));
```
Or add a vector column to an existing table
```sql
ALTER TABLE items ADD COLUMN embedding vector(3);
```
Also supports half-precision, binary, and sparse vectors
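For illustration, a sketch with the other column types (column names here are hypothetical):

```sql
ALTER TABLE items ADD COLUMN embedding_half halfvec(3);     -- half-precision
ALTER TABLE items ADD COLUMN embedding_bin bit(3);          -- binary
ALTER TABLE items ADD COLUMN embedding_sparse sparsevec(5); -- sparse, 5 dimensions

-- sparsevec literals are {index:value,...}/dimensions with 1-based indices
INSERT INTO items (embedding_half, embedding_bin, embedding_sparse)
VALUES ('[1,2,3]', '101', '{1:1.5,3:2}/5');
```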
Insert vectors
```sql
INSERT INTO items (embedding) VALUES ('[1,2,3]'), ('[4,5,6]');
```
Or load vectors in bulk using `COPY`

```sql
COPY items (embedding) FROM STDIN WITH (FORMAT BINARY);
```
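The binary format above requires a client that speaks Postgres's binary protocol; as a simpler alternative, `psql`'s `\copy` with the default text format also works, since each line can be a plain vector literal (file name is hypothetical):

```sql
-- embeddings.txt contains one vector literal per line, e.g. [1,2,3]
\copy items (embedding) FROM 'embeddings.txt'
```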
Upsert vectors
```sql
INSERT INTO items (id, embedding) VALUES (1, '[1,2,3]'), (2, '[4,5,6]')
ON CONFLICT (id) DO UPDATE SET embedding = EXCLUDED.embedding;
```
Update vectors
```sql
UPDATE items SET embedding = '[1,2,3]' WHERE id = 1;
```
Delete vectors
```sql
DELETE FROM items WHERE id = 1;
```
Get the nearest neighbors to a vector
```sql
SELECT * FROM items ORDER BY embedding <-> '[3,1,2]' LIMIT 5;
```
Supported distance functions are:
- `<->` - L2 distance
- `<#>` - (negative) inner product
- `<=>` - cosine distance
- `<+>` - L1 distance
- `<~>` - Hamming distance (binary vectors)
- `<%>` - Jaccard distance (binary vectors)
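For example, swapping the operator in the query above to order by cosine distance instead of L2 distance:

```sql
SELECT * FROM items ORDER BY embedding <=> '[3,1,2]' LIMIT 5;
```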
Get the nearest neighbors to a row
```sql
SELECT * FROM items WHERE id != 1 ORDER BY embedding <-> (SELECT embedding FROM items WHERE id = 1) LIMIT 5;
```
Get rows within a certain distance
```sql
SELECT * FROM items WHERE embedding <-> '[3,1,2]' < 5;
```
Note: Combine with `ORDER BY` and `LIMIT` to use an index
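Putting that note into practice, the filtered query above can be rewritten so an index is usable:

```sql
SELECT * FROM items WHERE embedding <-> '[3,1,2]' < 5 ORDER BY embedding <-> '[3,1,2]' LIMIT 5;
```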
Get the distance
```sql
SELECT embedding <-> '[3,1,2]' AS distance FROM items;
```
For inner product, multiply by -1 (since `<#>` returns the negative inner product)
```sql
SELECT (embedding <#> '[3,1,2]') * -1 AS inner_product FROM items;
```
For cosine similarity, use 1 - cosine distance
```sql
SELECT 1 - (embedding <=> '[3,1,2]') AS cosine_similarity FROM items;
```
Average vectors
```sql
SELECT AVG(embedding) FROM items;
```
Average groups of vectors
```sql
SELECT category_id, AVG(embedding) FROM items GROUP BY category_id;
```
By default, pgvector performs exact nearest neighbor search, which provides perfect recall.
You can add an index to use approximate nearest neighbor search, which trades some recall for speed. Unlike typical indexes, you will see different results for queries after adding an approximate index.
Supported index types are HNSW and IVFFlat.

An HNSW index creates a multilayer graph. It has better query performance than IVFFlat (in terms of the speed-recall tradeoff), but has slower build times and uses more memory. Also, an HNSW index can be created without any data in the table, since there isn't a training step as there is for IVFFlat.
Add an index for each distance function you want to use.
L2 distance
```sql
CREATE INDEX ON items USING hnsw (embedding vector_l2_ops);
```
Note: Use `halfvec_l2_ops` for `halfvec` and `sparsevec_l2_ops` for `sparsevec` (and similarly for the other distance functions)
Inner product
```sql
CREATE INDEX ON items USING hnsw (embedding vector_ip_ops);
```
Cosine distance
```sql
CREATE INDEX ON items USING hnsw (embedding vector_cosine_ops);
```
L1 distance
```sql
CREATE INDEX ON items USING hnsw (embedding vector_l1_ops);
```
Hamming distance
```sql
CREATE INDEX ON items USING hnsw (embedding bit_hamming_ops);
```
Jaccard distance
```sql
CREATE INDEX ON items USING hnsw (embedding bit_jaccard_ops);
```
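To sanity-check that a query can use one of these indexes, plain `EXPLAIN` works as in any Postgres setup (an informal check; look for an index scan on the HNSW index in the plan):

```sql
EXPLAIN SELECT * FROM items ORDER BY embedding <-> '[3,1,2]' LIMIT 5;
```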
Supported types are:
- `vector` - up to 2,000 dimensions
- `halfvec` - up to 4,000 dimensions
- `bit` - up to 64,000 dimensions
- `sparsevec` - up to 1,000 non-zero elements
Specify HNSW parameters
- `m` - the max number of connections per layer (16 by default)
- `ef_construction` - the size of the dynamic candidate list for constructing the graph (64 by default)
```sql
CREATE INDEX ON items USING hnsw (embedding vector_l2_ops) WITH (m = 16, ef_construction = 64);
```
A higher value of `ef_construction` provides better recall at the cost of index build time / insert speed.
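Recall can also be tuned at query time with `hnsw.ef_search`, the size of the dynamic candidate list for search (40 by default); a higher value provides better recall at the cost of speed:

```sql
SET hnsw.ef_search = 100;
SELECT * FROM items ORDER BY embedding <-> '[3,1,2]' LIMIT 5;
```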