
Conversation

@0xcadams (Member) commented Jul 25, 2025

Adds a new @rocicorp/zero/expo package that provides a SQLite StoreProvider for Expo/React Native.

import { schema } from "@/schema";
import { expoSQLiteStoreProvider } from "@rocicorp/zero/expo";
import { ZeroProvider } from "@rocicorp/zero/react";
import { Stack } from "expo-router";

const storeProvider = expoSQLiteStoreProvider();

export default function RootLayout() {
  return (
    <ZeroProvider
      kvStore={storeProvider}
      server="http://localhost:4848"
      userID="anon"
      schema={schema}
    >
      <Stack />
    </ZeroProvider>
  );
}

Builds on @austinm911's work.

  • Adds a core SQLiteDatabaseManager which handles schema setup, PRAGMAs, and connection handling.
  • Adds a generic SQLiteStore which works with bare SQLite and uses prepared statements with an RWLock (a rough sketch of the shape follows below).
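For orientation, here is a minimal sketch of that shape (illustrative only, not the actual implementation), assuming a better-sqlite3-style synchronous driver; `Stmt`, `SyncDB`, and the `RWLock` interface are stand-ins introduced just for the sketch:

```ts
// Sketch only: a KV store over a synchronous SQLite driver, with prepared
// statements and a read/write lock. SyncDB mirrors the shape of a
// better-sqlite3-style API; RWLock is a hypothetical lock whose read()
// callbacks may overlap while write() callbacks run exclusively.
type Stmt = {
  get(...params: unknown[]): unknown;
  run(...params: unknown[]): unknown;
};
type SyncDB = {exec(sql: string): unknown; prepare(sql: string): Stmt};
type RWLock = {
  read<T>(fn: () => T): Promise<T>;
  write<T>(fn: () => T): Promise<T>;
};

export class SketchSQLiteStore {
  readonly #get: Stmt;
  readonly #put: Stmt;

  constructor(db: SyncDB, private readonly lock: RWLock) {
    db.exec(
      'CREATE TABLE IF NOT EXISTS entry (key TEXT PRIMARY KEY, value TEXT NOT NULL)',
    );
    // Statements are prepared once and reused for every call.
    this.#get = db.prepare('SELECT value FROM entry WHERE key = ?');
    this.#put = db.prepare(
      'INSERT INTO entry (key, value) VALUES (?, ?) ' +
        'ON CONFLICT (key) DO UPDATE SET value = excluded.value',
    );
  }

  get(key: string): Promise<unknown> {
    // Reads may overlap with each other, but not with writes.
    return this.lock.read(() => {
      const row = this.#get.get(key) as {value: string} | undefined;
      return row === undefined ? undefined : JSON.parse(row.value);
    });
  }

  put(key: string, value: unknown): Promise<void> {
    // Writes take the exclusive side of the lock.
    return this.lock.write(() => {
      this.#put.run(key, JSON.stringify(value));
    });
  }
}
```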

Note:
iOS and Android both had issues with WAL journal mode: COMMIT hangs indefinitely. Expo's docs recommend WAL, but it doesn't seem to work here, so it is disabled by default, at a performance cost. Here is the performance difference measured with better-sqlite3:

[Screenshot: WAL vs. non-WAL performance comparison with better-sqlite3]
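As a rough illustration of the kind of setup involved (not the package's actual PRAGMA list or defaults; `applyPragmas` and the values below are made up for the example), WAL ends up being opt-in at connection-open time:

```ts
// Illustrative sketch only: apply per-connection PRAGMAs with WAL as an
// opt-in rather than the default, since expo-sqlite hangs on COMMIT in WAL
// mode. The real SQLiteDatabaseManager's PRAGMAs and defaults may differ.
function applyPragmas(
  db: {exec(sql: string): unknown},
  options: {useWal?: boolean} = {},
): void {
  // Rollback journal by default; callers opt into WAL where it is known to work.
  db.exec(`PRAGMA journal_mode = ${options.useWal ? 'WAL' : 'DELETE'};`);
  // Remaining values are placeholders, not the package's settings.
  db.exec('PRAGMA synchronous = NORMAL;');
  db.exec('PRAGMA busy_timeout = 200;');
}
```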

Expo demo: expo-sqlite.mp4

vercel bot commented Jul 25, 2025

The latest updates on your projects. Learn more about Vercel for Git.

| Name            | Status  | Updated (UTC)      |
| --------------- | ------- | ------------------ |
| replicache-docs | ✅ Ready | Aug 7, 2025 6:36pm |
| zbugs           | ✅ Ready | Aug 7, 2025 6:36pm |

github-actions bot commented Jul 25, 2025

🐰 Bencher Report

Branch: 0xcadams/expo
Testbed: Linux

| Benchmark | Result (ops/s x 1e3) | Baseline (ops/s x 1e3) | Lower Boundary (ops/s x 1e3, Limit %) |
| --- | --- | --- | --- |
| src/client/custom.bench.ts > big schema | 386.77 (-0.78%) | 389.83 | 357.67 (92.48%) |
| src/client/zero.bench.ts > basics > All 1000 rows x 10 columns (numbers) | 1.63 (-0.71%) | 1.64 | 1.60 (98.50%) |
| src/client/zero.bench.ts > pk compare > pk = N | 30.21 (+0.58%) | 30.04 | 28.88 (95.60%) |
| src/client/zero.bench.ts > with filter > Lower rows 500 x 10 columns (numbers) | 2.55 (-0.93%) | 2.58 | 2.51 (98.24%) |

🐰 View full continuous benchmarking report in Bencher

github-actions bot commented Jul 25, 2025

🐰 Bencher Report

Branch: 0xcadams/expo
Testbed: Linux

| Benchmark | File Size (KB) | Baseline (KB) | Upper Boundary (KB, Limit %) |
| --- | --- | --- | --- |
| zero-package.tgz | 1,251.66 (+0.53%) | 1,245.05 | 1,269.95 (98.56%) |
| zero.js | 202.56 (0.00%) | 202.56 | 206.62 (98.04%) |
| zero.js.br | 56.77 (0.00%) | 56.77 | 57.91 (98.04%) |

🐰 View full continuous benchmarking report in Bencher

@arv (Contributor) commented Jul 26, 2025

Exciting! I'm on vacation until the beginning of August, so I won't be able to provide any good feedback on this until then.

@tantaman (Contributor) left a comment

Thanks for figuring this out

Comment on lines 26 to 27
await withWrite(walStore, async wt => {
await wt.put('foo1', 'bar1');
@tantaman (Contributor):

What is the bench attempting to measure? Just raw transactions per second? Or do you want more information on writes per second?

Writes per second should be roughly on the order of transactions_per_second * writes_per_transaction, so 1 write per transaction will give you the same writes per second as transactions per second.

@0xcadams (Member, Author):

Yeah, this is a good question. I was using this benchmark to tune the performance of the SQLite store with better-sqlite3, as a way to measure the impact of changes to the schema and PRAGMAs. I've updated it to be simpler and to compare WAL modes. Let me know if this doesn't fit with the existing benchmarks, or if you see ways to improve it.
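Roughly the shape of that comparison, for reference (`createStore` is a hypothetical stand-in for however the suite builds a store with a given journal mode; `bench` and `withWrite` are the same helpers used above):

```ts
// Illustrative only: run the same single-put transaction against stores
// opened with different journal modes and compare throughput.
for (const journalMode of ['WAL', 'DELETE'] as const) {
  const store = createStore({journalMode}); // hypothetical factory
  bench(`single put, journal_mode=${journalMode}`, async () => {
    await withWrite(store, async wt => {
      await wt.put('foo1', 'bar1');
    });
  });
}
```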


return Promise.resolve(write);
} catch (e) {
return Promise.reject(e);
@tantaman (Contributor):

Same question as with read.

It does feel weird to be doing locking given that the interface we expose to SQLite is technically synchronous.

@0xcadams (Member, Author) commented Aug 6, 2025

I benchmarked performance (with @rocicorp/zero-sqlite3) of using a read connection pool versus opening a new connection on every read.

TL;DR: read pooling is roughly 8-14x faster.

The last test, plain read, is probably the clearest comparison, since it's just running 5 reads in parallel.

bench(
  `plain read`,
  async () => {
    const readP1 = withRead(store, async rt => {
      expect(await rt.get('foo')).equal('bar');
    });
    const readP2 = withRead(store, async rt => {
      expect(await rt.get('foo')).equal('bar');
    });
    const readP3 = withRead(store, async rt => {
      expect(await rt.get('foo')).equal('bar');
    });
    const readP4 = withRead(store, async rt => {
      expect(await rt.get('foo')).equal('bar');
    });
    const readP5 = withRead(store, async rt => {
      expect(await rt.get('foo')).equal('bar');
    });

    await Promise.all([readP1, readP2, readP3, readP4, readP5]);
  },
  {
    throws: true,
    setup: async () => {
      await withWrite(store, async wt => {
        await wt.put('foo', 'bar');
      });
    },
  },
);

With a read connection pool of 2:

[Screenshot: benchmark results with a read connection pool of 2]

With a new connection on every read:

[Screenshot: benchmark results with a new connection per read]
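For reference, a fixed-size read pool boils down to something like the sketch below (illustrative only; `ReadPool` and its names are made up here, and the pool in this PR also handles connection setup, teardown, and errors):

```ts
// Illustrative sketch of a fixed-size read-connection pool: readers borrow a
// connection, run, and return it; when all connections are busy, callers
// wait in FIFO order. Opening/closing connections is out of scope here.
class ReadPool<Conn> {
  readonly #idle: Conn[];
  readonly #waiters: Array<(conn: Conn) => void> = [];

  constructor(connections: Conn[]) {
    this.#idle = [...connections];
  }

  async withRead<T>(fn: (conn: Conn) => Promise<T> | T): Promise<T> {
    // Take an idle connection, or wait until one is handed back.
    const conn =
      this.#idle.pop() ??
      (await new Promise<Conn>(resolve => this.#waiters.push(resolve)));
    try {
      return await fn(conn);
    } finally {
      // Pass the connection to the next waiter, or return it to the pool.
      const next = this.#waiters.shift();
      if (next) {
        next(conn);
      } else {
        this.#idle.push(conn);
      }
    }
  }
}
```

With a pool of 2, reads reuse already-open connections instead of paying the connection-open cost on every read, which is presumably where most of the 8-14x difference above comes from.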

@arv (Contributor) left a comment

LGTM

I know Matt is reviewing this already but I was very curious.

I looked at the locking structure and it seems to have the same semantics as the IDBStore.

}

put(key: string, value: ReadonlyJSONValue): Promise<void> {
this._preparedStatements.put.run([key, JSON.stringify(value)]);
@arv (Contributor):

Is there a reason why the type of run takes rest args but the callers always wrap the params in a single array? Consider removing one array allocation.

@0xcadams (Member, Author):

Good catch - moved to rest args.
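For illustration, the change amounts to the following (statement and field names are taken from the snippet above; the exact types in the PR may differ):

```ts
// Before: bind parameters wrapped in a single array, allocating an array on
// every call.
this._preparedStatements.put.run([key, JSON.stringify(value)]);

// After: rest parameters, so no wrapper array is allocated per call.
this._preparedStatements.put.run(key, JSON.stringify(value));
```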

@tantaman (Contributor) left a comment

lgtm

new SQLiteDatabaseManager({
  open: name => {
    const filename = path.resolve(__dirname, `${name}.db`);
    // this cannot be :memory: because multiple read connections must access
@tantaman (Contributor):

It's technically doable in memory with some fancy options, but yeah, better to just keep it file-based.
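For the record, the "fancy options" are presumably SQLite's shared-cache in-memory databases; whether the drivers used here enable URI filenames is a separate question, so this is illustrative only:

```ts
// Illustrative only: SQLite can share one in-memory database across
// connections in the same process via a URI filename with cache=shared,
// provided the driver opens names with URI handling enabled. Every
// connection opened with this URI sees the same database, which is
// destroyed when the last such connection closes. Sticking with a file
// is simpler and works regardless of driver configuration.
const sharedMemoryUri = 'file:zero-kv-test?mode=memory&cache=shared';
```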

@0xcadams enabled auto-merge (squash) on August 7, 2025 at 17:38
@aboodman (Contributor) commented Aug 7, 2025

OK, on more thought I understand the reason for the design here.

I was wondering why we couldn't rely on RWLock within the context and thereby share a single connection across the context. But the problem is that RWLock wants to allow read transactions to overlap, and we cannot represent overlapping, separate read transactions with a single connection (at least not cleanly?).

LGTM too.

@0xcadams (Member, Author) commented Aug 7, 2025

Unfortunately, the tests keep failing and I don't have permissions to override the PR requirements.

@0xcadams merged commit f90fee6 into main on Aug 7, 2025 (20 of 23 checks passed)
@0xcadams deleted the 0xcadams/expo branch on August 7, 2025 at 20:23