How to Get 60,000 Requests per Second out of SQLite on a $5 VPS

Six PRAGMAs, two connection pools, and one rule about transactions — that is all it takes to turn the world’s most-deployed database into a surprisingly capable production backend.

Setup time: 30 minutes
One-time cost: free
Running cost: ~$5 / month
Peak throughput: 60,000 RPS

Ingredients

SQLite is not a toy. It is the most widely deployed database engine on Earth: in every browser, every phone, and almost every embedded device. What it lacks in multi-server replication it more than compensates for with no network latency (the database is a local file), zero operational overhead, and a mature, battle-tested storage engine. For the overwhelming majority of side projects and small SaaS products it is not just good enough; it is the right tool.

The default settings, however, are designed for safety and broad compatibility, not throughput. A handful of configuration changes unlock most of the performance that SQLite is capable of.

1 Configure the six PRAGMAs

Run these six PRAGMA statements immediately after opening every connection, before doing anything else. They are cheap and idempotent.

Go

import (
    "database/sql"
    _ "github.com/mattn/go-sqlite3"
)

const dsn = "file:./app.db" +
    "?_journal_mode=WAL" +
    "&_busy_timeout=5000" +
    "&_synchronous=NORMAL" +
    "&_cache_size=-20000" +
    "&_foreign_keys=true" +
    "&_temp_store=memory"

db, err := sql.Open("sqlite3", dsn)

Python

import sqlite3

PRAGMAS = """
    PRAGMA journal_mode  = WAL;
    PRAGMA busy_timeout  = 5000;
    PRAGMA synchronous   = NORMAL;
    PRAGMA cache_size    = -20000;
    PRAGMA foreign_keys  = true;
    PRAGMA temp_store    = memory;
"""

def open_db(path: str) -> sqlite3.Connection:
    conn = sqlite3.connect(path, check_same_thread=False)
    conn.executescript(PRAGMAS)
    return conn

What each setting does:

journal_mode = WAL: write-ahead logging lets readers proceed while a write is in progress, instead of the blocking behaviour of the default rollback journal.
busy_timeout = 5000: wait up to five seconds for a lock before failing with SQLITE_BUSY, instead of failing immediately.
synchronous = NORMAL: sync to disk at critical moments only; in WAL mode this is safe against application crashes and costs far fewer fsyncs than FULL.
cache_size = -20000: a negative value is interpreted as kibibytes, so each connection gets roughly 20 MB of page cache.
foreign_keys = true: enforce foreign-key constraints, which SQLite leaves off by default for backward compatibility.
temp_store = memory: keep temporary tables and indices in RAM rather than on disk.
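Each PRAGMA can also be read back with no argument, which makes it easy to sanity-check that the settings actually took effect. A minimal standalone sketch against a throwaway database file:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "check.db")
conn = sqlite3.connect(path)
conn.executescript("""
    PRAGMA journal_mode = WAL;
    PRAGMA busy_timeout = 5000;
    PRAGMA synchronous  = NORMAL;
    PRAGMA cache_size   = -20000;
    PRAGMA foreign_keys = true;
    PRAGMA temp_store   = memory;
""")

# Reading a PRAGMA without a value reports what is currently in effect.
assert conn.execute("PRAGMA journal_mode").fetchone()[0] == "wal"
assert conn.execute("PRAGMA busy_timeout").fetchone()[0] == 5000
assert conn.execute("PRAGMA synchronous").fetchone()[0] == 1   # 1 == NORMAL
assert conn.execute("PRAGMA foreign_keys").fetchone()[0] == 1
assert conn.execute("PRAGMA temp_store").fetchone()[0] == 2    # 2 == MEMORY
```

Note that journal_mode cannot be WAL for an in-memory database, which is why the check uses a real file.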

2 Split into two connection pools

Open two separate pools against the same database file: one read-only, one for writes. Limit the write pool to a single connection. This is the most important structural decision.

SQLite allows multiple concurrent readers in WAL mode, but only one writer at a time. If all your connections compete for the write lock you will see SQLITE_BUSY errors even with busy_timeout set. A dedicated single-connection write pool makes contention impossible by design. Size the read pool to the number of CPU cores.

Go

import (
    "database/sql"
    "runtime"
    _ "github.com/mattn/go-sqlite3"
)

// Read pool: one connection per CPU core
readDB, _ := sql.Open("sqlite3", dsn+"&mode=ro")
readDB.SetMaxOpenConns(runtime.NumCPU())
readDB.SetMaxIdleConns(runtime.NumCPU())

// Write pool: exactly one connection
writeDB, _ := sql.Open("sqlite3", dsn)
writeDB.SetMaxOpenConns(1)
writeDB.SetMaxIdleConns(1)

Python

import sqlite3
import os
from queue import Queue

CPU_COUNT = os.cpu_count() or 4

def make_pool(path: str, size: int, read_only: bool) -> Queue:
    mode = "ro" if read_only else "rwc"
    uri = f"file:{path}?mode={mode}"
    pool: Queue = Queue(maxsize=size)
    for _ in range(size):
        conn = sqlite3.connect(uri, uri=True, check_same_thread=False)
        conn.executescript(PRAGMAS)
        pool.put(conn)
    return pool

read_pool  = make_pool("app.db", CPU_COUNT, read_only=True)
write_pool = make_pool("app.db", 1,         read_only=False)
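With the two pools in place, every handler checks a connection out of the appropriate pool and always returns it, even on error. A minimal sketch of the read path; the checkout helper, the events table, and get_event are illustrative names, not part of the pool code above:

```python
import sqlite3
from contextlib import contextmanager
from queue import Queue

@contextmanager
def checkout(pool: Queue):
    # Block until a connection is free, and always return it to the pool.
    conn = pool.get()
    try:
        yield conn
    finally:
        pool.put(conn)

def get_event(read_pool: Queue, event_id: int):
    # Reads go through the read pool only; the write pool is never touched.
    with checkout(read_pool) as conn:
        row = conn.execute(
            "SELECT data FROM events WHERE id = ?", (event_id,)
        ).fetchone()
        return row[0] if row else None
```

Because Queue.get blocks when the pool is exhausted, back-pressure falls out for free: at most one connection per CPU core is ever querying at once.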

Why Go outperforms Python here: Go goroutines are scheduled by the runtime at near-zero overhead, so you can run tens of thousands of concurrent handlers with modest memory usage and no Global Interpreter Lock. Python's GIL means only one thread executes Python bytecode at a time, so CPU-bound paths cannot truly parallelise. For I/O-heavy SQLite workloads on a single machine the gap is real: identical configurations routinely reach 60–80K RPS in Go and 8–15K in Python on the same hardware. Use Python when development velocity matters most; use Go when you need to squeeze every last request out of a $5 machine.

3 Always use BEGIN IMMEDIATE for writes

SQLite’s default transaction mode is deferred: it acquires the write lock lazily, only when it first touches a writable page. This means a transaction can begin as a read, then attempt to upgrade to a write — and fail with SQLITE_BUSY if another writer holds the lock.

Declare your intent upfront with BEGIN IMMEDIATE. SQLite acquires the write lock straight away and holds it for the duration. No mid-transaction surprises.

Go

// go-sqlite3 begins transactions in the mode set by the _txlock DSN
// parameter, so append "&_txlock=immediate" to the write pool's DSN
// to make every transaction start with BEGIN IMMEDIATE.
tx, err := writeDB.BeginTx(ctx, nil)
if err != nil {
    return err
}
defer tx.Rollback()

// ... your writes ...

return tx.Commit()

Python

conn = write_pool.get()
try:
    conn.execute("BEGIN IMMEDIATE")
    conn.execute("INSERT INTO events (data) VALUES (?)", (payload,))
    conn.commit()
except Exception:
    conn.rollback()
    raise
finally:
    write_pool.put(conn)
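The checkout-begin-commit-rollback dance is the same for every write, so it is worth wrapping once. A minimal sketch of such a wrapper; write_tx is an illustrative name, and it assumes connections were opened with isolation_level=None so the sqlite3 module never issues its own implicit BEGIN:

```python
import sqlite3
from contextlib import contextmanager
from queue import Queue

@contextmanager
def write_tx(pool: Queue):
    # Check out the single write connection, take the write lock
    # upfront with BEGIN IMMEDIATE, and commit or roll back exactly once.
    conn = pool.get()
    try:
        conn.execute("BEGIN IMMEDIATE")
        yield conn
        conn.commit()
    except Exception:
        conn.rollback()
        raise
    finally:
        pool.put(conn)
```

A handler then reduces to: with write_tx(write_pool) as conn: conn.execute(...). If the body raises, the transaction is rolled back and the connection still returns to the pool.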

4 Leave cache=shared off

Several SQLite tutorials recommend enabling shared cache mode by appending cache=shared to the DSN. Do not do this.

Shared cache introduces table-level locking on top of the file-level locking SQLite already performs. The result is more SQLITE_BUSY errors, not fewer, and they appear to come from read operations, which makes them hard to diagnose. Shared cache is disabled by default for good reason. Leave it that way and rely on WAL mode and two pools instead.

5 Know when SQLite is not the right choice

SQLite is the right database for the vast majority of side projects, but it is not the right database for everything. The usual disqualifiers:

Your application runs on more than one machine and all of them need to write to the same database.
Your sustained write volume exceeds what a single serialized writer can absorb.
You need online replication with automatic failover rather than periodic backups.
The dataset is too large to fit on the local disk.

If none of the above apply, you almost certainly do not need PostgreSQL yet. Ship with SQLite, keep a Litestream backup to S3-compatible storage, and migrate later only if you actually hit the limits.
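The Litestream side is small enough to show here. A sketch of the two commands involved; the bucket name is a placeholder, and credentials are assumed to be in the environment:

```shell
# Continuously replicate app.db to S3-compatible storage.
litestream replicate ./app.db s3://my-backups/app.db

# After a disaster, restore the latest snapshot to a fresh file.
litestream restore -o ./app.db s3://my-backups/app.db
```

In production you would run the replicate command as a supervised service (or via a litestream.yml config) alongside the application.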

What you end up with

A backend that handles 60,000 read requests per second on hardware costing $5 a month — with no managed database bill, no connection pooler to configure, and no network round-trip to a database server. Operational overhead is a single file you can back up with cp, inspect with any SQLite client, and restore in seconds.

The bill that never comes

In early 2024 two widely-shared stories put indie-hacker infrastructure choices back in the spotlight. Netlify sent a developer a $104,000 invoice for a static website that got hit by a DDoS. A few months later the photo-sharing app Cara went viral and received a $96,280 bill from Vercel — for serving static files and API routes that any $5 VPS could have handled without breaking a sweat.

The common thread: platform pricing that scales with traffic, with no hard cap by default. A side project going viral is supposed to be good news. With usage-based pricing and no ceiling it can be a financial emergency instead.

A $5 VPS is the opposite model. The bill is $5 regardless of how much traffic arrives. You own the process, the file system, and the database. Nobody can shut you down or hand you an invoice you cannot pay. The trade-off is that you are responsible for your own uptime — but for most side projects “pretty reliable” is more than sufficient, and the numbers show that pretty reliable can also mean 6,500 RPS on authenticated dynamic pages with database reads on exactly that hardware, before you even apply the optimisations above.

The recipe in this article is what turns that $5 machine into something that can absorb a viral moment without ever opening your email in dread.

Further reading

Reference: The PRAGMA configuration and connection-pool pattern in this article were first synthesised and popularised by Nikita Melkozerov in a July 2024 thread on X. The techniques themselves are established SQLite best practices, not inventions of his or ours.