How to Get 60,000 Requests per Second out of SQLite on a $5 VPS
Six PRAGMAs, two connection pools, and one rule about transactions — that is all it takes to turn the world’s most-deployed database into a surprisingly capable production backend.
Ingredients
- SQLite — ships with almost every language runtime, zero install needed
- A $5/month VPS — Hetzner CX22, DigitalOcean Droplet, or any comparable instance
- Go (recommended for maximum throughput) or Python
- One database file on local disk — no network round-trips, no connection handshakes
SQLite is not a toy. It is the most widely deployed database engine on Earth: in every browser, every phone, and almost every embedded device. What it lacks in multi-server replication it more than compensates for with zero latency (the database is a local file), zero operational overhead, and a mature, battle-tested storage engine. For the overwhelming majority of side projects and small SaaS products it is not just good enough — it is the right tool.
The default settings, however, are designed for safety and broad compatibility, not throughput. A handful of configuration changes unlock most of the performance that SQLite is capable of.
1 Configure the six PRAGMAs
Run these six PRAGMA statements immediately after opening every
connection, before doing anything else. They are cheap and idempotent.
Go
import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

const dsn = "file:./app.db" +
	"?_journal_mode=WAL" +
	"&_busy_timeout=5000" +
	"&_synchronous=NORMAL" +
	"&_cache_size=-20000" +
	"&_foreign_keys=true" +
	"&_temp_store=memory"

db, err := sql.Open("sqlite3", dsn)
if err != nil {
	log.Fatal(err)
}
Python
import sqlite3
PRAGMAS = """
PRAGMA journal_mode = WAL;
PRAGMA busy_timeout = 5000;
PRAGMA synchronous = NORMAL;
PRAGMA cache_size = -20000;
PRAGMA foreign_keys = true;
PRAGMA temp_store = memory;
"""
def open_db(path: str) -> sqlite3.Connection:
    conn = sqlite3.connect(path, check_same_thread=False)
    conn.executescript(PRAGMAS)
    return conn
What each setting does:
- journal_mode = WAL — switches from the rollback journal to a write-ahead log. In WAL mode reads and writes proceed concurrently; readers never block writers and writers never block readers. This single change accounts for the majority of the throughput gain.
- busy_timeout = 5000 — when SQLite cannot acquire a lock it retries for up to 5,000 ms before returning SQLITE_BUSY. Without this, any contention produces immediate errors instead of a brief wait.
- synchronous = NORMAL — SQLite syncs to disk less aggressively, which significantly reduces fsync() calls. In WAL mode this cannot corrupt the database; the worst case after a power loss is that the last few commits are rolled back.
- cache_size = -20000 — a negative value is interpreted as kilobytes, so this allocates roughly 20 MB of RAM as a page cache. Frequently-read pages stay in memory and disk reads drop.
- foreign_keys = true — SQLite disables foreign-key enforcement by default for backwards-compatibility reasons. Unless you are doing something unusual, turn it on.
- temp_store = memory — moves temporary tables and indices (used by sorting, grouping, and subquery materialisation) from disk into RAM. Queries with ORDER BY or GROUP BY get noticeably faster.
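A quick way to confirm the settings actually took effect is to read each PRAGMA back after opening the connection. A minimal sketch in Python (`verify_pragmas` is a hypothetical helper, reusing the PRAGMAS script from the example above):

```python
import sqlite3

PRAGMAS = """
PRAGMA journal_mode = WAL;
PRAGMA busy_timeout = 5000;
PRAGMA synchronous = NORMAL;
PRAGMA cache_size = -20000;
PRAGMA foreign_keys = true;
PRAGMA temp_store = memory;
"""

def verify_pragmas(path: str) -> dict:
    # Open the database, apply the PRAGMAs, then read each one back.
    conn = sqlite3.connect(path)
    conn.executescript(PRAGMAS)
    settings = {}
    for name in ("journal_mode", "busy_timeout", "synchronous",
                 "cache_size", "foreign_keys", "temp_store"):
        settings[name] = conn.execute(f"PRAGMA {name}").fetchone()[0]
    conn.close()
    return settings
```

Note that SQLite reports some values numerically: synchronous NORMAL comes back as 1 and temp_store memory as 2. Use a real file path, not :memory:, since an in-memory database cannot enter WAL mode.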
2 Split into two connection pools
Open two separate pools against the same database file: one read-only, one for writes. Limit the write pool to a single connection. This is the most important structural decision.
SQLite allows multiple concurrent readers in WAL mode, but only one writer at a
time. If all your connections compete for the write lock you will see
SQLITE_BUSY errors even with busy_timeout set. A
dedicated single-connection write pool makes contention impossible by design.
Size the read pool to the number of CPU cores.
Go
import (
	"database/sql"
	"log"
	"runtime"

	_ "github.com/mattn/go-sqlite3"
)

// Read pool: one connection per CPU core.
readDB, err := sql.Open("sqlite3", dsn+"&mode=ro")
if err != nil {
	log.Fatal(err)
}
readDB.SetMaxOpenConns(runtime.NumCPU())
readDB.SetMaxIdleConns(runtime.NumCPU())

// Write pool: exactly one connection.
writeDB, err := sql.Open("sqlite3", dsn)
if err != nil {
	log.Fatal(err)
}
writeDB.SetMaxOpenConns(1)
writeDB.SetMaxIdleConns(1)
Python
import sqlite3
import os
from queue import Queue
CPU_COUNT = os.cpu_count() or 4
def make_pool(path: str, size: int, read_only: bool) -> Queue:
    mode = "ro" if read_only else "rwc"
    uri = f"file:{path}?mode={mode}"
    pool: Queue = Queue(maxsize=size)
    for _ in range(size):
        conn = sqlite3.connect(uri, uri=True, check_same_thread=False)
        conn.executescript(PRAGMAS)
        pool.put(conn)
    return pool
write_pool = make_pool("app.db", 1, read_only=False)  # rwc creates the file if needed
read_pool = make_pool("app.db", CPU_COUNT, read_only=True)  # ro requires the file to exist
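To keep the check-out/return discipline in one place, the Python pools pair naturally with a small context manager. A sketch under the same assumptions as above (`make_pool` is reproduced without the PRAGMAs for brevity; `checkout` is a hypothetical helper):

```python
import sqlite3
from contextlib import contextmanager
from queue import Queue

def make_pool(path: str, size: int, read_only: bool) -> Queue:
    # Same construction as above; PRAGMAs omitted here for brevity.
    mode = "ro" if read_only else "rwc"
    uri = f"file:{path}?mode={mode}"
    pool: Queue = Queue(maxsize=size)
    for _ in range(size):
        conn = sqlite3.connect(uri, uri=True, check_same_thread=False)
        pool.put(conn)
    return pool

@contextmanager
def checkout(pool: Queue):
    # Block until a connection is free, and always return it to the pool,
    # even if the body raises.
    conn = pool.get()
    try:
        yield conn
    finally:
        pool.put(conn)
```

Usage is then a plain `with checkout(write_pool) as conn:` block. Because mode=ro refuses to create a missing file, build the write pool (mode rwc) before the read pool.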
3 Always use BEGIN IMMEDIATE for writes
SQLite’s default transaction mode is deferred: it acquires
the write lock lazily, only when it first touches a writable page. This means
a transaction can begin as a read, then attempt to upgrade to a write —
and fail with SQLITE_BUSY if another writer holds the lock.
Declare your intent upfront with BEGIN IMMEDIATE. SQLite acquires
the write lock straight away and holds it for the duration. No mid-transaction
surprises.
Go
// mattn/go-sqlite3 selects the transaction mode per connection via the
// _txlock DSN parameter, so open the write pool with it:
//
//	writeDB, err := sql.Open("sqlite3", dsn+"&_txlock=immediate")
//
// Every transaction on writeDB then starts with BEGIN IMMEDIATE.
tx, err := writeDB.BeginTx(ctx, nil)
if err != nil {
	return err
}
defer tx.Rollback()
// ... your writes ...
return tx.Commit()
Python
conn = write_pool.get()
try:
    conn.execute("BEGIN IMMEDIATE")
    conn.execute("INSERT INTO events (data) VALUES (?)", (payload,))
    conn.commit()
finally:
    write_pool.put(conn)
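Even with BEGIN IMMEDIATE, a busy writer can still surface as an error once busy_timeout expires, so production code usually wraps the write in a small retry loop. A minimal sketch (`write_with_retry` is a hypothetical helper; it assumes the connection was opened with isolation_level=None so Python's implicit transaction handling stays out of the way):

```python
import sqlite3
import time

def write_with_retry(conn: sqlite3.Connection, sql: str, params=(),
                     attempts: int = 3) -> None:
    # Run one write inside BEGIN IMMEDIATE, retrying with a short backoff
    # if another writer holds the lock (raised as sqlite3.OperationalError).
    for attempt in range(attempts):
        try:
            conn.execute("BEGIN IMMEDIATE")
            conn.execute(sql, params)
            conn.commit()
            return
        except sqlite3.OperationalError:
            conn.rollback()  # no-op if BEGIN itself failed
            if attempt == attempts - 1:
                raise
            time.sleep(0.05 * (attempt + 1))
```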
4 Leave cache=shared off
Several SQLite tutorials recommend enabling shared cache mode by
appending cache=shared to the DSN. Do not do this.
Shared cache introduces table-level locking on top of the file-level locking
SQLite already performs. The result is more SQLITE_BUSY
errors, not fewer, and they appear to come from read operations, which makes
them hard to diagnose. Shared cache is disabled by default for good reason.
Leave it that way and rely on WAL mode and two pools instead.
5 Know when SQLite is not the right choice
SQLite is the right database for the vast majority of side projects, but it is not the right database for everything.
- Multiple write-heavy servers — SQLite is a single-file database. If more than one application server needs to write concurrently, you need either a client-server database or a replication tool such as Litestream or LiteFS.
- Write throughput above ~10,000 RPS — all writes are serialised through a single lock. If your bottleneck is write volume rather than read volume, you will eventually hit a ceiling that no amount of PRAGMA tuning can move.
- Very large databases (>100 GB) — SQLite handles large files, but you lose the operational tooling that PostgreSQL provides: streaming replication, online vacuuming, point-in-time recovery, pg_dump, and so on.
- Complex analytical queries on large datasets — for OLAP workloads, consider DuckDB instead. It shares SQLite’s embedded philosophy but is built around columnar storage and vectorised execution.
If none of the above apply, you almost certainly do not need PostgreSQL yet. Ship with SQLite, keep a Litestream backup to S3-compatible storage, and migrate later only if you actually hit the limits.
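Litestream itself is driven by a short YAML file. A minimal sketch of the config (the database path, bucket name, and replica URL below are placeholders; adapt them to your S3-compatible provider):

```yaml
# /etc/litestream.yml (placeholder paths and bucket)
dbs:
  - path: /srv/app/app.db
    replicas:
      - url: s3://my-backup-bucket/app-db
```

Run litestream replicate -config /etc/litestream.yml under your process supervisor and changes are streamed to the bucket shortly after each commit.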
What you end up with
A backend that handles 60,000 read requests per second on hardware costing
$5 a month — with no managed database bill, no connection pooler to
configure, and no network round-trip to a database server. Operational
overhead is a single file you can back up with cp while the app is idle,
inspect with any SQLite client, and restore in seconds.
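One caveat on cp: under WAL, recent commits may still live in the sidecar -wal file, so copying only the main database file while the app is writing can miss them. SQLite's online backup API takes a consistent snapshot even while the app keeps running; a minimal sketch using Python's sqlite3.Connection.backup() (backup_snapshot is a hypothetical helper):

```python
import sqlite3

def backup_snapshot(db_path: str, dest_path: str) -> None:
    # Copy a consistent snapshot of the live database into dest_path,
    # safe to run while other connections continue reading and writing.
    src = sqlite3.connect(db_path)
    dst = sqlite3.connect(dest_path)
    try:
        src.backup(dst)
    finally:
        src.close()
        dst.close()
```

On recent SQLite versions (3.27+), `VACUUM INTO 'backup.db'` achieves the same from any SQLite client and compacts the copy as a bonus.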
The bill that never comes
In early 2024 two widely-shared stories put indie-hacker infrastructure choices back in the spotlight. Netlify sent a developer a $104,000 invoice for a static website that got hit by a DDoS. A few months later the photo-sharing app Cara went viral and received a $96,280 bill from Vercel — for serving static files and API routes that any $5 VPS could have handled without breaking a sweat.
The common thread: platform pricing that scales with traffic, with no hard cap by default. A side project going viral is supposed to be good news. With usage-based pricing and no ceiling it can be a financial emergency instead.
A $5 VPS is the opposite model. The bill is $5 regardless of how much traffic arrives. You own the process, the file system, and the database. Nobody can shut you down or hand you an invoice you cannot pay. The trade-off is that you are responsible for your own uptime — but for most side projects “pretty reliable” is more than sufficient, and the numbers show that pretty reliable can also mean 6,500 RPS on authenticated dynamic pages with database reads on exactly that hardware, before you even apply the optimisations above.
The recipe in this article is what turns that $5 machine into something that can absorb a viral moment without ever opening your email in dread.
Further reading
- SQLite WAL mode — official documentation
- SQLite performance tuning — phiresky (deep technical dive)
- Litestream — continuous SQLite replication to S3
- Appropriate uses for SQLite — official guidance on when to switch
- Netlify $104k bill — Hacker News discussion