Confession: when I first added sqlx to a Rust service, I spent half a day fighting it. The compile-time SQL checks felt magical until I tried to run cargo check with the database offline. Then the magic became a rake to the face.
Eight months later that same service is in production, has handled a few million requests, and I haven’t ripped sqlx out. I’ve also figured out which features I actually use, which ones I avoided after the first burn, and which docs I wish someone had handed me on day one.
This isn’t a tutorial. The official sqlx README does that job fine. This is the post-deployment write-up. What bit me, what I now do without thinking, and the small handful of patterns that turned sqlx from “interesting library” into “boring infrastructure” in my codebase. If you’re picking a database layer for an Axum or Actix backend (I covered the framework choice in my Rust web frameworks roundup), read on.
The compile-time check is great. Until it isn’t.
The selling point of sqlx is sqlx::query!(), a macro that connects to a real database at compile time, prepares your statement, and asks the database to describe the parameter and result types, giving you a struct typed to your actual schema. This is wonderful when it works.
Here’s what bit me: it requires DATABASE_URL to be set and the database to be reachable when you run cargo check. Fine on my laptop. Not fine in CI, not fine on a colleague’s machine, and not fine when you want to ship a Docker image build that doesn’t depend on a live Postgres.
The fix is offline mode. Run cargo sqlx prepare once with the database connected, commit the resulting .sqlx/ directory, and your builds work without a database. I now treat this as part of the workflow, not an afterthought:
# After any change to a query!() macro
cargo sqlx prepare
git add .sqlx
git commit -m "Update sqlx query metadata"
In CI, I run cargo sqlx prepare --check to fail the build if the cached metadata is stale. That single line has saved me three “works on my machine” incidents.
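Wired into a pipeline, that looks something like the fragment below. This is a sketch, not a drop-in config: the surrounding job definition depends on your runner, but SQLX_OFFLINE is the real environment variable sqlx reads to skip the live-database check.

```shell
# CI sketch: fail fast if the committed .sqlx/ metadata is stale,
# then type-check against the cached metadata instead of a live database.
cargo sqlx prepare --check
SQLX_OFFLINE=true cargo check
```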
sqlx::query! vs sqlx::query: I use both
I see new projects pick one and stick with it. After a while, you want both.
Use query!() and query_as!() for everything where the SQL is fixed and the result shape is known. You get compile-time SQL validation and a typed struct back. No .try_get("column") calls, no string typos at runtime.
let user = sqlx::query_as!(
    User,
    "SELECT id, email, created_at FROM users WHERE id = $1",
    user_id
)
.fetch_one(&pool)
.await?;
Use plain query() and query_as() (no exclamation mark) when you build SQL dynamically. Search filters, optional conditions, anything where the WHERE clause depends on runtime input. The macros can’t know your SQL ahead of time if you stitch it together at runtime.
let mut sql = String::from("SELECT * FROM events WHERE 1=1");
if after_id.is_some() {
    sql.push_str(" AND id > $1");
}
// Only bind when the placeholder is actually in the SQL; binding an
// argument the prepared statement doesn't expect is a runtime error.
let mut query = sqlx::query_as::<_, Event>(&sql);
if let Some(id) = after_id {
    query = query.bind(id);
}
let rows = query.fetch_all(&pool).await?;
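The part of this pattern that actually bites is keeping placeholder numbers and bind order in sync as filters multiply. A dependency-free sketch of the same idea (the function name and the second filter are mine, purely for illustration):

```rust
/// Build a filtered query with numbered Postgres placeholders.
/// Returns the SQL plus the values to bind, in matching order.
fn events_query(after_id: Option<i64>, limit: Option<i64>) -> (String, Vec<i64>) {
    let mut sql = String::from("SELECT * FROM events WHERE 1=1");
    let mut binds = Vec::new();
    if let Some(id) = after_id {
        binds.push(id);
        // binds.len() is the 1-based placeholder index after the push
        sql.push_str(&format!(" AND id > ${}", binds.len()));
    }
    if let Some(n) = limit {
        binds.push(n);
        sql.push_str(&format!(" LIMIT ${}", binds.len()));
    }
    (sql, binds)
}

fn main() {
    let (sql, binds) = events_query(Some(42), Some(10));
    println!("{sql}"); // SELECT * FROM events WHERE 1=1 AND id > $1 LIMIT $2
    assert_eq!(binds, vec![42, 10]);
}
```

Growing a placeholder counter alongside the bind list is exactly what QueryBuilder does for you under the hood; once the hand-rolled version sprouts a third filter, that's my cue to switch.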
The mistake I see most often is people reaching for the QueryBuilder for everything. It’s there for a reason, but for static queries the macros win on safety. The docs.rs page for sqlx lays out the trait bounds explicitly if you want to dig into why the type inference works the way it does.
Connection pool: stop copying the default
Every sqlx tutorial I’ve read uses PgPoolOptions::new().connect(url).await?. That gives you a pool with max_connections = 10 and an acquire_timeout of 30 seconds. Both are wrong for most production services.
Postgres has a hard cap on max_connections (default 100). If your service has 4 instances each opening 10 connections, you’ve used 40% of the cap before any other service touches the database. I’ve seen “we ran out of Postgres connections” outages caused by exactly this.
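The arithmetic behind that is worth making explicit. A back-of-envelope helper (the function and the reserved-connection figure are mine, not anything sqlx provides) for budgeting max_connections per instance:

```rust
/// Rough per-instance pool cap: hold back some of the Postgres
/// connection limit for admin tools and other services, then split
/// the remainder evenly across instances of this service.
fn per_instance_max(pg_max_connections: u32, reserved: u32, instances: u32) -> u32 {
    pg_max_connections.saturating_sub(reserved) / instances.max(1)
}

fn main() {
    // Postgres default cap of 100, 20 held back, 4 service instances:
    println!("{}", per_instance_max(100, 20, 4)); // 20 per instance
}
```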
What I actually configure now:
let pool = PgPoolOptions::new()
    .max_connections(20)
    .min_connections(5)
    .acquire_timeout(Duration::from_secs(3))
    .idle_timeout(Duration::from_secs(60 * 10))
    .max_lifetime(Duration::from_secs(60 * 30))
    .connect(&database_url)
    .await?;
Three things changed. acquire_timeout is short: if I can’t get a connection in 3 seconds, something is wrong and I’d rather fail the request than queue. idle_timeout and max_lifetime are both set so connections recycle, because Postgres connections that live forever pin memory and can outlive load balancer changes. min_connections keeps a few warm so the first request after idle isn’t slow.
These numbers aren’t universal. They’re the starting point I tune from.
Transactions and the “but the trait” problem
This is the sqlx footgun that gets every Rust beginner. You write a helper function that takes a &PgPool, then you wrap a few helpers in a transaction and the borrow checker explodes.
// Doesn't work the way you'd hope:
async fn create_user(pool: &PgPool, email: &str) -> Result<User> {
    sqlx::query_as!(User, "INSERT INTO users (email) VALUES ($1) RETURNING *", email)
        .fetch_one(pool)
        .await
}

// You can't pass the transaction here without changing the signature:
let mut tx = pool.begin().await?;
let user = create_user(&tx, "user@example.com").await?; // ERROR: expected `&PgPool`
tx.commit().await?;
The fix is to make your helpers generic over Executor:
async fn create_user<'e, E>(executor: E, email: &str) -> Result<User>
where
    E: sqlx::Executor<'e, Database = sqlx::Postgres>,
{
    sqlx::query_as!(User, "INSERT INTO users (email) VALUES ($1) RETURNING *", email)
        .fetch_one(executor)
        .await
}
Now both &pool and &mut *tx work (on sqlx 0.7 and later you deref the transaction to its inner connection, since plain &mut tx no longer implements Executor). I write almost every database helper this way. The signature looks ugly the first time, but you stop noticing after a week, and you never have to choose between transaction-aware and pool-aware versions of the same function. If the lifetime annotation feels mysterious, my older borrow checker writeup covers why 'e shows up here.
Migrations: yes, just use sqlx-cli
I tried Refinery first because it had a nicer Rust-side API. I switched back to sqlx-cli after a month because the integration with the macros is too useful to give up: cargo sqlx prepare and sqlx migrate run work against the same database, and the schema your migrations produce is exactly what query!() validates against.
# Once
cargo install sqlx-cli --no-default-features --features postgres
# New migration
sqlx migrate add -r create_users_table
# Run it
sqlx migrate run
# In your binary
sqlx::migrate!().run(&pool).await?;
sqlx::migrate!() reads the migrations/ folder at compile time and embeds the SQL into your binary. No filesystem dependency at runtime. This matters when you ship a Docker image, because the container does not need the migrations directory.
One thing to know: sqlx migrate add -r creates a paired .up.sql and .down.sql. The -r is for “reversible”. I’ve never actually run a down migration in production (I roll forward), but the discipline of writing one forces you to think about whether the change is safe to undo. Worth the 30 seconds.
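For a concrete picture, the pair looks like this. This is a sketch: the timestamp prefix is whatever sqlx migrate add generates for you, and the columns simply mirror the User struct used earlier in this post.

```sql
-- migrations/<timestamp>_create_users_table.up.sql
CREATE TABLE users (
    id         BIGSERIAL PRIMARY KEY,
    email      TEXT NOT NULL UNIQUE,
    created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- migrations/<timestamp>_create_users_table.down.sql
DROP TABLE users;
```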
What I’d do differently
If I were starting today, three changes from my original setup.
First, enable offline mode on day one. Don’t wait until the first CI failure to learn cargo sqlx prepare. The friction of “did I forget to commit .sqlx/?” is low once it’s a habit.
Second, write the helper functions generic over Executor from the first commit. Refactoring a few hundred lines of database code to add the generics later is annoying. Doing it from the start costs you an extra type parameter you stop seeing within a week.
Third, set the pool config explicitly, even for a hobby project. The defaults look reasonable and they’re not. Spending five minutes on max_connections and acquire_timeout upfront prevents a class of incident that’s painful to debug under load.
If you’re starting a new Rust service this week, do these three things in order: generate and commit the .sqlx/ metadata with cargo sqlx prepare (and set SQLX_OFFLINE=true in your build), set max_connections explicitly in the pool builder, and add cargo sqlx prepare --check to CI. That’s it. Everything else (query macros, transactions, migrations) you can pick up as you need them.
I write up more of this kind of “what actually shipped” Rust work over on my project page. The Rust async book is still the right place to start if any of the await stuff in this post felt fast. It’s the one official Rust resource I keep open in a tab.