Confession: I spent a weekend last month rewriting the same tiny service, a CSV-to-JSON transformer with a small HTTP API, mostly because I had been losing an internal argument about which language to reach for on a side project. I lost the argument anyway, but I walked away with a cleaner answer than the usual “it depends.”
This isn’t a benchmark post. There are plenty of those, and most of them test something I don’t care about. These are the notes I took while building the same thing twice: what was annoying, what was fast, and what broke in production when I ran it for a week. If you’re stuck picking between Rust and Go for a 2026 project, I hope this saves you a weekend.
The service I built (twice)
Nothing fancy. A CLI plus an HTTP endpoint that accepts a CSV upload, parses it with a schema, does some light validation, and streams back newline-delimited JSON. Maybe 300 lines of real code in each language. I deliberately picked a workload that touches the stuff people argue about: allocation, async IO, error handling, and JSON.
The Go version used net/http, encoding/csv, and encoding/json. Around 280 lines.
The Rust version used axum 0.7, serde, the csv crate, and tokio. Around 340 lines.
Both ran behind the same nginx sidecar on a small Hetzner box. I stress-tested each one with oha at 500 concurrent connections.
Where Go wins on actual delivery speed

I had the Go version running end-to-end in about 90 minutes. The Rust version took the better part of a day, and I’ve written real Rust for three years.
Most of that gap isn’t the compiler fighting me. It’s the ecosystem. In Go, encoding/csv is in the standard library and it just works. In Rust, you pick between the csv crate, csv-async, or rolling your own streaming parser, and the second you want async reading, the decision tree splinters across three or four crates.
Here’s the Go handler that does almost everything:
```go
func transform(w http.ResponseWriter, r *http.Request) {
	reader := csv.NewReader(r.Body)
	reader.FieldsPerRecord = -1 // accept ragged rows
	enc := json.NewEncoder(w)
	header, err := reader.Read()
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	for {
		row, err := reader.Read()
		if err == io.EOF {
			return
		}
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		out := make(map[string]string, len(header))
		for i, h := range header {
			if i < len(row) { // guard: with FieldsPerRecord = -1, a row can be shorter than the header
				out[h] = row[i]
			}
		}
		if err := enc.Encode(out); err != nil {
			return
		}
	}
}
```
That reads, transforms, and streams. Any Go dev on the team can maintain it without a walkthrough. The equivalent Rust handler with axum and an async csv wrapper was about twice the length, mostly because I had to thread generic bounds through the streaming body type.
If “ship it by Friday” is the constraint, Go is still the right tool. Rust asks you to make more decisions upfront, and a lot of them don’t pay off on a 300-line service.
Where Rust wins once the code is alive
The interesting thing happened after I had been running both for a week.
The Go version had three panics in production over the first week. All three were the same category: a missed nil check after a map lookup deep in the validation code. Each was a runtime panic in a goroutine I hadn’t guarded; it took down the offending request cleanly rather than the whole process, but those were still three bugs I shouldn’t have shipped.
The Rust version had zero. Not because I’m a better Rust programmer (I’m not) but because the type system wouldn’t let me deploy the bug. Option<T> forced me to handle the missing case at the place where it mattered, not three call frames later.
This is the real Rust sales pitch for me. Not performance. Not memory. The fact that the set of things that can go wrong at runtime is genuinely smaller.
Here’s the Rust validator I ended up with:
```rust
async fn validate_row(
    row: &StringRecord,
    schema: &Schema,
) -> Result<ValidatedRow, ValidationError> {
    let name = row
        .get(schema.name_idx)
        .ok_or(ValidationError::MissingField("name"))?;
    let age: u32 = row
        .get(schema.age_idx)
        .ok_or(ValidationError::MissingField("age"))?
        .parse()
        .map_err(|_| ValidationError::BadType("age"))?;
    Ok(ValidatedRow { name: name.into(), age })
}
```
Every error case shows up in the function signature. You can’t forget one.
Memory and throughput: the numbers from my test box
I ran both under oha -c 500 -n 50000 on the same Hetzner CX22:
| Metric | Go | Rust |
|---|---|---|
| RSS at steady state | 78 MB | 24 MB |
| Requests/sec | 18,400 | 24,700 |
| p99 latency | 42 ms | 19 ms |
| Binary size (stripped) | 9.1 MB | 5.8 MB |
Rust wins on every metric. I want to be honest about what that means, though. At the scale I’m actually running this service, maybe 200 req/min on a good day, nobody would ever notice the difference.
The real money with Rust shows up when you’re paying for memory, not CPU. If you’re running 30 instances of a Go service and Rust cuts RSS by 3x, that’s a real bill. On a side project, it’s a rounding error.
Concurrency: goroutines vs async Rust, honestly
I’ve seen plenty of takes on this. Here’s mine, after building the same thing in both.
Goroutines are easier. They compose better. go func() {}() is the simplest concurrency primitive in any mainstream language. You think less. You ship faster. The scheduler just works.
Async Rust is more expressive. It is also significantly more expensive to learn. The first time you hit a Send bound error because your closure captured an Rc<RefCell<T>>, you will want to throw your laptop out a window. Once you internalize the model, though, the compile-time guarantees are remarkable. Things that would be a race in Go are a type error in Rust.
If you like the style of “here is what this tool actually does to your code,” I did a similar deep dive on the Tailwind v4 migration that went through the same kind of “rewrite and see what breaks” process.
My honest take: if your concurrency is “fan out N requests and wait,” Go is a better tool. If your concurrency involves shared mutable state that actually matters, Rust pays for itself.
What I reach for now (and when I switch)
After the weekend and the week of running both, I’ve settled into a rough rule I use for my own projects. I do a lot of this kind of “pick the right tool for the job” work in my consulting practice, and the Rust-vs-Go answer mostly comes down to two questions.
Default to Go if the service is going to be small, the team already knows Go, you’re IO-bound, or you need to ship this week. The standard library will carry you a long way, and anyone can maintain it.
Default to Rust if the service is going to run a long time, memory pressure is real, you’re doing anything CPU-bound, or the cost of a runtime bug is high. The up-front investment pays back in the things that don’t break at 3am.
Don’t pick either based on a benchmark chart. Pick based on who is maintaining the code a year from now, and what they’ll be debugging at 3am.
One thing you can do this week
If you’ve only ever used one of these languages, pick a small service you actually run, an endpoint, a cron job, anything around 200 lines, and rewrite it in the other language. Deploy it for a week alongside the original. You’ll learn more in that week than in any benchmark post (including this one). The Rust async book and the Go blog on context are both worth reading first, but the real understanding comes from the rewrite.