{"id":141,"date":"2026-04-23T05:02:34","date_gmt":"2026-04-23T05:02:34","guid":{"rendered":"https:\/\/abrarqasim.com\/blog\/go-concurrency-patterns-what-i-actually-ship\/"},"modified":"2026-04-23T05:02:34","modified_gmt":"2026-04-23T05:02:34","slug":"go-concurrency-patterns-what-i-actually-ship","status":"publish","type":"post","link":"https:\/\/abrarqasim.com\/blog\/go-concurrency-patterns-what-i-actually-ship\/","title":{"rendered":"Go concurrency patterns: the handful I actually ship"},"content":{"rendered":"<p>Confession: I spent my first year with Go launching goroutines like they were free and wondering why my services kept eating memory. Turns out &ldquo;just add a go in front of it&rdquo; is not a concurrency strategy. Who knew.<\/p>\n<p>If you&rsquo;ve read the canonical Rob Pike talks you already know these patterns in theory. What I want to do here is walk through the four or five that show up in my actual work, the places where I have reached for them in a real service, and the mistakes I keep making anyway. No deep dive into the runtime. No lectures about CSP. Just the code I write on a Tuesday.<\/p>\n<p>For context, most of my Go lives in backend services and CLIs: small HTTP APIs, batch jobs that process rows from a database, a few glue scripts that fan out to third-party APIs. If your Go is embedded systems or a compiler, your mileage will vary.<\/p>\n<h2 id=\"worker-pool-the-one-i-reach-for-80-of-the-time\">Worker pool: the one I reach for 80% of the time<\/h2>\n<p>Whenever I find myself writing &ldquo;I need to do N things in parallel but not too many at once,&rdquo; the answer is a worker pool. Goroutine per item is tempting and almost always wrong once N grows past a few hundred. 
I learned this the expensive way, with memory graphs that looked like steep staircases.<\/p>\n<p>The shape is boring and that&rsquo;s the point:<\/p>\n<pre><code class=\"language-go\">func fetchAll(ctx context.Context, urls []string, concurrency int) []Result {\n    jobs := make(chan string)\n    results := make(chan Result)\n\n    var wg sync.WaitGroup\n    for i := 0; i &lt; concurrency; i++ {\n        wg.Add(1)\n        go func() {\n            defer wg.Done()\n            for url := range jobs {\n                results &lt;- fetch(ctx, url)\n            }\n        }()\n    }\n\n    go func() {\n        defer close(jobs)\n        for _, u := range urls {\n            select {\n            case jobs &lt;- u:\n            case &lt;-ctx.Done():\n                return\n            }\n        }\n    }()\n\n    go func() { wg.Wait(); close(results) }()\n\n    out := make([]Result, 0, len(urls))\n    for r := range results {\n        out = append(out, r)\n    }\n    return out\n}\n<\/code><\/pre>\n<p>Three things I got wrong the first few times. One, I forgot to close the jobs channel, which deadlocked the whole thing. Two, I forgot the separate goroutine that waits on the WaitGroup and closes results, which also deadlocked the range loop. Three, I sent on jobs without checking <code>ctx.Done()<\/code>, which leaked goroutines on cancellation.<\/p>\n<p>Pick your concurrency number with intent. For HTTP work, I start at the smaller of <code>runtime.NumCPU() * 4<\/code> and whatever the downstream service rate-limits to. For CPU-heavy work, I stay close to <code>NumCPU<\/code> and then profile. Guessing at 100 workers because it feels high is not a strategy.<\/p>\n<h2 id=\"fan-out-fan-in-when-parallel-fetches-actually-help\">Fan-out, fan-in: when parallel fetches actually help<\/h2>\n<p>Fan-out, fan-in is the pattern everyone mentions first, but I reach for it less than you&rsquo;d think. 
If the downstream work is IO-bound and the items are independent, it&rsquo;s great. If the items share a cache or a rate limit, it usually isn&rsquo;t.<\/p>\n<p>The simplest version I ever write uses <code>errgroup<\/code>, because I hate manually wiring up error channels:<\/p>\n<pre><code class=\"language-go\">import &quot;golang.org\/x\/sync\/errgroup&quot;\n\nfunc enrichOrders(ctx context.Context, ids []string) ([]Order, error) {\n    g, ctx := errgroup.WithContext(ctx)\n    out := make([]Order, len(ids))\n\n    for i, id := range ids {\n        i, id := i, id \/\/ rebind for Go versions before 1.22\n        g.Go(func() error {\n            o, err := lookupOrder(ctx, id)\n            if err != nil {\n                return err\n            }\n            out[i] = o\n            return nil\n        })\n    }\n\n    if err := g.Wait(); err != nil {\n        return nil, err\n    }\n    return out, nil\n}\n<\/code><\/pre>\n<p>The <code>errgroup.WithContext<\/code> cancellation is what makes this safe. If one call fails, every other goroutine gets a cancelled context and can bail. I use this pattern when the caller can afford to fail the whole batch on the first error; <code>g.Wait<\/code> returns that first error and the partial results get thrown away. When I need partial results, I collect per-item errors instead of letting the group short-circuit.<\/p>\n<p>For longer lists I combine this with the worker pool above. Fan-out to N workers, each of which does fan-in internally if needed. Anyone who tells you to &ldquo;just launch a goroutine per order&rdquo; has never run it against 50,000 orders.<\/p>\n<h2 id=\"pipelines-staged-processing-without-the-callback-sprawl\">Pipelines: staged processing without the callback sprawl<\/h2>\n<p>Whenever I have a batch job that does &ldquo;read rows, transform, enrich, write,&rdquo; I reach for a pipeline. Each stage is a function returning a channel. The shape reads like Unix pipes, which is probably what Rob Pike was going for.<\/p>\n<pre><code class=\"language-go\">func rows(ctx context.Context) &lt;-chan Row { \/* ... 
*\/ }\n\nfunc enrich(ctx context.Context, in &lt;-chan Row) &lt;-chan Enriched {\n    out := make(chan Enriched)\n    go func() {\n        defer close(out)\n        for r := range in {\n            select {\n            case out &lt;- enrichOne(r):\n            case &lt;-ctx.Done():\n                return\n            }\n        }\n    }()\n    return out\n}\n\nfunc write(ctx context.Context, in &lt;-chan Enriched) error {\n    for e := range in {\n        if err := saveOne(ctx, e); err != nil {\n            return err\n        }\n    }\n    return nil\n}\n<\/code><\/pre>\n<p>One catch the snippet hides: if <code>write<\/code> returns early with an error, the caller has to cancel <code>ctx<\/code>, otherwise <code>rows<\/code> and <code>enrich<\/code> block forever on their next send and leak.<\/p>\n<p>The thing I like about pipelines is that each stage is testable in isolation. Pass in a mock channel, assert on the output. No god-object orchestrator, no callback hell.<\/p>\n<p>The thing I keep getting wrong is backpressure. If your write stage is slow and your enrich stage is fast, any buffer you put between them just fills with rows until the program crashes or gets OOM-killed. Unbuffered channels are the default for a reason. When I do buffer, I buffer with a small, explicit number and a comment saying why.<\/p>\n<p>For the long form version of this, the official Go blog piece on <a href=\"https:\/\/go.dev\/blog\/pipelines\" rel=\"nofollow noopener\" target=\"_blank\">pipelines and cancellation<\/a> is still one of the best pieces of writing on concurrent Go and it has aged well. Read it before you write your second pipeline.<\/p>\n<h2 id=\"context-aware-cancellation-the-one-i-wish-id-learned-first\">Context-aware cancellation: the one I wish I&rsquo;d learned first<\/h2>\n<p>If I could go back, the first thing I&rsquo;d teach past-me is that every goroutine I spawn needs a way to die. Not &ldquo;should have.&rdquo; Needs. Leaking goroutines is how Go services rot slowly; they don&rsquo;t crash, they just use more memory every week until someone restarts the pod.<\/p>\n<p>The discipline I ended up with:<\/p>\n<p>One, every long-running goroutine takes a <code>context.Context<\/code>. 
If the function signature doesn&rsquo;t give me one, I wrap the function.<\/p>\n<p>Two, every blocking channel send in a goroutine sits inside a select with <code>&lt;-ctx.Done()<\/code>.<\/p>\n<p>Three, I almost never reach for <code>context.Background()<\/code> in application code. If I think I need one, I stop and ask: whose deadline is this inheriting? If the answer is &ldquo;nobody&rsquo;s,&rdquo; that&rsquo;s usually a mistake.<\/p>\n<p>The official <a href=\"https:\/\/pkg.go.dev\/context\" rel=\"nofollow noopener\" target=\"_blank\">context package docs<\/a> are short and worth reading end to end. I reread them at least once a year because I keep finding subtle things I forgot.<\/p>\n<p>A real bug I shipped: I had a worker pool calling an external API with a 30-second timeout. No context. Some days the external API would hang for an hour. The pool would saturate, the queue would back up, and the health check would still report green because the HTTP handler itself was responsive. Adding a context with a deadline to the outgoing call took ten minutes and saved us weeks of alerts.<\/p>\n<h2 id=\"mistakes-i-still-make-and-how-i-spot-them\">Mistakes I still make (and how I spot them)<\/h2>\n<p>Five years in, I still trip over the same handful of things.<\/p>\n<p>I close channels from the receiver side. Don&rsquo;t do this. The sender closes. If multiple senders exist, coordinate them with a <code>sync.WaitGroup<\/code> and have one orchestrator close after <code>Wait<\/code>.<\/p>\n<p>I share maps between goroutines without a mutex. The race detector catches this in tests, when I remember to run <code>go test -race<\/code>. Which is not every time.<\/p>\n<p>I pass loop variables into goroutines without rebinding them. With Go 1.22&rsquo;s loop variable change this bites me less, but legacy code on older versions still does.<\/p>\n<p>I pick channel buffer sizes without thinking. Zero or one is almost always what I want. 
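For the rare buffered case, the size should be tied to something real and written down at the declaration. A runnable toy sketch, with hypothetical names:

```go
package main

import "fmt"

type Result struct{ ID int }

func main() {
	const batchSize = 8
	// Buffered to exactly the batch size so a producer can hand off a
	// full batch and exit without waiting on the consumer. The number
	// is derived, not guessed. (batchSize and Result are hypothetical.)
	results := make(chan Result, batchSize)
	for i := 0; i < batchSize; i++ {
		results <- Result{ID: i} // never blocks: the buffer covers the batch
	}
	close(results)

	n := 0
	for range results {
		n++
	}
	fmt.Println(n) // 8
}
```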
Larger buffers should have a reason in a comment.<\/p>\n<p>I forget that select with multiple ready cases chooses randomly. I keep expecting the order I wrote to matter. It doesn&rsquo;t.<\/p>\n<h2 id=\"what-to-do-with-this-tomorrow\">What to do with this tomorrow<\/h2>\n<p>Pick one service you already have in production. Grep it for <code>go func(<\/code>. For each result, ask three questions out loud: how does this goroutine exit; what happens if the parent context is cancelled; is this channel closed by the right side? If you can&rsquo;t answer for any of them, that&rsquo;s your homework for Thursday.<\/p>\n<p>And if you want to go deeper on Go specifically, I wrote up the three generics patterns that have actually shipped for me over on <a href=\"https:\/\/abrarqasim.com\/blog\/golang-generics-three-patterns-i-actually-use\/\" rel=\"noopener\">this post about Go generics<\/a>, same energy, different feature. More of my backend writing lives on my <a href=\"https:\/\/abrarqasim.com\/work\" rel=\"noopener\">work page<\/a> if you want to see what I&rsquo;m shipping.<\/p>\n<p>Concurrency in Go isn&rsquo;t hard because the primitives are hard. It&rsquo;s hard because the primitives are small and the failure modes are quiet. Worker pools, fan-out, pipelines, and context cancellation cover ninety percent of what I actually write. 
The rest is discipline, and discipline is mostly built by having shipped a goroutine leak and paid for it.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Go concurrency patterns I reach for on real projects: worker pools, pipelines, fan-in, and context cancellation, with code and the mistakes I still make.<\/p>\n","protected":false},"author":2,"featured_media":140,"comment_status":"","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"rank_math_title":"","rank_math_description":"Go concurrency patterns I reach for on real projects: worker pools, pipelines, fan-in, and context cancellation, with code and the mistakes I still make.","rank_math_focus_keyword":"golang concurrency patterns","rank_math_canonical_url":"","rank_math_robots":"","footnotes":""},"categories":[45],"tags":[49,133,130,47,131,65,132],"class_list":["post-141","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-programming","tag-backend","tag-context","tag-go-concurrency","tag-golang","tag-goroutines","tag-programming-languages","tag-worker-pool"],"_links":{"self":[{"href":"https:\/\/abrarqasim.com\/blog\/wp-json\/wp\/v2\/posts\/141","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/abrarqasim.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/abrarqasim.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/abrarqasim.com\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/abrarqasim.com\/blog\/wp-json\/wp\/v2\/comments?post=141"}],"version-history":[{"count":0,"href":"https:\/\/abrarqasim.com\/blog\/wp-json\/wp\/v2\/posts\/141\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/abrarqasim.com\/blog\/wp-json\/wp\/v2\/media\/140"}],"wp:attachment":[{"href":"https:\/\/abrarqasim.com\/blog\/wp-json\/wp\/v2\/media?parent=141"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/abrarqasim.com\/b
log\/wp-json\/wp\/v2\/categories?post=141"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/abrarqasim.com\/blog\/wp-json\/wp\/v2\/tags?post=141"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}