
Golang Iterators 8 Months In: What I Actually Use

Confession: when Go 1.23 added range-over-function iterators in August 2024, I hated them on sight. They looked like a Python comprehension wearing a fake Go mustache, and I was already happy iterating with for and slices like a normal person. Eight months later I have rewritten three internal libraries to expose iterators, and the new pattern is the second-best language change Go has shipped in years (generics is still first; fight me).

This is a notes-from-production post, not a tutorial. If you want the spec, the official Go 1.23 release notes cover the language change in two paragraphs and the iter package docs explain the Seq and Seq2 types in another two. What I want to write down is which patterns earned a place in my code, which ones I tried and threw away, and the specific places where the new syntax is a sharp edge.

The one shape I write 90% of the time

Most of my custom iterators look exactly the same. They wrap a paged or streamed data source so the caller doesn’t have to think about pagination at all. Here is the before-and-after for our internal Stripe-style cursor pagination helper.

Before (Go 1.22 and earlier), a caller had to write the loop themselves and remember to pass the cursor along:

cursor := ""
for {
    page, err := client.ListInvoices(ctx, ListOptions{Cursor: cursor, Limit: 100})
    if err != nil {
        return err
    }
    for _, inv := range page.Items {
        if err := process(inv); err != nil {
            return err
        }
    }
    if page.NextCursor == "" {
        break
    }
    cursor = page.NextCursor
}

Three things in that snippet are easy to get wrong: forgetting to update cursor, forgetting to break on the empty next-cursor, and accidentally returning early without draining the page. I have shipped each of those bugs at least once.

With range-over-func, the iterator owns the pagination and the caller writes a regular for loop:

func (c *Client) Invoices(ctx context.Context) iter.Seq2[Invoice, error] {
    return func(yield func(Invoice, error) bool) {
        cursor := ""
        for {
            page, err := c.ListInvoices(ctx, ListOptions{Cursor: cursor, Limit: 100})
            if err != nil {
                yield(Invoice{}, err)
                return
            }
            for _, inv := range page.Items {
                if !yield(inv, nil) {
                    return
                }
            }
            if page.NextCursor == "" {
                return
            }
            cursor = page.NextCursor
        }
    }
}

// Caller:
for inv, err := range c.Invoices(ctx) {
    if err != nil {
        return err
    }
    if err := process(inv); err != nil {
        return err
    }
}

What I love about this is that the caller code is shorter and the cancellation contract is now a single line: a normal break in the caller's loop makes yield return false, which exits the producer. No sentinel, no Close(). I have one of these wrappers for every paginated API we hit. The pattern composes well with the generics work I wrote about earlier — the iterator is generic in Invoice and drops into a generic Collect helper without ceremony.

The pattern I tried and dropped: pure-functional pipelines

A few weeks in I went through a phase of writing things like Filter(Map(Take(seq, 100), parse), valid). It looks elegant. It is also a nightmare to debug because the iterator doesn’t actually run until something pulls on it, so a panic shows up in a stack trace that bears no resemblance to where you defined the pipeline.

More importantly, my coworkers hated reading it. We’re a Go shop. We’re here because we want flat code we can step through with a debugger. A six-line for loop with explicit conditionals is more honest than a one-liner that hides three closures. After two PRs of pushback I deleted my xiter helpers and went back to writing the loop. The pipeline aesthetic belongs in a language that was built for it, like Rust or Haskell. (For what it’s worth, the Go team is being conservative about adding xiter to the standard library for similar reasons.)

The rule I follow now: a custom iterator is worth it when it hides real complexity (pagination, streaming, tree traversal). It is not worth it as a stylistic preference over a clear loop.

Where the syntax bit me: early return in yield

The bool return value from yield exists so the caller can stop iteration early. Here is where I lost an afternoon. I wrote this:

func Walk(root *Node) iter.Seq[*Node] {
    return func(yield func(*Node) bool) {
        var visit func(n *Node)
        visit = func(n *Node) {
            if n == nil {
                return
            }
            yield(n)              // BUG: ignored return value
            visit(n.Left)
            visit(n.Right)
        }
        visit(root)
    }
}

Spot the bug: I’m not checking what yield returned, so a caller’s break doesn’t stop the walk. The producer keeps calling yield after yield has already returned false, which the Go runtime treats as a fatal error — the loop dies with a “range function continued iteration” panic instead of exiting cleanly. A for loop that looked correct, a stack trace pointing nowhere useful, and an afternoon gone. The fix is to thread the bool back up:

func Walk(root *Node) iter.Seq[*Node] {
    return func(yield func(*Node) bool) {
        var visit func(n *Node) bool
        visit = func(n *Node) bool {
            if n == nil {
                return true
            }
            if !yield(n) {
                return false
            }
            return visit(n.Left) && visit(n.Right)
        }
        visit(root)
    }
}

Any recursive iterator needs this shape. The compiler will not save you. There is no errcheck-style linter for the yield bool that I’ve found, and go vet doesn’t flag it. Code review is your only defense, so I added “check yield return” to our review checklist.

Composition with channels: don’t, mostly

The most common bad-idea question I see online is “can I bridge a channel into an iterator?” Yes, mechanically:

func Chan[T any](ch <-chan T) iter.Seq[T] {
    return func(yield func(T) bool) {
        for v := range ch {
            if !yield(v) {
                return
            }
        }
    }
}

This works. It is also a footgun, because the iterator does not own the channel. If the caller breaks early, the producer goroutine on the other end is still pumping values into a buffer no one is reading — or, if the channel is unbuffered, blocked forever on a send. You end up with a goroutine leak that looks fine in tests and shows up in production after a week.

If you really need a channel-backed iterator, pass a context.Context so the producer can cancel cleanly on caller exit. I covered the broader pattern in my post on Go concurrency patterns I actually ship — the same advice applies here. The iterator is a consumer interface, not a substitute for select and done channels.

The slices and maps packages got nicer

The quiet win of the iterator update is what it did to the standard library. The old x/exp/maps Keys allocated a whole slice of keys; the Go 1.23 maps.Keys returns a Seq, so you can range over it without paying for the allocation:

for key := range maps.Keys(m) {
    if strings.HasPrefix(key, "x-") {
        process(key)
    }
}

Same for slices.Values, slices.All, slices.Backward. I use slices.Backward constantly when I’m walking a slice for safe deletion:

for i, v := range slices.Backward(items) {
    if shouldDelete(v) {
        items = slices.Delete(items, i, i+1)
    }
}

That used to be a manual reverse loop, for i := len(items)-1; i >= 0; i--. The new version reads the way it should.

When I still skip iterators

Three cases where I write a plain function instead:

First, when the producer is genuinely synchronous and small. A function returning []string is simpler than a Seq[string] for ten elements. Allocating a slice of ten strings costs nothing meaningful.

Second, when I want the result twice. A Seq doesn’t cache anything — ranging over it twice means the producer runs twice, which can mean two API calls, two database queries, two file reads. If callers might want to iterate more than once, a slice is honest about the cost.

Third, in tests. cmp.Diff on a []Invoice is one line. Diffing two iterators means collecting both into slices first, which means you’ve added ceremony for no gain.

This maps to the broader question of when a Go feature pays for its own complexity. For code I expect to write once and read fifty times, plain loops still win. For code I expect to wrap into a library and hand to other teams, iterators are the better contract. I keep most of my reusable libraries in a public portfolio repo and the conversion pattern has been the same in each: anything that paginates becomes a Seq2[T, error], anything that walks a tree becomes a Seq[T], and the rest stays a slice.

What I’d do this week if I were starting fresh

If you haven’t touched iterators yet, three concrete things you can do today.

Go find one place in your codebase where you have a for { ... if last { break } } pattern around an external API. Wrap it in an iterator. The diff will be smaller than you expect, the call site will get clearer, and you’ll learn the yield-bool dance in a context where it doesn’t matter much.

Replace one slice-returning maps.Keys(m) callsite (the x/exp version) with the standard-library iterator, or feed a Seq straight into slices.Sorted or slices.SortedFunc instead of building an intermediate slice. The standard-library calls are drop-in for most uses and the allocation savings are real on hot paths.

Resist writing your own Map and Filter. The Go team is still arguing about whether to ship them for a reason: in a language without method chaining, the syntax is genuinely ugly, and a regular for body with an if is more readable. Wait for the standard library to land an opinion before you build your own.

Iterators are not the most exciting change in Go this decade. They’re a small, well-scoped tool that quietly fixes a category of caller-side bugs. After eight months I reach for them more than I expected to, and I write less wrapper boilerplate than I did before. That’s enough.