Go made concurrency approachable. Before goroutines, writing concurrent code meant wrestling with thread pools, callback hell, or async/await ceremony. Goroutines are cheap, the syntax is minimal, channels are elegant. For many programs, the model works beautifully.
But as programs grow, Go's concurrency model reveals a structural gap: there's no built-in concept of a goroutine tree. Goroutines are launched, they do work, and the only tools for coordinating their lifecycle — channels, sync.WaitGroup, context.Context — are all manual and error-prone. This is where structured concurrency comes in.
What structured concurrency actually means
The term was popularized by Nathaniel J. Smith's landmark 2018 blog post, "Notes on structured concurrency, or: Go statement considered harmful" (Martin Sústrik had coined it earlier for the libdill C library). The core idea is simple: concurrent tasks should have a clearly defined parent-child relationship, and a parent task cannot complete until all its children have completed.
This is the concurrency equivalent of structured programming — the insight that every goto could be replaced with loops and function calls, giving you a clear call stack. Structured concurrency gives you an equivalent guarantee for concurrent tasks: no goroutine (or coroutine or fiber) escapes the scope that created it.
"Structured concurrency means that the lifetime of tasks is nested within the lifetime of their parent."
Kotlin's coroutines implement this through CoroutineScope. Every coroutine is launched within a scope. When the scope completes, all coroutines launched within it complete or are cancelled. The scope is the unit of concurrency lifetime management.
// Kotlin — structured: child coroutines are bound to this scope
suspend fun fetchDashboard(): Dashboard = coroutineScope {
    val user = async { fetchUser() }
    val metrics = async { fetchMetrics() }
    Dashboard(user.await(), metrics.await())
    // scope exits only when both async blocks complete
}
What Go does instead
Go's answer to lifecycle management is context.Context. You create a context, pass it through the call tree, and cancel it to signal goroutines to stop. Combined with sync.WaitGroup to wait for completion:
func fetchDashboard(ctx context.Context) (*Dashboard, error) {
    var wg sync.WaitGroup
    var user *User
    var metrics *Metrics
    var userErr, metricsErr error

    wg.Add(2)
    go func() {
        defer wg.Done()
        user, userErr = fetchUser(ctx)
    }()
    go func() {
        defer wg.Done()
        metrics, metricsErr = fetchMetrics(ctx)
    }()
    wg.Wait()

    if userErr != nil {
        return nil, userErr
    }
    if metricsErr != nil {
        return nil, metricsErr
    }
    return &Dashboard{User: user, Metrics: metrics}, nil
}
This works. But notice what's missing: if fetchUser returns an error, fetchMetrics continues running until it finishes naturally. There's no automatic cancellation of sibling goroutines on failure. You have to wire that up yourself with a cancel context — which most code doesn't bother to do.
golang.org/x/sync/errgroup
The errgroup package is Go's closest approximation to structured concurrency. It wraps the WaitGroup pattern and adds error propagation; with errgroup.WithContext, it also cancels a shared context on the first failure:
func fetchDashboard(ctx context.Context) (*Dashboard, error) {
    g, ctx := errgroup.WithContext(ctx)
    var user *User
    var metrics *Metrics

    g.Go(func() error {
        var err error
        user, err = fetchUser(ctx)
        return err
    })
    g.Go(func() error {
        var err error
        metrics, err = fetchMetrics(ctx)
        return err
    })
    if err := g.Wait(); err != nil {
        return nil, err
    }
    return &Dashboard{User: user, Metrics: metrics}, nil
}
This is much better. When either goroutine returns an error, the shared context is cancelled, signalling the other to stop — but only if that goroutine actually checks ctx.Done(). There's still no enforcement: a goroutine that ignores the cancellation signal simply keeps running, and g.Wait() blocks until it finishes.
What Go gets right
Go's model is honest about what it provides and what it doesn't. The runtime is extraordinarily good at scheduling goroutines efficiently. The channel model, when used correctly, produces clear and readable concurrent code. The explicit passing of context as a function argument — rather than thread-local storage or ambient values — makes cancellation and deadlines visible in every function signature.
For most production Go code, the combination of errgroup, disciplined context propagation, and sensible goroutine hygiene is sufficient. Go's pragmatism is also a feature: the model is simple enough that you can hold the whole thing in your head.
What Go gets wrong
The absence of a structured scope means goroutine leaks are common and hard to detect. Unlike with memory, the Go runtime has no garbage collector for goroutines. A goroutine that blocks forever because nobody signals its channel does appear in a goroutine dump, but nothing flags it as leaked — you have to go looking, typically with pprof's goroutine profile or a test-time checker like uber-go/goleak. Large Go codebases often develop informal conventions (always cancel contexts, always use defer wg.Done()) to compensate — but these conventions can't be enforced by the compiler.
The lack of a panic propagation contract across goroutines is also significant. A panic in a goroutine that isn't recovered crashes the whole program. There's no way to say "propagate panics from child goroutines to the parent scope" without building the mechanism yourself.
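A hand-rolled sketch of such a contract: recover in each child, stash the first panic value, and re-panic from Wait in the parent. This Group type is invented for illustration, and note that it loses the child's original stack trace:

```go
package main

import (
    "fmt"
    "sync"
)

// Group is a tiny sketch of a scope that forwards child panics to the
// caller of Wait instead of crashing the whole program.
type Group struct {
    wg        sync.WaitGroup
    mu        sync.Mutex
    recovered any // first panic value recovered from a child, if any
}

func (g *Group) Go(f func()) {
    g.wg.Add(1)
    go func() {
        defer g.wg.Done()
        defer func() {
            if r := recover(); r != nil {
                g.mu.Lock()
                if g.recovered == nil {
                    g.recovered = r
                }
                g.mu.Unlock()
            }
        }()
        f()
    }()
}

// Wait blocks until all children finish, then re-panics in the parent
// goroutine if any child panicked.
func (g *Group) Wait() {
    g.wg.Wait()
    if g.recovered != nil {
        panic(g.recovered)
    }
}

func main() {
    var g Group
    g.Go(func() { panic("child blew up") })
    g.Go(func() { /* a sibling that completes normally */ })

    defer func() {
        fmt.Println("recovered in parent:", recover())
    }()
    g.Wait() // re-panics here, in the parent goroutine
}
```

This is roughly what structured runtimes give you for free; in Go you have to decide per call site whether a panic should cross the goroutine boundary.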
Will Go adopt structured concurrency?
It seems unlikely as a language-level primitive given Go's commitment to simplicity and backward compatibility. But the ecosystem is moving in that direction. Libraries like errgroup and singleflight encode structured patterns. Some proposals for a Go 2 concurrency overhaul have suggested scope-based goroutine management.
For now, Go's concurrency story is: excellent primitives, good ergonomics for simple cases, buy-your-own structure for complex ones. Knowing when you've crossed into "complex" — and building the structure accordingly — is one of the most important judgment calls in Go programming.