Chapter 4: Goroutines & Channels

Concurrency is Go’s superpower. Goroutines are lightweight threads managed by the Go runtime, and channels provide safe communication between them.

Go was built from the ground up with concurrency as a first-class feature. Unlike traditional threads that are heavyweight and expensive, goroutines are remarkably cheap - you can spawn thousands or even millions without breaking a sweat. This makes Go ideal for I/O-bound applications, web servers, and systems that need to handle many concurrent operations.

The philosophy behind Go’s concurrency model comes from Tony Hoare’s Communicating Sequential Processes (CSP). Instead of sharing memory between threads and using locks to coordinate access, Go encourages you to communicate through channels. The mantra: “Don’t communicate by sharing memory; share memory by communicating.”

This chapter covers goroutines, channels, and essential patterns for coordinating concurrent operations. By the end, you’ll understand not just the syntax, but when and why to use these primitives. You’ll learn the difference between buffered and unbuffered channels, how to use select for multiplexing, and practical patterns for building robust concurrent systems.

A goroutine is a function executing concurrently with other goroutines in the same address space. Think of them as ultra-lightweight threads - they start with a tiny stack (a few KB) that grows and shrinks as needed. The Go runtime multiplexes thousands of goroutines onto a small number of OS threads automatically.

This is fundamentally different from OS threads. Creating an OS thread typically reserves 1-2 MB of stack space upfront. Creating a goroutine allocates just 2 KB. Want to handle 10,000 simultaneous connections? With OS threads, that’s 10-20 GB of reserved stack space. With goroutines, it’s 20 MB. This efficiency is why Go excels at high-concurrency scenarios.

The Go scheduler manages goroutines using a work-stealing algorithm across multiple OS threads. When a goroutine blocks (on I/O, channels, or system calls), the scheduler runs another goroutine on that thread. You don’t manage threads - you just spawn goroutines and let the runtime handle scheduling.

Concurrency Made Simple: Starting a goroutine is as easy as prefixing a function call with go. No thread pools, no manual thread management, no pthread_create. The simplicity encourages you to use concurrency where it makes sense.

Natural Code Structure: Goroutines let you structure code the way you think about it. Need to fetch data from three APIs simultaneously? Launch three goroutines. Need to process a million items? Launch a goroutine per item (or use a worker pool pattern). The code reflects your intent.

I/O Performance: When a goroutine makes a blocking I/O call (network, file, database), the Go runtime switches to another goroutine instead of blocking the OS thread. This means your program stays responsive even with thousands of pending I/O operations.

CPU-Bound Parallelism: Go automatically runs goroutines in parallel across available CPU cores. No special configuration needed - if you have 8 cores, goroutines can execute on all 8 simultaneously.

Start a goroutine by prefixing any function call with go. The function executes concurrently while your program continues:
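A minimal sketch (the function name and message are illustrative; a sync.WaitGroup makes main wait so the program doesn’t exit before the goroutine runs):

```go
package main

import (
	"fmt"
	"sync"
)

// say prints a greeting; we run it concurrently with the go keyword.
func say(msg string, wg *sync.WaitGroup) {
	defer wg.Done()
	fmt.Println(msg)
}

func main() {
	var wg sync.WaitGroup
	wg.Add(1)
	go say("hello from a goroutine", &wg) // runs concurrently with main
	wg.Wait()                             // wait so main doesn't exit first
}
```

Without the wait, main could return before the goroutine gets a chance to run — when main exits, all goroutines are killed.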

Channels are Go’s mechanism for safe communication between goroutines. They provide a way to send and receive values between concurrent executions without explicit locks or shared memory concerns. A channel is essentially a typed, thread-safe queue.

The key insight: instead of having goroutines share memory and coordinate access with mutexes, goroutines send data through channels. One goroutine sends, another receives. The channel handles synchronization automatically. This eliminates entire categories of concurrency bugs, such as data races on shared state (though channels can still deadlock if misused).

Channels embody Go’s concurrency philosophy: “Don’t communicate by sharing memory; share memory by communicating.” When you send a value through a channel, you’re transferring ownership. The sender shouldn’t access that value afterward, and the receiver gets exclusive access. This makes reasoning about concurrent code much simpler.

Type Safety: Channels are typed. A chan int carries integers, chan string carries strings. The compiler prevents you from sending the wrong type. This catches errors at compile time rather than runtime.

Built-in Synchronization: Channels synchronize goroutines automatically. When a goroutine sends on an unbuffered channel, it waits until another goroutine receives. No manual locking required, and no data races on the values passed through the channel. The language itself guarantees safe communication.

First-Class Language Feature: Unlike most languages where concurrency is bolted on through libraries, Go channels are part of the language. They have dedicated syntax (<-), work with select statements, and integrate deeply with the runtime.

Elegant Patterns: Channels enable elegant concurrency patterns: pipelines, fan-out/fan-in, timeouts, cancellation. These patterns are cumbersome or impossible in traditional threading models, but natural with channels.

Channels are typed conduits for sending and receiving values. Create them with make, send with ch <- value, receive with value := <-ch:
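A minimal sketch of send and receive (channel and variable names are illustrative):

```go
package main

import "fmt"

func main() {
	ch := make(chan string) // unbuffered channel of strings

	go func() {
		ch <- "ping" // send: blocks until a receiver is ready
	}()

	msg := <-ch // receive: blocks until a value arrives
	fmt.Println("received:", msg)
}
```

The arrow always points in the direction the data flows: into the channel when sending, out of it when receiving.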

Unbuffered channels (created with make(chan T)) require both sender and receiver to be ready simultaneously. The send operation blocks until another goroutine receives. The receive blocks until another goroutine sends. This creates a synchronization point - you’re guaranteed the receiver has the value before the sender continues.

Buffered channels (created with make(chan T, capacity)) have internal storage. Sends don’t block until the buffer is full. Receives don’t block until the buffer is empty. This decouples senders from receivers - they don’t need to rendezvous at the exact same moment.
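The difference is easy to see in a sketch: with a buffer of capacity 2, the first two sends complete without any receiver present (values here are illustrative):

```go
package main

import "fmt"

func main() {
	// Buffered channel with capacity 2: the first two sends
	// complete immediately, no receiver required yet.
	ch := make(chan int, 2)
	ch <- 1
	ch <- 2
	// A third send here would block until a receive frees space.

	fmt.Println(<-ch) // 1
	fmt.Println(<-ch) // 2
}
```

The same two sends on an unbuffered channel would deadlock, because no goroutine is ready to receive.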

Use Unbuffered Channels When:

  • You need strong synchronization guarantees - confirmation that the receiver got the value
  • Passing ownership of resources that shouldn’t be duplicated
  • Implementing request-response patterns where you need to know the request was received
  • Coordinating goroutines at specific points

Use Buffered Channels When:

  • You want to decouple producer and consumer rates - they don’t need to run at the same speed
  • Implementing fixed-size work queues where backpressure is desired
  • Batching operations - collect several items before processing
  • Preventing goroutine blocking when immediate sends are important

A Common Misconception: Buffered channels aren’t “faster” than unbuffered. They don’t magically improve performance. They change synchronization semantics. Use them when decoupling makes sense, not as a performance hack.

Capacity Guidelines: Choose buffer size based on semantics, not performance tuning. A buffer of 1 means “allow one pending send.” A buffer matching expected burst size prevents blocking during bursts. Arbitrarily large buffers (like 10000) often indicate design problems - unbounded buffers can hide resource exhaustion bugs.

You can restrict channel direction in function signatures:
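A sketch of directional channel parameters (function names are illustrative): chan<- T is send-only, <-chan T is receive-only, and the compiler rejects operations in the wrong direction.

```go
package main

import "fmt"

// produce may only send on ch (chan<- int).
func produce(ch chan<- int) {
	for i := 1; i <= 3; i++ {
		ch <- i
	}
	close(ch) // senders close; receivers never should
}

// consume may only receive from ch (<-chan int).
func consume(ch <-chan int) {
	for v := range ch { // range ends when ch is closed
		fmt.Println("got", v)
	}
}

func main() {
	ch := make(chan int) // bidirectional; converts implicitly at each call
	go produce(ch)
	consume(ch)
}
```

Direction restrictions document intent and let the compiler catch mistakes like a consumer accidentally sending or closing.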

Close channels to signal completion. Receivers can detect closed channels:
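A sketch of the two-value receive form, which reports whether the channel is still open (values are illustrative):

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 3)
	ch <- 10
	ch <- 20
	close(ch) // only the sender should close

	v, ok := <-ch
	fmt.Println(v, ok) // buffered values are still delivered after close
	v, ok = <-ch
	fmt.Println(v, ok)
	v, ok = <-ch
	fmt.Println(v, ok) // closed and drained: zero value, ok == false
}
```

Note that closing doesn’t discard buffered values — receivers drain the buffer first, then start seeing the zero value with ok == false.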

The select statement is to channels what switch is to values. It lets a goroutine wait on multiple channel operations simultaneously. Whichever channel operation can proceed first gets executed. If multiple are ready, one is chosen at random.

Think of select as a multiplexer for channels. Instead of blocking on a single channel receive, you can block on multiple receives (or sends) and handle whichever happens first. This is essential for timeouts, cancellation, combining multiple data sources, and non-blocking operations.

Timeouts: Wrap any channel operation in a select with time.After() to add a timeout. If the operation doesn’t complete in time, handle the timeout case.
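A sketch of the timeout pattern (durations are illustrative; here the simulated work deliberately takes longer than the timeout):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	result := make(chan string)

	go func() {
		time.Sleep(200 * time.Millisecond) // simulate slow work
		result <- "done"
	}()

	select {
	case r := <-result:
		fmt.Println("got:", r)
	case <-time.After(50 * time.Millisecond):
		fmt.Println("timed out") // fires first: work takes 200ms
	}
}
```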

Cancellation: Use select with a done channel to make any operation cancellable. When the done channel closes, the select can exit immediately instead of waiting for a potentially slow operation.

Fan-In: Merge multiple channels into one. Select from all input channels and forward values to a single output channel. This combines concurrent data sources.
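One common fan-in sketch uses a forwarding goroutine per input rather than a literal select loop (selecting directly also works, but requires nil-ing out closed channels); all names here are illustrative:

```go
package main

import (
	"fmt"
	"sync"
)

// merge forwards values from every input channel onto one output.
func merge(inputs ...<-chan int) <-chan int {
	out := make(chan int)
	var wg sync.WaitGroup
	for _, in := range inputs {
		wg.Add(1)
		go func(in <-chan int) {
			defer wg.Done()
			for v := range in {
				out <- v
			}
		}(in)
	}
	go func() {
		wg.Wait()
		close(out) // close only after every input is drained
	}()
	return out
}

func main() {
	a := make(chan int)
	b := make(chan int)
	go func() { a <- 1; a <- 2; close(a) }()
	go func() { b <- 10; close(b) }()

	sum := 0
	for v := range merge(a, b) {
		sum += v
	}
	fmt.Println("sum:", sum) // arrival order varies; the sum doesn't
}
```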

Non-Blocking Operations: Use select with a default case to make channel operations non-blocking. Try to send or receive, but if the channel isn’t ready, do something else immediately.
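A sketch of non-blocking receive and send using default (values are illustrative):

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 1)

	// Non-blocking receive: default runs because nothing was sent yet.
	select {
	case v := <-ch:
		fmt.Println("received", v)
	default:
		fmt.Println("no value ready")
	}

	// Non-blocking send: succeeds because the buffer has room.
	select {
	case ch <- 42:
		fmt.Println("sent 42")
	default:
		fmt.Println("channel full")
	}
}
```

Use this sparingly — busy-looping on a default case burns CPU. It’s best for "try once, then do something else" situations.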

select waits on multiple channel operations. Whichever is ready first gets executed:
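A sketch of a basic select loop over two channels (the sleeps stand in for work of different durations):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	c1 := make(chan string)
	c2 := make(chan string)

	go func() {
		time.Sleep(10 * time.Millisecond)
		c1 <- "from c1"
	}()
	go func() {
		time.Sleep(50 * time.Millisecond)
		c2 <- "from c2"
	}()

	// Each iteration blocks until whichever channel is ready first.
	for i := 0; i < 2; i++ {
		select {
		case m := <-c1:
			fmt.Println(m)
		case m := <-c2:
			fmt.Println(m)
		}
	}
}
```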

Signal goroutine completion with a done channel:
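A sketch of the done-channel idiom. Using chan struct{} signals that the channel carries no data, only the event itself:

```go
package main

import "fmt"

func main() {
	done := make(chan struct{}) // empty struct: signal-only, zero-size

	go func() {
		fmt.Println("working...")
		close(done) // closing broadcasts completion to all receivers
	}()

	<-done // blocks until the goroutine closes done
	fmt.Println("worker finished")
}
```

Closing rather than sending means any number of goroutines can wait on the same signal — every pending receive unblocks at once.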

Tickers create a channel that delivers values at regular intervals. They’re perfect for periodic tasks like polling, metrics collection, or rate limiting. Unlike time.Sleep in a loop, tickers account for execution time - if your task takes 100ms and you have a 1-second ticker, you still get events every second, not every 1.1 seconds.
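A sketch of a ticker loop (the interval is shortened for demonstration; real polling intervals are usually seconds):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	ticker := time.NewTicker(20 * time.Millisecond)
	defer ticker.Stop() // always stop tickers to release their resources

	for i := 1; i <= 3; i++ {
		<-ticker.C // ticker.C delivers a time.Time every interval
		fmt.Println("tick", i)
	}
}
```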

The token bucket pattern controls the rate at which operations occur. Imagine a bucket that holds tokens. Operations consume tokens. Tokens refill at a fixed rate. When the bucket is empty, operations must wait for new tokens. This is the foundation of rate limiting.
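A simplified token bucket sketch using only the primitives from this chapter — a buffered channel as the bucket and a ticker goroutine as the refiller (capacity and intervals are illustrative, much smaller than a real limiter would use):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// The bucket is a buffered channel; each element is one token.
	tokens := make(chan struct{}, 2)
	tokens <- struct{}{} // start with a full bucket of 2 tokens
	tokens <- struct{}{}

	// Refill one token every 25ms; drop it if the bucket is full.
	go func() {
		ticker := time.NewTicker(25 * time.Millisecond)
		defer ticker.Stop()
		for range ticker.C {
			select {
			case tokens <- struct{}{}:
			default: // bucket full, discard the token
			}
		}
	}()

	// Each operation must take a token before proceeding;
	// requests 3 and 4 block until the refiller catches up.
	for i := 1; i <= 4; i++ {
		<-tokens
		fmt.Println("request", i)
	}
}
```

The non-blocking send in the refiller is what caps the bucket: excess tokens are dropped instead of accumulating without bound.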

These patterns - periodic tickers and token buckets - are building blocks for the more sophisticated rate limiter you’ll implement in the exercises. Understanding these simpler versions first makes the full implementation much clearer.

Key Takeaways

  1. Goroutines are cheap - spawn thousands without worry
  2. Channels synchronize - use them to coordinate goroutines
  3. Unbuffered blocks - both sender and receiver must be ready
  4. Buffered decouples - sender can continue until buffer is full
  5. Select multiplexes - wait on multiple channels simultaneously
  6. Close signals completion - only sender should close
  7. Receiving from a closed channel never blocks - it yields the zero value and ok = false

Practice Exercises

Reinforce your understanding of goroutines and channels with these exercises.

Simple Message Passing

~5 min · easy

Create a program where a goroutine sends your name through a channel and main receives and prints it.

Fan-Out Pattern

~15 min · medium

Create 3 worker goroutines that all read from the same job channel. Send 9 jobs and have workers print which worker processed each job.

Build a Rate Limiter

~20 min · hard

Create a rate limiter that allows only 3 requests per second. Use a ticker channel to refill tokens and a buffered channel to store available tokens.



Next up: Chapter 5: Sync Primitives