Go sync.Mutex: CAS Fast Path, Spinning, and Starvation Mode
Go's sync.Mutex avoids kernel syscalls on the uncontended fast path using a single atomic CAS instruction. On contention, goroutines spin (up to 4 times) before sleeping via a semaphore. If a goroutine waits more than 1ms, the mutex enters starvation mode: ownership transfers directly to the waiting goroutine, bypassing newly arriving goroutines.
Mutex state fields
```go
type Mutex struct {
	state int32  // packed: locked bit + waiter count + flags
	sema  uint32 // semaphore for goroutine sleep/wake
}
```
```go
const (
	mutexLocked = 1 << iota // bit 0: mutex is held
	mutexWoken              // bit 1: a waiter is being woken
	mutexStarving           // bit 2: starvation mode active
	mutexWaiterShift = iota // bits 3+: waiter count

	starvationThresholdNs = 1e6 // 1ms before starvation mode
)
```
The state field packs multiple values into a single 32-bit integer: the lock bit, a "woken" flag (one goroutine is already being woken to avoid thundering herd), a starvation flag, and a 29-bit waiter count.
Fast path: uncontended lock
```go
func (m *Mutex) Lock() {
	// Try to grab the lock with a single atomic CAS.
	if atomic.CompareAndSwapInt32(&m.state, 0, mutexLocked) {
		return // acquired — no syscall, no context switch
	}
	m.lockSlow()
}
```
atomic.CompareAndSwapInt32 is a CPU-level instruction (LOCK CMPXCHG on x86). If m.state == 0 (unlocked, no waiters), it atomically sets the locked bit and returns. The entire fast path is ~5 nanoseconds — no OS involvement.
Slow path: spinning then sleeping
When the CAS fails, lockSlow runs. It first spins:
```go
// Spinning is allowed only when:
//   - the mutex is locked but NOT in starvation mode
//   - runtime_canSpin holds: CPU count > 1, goroutine is running (not queued), <4 spins done
if old&(mutexLocked|mutexStarving) == mutexLocked && runtime_canSpin(iter) {
	runtime_doSpin() // ~30 PAUSE instructions, yields CPU briefly
	iter++
	continue
}
```
After spinning, if the lock is still unavailable, the goroutine calls runtime_SemacquireMutex(&m.sema, ...) to sleep. The runtime parks the goroutine and records when it started waiting.
On Unlock():
```go
func (m *Mutex) Unlock() {
	new := atomic.AddInt32(&m.state, -mutexLocked) // clear the locked bit
	if new != 0 {
		m.unlockSlow(new) // wake a waiter if needed
	}
}
```
runtime_Semrelease wakes one sleeping goroutine, which retries the CAS.
Normal mode favors new goroutines over waiting goroutines — can cause starvation under sustained load
In normal mode, a goroutine that wakes from sleep must compete for the lock against goroutines that are already on-CPU. The on-CPU goroutines have an advantage: they need no context switch. Under sustained contention, a waiting goroutine can lose the CAS repeatedly as new arrivals keep grabbing the lock first. If a goroutine waits more than 1ms, Go switches the mutex to starvation mode.
Prerequisites
- Compare-and-swap (CAS)
- Goroutine scheduling
- Semaphores
Key Points
- Normal mode: FIFO queue for sleeping goroutines, but newly arrived goroutines compete before queued ones.
- Starvation mode (triggered after 1ms wait): mutex ownership is handed directly to the front-of-queue waiter. New goroutines queue at the back immediately without spinning.
- Exit starvation mode when: the waiter at the front is the last in queue, or it waited less than 1ms.
- mutexWoken flag prevents multiple wakeups — only one goroutine is woken per unlock to avoid thundering herd.
RWMutex for read-heavy workloads
When reads vastly outnumber writes, sync.RWMutex allows concurrent reads:
```go
var mu sync.RWMutex
var data = make(map[string]string) // must be initialized: writing to a nil map panics

// Multiple goroutines can hold RLock simultaneously.
func read(key string) string {
	mu.RLock()
	defer mu.RUnlock()
	return data[key]
}

// Write requires exclusive access — blocks until all RLocks release.
func write(key, value string) {
	mu.Lock()
	defer mu.Unlock()
	data[key] = value
}
```
RWMutex has a write-starvation problem of its own: if reads are continuous, a writer can wait indefinitely. The standard library's implementation gives writers priority once they start waiting — new RLock() calls block if a writer is queued.
sync.Map: built-in concurrent map for specific access patterns
Go's built-in map is not safe for concurrent access. Options:
- sync.Mutex + map: simple, correct, good for balanced read/write
- sync.RWMutex + map: better for read-heavy workloads
- sync.Map: optimized for two specific patterns:
  - Keys are written once and read many times (cache-like)
  - Multiple goroutines write disjoint sets of keys
```go
var m sync.Map
m.Store("key", "value")
if v, ok := m.Load("key"); ok {
	fmt.Println(v)
}

// LoadOrStore: atomic check-and-set.
actual, loaded := m.LoadOrStore("key", "new-value")
// loaded == true if the key existed; actual is the existing value.
```
sync.Map uses two internal maps (read-only with atomic access, dirty with mutex) to avoid locking on reads of stable keys. For a general-purpose concurrent map with mixed read/write, a sharded sync.Mutex + map is usually faster.
Under high contention, a goroutine waits 2ms for a mutex. The mutex has been in normal mode the entire time. What happens next, and why?
Difficulty: medium. Context: starvationThresholdNs = 1e6 ns (1ms). The goroutine has been sleeping in the semaphore queue for 2ms without acquiring the lock.

A. The goroutine keeps waiting — Go doesn't have starvation protection
Incorrect. Go's sync.Mutex has explicit starvation mode, triggered when a waiter exceeds starvationThresholdNs (1ms). Goroutines don't wait indefinitely in normal mode under sustained contention.

B. The goroutine is killed by the runtime to prevent resource exhaustion
Incorrect. Go doesn't kill goroutines for waiting on a mutex. There's no timeout mechanism in sync.Mutex — starvation mode handles fairness, not termination.

C. When the goroutine is next woken and retries, it sets the mutexStarving flag. On the next unlock, the mutex is in starvation mode — ownership transfers directly to the front-of-queue waiter, bypassing new arrivals.
Correct! After waiting longer than starvationThresholdNs, the goroutine sets starving=true in its local state. When it's woken and successfully updates the mutex state via CAS, it sets the mutexStarving bit. In starvation mode, new goroutines queue at the back immediately (no spinning), and Unlock hands ownership directly to the front-of-queue waiter via runtime_Semrelease with handoff=true. The mutex exits starvation mode when the front waiter was the last in queue or waited less than 1ms.

D. The goroutine bumps its priority and the OS scheduler gives it CPU time preferentially
Incorrect. Go's mutex starvation mode operates at the Go runtime level, not the OS scheduler level. The runtime handles fairness by changing how the mutex chooses its next holder, not by adjusting OS thread priorities.

Hint: What does starvationThresholdNs control, and what changes about mutex ownership in starvation mode?