Defer Performance and Costs
Understanding how defer works, its evolution in Go, when it's free, and when it has measurable costs.
Introduction
The defer statement is one of Go's most elegant features for resource cleanup. However, developers often wonder about its performance cost. The answer is nuanced: modern Go has optimized defer extensively, making it nearly free in most cases, but specific patterns still have measurable overhead.
How Defer Works
When you defer a function call, Go doesn't execute it immediately. Instead:
- The function arguments are evaluated at the defer statement
- The deferred function is pushed onto a defer list
- When the enclosing function returns (normally or via panic), deferred functions execute in LIFO order
```go
func Example() {
	fmt.Println("1")
	defer fmt.Println("3")
	defer fmt.Println("2")
	fmt.Println("1b")
}
// Output:
// 1
// 1b
// 2
// 3
```

The defer list is maintained per-goroutine and requires tracking metadata for each deferred call.
Evolution of Defer Performance
Go has continuously optimized defer:
Go 1.12 and earlier: Defer had significant overhead, adding roughly 25-50ns per deferred call, because every defer allocated a record on the heap.
Go 1.13: Defer records for most calls are allocated on the stack instead of the heap, reducing overhead by roughly 30%.
Go 1.14: Open-coded defers. For functions with a small number of unconditional defers, the compiler inlines the deferred call at each return site instead of maintaining a defer list, making such defers essentially zero-cost in straight-line code.
```go
// Go 1.14+ compiles this defer to near-zero-cost inline cleanup
func ReadFile(path string) ([]byte, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	// ... read and return the contents
	return io.ReadAll(f)
}
```

The compiler recognizes the pattern and emits the cleanup directly at each return site.
Modern Defer Cost: Near Zero in Most Cases
For simple, straight-line defer operations, the cost is negligible:
```go
// The work is placed in helper functions so the defer runs once per
// iteration. Putting defer directly inside the b.N loop would accumulate
// defer records until the benchmark function returned, invalidating
// the measurement.
func useWithDefer() {
	f := &File{}
	f.Open()
	defer f.Close()
	f.Read()
}

func useWithoutDefer() {
	f := &File{}
	f.Open()
	f.Read()
	f.Close()
}

func BenchmarkSimpleDefer(b *testing.B) {
	b.Run("WithDefer", func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			useWithDefer()
		}
	})
	b.Run("WithoutDefer", func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			useWithoutDefer()
		}
	})
}
// Both versions are essentially identical in performance
```

Result: With modern optimizations, the defer version is as fast as manual cleanup.
The Critical Pitfall: Defer in Loops
The performance cliff appears when you use defer in loops:
```go
// SLOW: each iteration creates a defer record
for _, file := range files {
	f, _ := os.Open(file)
	defer f.Close()
	process(f)
}
// This accumulates defer records on the defer list,
// executing them only when the enclosing function returns,
// not when the loop completes
```

Benchmark:
```go
func BenchmarkDeferInLoop(b *testing.B) {
	for i := 0; i < b.N; i++ {
		for j := 0; j < 1000; j++ {
			f := &mockFile{}
			f.Open()
			defer f.Close()
		}
	}
}

func BenchmarkManualCleanupInLoop(b *testing.B) {
	for i := 0; i < b.N; i++ {
		for j := 0; j < 1000; j++ {
			f := &mockFile{}
			f.Open()
			f.Close()
		}
	}
}
```

Results: DeferInLoop is 2-5x slower because it accumulates defer records and delays all cleanup until the benchmark function returns.
The Solution: Wrap Loop in Function
When you need defer-like behavior per loop iteration, wrap in an anonymous function:
```go
// WRONG: defers accumulate across the whole loop
for _, file := range files {
	f, _ := os.Open(file)
	defer f.Close()
	process(f)
}

// RIGHT: wrap each iteration in a function
for _, file := range files {
	func() {
		f, _ := os.Open(file)
		defer f.Close()
		process(f)
	}()
}
// Defers now execute at the end of each iteration
```

The anonymous function call adds minimal overhead (the compiler can often inline it) while providing per-iteration cleanup:
```go
func BenchmarkDeferInWrappedLoop(b *testing.B) {
	for i := 0; i < b.N; i++ {
		for j := 0; j < 1000; j++ {
			func() {
				f := &mockFile{}
				f.Open()
				defer f.Close()
			}()
		}
	}
}
// Performance is similar to manual cleanup, and the code is clearer
```

Defer and Closures
When a defer statement creates a closure, it captures variables:
```go
func Example() {
	x := 10
	defer func() {
		fmt.Println(x) // Captures x by reference
		x = 20         // Modifies the captured x (no visible effect here)
	}()
	x = 30
}
// Output: 30 (the value at the time the deferred closure runs)
```

Captured variables are read when the deferred closure executes, not when the defer statement runs:
```go
func Example2() {
	for i := 0; i < 3; i++ {
		defer func() {
			fmt.Println(i) // In Go 1.21 and earlier, all closures capture the same i
		}()
	}
}
// Output in Go 1.21 and earlier: 3, 3, 3 (not 0, 1, 2)
// Since Go 1.22, each iteration has its own i, so this prints 2, 1, 0

// To capture the value explicitly (required before Go 1.22), create a local copy:
for i := 0; i < 3; i++ {
	i := i // Shadow i with a copy
	defer func() {
		fmt.Println(i) // Captures the copy
	}()
}
// Output: 2, 1, 0
```

Function Arguments: Evaluated at the Defer Statement
Unlike closures, function arguments are evaluated immediately:
```go
func Example() {
	x := 10
	defer fmt.Println(x) // x evaluated now (prints 10)
	x = 20
}
// Output: 10

// This is useful for timing:
func Timer() {
	defer func(start time.Time) {
		fmt.Println("Elapsed:", time.Since(start))
	}(time.Now()) // time.Now() evaluated at the defer statement
	// ... expensive operation
}
```

Getting this wrong inverts the measurement:
```go
// WRONG: time.Now() runs when the function exits,
// so the measured elapsed time is always ~0
defer func() {
	start := time.Now() // No! This is the end time, not the start
	fmt.Println("Elapsed:", time.Since(start))
}()

// CORRECT: capture the start time before deferring
start := time.Now()
defer func() {
	fmt.Println("Elapsed:", time.Since(start))
}()

// ALSO CORRECT: pass it as an argument, evaluated at the defer statement
defer func(start time.Time) {
	fmt.Println("Elapsed:", time.Since(start))
}(time.Now())
```

Named Return Values with Defer
Defer can modify named return values:
```go
func Example() (result int, err error) {
	defer func() {
		if r := recover(); r != nil {
			result = -1
			err = fmt.Errorf("panicked: %v", r)
		}
	}()
	risky() // Work that might panic
	result = 42
	return // Uses the named return values
}
// If risky() panics, the deferred function sets result = -1 and a non-nil err
```

This is powerful for error handling but can obscure control flow. Use it judiciously.
Benchmark: Function with Multiple Defers
```go
type Resource struct {
	closed bool
}

func (r *Resource) Close() {
	r.closed = true
}

// Helper functions keep the defers scoped to one iteration's work,
// so they do not accumulate across the b.N loop.
func useThreeDefers() {
	r1, r2, r3 := &Resource{}, &Resource{}, &Resource{}
	defer r1.Close()
	defer r2.Close()
	defer r3.Close()
	_ = r1.closed // Use resources
}

func useManualCleanup() {
	r1, r2, r3 := &Resource{}, &Resource{}, &Resource{}
	_ = r1.closed // Use resources
	r3.Close()
	r2.Close()
	r1.Close()
}

func BenchmarkMultipleDefers(b *testing.B) {
	b.Run("ThreeDefers", func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			useThreeDefers()
		}
	})
	b.Run("ManualCleanup", func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			useManualCleanup()
		}
	})
}
```

Result: In modern Go, both are equivalent in performance.
Panic Recovery with Defer
Defer executes even when panic occurs:
```go
func MustSucceed() {
	defer func() {
		if r := recover(); r != nil {
			log.Printf("Recovered from panic: %v", r)
		}
	}()
	risky() // Might panic
	fmt.Println("Completed successfully")
}
```

This is essential for robustness; the defer overhead is negligible compared to the safety benefit.
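Beyond logging, the same defer-plus-recover shape is often wrapped into a reusable helper that converts a panic into an ordinary error via a named return value. A sketch, where the helper name Safe is our own:

```go
package main

import "fmt"

// Safe runs fn and converts any panic into an ordinary error,
// using a deferred recover that writes to the named return value.
func Safe(fn func()) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered: %v", r)
		}
	}()
	fn()
	return nil
}

func main() {
	fmt.Println(Safe(func() {}))                // <nil>
	fmt.Println(Safe(func() { panic("boom") })) // recovered: boom
}
```

This pattern is common at goroutine entry points and request handlers, where one runaway panic should not crash the whole process.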
Real-World Pattern: Resource Management
The common pattern for managing resources:
```go
func ProcessFile(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	return process(f)
}
// The deferred Close ensures cleanup even if process returns an error
```

This pattern is so common and important that the minor overhead (if any) is completely justified.
When Defer IS Worth It (Almost Always)
- Ensures cleanup on panic: Unwinding properly even in catastrophic failure
- Readability: Cleanup code appears next to acquisition code
- Lock/unlock symmetry: Easy to verify correct pairing
- Transaction rollback: Database connections, file handles
- Modern Go optimization: Nearly free in straight-line code
```go
// Clear and safe
func TransferFunds(from, to Account, amount Money) error {
	from.Lock()
	defer from.Unlock()
	to.Lock()
	defer to.Unlock()
	// Locks released in correct order on any return path
	return executeTransfer(from, to, amount)
}
```

When Manual Cleanup Might Be Better
Very rare cases in extreme hot paths:
```go
// Microsecond-scale hot path (99.9% of code doesn't qualify)
func FastPath() Result {
	// If profiling shows defer is the bottleneck:
	r := getResource()
	result := r.Process()
	r.Close() // Every return path — including error and panic paths —
	// must now close manually; nothing does it for you
	return result
}
// This is error-prone and rarely justified
```

Even in hot paths, the safety of defer usually outweighs the micro-optimization.
Measuring Defer Cost
If you suspect defer overhead, measure it:
```go
import "testing"

// Helpers keep the defer scoped to one iteration, so records don't
// accumulate across the b.N loop and skew the measurement.
func processWithDefer() {
	f := OpenFile()
	defer f.Close()
	f.Process()
}

func processWithoutDefer() {
	f := OpenFile()
	f.Process()
	f.Close()
}

func BenchmarkYourFunction(b *testing.B) {
	b.Run("WithDefer", func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			processWithDefer()
		}
	})
	b.Run("WithoutDefer", func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			processWithoutDefer()
		}
	})
}
// Run with: go test -bench=. -benchmem
```

If the results are within about 5% of each other, the overhead is negligible and safety should win.
Summary and Recommendations
- Use defer liberally: Modern Go has optimized it extensively; it's nearly free in straight-line code.
- Avoid defer in loops: Each iteration adds to the defer list, and cleanup is delayed until the function returns. Wrap iterations in anonymous functions instead.
- Understand closure capture: Deferred closures read captured variables when they execute, not when the defer statement runs. Use the argument pattern or an explicit local copy.
- Don't micro-optimize: Defer is a safety tool. The performance benefit of removing it is rarely worth the complexity and error risk.
- Profile before optimizing: If you think defer is slow, measure it. Modern benchmarks usually show it's essentially free.
- Use it for resource management: The readability and safety benefits far outweigh any performance considerations.
- Combine it with named returns: For error handling and wrapping, defer plus named returns is powerful.

The maxim: use defer everywhere except in tight loops. The one pattern to avoid is defer inside a bare loop body; everything else is nearly free and dramatically improves code safety and clarity.