A practical guide to error handling in Go

Wojciech Gancarczyk

When you first start coding in Go, you quickly learn how error handling in the language differs from error handling in languages such as Java, Python, JavaScript, or Ruby. In those languages, throwing an exception automatically generates a stack trace. Go, by contrast, provides no built-in error tracing to reveal an error's origin. Despite this omission, Go's error handling has matured significantly over time to give developers richer ways to capture context and diagnose failures.

This guide explores these richer capabilities in more depth. It shows how Go's idiomatic error handling patterns work and how to apply them in real applications. It then summarizes best practices for error handling in Go. Finally, it demonstrates how Datadog's Error Tracking and Orchestrion address the absence of built-in tracing by giving you clear visibility into where errors occur, how they propagate through code, and what context each error carries. With these added capabilities, teams can debug more quickly and keep Go services production-ready and reliable.

Evolution of error handling in Go

Go's error handling started from a minimal design and has gradually expanded to include more advanced features and idioms. The subsections below walk through this progression, showing how simple return values grew into richer approaches: wrapping errors with context, defining custom types through the error interface, using errors.Is and errors.As for type inspection, and combining errors with errors.Join.

Error values and wrapping

Error handling in Go begins with the built-in error interface, which has been part of the language since its earliest versions. Its definition is minimal:

type error interface {
    Error() string
}

Any type that implements this method can be treated as an error. This design reflects Go's philosophy: errors are values, returned explicitly from functions rather than hidden behind exceptions.

From this simple definition, Go's standard library provides the tools to create actual error values. The two core functions are errors.New and fmt.Errorf. Both return values of type error, which means they satisfy the built-in error interface. The distinction lies in how they construct those values. The errors.New function is the simplest option: each call creates a distinct error value, even if the message text is identical. By contrast, the fmt.Errorf function allows you to build error values by using formatted strings. Since Go 1.13, fmt.Errorf has also supported the %w verb, which lets developers wrap an existing error with added context. Together with the errors.Unwrap function, this makes it possible to preserve the original error and later trace the cause of a failure through multiple layers of code.

The following code snippet illustrates how to use these functions in practice.

package main

import (
    "errors"
    "fmt"
)

func main() {
    err1 := errors.New("something went wrong")
    err2 := errors.New("something went wrong")
    // This prints false, since errors.New always returns distinct values.
    fmt.Println("err1 == err2:", err1 == err2)
    // Wrap the first error with added context using fmt.Errorf (Go 1.13+).
    wrapped := fmt.Errorf("failed to process request: %w", err1)
    fmt.Println("wrapped error:", wrapped)
    // Unwrap the error to recover the underlying cause.
    cause := errors.Unwrap(wrapped)
    fmt.Println("unwrapped cause:", cause)
}

Error wrapping plays an important role in Go because it provides a way to trace the origin of failures in situations where they move through several layers of a call stack. Once an error is wrapped, the added context makes it easier to identify where the problem occurred and to diagnose the underlying cause, which in turn helps developers resolve issues more quickly.
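
To make this concrete, here is a minimal sketch (the readConfig and loadApp functions are hypothetical) showing how context accumulates as an error is wrapped at each layer of a call stack, while the original cause stays reachable:

package main

import (
    "errors"
    "fmt"
    "os"
)

// readConfig is a hypothetical low-level function that can fail.
func readConfig(path string) error {
    f, err := os.Open(path)
    if err != nil {
        return fmt.Errorf("reading config %q: %w", path, err)
    }
    f.Close()
    return nil
}

// loadApp is a hypothetical mid-level function that adds its own context.
func loadApp() error {
    if err := readConfig("/etc/app/config.yaml"); err != nil {
        return fmt.Errorf("loading application: %w", err)
    }
    return nil
}

func main() {
    err := loadApp()
    // Prints context from every layer, ending with the root cause:
    // loading application: reading config "/etc/app/config.yaml": open /etc/app/config.yaml: no such file or directory
    fmt.Println(err)
    // The wrap chain still lets us match the original cause.
    fmt.Println(errors.Is(err, os.ErrNotExist)) // true
}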

Custom error types

In Go's early years, most code relied on simple error values created with errors.New or fmt.Errorf. As Go idioms matured, developers began to define custom error types more often. This shift took advantage of the fact that error is an interface, so any type that implements the Error() method can be treated as an error. This approach lets Go programs represent errors with richer information and allows callers to react differently depending on the type of error returned.

The following snippet shows how a custom QueryError type can be defined and checked at runtime, allowing the caller to handle it differently from a built-in error:

package main

import (
    "errors"
    "fmt"
    "math/rand"
)

type QueryError struct {
    Message string
}

func (e QueryError) Error() string { return e.Message }

func executeQuery() (int, error) {
    randomBit := rand.Intn(2)
    if randomBit == 1 {
        return 0, errors.New("unexpected error")
    }
    return 0, &QueryError{Message: "socket not found"}
}

func main() {
    result, err := executeQuery()
    if e, ok := err.(*QueryError); ok {
        fmt.Println("QueryError:", e.Message)
    } else if err != nil {
        fmt.Println("An unexpected error occurred:", err)
    } else {
        fmt.Println("Query executed successfully, result:", result)
    }
}

Extending custom errors with causes

While custom error types improve flexibility, introducing a new type for every failure condition quickly becomes cumbersome. A more scalable option is to enrich a single error type with structured information that distinguishes among categories of errors.

The snippet below extends QueryError with a Cause field and an enum of values (NetworkError, UserError, and UnknownError) so that the caller can branch on error categories without multiplying error types:

package main

import "fmt"

type QueryErrorCause int

const (
    NetworkError QueryErrorCause = iota
    UserError
    UnknownError
)

func (q QueryErrorCause) String() string {
    return [...]string{"NetworkError", "UserError", "UnknownError"}[q]
}

type QueryError struct {
    Message string
    Cause   QueryErrorCause
}

func (e QueryError) Error() string {
    return fmt.Sprintf("Failed to execute the query, %s", e.Message)
}

func executeQuery() (int, *QueryError) {
    return 0, &QueryError{Message: "socket not found", Cause: NetworkError}
}

func main() {
    _, err := executeQuery()
    if err != nil {
        if err.Cause == NetworkError {
            // retry
        }
        if err.Cause == UserError {
            // return error message
        }
    }
    // ...
}

Type inspection and grouping with new language features

Although custom types and enums provided more structure, developers still needed simpler ways to inspect and extract errors as they moved through call stacks. Go 1.13 introduced errors.Is and errors.As to simplify unwrapping and type matching. Go 1.20 added errors.Join for grouping multiple errors.

The next snippet shows how errors.Is, errors.As, and errors.Join can be applied with QueryError to provide fine-grained inspection and handle combined errors cleanly:

package main

import (
    "errors"
    "fmt"
)

type QueryErrorCause int

const (
    NetworkError QueryErrorCause = iota
    UserError
    UnknownError
)

func (q QueryErrorCause) String() string {
    return [...]string{"NetworkError", "UserError", "UnknownError"}[q]
}

// We are adding simple sentinel errors here that we will be able to use
// to highlight error unwrapping.
var (
    ErrNetwork = errors.New("socket not found")
    ErrUser    = errors.New("user error")
    ErrUnknown = errors.New("unknown error")
)

type QueryError struct {
    Message string
    Cause   QueryErrorCause
    // We are also adding an error field here to allow error wrapping.
    Err error
}

func (e QueryError) Error() string {
    return fmt.Sprintf("Failed to execute the query, %s", e.Message)
}

// And finally the unwrapping support, so errors.Is and errors.As
// can inspect the wrapped error.
func (e QueryError) Unwrap() error {
    return e.Err
}

func executeQuery() (int, *QueryError) {
    return 0, &QueryError{
        Message: "network error",
        Cause:   NetworkError,
        Err:     ErrNetwork,
    }
}

func main() {
    _, err := executeQuery()
    if err != nil {
        if errors.Is(err, ErrNetwork) {
            // retry
        } else if errors.Is(err, ErrUser) {
            // return user-friendly message
        } else {
            // unknown error
        }
    }

    var queryErr *QueryError
    if errors.As(err, &queryErr) {
        fmt.Printf("QueryError details: cause=%v, msg=%s\n", queryErr.Cause, queryErr.Message)
        // special handling, logging, monitoring, etc.
    }

    // This could be some other function call that returned an error.
    otherErr := errors.New("timeout")
    combinedErr := errors.Join(err, otherErr)
    if errors.Is(combinedErr, ErrNetwork) {
        fmt.Println("At least one error was a network error! Combined error:", combinedErr)
    }
}

Best practices for handling errors in Go

The examples above highlight common patterns. From these, we can extract some best practices that help ensure Go programs remain idiomatic, debuggable, and easy to maintain:

  • Embrace Go's idiomatic error handling: Functions that can fail should return an error as their last return value. This explicit style is central to Go and keeps error handling visible in the flow of the program.
  • Prefer wrapping errors: Instead of using errors.New, prefer fmt.Errorf with the %w verb when returning a generic error. Wrapping allows errors to propagate through the call chain and, with errors.Unwrap, makes it possible to trace their origin.
  • Use custom errors for additional context: Simple text messages may not carry enough information to resolve an error. Defining custom types allows you to encode more detail, represent unique states, and give callers fine-grained control over resolution strategies.
  • Handle specific errors differently: Use errors.Is to check for specific underlying errors and handle them in tailored ways. Use errors.As to extract custom error types when more detail is needed for logging, reporting, or advanced handling (see the sketch after this list).
  • Balance custom types with simplicity: Defining too many custom error types can make code harder to follow. Favor readability over excessive granularity, and introduce new types only when they provide clear value.
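
As a rough illustration of several of these practices together, the following sketch (the fetchUser function and ErrNotFound sentinel are hypothetical) wraps a sentinel error with context and then branches on it with errors.Is:

package main

import (
    "errors"
    "fmt"
)

// ErrNotFound is a hypothetical sentinel error for a missing record.
var ErrNotFound = errors.New("not found")

// fetchUser is a hypothetical lookup that wraps the sentinel with context.
func fetchUser(id int) (string, error) {
    return "", fmt.Errorf("fetching user %d: %w", id, ErrNotFound)
}

func main() {
    _, err := fetchUser(42)
    switch {
    case errors.Is(err, ErrNotFound):
        fmt.Println("user does not exist; returning a 404 to the caller")
    case err != nil:
        fmt.Println("unexpected failure:", err)
    }
}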

Tracking errors with Orchestrion and Datadog Error Tracking

Providing reliable performance from day one requires visibility not just into metrics but also into errors as they occur. But investigating errors is challenging in Go. Although its error wrapping capabilities do add useful context, the language (as we have mentioned) does not automatically produce a full stack trace. What's more, compensating for this lack of built-in visibility into the source of errors requires application tracing, which relies on instrumented code—and this is tricky with Go applications. Because Go applications are compiled into native binaries, they don't provide the runtime hooks that languages like Java, Python, or .NET use for automatic instrumentation. This makes it harder to add tracing without changing the code. As a result, capturing detailed context of application errors in Go often requires adding tracing libraries directly to the application.

To address this issue, Datadog created Orchestrion, an open source, compile-time instrumentation tool that integrates with the Go toolchain and inserts tracing hooks, enabling comprehensive tracing and error tracking without manual code changes.
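
For orientation, enabling Orchestrion typically involves installing the tool, pinning it as a dependency of your module, and then building through it. The commands below follow the Orchestrion README at the time of writing; check the project's documentation for the exact steps for your version.

go install github.com/DataDog/orchestrion@latest
orchestrion pin
orchestrion go build ./...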

Once instrumentation is integrated into the build via Orchestrion, Datadog Error Tracking becomes available automatically as an additional capability. Error Tracking can surface precise details about where an error originated, including the file and line number. These details make it much easier to locate and debug issues in Go applications. On top of this, Error Tracking provides filtering and grouping features that reduce noise, so related errors are clustered together, and the most important problems are easier to spot.

To show how this works in practice, the remainder of this blog post will demonstrate Orchestrion together with Datadog Error Tracking in the context of a simple HTTP API. The scaffolded server below provides a controlled way to illustrate three types of errors—handled errors, unhandled errors, and panics—and how Error Tracking presents them. Each request to the /roll-error endpoint triggers one of these outcomes, allowing us to explore Error Tracking's grouping and debugging features step by step.

func main() {
    http.HandleFunc("/roll-error", rollErrorHandler)
    log.Println("Server started on :8080")
    if err := http.ListenAndServe(":8080", nil); err != nil {
        log.Fatalf("Server failed: %v", err)
    }
}

func rollErrorHandler(w http.ResponseWriter, r *http.Request) {
    randomInt := rand.Intn(100)
    ctx := r.Context()
    switch {
    case randomInt < 25:
        w.Write([]byte(fmt.Sprintf("Rolled a number %d, no error", randomInt)))
    case randomInt < 50:
        if handledErr := handledError(randomInt); handledErr != nil {
            w.Write([]byte("An error occurred, but it was handled gracefully returning a success response"))
        } else {
            w.Write([]byte("Rolled a number above 25 but handledError returned nil, no error occurred"))
        }
    case randomInt < 75:
        _ = unhandledError(ctx)
        http.Error(w, "An error occurred, but it was not handled", http.StatusInternalServerError)
    default:
        safePanic(ctx)
        w.Write([]byte("Panicked, but recovered gracefully"))
    }
}

Handled errors

Handled errors are those that are triggered by invalid or disallowed user input. They are common in systems where the request is structurally valid—for example, a properly formatted JSON payload sent with the correct HTTP method—but semantically fails because it violates application-level business rules.

In our simplified API, handled errors are deliberately raised when the random number generator produces a value between 25 and 50. Our implementation here is minimal as it is intended merely to demonstrate how such errors can be surfaced and tracked with Datadog Error Tracking. To make grouping behavior easier to see in the Error Tracking interface, the generated number is embedded directly in the error message. To illustrate, here is the function that raises handled errors and embeds the random number in the message:

//dd:span span.name:fooify.handledError
func handledError(randomInt int) error {
    return fmt.Errorf("some business rule failed and error was handled gracefully for number %d", randomInt)
}

When Error Tracking processes these handled errors, it automatically groups them together. Even though the error messages differ because of the embedded number, they originate from the same line of source code and are therefore classified as a single issue.

The screenshot below shows how this appears in the Error Tracking interface. At the top, you can see the grouped issue and a timeline of occurrences. Beneath that, a sampled error displays the stack trace leading to the failure. By default, Error Tracking hides irrelevant third-party frames so that only frames from the application's code are shown. This reduces noise, highlights the true origin of the error, and helps developers focus on the most important parts of the stack trace.

Datadog Error Tracking showing a grouped handled error.

Unhandled errors

Unhandled errors typically represent exceptional conditions that should result in an HTTP status code of 500 or higher. They often occur when a service receives unexpected input or when a downstream dependency enters a state that the service cannot recover from. In these cases, the API could return an error response to the client, and the client could implement a retry mechanism to handle the failure more gracefully.

With only a small change to our scaffold, we can demonstrate how debug information can be attached to these unhandled errors. In a real application, the code would likely involve more complex logic, but here we model a simple case where a procedure calls two different downstream services. By tagging which service was invoked, you can capture metadata in Error Tracking that helps identify the root cause of an issue or at least narrow down the investigation.

The snippet below shows how the API adds tags to distinguish between Service A and Service B when raising an unhandled error:

import (
    "math/rand"
    "net/http"

    "github.com/DataDog/dd-trace-go/v2/ddtrace/tracer"
)

// ...
    case randomInt < 75:
        randomBit := rand.Intn(4)
        span, _ := tracer.SpanFromContext(ctx)
        if randomBit == 1 {
            _ = callExternalServiceA()
            span.SetTag("callingExternalService", "Service A")
        } else {
            _ = callExternalServiceB()
            span.SetTag("callingExternalService", "Service B")
        }
        http.Error(w, "An error occurred, but it was not handled", http.StatusInternalServerError)
// ...

In our example, we configured the callingExternalService tag with two distinct values to represent Service A and Service B. The goal was to achieve an approximate 25/75 distribution of errors between the two. After adding the tag to the issue panel in Error Tracking, we can confirm that the observed distribution matched our expectations. This result suggests that the majority of unhandled errors originated from Service B, allowing us to narrow the scope of investigation and focus troubleshooting efforts more effectively.

The screenshot below shows this breakdown in the Error Tracking interface. The error distribution panel displays the relative frequency of issues for each service, confirming that most errors are associated with Service B.

Datadog Error Tracking showing an unhandled error with tags indicating 75% from Service B and 25% from Service A.

A note on panic

In Go, a panic signals that something exceptional has happened at runtime. Common causes include dereferencing a nil pointer or accessing an array index that is out of bounds; if the panic is not recovered, the process crashes. Some Go libraries also use panics to enforce the exhaustiveness of enums, panicking when an unexpected value is encountered.
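
Before looking at the instrumented version, here is a minimal sketch of the raw mechanic (the mustIndex helper is hypothetical): a deferred function uses recover to intercept a panic, such as an out-of-bounds index, and converts it into an ordinary error:

package main

import "fmt"

// mustIndex is a hypothetical helper that converts a panic into an error
// by recovering in a deferred function and assigning to the named return.
func mustIndex(s []int, i int) (v int, err error) {
    defer func() {
        if r := recover(); r != nil {
            err = fmt.Errorf("recovered from panic: %v", r)
        }
    }()
    return s[i], nil // panics when i is out of range
}

func main() {
    _, err := mustIndex([]int{1, 2, 3}, 10)
    fmt.Println(err) // recovered from panic: runtime error: index out of range [10] with length 3
}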

In our example API, panics are triggered in the final branch of the error-handling switch. The snippet below shows how to recover from a panic using a deferred function. By combining recover with Datadog's tracer.StartSpanFromContext, the code captures the error, marks the operation as an error in the active span, and allows the panic to appear in Datadog Error Tracking as an issue with a full stack trace. This approach records the error while letting the application continue running.

func safePanic(ctx context.Context) {
    span, _ := tracer.StartSpanFromContext(ctx, "fooify.safePanic")
    defer func() {
        var err error
        if r := recover(); r != nil {
            err = fmt.Errorf("recovered from panic: %v", r)
        }
        span.Finish(tracer.WithError(err))
    }()
    // Deliberately dereference a nil pointer to trigger a panic.
    var p *int
    _ = *p
}

The screenshot below illustrates how a recovered panic is displayed in Datadog Error Tracking. The full stack trace is captured, but by default irrelevant third-party frames are hidden so you can focus on the application code that triggered the panic. Thanks to recover, the service keeps running while the panic is recorded and tracked as an error.

Datadog Error Tracking showing a recovered panic with stack trace, highlighting a nil pointer dereference in application code.

Other features of Error Tracking

The examples above highlight only a subset of what Datadog Error Tracking can do. On the issue overview page, handled and unhandled errors can be quickly distinguished by using the search parameter error.handling:handled. This separation makes it easier to triage problems and assign them to the right teams.

Error Tracking also goes beyond simply identifying errors. It captures a complete history of related events and actions, which allows teams to reproduce issues without relying solely on user feedback. Combined with volume metrics that show the frequency and impact of specific problems, this information helps teams prioritize which errors to address first.

Finally, Error Tracking integrates with alerting channels so that teams receive real-time notifications of critical issues. This makes it less likely for errors to slip into production unnoticed. And because all of this works with minimal setup when using Orchestrion, developers can gain full error visibility without a heavy lift.

Building reliable Go applications with observability in mind

Go's design treats errors as values and encourages explicit handling through return types, wrapping, and custom error types. Over time, the ecosystem has added tools like errors.Is, errors.As, and errors.Join to make inspection and classification easier. These patterns give developers flexibility, but they also highlight the need for strong observability to understand how errors propagate in real systems.

With Orchestrion's compile-time instrumentation and Datadog Error Tracking, teams can connect Go's idiomatic error-handling techniques to actionable insights in production. Error Tracking groups related issues, filters out noise, and provides detailed metadata so that developers can identify and resolve problems quickly. Together, these practices and tools help teams build Go applications that are resilient and debuggable in production.

For more information about Orchestrion, read our blog post on the topic. To read more about Datadog Error Tracking, see our documentation. And if you're not yet a Datadog customer, sign up for a 14-day free trial.
