Logging as a Cross-cutting Concern in Go

Is it better to use a shared global logger and negotiate settings with every module, or to use private logger instances customised to taste? For most applications, logging is a cross-cutting concern: it is needed everywhere, but the particulars of the logger rarely need to vary from module to module. The approach you take will depend on the needs of your application, so here are some considerations for each.

Private logger instances

The primary reason for using private logger instances is to customise the logging setup for specific modules. Suppose you wanted to log every output of your task runner to a separate sink, or you wanted to attach some attributes to every log that it produces. The easiest way to start with that is to instantiate a logger for that module.

// File: taskrunner/runner.go

package taskrunner

import (
    "context"
    "log/slog"
    // snip
)

type serialRunner struct {
  logger *slog.Logger
  // snip
}

func NewSerialRunner(logger *slog.Logger, tasks []Task) serialRunner {
  // snip
}

func (sr serialRunner) Run(ctx context.Context) error {
  sr.logger.InfoContext(ctx, "Running tasks serially")
  // run tasks
  return nil
}

// File: main.go
package main

import (
    "context"
    "log/slog"
    "os"
    // snip: application packages
)

func main() {
    logger := slog.New(slog.NewTextHandler(os.Stdout, nil))
    tasks := []taskrunner.Task{
        orderprocessing.NewPendingPaymentReminder(
            logger.With("logger", "orderprocessing/pending-payment-reminder")),
    }
    runner := taskrunner.NewSerialRunner(
        logger.With("logger", "taskrunner/serial-runner"), tasks)

    runner.Run(context.Background())
}

So easy. If you are big on making capabilities explicit, this approach lets you see at a glance whether a module logs or not. It is also fine for greenfield projects: when you are just starting out, you do not have hundreds of functions to rewrite to accept a logger. But in an existing project it becomes cumbersome, as I found while migrating every project at work to slog.

An alternative you may consider is to pass the logger down through the context, but that is even worse. Logging is a static dependency: whether a logger exists should be known at build time, yet context values can only be read at runtime. It also does not help in functions that take no context parameter, or in functions that log without one.
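To make that concrete, here is a rough sketch of the pattern (the package, key type, and helper names are invented for illustration). The logger only resurfaces through a runtime lookup with a fallback, and any function without a ctx parameter is simply left out:

package logctx

import (
    "context"
    "log/slog"
)

type loggerKey struct{}

// WithLogger stores a logger in the context.
func WithLogger(ctx context.Context, l *slog.Logger) context.Context {
    return context.WithValue(ctx, loggerKey{}, l)
}

// LoggerFrom fetches it back out. The type assertion and the fallback
// only happen at runtime; nothing at build time guarantees a logger is there.
func LoggerFrom(ctx context.Context) *slog.Logger {
    if l, ok := ctx.Value(loggerKey{}).(*slog.Logger); ok {
        return l
    }
    return slog.Default()
}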

Yet another drawback of private logger instances is that your code may churn more. If you want to add logging to a procedure to see what it does at runtime, you have to either hoist it into a struct method that can access a logger, or pass it a logger. Either way, you have more call sites to change. Honestly, I'm not a fan of that.

💡
Although I used it in my example, naming your logger is not a good reason to use private logger instances. slog will add the source locations of your logging calls if you set AddSource to true in your HandlerOptions.
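As a minimal sketch (the handler choice and the output line are illustrative), enabling it looks like this:

package main

import (
    "log/slog"
    "os"
)

func main() {
    // AddSource makes every record carry the file and line of the logging call.
    logger := slog.New(slog.NewTextHandler(os.Stdout, &slog.HandlerOptions{
        AddSource: true,
    }))
    logger.Info("Running tasks serially")
    // time=... level=INFO source=.../main.go:13 msg="Running tasks serially"
}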

Shared global logger

Logging is often set up once and shared across every module. With slog, you can replace the default global logger by calling slog.SetDefault with your configured logger instance. Your program can then call the exported logging functions from the slog package directly.
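A minimal sketch of that setup (the handler choice is just an example):

package main

import (
    "context"
    "log/slog"
    "os"
)

func main() {
    // Configure the logger once and install it as the process-wide default.
    slog.SetDefault(slog.New(slog.NewJSONHandler(os.Stdout, nil)))

    // Every package can now log through the exported slog functions.
    slog.InfoContext(context.Background(), "Running tasks serially")
}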

In contrast to the private logger setup, you can no longer know whether a module logs by looking at its arguments. But that was never foolproof anyway. Go does not have a capabilities/effects system, so every function can log. It is futile to fight that. The upside is that you can now migrate your brownfield projects by changing the logging sites only. You avoid the drawbacks of private logger instances.

Best of both worlds?

Sometimes you still need customisation. Is there a way to get the best of both options? I think so. When we call the functions exported by slog, we are accessing a global logger, and this is one valid use case for global variables. We can declare the custom logger for each module as a package-level variable and name it slog.

// File: taskrunner/runner.go

package taskrunner

import (
    "context"
    goslog "log/slog"
    "sync"
)

var (
  // We define a package-level logger named slog.
  // slog cannot be imported in this package without being aliased.
  slog *goslog.Logger
  initLogger = &sync.Once{}
)

type serialRunner struct {
  tasks []Task
}

func NewSerialRunner(logger *goslog.Logger, tasks []Task) serialRunner {
  initLogger.Do(func() {
    // We must take care to initialise the logger only once.
    slog = logger
  })
  return serialRunner{
    tasks: tasks,
  }
}

func (sr serialRunner) Run(ctx context.Context) error {
  slog.InfoContext(ctx, "Running tasks serially")
  // run tasks
  return nil
}

All we have changed is the logger setup. We can now customise the logger for specific modules while keeping the shared global setup everywhere else. And because a logger derived with With keeps its parent's handler, the module-specific logger can build on the global logging setup instead of replacing it. Much better.
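For completeness, here is a sketch of what the wiring in main could now look like (imports elided as before): the default logger carries the shared setup, and the module-specific logger is derived from it.

// File: main.go
package main

import (
    // snip
)

func main() {
    // The shared setup lives in the default logger...
    slog.SetDefault(slog.New(slog.NewTextHandler(os.Stdout, nil)))

    tasks := []taskrunner.Task{ /* snip */ }

    // ...and the module-specific logger is derived from it and injected once.
    runner := taskrunner.NewSerialRunner(
        slog.Default().With("logger", "taskrunner/serial-runner"), tasks)

    runner.Run(context.Background())
}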

💡
This approach works even better if your modules are aligned to domain boundaries.

That is it for setting up logging as a cross-cutting concern. Next, we will look at how to set up simple tracing for our logs in the context of a web server. The goal will be to correlate a sequence of logs using a TraceID and a SpanID. See you then!