Logging
You are viewing the documentation for the new actor APIs, to view the Pekko Classic documentation, see Classic Logging.
Dependency
To use logging, you must at least add the Pekko actor-typed dependency to your project, and configure logging via the SLF4J backend, such as with a Logback configuration.
- sbt
```scala
val PekkoVersion = "1.0.0"
libraryDependencies += "org.apache.pekko" %% "pekko-actor-typed" % PekkoVersion
```
- Maven
```xml
<properties>
  <scala.binary.version>2.13</scala.binary.version>
</properties>
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.pekko</groupId>
      <artifactId>pekko-bom_${scala.binary.version}</artifactId>
      <version>1.0.0</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
<dependencies>
  <dependency>
    <groupId>org.apache.pekko</groupId>
    <artifactId>pekko-actor-typed_${scala.binary.version}</artifactId>
  </dependency>
</dependencies>
```
- Gradle
```
def versions = [
  ScalaBinary: "2.13"
]
dependencies {
  implementation platform("org.apache.pekko:pekko-bom_${versions.ScalaBinary}:1.0.0")
  implementation "org.apache.pekko:pekko-actor-typed_${versions.ScalaBinary}"
}
```
Introduction
SLF4J is used for logging, and Pekko provides access to an org.slf4j.Logger for a specific actor via the ActorContext. You may also retrieve a Logger with the ordinary org.slf4j.LoggerFactory.
To ensure that logging has minimal performance impact it’s important that you configure an asynchronous appender for the SLF4J backend. Logging generally means IO and locks, which can slow down the operations of your code if performed synchronously.
How to log
The ActorContext provides access to an org.slf4j.Logger for a specific actor.
- Scala
```scala
Behaviors.receive[String] { (context, message) =>
  context.log.info("Received message: {}", message)
  Behaviors.same
}
```
- Java
```java
public class MyLoggingBehavior extends AbstractBehavior<String> {

  public static Behavior<String> create() {
    return Behaviors.setup(MyLoggingBehavior::new);
  }

  private MyLoggingBehavior(ActorContext<String> context) {
    super(context);
  }

  @Override
  public Receive<String> createReceive() {
    return newReceiveBuilder().onMessage(String.class, this::onReceive).build();
  }

  private Behavior<String> onReceive(String message) {
    getContext().getLog().info("Received message: {}", message);
    return this;
  }
}
```
The Logger obtained via the ActorContext will automatically have a name that corresponds to the Behavior of the actor when the log is accessed the first time: the class name when using AbstractBehavior, or the class or object name where the Behavior is defined when using the functional style. You can set a custom logger name with the setLoggerName method of the ActorContext.
- Scala
```scala
Behaviors.setup[String] { context =>
  context.setLoggerName("com.myservice.BackendManager")
  context.log.info("Starting up")

  Behaviors.receiveMessage { message =>
    context.log.debug("Received message: {}", message)
    Behaviors.same
  }
}
```
- Java
```java
public class BackendManager extends AbstractBehavior<String> {

  public static Behavior<String> create() {
    return Behaviors.setup(
        context -> {
          context.setLoggerName(BackendManager.class);
          context.getLog().info("Starting up");
          return new BackendManager(context);
        });
  }

  private BackendManager(ActorContext<String> context) {
    super(context);
  }

  @Override
  public Receive<String> createReceive() {
    return newReceiveBuilder().onMessage(String.class, this::onReceive).build();
  }

  private Behavior<String> onReceive(String message) {
    getContext().getLog().debug("Received message: {}", message);
    return this;
  }
}
```
The convention is to use logger names like fully qualified class names. The parameter to setLoggerName can be a String or a Class, where the latter is a convenience for the class name.
When logging via the ActorContext, the path of the actor will automatically be added as the pekkoSource Mapped Diagnostic Context (MDC) value. MDC is typically implemented with a ThreadLocal by the SLF4J backend. To reduce performance impact, this MDC value is set when you access the log (Scala) or getLog() (Java) method, so you shouldn’t cache the returned Logger in your own field. Caching is handled by the ActorContext, and retrieving the Logger repeatedly with that method has low overhead. The MDC is cleared automatically after processing of the current message has finished.
The Logger is thread-safe, but the log (Scala) or getLog() (Java) method in ActorContext is not thread-safe and should not be accessed from threads other than the ordinary actor message processing thread, such as from Future (Scala) or CompletionStage (Java) callbacks.
It’s also perfectly fine to use a Logger retrieved via org.slf4j.LoggerFactory, but then the logging events will not include the pekkoSource MDC value. This is the recommended way when logging outside of an actor, including logging from Future (Scala) or CompletionStage (Java) callbacks.
- Scala
```scala
val log = LoggerFactory.getLogger("com.myservice.BackendTask")

Future {
  // some work
  "result"
}.onComplete {
  case Success(result) => log.info("Task completed: {}", result)
  case Failure(exc)    => log.error("Task failed", exc)
}
```
- Java
```java
class BackendTask {

  private final Logger log = LoggerFactory.getLogger(getClass());

  void run() {
    CompletableFuture<String> task =
        CompletableFuture.supplyAsync(
            () -> {
              // some work
              return "result";
            });
    task.whenComplete(
        (result, exc) -> {
          if (exc == null) log.info("Task completed: {}", result);
          else log.error("Task failed", exc);
        });
  }
}
```
Placeholder arguments
The log message may contain argument placeholders {}, which will be substituted if the log level is enabled. Compared to constructing a full string for the log message, this has the advantage of avoiding superfluous string concatenation and object allocations when the log level is disabled. Some logging backends may also use these message templates before argument substitution to group and filter logging events.
It can be good to know that 3 or more arguments will result in the relatively small cost of allocating an array (vararg parameter) also when the log level is disabled. The methods with 1 or 2 arguments don’t allocate the vararg array.
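To make this cost model concrete, here is a small self-contained sketch (illustrative only, not the SLF4J implementation) of how {} placeholders defer string construction until the level is known to be enabled:

```java
// Illustrative model of placeholder substitution: the message string is only
// built when the level is enabled. This is NOT the SLF4J implementation.
public class PlaceholderDemo {

  // Substitute each "{}" in the template with the next argument.
  static String format(String template, Object... args) {
    StringBuilder sb = new StringBuilder();
    int from = 0;
    for (Object arg : args) {
      int at = template.indexOf("{}", from);
      if (at < 0) break;
      sb.append(template, from, at).append(arg);
      from = at + 2;
    }
    return sb.append(template.substring(from)).toString();
  }

  static void log(boolean levelEnabled, String template, Object... args) {
    if (!levelEnabled) return; // disabled: no message string is ever built
    System.out.println(format(template, args));
  }

  public static void main(String[] args) {
    log(false, "Received message: {}", "hello"); // no formatting happens
    log(true, "{} received: {}", "actorA", "hello"); // prints "actorA received: hello"
  }
}
```

Note that the varargs call still allocates an Object array even when the level is disabled, which is exactly why the 1- and 2-argument overloads mentioned above avoid the vararg array.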
When using the methods for 2 argument placeholders, the compiler will often not be able to select the right method and reports the compiler error “ambiguous reference to overloaded definition”. To work around this problem you can use the trace2, debug2, info2, warn2 or error2 extension methods that are added by import org.apache.pekko.actor.typed.scaladsl.LoggerOps or import org.apache.pekko.actor.typed.scaladsl._.
- Scala
```scala
import org.apache.pekko.actor.typed.scaladsl.LoggerOps

Behaviors.receive[String] { (context, message) =>
  context.log.info2("{} received message: {}", context.self.path.name, message)
  Behaviors.same
}
```
When using the methods for 3 or more argument placeholders, the compiler will not be able to convert the method parameters to the vararg array when they contain primitive values such as Int, and reports the compiler error “overloaded method value info with alternatives”. To work around this problem you can use the traceN, debugN, infoN, warnN or errorN extension methods that are added by the same LoggerOps import.
- Scala
```scala
import org.apache.pekko.actor.typed.scaladsl.LoggerOps

Behaviors.receive[String] { (context, message) =>
  context.log.infoN(
    "{} received message of size {} starting with: {}",
    context.self.path.name,
    message.length,
    message.take(10))
  Behaviors.same
}
```
If you find it tedious to add the import of LoggerOps at many places, you can make those additional methods available with a single implicit conversion placed in a root package object of your code:
- Scala
```scala
import scala.language.implicitConversions

import org.apache.pekko.actor.typed.scaladsl.LoggerOps
import org.slf4j.Logger

package object myapp {
  implicit def loggerOps(logger: Logger): LoggerOps =
    LoggerOps(logger)
}
```
Behaviors.logMessages
If you want very detailed logging of messages and signals you can decorate a Behavior with Behaviors.logMessages.
- Scala
```scala
import org.apache.pekko.actor.typed.LogOptions
import org.slf4j.event.Level

Behaviors.logMessages(LogOptions().withLevel(Level.TRACE), BackendManager())
```
- Java
```java
import org.slf4j.event.Level;

Behaviors.logMessages(LogOptions.create().withLevel(Level.TRACE), BackendManager.create());
```
MDC
MDC allows for adding additional context dependent attributes to log entries. Out of the box, Pekko will place the path of the actor in the MDC attribute pekkoSource.
One or more tags can also be added to the MDC using the ActorTags props. The tags will be rendered as a comma separated list and be put in the MDC attribute pekkoTags. This can be used to categorize log entries from a set of different actors to allow easier filtering of logs:
- Scala
```scala
context.spawn(myBehavior, "MyActor", ActorTags("processing"))
```
- Java
```java
context.spawn(myBehavior, "MyActor", ActorTags.create("processing"));
```
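The pekkoTags value can then be rendered in the Logback output like any other MDC attribute, for example (a configuration sketch; adapt the rest of the pattern to your needs):

```xml
<encoder>
  <pattern>%date{ISO8601} %-5level %logger{36} %X{pekkoTags} - %msg%n</pattern>
</encoder>
```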
In addition to these two built-in MDC attributes, you can also decorate a Behavior with Behaviors.withMdc or use the org.slf4j.MDC API directly.
The Behaviors.withMdc decorator has two parts: a static Map of MDC attributes that are not changed, and a dynamic Map that can be constructed for each message.
- Scala
```scala
val staticMdc = Map("startTime" -> system.startTime.toString)

Behaviors.withMdc[BackendManager.Command](
  staticMdc,
  mdcForMessage = (msg: BackendManager.Command) =>
    Map("identifier" -> msg.identifier, "upTime" -> system.uptime.toString)) {
  BackendManager()
}
```
- Java
```java
Map<String, String> staticMdc = new HashMap<>();
staticMdc.put("startTime", String.valueOf(system.startTime()));

Behaviors.withMdc(
    BackendManager2.Command.class,
    staticMdc,
    message -> {
      Map<String, String> msgMdc = new HashMap<>();
      msgMdc.put("identifier", message.identifier());
      msgMdc.put("upTime", String.valueOf(system.uptime()));
      return msgMdc;
    },
    BackendManager2.create());
```
If you use the MDC API directly, be aware that MDC is typically implemented with a ThreadLocal by the SLF4J backend. Pekko clears the MDC if logging is performed via the log (Scala) or getLog() (Java) method of the ActorContext; it is cleared automatically after processing of the current message has finished, but only if you accessed that method. The entire MDC is cleared, including attributes that you added yourself. The MDC is not cleared automatically if you use a Logger obtained via LoggerFactory or never touch log / getLog() in the ActorContext.
SLF4J backend
For production, the SLF4J backend should be configured with an asynchronous appender. Otherwise there is a risk of reduced performance and thread starvation problems of the dispatchers that are running actors and other tasks.
Logback
pekko-actor-typed includes a dependency on slf4j-api. At runtime, you also need an SLF4J backend. We recommend Logback:
- sbt
```scala
libraryDependencies += "ch.qos.logback" % "logback-classic" % "1.2.11"
```
- Maven
```xml
<dependencies>
  <dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.2.11</version>
  </dependency>
</dependencies>
```
- Gradle
```
dependencies {
  implementation "ch.qos.logback:logback-classic:1.2.11"
}
```
Logback has flexible configuration options and details can be found in the Logback manual and other external resources.
One important point to highlight is configuring an AsyncAppender, because it offloads rendering of logging events to a background thread, increasing performance. It doesn’t block the threads of the ActorSystem while the underlying infrastructure writes the log messages to disk or another configured destination. It also contains a feature which will drop INFO and DEBUG messages if the logging load is high.
A starting point for configuration of logback.xml for production:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>myapp.log</file>
    <immediateFlush>false</immediateFlush>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>myapp_%d{yyyy-MM-dd}.log</fileNamePattern>
    </rollingPolicy>
    <encoder>
      <pattern>[%date{ISO8601}] [%level] [%logger] [%marker] [%thread] - %msg MDC: {%mdc}%n</pattern>
    </encoder>
  </appender>
  <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
    <queueSize>8192</queueSize>
    <neverBlock>true</neverBlock>
    <appender-ref ref="FILE" />
  </appender>
  <root level="INFO">
    <appender-ref ref="ASYNC"/>
  </root>
</configuration>
```
Note that the AsyncAppender may drop log events if the queue becomes full, which may happen if the logging backend can’t keep up with the throughput of produced log events. Dropping log events is necessary if you want your application to degrade gracefully when only the logging backend or the filesystem is experiencing issues.
An alternative to the Logback AsyncAppender with better performance is the Logstash async appender. The ELK stack is commonly used as logging infrastructure for production.
For development you might want to log to standard out, but also have all debug level logging to file, like in this example:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
      <level>INFO</level>
    </filter>
    <encoder>
      <pattern>[%date{ISO8601}] [%level] [%logger] [%marker] [%thread] - %msg MDC: {%mdc}%n</pattern>
    </encoder>
  </appender>
  <appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <file>target/myapp-dev.log</file>
    <encoder>
      <pattern>[%date{ISO8601}] [%level] [%logger] [%marker] [%thread] - %msg MDC: {%mdc}%n</pattern>
    </encoder>
  </appender>
  <root level="DEBUG">
    <appender-ref ref="STDOUT"/>
    <appender-ref ref="FILE"/>
  </root>
</configuration>
```
Place the logback.xml file in src/main/resources/logback.xml. For tests you can define different logging configuration in src/test/resources/logback-test.xml.
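As a starting point, a minimal src/test/resources/logback-test.xml could log everything to standard out at DEBUG level (a sketch; adjust levels and the pattern to your needs):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>[%date{ISO8601}] [%level] [%logger] - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="DEBUG">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
```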
MDC values
When logging via the log (Scala) or getLog() (Java) method of the ActorContext, as described in How to log, Pekko includes a few MDC properties:
- pekkoSource: the actor’s path
- pekkoAddress: the full address of the ActorSystem, including hostname and port if Cluster is enabled
- pekkoTags: tags defined in the Props of the actor
- sourceActorSystem: the name of the ActorSystem
These MDC properties can be included in the Logback output with, for example, the %X{pekkoSource} specifier within the pattern layout configuration:
```xml
<encoder>
  <pattern>%date{ISO8601} %-5level %logger{36} %X{pekkoSource} - %msg%n</pattern>
</encoder>
```
All MDC properties as key-value entries can be included with %mdc:
```xml
<encoder>
  <pattern>%date{ISO8601} %-5level %logger{36} - %msg MDC: {%mdc}%n</pattern>
</encoder>
```
Internal logging by Pekko
Event bus
For historical reasons, logging by the Pekko internals and by classic actors is performed asynchronously through an event bus. Such log events are processed by an event handler actor, which then emits them to SLF4J or directly to standard out.
When pekko-actor-typed and pekko-slf4j are on the classpath this event handler actor will emit the events to SLF4J. The event.slf4j.Slf4jLogger and event.slf4j.Slf4jLoggingFilter are enabled automatically without additional configuration. This can be disabled with the pekko.use-slf4j=off configuration property.
In other words, you don’t have to do anything for the Pekko internal logging to end up in your configured SLF4J backend.
Log level
Ultimately the log level defined in the SLF4J backend is used. For the Pekko internal logging it will also check the level defined by the SLF4J backend before constructing the final log message and emitting it to the event bus.
However, there is an additional pekko.loglevel configuration property that defines if logging events with lower log level should be discarded immediately without consulting the SLF4J backend. By default this is at INFO level, which means that DEBUG level logging from the Pekko internals will not reach the SLF4J backend even if DEBUG is enabled in the backend.
You can enable DEBUG level for pekko.loglevel and control the actual level in the SLF4J backend without any significant overhead, also for production.
```
pekko.loglevel = "DEBUG"
```
To turn off all Pekko internal logging (not recommended) you can configure the log levels to be OFF like this:
```
pekko {
  stdout-loglevel = "OFF"
  loglevel = "OFF"
}
```
The stdout-loglevel is only in effect during system startup and shutdown, and setting it to OFF as well ensures that nothing gets logged during system startup or shutdown.
See Logger names for configuration of log level in SLF4J backend for certain modules of Pekko.
Logging to stdout during startup and shutdown
When the actor system is starting up and shutting down, the configured loggers are not used. Instead log messages are printed to stdout (System.out). The default log level for this stdout logger is WARNING and it can be silenced completely by setting pekko.stdout-loglevel=OFF.
Logging of Dead Letters
By default messages sent to dead letters are logged at info level. Existence of dead letters does not necessarily indicate a problem, but they are logged by default for the sake of caution. After a few messages this logging is turned off, to avoid flooding the logs. You can disable this logging completely or adjust how many dead letters are logged. During system shutdown it is likely that you see dead letters, since pending messages in the actor mailboxes are sent to dead letters. You can also disable logging of dead letters during shutdown.
```
pekko {
  log-dead-letters = 10
  log-dead-letters-during-shutdown = on
}
```
To customize the logging further or take other actions for dead letters you can subscribe to the Event Stream.
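As a sketch of that approach (assuming Pekko is on the classpath; DeadLetterListener is a hypothetical name, not a Pekko class), a typed actor can subscribe itself to org.apache.pekko.actor.DeadLetter events and handle them however it needs:

```java
// Hypothetical sketch: a typed actor that subscribes to DeadLetter events
// on the event stream. Assumes pekko-actor-typed is on the classpath.
import org.apache.pekko.actor.DeadLetter;
import org.apache.pekko.actor.typed.Behavior;
import org.apache.pekko.actor.typed.eventstream.EventStream;
import org.apache.pekko.actor.typed.javadsl.Behaviors;

public class DeadLetterListener {
  public static Behavior<DeadLetter> create() {
    return Behaviors.setup(
        context -> {
          // Register for all DeadLetter events; the subscription is removed
          // automatically when this actor stops.
          context
              .getSystem()
              .eventStream()
              .tell(new EventStream.Subscribe<>(DeadLetter.class, context.getSelf()));
          return Behaviors.receive(DeadLetter.class)
              .onMessage(
                  DeadLetter.class,
                  deadLetter -> {
                    context
                        .getLog()
                        .info("Dead letter to {}: {}", deadLetter.recipient(), deadLetter.message());
                    return Behaviors.same();
                  })
              .build();
        });
  }
}
```

Spawn it from a parent actor, for example context.spawn(DeadLetterListener.create(), "deadLetterListener").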
Auxiliary logging options
Pekko has a few configuration options for very low level debugging. These make more sense in development than in production.
You almost definitely need to have logging set to DEBUG to use any of the options below:
```
pekko {
  loglevel = "DEBUG"
}
```
This config option is useful if you want to know which config settings are loaded by Pekko:
```
pekko {
  # Log the complete configuration at INFO level when the actor system is started.
  # This is useful when you are uncertain of what configuration is used.
  log-config-on-start = on
}
```
If you want unhandled messages logged at DEBUG:
```
pekko {
  actor {
    debug {
      # enable DEBUG logging of unhandled messages
      unhandled = on
    }
  }
}
```
If you want to monitor subscriptions (subscribe/unsubscribe) on the ActorSystem.eventStream:
```
pekko {
  actor {
    debug {
      # enable DEBUG logging of subscription changes on the eventStream
      event-stream = on
    }
  }
}
```
Auxiliary remote logging options
If you want to see all messages that are sent through remoting at DEBUG log level, use the following config option. Note that this logs the messages as they are sent by the transport layer, not by an actor.
```
pekko.remote.artery {
  # If this is "on", Pekko will log all outbound messages at DEBUG level,
  # if off then they are not logged
  log-sent-messages = on
}
```
If you want to see all messages that are received through remoting at DEBUG log level, use the following config option. Note that this logs the messages as they are received by the transport layer, not by an actor.
```
pekko.remote.artery {
  # If this is "on", Pekko will log all inbound messages at DEBUG level,
  # if off then they are not logged
  log-received-messages = on
}
```
Message types with a payload size in bytes larger than the configured log-frame-size-exceeding are logged:
```
pekko.remote.artery {
  log-frame-size-exceeding = 10000b
}
```
MDC values from Pekko internal logging
Since the logging is done asynchronously, the thread in which the logging was performed is captured in the MDC with attribute name sourceThread.

The path of the actor in which the logging was performed is available in the MDC with attribute name pekkoSource.

The name of the actor system in which the logging was performed is available in the MDC with attribute name sourceActorSystem, but that is typically also included in the pekkoSource attribute.

The address of the actor system, containing host and port if the system is using Cluster, is available through pekkoAddress.

For typed actors the log event timestamp is taken when the log call was made. However, Pekko’s internal logging as well as classic actor logging is asynchronous, which means that the timestamp of a log entry is taken from when the underlying logger implementation was called; this can be surprising at first. If you want to output the timestamp more accurately for such loggers, use the MDC attribute pekkoTimestamp. Note that this MDC key will not have any value for a typed actor.
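The captured timestamp can be rendered like any other MDC attribute with an %X specifier, for example (a pattern sketch; remember that pekkoTimestamp is empty for typed actors, so keep %date in the pattern as well):

```xml
<encoder>
  <pattern>%date{ISO8601} %X{pekkoTimestamp} %-5level %logger{36} - %msg%n</pattern>
</encoder>
```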
Markers
Pekko logs some events with markers. Some of these events also include structured MDC properties.
- The “SECURITY” marker is used for highlighting security related events or incidents.
- Pekko Actor is using the markers defined in ActorLogMarker.
- Pekko Cluster is using the markers defined in ClusterLogMarker.
- Pekko Remoting is using the markers defined in RemoteLogMarker.
- Pekko Cluster Sharding is using the markers defined in ShardingLogMarker.
Markers and MDC properties are automatically picked up by the Logstash Logback encoder.
The marker can be included in the Logback output with %marker, and all MDC properties as key-value entries with %mdc:
```xml
<encoder>
  <pattern>[%date{ISO8601}] [%level] [%logger] [%marker] [%thread] - %msg MDC: {%mdc}%n</pattern>
</encoder>
```
Logger names
It can be useful to enable debug level or other SLF4J backend configuration for certain modules of Pekko when troubleshooting. Those logger names are typically prefixed with the package name of the classes in that module. For example, in Logback the configuration may look like this to enable debug logging for Cluster Sharding:
```xml
<logger name="pekko.cluster.sharding" level="DEBUG" />
<root level="INFO">
  <appender-ref ref="ASYNC"/>
</root>
```
Other examples of logger names or prefixes:
- pekko.cluster
- pekko.cluster.Cluster
- pekko.cluster.ClusterHeartbeat
- pekko.cluster.ClusterGossip
- pekko.cluster.ddata
- pekko.cluster.pubsub
- pekko.cluster.singleton
- pekko.cluster.sharding
- pekko.coordination.lease
- pekko.discovery
- pekko.persistence
- pekko.remote
Logging in tests
Testing utilities are described in Testing.