Class AsyncWriteJournal
- java.lang.Object
-
- org.apache.pekko.persistence.journal.japi.AsyncRecovery
-
- org.apache.pekko.persistence.journal.japi.AsyncWriteJournal
-
- All Implemented Interfaces:
Actor, AsyncRecovery, AsyncWriteJournal, WriteJournalBase
public abstract class AsyncWriteJournal extends AsyncRecovery implements AsyncWriteJournal
Java API: abstract journal, optimized for asynchronous, non-blocking writes.
-
-
Nested Class Summary
-
Nested classes/interfaces inherited from interface org.apache.pekko.actor.Actor
Actor.emptyBehavior$, Actor.ignoringBehavior$
-
Nested classes/interfaces inherited from interface org.apache.pekko.persistence.journal.AsyncWriteJournal
AsyncWriteJournal.Desequenced, AsyncWriteJournal.Desequenced$, AsyncWriteJournal.Resequencer
-
-
Constructor Summary
Constructor Description
AsyncWriteJournal()
-
Method Summary
All Methods  Instance Methods  Abstract Methods  Concrete Methods

scala.concurrent.Future<scala.runtime.BoxedUnit>
asyncDeleteMessagesTo(java.lang.String persistenceId, long toSequenceNr)
Plugin API: asynchronously deletes all persistent messages up to toSequenceNr (inclusive).

scala.concurrent.Future<scala.collection.immutable.Seq<scala.util.Try<scala.runtime.BoxedUnit>>>
asyncWriteMessages(scala.collection.immutable.Seq<AtomicWrite> messages)
Plugin API: asynchronously writes a batch (Seq) of persistent messages to the journal.

ActorContext
context()
Scala API: Stores the context for this actor, including self, and sender.

scala.concurrent.Future<java.lang.Void>
doAsyncDeleteMessagesTo(java.lang.String persistenceId, long toSequenceNr)
Java API, Plugin API: asynchronously deletes all persistent messages up to `toSequenceNr` (inclusive).

scala.concurrent.Future<java.lang.Long>
doAsyncReadHighestSequenceNr(java.lang.String persistenceId, long fromSequenceNr)
Java API, Plugin API: asynchronously reads the highest stored sequence number for the given `persistenceId`.

scala.concurrent.Future<java.lang.Void>
doAsyncReplayMessages(java.lang.String persistenceId, long fromSequenceNr, long toSequenceNr, long max, java.util.function.Consumer<PersistentRepr> replayCallback)
Java API, Plugin API: asynchronously replays persistent messages.

scala.concurrent.Future<java.lang.Iterable<java.util.Optional<java.lang.Exception>>>
doAsyncWriteMessages(java.lang.Iterable<AtomicWrite> messages)
Java API, Plugin API: asynchronously writes a batch (`Iterable`) of persistent messages to the journal.

protected void
org$apache$pekko$actor$Actor$_setter_$context_$eq(ActorContext x$1)
Scala API: Stores the context for this actor, including self, and sender.

protected void
org$apache$pekko$actor$Actor$_setter_$self_$eq(ActorRef x$1)
The 'self' field holds the ActorRef for this actor.

protected void
org$apache$pekko$persistence$journal$AsyncWriteJournal$_setter_$receiveWriteJournal_$eq(scala.PartialFunction<java.lang.Object,scala.runtime.BoxedUnit> x$1)

protected void
org$apache$pekko$persistence$journal$WriteJournalBase$_setter_$persistence_$eq(Persistence x$1)

Persistence
persistence()

scala.PartialFunction<java.lang.Object,scala.runtime.BoxedUnit>
receiveWriteJournal()

ActorRef
self()
The 'self' field holds the ActorRef for this actor.
-
Methods inherited from class org.apache.pekko.persistence.journal.japi.AsyncRecovery
asyncReadHighestSequenceNr, asyncReplayMessages
-
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
-
Methods inherited from interface org.apache.pekko.actor.Actor
aroundPostRestart, aroundPostStop, aroundPreRestart, aroundPreStart, aroundReceive, postRestart, postStop, preRestart, preStart, sender, supervisorStrategy, unhandled
-
Methods inherited from interface org.apache.pekko.persistence.journal.AsyncRecovery
asyncReadHighestSequenceNr, asyncReplayMessages
-
Methods inherited from interface org.apache.pekko.persistence.journal.AsyncWriteJournal
isReplayFilterEnabled, receive, receivePluginInternal, resequencerCounter_$eq
-
Methods inherited from interface org.apache.pekko.persistence.journal.WriteJournalBase
adaptFromJournal, adaptToJournal, preparePersistentBatch
-
-
-
-
Method Detail
-
asyncDeleteMessagesTo
public final scala.concurrent.Future<scala.runtime.BoxedUnit> asyncDeleteMessagesTo(java.lang.String persistenceId, long toSequenceNr)
Description copied from interface: AsyncWriteJournal
Plugin API: asynchronously deletes all persistent messages up to toSequenceNr (inclusive).

This call is protected with a circuit-breaker. Message deletion doesn't affect the highest sequence number of messages; the journal must maintain the highest sequence number and never decrease it.
- Specified by:
asyncDeleteMessagesTo in interface AsyncWriteJournal
-
asyncWriteMessages
public final scala.concurrent.Future<scala.collection.immutable.Seq<scala.util.Try<scala.runtime.BoxedUnit>>> asyncWriteMessages(scala.collection.immutable.Seq<AtomicWrite> messages)
Description copied from interface: AsyncWriteJournal
Plugin API: asynchronously writes a batch (Seq) of persistent messages to the journal.

The batch is only for performance reasons, i.e. all messages don't have to be written atomically. Higher throughput can typically be achieved by using batch inserts of many records compared to inserting records one-by-one, but this aspect depends on the underlying data store and a journal implementation can implement it as efficiently as possible. Journals should aim to persist events in order for a given persistenceId as otherwise, in case of a failure, the persistent state may end up being inconsistent.

Each AtomicWrite message contains the single PersistentRepr that corresponds to the event that was passed to the persist method of the PersistentActor, or it contains several PersistentRepr that correspond to the events that were passed to the persistAll method of the PersistentActor. All PersistentRepr of the AtomicWrite must be written to the data store atomically, i.e. all or none must be stored. If the journal (data store) cannot support atomic writes of multiple events it should reject such writes with a Try Failure with an UnsupportedOperationException describing the issue. This limitation should also be documented by the journal plugin.

If there are failures when storing any of the messages in the batch the returned Future must be completed with failure. The Future must only be completed with success when all messages in the batch have been confirmed to be stored successfully, i.e. they will be readable, and visible, in a subsequent replay. If there is uncertainty about whether the messages were stored or not the Future must be completed with failure.

Data store connection problems must be signaled by completing the Future with failure.

The journal can also signal that it rejects individual messages (AtomicWrite) by the returned immutable.Seq[Try[Unit]]. It is possible but not mandatory to reduce the number of allocations by returning Future.successful(Nil) for the happy path, i.e. when no messages are rejected. Otherwise the returned Seq must have as many elements as the input messages Seq. Each Try element signals whether the corresponding AtomicWrite is rejected or not, with an exception describing the problem. Rejecting a message means it was not stored, i.e. it must not be included in a later replay. Rejecting a message is typically done before attempting to store it, e.g. because of a serialization error.

Data store connection problems must not be signaled as rejections.

Calls to this method are serialized by the enclosing journal actor. If you spawn work in asynchronous tasks it is alright that they complete the futures in any order, but the actual writes for a specific persistenceId should be serialized to avoid issues such as events of a later write being visible to consumers (query side, or replay) before the events of an earlier write are visible. A PersistentActor will not send a new WriteMessages request before the previous one has been completed.

Please note that the sender field of the contained PersistentRepr objects has been nulled out (i.e. set to ActorRef.noSender) in order to not use space in the journal for a sender reference that will likely be obsolete during replay.

Please also note that requests for the highest sequence number may be made concurrently to this call executing for the same persistenceId, in particular it is possible that a restarting actor tries to recover before its outstanding writes have completed. In the latter case it is highly desirable to defer reading the highest sequence number until all outstanding writes have completed, otherwise the PersistentActor may reuse sequence numbers.

This call is protected with a circuit-breaker.
- Specified by:
asyncWriteMessages in interface AsyncWriteJournal
-
context
public ActorContext context()
Description copied from interface: Actor
Scala API: Stores the context for this actor, including self, and sender. It is implicit to support operations such as forward.

WARNING: Only valid within the Actor itself, so do not close over it and publish it to other threads!

pekko.actor.ActorContext is the Scala API. getContext returns a pekko.actor.AbstractActor.ActorContext, which is the Java API of the actor context.
-
org$apache$pekko$actor$Actor$_setter_$context_$eq
protected void org$apache$pekko$actor$Actor$_setter_$context_$eq(ActorContext x$1)
Description copied from interface: Actor
Scala API: Stores the context for this actor, including self, and sender. It is implicit to support operations such as forward.

WARNING: Only valid within the Actor itself, so do not close over it and publish it to other threads!

pekko.actor.ActorContext is the Scala API. getContext returns a pekko.actor.AbstractActor.ActorContext, which is the Java API of the actor context.
- Specified by:
org$apache$pekko$actor$Actor$_setter_$context_$eq in interface Actor
-
org$apache$pekko$actor$Actor$_setter_$self_$eq
protected final void org$apache$pekko$actor$Actor$_setter_$self_$eq(ActorRef x$1)
Description copied from interface: Actor
The 'self' field holds the ActorRef for this actor. Can be used to send messages to itself: self ! message
- Specified by:
org$apache$pekko$actor$Actor$_setter_$self_$eq in interface Actor
-
org$apache$pekko$persistence$journal$AsyncWriteJournal$_setter_$receiveWriteJournal_$eq
protected final void org$apache$pekko$persistence$journal$AsyncWriteJournal$_setter_$receiveWriteJournal_$eq(scala.PartialFunction<java.lang.Object,scala.runtime.BoxedUnit> x$1)
- Specified by:
org$apache$pekko$persistence$journal$AsyncWriteJournal$_setter_$receiveWriteJournal_$eq in interface AsyncWriteJournal
-
org$apache$pekko$persistence$journal$WriteJournalBase$_setter_$persistence_$eq
protected void org$apache$pekko$persistence$journal$WriteJournalBase$_setter_$persistence_$eq(Persistence x$1)
- Specified by:
org$apache$pekko$persistence$journal$WriteJournalBase$_setter_$persistence_$eq in interface WriteJournalBase
-
persistence
public Persistence persistence()
- Specified by:
persistence in interface WriteJournalBase
-
receiveWriteJournal
public final scala.PartialFunction<java.lang.Object,scala.runtime.BoxedUnit> receiveWriteJournal()
- Specified by:
receiveWriteJournal in interface AsyncWriteJournal
-
self
public final ActorRef self()
Description copied from interface: Actor
The 'self' field holds the ActorRef for this actor. Can be used to send messages to itself: self ! message
-
doAsyncWriteMessages
public abstract scala.concurrent.Future<java.lang.Iterable<java.util.Optional<java.lang.Exception>>> doAsyncWriteMessages(java.lang.Iterable<AtomicWrite> messages)
Java API, Plugin API: asynchronously writes a batch (`Iterable`) of persistent messages to the journal.

The batch is only for performance reasons, i.e. all messages don't have to be written atomically. Higher throughput can typically be achieved by using batch inserts of many records compared to inserting records one-by-one, but this aspect depends on the underlying data store and a journal implementation can implement it as efficiently as possible. Journals should aim to persist events in order for a given `persistenceId` as otherwise, in case of a failure, the persistent state may end up being inconsistent.

Each `AtomicWrite` message contains the single `PersistentRepr` that corresponds to the event that was passed to the `persist` method of the `PersistentActor`, or it contains several `PersistentRepr` that correspond to the events that were passed to the `persistAll` method of the `PersistentActor`. All `PersistentRepr` of the `AtomicWrite` must be written to the data store atomically, i.e. all or none must be stored. If the journal (data store) cannot support atomic writes of multiple events it should reject such writes with an `Optional` with an `UnsupportedOperationException` describing the issue. This limitation should also be documented by the journal plugin.

If there are failures when storing any of the messages in the batch the returned `Future` must be completed with failure. The `Future` must only be completed with success when all messages in the batch have been confirmed to be stored successfully, i.e. they will be readable, and visible, in a subsequent replay. If there is uncertainty about whether the messages were stored or not the `Future` must be completed with failure.

Data store connection problems must be signaled by completing the `Future` with failure.

The journal can also signal that it rejects individual messages (`AtomicWrite`) by the returned `Iterable<Optional<Exception>>`. The returned `Iterable` must have as many elements as the input `messages` `Iterable`. Each `Optional` element signals whether the corresponding `AtomicWrite` is rejected or not, with an exception describing the problem. Rejecting a message means it was not stored, i.e. it must not be included in a later replay. Rejecting a message is typically done before attempting to store it, e.g. because of a serialization error.

Data store connection problems must not be signaled as rejections.

Note that it is possible to reduce the number of allocations by caching some result `Iterable` for the happy path, i.e. when no messages are rejected.

Calls to this method are serialized by the enclosing journal actor. If you spawn work in asynchronous tasks it is alright that they complete the futures in any order, but the actual writes for a specific persistenceId should be serialized to avoid issues such as events of a later write being visible to consumers (query side, or replay) before the events of an earlier write are visible. This can also be done with consistent hashing if it is too fine-grained to do it on the persistenceId level. Normally a `PersistentActor` will only have one outstanding write request to the journal but it may emit several write requests when `persistAsync` is used and the max batch size is reached.
This call is protected with a circuit-breaker.
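The rejection-vs-failure contract above can be sketched with plain JDK types (Pekko's `AtomicWrite` and `PersistentRepr` are stood in for by lists of payload objects; `InMemoryJournalSketch` and `writeBatch` are hypothetical names, not part of the Pekko API):

```java
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical in-memory sketch of the doAsyncWriteMessages contract: one
// Optional<Exception> per batch element, where an empty Optional means
// "stored" and a present one means "rejected".
class InMemoryJournalSketch {
    // persistenceId -> ordered list of stored events
    final ConcurrentHashMap<String, List<Object>> store = new ConcurrentHashMap<>();

    CompletableFuture<List<Optional<Exception>>> writeBatch(
            String persistenceId, List<List<Object>> atomicWrites) {
        List<Optional<Exception>> results = new ArrayList<>();
        List<Object> events = store.computeIfAbsent(persistenceId, id -> new ArrayList<>());
        for (List<Object> atomicWrite : atomicWrites) {
            // Rejection is decided before storing, e.g. for a serialization
            // problem, and is signaled per element, NOT by failing the future.
            boolean allSerializable =
                atomicWrite.stream().allMatch(e -> e instanceof Serializable);
            if (allSerializable) {
                events.addAll(atomicWrite); // all events of one atomic write, or none
                results.add(Optional.empty());
            } else {
                results.add(Optional.of(
                    new UnsupportedOperationException("non-serializable payload")));
            }
        }
        // A store-level connection failure would instead fail the future itself.
        return CompletableFuture.completedFuture(results);
    }
}
```

A real plugin would return `Future<Iterable<Optional<Exception>>>` over Pekko's `AtomicWrite` values; the point here is only the one-result-per-element shape and that rejected writes are never stored.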
-
doAsyncDeleteMessagesTo
public abstract scala.concurrent.Future<java.lang.Void> doAsyncDeleteMessagesTo(java.lang.String persistenceId, long toSequenceNr)
Java API, Plugin API: asynchronously deletes all persistent messages up to `toSequenceNr` (inclusive).

This call is protected with a circuit-breaker.
- See Also:
AsyncRecoveryPlugin
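A key invariant of deletion, noted in asyncDeleteMessagesTo above, is that removing messages must never lower the journal's highest sequence number. A minimal JDK-only sketch (the `DeletionSketch` class and its fields are hypothetical, not Pekko API):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentSkipListMap;

// Hypothetical sketch: delete up to toSequenceNr (inclusive) while the
// highest sequence number is tracked separately and never decreases.
class DeletionSketch {
    final ConcurrentSkipListMap<Long, Object> messages = new ConcurrentSkipListMap<>();
    volatile long highestSequenceNr = 0L; // maintained independently of deletions

    void append(long seqNr, Object event) {
        messages.put(seqNr, event);
        highestSequenceNr = Math.max(highestSequenceNr, seqNr);
    }

    CompletableFuture<Void> deleteMessagesTo(long toSequenceNr) {
        // Remove every message with sequence number <= toSequenceNr ...
        messages.headMap(toSequenceNr, true).clear();
        // ... but leave highestSequenceNr untouched: it must never decrease.
        return CompletableFuture.completedFuture(null);
    }
}
```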
-
doAsyncReplayMessages
public abstract scala.concurrent.Future<java.lang.Void> doAsyncReplayMessages(java.lang.String persistenceId, long fromSequenceNr, long toSequenceNr, long max, java.util.function.Consumer<PersistentRepr> replayCallback)
Java API, Plugin API: asynchronously replays persistent messages. Implementations replay a message by calling `replayCallback`. The returned future must be completed when all messages (matching the sequence number bounds) have been replayed. The future must be completed with a failure if any of the persistent messages could not be replayed.

The `replayCallback` must also be called with messages that have been marked as deleted. In this case a replayed message's `deleted` method must return `true`.

The `toSequenceNr` is the lowest of what was returned by doAsyncReadHighestSequenceNr(java.lang.String, long) and what the user specified as the `Recovery` parameter.
- Parameters:
persistenceId - id of the persistent actor.
fromSequenceNr - sequence number where replay should start (inclusive).
toSequenceNr - sequence number where replay should end (inclusive).
max - maximum number of messages to be replayed.
replayCallback - called to replay a single message. Can be called from any thread.
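The inclusive bounds and the `max` cutoff can be sketched with plain JDK types (`ReplaySketch` and `replayMessages` are hypothetical names; string payloads stand in for `PersistentRepr`):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.function.Consumer;

// Hypothetical sketch of the replay contract: invoke the callback for each
// stored message whose sequence number lies in [fromSequenceNr, toSequenceNr],
// in order, stopping after at most `max` messages.
class ReplaySketch {
    final ConcurrentSkipListMap<Long, String> messages = new ConcurrentSkipListMap<>();

    CompletableFuture<Void> replayMessages(long fromSequenceNr, long toSequenceNr,
                                           long max, Consumer<String> replayCallback) {
        long replayed = 0;
        // Both bounds are inclusive; iteration order is ascending sequence number.
        for (String event : messages.subMap(fromSequenceNr, true, toSequenceNr, true).values()) {
            if (replayed++ >= max) break; // honor the max-messages limit
            replayCallback.accept(event);
        }
        return CompletableFuture.completedFuture(null);
    }
}
```

A real plugin would complete the future with a failure if any message could not be read back; this sketch only shows the bounds handling.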
-
doAsyncReadHighestSequenceNr
public abstract scala.concurrent.Future<java.lang.Long> doAsyncReadHighestSequenceNr(java.lang.String persistenceId, long fromSequenceNr)
Java API, Plugin API: asynchronously reads the highest stored sequence number for the given `persistenceId`. The persistent actor will use the highest sequence number after recovery as the starting point when persisting new events. This sequence number is also used as `toSequenceNr` in a subsequent call to asyncReplayMessages unless the user has specified a lower `toSequenceNr`.
- Parameters:
persistenceId - id of the persistent actor.
fromSequenceNr - hint where to start searching for the highest sequence number.
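Because deletion must never lower the highest sequence number (see asyncDeleteMessagesTo), the answer has to survive even when all messages are gone. A JDK-only sketch of that interaction (`HighestSeqNrSketch` and its members are hypothetical names):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentSkipListMap;

// Hypothetical sketch: the highest sequence number reflects every event ever
// written, so it is tracked separately from the stored messages and is not
// affected by deletion.
class HighestSeqNrSketch {
    final ConcurrentSkipListMap<Long, String> messages = new ConcurrentSkipListMap<>();
    volatile long highestEverWritten = 0L;

    void append(long seqNr, String event) {
        messages.put(seqNr, event);
        highestEverWritten = Math.max(highestEverWritten, seqNr);
    }

    void deleteMessagesTo(long toSequenceNr) {
        messages.headMap(toSequenceNr, true).clear(); // highestEverWritten untouched
    }

    CompletableFuture<Long> readHighestSequenceNr(long fromSequenceNr) {
        // fromSequenceNr is only a hint for where to start searching (a log-based
        // store could begin scanning at that offset); the result must still be
        // the true highest, never less than the highest ever written.
        long stored = messages.isEmpty() ? 0L : messages.lastKey();
        return CompletableFuture.completedFuture(Math.max(stored, highestEverWritten));
    }
}
```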
-
-