Class PersistenceTestKitReadJournal

java.lang.Object
  org.apache.pekko.persistence.testkit.query.scaladsl.PersistenceTestKitReadJournal

All Implemented Interfaces:
CurrentEventsByPersistenceIdQuery, CurrentEventsByTagQuery, EventsByPersistenceIdQuery, EventsByTagQuery, PagedPersistenceIdsQuery, ReadJournal, CurrentEventsBySliceQuery, EventsBySliceQuery

public final class PersistenceTestKitReadJournal extends java.lang.Object implements ReadJournal, EventsByPersistenceIdQuery, CurrentEventsByPersistenceIdQuery, CurrentEventsByTagQuery, CurrentEventsBySliceQuery, PagedPersistenceIdsQuery, EventsByTagQuery, EventsBySliceQuery
-
-
Constructor Summary
Constructors
PersistenceTestKitReadJournal(ExtendedActorSystem system, com.typesafe.config.Config config, java.lang.String configPath)
-
Method Summary
Source<EventEnvelope,NotUsed> currentEventsByPersistenceId(java.lang.String persistenceId, long fromSequenceNr, long toSequenceNr)
    Same type of query as pekko.persistence.query.scaladsl.EventsByPersistenceIdQuery#eventsByPersistenceId but the event stream is completed immediately when it reaches the end of the "result set".

long currentEventsByPersistenceId$default$2()

long currentEventsByPersistenceId$default$3()

<Event> Source<EventEnvelope<Event>,NotUsed> currentEventsBySlices(java.lang.String entityType, int minSlice, int maxSlice, Offset offset)
    Same type of query as pekko.persistence.query.typed.scaladsl.EventsBySliceQuery.eventsBySlices but the event stream is completed immediately when it reaches the end of the "result set".

Source<EventEnvelope,NotUsed> currentEventsByTag(java.lang.String tag, Offset offset)
    Same type of query as pekko.persistence.query.scaladsl.EventsByTagQuery#eventsByTag but the event stream is completed immediately when it reaches the end of the "result set".

Offset currentEventsByTag$default$2()

Source<java.lang.String,NotUsed> currentPersistenceIds(scala.Option<java.lang.String> afterId, long limit)
    Get the current persistence ids.

Source<EventEnvelope,NotUsed> eventsByPersistenceId(java.lang.String persistenceId, long fromSequenceNr, long toSequenceNr)
    Query events for a specific PersistentActor identified by persistenceId.

long eventsByPersistenceId$default$2()

long eventsByPersistenceId$default$3()

<Event> Source<EventEnvelope<Event>,NotUsed> eventsBySlices(java.lang.String entityType, int minSlice, int maxSlice, Offset offset)
    Query events for given slices.

Source<EventEnvelope,NotUsed> eventsByTag(java.lang.String tag, Offset offset)
    Query events that have a specific tag.

Offset eventsByTag$default$2()

static java.lang.String Identifier()

int sliceForPersistenceId(java.lang.String persistenceId)

scala.collection.immutable.Seq<scala.collection.immutable.Range> sliceRanges(int numberOfRanges)
-
-
-
Constructor Detail
-
PersistenceTestKitReadJournal
public PersistenceTestKitReadJournal(ExtendedActorSystem system, com.typesafe.config.Config config, java.lang.String configPath)
-
-
Method Detail
-
Identifier
public static java.lang.String Identifier()
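Illustrative sketch (not part of the generated API text): the read journal is normally looked up through PersistenceQuery using this Identifier rather than by calling the constructor directly. The system name and the PersistenceTestKitPlugin.config shortcut for enabling the test kit journal are assumptions for this example; the later examples reuse the queries value and the implicit system defined here.

    import org.apache.pekko.actor.ActorSystem
    import org.apache.pekko.persistence.query.PersistenceQuery
    import org.apache.pekko.persistence.testkit.PersistenceTestKitPlugin
    import org.apache.pekko.persistence.testkit.query.scaladsl.PersistenceTestKitReadJournal

    // Start an ActorSystem with the persistence test kit journal enabled (assumed config helper)
    implicit val system: ActorSystem = ActorSystem("example", PersistenceTestKitPlugin.config)

    // Look up this read journal by its Identifier
    val queries: PersistenceTestKitReadJournal =
      PersistenceQuery(system).readJournalFor[PersistenceTestKitReadJournal](
        PersistenceTestKitReadJournal.Identifier)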
-
eventsByPersistenceId
public Source<EventEnvelope,NotUsed> eventsByPersistenceId(java.lang.String persistenceId, long fromSequenceNr, long toSequenceNr)
Description copied from interface: EventsByPersistenceIdQuery
Query events for a specific PersistentActor identified by persistenceId.

You can retrieve a subset of all events by specifying fromSequenceNr and toSequenceNr or use 0L and Long.MaxValue respectively to retrieve all events. The query will return all the events inclusive of the fromSequenceNr and toSequenceNr values.

The returned event stream should be ordered by sequence number.

The stream is not completed when it reaches the end of the currently stored events, but it continues to push new events when new events are persisted. The corresponding query that is completed when it reaches the end of the currently stored events is provided by pekko.persistence.query.scaladsl.CurrentEventsByPersistenceIdQuery#currentEventsByPersistenceId.

Specified by:
eventsByPersistenceId in interface EventsByPersistenceIdQuery
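A minimal usage sketch, continuing from the setup shown under Identifier(); the persistence id "pid-1" and the element count are assumptions. Because this is the live query, the stream is bounded with take before materializing:

    import org.apache.pekko.persistence.query.EventEnvelope
    import org.apache.pekko.stream.scaladsl.Sink
    import scala.concurrent.Future

    // Live query: never completes on its own, so take a fixed number of envelopes for the test
    val firstTwo: Future[Seq[EventEnvelope]] =
      queries
        .eventsByPersistenceId("pid-1", 0L, Long.MaxValue)
        .take(2)
        .runWith(Sink.seq)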
-
eventsByPersistenceId$default$2
public long eventsByPersistenceId$default$2()
-
eventsByPersistenceId$default$3
public long eventsByPersistenceId$default$3()
-
currentEventsByPersistenceId
public Source<EventEnvelope,NotUsed> currentEventsByPersistenceId(java.lang.String persistenceId, long fromSequenceNr, long toSequenceNr)
Description copied from interface: CurrentEventsByPersistenceIdQuery
Same type of query as pekko.persistence.query.scaladsl.EventsByPersistenceIdQuery#eventsByPersistenceId but the event stream is completed immediately when it reaches the end of the "result set". Events that are stored after the query is completed are not included in the event stream.

Specified by:
currentEventsByPersistenceId in interface CurrentEventsByPersistenceIdQuery
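A sketch of the completing variant, again assuming the queries value and implicit system from the Identifier() example; the stream finishes on its own, so it can be drained directly:

    import org.apache.pekko.persistence.query.EventEnvelope
    import org.apache.pekko.stream.scaladsl.Sink
    import scala.concurrent.Future

    // Completes once the currently stored events for "pid-1" have been emitted
    val stored: Future[Seq[EventEnvelope]] =
      queries.currentEventsByPersistenceId("pid-1", 0L, Long.MaxValue).runWith(Sink.seq)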
-
currentEventsByPersistenceId$default$2
public long currentEventsByPersistenceId$default$2()
-
currentEventsByPersistenceId$default$3
public long currentEventsByPersistenceId$default$3()
-
currentEventsByTag
public Source<EventEnvelope,NotUsed> currentEventsByTag(java.lang.String tag, Offset offset)
Description copied from interface: CurrentEventsByTagQuery
Same type of query as pekko.persistence.query.scaladsl.EventsByTagQuery#eventsByTag but the event stream is completed immediately when it reaches the end of the "result set". Depending on journal implementation, this may mean all events up to when the query is started, or it may include events that are persisted while the query is still streaming results. For eventually consistent stores, it may only include all events up to some point before the query is started.

Specified by:
currentEventsByTag in interface CurrentEventsByTagQuery
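A sketch, assuming events were persisted with the tag "my-tag" and reusing the queries value from the Identifier() example; NoOffset starts the query from the beginning:

    import org.apache.pekko.persistence.query.{ EventEnvelope, NoOffset }
    import org.apache.pekko.stream.scaladsl.Sink
    import scala.concurrent.Future

    // Finite stream of all currently stored events carrying the tag
    val tagged: Future[Seq[EventEnvelope]] =
      queries.currentEventsByTag("my-tag", NoOffset).runWith(Sink.seq)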
-
currentEventsByTag$default$2
public Offset currentEventsByTag$default$2()
-
currentEventsBySlices
public <Event> Source<EventEnvelope<Event>,NotUsed> currentEventsBySlices(java.lang.String entityType, int minSlice, int maxSlice, Offset offset)
Description copied from interface: CurrentEventsBySliceQuery
Same type of query as pekko.persistence.query.typed.scaladsl.EventsBySliceQuery.eventsBySlices but the event stream is completed immediately when it reaches the end of the "result set". Depending on journal implementation, this may mean all events up to when the query is started, or it may include events that are persisted while the query is still streaming results. For eventually consistent stores, it may only include all events up to some point before the query is started.

Specified by:
currentEventsBySlices in interface CurrentEventsBySliceQuery
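A sketch using a slice range derived from sliceRanges below and the setup from the Identifier() example; the entity type "MyEntity" and the event type String are assumptions:

    import org.apache.pekko.persistence.query.NoOffset
    import org.apache.pekko.persistence.query.typed.EventEnvelope
    import org.apache.pekko.stream.scaladsl.Sink
    import scala.concurrent.Future

    // Query one of four slice ranges; completes at the end of the current "result set"
    val range = queries.sliceRanges(4).head
    val sliceEvents: Future[Seq[EventEnvelope[String]]] =
      queries
        .currentEventsBySlices[String]("MyEntity", range.min, range.max, NoOffset)
        .runWith(Sink.seq)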
-
sliceForPersistenceId
public int sliceForPersistenceId(java.lang.String persistenceId)
Specified by:
sliceForPersistenceId in interface CurrentEventsBySliceQuery
Specified by:
sliceForPersistenceId in interface EventsBySliceQuery
-
sliceRanges
public scala.collection.immutable.Seq<scala.collection.immutable.Range> sliceRanges(int numberOfRanges)
Specified by:
sliceRanges in interface CurrentEventsBySliceQuery
Specified by:
sliceRanges in interface EventsBySliceQuery
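A small sketch of how the two slice helpers fit together, reusing the queries value from the Identifier() example; the persistence id is an assumption:

    // Partition the slice space into four ranges and find the range that owns a persistence id
    val ranges: Seq[Range] = queries.sliceRanges(4)
    val slice: Int = queries.sliceForPersistenceId("pid-1")
    val owningRange: Option[Range] = ranges.find(_.contains(slice))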
-
currentPersistenceIds
public Source<java.lang.String,NotUsed> currentPersistenceIds(scala.Option<java.lang.String> afterId, long limit)
Get the current persistence ids.

Not all plugins may support in-database paging, and may simply use drop/take Pekko Streams operators to manipulate the result set according to the paging parameters.

Specified by:
currentPersistenceIds in interface PagedPersistenceIdsQuery

Parameters:
afterId - The ID to start returning results from, or None to return all ids. This should be an id returned from a previous invocation of this command. Callers should not assume that ids are returned in sorted order.
limit - The maximum results to return. Use Long.MaxValue to return all results. Must be greater than zero.

Returns:
A source containing all the persistence ids, limited as specified.
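A paging sketch, reusing queries and the implicit system from the Identifier() example; the page size and the nextPage helper are illustrative, not part of the API:

    import org.apache.pekko.stream.scaladsl.Sink
    import scala.concurrent.Future

    // First page of at most 100 persistence ids
    val firstPage: Future[Seq[String]] =
      queries.currentPersistenceIds(afterId = None, limit = 100).runWith(Sink.seq)

    // Hypothetical continuation: pass the last id of the previous page as afterId
    def nextPage(lastId: String): Future[Seq[String]] =
      queries.currentPersistenceIds(afterId = Some(lastId), limit = 100).runWith(Sink.seq)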
-
eventsByTag
public Source<EventEnvelope,NotUsed> eventsByTag(java.lang.String tag, Offset offset)
Description copied from interface: EventsByTagQuery
Query events that have a specific tag. A tag can for example correspond to an aggregate root type (in DDD terminology).

The consumer can keep track of its current position in the event stream by storing the offset and restart the query from a given offset after a crash/restart.

The exact meaning of the offset depends on the journal and must be documented by the read journal plugin. It may be a sequential id number that uniquely identifies the position of each event within the event stream. Distributed data stores cannot easily support those semantics and they may use a weaker meaning. For example it may be a timestamp (taken when the event was created or stored). Timestamps are not unique and not strictly ordered, since clocks on different machines may not be synchronized.

In strongly consistent stores, where the offset is unique and strictly ordered, the stream should start from the next event after the offset. Otherwise, the read journal should ensure that between an invocation that returned an event with the given offset, and this invocation, no events are missed. Depending on the journal implementation, this may mean that this invocation will return events that were already returned by the previous invocation, including the event with the passed in offset.

The returned event stream should be ordered by offset if possible, but this can also be difficult to fulfill for a distributed data store. The order must be documented by the read journal plugin.

The stream is not completed when it reaches the end of the currently stored events, but it continues to push new events when new events are persisted. The corresponding query that is completed when it reaches the end of the currently stored events is provided by CurrentEventsByTagQuery.currentEventsByTag(java.lang.String, org.apache.pekko.persistence.query.Offset).

Specified by:
eventsByTag in interface EventsByTagQuery
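A live-query sketch, assuming the tag "my-tag" and reusing the setup from the Identifier() example; the stored offset would normally come from the consumer's own bookkeeping:

    import org.apache.pekko.persistence.query.{ EventEnvelope, NoOffset, Offset }
    import org.apache.pekko.stream.scaladsl.Sink
    import scala.concurrent.Future

    // Resume from a previously stored offset, or NoOffset on the very first run;
    // the live stream does not complete, so bound it for the test
    def liveByTag(startFrom: Offset = NoOffset): Future[Seq[EventEnvelope]] =
      queries.eventsByTag("my-tag", startFrom).take(10).runWith(Sink.seq)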
-
eventsByTag$default$2
public Offset eventsByTag$default$2()
-
eventsBySlices
public <Event> Source<EventEnvelope<Event>,NotUsed> eventsBySlices(java.lang.String entityType, int minSlice, int maxSlice, Offset offset)
Description copied from interface: EventsBySliceQuery
Query events for given slices. A slice is deterministically defined based on the persistence id. The purpose is to evenly distribute all persistence ids over the slices.

The consumer can keep track of its current position in the event stream by storing the offset and restart the query from a given offset after a crash/restart.

The exact meaning of the offset depends on the journal and must be documented by the read journal plugin. It may be a sequential id number that uniquely identifies the position of each event within the event stream. Distributed data stores cannot easily support those semantics and they may use a weaker meaning. For example it may be a timestamp (taken when the event was created or stored). Timestamps are not unique and not strictly ordered, since clocks on different machines may not be synchronized.

In strongly consistent stores, where the offset is unique and strictly ordered, the stream should start from the next event after the offset. Otherwise, the read journal should ensure that between an invocation that returned an event with the given offset, and this invocation, no events are missed. Depending on the journal implementation, this may mean that this invocation will return events that were already returned by the previous invocation, including the event with the passed in offset.

The returned event stream should be ordered by offset if possible, but this can also be difficult to fulfill for a distributed data store. The order must be documented by the read journal plugin.

The stream is not completed when it reaches the end of the currently stored events, but it continues to push new events when new events are persisted. The corresponding query that is completed when it reaches the end of the currently stored events is provided by CurrentEventsBySliceQuery.currentEventsBySlices.

Specified by:
eventsBySlices in interface EventsBySliceQuery
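A live-query sketch over one of two slice ranges, with the same assumed entity type and event type as the currentEventsBySlices example; take bounds the otherwise unbounded stream:

    import org.apache.pekko.persistence.query.NoOffset
    import org.apache.pekko.persistence.query.typed.EventEnvelope
    import org.apache.pekko.stream.scaladsl.Sink
    import scala.concurrent.Future

    // Live stream of events for the first half of the slice space
    val firstRange = queries.sliceRanges(2).head
    val live: Future[Seq[EventEnvelope[String]]] =
      queries
        .eventsBySlices[String]("MyEntity", firstRange.min, firstRange.max, NoOffset)
        .take(5)
        .runWith(Sink.seq)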
-
-