trait FlowOpsMat[+Out, +Mat] extends FlowOps[Out, Mat]
INTERNAL API: this trait will be changed in binary-incompatible ways for classes that are derived from it! Do not implement this interface outside the Pekko code base!
Binary compatibility is only maintained for callers of this trait’s interface.
- Source: Flow.scala
Linear supertypes:
- FlowOps
- AnyRef
- Any
Implicit conversions: any2stringadd, StringFormat, Ensuring, ArrowAssoc
Type Members
- abstract type Closed
- Definition Classes
- FlowOps
- abstract type ClosedMat[+M] <: Graph[_, M]
- abstract type Repr[+O] <: ReprMat[O, Mat] { ... /* 4 definitions in type refinement */ }
- Definition Classes
- FlowOpsMat → FlowOps
- abstract type ReprMat[+O, +M] <: FlowOpsMat[O, M] { ... /* 4 definitions in type refinement */ }
Abstract Value Members
- abstract def addAttributes(attr: Attributes): Repr[Out]
- Definition Classes
- FlowOps
- abstract def async: Repr[Out]
Put an asynchronous boundary around this Flow.
If this is a SubFlow (created e.g. by groupBy), this creates an asynchronous boundary around each materialized sub-flow, not the super-flow. That way, the super-flow will communicate with sub-flows asynchronously.
- Definition Classes
- FlowOps
- abstract def mapMaterializedValue[Mat2](f: (Mat) => Mat2): ReprMat[Out, Mat2]
Transform the materialized value of this graph, leaving all other properties as they were.
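The effect of mapMaterializedValue can be sketched with a tiny stand-in for "something that produces a materialized value when run". This is plain Scala, not the Pekko API; RunnableSketch is a hypothetical name used only for illustration:

```scala
// Minimal model of a materializable graph: running it yields a value of type Mat.
final case class RunnableSketch[Mat](run: () => Mat) {
  // Transform only the materialized value; the "stream" itself is untouched.
  def mapMaterializedValue[Mat2](f: Mat => Mat2): RunnableSketch[Mat2] =
    RunnableSketch(() => f(run()))
}

val counter = RunnableSketch(() => 41)                          // materializes an Int
val asString = counter.mapMaterializedValue(n => s"count=${n + 1}") // now materializes a String

println(asString.run()) // prints "count=42"
```

The same shape applies to real graphs: a `Source` materializing a `Promise` can be mapped into one materializing a `Future`, for example, without changing the elements that flow through it.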
- abstract def named(name: String): Repr[Out]
- Definition Classes
- FlowOps
- abstract def to[Mat2](sink: Graph[SinkShape[Out], Mat2]): Closed
Connect this Flow to a Sink, concatenating the processing steps of both.

    +------------------------------+
    |  Resulting Sink[In, Mat]     |
    |                              |
    |  +------+          +------+  |
    |  |      |          |      |  |
In ~~> | flow | ~~Out~~> | sink |  |
    |  |   Mat|          |     M|  |
    |  +------+          +------+  |
    +------------------------------+

The materialized value of the combined Sink will be the materialized value of the current flow (ignoring the given Sink’s value); use toMat if a different strategy is needed.
See also FlowOpsMat.toMat when access to materialized values of the parameter is needed.
- Definition Classes
- FlowOps
- abstract def toMat[Mat2, Mat3](sink: Graph[SinkShape[Out], Mat2])(combine: (Mat, Mat2) => Mat3): ClosedMat[Mat3]
Connect this Flow to a Sink, concatenating the processing steps of both.

    +----------------------------+
    |  Resulting Sink            |
    |                            |
    |  +------+        +------+  |
    |  |      |        |      |  |
In ~~> | flow | ~Out~> | sink |  |
    |  +------+        +------+  |
    +----------------------------+

The combine function is used to compose the materialized values of this flow and that Sink into the materialized value of the resulting Sink.
It is recommended to use the internally optimized Keep.left and Keep.right combiners where appropriate instead of manually writing functions that pass through one of the values.
- abstract def via[T, Mat2](flow: Graph[FlowShape[Out, T], Mat2]): Repr[T]
Transform this Flow by appending the given processing steps.

    +---------------------------------+
    |  Resulting Flow[In, T, Mat]     |
    |                                 |
    |  +------+            +------+   |
    |  |      |            |      |   |
In ~~> | this | ~~Out~~> | flow | ~~> T
    |  |   Mat|            |     M|   |
    |  +------+            +------+   |
    +---------------------------------+

The materialized value of the combined Flow will be the materialized value of the current flow (ignoring the other Flow’s value); use viaMat if a different strategy is needed.
See also FlowOpsMat.viaMat when access to materialized values of the parameter is needed.
- Definition Classes
- FlowOps
- abstract def viaMat[T, Mat2, Mat3](flow: Graph[FlowShape[Out, T], Mat2])(combine: (Mat, Mat2) => Mat3): ReprMat[T, Mat3]
Transform this Flow by appending the given processing steps.

    +---------------------------------+
    |  Resulting Flow[In, T, M2]      |
    |                                 |
    |  +------+            +------+   |
    |  |      |            |      |   |
In ~~> | this | ~~Out~~> | flow | ~~> T
    |  |   Mat|            |     M|   |
    |  +------+            +------+   |
    +---------------------------------+

The combine function is used to compose the materialized values of this flow and that flow into the materialized value of the resulting Flow.
It is recommended to use the internally optimized Keep.left and Keep.right combiners where appropriate instead of manually writing functions that pass through one of the values.
- abstract def withAttributes(attr: Attributes): Repr[Out]
- Definition Classes
- FlowOps
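Several of the Mat-combining methods above recommend Keep.left and Keep.right; these combiners are simply functions that select one (or both) of the two materialized values. Their semantics can be sketched in plain Scala with local stand-ins (these mirror, but are not, the org.apache.pekko.stream.scaladsl.Keep object):

```scala
// Local stand-ins mirroring the semantics of Keep.left / Keep.right / Keep.both:
def keepLeft[L, R](l: L, r: R): L = l
def keepRight[L, R](l: L, r: R): R = r
def keepBoth[L, R](l: L, r: R): (L, R) = (l, r)

// e.g. toMat(sink)(Keep.right) keeps only the sink's materialized value:
val matFlow = "flowMat" // materialized value of this flow
val matSink = 123       // materialized value of the attached sink
assert(keepRight(matFlow, matSink) == 123)
assert(keepLeft(matFlow, matSink) == "flowMat")
assert(keepBoth(matFlow, matSink) == ("flowMat", 123))
```

Using the library-provided combiners instead of hand-written lambdas lets the implementation skip composing the untouched value, which is why they are "internally optimized".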
Concrete Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##: Int
- Definition Classes
- AnyRef → Any
- def +(other: String): String
- Implicit
- This member is added by an implicit conversion from FlowOpsMat[Out, Mat] to any2stringadd[FlowOpsMat[Out, Mat]] performed by method any2stringadd in scala.Predef.
- Definition Classes
- any2stringadd
- def ++[U >: Out, M](that: Graph[SourceShape[U], M]): Repr[U]
Concatenates this Flow with the given Source so the first element emitted by that source is emitted after the last element of this flow.
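Since ++ is concatenation, its element-level ordering matches ordinary sequence concatenation. A plain-collections analogy (not the streaming API, which additionally handles materialization and backpressure):

```scala
// The element ordering of `flow ++ source` mirrors sequence concatenation:
val thisFlowOutput = List(1, 2, 3) // elements this flow would emit
val thatSource     = List(4, 5)    // elements the appended source would emit

val combined = thisFlowOutput ++ thatSource
assert(combined == List(1, 2, 3, 4, 5)) // source elements follow the flow's last element
```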
- def ->[B](y: B): (FlowOpsMat[Out, Mat], B)
- Implicit
- This member is added by an implicit conversion from FlowOpsMat[Out, Mat] to ArrowAssoc[FlowOpsMat[Out, Mat]] performed by method ArrowAssoc in scala.Predef.
- Definition Classes
- ArrowAssoc
- Annotations
- @inline()
- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- def aggregateWithBoundary[Agg, Emit](allocate: () => Agg)(aggregate: (Agg, Out) => (Agg, Boolean), harvest: (Agg) => Emit, emitOnTimer: Option[((Agg) => Boolean, FiniteDuration)]): Repr[Emit]
Aggregate input elements into an arbitrary data structure that can be completed and emitted downstream when a custom condition is met, which can be triggered by the aggregate or a timer. It can be thought of as a more general groupedWeightedWithin.
Emits when the aggregation function decides the aggregate is complete or the timer function returns true
Backpressures when downstream backpressures and the aggregate is complete
Completes when upstream completes and the last aggregate has been emitted downstream
Cancels when downstream cancels
- allocate
allocate the initial data structure for aggregated elements
- aggregate
update the aggregated elements, return true if ready to emit after update.
- harvest
this is invoked before emit within the current stage/operator
- emitOnTimer
decide whether the current aggregated elements can be emitted, the custom function is invoked on every interval
- Definition Classes
- FlowOps
- Annotations
- @ApiMayChange()
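The allocate/aggregate/harvest contract can be exercised with plain collections, ignoring the timer branch. A semantics sketch, not the streaming implementation (runAggregate is a hypothetical helper):

```scala
// Fold elements through the aggregateWithBoundary contract:
// allocate: () => Agg, aggregate: (Agg, Out) => (Agg, Boolean), harvest: Agg => Emit.
def runAggregate[Out, Agg, Emit](
    elems: List[Out],
    allocate: () => Agg)(
    aggregate: (Agg, Out) => (Agg, Boolean),
    harvest: Agg => Emit): List[Emit] = {
  val (emitted, last, pending) =
    elems.foldLeft((List.empty[Emit], allocate(), false)) {
      case ((out, agg, _), elem) =>
        val (next, ready) = aggregate(agg, elem)
        if (ready) (harvest(next) :: out, allocate(), false) // emit and reallocate
        else (out, next, true)                               // keep accumulating
    }
  // Upstream completion: the last (partial) aggregate is still emitted.
  val all = if (pending) harvest(last) :: emitted else emitted
  all.reverse
}

// Sum ints, declaring the aggregate complete once the running sum reaches 10:
val sums = runAggregate(List(3, 4, 5, 6, 7), () => 0)(
  (acc, e) => { val s = acc + e; (s, s >= 10) },
  identity)
assert(sums == List(12, 13))
```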
- def alsoTo(that: Graph[SinkShape[Out], _]): Repr[Out]
Attaches the given Sink to this Source, meaning that elements that pass through will also be sent to the Sink.
It is similar to #wireTap but will backpressure instead of dropping elements when the given Sink is not ready.
Emits when element is available and demand exists both from the Sink and the downstream.
Backpressures when downstream or Sink backpressures
Completes when upstream completes
Cancels when downstream or Sink cancels
- Definition Classes
- FlowOps
- def alsoToAll(those: Graph[SinkShape[Out], _]*): Repr[Out]
Attaches the given Sinks to this Source, meaning that elements that pass through will also be sent to the Sinks.
It is similar to #wireTap but will backpressure instead of dropping elements when any of the given Sinks is not ready.
Emits when element is available and demand exists both from the Sinks and the downstream.
Backpressures when downstream or any of the Sinks backpressures
Completes when upstream completes
Cancels when downstream or any of the Sinks cancels
- Definition Classes
- FlowOps
- def alsoToGraph[M](that: Graph[SinkShape[Out], M]): Graph[FlowShape[Out, Out], M]
- Attributes
- protected
- Definition Classes
- FlowOps
- def alsoToMat[Mat2, Mat3](that: Graph[SinkShape[Out], Mat2])(matF: (Mat, Mat2) => Mat3): ReprMat[Out, Mat3]
Attaches the given Sink to this Flow, meaning that elements that pass through will also be sent to the Sink.
It is recommended to use the internally optimized Keep.left and Keep.right combiners where appropriate instead of manually writing functions that pass through one of the values.
- See also
- #alsoTo
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- def ask[S](parallelism: Int)(ref: ActorRef)(implicit timeout: Timeout, tag: ClassTag[S]): Repr[S]
Use the ask pattern to send a request-reply message to the target ref actor. If any of the asks times out it will fail the stream with a pekko.pattern.AskTimeoutException.
Do not forget to include the expected response type in the method call, like so:
flow.ask[ExpectedReply](parallelism = 4)(ref)
otherwise Nothing will be assumed, which is most likely not what you want.
Parallelism limits how many asks can be "in flight" at the same time. Please note that the elements emitted by this operator are in-order with regards to the asks being issued (i.e. same behavior as mapAsync).
The operator fails with a pekko.stream.WatchedActorTerminatedException if the target actor is terminated, or with a java.util.concurrent.TimeoutException in case the ask exceeds the timeout passed in.
Adheres to the ActorAttributes.SupervisionStrategy attribute.
Emits when the futures (in submission order) created by the ask pattern internally are completed
Backpressures when the number of futures reaches the configured parallelism and the downstream backpressures
Completes when upstream completes and all futures have been completed and all elements have been emitted
Fails when the passed in actor terminates, or a timeout is exceeded in any of the asks performed
Cancels when downstream cancels
- Definition Classes
- FlowOps
- Annotations
- @implicitNotFound()
- def ask[S](ref: ActorRef)(implicit timeout: Timeout, tag: ClassTag[S]): Repr[S]
Use the ask pattern to send a request-reply message to the target ref actor. If any of the asks times out it will fail the stream with a pekko.pattern.AskTimeoutException.
Do not forget to include the expected response type in the method call, like so:
flow.ask[ExpectedReply](ref)
otherwise Nothing will be assumed, which is most likely not what you want.
Defaults to a parallelism of 2 messages in flight: while one ask message may be being worked on, the second one can already be in the mailbox, so sending the second one a bit earlier than when the first ask has replied maintains a slightly healthier throughput.
Similar to the plain ask pattern, the target actor is allowed to reply with org.apache.pekko.util.Status. An org.apache.pekko.util.Status#Failure will cause the operator to fail with the cause carried in the Failure message.
The operator fails with a pekko.stream.WatchedActorTerminatedException if the target actor is terminated.
Adheres to the ActorAttributes.SupervisionStrategy attribute.
Emits when the futures (in submission order) created by the ask pattern internally are completed
Backpressures when the number of futures reaches the configured parallelism and the downstream backpressures
Completes when upstream completes and all futures have been completed and all elements have been emitted
Fails when the passed in actor terminates, or a timeout is exceeded in any of the asks performed
Cancels when downstream cancels
- Definition Classes
- FlowOps
- Annotations
- @implicitNotFound()
- def backpressureTimeout(timeout: FiniteDuration): Repr[Out]
If the time between the emission of an element and the following downstream demand exceeds the provided timeout, the stream is failed with a scala.concurrent.TimeoutException. The timeout is checked periodically, so the resolution of the check is one period (equal to the timeout value).
Emits when upstream emits an element
Backpressures when downstream backpressures
Completes when upstream completes or fails if timeout elapses between element emission and downstream demand.
Cancels when downstream cancels
- Definition Classes
- FlowOps
- def batch[S](max: Long, seed: (Out) => S)(aggregate: (S, Out) => S): Repr[S]
Allows a faster upstream to progress independently of a slower subscriber by aggregating elements into batches until the subscriber is ready to accept them. For example a batch step might store received elements in an array up to the allowed max limit if the upstream publisher is faster.
This only rolls up elements if the upstream is faster, but if the downstream is faster it will not duplicate elements.
Adheres to the ActorAttributes.SupervisionStrategy attribute.
Emits when downstream stops backpressuring and there is an aggregated element available
Backpressures when there are max batched elements and 1 pending element and downstream backpressures
Completes when upstream completes and there is no batched/pending element waiting
Cancels when downstream cancels
- max
maximum number of elements to batch before backpressuring upstream (must be positive non-zero)
- seed
Provides the first state for a batched value using the first unconsumed element as a start
- aggregate
Takes the currently batched value and the current pending element to produce a new aggregate
- Definition Classes
- FlowOps
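The seed/aggregate pair described above can be exercised with plain functions. Here is how at most max pending elements would be rolled into one batch while downstream is busy; this is a semantics sketch with collections (batchPending is a hypothetical helper, not the operator itself):

```scala
// Roll pending upstream elements into one batch of at most `max` elements:
// the first element goes through `seed`, the rest through `aggregate`.
def batchPending[Out, S](pending: List[Out], max: Long)(
    seed: Out => S)(aggregate: (S, Out) => S): Option[S] =
  pending.take(max.toInt) match {
    case Nil          => None
    case head :: tail => Some(tail.foldLeft(seed(head))(aggregate))
  }

// Batch ints into a Vector, at most 3 elements per batch; the 4th stays pending:
val batch = batchPending(List(1, 2, 3, 4), max = 3)(Vector(_))(_ :+ _)
assert(batch == Some(Vector(1, 2, 3)))
```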
- def batchWeighted[S](max: Long, costFn: (Out) => Long, seed: (Out) => S)(aggregate: (S, Out) => S): Repr[S]
Allows a faster upstream to progress independently of a slower subscriber by aggregating elements into batches until the subscriber is ready to accept them. For example a batch step might concatenate ByteString elements up to the allowed max limit if the upstream publisher is faster.
This element only rolls up elements if the upstream is faster, but if the downstream is faster it will not duplicate elements.
Batching will apply for all elements, even if a single element cost is greater than the total allowed limit. In this case, previously batched elements will be emitted, then the "heavy" element will be emitted (after being applied with the seed function) without batching further elements with it, and then the rest of the incoming elements are batched.
Emits when downstream stops backpressuring and there is a batched element available
Backpressures when there are max weighted batched elements + 1 pending element and downstream backpressures
Completes when upstream completes and there is no batched/pending element waiting
Cancels when downstream cancels
See also FlowOps.conflateWithSeed, FlowOps.batch
- max
maximum weight of elements to batch before backpressuring upstream (must be positive non-zero)
- costFn
a function to compute a single element weight
- seed
Provides the first state for a batched value using the first unconsumed element as a start
- aggregate
Takes the currently batched value and the current pending element to produce a new batch
- Definition Classes
- FlowOps
- def buffer(size: Int, overflowStrategy: OverflowStrategy): Repr[Out]
Adds a fixed size buffer in the flow that allows storing elements from a faster upstream until it becomes full. Depending on the defined pekko.stream.OverflowStrategy it might drop elements or backpressure the upstream if there is no space available.
Emits when downstream stops backpressuring and there is a pending element in the buffer
Backpressures when downstream backpressures or depending on OverflowStrategy:
- Backpressure - backpressures when buffer is full
- DropHead, DropTail, DropBuffer - never backpressures
- Fail - fails the stream if buffer gets full
Completes when upstream completes and buffered elements have been drained
Cancels when downstream cancels
- size
The size of the buffer in element count
- overflowStrategy
Strategy that is used when incoming elements cannot fit inside the buffer
- Definition Classes
- FlowOps
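The overflow strategies listed above can be illustrated with an immutable queue. This models only what happens when an element arrives at a full buffer, not the actual operator; offer and the strategy names as strings are illustrative stand-ins:

```scala
import scala.collection.immutable.Queue

// What happens when `elem` arrives at a buffer of capacity `size`, per strategy:
def offer[A](buf: Queue[A], size: Int, elem: A, strategy: String): Queue[A] =
  if (buf.length < size) buf.enqueue(elem)
  else strategy match {
    case "dropHead"   => buf.tail.enqueue(elem) // drop the oldest buffered element
    case "dropTail"   => buf.init.enqueue(elem) // drop the newest buffered element
    case "dropNew"    => buf                    // discard the incoming element
    case "dropBuffer" => Queue(elem)            // drop the whole buffer
    case "fail"       => throw new IllegalStateException("buffer overflow")
    case _            => buf // "backpressure": upstream is slowed instead
  }

val full = Queue(1, 2, 3)
assert(offer(full, 3, 4, "dropHead") == Queue(2, 3, 4))
assert(offer(full, 3, 4, "dropTail") == Queue(1, 2, 4))
assert(offer(full, 3, 4, "dropBuffer") == Queue(4))
```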
- def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @HotSpotIntrinsicCandidate() @native()
- def collect[T](pf: PartialFunction[Out, T]): Repr[T]
Transform this stream by applying the given partial function to each of the elements on which the function is defined as they pass through this processing step. Non-matching elements are filtered out.
Adheres to the ActorAttributes.SupervisionStrategy attribute.
Emits when the provided partial function is defined for the element
Backpressures when the partial function is defined for the element and downstream backpressures
Completes when upstream completes
Cancels when downstream cancels
- Definition Classes
- FlowOps
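Scala's standard collections have the same collect semantics, so the partial-function behavior can be checked without a stream. The Msg/Text/Binary types here are illustrative only:

```scala
// The same partial function you would pass to Flow[...].collect, applied to a List:
sealed trait Msg
final case class Text(body: String) extends Msg
final case class Binary(bytes: Array[Byte]) extends Msg

val bodies: List[String] =
  List(Text("hi"), Binary(Array[Byte](1)), Text("bye")).collect {
    case Text(body) => body // non-matching (Binary) elements are filtered out
  }
assert(bodies == List("hi", "bye"))
```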
- def collectType[T](implicit tag: ClassTag[T]): Repr[T]
Transform this stream by passing through only those elements that are instances of the provided type as they pass through this processing step. Non-matching elements are filtered out.
Adheres to the ActorAttributes.SupervisionStrategy attribute.
Emits when the element is an instance of the provided type
Backpressures when the element is an instance of the provided type and downstream backpressures
Completes when upstream completes
Cancels when downstream cancels
- Definition Classes
- FlowOps
- def completionTimeout(timeout: FiniteDuration): Repr[Out]
If the completion of the stream does not happen until the provided timeout, the stream is failed with a scala.concurrent.TimeoutException.
Emits when upstream emits an element
Backpressures when downstream backpressures
Completes when upstream completes or fails if timeout elapses before upstream completes
Cancels when downstream cancels
- Definition Classes
- FlowOps
- def concat[U >: Out, Mat2](that: Graph[SourceShape[U], Mat2]): Repr[U]
Concatenate the given Source to this Flow, meaning that once this Flow’s input is exhausted and all result elements have been generated, the Source’s elements will be produced.
Note that the Source is materialized together with this Flow and is "detached", meaning it will in effect behave as a one element buffer in front of both the sources, eagerly demanding an element on start (so it cannot be combined with Source.lazy to defer materialization of that).
The second source is then kept from producing elements by asserting back-pressure until its time comes.
When needing a concat operator that is not detached, use #concatLazy.
If this Flow gets an upstream error, no elements from the given Source will be pulled.
Emits when element is available from current stream or from the given Source when current is completed
Backpressures when downstream backpressures
Completes when given Source completes
Cancels when downstream cancels
- Definition Classes
- FlowOps
- def concatAllLazy[U >: Out](those: Graph[SourceShape[U], _]*): Repr[U]
Concatenate the given Sources to this Flow, meaning that once this Flow’s input is exhausted and all result elements have been generated, the Sources' elements will be produced.
Note that the Sources are materialized together with this Flow. If lazy materialization is what is needed, the operator can be combined with for example Source.lazySource to defer materialization of that until the time when this source completes.
The second source is then kept from producing elements by asserting back-pressure until its time comes.
For a concat operator that is detached, use #concat.
If this Flow gets an upstream error, no elements from the given Sources will be pulled.
Emits when element is available from current stream or from the given Sources when current is completed
Backpressures when downstream backpressures
Completes when all the given Sources complete
Cancels when downstream cancels
- Definition Classes
- FlowOps
- def concatGraph[U >: Out, Mat2](that: Graph[SourceShape[U], Mat2], detached: Boolean): Graph[FlowShape[Out, U], Mat2]
- Attributes
- protected
- Definition Classes
- FlowOps
- def concatLazy[U >: Out, Mat2](that: Graph[SourceShape[U], Mat2]): Repr[U]
Concatenate the given Source to this Flow, meaning that once this Flow’s input is exhausted and all result elements have been generated, the Source’s elements will be produced.
Note that the Source is materialized together with this Flow. If lazy materialization is what is needed, the operator can be combined with for example Source.lazySource to defer materialization of that until the time when this source completes.
The second source is then kept from producing elements by asserting back-pressure until its time comes.
For a concat operator that is detached, use #concat.
If this Flow gets an upstream error, no elements from the given Source will be pulled.
Emits when element is available from current stream or from the given Source when current is completed
Backpressures when downstream backpressures
Completes when given Source completes
Cancels when downstream cancels
- Definition Classes
- FlowOps
- def concatLazyMat[U >: Out, Mat2, Mat3](that: Graph[SourceShape[U], Mat2])(matF: (Mat, Mat2) => Mat3): ReprMat[U, Mat3]
Concatenate the given Source to this Flow, meaning that once this Flow’s input is exhausted and all result elements have been generated, the Source’s elements will be produced.
Note that the Source is materialized together with this Flow. If lazy materialization is what is needed, the operator can be combined with Source.lazy to defer materialization of that.
The second source is then kept from producing elements by asserting back-pressure until its time comes.
For a concat operator that is detached, use #concatMat.
It is recommended to use the internally optimized Keep.left and Keep.right combiners where appropriate instead of manually writing functions that pass through one of the values.
- See also
- #concatLazy
- def concatMat[U >: Out, Mat2, Mat3](that: Graph[SourceShape[U], Mat2])(matF: (Mat, Mat2) => Mat3): ReprMat[U, Mat3]
Concatenate the given Source to this Flow, meaning that once this Flow’s input is exhausted and all result elements have been generated, the Source’s elements will be produced.
Note that the Source is materialized together with this Flow and is "detached", meaning it will in effect behave as a one element buffer in front of both the sources, eagerly demanding an element on start (so it cannot be combined with Source.lazy to defer materialization of that).
The second source is then kept from producing elements by asserting back-pressure until its time comes.
When needing a concat operator that is not detached, use #concatLazyMat.
It is recommended to use the internally optimized Keep.left and Keep.right combiners where appropriate instead of manually writing functions that pass through one of the values.
- See also
- #concat
- def conflate[O2 >: Out](aggregate: (O2, O2) => O2): Repr[O2]
Allows a faster upstream to progress independently of a slower subscriber by conflating elements into a summary until the subscriber is ready to accept them. For example a conflate step might average incoming numbers if the upstream publisher is faster.
This version of conflate does not change the output type of the stream. See FlowOps.conflateWithSeed for a more flexible version that can take a seed function and transform elements while rolling up.
This element only rolls up elements if the upstream is faster, but if the downstream is faster it will not duplicate elements.
Adheres to the ActorAttributes.SupervisionStrategy attribute.
Emits when downstream stops backpressuring and there is a conflated element available
Backpressures when never
Completes when upstream completes
Cancels when downstream cancels
- aggregate
Takes the currently aggregated value and the current pending element to produce a new aggregate. See also FlowOps.conflateWithSeed, FlowOps.limit, FlowOps.limitWeighted, FlowOps.batch, FlowOps.batchWeighted.
- Definition Classes
- FlowOps
- def conflateWithSeed[S](seed: (Out) => S)(aggregate: (S, Out) => S): Repr[S]
Allows a faster upstream to progress independently of a slower subscriber by conflating elements into a summary until the subscriber is ready to accept them. For example a conflate step might average incoming numbers if the upstream publisher is faster.
This version of conflate allows deriving a seed from the first element and changing the aggregated type to be different than the input type. See FlowOps.conflate for a simpler version that does not change types.
This element only rolls up elements if the upstream is faster, but if the downstream is faster it will not duplicate elements.
Adheres to the ActorAttributes.SupervisionStrategy attribute.
Emits when downstream stops backpressuring and there is a conflated element available
Backpressures when never
Completes when upstream completes
Cancels when downstream cancels
- seed
Provides the first state for a conflated value using the first unconsumed element as a start
- aggregate
Takes the currently aggregated value and the current pending element to produce a new aggregate. See also FlowOps.conflate, FlowOps.limit, FlowOps.limitWeighted, FlowOps.batch, FlowOps.batchWeighted.
- Definition Classes
- FlowOps
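The seed/aggregate contract of conflateWithSeed can be sketched with plain collections: while downstream is busy, all pending elements are folded into one summary, the first via seed and the rest via aggregate. conflatePending is a hypothetical helper, not the operator itself:

```scala
// Conflate the elements that arrived while downstream was busy into one summary.
def conflatePending[Out, S](pending: List[Out])(
    seed: Out => S)(aggregate: (S, Out) => S): Option[S] =
  pending match {
    case Nil          => None
    case head :: tail => Some(tail.foldLeft(seed(head))(aggregate))
  }

// Average the numbers that queued up: seed to (sum, count), divide on emit.
val state = conflatePending(List(2.0, 4.0, 6.0))(x => (x, 1)) {
  case ((sum, n), x) => (sum + x, n + 1)
}
val average = state.map { case (sum, n) => sum / n }
assert(average == Some(4.0))
```

Note how the aggregated type (Double, Int) differs from the input type Double, which is exactly what conflateWithSeed permits and plain conflate does not.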
- def delay(of: FiniteDuration, strategy: DelayOverflowStrategy = DelayOverflowStrategy.dropTail): Repr[Out]
Shifts elements emission in time by a specified amount. It allows storing elements in an internal buffer while waiting for the next element to be emitted. Depending on the defined pekko.stream.DelayOverflowStrategy it might drop elements or backpressure the upstream if there is no space available in the buffer.
Delay precision is 10ms to avoid unnecessary timer scheduling cycles.
The internal buffer has a default capacity of 16. You can set the buffer size by calling addAttributes(inputBuffer).
Emits when there is a pending element in the buffer and the configured time for this element has elapsed
- EmitEarly - the strategy does not wait to emit an element if the buffer is full
Backpressures when, depending on OverflowStrategy:
- Backpressure - backpressures when buffer is full
- DropHead, DropTail, DropBuffer - never backpressures
- Fail - fails the stream if buffer gets full
Completes when upstream completes and buffered elements have been drained
Cancels when downstream cancels
- of
time to shift all messages
- strategy
Strategy that is used when incoming elements cannot fit inside the buffer
- Definition Classes
- FlowOps
- def delayWith(delayStrategySupplier: () => DelayStrategy[Out], overFlowStrategy: DelayOverflowStrategy): Repr[Out]
Shifts elements emission in time by an amount individually determined through the delay strategy. It allows storing elements in an internal buffer while waiting for the next element to be emitted. Depending on the defined pekko.stream.DelayOverflowStrategy it might drop elements or backpressure the upstream if there is no space available in the buffer.
The delay for each element is determined by invoking DelayStrategy.nextDelay(elem: T): FiniteDuration.
Note that elements are not re-ordered: if an element is given a delay much shorter than its predecessor, it will still have to wait for the preceding element before being emitted. It is also important to notice that scaladsl.DelayStrategy can be stateful.
Delay precision is 10ms to avoid unnecessary timer scheduling cycles.
The internal buffer has a default capacity of 16. You can set the buffer size by calling addAttributes(inputBuffer).
Emits when there is a pending element in the buffer and the configured time for this element has elapsed
- EmitEarly - the strategy does not wait to emit an element if the buffer is full
Backpressures when, depending on OverflowStrategy:
- Backpressure - backpressures when buffer is full
- DropHead, DropTail, DropBuffer - never backpressures
- Fail - fails the stream if buffer gets full
Completes when upstream completes and buffered elements have been drained
Cancels when downstream cancels
- delayStrategySupplier
creates new DelayStrategy object for each materialization
- overFlowStrategy
Strategy that is used when incoming elements cannot fit inside the buffer
- Definition Classes
- FlowOps
- def detach: Repr[Out]
Detaches upstream demand from downstream demand without detaching the stream rates; in other words acts like a buffer of size 1.
Emits when upstream emits an element
Backpressures when downstream backpressures
Completes when upstream completes
Cancels when downstream cancels
- Definition Classes
- FlowOps
- def divertTo(that: Graph[SinkShape[Out], _], when: (Out) => Boolean): Repr[Out]
Attaches the given Sink to this Flow, meaning that elements will be sent to the Sink instead of being passed through if the predicate when returns true.
Emits when an element is available from the input and the chosen output has demand
Backpressures when the currently chosen output back-pressures
Completes when upstream completes and no output is pending
Cancels when any of the downstreams cancel
- Definition Classes
- FlowOps
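A minimal sketch of divertTo, assuming pekko-streams on the classpath (values and the Sink.ignore destination are illustrative assumptions):

```scala
import scala.concurrent.Await
import scala.concurrent.duration._
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ Sink, Source }

implicit val system: ActorSystem = ActorSystem("divert-demo") // hypothetical demo system

// Elements for which the predicate returns true go to the given Sink;
// only the others continue downstream.
val mainline = Await.result(
  Source(1 to 6)
    .divertTo(Sink.ignore, _ % 2 == 0) // evens are diverted away
    .runWith(Sink.seq),
  3.seconds)
// mainline == Seq(1, 3, 5)

Await.result(system.terminate(), 3.seconds)
```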
- def divertToGraph[M](that: Graph[SinkShape[Out], M], when: (Out) => Boolean): Graph[FlowShape[Out, Out], M]
- Attributes
- protected
- Definition Classes
- FlowOps
- def divertToMat[Mat2, Mat3](that: Graph[SinkShape[Out], Mat2], when: (Out) => Boolean)(matF: (Mat, Mat2) => Mat3): ReprMat[Out, Mat3]
Attaches the given Sink to this Flow, meaning that elements will be sent to the Sink instead of being passed through if the predicate when returns true.
Attaches the given Sink to this Flow, meaning that elements will be sent to the Sink instead of being passed through if the predicate when returns true.
- See also
#divertTo. It is recommended to use the internally optimized Keep.left and Keep.right combiners where appropriate instead of manually writing functions that pass through one of the values.
- def drop(n: Long): Repr[Out]
Discard the given number of elements at the beginning of the stream.
Discard the given number of elements at the beginning of the stream. No elements will be dropped if n is zero or negative.
Emits when the specified number of elements has already been dropped
Backpressures when the specified number of elements has been dropped and downstream backpressures
Completes when upstream completes
Cancels when downstream cancels
- Definition Classes
- FlowOps
- def dropWhile(p: (Out) => Boolean): Repr[Out]
Discard elements at the beginning of the stream while predicate is true.
Discard elements at the beginning of the stream while predicate is true. All elements will be taken after the predicate returns false for the first time.
Adheres to the ActorAttributes.SupervisionStrategy attribute.
Emits when predicate returned false and for all following stream elements
Backpressures when predicate returned false and downstream backpressures
Completes when upstream completes
Cancels when downstream cancels
- Definition Classes
- FlowOps
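A short sketch of dropWhile's "only at the beginning" behavior, assuming pekko-streams on the classpath (the input values are illustrative):

```scala
import scala.concurrent.Await
import scala.concurrent.duration._
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ Sink, Source }

implicit val system: ActorSystem = ActorSystem("dropwhile-demo") // hypothetical demo system

// Elements are dropped only while the predicate holds at the start;
// once it first returns false, every later element passes through.
val rest = Await.result(
  Source(List(1, 2, 3, 1)).dropWhile(_ < 3).runWith(Sink.seq),
  3.seconds)
// rest == Seq(3, 1): the trailing 1 is NOT dropped

Await.result(system.terminate(), 3.seconds)
```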
- def dropWithin(d: FiniteDuration): Repr[Out]
Discard the elements received within the given duration at beginning of the stream.
Discard the elements received within the given duration at beginning of the stream.
Emits when the specified time elapsed and a new upstream element arrives
Backpressures when downstream backpressures
Completes when upstream completes
Cancels when downstream cancels
- Definition Classes
- FlowOps
- def ensuring(cond: (FlowOpsMat[Out, Mat]) => Boolean, msg: => Any): FlowOpsMat[Out, Mat]
- Implicit
- This member is added by an implicit conversion from FlowOpsMat[Out, Mat] to Ensuring[FlowOpsMat[Out, Mat]] performed by method Ensuring in scala.Predef.
- Definition Classes
- Ensuring
- def ensuring(cond: (FlowOpsMat[Out, Mat]) => Boolean): FlowOpsMat[Out, Mat]
- Implicit
- This member is added by an implicit conversion from FlowOpsMat[Out, Mat] to Ensuring[FlowOpsMat[Out, Mat]] performed by method Ensuring in scala.Predef.
- Definition Classes
- Ensuring
- def ensuring(cond: Boolean, msg: => Any): FlowOpsMat[Out, Mat]
- Implicit
- This member is added by an implicit conversion from FlowOpsMat[Out, Mat] to Ensuring[FlowOpsMat[Out, Mat]] performed by method Ensuring in scala.Predef.
- Definition Classes
- Ensuring
- def ensuring(cond: Boolean): FlowOpsMat[Out, Mat]
- Implicit
- This member is added by an implicit conversion from FlowOpsMat[Out, Mat] to Ensuring[FlowOpsMat[Out, Mat]] performed by method Ensuring in scala.Predef.
- Definition Classes
- Ensuring
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def equals(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef → Any
- def expand[U](expander: (Out) => Iterator[U]): Repr[U]
Allows a faster downstream to progress independently of a slower upstream by extrapolating elements from an older element until new element comes from the upstream.
Allows a faster downstream to progress independently of a slower upstream by extrapolating elements from an older element until new element comes from the upstream. For example an expand step might repeat the last element for the subscriber until it receives an update from upstream.
This element will never "drop" upstream elements as all elements go through at least one extrapolation step. This means that if the upstream is actually faster than the downstream it will be backpressured by the downstream subscriber.
Expand does not support pekko.stream.Supervision.Restart and pekko.stream.Supervision.Resume. Exceptions from the seed function will complete the stream with failure.
Emits when downstream stops backpressuring
Backpressures when downstream backpressures or iterator runs empty
Completes when upstream completes
Cancels when downstream cancels
- expander
Takes the current extrapolation state to produce an output element and the next extrapolation state.
- Definition Classes
- FlowOps
- See also
#extrapolate for a version that always preserves the original element and allows for an initial "startup" element.
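A small sketch of expand repeating the latest element for a faster downstream, assuming pekko-streams on the classpath (the take(10) bound and values are illustrative; exactly how often each element is repeated depends on timing):

```scala
import scala.concurrent.Await
import scala.concurrent.duration._
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ Sink, Source }

implicit val system: ActorSystem = ActorSystem("expand-demo") // hypothetical demo system

// Extrapolate by repeating each element; a fast downstream may see
// the same element several times, but never a fabricated one.
val out = Await.result(
  Source(1 to 3)
    .expand(i => Iterator.continually(i))
    .take(10)
    .runWith(Sink.seq),
  3.seconds)
// out.head == 1 and every element is one of the originals

Await.result(system.terminate(), 3.seconds)
```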
- def extrapolate[U >: Out](extrapolator: (U) => Iterator[U], initial: Option[U] = None): Repr[U]
Allows a faster downstream to progress independent of a slower upstream.
Allows a faster downstream to progress independent of a slower upstream.
This is achieved by introducing "extrapolated" elements - based on those from upstream - whenever downstream signals demand.
Extrapolate does not support pekko.stream.Supervision.Restart and pekko.stream.Supervision.Resume. Exceptions from the extrapolate function will complete the stream with failure.
Emits when downstream stops backpressuring, AND EITHER upstream emits OR initial element is present OR extrapolate is non-empty and applicable
Backpressures when downstream backpressures or current extrapolate runs empty
Completes when upstream completes and current extrapolate runs empty
Cancels when downstream cancels
- extrapolator
takes the current upstream element and provides a sequence of "extrapolated" elements based on the original, to be emitted in case downstream signals demand.
- initial
the initial element to be emitted, in case upstream is able to stall the entire stream.
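A sketch of extrapolate, assuming pekko-streams on the classpath; unlike expand, each original element is always emitted before any extrapolated copies (bounds and values below are illustrative):

```scala
import scala.concurrent.Await
import scala.concurrent.duration._
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ Sink, Source }

implicit val system: ActorSystem = ActorSystem("extrapolate-demo") // hypothetical demo system

// The original element is passed through, then repeated copies fill
// downstream demand until the next upstream element arrives.
val out = Await.result(
  Source(1 to 3)
    .extrapolate(i => Iterator.continually(i))
    .take(10)
    .runWith(Sink.seq),
  3.seconds)
// out.head == 1 and every element is one of the originals

Await.result(system.terminate(), 3.seconds)
```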
- def filter(p: (Out) => Boolean): Repr[Out]
Only pass on those elements that satisfy the given predicate.
Only pass on those elements that satisfy the given predicate.
Adheres to the ActorAttributes.SupervisionStrategy attribute.
Emits when the given predicate returns true for the element
Backpressures when the given predicate returns true for the element and downstream backpressures
Completes when upstream completes
Cancels when downstream cancels
- Definition Classes
- FlowOps
- def filterNot(p: (Out) => Boolean): Repr[Out]
Only pass on those elements that do NOT satisfy the given predicate.
Only pass on those elements that do NOT satisfy the given predicate.
Adheres to the ActorAttributes.SupervisionStrategy attribute.
Emits when the given predicate returns false for the element
Backpressures when the given predicate returns false for the element and downstream backpressures
Completes when upstream completes
Cancels when downstream cancels
- Definition Classes
- FlowOps
- def flatMapConcat[T, M](f: (Out) => Graph[SourceShape[T], M]): Repr[T]
Transform each input element into a Source of output elements that is then flattened into the output stream by concatenation, fully consuming one Source after the other.
Transform each input element into a Source of output elements that is then flattened into the output stream by concatenation, fully consuming one Source after the other.
Emits when a currently consumed substream has an element available
Backpressures when downstream backpressures
Completes when upstream completes and all consumed substreams complete
Cancels when downstream cancels
- Definition Classes
- FlowOps
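A minimal sketch of flatMapConcat's ordering guarantee, assuming pekko-streams on the classpath (the per-element substreams are illustrative):

```scala
import scala.concurrent.Await
import scala.concurrent.duration._
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ Sink, Source }

implicit val system: ActorSystem = ActorSystem("flatmapconcat-demo") // hypothetical demo system

// Each substream is fully consumed before the next one starts,
// so the output is strictly ordered.
val out = Await.result(
  Source(1 to 3)
    .flatMapConcat(i => Source(List(i, i * 10)))
    .runWith(Sink.seq),
  3.seconds)
// out == Seq(1, 10, 2, 20, 3, 30)

Await.result(system.terminate(), 3.seconds)
```

With flatMapMerge the same substreams would be consumed concurrently, so elements of different substreams could interleave.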
- def flatMapMerge[T, M](breadth: Int, f: (Out) => Graph[SourceShape[T], M]): Repr[T]
Transform each input element into a Source of output elements that is then flattened into the output stream by merging, where at most breadth substreams are being consumed at any given time.
Transform each input element into a Source of output elements that is then flattened into the output stream by merging, where at most breadth substreams are being consumed at any given time.
Emits when a currently consumed substream has an element available
Backpressures when downstream backpressures
Completes when upstream completes and all consumed substreams complete
Cancels when downstream cancels
- Definition Classes
- FlowOps
- def flatMapPrefix[Out2, Mat2](n: Int)(f: (Seq[Out]) => Flow[Out, Out2, Mat2]): Repr[Out2]
Takes up to n elements from the stream (less than n only if the upstream completes before emitting n elements), then applies f to these elements in order to obtain a flow; this flow is then materialized and the rest of the input is processed by it (similar to via).
Takes up to n elements from the stream (less than n only if the upstream completes before emitting n elements), then applies f to these elements in order to obtain a flow; this flow is then materialized and the rest of the input is processed by it (similar to via). This method returns a flow consuming the rest of the stream and producing the materialized flow's output.
Emits when the materialized flow emits. Notice the first n elements are buffered internally before materializing the flow and connecting it to the rest of the upstream - producing elements at its own discretion (might 'swallow' or multiply elements).
Backpressures when the materialized flow backpressures
Completes when the materialized flow completes. If upstream completes before producing n elements, f will be applied with the provided elements, the resulting flow will be materialized and signalled for upstream completion; it can then complete or continue to emit elements at its own discretion.
Cancels when the materialized flow cancels. When downstream cancels before materialization of the nested flow, the operator's default behavior is to cancel immediately; this behavior can be controlled by setting the pekko.stream.Attributes.NestedMaterializationCancellationPolicy attribute on the flow. When this attribute is configured to true, downstream cancellation is delayed until the nested flow's materialization, which is then immediately cancelled (with the original cancellation cause).
- n
the number of elements to accumulate before materializing the downstream flow.
- f
a function that produces the downstream flow based on the upstream's prefix.
- Definition Classes
- FlowOps
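A small sketch of flatMapPrefix, assuming pekko-streams on the classpath; note the prefix elements themselves are consumed to build the flow and are not emitted downstream (values are illustrative):

```scala
import scala.concurrent.Await
import scala.concurrent.duration._
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ Flow, Sink, Source }

implicit val system: ActorSystem = ActorSystem("flatmapprefix-demo") // hypothetical demo system

// The first two elements (1, 2) parameterize the flow applied to the rest.
val out = Await.result(
  Source(1 to 5)
    .flatMapPrefix(2)(prefix => Flow[Int].map(_ * prefix.sum)) // prefix = Seq(1, 2)
    .runWith(Sink.seq),
  3.seconds)
// out == Seq(9, 12, 15): the remaining 3, 4, 5 multiplied by 1 + 2 = 3

Await.result(system.terminate(), 3.seconds)
```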
- def flatMapPrefixMat[Out2, Mat2, Mat3](n: Int)(f: (Seq[Out]) => Flow[Out, Out2, Mat2])(matF: (Mat, Future[Mat2]) => Mat3): ReprMat[Out2, Mat3]
Mat version of #flatMapPrefix, this method gives access to a future materialized value of the downstream flow.
Mat version of #flatMapPrefix, this method gives access to a future materialized value of the downstream flow. See #flatMapPrefix for details.
- def fold[T](zero: T)(f: (T, Out) => T): Repr[T]
Similar to scan but only emits its result when the upstream completes, after which it also completes.
Similar to scan but only emits its result when the upstream completes, after which it also completes. Applies the given function towards its current and next value, yielding the next current value.
If the function f throws an exception and the supervision decision is pekko.stream.Supervision.Restart, the current value starts at zero again and the stream will continue.
Adheres to the ActorAttributes.SupervisionStrategy attribute.
Note that the zero value must be immutable.
Emits when upstream completes
Backpressures when downstream backpressures
Completes when upstream completes
Cancels when downstream cancels
See also FlowOps.scan
- Definition Classes
- FlowOps
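A minimal sketch of fold's single, end-of-stream emission, assuming pekko-streams on the classpath (values are illustrative):

```scala
import scala.concurrent.Await
import scala.concurrent.duration._
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ Sink, Source }

implicit val system: ActorSystem = ActorSystem("fold-demo") // hypothetical demo system

// Unlike scan, fold emits nothing until upstream completes,
// then emits exactly one element: the final accumulated value.
val sum = Await.result(
  Source(1 to 4).fold(0)(_ + _).runWith(Sink.head),
  3.seconds)
// sum == 10

Await.result(system.terminate(), 3.seconds)
```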
- def foldAsync[T](zero: T)(f: (T, Out) => Future[T]): Repr[T]
Similar to fold but with an asynchronous function.
Similar to fold but with an asynchronous function. Applies the given function towards its current and next value, yielding the next current value.
Adheres to the ActorAttributes.SupervisionStrategy attribute.
If the function f returns a failure and the supervision decision is pekko.stream.Supervision.Restart, the current value starts at zero again and the stream will continue.
Note that the zero value must be immutable.
Emits when upstream completes
Backpressures when downstream backpressures
Completes when upstream completes
Cancels when downstream cancels
See also FlowOps.fold
- Definition Classes
- FlowOps
- final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @HotSpotIntrinsicCandidate() @native()
- def groupBy[K](maxSubstreams: Int, f: (Out) => K): SubFlow[Out, Mat, Repr, Closed]
This operation demultiplexes the incoming stream into separate output streams, one for each element key.
This operation demultiplexes the incoming stream into separate output streams, one for each element key. The key is computed for each element using the given function. When a new key is encountered for the first time a new substream is opened and subsequently fed with all elements belonging to that key.
WARNING: The operator keeps track of all keys of streams that have already been closed. If you expect an infinite number of keys this can cause memory issues. Elements belonging to those keys are drained directly and not sent to the substream.
- def groupBy[K](maxSubstreams: Int, f: (Out) => K, allowClosedSubstreamRecreation: Boolean): SubFlow[Out, Mat, Repr, Closed]
This operation demultiplexes the incoming stream into separate output streams, one for each element key.
This operation demultiplexes the incoming stream into separate output streams, one for each element key. The key is computed for each element using the given function. When a new key is encountered for the first time a new substream is opened and subsequently fed with all elements belonging to that key.
WARNING: If allowClosedSubstreamRecreation is set to false (default behavior) the operator keeps track of all keys of streams that have already been closed. If you expect an infinite number of keys this can cause memory issues. Elements belonging to those keys are drained directly and not sent to the substream.
Note: If allowClosedSubstreamRecreation is set to true, substream completion and incoming elements are subject to race conditions. If elements arrive for a stream that is in the process of closing, these elements might get lost.
The object returned from this method is not a normal Source or Flow, it is a SubFlow. This means that after this operator all transformations are applied to all encountered substreams in the same fashion. Substream mode is exited either by closing the substream (i.e. connecting it to a Sink) or by merging the substreams back together; see the to and mergeBack methods on SubFlow for more information.
It is important to note that the substreams also propagate back-pressure as any other stream, which means that blocking one substream will block the groupBy operator itself (and thereby all substreams) once all internal or explicit buffers are filled.
If the group by function f throws an exception and the supervision decision is pekko.stream.Supervision.Stop, the stream and substreams will be completed with failure.
If the group by function f throws an exception and the supervision decision is pekko.stream.Supervision.Resume or pekko.stream.Supervision.Restart, the element is dropped and the stream and substreams continue.
Function f MUST NOT return null. This will throw an exception and trigger the supervision decision mechanism.
Adheres to the ActorAttributes.SupervisionStrategy attribute.
Emits when an element for which the grouping function returns a group that has not yet been created. Emits the new group
Backpressures when there is an element pending for a group whose substream backpressures
Completes when upstream completes
Cancels when downstream cancels and all substreams cancel
- maxSubstreams
configures the maximum number of substreams (keys) that are supported; if more distinct keys are encountered then the stream fails
- f
computes the key for each element
- allowClosedSubstreamRecreation
enables recreation of already closed substreams if elements with their corresponding keys arrive after completion
- Definition Classes
- FlowOps
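A sketch of groupBy with the substreams merged back together, assuming pekko-streams on the classpath (the parity key and fold per substream are illustrative; merge order between substreams is not guaranteed):

```scala
import scala.concurrent.Await
import scala.concurrent.duration._
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ Sink, Source }

implicit val system: ActorSystem = ActorSystem("groupby-demo") // hypothetical demo system

// Demultiplex by parity into two substreams, sum each substream,
// then exit substream mode by merging the results back.
val sums = Await.result(
  Source(1 to 10)
    .groupBy(maxSubstreams = 2, _ % 2)
    .fold(0)(_ + _)
    .mergeSubstreams
    .runWith(Sink.seq),
  3.seconds)
// sums.toSet == Set(25, 30): odds sum to 25, evens to 30

Await.result(system.terminate(), 3.seconds)
```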
- def grouped(n: Int): Repr[Seq[Out]]
Chunk up this stream into groups of the given size, with the last group possibly smaller than requested due to end-of-stream.
Chunk up this stream into groups of the given size, with the last group possibly smaller than requested due to end-of-stream.
n must be positive, otherwise IllegalArgumentException is thrown.
Emits when the specified number of elements have been accumulated or upstream completed
Backpressures when a group has been assembled and downstream backpressures
Completes when upstream completes
Cancels when downstream cancels
- Definition Classes
- FlowOps
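A one-line sketch of grouped, including the smaller final group, assuming pekko-streams on the classpath (values are illustrative):

```scala
import scala.concurrent.Await
import scala.concurrent.duration._
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ Sink, Source }

implicit val system: ActorSystem = ActorSystem("grouped-demo") // hypothetical demo system

// Seven elements in groups of three: the last group holds the remainder.
val chunks = Await.result(
  Source(1 to 7).grouped(3).runWith(Sink.seq),
  3.seconds)
// chunks == Seq(Seq(1, 2, 3), Seq(4, 5, 6), Seq(7))

Await.result(system.terminate(), 3.seconds)
```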
- def groupedWeighted(minWeight: Long)(costFn: (Out) => Long): Repr[Seq[Out]]
Chunk up this stream into groups of elements that have a cumulative weight greater than or equal to the minWeight, with the last group possibly smaller than requested minWeight due to end-of-stream.
Chunk up this stream into groups of elements that have a cumulative weight greater than or equal to the minWeight, with the last group possibly smaller than requested minWeight due to end-of-stream.
minWeight must be positive, otherwise IllegalArgumentException is thrown. costFn must return a non-negative result for all inputs, otherwise the stage will fail with an IllegalArgumentException.
Emits when the cumulative weight of elements is greater than or equal to the minWeight or upstream completed
Backpressures when a buffered group weighs more than minWeight and downstream backpressures
Completes when upstream completes
Cancels when downstream cancels
- Definition Classes
- FlowOps
- def groupedWeightedWithin(maxWeight: Long, maxNumber: Int, d: FiniteDuration)(costFn: (Out) => Long): Repr[Seq[Out]]
Chunk up this stream into groups of elements received within a time window, or limited by the weight and number of the elements, whatever happens first.
Chunk up this stream into groups of elements received within a time window, or limited by the weight and number of the elements, whatever happens first. Empty groups will not be emitted if no elements are received from upstream. The last group before end-of-stream will contain the buffered elements since the previously emitted group.
maxWeight must be positive, maxNumber must be positive, and d must be greater than 0 seconds, otherwise IllegalArgumentException is thrown.
Emits when the configured time elapses since the last group has been emitted or weight limit reached
Backpressures when downstream backpressures, and buffered group (+ pending element) weighs more than maxWeight or has more than maxNumber elements
Completes when upstream completes (emits last group)
Cancels when downstream completes
- Definition Classes
- FlowOps
- def groupedWeightedWithin(maxWeight: Long, d: FiniteDuration)(costFn: (Out) => Long): Repr[Seq[Out]]
Chunk up this stream into groups of elements received within a time window, or limited by the weight of the elements, whatever happens first.
Chunk up this stream into groups of elements received within a time window, or limited by the weight of the elements, whatever happens first. Empty groups will not be emitted if no elements are received from upstream. The last group before end-of-stream will contain the buffered elements since the previously emitted group.
maxWeight must be positive, and d must be greater than 0 seconds, otherwise IllegalArgumentException is thrown.
Emits when the configured time elapses since the last group has been emitted or weight limit reached
Backpressures when downstream backpressures, and buffered group (+ pending element) weighs more than maxWeight
Completes when upstream completes (emits last group)
Cancels when downstream completes
- Definition Classes
- FlowOps
- def groupedWithin(n: Int, d: FiniteDuration): Repr[Seq[Out]]
Chunk up this stream into groups of elements received within a time window, or limited by the given number of elements, whatever happens first.
Chunk up this stream into groups of elements received within a time window, or limited by the given number of elements, whatever happens first. Empty groups will not be emitted if no elements are received from upstream. The last group before end-of-stream will contain the buffered elements since the previously emitted group.
n must be positive, and d must be greater than 0 seconds, otherwise IllegalArgumentException is thrown.
Emits when the configured time elapses since the last group has been emitted or n elements is buffered
Backpressures when downstream backpressures, and there are n+1 buffered elements
Completes when upstream completes (emits last group)
Cancels when downstream completes
- Definition Classes
- FlowOps
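A sketch of groupedWithin where the size limit closes the groups before the time window does, assuming pekko-streams on the classpath (exact group boundaries depend on timing, so only timing-independent properties are noted):

```scala
import scala.concurrent.Await
import scala.concurrent.duration._
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ Sink, Source }

implicit val system: ActorSystem = ActorSystem("groupedwithin-demo") // hypothetical demo system

// Each group closes after 3 elements or 1 second, whichever comes first.
val groups = Await.result(
  Source(1 to 7).groupedWithin(3, 1.second).runWith(Sink.seq),
  5.seconds)
// every group has at most 3 elements; flattening restores the input order

Await.result(system.terminate(), 3.seconds)
```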
- def hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @HotSpotIntrinsicCandidate() @native()
- def idleTimeout(timeout: FiniteDuration): Repr[Out]
If the time between two processed elements exceeds the provided timeout, the stream is failed with a scala.concurrent.TimeoutException.
If the time between two processed elements exceeds the provided timeout, the stream is failed with a scala.concurrent.TimeoutException. The timeout is checked periodically, so the resolution of the check is one period (equal to the timeout value).
Emits when upstream emits an element
Backpressures when downstream backpressures
Completes when upstream completes or fails if timeout elapses between two emitted elements
Cancels when downstream cancels
- Definition Classes
- FlowOps
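A sketch of idleTimeout failing a stalled stream, assuming pekko-streams on the classpath (the 1-second stall and 200ms timeout are illustrative):

```scala
import scala.concurrent.Await
import scala.concurrent.duration._
import scala.util.Try
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ Sink, Source }

implicit val system: ActorSystem = ActorSystem("idletimeout-demo") // hypothetical demo system

// The 1-second gap before the final element exceeds the 200ms idle
// timeout, so the stream fails with a TimeoutException.
val outcome = Try(Await.result(
  (Source(1 to 2) ++ Source.single(3).initialDelay(1.second))
    .idleTimeout(200.millis)
    .runWith(Sink.seq),
  5.seconds))
// outcome.isFailure == true

Await.result(system.terminate(), 3.seconds)
```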
- def initialDelay(delay: FiniteDuration): Repr[Out]
Delays the initial element by the specified duration.
Delays the initial element by the specified duration.
Emits when upstream emits an element if the initial delay is already elapsed
Backpressures when downstream backpressures or initial delay is not yet elapsed
Completes when upstream completes
Cancels when downstream cancels
- Definition Classes
- FlowOps
- def initialTimeout(timeout: FiniteDuration): Repr[Out]
If the first element has not passed through this operator before the provided timeout, the stream is failed with a scala.concurrent.TimeoutException.
If the first element has not passed through this operator before the provided timeout, the stream is failed with a scala.concurrent.TimeoutException.
Emits when upstream emits an element
Backpressures when downstream backpressures
Completes when upstream completes or fails if timeout elapses before first element arrives
Cancels when downstream cancels
- Definition Classes
- FlowOps
- def interleave[U >: Out](that: Graph[SourceShape[U], _], segmentSize: Int, eagerClose: Boolean): Repr[U]
Interleave is a deterministic merge of the given Source with elements of this Flow.
Interleave is a deterministic merge of the given Source with elements of this Flow. It first emits segmentSize number of elements from this flow to downstream, then the same amount from that source, then repeats the process.
If eagerClose is false and one of the upstreams completes, the elements from the other upstream will continue passing through the interleave operator. If eagerClose is true and one of the upstreams completes, interleave will cancel the other upstream and complete itself.
If an error occurs in one of the upstreams, the stream completes with failure.
Emits when an element is available from the currently consumed upstream
Backpressures when downstream backpressures. Signals to the current upstream; switches to the next upstream when segmentSize elements have been received
Completes when the Flow and given Source complete
Cancels when downstream cancels
- Definition Classes
- FlowOps
- def interleave[U >: Out](that: Graph[SourceShape[U], _], segmentSize: Int): Repr[U]
Interleave is a deterministic merge of the given Source with elements of this Flow.
Interleave is a deterministic merge of the given Source with elements of this Flow. It first emits segmentSize number of elements from this flow to downstream, then the same amount from that source, then repeats the process.
Example:
Source(List(1, 2, 3)).interleave(Source(List(4, 5, 6, 7)), 2) // 1, 2, 4, 5, 3, 6, 7
After one of the upstreams completes, all the remaining elements will be emitted from the other one.
If an error occurs in one of the upstreams, the stream completes with failure.
Emits when an element is available from the currently consumed upstream
Backpressures when downstream backpressures. Signals to the current upstream; switches to the next upstream when segmentSize elements have been received
Completes when the Flow and given Source complete
Cancels when downstream cancels
- Definition Classes
- FlowOps
- def interleaveAll[U >: Out](those: Seq[Graph[SourceShape[U], _]], segmentSize: Int, eagerClose: Boolean): Repr[U]
Interleave is a deterministic merge of the given Sources with elements of this Flow.
Interleave is a deterministic merge of the given Sources with elements of this Flow. It first emits segmentSize number of elements from this flow to downstream, then the same amount from each of those sources, then repeats the process.
If eagerClose is false and one of the upstreams completes, the elements from the other upstreams will continue passing through the interleave operator. If eagerClose is true and one of the upstreams completes, interleave will cancel the other upstreams and complete itself.
If an error occurs in one of the upstreams, the stream completes with failure.
Emits when an element is available from the currently consumed upstream
Backpressures when downstream backpressures. Signals to the current upstream; switches to the next upstream when segmentSize elements have been received
Completes when the Flow and given Sources complete
Cancels when downstream cancels
- Definition Classes
- FlowOps
- def interleaveGraph[U >: Out, M](that: Graph[SourceShape[U], M], segmentSize: Int, eagerClose: Boolean = false): Graph[FlowShape[Out, U], M]
- Attributes
- protected
- Definition Classes
- FlowOps
- def interleaveMat[U >: Out, Mat2, Mat3](that: Graph[SourceShape[U], Mat2], segmentSize: Int, eagerClose: Boolean)(matF: (Mat, Mat2) => Mat3): ReprMat[U, Mat3]
Interleave is a deterministic merge of the given Source with elements of this Flow.
Interleave is a deterministic merge of the given Source with elements of this Flow. It first emits segmentSize number of elements from this flow to downstream, then the same amount from that source, then repeats the process.
If eagerClose is false and one of the upstreams completes, the elements from the other upstream will continue passing through the interleave operator. If eagerClose is true and one of the upstreams completes, interleave will cancel the other upstream and complete itself.
If an error occurs in one of the upstreams, the stream completes with failure.
- Annotations
- @nowarn()
- See also
#interleave. It is recommended to use the internally optimized Keep.left and Keep.right combiners where appropriate instead of manually writing functions that pass through one of the values.
- def interleaveMat[U >: Out, Mat2, Mat3](that: Graph[SourceShape[U], Mat2], segmentSize: Int)(matF: (Mat, Mat2) => Mat3): ReprMat[U, Mat3]
Interleave is a deterministic merge of the given Source with elements of this Flow.
Interleave is a deterministic merge of the given Source with elements of this Flow. It first emits segmentSize number of elements from this flow to downstream, then the same amount from that source, then repeats the process.
After one of the upstreams completes, all the remaining elements will be emitted from the other one.
If an error occurs in one of the upstreams, the stream completes with failure.
- Annotations
- @nowarn()
- See also
#interleave. It is recommended to use the internally optimized Keep.left and Keep.right combiners where appropriate instead of manually writing functions that pass through one of the values.
- def intersperse[T >: Out](inject: T): Repr[T]
Intersperses stream with provided element, similar to how scala.collection.immutable.List.mkString injects a separator between a List's elements.
Intersperses stream with provided element, similar to how scala.collection.immutable.List.mkString injects a separator between a List's elements.
Additionally can inject start and end marker elements to stream.
Examples:
val nums = Source(List(1,2,3)).map(_.toString) nums.intersperse(",") // 1 , 2 , 3 nums.intersperse("[", ",", "]") // [ 1 , 2 , 3 ]
Emits when upstream emits (or before with the start element if provided)
Backpressures when downstream backpressures
Completes when upstream completes
Cancels when downstream cancels
- Definition Classes
- FlowOps
- def intersperse[T >: Out](start: T, inject: T, end: T): Repr[T]
Intersperses stream with provided element, similar to how scala.collection.immutable.List.mkString injects a separator between a List's elements.
Intersperses stream with provided element, similar to how scala.collection.immutable.List.mkString injects a separator between a List's elements.
Additionally can inject start and end marker elements to stream.
Examples:
val nums = Source(List(1,2,3)).map(_.toString) nums.intersperse(",") // 1 , 2 , 3 nums.intersperse("[", ",", "]") // [ 1 , 2 , 3 ]
In case you want to only prepend or only append an element (yet still inject a separator between elements), you may want to use the following pattern instead of the 3-argument version of intersperse (see Source.concat for semantics details):
Source.single(">> ") ++ Source(List("1", "2", "3")).intersperse(",")
Source(List("1", "2", "3")).intersperse(",") ++ Source.single("END")
Emits when upstream emits (or before with the start element if provided)
Backpressures when downstream backpressures
Completes when upstream completes
Cancels when downstream cancels
- Definition Classes
- FlowOps
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- def keepAlive[U >: Out](maxIdle: FiniteDuration, injectedElem: () => U): Repr[U]
Injects additional elements if upstream does not emit for a configured amount of time.
Injects additional elements if upstream does not emit for a configured amount of time. In other words, this operator attempts to maintain a base rate of emitted elements towards the downstream.
If the downstream backpressures then no element is injected until downstream demand arrives. Injected elements do not accumulate during this period.
Upstream elements are always preferred over injected elements.
Emits when upstream emits an element or if the upstream was idle for the configured period
Backpressures when downstream backpressures
Completes when upstream completes
Cancels when downstream cancels
- Definition Classes
- FlowOps
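A sketch of keepAlive injecting heartbeats into a slow stream, assuming pekko-streams on the classpath (the throttle rate, 100ms idle period, and 0 heartbeat value are illustrative; the number of injected heartbeats depends on timing):

```scala
import scala.concurrent.Await
import scala.concurrent.duration._
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ Sink, Source }

implicit val system: ActorSystem = ActorSystem("keepalive-demo") // hypothetical demo system

// Slow the upstream so that idle gaps exceed 100ms, letting
// keepAlive inject 0 as a heartbeat; real elements are preferred.
val out = Await.result(
  Source(1 to 3)
    .throttle(1, 300.millis)
    .keepAlive(100.millis, () => 0)
    .runWith(Sink.seq),
  10.seconds)
// removing the heartbeats leaves the original elements, in order

Await.result(system.terminate(), 3.seconds)
```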
- def limit(max: Long): Repr[Out]
Ensure stream boundedness by limiting the number of elements from upstream.
Ensure stream boundedness by limiting the number of elements from upstream. If the number of incoming elements exceeds max, it will fail the stream, signaling a StreamLimitException downstream.
Due to input buffering some elements may have been requested from upstream publishers that will then not be processed downstream of this step.
Emits when upstream emits and the number of emitted elements has not reached max
Backpressures when downstream backpressures
Completes when upstream completes and the number of emitted elements has not reached max
Errors when the total number of incoming elements exceeds max
Cancels when downstream cancels
See also FlowOps.take, FlowOps.takeWithin, FlowOps.takeWhile
- Definition Classes
- FlowOps
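A sketch of both sides of limit, assuming pekko-streams on the classpath (values and limits are illustrative; the exact failure exception class is not restated here):

```scala
import scala.concurrent.Await
import scala.concurrent.duration._
import scala.util.Try
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ Sink, Source }

implicit val system: ActorSystem = ActorSystem("limit-demo") // hypothetical demo system

// Under the limit: all elements pass through unchanged.
val ok = Await.result(
  Source(1 to 5).limit(10).runWith(Sink.seq),
  3.seconds)
// ok == Seq(1, 2, 3, 4, 5)

// Over the limit: the stream fails instead of emitting more than max elements.
val overLimit = Try(Await.result(
  Source(1 to 5).limit(3).runWith(Sink.seq),
  3.seconds))
// overLimit.isFailure == true

Await.result(system.terminate(), 3.seconds)
```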
- def limitWeighted[T](max: Long)(costFn: (Out) => Long): Repr[Out]
Ensure stream boundedness by evaluating the cost of incoming elements using a cost function.
Ensure stream boundedness by evaluating the cost of incoming elements using a cost function. Exactly how many elements will be allowed to travel downstream depends on the evaluated cost of each element. If the accumulated cost exceeds max, it will fail the stream, signaling a StreamLimitException downstream.
Due to input buffering some elements may have been requested from upstream publishers that will then not be processed downstream of this step.
Adheres to the ActorAttributes.SupervisionStrategy attribute.
Adheres to the ActorAttributes.SupervisionStrategy attribute.
Emits when upstream emits and the accumulated cost has not reached max
Backpressures when downstream backpressures
Completes when upstream completes and the number of emitted elements has not reached max
Errors when the accumulated cost exceeds max
Cancels when downstream cancels
See also FlowOps.take, FlowOps.takeWithin, FlowOps.takeWhile
- Definition Classes
- FlowOps
- def log(name: String, extract: (Out) => Any = ConstantFun.scalaIdentityFunction)(implicit log: LoggingAdapter = null): Repr[Out]
Logs elements flowing through the stream as well as completion and erroring.
Logs elements flowing through the stream as well as completion and erroring.
By default element and completion signals are logged on debug level, and errors are logged on Error level. This can be adjusted according to your needs by providing a custom Attributes.LogLevels attribute on the given Flow.
Uses implicit LoggingAdapter if available, otherwise uses an internally created one, which uses
org.apache.pekko.event.Logging(materializer.system, materializer)
as its source (use this class to configure slf4j loggers). Adheres to the ActorAttributes.SupervisionStrategy attribute.
Emits when the mapping function returns an element
Backpressures when downstream backpressures
Completes when upstream completes
Cancels when downstream cancels
- Definition Classes
- FlowOps
- def logWithMarker(name: String, marker: (Out) => LogMarker, extract: (Out) => Any = ConstantFun.scalaIdentityFunction)(implicit log: MarkerLoggingAdapter = null): Repr[Out]
Logs elements flowing through the stream as well as completion and erroring.
Logs elements flowing through the stream as well as completion and erroring.
By default element and completion signals are logged on debug level, and errors are logged on Error level. This can be adjusted according to your needs by providing a custom Attributes.LogLevels attribute on the given Flow.
Uses implicit MarkerLoggingAdapter if available, otherwise uses an internally created one, which uses
org.apache.pekko.event.Logging.withMarker(materializer.system, materializer)
as its source (use this class to configure slf4j loggers). Adheres to the ActorAttributes.SupervisionStrategy attribute.
Emits when the mapping function returns an element
Backpressures when downstream backpressures
Completes when upstream completes
Cancels when downstream cancels
- Definition Classes
- FlowOps
- def map[T](f: (Out) => T): Repr[T]
Transform this stream by applying the given function to each of the elements as they pass through this processing step.
Transform this stream by applying the given function to each of the elements as they pass through this processing step.
Adheres to the ActorAttributes.SupervisionStrategy attribute.
Emits when the mapping function returns an element
Backpressures when downstream backpressures
Completes when upstream completes
Cancels when downstream cancels
- Definition Classes
- FlowOps
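As a minimal sketch (names are illustrative; assumes pekko-stream on the classpath), `map` transforms each element one-for-one:

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ Sink, Source }

import scala.concurrent.Await
import scala.concurrent.duration._

object MapDemo extends App {
  implicit val system: ActorSystem = ActorSystem("map-demo")

  // Each element is transformed as it passes through the stage.
  val doubled = Await.result(Source(1 to 3).map(_ * 2).runWith(Sink.seq), 3.seconds)
  println(doubled) // Vector(2, 4, 6)

  system.terminate()
}
```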
- def mapAsync[T](parallelism: Int)(f: (Out) => Future[T]): Repr[T]
Transform this stream by applying the given function to each of the elements as they pass through this processing step.
Transform this stream by applying the given function to each of the elements as they pass through this processing step. The function returns a
Future
and the value of that future will be emitted downstream. The number of Futures that shall run in parallel is given as the first argument to mapAsync. These Futures may complete in any order, but the elements that are emitted downstream are in the same order as received from upstream.
If the function f throws an exception or if the Future is completed with failure and the supervision decision is pekko.stream.Supervision.Stop the stream will be completed with failure.
If the function f throws an exception or if the Future is completed with failure and the supervision decision is pekko.stream.Supervision.Resume or pekko.stream.Supervision.Restart or the Future completed with null, the element is dropped and the stream continues.
The function f is always invoked on the elements in the order they arrive.
Adheres to the ActorAttributes.SupervisionStrategy attribute.
Emits when the Future returned by the provided function finishes for the next element in sequence
Backpressures when the number of futures reaches the configured parallelism and the downstream backpressures or the first future is not completed
Completes when upstream completes and all futures have been completed and all elements have been emitted
Cancels when downstream cancels
- Definition Classes
- FlowOps
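A sketch of the ordering guarantee (names and the sleep are illustrative; assumes pekko-stream on the classpath):

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ Sink, Source }

import scala.concurrent.{ Await, ExecutionContext, Future }
import scala.concurrent.duration._

object MapAsyncDemo extends App {
  implicit val system: ActorSystem = ActorSystem("map-async-demo")
  implicit val ec: ExecutionContext = system.dispatcher

  // Up to 4 futures run concurrently; they may complete out of order,
  // but mapAsync re-establishes upstream order before emitting.
  val result = Await.result(
    Source(1 to 5)
      .mapAsync(parallelism = 4) { n =>
        Future {
          Thread.sleep(scala.util.Random.nextInt(20).toLong)
          n * 2
        }
      }
      .runWith(Sink.seq),
    5.seconds)
  println(result) // Vector(2, 4, 6, 8, 10), always in input order

  system.terminate()
}
```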
- def mapAsyncUnordered[T](parallelism: Int)(f: (Out) => Future[T]): Repr[T]
Transform this stream by applying the given function to each of the elements as they pass through this processing step.
Transform this stream by applying the given function to each of the elements as they pass through this processing step. The function returns a
Future
and the value of that future will be emitted downstream. The number of Futures that shall run in parallel is given as the first argument to mapAsyncUnordered. Each processed element will be emitted downstream as soon as it is ready, i.e. it is possible that the elements are not emitted downstream in the same order as received from upstream.
If the function f throws an exception or if the Future is completed with failure and the supervision decision is pekko.stream.Supervision.Stop the stream will be completed with failure.
If the function f throws an exception or if the Future is completed with failure and the supervision decision is pekko.stream.Supervision.Resume or pekko.stream.Supervision.Restart or the Future completed with null, the element is dropped and the stream continues.
The function f is always invoked on the elements in the order they arrive (even though the result of the futures returned by f might be emitted in a different order).
Adheres to the ActorAttributes.SupervisionStrategy attribute.
Emits when any of the Futures returned by the provided function complete
Backpressures when the number of futures reaches the configured parallelism and the downstream backpressures
Completes when upstream completes and all futures have been completed and all elements have been emitted
Cancels when downstream cancels
- def mapConcat[T](f: (Out) => IterableOnce[T]): Repr[T]
Transform each input element into an Iterable of output elements that is then flattened into the output stream.
Transform each input element into an Iterable of output elements that is then flattened into the output stream. The returned Iterable MUST NOT contain null values, as they are illegal as stream elements - according to the Reactive Streams specification.
Emits when the mapping function returns an element or there are still remaining elements from the previously calculated collection
Backpressures when downstream backpressures or there are still remaining elements from the previously calculated collection
Completes when upstream completes and all remaining elements have been emitted
Cancels when downstream cancels
- Definition Classes
- FlowOps
- Annotations
- @nowarn()
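A short sketch of the flattening (names are illustrative; assumes pekko-stream on the classpath):

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ Sink, Source }

import scala.concurrent.Await
import scala.concurrent.duration._

object MapConcatDemo extends App {
  implicit val system: ActorSystem = ActorSystem("map-concat-demo")

  // Each upstream element maps to a collection, whose elements are
  // emitted downstream one at a time.
  val words = Await.result(
    Source(List("a b", "c")).mapConcat(_.split(" ").toList).runWith(Sink.seq),
    3.seconds)
  println(words) // Vector(a, b, c)

  system.terminate()
}
```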
- def mapError(pf: PartialFunction[Throwable, Throwable]): Repr[Out]
While similar to recover this operator can be used to transform an error signal to a different one *without* logging it as an error in the process.
While similar to recover this operator can be used to transform an error signal to a different one *without* logging it as an error in the process.
While similar to recover this operator can be used to transform an error signal to a different one *without* logging it as an error in the process. So in that sense it is NOT exactly equivalent to recover(t => throw t2) since recover would log the t2 error.
Since the underlying failure signal onError arrives out-of-band, it might jump over existing elements. This operator can recover the failure signal, but not the skipped elements, which will be dropped.
Similarly to recover, throwing an exception inside mapError _will_ be logged.
Emits when element is available from the upstream or upstream is failed and pf returns an element
Backpressures when downstream backpressures
Completes when upstream completes or upstream failed with exception pf can handle
Cancels when downstream cancels
- Definition Classes
- FlowOps
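A sketch of translating one failure into another (the exception choice is illustrative; assumes pekko-stream on the classpath):

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ Sink, Source }

import scala.concurrent.Await
import scala.concurrent.duration._
import scala.util.Try

object MapErrorDemo extends App {
  implicit val system: ActorSystem = ActorSystem("map-error-demo")

  // The ArithmeticException from 10 / 0 is translated into a
  // domain-specific failure before it reaches the sink.
  val failed = Source(List(1, 0))
    .map(10 / _)
    .mapError { case _: ArithmeticException => new IllegalStateException("bad input") }
    .runWith(Sink.seq)

  val outcome = Try(Await.result(failed, 3.seconds))
  println(outcome.isFailure) // true: the stream fails with IllegalStateException

  system.terminate()
}
```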
- def merge[U >: Out, M](that: Graph[SourceShape[U], M], eagerComplete: Boolean = false): Repr[U]
Merge the given Source to this Flow, taking elements as they arrive from input streams, picking randomly when several elements are ready.
Merge the given Source to this Flow, taking elements as they arrive from input streams, picking randomly when several elements are ready.
Emits when one of the inputs has an element available
Backpressures when downstream backpressures
Completes when all upstreams complete (eagerComplete=false) or one upstream completes (eagerComplete=true), default value is false
Cancels when downstream cancels
- Definition Classes
- FlowOps
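A sketch of merging two sources (names are illustrative; assumes pekko-stream on the classpath). Note that only the set of elements is guaranteed, not their interleaving:

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ Sink, Source }

import scala.concurrent.Await
import scala.concurrent.duration._

object MergeDemo extends App {
  implicit val system: ActorSystem = ActorSystem("merge-demo")

  // All six elements arrive downstream; the interleaving between the
  // two inputs is nondeterministic, so we sort before printing.
  val merged = Await.result(
    Source(List(1, 2, 3)).merge(Source(List(10, 20, 30))).runWith(Sink.seq),
    3.seconds)
  println(merged.sorted) // Vector(1, 2, 3, 10, 20, 30)

  system.terminate()
}
```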
- def mergeAll[U >: Out](those: Seq[Graph[SourceShape[U], _]], eagerComplete: Boolean): Repr[U]
Merge the given Sources to this Flow, taking elements as they arrive from input streams, picking randomly when several elements are ready.
Merge the given Sources to this Flow, taking elements as they arrive from input streams, picking randomly when several elements are ready.
Emits when one of the inputs has an element available
Backpressures when downstream backpressures
Completes when all upstreams complete (eagerComplete=false) or one upstream completes (eagerComplete=true), default value is false
Cancels when downstream cancels
- Definition Classes
- FlowOps
- def mergeGraph[U >: Out, M](that: Graph[SourceShape[U], M], eagerComplete: Boolean): Graph[FlowShape[Out, U], M]
- Attributes
- protected
- Definition Classes
- FlowOps
- def mergeLatest[U >: Out, M](that: Graph[SourceShape[U], M], eagerComplete: Boolean = false): Repr[Seq[U]]
MergeLatest joins elements from N input streams into a stream of lists of size N.
MergeLatest joins elements from N input streams into a stream of lists of size N. The i-th element in the list is the latest element emitted by the i-th input stream. MergeLatest emits a list for each element emitted by any input stream, but only after each input stream has emitted at least one element.
Emits when an element is available from some input and each input emits at least one element from stream start
Completes when all upstreams complete (eagerClose=false) or one upstream completes (eagerClose=true)
- Definition Classes
- FlowOps
- def mergeLatestGraph[U >: Out, M](that: Graph[SourceShape[U], M], eagerComplete: Boolean): Graph[FlowShape[Out, Seq[U]], M]
- Attributes
- protected
- Definition Classes
- FlowOps
- def mergeLatestMat[U >: Out, Mat2, Mat3](that: Graph[SourceShape[U], Mat2], eagerClose: Boolean)(matF: (Mat, Mat2) => Mat3): ReprMat[Seq[U], Mat3]
MergeLatest joins elements from N input streams into a stream of lists of size N.
MergeLatest joins elements from N input streams into a stream of lists of size N. The i-th element in the list is the latest element emitted by the i-th input stream. MergeLatest emits a list for each element emitted by any input stream, but only after each input stream has emitted at least one element.
- See also
#mergeLatest. It is recommended to use the internally optimized Keep.left and Keep.right combiners where appropriate instead of manually writing functions that pass through one of the values.
- def mergeMat[U >: Out, Mat2, Mat3](that: Graph[SourceShape[U], Mat2], eagerComplete: Boolean = false)(matF: (Mat, Mat2) => Mat3): ReprMat[U, Mat3]
Merge the given Source to this Flow, taking elements as they arrive from input streams, picking randomly when several elements are ready.
Merge the given Source to this Flow, taking elements as they arrive from input streams, picking randomly when several elements are ready.
- See also
#merge. It is recommended to use the internally optimized Keep.left and Keep.right combiners where appropriate instead of manually writing functions that pass through one of the values.
- def mergePreferred[U >: Out, M](that: Graph[SourceShape[U], M], preferred: Boolean, eagerComplete: Boolean = false): Repr[U]
Merge two sources.
Merge two sources. Prefer one source if both sources have elements ready.
emits when one of the inputs has an element available. If multiple have elements available, prefer the 'right' one when 'preferred' is 'true', or the 'left' one when 'preferred' is 'false'.
backpressures when downstream backpressures
completes when all upstreams complete (This behavior is changeable to completing when any upstream completes by setting eagerComplete=true.)
- Definition Classes
- FlowOps
- Annotations
- @nowarn()
- def mergePreferredGraph[U >: Out, M](that: Graph[SourceShape[U], M], preferred: Boolean, eagerComplete: Boolean): Graph[FlowShape[Out, U], M]
- Attributes
- protected
- Definition Classes
- FlowOps
- Annotations
- @nowarn()
- def mergePreferredMat[U >: Out, Mat2, Mat3](that: Graph[SourceShape[U], Mat2], preferred: Boolean, eagerClose: Boolean)(matF: (Mat, Mat2) => Mat3): ReprMat[U, Mat3]
Merge two sources.
Merge two sources. Prefer one source if both sources have elements ready.
- See also
#mergePreferred. It is recommended to use the internally optimized Keep.left and Keep.right combiners where appropriate instead of manually writing functions that pass through one of the values.
- def mergePrioritized[U >: Out, M](that: Graph[SourceShape[U], M], leftPriority: Int, rightPriority: Int, eagerComplete: Boolean = false): Repr[U]
Merge two sources.
Merge two sources. Prefer the sources depending on the 'priority' parameters.
emits when one of the inputs has an element available, preferring inputs based on the 'priority' parameters if both have elements available
backpressures when downstream backpressures
completes when both upstreams complete (This behavior is changeable to completing when any upstream completes by setting eagerComplete=true.)
- Definition Classes
- FlowOps
- def mergePrioritizedGraph[U >: Out, M](that: Graph[SourceShape[U], M], leftPriority: Int, rightPriority: Int, eagerComplete: Boolean): Graph[FlowShape[Out, U], M]
- Attributes
- protected
- Definition Classes
- FlowOps
- def mergePrioritizedMat[U >: Out, Mat2, Mat3](that: Graph[SourceShape[U], Mat2], leftPriority: Int, rightPriority: Int, eagerClose: Boolean)(matF: (Mat, Mat2) => Mat3): ReprMat[U, Mat3]
Merge two sources.
Merge two sources. Prefer the sources depending on the 'priority' parameters.
It is recommended to use the internally optimized Keep.left and Keep.right combiners where appropriate instead of manually writing functions that pass through one of the values.
- def mergeSorted[U >: Out, M](that: Graph[SourceShape[U], M])(implicit ord: Ordering[U]): Repr[U]
Merge the given Source to this Flow, taking elements as they arrive from input streams, picking always the smallest of the available elements (waiting for one element from each side to be available).
Merge the given Source to this Flow, taking elements as they arrive from input streams, picking always the smallest of the available elements (waiting for one element from each side to be available). This means that possible contiguity of the input streams is not exploited to avoid waiting for elements, this merge will block when one of the inputs does not have more elements (and does not complete).
Emits when all of the inputs have an element available
Backpressures when downstream backpressures
Completes when all upstreams complete
Cancels when downstream cancels
- Definition Classes
- FlowOps
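A sketch of a sorted merge (names are illustrative; assumes pekko-stream on the classpath). Both inputs must already be sorted for the result to be sorted:

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ Sink, Source }

import scala.concurrent.Await
import scala.concurrent.duration._

object MergeSortedDemo extends App {
  implicit val system: ActorSystem = ActorSystem("merge-sorted-demo")

  // Always emits the smaller of the two currently available elements.
  val merged = Await.result(
    Source(List(1, 3, 5)).mergeSorted(Source(List(2, 4, 6))).runWith(Sink.seq),
    3.seconds)
  println(merged) // Vector(1, 2, 3, 4, 5, 6)

  system.terminate()
}
```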
- def mergeSortedGraph[U >: Out, M](that: Graph[SourceShape[U], M])(implicit ord: Ordering[U]): Graph[FlowShape[Out, U], M]
- Attributes
- protected
- Definition Classes
- FlowOps
- def mergeSortedMat[U >: Out, Mat2, Mat3](that: Graph[SourceShape[U], Mat2])(matF: (Mat, Mat2) => Mat3)(implicit ord: Ordering[U]): ReprMat[U, Mat3]
Merge the given Source to this Flow, taking elements as they arrive from input streams, picking always the smallest of the available elements (waiting for one element from each side to be available).
Merge the given Source to this Flow, taking elements as they arrive from input streams, picking always the smallest of the available elements (waiting for one element from each side to be available). This means that possible contiguity of the input streams is not exploited to avoid waiting for elements, this merge will block when one of the inputs does not have more elements (and does not complete).
- See also
#mergeSorted. It is recommended to use the internally optimized Keep.left and Keep.right combiners where appropriate instead of manually writing functions that pass through one of the values.
- def monitor: ReprMat[Out, (Mat, FlowMonitor[Out])]
Materializes to (Mat, FlowMonitor[Out]), which is unlike most other operators (!), in which usually the default materialized value keeping semantics is to keep the left value (by passing Keep.left() to a *Mat version of a method).
Materializes to (Mat, FlowMonitor[Out]), which is unlike most other operators (!), in which usually the default materialized value keeping semantics is to keep the left value (by passing Keep.left() to a *Mat version of a method). This operator is an exception from that rule and keeps both values, since dropping the monitor would defeat its sole purpose, which is to introduce that materialized value.
The FlowMonitor[Out] allows monitoring of the current flow. All events are propagated by the monitor unchanged. Note that the monitor inserts a memory barrier every time it processes an event, and may therefore affect performance.
- def monitorMat[Mat2](combine: (Mat, FlowMonitor[Out]) => Mat2): ReprMat[Out, Mat2]
Materializes to FlowMonitor[Out] that allows monitoring of the current flow.
Materializes to FlowMonitor[Out] that allows monitoring of the current flow. All events are propagated by the monitor unchanged. Note that the monitor inserts a memory barrier every time it processes an event, and may therefore affect performance.
The combine function is used to combine the FlowMonitor with this flow's materialized value.
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @HotSpotIntrinsicCandidate() @native()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @HotSpotIntrinsicCandidate() @native()
- def orElse[U >: Out, Mat2](secondary: Graph[SourceShape[U], Mat2]): Repr[U]
Provides a secondary source that will be consumed if this stream completes without any elements passing by.
Provides a secondary source that will be consumed if this stream completes without any elements passing by. As soon as the first element comes through this stream, the alternative will be cancelled.
Note that this Flow will be materialized together with the Source and just kept from producing elements by asserting back-pressure until its time comes or it gets cancelled.
On errors the operator is failed regardless of source of the error.
Emits when element is available from first stream or first stream closed without emitting any elements and an element is available from the second stream
Backpressures when downstream backpressures
Completes when the primary stream completes after emitting at least one element, when the primary stream completes without emitting and the secondary stream already has completed or when the secondary stream completes
Cancels when downstream cancels and additionally the alternative is cancelled as soon as an element passes by from this stream.
- Definition Classes
- FlowOps
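A sketch of the fallback behaviour (names are illustrative; assumes pekko-stream on the classpath):

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ Sink, Source }

import scala.concurrent.Await
import scala.concurrent.duration._

object OrElseDemo extends App {
  implicit val system: ActorSystem = ActorSystem("or-else-demo")

  // The primary stream is empty, so the secondary one is consumed instead.
  val fallback = Await.result(
    Source.empty[Int].orElse(Source(List(1, 2, 3))).runWith(Sink.seq), 3.seconds)
  println(fallback) // Vector(1, 2, 3)

  // The primary stream emits, so the secondary source is cancelled.
  val primary = Await.result(
    Source(List(9)).orElse(Source(List(1, 2, 3))).runWith(Sink.seq), 3.seconds)
  println(primary) // Vector(9)

  system.terminate()
}
```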
- def orElseGraph[U >: Out, Mat2](secondary: Graph[SourceShape[U], Mat2]): Graph[FlowShape[Out, U], Mat2]
- Attributes
- protected
- Definition Classes
- FlowOps
- def orElseMat[U >: Out, Mat2, Mat3](secondary: Graph[SourceShape[U], Mat2])(matF: (Mat, Mat2) => Mat3): ReprMat[U, Mat3]
Provides a secondary source that will be consumed if this stream completes without any elements passing by.
Provides a secondary source that will be consumed if this stream completes without any elements passing by. As soon as the first element comes through this stream, the alternative will be cancelled.
Note that this Flow will be materialized together with the Source and just kept from producing elements by asserting back-pressure until its time comes or it gets cancelled.
On errors the operator is failed regardless of source of the error.
Emits when element is available from first stream or first stream closed without emitting any elements and an element is available from the second stream
Backpressures when downstream backpressures
Completes when the primary stream completes after emitting at least one element, when the primary stream completes without emitting and the secondary stream already has completed or when the secondary stream completes
Cancels when downstream cancels and additionally the alternative is cancelled as soon as an element passes by from this stream.
- def prefixAndTail[U >: Out](n: Int): Repr[(Seq[Out], Source[U, NotUsed])]
Takes up to n elements from the stream (less than n only if the upstream completes before emitting n elements) and returns a pair containing a strict sequence of the taken elements and a stream representing the remaining elements.
Takes up to n elements from the stream (less than n only if the upstream completes before emitting n elements) and returns a pair containing a strict sequence of the taken elements and a stream representing the remaining elements. If n is zero or negative, then this will return a pair of an empty collection and a stream containing the whole upstream unchanged.
In case of an upstream error, depending on the current state:
- the master stream signals the error if fewer than n elements have been seen, and therefore the substream has not yet been emitted
- the tail substream signals the error after the prefix and tail have been emitted by the main stream (at that point the main stream has already completed)
Emits when the configured number of prefix elements are available. Emits this prefix, and the rest as a substream
Backpressures when downstream backpressures or substream backpressures
Completes when prefix elements have been consumed and substream has been consumed
Cancels when downstream cancels or substream cancels
- Definition Classes
- FlowOps
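A sketch of splitting off a strict prefix (names are illustrative; assumes pekko-stream on the classpath):

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ Sink, Source }

import scala.concurrent.Await
import scala.concurrent.duration._

object PrefixAndTailDemo extends App {
  implicit val system: ActorSystem = ActorSystem("prefix-tail-demo")

  // The single emitted element is a pair of the strict prefix
  // and a Source of the remaining elements.
  val (prefix, tail) = Await.result(
    Source(1 to 10).prefixAndTail(3).runWith(Sink.head), 3.seconds)
  println(prefix) // Vector(1, 2, 3)

  val rest = Await.result(tail.runWith(Sink.seq), 3.seconds)
  println(rest) // Vector(4, 5, 6, 7, 8, 9, 10)

  system.terminate()
}
```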
- def prepend[U >: Out, Mat2](that: Graph[SourceShape[U], Mat2]): Repr[U]
Prepend the given Source to this Flow, meaning that before elements are generated from this Flow, the Source's elements will be produced until it is exhausted, at which point Flow elements will start being produced.
Prepend the given Source to this Flow, meaning that before elements are generated from this Flow, the Source's elements will be produced until it is exhausted, at which point Flow elements will start being produced.
Note that the Source is materialized together with this Flow and is "detached", meaning it in effect behaves as a one element buffer in front of both the sources, which eagerly demands an element on start (so it cannot be combined with Source.lazy to defer materialization of that).
This flow will then be kept from producing elements by asserting back-pressure until its time comes.
When needing a prepend operator that is not detached use #prependLazy
Emits when element is available from the given Source or from current stream when the Source is completed
Backpressures when downstream backpressures
Completes when this Flow completes
Cancels when downstream cancels
- Definition Classes
- FlowOps
- def prependGraph[U >: Out, Mat2](that: Graph[SourceShape[U], Mat2], detached: Boolean): Graph[FlowShape[Out, U], Mat2]
- Attributes
- protected
- Definition Classes
- FlowOps
- def prependLazy[U >: Out, Mat2](that: Graph[SourceShape[U], Mat2]): Repr[U]
Prepend the given Source to this Flow, meaning that before elements are generated from this Flow, the Source's elements will be produced until it is exhausted, at which point Flow elements will start being produced.
Prepend the given Source to this Flow, meaning that before elements are generated from this Flow, the Source's elements will be produced until it is exhausted, at which point Flow elements will start being produced.
Note that the Source is materialized together with this Flow and will then be kept from producing elements by asserting back-pressure until its time comes.
When needing a prepend operator that is also detached use #prepend
If the given Source gets upstream error - no elements from this Flow will be pulled.
Emits when element is available from the given Source or from current stream when the Source is completed
Backpressures when downstream backpressures
Completes when this Flow completes
Cancels when downstream cancels
- Definition Classes
- FlowOps
- def prependLazyMat[U >: Out, Mat2, Mat3](that: Graph[SourceShape[U], Mat2])(matF: (Mat, Mat2) => Mat3): ReprMat[U, Mat3]
Prepend the given Source to this Flow, meaning that before elements are generated from this Flow, the Source's elements will be produced until it is exhausted, at which point Flow elements will start being produced.
Prepend the given Source to this Flow, meaning that before elements are generated from this Flow, the Source's elements will be produced until it is exhausted, at which point Flow elements will start being produced.
Note that the Source is materialized together with this Flow and is "detached", meaning it in effect behaves as a one element buffer in front of both the sources, which eagerly demands an element on start (so it cannot be combined with Source.lazy to defer materialization of that).
This flow will then be kept from producing elements by asserting back-pressure until its time comes.
When needing a prepend operator that is not detached use #prependLazyMat
- See also
#prependLazy. It is recommended to use the internally optimized Keep.left and Keep.right combiners where appropriate instead of manually writing functions that pass through one of the values.
- def prependMat[U >: Out, Mat2, Mat3](that: Graph[SourceShape[U], Mat2])(matF: (Mat, Mat2) => Mat3): ReprMat[U, Mat3]
Prepend the given Source to this Flow, meaning that before elements are generated from this Flow, the Source's elements will be produced until it is exhausted, at which point Flow elements will start being produced.
Prepend the given Source to this Flow, meaning that before elements are generated from this Flow, the Source's elements will be produced until it is exhausted, at which point Flow elements will start being produced.
Note that this Flow will be materialized together with the Source and just kept from producing elements by asserting back-pressure until its time comes.
If the given Source gets upstream error - no elements from this Flow will be pulled.
When needing a prepend operator that is not detached use #prependLazyMat
- See also
#prepend. It is recommended to use the internally optimized Keep.left and Keep.right combiners where appropriate instead of manually writing functions that pass through one of the values.
- def recover[T >: Out](pf: PartialFunction[Throwable, T]): Repr[T]
Recover allows sending a last element on failure and gracefully completing the stream. Since the underlying failure signal onError arrives out-of-band, it might jump over existing elements.
Recover allows sending a last element on failure and gracefully completing the stream. Since the underlying failure signal onError arrives out-of-band, it might jump over existing elements. This operator can recover the failure signal, but not the skipped elements, which will be dropped.
Throwing an exception inside recover _will_ be logged on ERROR level automatically.
Emits when element is available from the upstream or upstream is failed and pf returns an element
Backpressures when downstream backpressures
Completes when upstream completes or upstream failed with exception pf can handle
Cancels when downstream cancels
- Definition Classes
- FlowOps
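A sketch of replacing a failure with a final element (names are illustrative; assumes pekko-stream on the classpath):

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ Sink, Source }

import scala.concurrent.Await
import scala.concurrent.duration._

object RecoverDemo extends App {
  implicit val system: ActorSystem = ActorSystem("recover-demo")

  // The failure at element 4 is replaced by a final element, -1,
  // and the stream then completes normally; element 5 is never seen.
  val result = Await.result(
    Source(1 to 5)
      .map(n => if (n == 4) throw new RuntimeException("boom") else n)
      .recover { case _: RuntimeException => -1 }
      .runWith(Sink.seq),
    3.seconds)
  println(result) // Vector(1, 2, 3, -1)

  system.terminate()
}
```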
- def recoverWith[T >: Out](pf: PartialFunction[Throwable, Graph[SourceShape[T], NotUsed]]): Repr[T]
RecoverWith allows switching to an alternative Source on flow failure.
RecoverWith allows switching to an alternative Source on flow failure. It will stay in effect after a failure has been recovered so that each time there is a failure it is fed into the pf and a new Source may be materialized.
Since the underlying failure signal onError arrives out-of-band, it might jump over existing elements. This operator can recover the failure signal, but not the skipped elements, which will be dropped.
Throwing an exception inside recoverWith _will_ be logged on ERROR level automatically.
Emits when element is available from the upstream or upstream is failed and element is available from alternative Source
Backpressures when downstream backpressures
Completes when upstream completes or upstream failed with exception pf can handle
Cancels when downstream cancels
- Definition Classes
- FlowOps
- def recoverWithRetries[T >: Out](attempts: Int, pf: PartialFunction[Throwable, Graph[SourceShape[T], NotUsed]]): Repr[T]
RecoverWithRetries allows switching to an alternative Source on flow failure.
RecoverWithRetries allows switching to an alternative Source on flow failure. It will stay in effect after a failure has been recovered up to attempts number of times so that each time there is a failure it is fed into the pf and a new Source may be materialized. Note that if you pass in 0, this won't attempt to recover at all.
A negative attempts number is interpreted as "infinite", which results in the exact same behavior as recoverWith.
Since the underlying failure signal onError arrives out-of-band, it might jump over existing elements. This operator can recover the failure signal, but not the skipped elements, which will be dropped.
Throwing an exception inside recoverWithRetries _will_ be logged on ERROR level automatically.
Emits when element is available from the upstream or upstream is failed and element is available from alternative Source
Backpressures when downstream backpressures
Completes when upstream completes or upstream failed with exception pf can handle
Cancels when downstream cancels
- attempts
Maximum number of retries or -1 to retry indefinitely
- pf
Receives the failure cause and returns the new Source to be materialized if any
- Definition Classes
- FlowOps
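A sketch of switching to an alternative Source after a failure (names are illustrative; assumes pekko-stream on the classpath):

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ Sink, Source }

import scala.concurrent.Await
import scala.concurrent.duration._

object RecoverWithRetriesDemo extends App {
  implicit val system: ActorSystem = ActorSystem("recover-retries-demo")

  // A source that emits two elements and then fails.
  val failing = Source(List(1, 2)).concat(Source.failed(new RuntimeException("boom")))

  // One recovery attempt: after the failure, the alternative source takes over.
  val result = Await.result(
    failing
      .recoverWithRetries(1, { case _: RuntimeException => Source(List(98, 99)) })
      .runWith(Sink.seq),
    3.seconds)
  println(result) // Vector(1, 2, 98, 99)

  system.terminate()
}
```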
- def reduce[T >: Out](f: (T, T) => T): Repr[T]
Similar to fold but uses the first element as the zero element.
Similar to fold but uses the first element as the zero element. Applies the given function towards its current and next value, yielding the next current value.
If the stream is empty (i.e. completes before signalling any elements), the reduce operator will fail its downstream with a NoSuchElementException, which is semantically in-line with what Scala's standard library collections do in such situations.
Adheres to the ActorAttributes.SupervisionStrategy attribute.
Emits when upstream completes
Backpressures when downstream backpressures
Completes when upstream completes
Cancels when downstream cancels
See also FlowOps.fold
- Definition Classes
- FlowOps
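A minimal sketch (names are illustrative; assumes pekko-stream on the classpath):

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ Sink, Source }

import scala.concurrent.Await
import scala.concurrent.duration._

object ReduceDemo extends App {
  implicit val system: ActorSystem = ActorSystem("reduce-demo")

  // The first element seeds the accumulator; one value is emitted at completion.
  val sum = Await.result(Source(1 to 4).reduce(_ + _).runWith(Sink.head), 3.seconds)
  println(sum) // 10

  // An empty source would instead fail downstream with a NoSuchElementException.
  system.terminate()
}
```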
- def scan[T](zero: T)(f: (T, Out) => T): Repr[T]
Similar to fold but is not a terminal operation: emits its current value, which starts at zero, and then applies the current and next value to the given function f, emitting the next current value.
Similar to fold but is not a terminal operation: emits its current value, which starts at zero, and then applies the current and next value to the given function f, emitting the next current value.
If the function f throws an exception and the supervision decision is pekko.stream.Supervision.Restart, the current value starts at zero again and the stream will continue.
Adheres to the ActorAttributes.SupervisionStrategy attribute.
Note that the zero value must be immutable.
Emits when the function scanning the element returns a new element
Backpressures when downstream backpressures
Completes when upstream completes
Cancels when downstream cancels
See also FlowOps.scanAsync
- Definition Classes
- FlowOps
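A sketch of the emitted sequence (names are illustrative; assumes pekko-stream on the classpath). Note that the zero value itself is the first emitted element:

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ Sink, Source }

import scala.concurrent.Await
import scala.concurrent.duration._

object ScanDemo extends App {
  implicit val system: ActorSystem = ActorSystem("scan-demo")

  // Emits the zero first, then each intermediate running total.
  val totals = Await.result(
    Source(1 to 3).scan(0)(_ + _).runWith(Sink.seq), 3.seconds)
  println(totals) // Vector(0, 1, 3, 6)

  system.terminate()
}
```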
- def scanAsync[T](zero: T)(f: (T, Out) => Future[T]): Repr[T]
Similar to scan but with an asynchronous function: emits its current value, which starts at zero, and then applies the current and next value to the given function f, emitting a Future that resolves to the next current value.
Similar to scan but with an asynchronous function: emits its current value, which starts at zero, and then applies the current and next value to the given function f, emitting a Future that resolves to the next current value.
If the function f throws an exception and the supervision decision is pekko.stream.Supervision.Restart, the current value starts at zero again and the stream will continue.
If the function f throws an exception and the supervision decision is pekko.stream.Supervision.Resume, the current value starts at the previous current value, or zero when it doesn't have one, and the stream will continue.
Adheres to the ActorAttributes.SupervisionStrategy attribute.
Note that the zero value must be immutable.
Emits when the future returned by f completes
Backpressures when downstream backpressures
Completes when upstream completes and the last future returned by f completes
Cancels when downstream cancels
See also FlowOps.scan
- Definition Classes
- FlowOps
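The sequential one-future-at-a-time behavior can be sketched on a plain List; this models the semantics only, not the Pekko operator:

```scala
import scala.concurrent.{ Await, Future }
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Sketch of scanAsync semantics on a List (not the streaming operator):
// each step waits for the previous Future before applying f to the next
// element, and the zero is emitted first, as with scan.
def scanAsyncLike[S, T](xs: List[T], zero: S)(f: (S, T) => Future[S]): Future[List[S]] =
  xs.foldLeft(Future.successful(List(zero))) { (accF, x) =>
    accF.flatMap(acc => f(acc.head, x).map(_ :: acc))
  }.map(_.reverse)

val out = Await.result(scanAsyncLike(List(1, 2, 3), 0)((s, x) => Future(s + x)), 2.seconds)
// List(0, 1, 3, 6)
```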
- def sliding(n: Int, step: Int = 1): Repr[Seq[Out]]
Apply a sliding window over the stream and return the windows as groups of elements, with the last group possibly smaller than requested due to end-of-stream.
Apply a sliding window over the stream and return the windows as groups of elements, with the last group possibly smaller than requested due to end-of-stream.
n must be positive, otherwise IllegalArgumentException is thrown.
step must be positive, otherwise IllegalArgumentException is thrown.
Emits when enough elements have been collected within the window or upstream completed
Backpressures when a window has been assembled and downstream backpressures
Completes when upstream completes
Cancels when downstream cancels
- Definition Classes
- FlowOps
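Scala's collection sliding has the same windowing semantics (an analogue, not the streaming operator), including the possibly smaller final group:

```scala
// sliding(n, step) on collections: overlapping windows, and the last
// group may be smaller than n at end of input.
val windows = List(1, 2, 3, 4, 5).sliding(3, 2).toList
// List(List(1, 2, 3), List(3, 4, 5))
val partial = List(1, 2, 3, 4, 5).sliding(2, 2).toList
// List(List(1, 2), List(3, 4), List(5))
```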
- def splitAfter(p: (Out) => Boolean): SubFlow[Out, Mat, Repr, Closed]
This operation applies the given predicate to all incoming elements and emits them to a stream of output streams.
This operation applies the given predicate to all incoming elements and emits them to a stream of output streams. It *ends* the current substream when the predicate is true.
- Definition Classes
- FlowOps
- See also
- def splitAfter(substreamCancelStrategy: SubstreamCancelStrategy)(p: (Out) => Boolean): SubFlow[Out, Mat, Repr, Closed]
This operation applies the given predicate to all incoming elements and emits them to a stream of output streams.
This operation applies the given predicate to all incoming elements and emits them to a stream of output streams. It *ends* the current substream when the predicate is true. This means that for the following series of predicate values, three substreams will be produced with lengths 2, 2, and 3:
false, true,        // elements go into first substream
false, true,        // elements go into second substream
false, false, true  // elements go into third substream
The object returned from this method is not a normal Source or Flow, it is a SubFlow. This means that after this operator all transformations are applied to all encountered substreams in the same fashion. Substream mode is exited either by closing the substream (i.e. connecting it to a Sink) or by merging the substreams back together; see the to and mergeBack methods on SubFlow for more information.
It is important to note that the substreams also propagate back-pressure as any other stream, which means that blocking one substream will block the splitAfter operator itself, and thereby all substreams, once all internal or explicit buffers are filled.
If the split predicate p throws an exception and the supervision decision is pekko.stream.Supervision.Stop, the stream and substreams will be completed with failure.
If the split predicate p throws an exception and the supervision decision is pekko.stream.Supervision.Resume or pekko.stream.Supervision.Restart, the element is dropped and the stream and substreams continue.
Emits when an element passes through. When the provided predicate is true it emits the element and opens a new substream for subsequent elements
Backpressures when there is an element pending for the next substream, but the previous is not fully consumed yet, or the substream backpressures
Completes when upstream completes
Cancels when downstream cancels and substreams cancel on SubstreamCancelStrategy.drain, or when downstream cancels or any substream cancels on SubstreamCancelStrategy.propagate
See also FlowOps.splitWhen.
- Definition Classes
- FlowOps
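The grouping rule can be modelled on a plain List; this sketches only where the substream boundaries fall, not the streaming operator itself:

```scala
// Sketch of splitAfter's grouping rule (not the streaming operator):
// an element for which p is true closes the current group.
def splitAfterLike[T](xs: List[T])(p: T => Boolean): List[List[T]] = {
  val (done, cur) = xs.foldLeft((List.empty[List[T]], List.empty[T])) {
    case ((acc, open), x) =>
      if (p(x)) ((x :: open).reverse :: acc, Nil) // predicate true: end the current substream
      else (acc, x :: open)
  }
  (if (cur.isEmpty) done else cur.reverse :: done).reverse
}

val groups = splitAfterLike(List(false, true, false, true, false, false, true))(identity)
// group sizes: 2, 2, 3
```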
- def splitWhen(p: (Out) => Boolean): SubFlow[Out, Mat, Repr, Closed]
This operation applies the given predicate to all incoming elements and emits them to a stream of output streams, always beginning a new one with the current element if the given predicate returns true for it.
This operation applies the given predicate to all incoming elements and emits them to a stream of output streams, always beginning a new one with the current element if the given predicate returns true for it.
- Definition Classes
- FlowOps
- See also
- def splitWhen(substreamCancelStrategy: SubstreamCancelStrategy)(p: (Out) => Boolean): SubFlow[Out, Mat, Repr, Closed]
This operation applies the given predicate to all incoming elements and emits them to a stream of output streams, always beginning a new one with the current element if the given predicate returns true for it.
This operation applies the given predicate to all incoming elements and emits them to a stream of output streams, always beginning a new one with the current element if the given predicate returns true for it. This means that for the following series of predicate values, three substreams will be produced with lengths 1, 2, and 3:
false,              // element goes into first substream
true, false,        // elements go into second substream
true, false, false  // elements go into third substream
In case the *first* element of the stream matches the predicate, the first substream emitted by splitWhen will start from that element. For example:
true, false, false  // first substream starts from the split-by element
true, false         // subsequent substreams operate the same way
The object returned from this method is not a normal Source or Flow, it is a SubFlow. This means that after this operator all transformations are applied to all encountered substreams in the same fashion. Substream mode is exited either by closing the substream (i.e. connecting it to a Sink) or by merging the substreams back together; see the to and mergeBack methods on SubFlow for more information.
It is important to note that the substreams also propagate back-pressure as any other stream, which means that blocking one substream will block the splitWhen operator itself, and thereby all substreams, once all internal or explicit buffers are filled.
If the split predicate p throws an exception and the supervision decision is pekko.stream.Supervision.Stop, the stream and substreams will be completed with failure.
If the split predicate p throws an exception and the supervision decision is pekko.stream.Supervision.Resume or pekko.stream.Supervision.Restart, the element is dropped and the stream and substreams continue.
Emits when an element for which the provided predicate is true arrives, opening and emitting a new substream for subsequent elements
Backpressures when there is an element pending for the next substream, but the previous is not fully consumed yet, or the substream backpressures
Completes when upstream completes
Cancels when downstream cancels and substreams cancel on SubstreamCancelStrategy.drain, or when downstream cancels or any substream cancels on SubstreamCancelStrategy.propagate
See also FlowOps.splitAfter.
- Definition Classes
- FlowOps
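The grouping rule, including the leading-match case, can be modelled on a plain List; this sketches only where the substream boundaries fall, not the streaming operator itself:

```scala
// Sketch of splitWhen's grouping rule (not the streaming operator):
// an element for which p is true starts a new group, and a leading match
// simply begins the first group.
def splitWhenLike[T](xs: List[T])(p: T => Boolean): List[List[T]] = {
  val (done, cur) = xs.foldLeft((List.empty[List[T]], List.empty[T])) {
    case ((acc, open), x) =>
      if (p(x) && open.nonEmpty) (open.reverse :: acc, List(x)) // predicate true: begin a new substream
      else (acc, x :: open)
  }
  (cur.reverse :: done).reverse
}

val groups  = splitWhenLike(List(false, true, false, true, false, false))(identity)
// group sizes: 1, 2, 3
val leading = splitWhenLike(List(true, false, false, true, false))(identity)
// first substream starts from the split-by element: sizes 3, 2
```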
- def statefulMap[S, T](create: () => S)(f: (S, Out) => (S, T), onComplete: (S) => Option[T]): Repr[T]
Transform each stream element with the help of a state.
Transform each stream element with the help of a state.
The state creation function is invoked once when the stream is materialized and the returned state is passed to the mapping function for mapping the first element. The mapping function returns a mapped element to emit downstream and a state to pass to the next mapping function. The state can be the same for each invocation, a new immutable state, or a mutable state; all are safe to use. The returned T MUST NOT be null, as null is illegal as a stream element according to the Reactive Streams specification.
For the stateless variant see FlowOps.map.
The onComplete function is called only once, when the upstream or downstream finishes. You can do some clean-up here, and if the returned value is not empty it will be emitted downstream if possible, otherwise it will be dropped.
Adheres to the ActorAttributes.SupervisionStrategy attribute.
Emits when the mapping function returns an element and downstream is ready to consume it
Backpressures when downstream backpressures
Completes when upstream completes
Cancels when downstream cancels
- S
the type of the state
- T
the type of the output elements
- create
a function that creates the initial state
- f
a function that transforms the upstream element and the state into a pair of next state and output element
- onComplete
a function that transforms the ongoing state into an optional output element
- Definition Classes
- FlowOps
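The state threading and the final onComplete emission can be sketched on a plain List; the indexing function below is a hypothetical example, and this models the semantics only, not the Pekko operator:

```scala
// Sketch of statefulMap semantics on a List (not the streaming operator):
// the state is created once, threaded through f, and onComplete may emit
// one final element.
def statefulMapLike[S, In, T](xs: List[In])(create: () => S)(
    f: (S, In) => (S, T), onComplete: S => Option[T]): List[T] = {
  var state = create()
  val mapped = xs.map { x =>
    val (next, out) = f(state, x)
    state = next
    out
  }
  mapped ++ onComplete(state).toList
}

// Hypothetical usage: pair each element with its index, then emit the count.
val out = statefulMapLike(List("a", "b", "c"))(() => 0)(
  (i, x) => (i + 1, s"$x-$i"),
  count => Some(s"count=$count"))
// List("a-0", "b-1", "c-2", "count=3")
```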
- def statefulMapConcat[T](f: () => (Out) => IterableOnce[T]): Repr[T]
Transform each input element into an Iterable of output elements that is then flattened into the output stream.
Transform each input element into an Iterable of output elements that is then flattened into the output stream. The transformation is meant to be stateful, which is enabled by creating the transformation function anew for every materialization; the returned function will typically close over mutable objects to store state between invocations. For the stateless variant see FlowOps.mapConcat.
The returned Iterable MUST NOT contain null values, as they are illegal as stream elements according to the Reactive Streams specification.
This operator doesn't handle upstream's completion signal since the state kept in the closure can be lost. Use FlowOps.statefulMap instead.
Adheres to the ActorAttributes.SupervisionStrategy attribute.
Emits when the mapping function returns an element or there are still remaining elements from the previously calculated collection
Backpressures when downstream backpressures or there are still remaining elements from the previously calculated collection
Completes when upstream completes and all remaining elements have been emitted
Cancels when downstream cancels
See also FlowOps.mapConcat
- Definition Classes
- FlowOps
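The stateful-factory pattern described above can be sketched with plain Scala; the closure created per materialization carries the mutable state (a semantic analogue, not the streaming operator):

```scala
// Sketch of statefulMapConcat's factory pattern (not the streaming operator):
// the outer function is invoked once per materialization, and the returned
// closure keeps mutable state between elements.
def distinctFactory(): Int => List[Int] = {
  var seen = Set.empty[Int]
  x => if (seen(x)) Nil else { seen += x; List(x) }
}

val f   = distinctFactory()
val out = List(1, 1, 2, 1, 3).flatMap(f) // List(1, 2, 3)
```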
- final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef
- def take(n: Long): Repr[Out]
Terminate processing (and cancel the upstream publisher) after the given number of elements.
Terminate processing (and cancel the upstream publisher) after the given number of elements. Due to input buffering some elements may have been requested from upstream publishers that will then not be processed downstream of this step.
The stream will be completed without producing any elements if n is zero or negative.
Emits when the specified number of elements to take has not yet been reached
Backpressures when downstream backpressures
Completes when the defined number of elements has been taken or upstream completes
Cancels when the defined number of elements has been taken or downstream cancels
See also FlowOps.limit, FlowOps.limitWeighted
- Definition Classes
- FlowOps
- def takeWhile(p: (Out) => Boolean, inclusive: Boolean): Repr[Out]
Terminate processing (and cancel the upstream publisher) after the predicate returns false for the first time, including the first element for which the predicate failed if inclusive is true.
Terminate processing (and cancel the upstream publisher) after the predicate returns false for the first time, including the first element for which the predicate failed if inclusive is true. Due to input buffering some elements may have been requested from upstream publishers that will then not be processed downstream of this step.
The stream will be completed without producing any elements if predicate is false for the first stream element.
Adheres to the ActorAttributes.SupervisionStrategy attribute.
Emits when the predicate is true
Backpressures when downstream backpressures
Completes when the predicate returned false (or one element after the predicate returns false, if inclusive) or upstream completes
Cancels when the predicate returned false or downstream cancels
See also FlowOps.limit, FlowOps.limitWeighted
- Definition Classes
- FlowOps
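The difference the inclusive flag makes can be shown on a plain List; the helper below is an illustration of the semantics, not the Pekko API:

```scala
// Sketch of takeWhile with inclusive = true (not the streaming operator):
// the first element failing the predicate is also emitted.
def takeWhileInclusive[T](xs: List[T])(p: T => Boolean): List[T] = {
  val (ok, rest) = xs.span(p)
  ok ++ rest.headOption.toList
}

val exclusive = List(1, 2, 3, 4).takeWhile(_ < 3)           // List(1, 2)
val inclusive = takeWhileInclusive(List(1, 2, 3, 4))(_ < 3) // List(1, 2, 3)
```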
- def takeWhile(p: (Out) => Boolean): Repr[Out]
Terminate processing (and cancel the upstream publisher) after the predicate returns false for the first time.
Terminate processing (and cancel the upstream publisher) after the predicate returns false for the first time. Due to input buffering some elements may have been requested from upstream publishers that will then not be processed downstream of this step.
The stream will be completed without producing any elements if predicate is false for the first stream element.
Emits when the predicate is true
Backpressures when downstream backpressures
Completes when the predicate returned false or upstream completes
Cancels when the predicate returned false or downstream cancels
See also FlowOps.limit, FlowOps.limitWeighted
- Definition Classes
- FlowOps
- def takeWithin(d: FiniteDuration): Repr[Out]
Terminate processing (and cancel the upstream publisher) after the given duration.
Terminate processing (and cancel the upstream publisher) after the given duration. Due to input buffering some elements may have been requested from upstream publishers that will then not be processed downstream of this step.
Note that this can be combined with #take to limit the number of elements within the duration.
Emits when an upstream element arrives
Backpressures when downstream backpressures
Completes when upstream completes or timer fires
Cancels when downstream cancels or timer fires
- Definition Classes
- FlowOps
- def throttle(cost: Int, per: FiniteDuration, maximumBurst: Int, costCalculation: (Out) => Int, mode: ThrottleMode): Repr[Out]
Sends elements downstream with speed limited to cost/per.
Sends elements downstream with speed limited to cost/per. The cost is calculated for each element individually by calling the costCalculation function. This operator works for streams whose elements have different costs (lengths), for example streams of ByteString.
Throttle implements the token bucket model. There is a bucket with a given token capacity (burst size or maximumBurst). Tokens drop into the bucket at a given rate and can be spared for later use up to the bucket capacity, to allow some burstiness. Whenever the stream wants to send an element, it takes as many tokens from the bucket as the element costs. If there aren't enough, throttle waits until the bucket accumulates enough tokens. Elements that cost more than the allowed burst will be delayed proportionally to their cost minus available tokens, meeting the target rate. The bucket is full when the stream has just materialized and started.
The mode parameter manages behavior when upstream is faster than the throttle rate:
- pekko.stream.ThrottleMode.Shaping makes pauses before emitting messages to meet the throttle rate
- pekko.stream.ThrottleMode.Enforcing fails with an exception when upstream is faster than the throttle rate. Enforcing cannot emit elements that cost more than maximumBurst
It is recommended to use non-zero burst sizes as they improve both performance and throttling precision by allowing the implementation to avoid using the scheduler when input rates fall below the enforced limit, and to reduce most of the inaccuracy caused by the scheduler resolution (which is in the range of milliseconds).
WARNING: Be aware that throttle uses a scheduler to slow down the stream. This scheduler has a minimal time between triggering the next push; consequently it will slow down the stream, as it has a minimal pause between emits. This can happen when the burst is 0 and the speed is higher than 30 events per second. You need to increase the maximumBurst if elements arrive with small intervals (30 milliseconds or less). Use the overloaded throttle method without the maximumBurst parameter to automatically calculate the maximumBurst based on the given rate (cost/per). In other words, the throttler always enforces the rate limit when the maximumBurst parameter is given, but in certain cases (mostly due to limited scheduler resolution) it enforces a tighter bound than what was prescribed.
Emits when upstream emits an element and the configured time per element has elapsed
Backpressures when downstream backpressures or the incoming rate is higher than the speed limit
Completes when upstream completes
Cancels when downstream cancels
- Definition Classes
- FlowOps
- def throttle(cost: Int, per: FiniteDuration, costCalculation: (Out) => Int): Repr[Out]
Sends elements downstream with speed limited to cost/per.
Sends elements downstream with speed limited to cost/per. The cost is calculated for each element individually by calling the costCalculation function. This operator works for streams whose elements have different costs (lengths), for example streams of ByteString.
Throttle implements the token bucket model. There is a bucket with a given token capacity (burst size). Tokens drop into the bucket at a given rate and can be spared for later use up to the bucket capacity, to allow some burstiness. Whenever the stream wants to send an element, it takes as many tokens from the bucket as the element costs. If there aren't enough, throttle waits until the bucket accumulates enough tokens. Elements that cost more than the allowed burst will be delayed proportionally to their cost minus available tokens, meeting the target rate. The bucket is full when the stream has just materialized and started.
The burst size is calculated based on the given rate (cost/per) as 0.1 * rate, for example:
- rate < 20/second => burst size 1
- rate 20/second => burst size 2
- rate 100/second => burst size 10
- rate 200/second => burst size 20
The throttle mode is pekko.stream.ThrottleMode.Shaping, which makes pauses before emitting messages to meet the throttle rate.
Emits when upstream emits an element and the configured time per element has elapsed
Backpressures when downstream backpressures or the incoming rate is higher than the speed limit
Completes when upstream completes
Cancels when downstream cancels
- Definition Classes
- FlowOps
- def throttle(elements: Int, per: FiniteDuration, maximumBurst: Int, mode: ThrottleMode): Repr[Out]
Sends elements downstream with speed limited to elements/per.
Sends elements downstream with speed limited to elements/per. In other words, this operator sets the maximum rate for emitting messages. This operator works for streams where all elements have the same cost or length.
Throttle implements the token bucket model. There is a bucket with a given token capacity (burst size or maximumBurst). Tokens drop into the bucket at a given rate and can be spared for later use up to the bucket capacity, to allow some burstiness. Whenever the stream wants to send an element, it takes as many tokens from the bucket as the element costs. If there aren't enough, throttle waits until the bucket accumulates enough tokens. Elements that cost more than the allowed burst will be delayed proportionally to their cost minus available tokens, meeting the target rate. The bucket is full when the stream has just materialized and started.
The mode parameter manages behavior when upstream is faster than the throttle rate:
- pekko.stream.ThrottleMode.Shaping makes pauses before emitting messages to meet the throttle rate
- pekko.stream.ThrottleMode.Enforcing fails with an exception when upstream is faster than the throttle rate. Enforcing cannot emit elements that cost more than maximumBurst
It is recommended to use non-zero burst sizes as they improve both performance and throttling precision by allowing the implementation to avoid using the scheduler when input rates fall below the enforced limit, and to reduce most of the inaccuracy caused by the scheduler resolution (which is in the range of milliseconds).
WARNING: Be aware that throttle uses a scheduler to slow down the stream. This scheduler has a minimal time between triggering the next push; consequently it will slow down the stream, as it has a minimal pause between emits. This can happen when the burst is 0 and the speed is higher than 30 events per second. You need to increase the maximumBurst if elements arrive with small intervals (30 milliseconds or less). Use the overloaded throttle method without the maximumBurst parameter to automatically calculate the maximumBurst based on the given rate (elements/per). In other words, the throttler always enforces the rate limit when the maximumBurst parameter is given, but in certain cases (mostly due to limited scheduler resolution) it enforces a tighter bound than what was prescribed.
Emits when upstream emits an element and the configured time per element has elapsed
Backpressures when downstream backpressures or the incoming rate is higher than the speed limit
Completes when upstream completes
Cancels when downstream cancels
- Definition Classes
- FlowOps
- def throttle(elements: Int, per: FiniteDuration): Repr[Out]
Sends elements downstream with speed limited to elements/per.
Sends elements downstream with speed limited to elements/per. In other words, this operator sets the maximum rate for emitting messages. This operator works for streams where all elements have the same cost or length.
Throttle implements the token bucket model. There is a bucket with a given token capacity (burst size). Tokens drop into the bucket at a given rate and can be spared for later use up to the bucket capacity, to allow some burstiness. Whenever the stream wants to send an element, it takes as many tokens from the bucket as the element costs. If there aren't enough, throttle waits until the bucket accumulates enough tokens. Elements that cost more than the allowed burst will be delayed proportionally to their cost minus available tokens, meeting the target rate. The bucket is full when the stream has just materialized and started.
The burst size is calculated based on the given rate (elements/per) as 0.1 * rate, for example:
- rate < 20/second => burst size 1
- rate 20/second => burst size 2
- rate 100/second => burst size 10
- rate 200/second => burst size 20
The throttle mode is pekko.stream.ThrottleMode.Shaping, which makes pauses before emitting messages to meet the throttle rate.
Emits when upstream emits an element and the configured time per element has elapsed
Backpressures when downstream backpressures or the incoming rate is higher than the speed limit
Completes when upstream completes
Cancels when downstream cancels
- Definition Classes
- FlowOps
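The documented default burst-size rule (0.1 * rate, never less than one token) amounts to simple arithmetic; the helper below is a hypothetical illustration of that rule, not the Pekko implementation:

```scala
// Hypothetical helper mirroring the documented default burst-size rule
// (0.1 * rate, with a minimum of 1). An illustration of the stated
// arithmetic only, not the Pekko implementation.
def defaultBurst(ratePerSecond: Int): Int = math.max(1, ratePerSecond / 10)

// defaultBurst(19)  == 1   (rate < 20/second => burst size 1)
// defaultBurst(20)  == 2
// defaultBurst(100) == 10
// defaultBurst(200) == 20
```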
- def toString(): String
- Definition Classes
- AnyRef → Any
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- def watch(ref: ActorRef): Repr[Out]
The operator fails with an pekko.stream.WatchedActorTerminatedException if the target actor is terminated.
The operator fails with an pekko.stream.WatchedActorTerminatedException if the target actor is terminated.
Emits when upstream emits
Backpressures when downstream backpressures
Completes when upstream completes
Fails when the watched actor terminates
Cancels when downstream cancels
- Definition Classes
- FlowOps
- def watchTermination[Mat2]()(matF: (Mat, Future[Done]) => Mat2): ReprMat[Out, Mat2]
Materializes to Future[Done] that completes on getting termination message.
Materializes to Future[Done] that completes on getting termination message. The Future completes with success when a complete message is received from upstream or a cancel from downstream. It fails with the propagated error when an error message is received from upstream or downstream.
It is recommended to use the internally optimized Keep.left and Keep.right combiners where appropriate instead of manually writing functions that pass through one of the values.
- def wireTap(that: Graph[SinkShape[Out], _]): Repr[Out]
Attaches the given Sink to this Flow as a wire tap, meaning that elements that pass through will also be sent to the wire-tap Sink, without the latter affecting the mainline flow.
Attaches the given Sink to this Flow as a wire tap, meaning that elements that pass through will also be sent to the wire-tap Sink, without the latter affecting the mainline flow. If the wire-tap Sink backpressures, elements that would've been sent to it will be dropped instead.
It is similar to #alsoTo, which backpressures instead of dropping elements.
Emits when element is available and demand exists from the downstream; the element will also be sent to the wire-tap Sink if there is demand.
Backpressures when downstream backpressures
Completes when upstream completes
Cancels when downstream cancels
- Definition Classes
- FlowOps
- def wireTap(f: (Out) => Unit): Repr[Out]
This is a simplified version of wireTap(Sink) that takes only a simple function.
This is a simplified version of wireTap(Sink) that takes only a simple function. Elements will be passed into this "side channel" function, and any of its results will be ignored.
If the wire-tap operation is slow (it backpressures), elements that would've been sent to it will be dropped instead. It is similar to #alsoTo, which backpressures instead of dropping elements.
This operation is useful for inspecting the passed-through element, usually by means of side-effecting operations (such as println, or emitting metrics), for each element without having to modify it.
For logging signals (elements, completion, error) consider using the log operator instead, along with appropriate ActorAttributes.logLevels.
Emits when upstream emits an element; the same element will be passed to the attached function, as well as to the downstream operator
Backpressures when downstream backpressures
Completes when upstream completes
Cancels when downstream cancels
- Definition Classes
- FlowOps
- def wireTapGraph[M](that: Graph[SinkShape[Out], M]): Graph[FlowShape[Out, Out], M]
- Attributes
- protected
- Definition Classes
- FlowOps
- def wireTapMat[Mat2, Mat3](that: Graph[SinkShape[Out], Mat2])(matF: (Mat, Mat2) => Mat3): ReprMat[Out, Mat3]
Attaches the given Sink to this Flow as a wire tap, meaning that elements that pass through will also be sent to the wire-tap Sink, without the latter affecting the mainline flow.
Attaches the given Sink to this Flow as a wire tap, meaning that elements that pass through will also be sent to the wire-tap Sink, without the latter affecting the mainline flow. If the wire-tap Sink backpressures, elements that would've been sent to it will be dropped instead.
It is similar to #alsoToMat, which backpressures instead of dropping elements.
- See also
#wireTap
It is recommended to use the internally optimized Keep.left and Keep.right combiners where appropriate instead of manually writing functions that pass through one of the values.
- def zip[U](that: Graph[SourceShape[U], _]): Repr[(Out, U)]
Combine the elements of current flow and the given Source into a stream of tuples.
- def zipAll[U, A >: Out](that: Graph[SourceShape[U], _], thisElem: A, thatElem: U): Repr[(A, U)]
Combine the elements of current flow and the given Source into a stream of tuples.
Combine the elements of current flow and the given Source into a stream of tuples.
Emits when, at first, both inputs emit, and then as long as any input emits (coupled to the default value of the completed input)
Backpressures when downstream backpressures
Completes when all upstreams complete
Cancels when downstream cancels
- Definition Classes
- FlowOps
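Scala's collection zipAll has the same pairing semantics (an analogue, not the streaming operator): once one side runs out, its configured default is used:

```scala
// zipAll on collections: the shorter side is padded with its default
// element once it is exhausted.
val zipped = List(1, 2, 3).zipAll(List("a", "b"), 0, "z")
// List((1, "a"), (2, "b"), (3, "z"))
```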
- def zipAllFlow[U, A >: Out, Mat2](that: Graph[SourceShape[U], Mat2], thisElem: A, thatElem: U): Flow[Out, (A, U), Mat2]
- Attributes
- protected
- Definition Classes
- FlowOps
- def zipAllMat[U, Mat2, Mat3, A >: Out](that: Graph[SourceShape[U], Mat2], thisElem: A, thatElem: U)(matF: (Mat, Mat2) => Mat3): ReprMat[(A, U), Mat3]
Combine the elements of current flow and the given Source into a stream of tuples.
Combine the elements of current flow and the given Source into a stream of tuples.
- See also
#zipAll
Emits when, at first, both inputs emit, and then as long as any input emits (coupled to the default value of the completed input)
Backpressures when downstream backpressures
Completes when all upstreams complete
Cancels when downstream cancels
- def zipGraph[U, M](that: Graph[SourceShape[U], M]): Graph[FlowShape[Out, (Out, U)], M]
- Attributes
- protected
- Definition Classes
- FlowOps
- def zipLatest[U](that: Graph[SourceShape[U], _]): Repr[(Out, U)]
Combine the elements of 2 streams into a stream of tuples, picking always the latest element of each.
Combine the elements of 2 streams into a stream of tuples, picking always the latest element of each.
A ZipLatest has a left and a right input port and one out port.
No element is emitted until at least one element from each Source becomes available.
Emits when all of the inputs have at least an element available, and then each time an element becomes available on either of the inputs
Backpressures when downstream backpressures
Completes when any upstream completes
Cancels when downstream cancels
- Definition Classes
- FlowOps
- def zipLatestGraph[U, M](that: Graph[SourceShape[U], M]): Graph[FlowShape[Out, (Out, U)], M]
- Attributes
- protected
- Definition Classes
- FlowOps
- def zipLatestMat[U, Mat2, Mat3](that: Graph[SourceShape[U], Mat2])(matF: (Mat, Mat2) => Mat3): ReprMat[(Out, U), Mat3]
Combine the elements of current flow and the given Source into a stream of tuples, picking always the latest of the elements of each source.
Combine the elements of current flow and the given Source into a stream of tuples, picking always the latest of the elements of each source.
- See also
#zipLatest
It is recommended to use the internally optimized Keep.left and Keep.right combiners where appropriate instead of manually writing functions that pass through one of the values.
- def zipLatestWith[Out2, Out3](that: Graph[SourceShape[Out2], _], eagerComplete: Boolean)(combine: (Out, Out2) => Out3): Repr[Out3]
Combine the elements of multiple streams into a stream of combined elements using a combiner function, picking always the latest of the elements of each source.
Combine the elements of multiple streams into a stream of combined elements using a combiner function, picking always the latest of the elements of each source.
No element is emitted until at least one element from each Source becomes available. Whenever a new element appears, the zipping function is invoked with a tuple containing the new element and the other last seen elements.
Emits when all of the inputs have at least an element available, and then each time an element becomes available on either of the inputs
Backpressures when downstream backpressures
Completes when any upstream completes if eagerComplete is enabled, otherwise when all upstreams complete
Cancels when downstream cancels
- Definition Classes
- FlowOps
- def zipLatestWith[Out2, Out3](that: Graph[SourceShape[Out2], _])(combine: (Out, Out2) => Out3): Repr[Out3]
Combine the elements of multiple streams into a stream of combined elements using a combiner function, picking always the latest of the elements of each source.
Combine the elements of multiple streams into a stream of combined elements using a combiner function, picking always the latest of the elements of each source.
No element is emitted until at least one element from each Source becomes available. Whenever a new element appears, the zipping function is invoked with a tuple containing the new element and the other last seen elements.
Emits when all of the inputs have at least an element available, and then each time an element becomes available on either of the inputs
Backpressures when downstream backpressures
Completes when any of the upstreams completes
Cancels when downstream cancels
- Definition Classes
- FlowOps
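A sketch of the default (non-eager) variant, tagging a fast stream with the latest value of a slow side stream (the tick interval and names are example values, not from this page):

```scala
import scala.concurrent.duration._
import org.apache.pekko.NotUsed
import org.apache.pekko.stream.scaladsl.{ Flow, Source }

// A slow side stream does not rate-limit the fast main stream:
// its last seen element is reused until a newer one arrives.
val versions: Source[Long, _] =
  Source.tick(0.seconds, 1.second, ()).scan(0L)((n, _) => n + 1)

val tagged: Flow[String, (String, Long), NotUsed] =
  Flow[String].zipLatestWith(versions)((s, v) => (s, v))
```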
- def zipLatestWithGraph[Out2, Out3, M](that: Graph[SourceShape[Out2], M], eagerComplete: Boolean)(combine: (Out, Out2) => Out3): Graph[FlowShape[Out, Out3], M]
- Attributes
- protected
- Definition Classes
- FlowOps
- def zipLatestWithGraph[Out2, Out3, M](that: Graph[SourceShape[Out2], M])(combine: (Out, Out2) => Out3): Graph[FlowShape[Out, Out3], M]
- Attributes
- protected
- Definition Classes
- FlowOps
- def zipLatestWithMat[Out2, Out3, Mat2, Mat3](that: Graph[SourceShape[Out2], Mat2], eagerComplete: Boolean)(combine: (Out, Out2) => Out3)(matF: (Mat, Mat2) => Mat3): ReprMat[Out3, Mat3]
Put together the elements of current flow and the given Source into a stream of combined elements using a combiner function, picking always the latest of the elements of each source.
Put together the elements of current flow and the given Source into a stream of combined elements using a combiner function, picking always the latest of the elements of each source.
- See also
#zipLatestWith. It is recommended to use the internally optimized Keep.left and Keep.right combiners where appropriate instead of manually writing functions that pass through one of the values.
- def zipLatestWithMat[Out2, Out3, Mat2, Mat3](that: Graph[SourceShape[Out2], Mat2])(combine: (Out, Out2) => Out3)(matF: (Mat, Mat2) => Mat3): ReprMat[Out3, Mat3]
Put together the elements of current flow and the given Source into a stream of combined elements using a combiner function, picking always the latest of the elements of each source.
Put together the elements of current flow and the given Source into a stream of combined elements using a combiner function, picking always the latest of the elements of each source.
- See also
#zipLatestWith. It is recommended to use the internally optimized Keep.left and Keep.right combiners where appropriate instead of manually writing functions that pass through one of the values.
- def zipMat[U, Mat2, Mat3](that: Graph[SourceShape[U], Mat2])(matF: (Mat, Mat2) => Mat3): ReprMat[(Out, U), Mat3]
Combine the elements of current flow and the given Source into a stream of tuples.
- def zipWith[Out2, Out3](that: Graph[SourceShape[Out2], _])(combine: (Out, Out2) => Out3): Repr[Out3]
Put together the elements of current flow and the given Source into a stream of combined elements using a combiner function.
Put together the elements of current flow and the given Source into a stream of combined elements using a combiner function.
Emits when all of the inputs have an element available
Backpressures when downstream backpressures
Completes when any upstream completes
Cancels when downstream cancels
- Definition Classes
- FlowOps
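A small sketch of pairwise zipping (the weights source is invented for illustration):

```scala
import org.apache.pekko.NotUsed
import org.apache.pekko.stream.scaladsl.{ Flow, Source }

// Hypothetical side source of weights.
val weights: Source[Double, NotUsed] = Source(List(0.5, 0.25, 0.25))

// Unlike zipLatestWith, this emits only when both inputs have a new
// element, so the faster side is backpressured to the slower one.
val weighted: Flow[Double, Double, NotUsed] =
  Flow[Double].zipWith(weights)(_ * _)
```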
- def zipWithGraph[Out2, Out3, M](that: Graph[SourceShape[Out2], M])(combine: (Out, Out2) => Out3): Graph[FlowShape[Out, Out3], M]
- Attributes
- protected
- Definition Classes
- FlowOps
- def zipWithIndex: Repr[(Out, Long)]
Combine the elements of current flow into a stream of tuples consisting of all elements paired with their index.
Combine the elements of current flow into a stream of tuples consisting of all elements paired with their index. Indices start at 0.
Emits when upstream emits an element, which is paired with its index
Backpressures when downstream backpressures
Completes when upstream completes
Cancels when downstream cancels
- Definition Classes
- FlowOps
- Annotations
- @nowarn()
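A minimal sketch of zipWithIndex on a source (the element values are illustrative):

```scala
import org.apache.pekko.NotUsed
import org.apache.pekko.stream.scaladsl.Source

// Each element is paired with its zero-based position as a Long.
// When run, this emits ("a", 0L), ("b", 1L), ("c", 2L).
val numbered: Source[(String, Long), NotUsed] =
  Source(List("a", "b", "c")).zipWithIndex
```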
- def zipWithMat[Out2, Out3, Mat2, Mat3](that: Graph[SourceShape[Out2], Mat2])(combine: (Out, Out2) => Out3)(matF: (Mat, Mat2) => Mat3): ReprMat[Out3, Mat3]
Put together the elements of current flow and the given Source into a stream of combined elements using a combiner function.
Put together the elements of current flow and the given Source into a stream of combined elements using a combiner function.
- See also
#zipWith. It is recommended to use the internally optimized Keep.left and Keep.right combiners where appropriate instead of manually writing functions that pass through one of the values.
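A sketch of zipWithMat with a Keep combiner (the side source and the addition are invented for illustration):

```scala
import org.apache.pekko.NotUsed
import org.apache.pekko.stream.scaladsl.{ Flow, Keep, Source }

// Hypothetical side source.
val side: Source[Int, NotUsed] = Source(List(10, 20, 30))

// Same element-wise behaviour as zipWith(side)(_ + _), but with
// explicit control over the materialized value via Keep.left.
val summed: Flow[Int, Int, NotUsed] =
  Flow[Int].zipWithMat(side)(_ + _)(Keep.left)
```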
Deprecated Value Members
- def finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.Throwable]) @Deprecated
- Deprecated
(Since version 9)
- def formatted(fmtstr: String): String
- Implicit
- This member is added by an implicit conversion from FlowOpsMat[Out, Mat] to StringFormat[FlowOpsMat[Out, Mat]] performed by method StringFormat in scala.Predef.
- Definition Classes
- StringFormat
- Annotations
- @deprecated @inline()
- Deprecated
(Since version 2.12.16) Use formatString.format(value) instead of value.formatted(formatString), or use the f"" string interpolator. In Java 15 and later, formatted resolves to the new method in String which has reversed parameters.
- def monitor[Mat2]()(combine: (Mat, FlowMonitor[Out]) => Mat2): ReprMat[Out, Mat2]
Materializes to a FlowMonitor[Out] that allows monitoring of the current flow.
Materializes to a FlowMonitor[Out] that allows monitoring of the current flow. All events are propagated by the monitor unchanged. Note that the monitor inserts a memory barrier every time it processes an event, and may therefore affect performance.
The combine function is used to combine the FlowMonitor with this flow's materialized value.
- Annotations
- @deprecated
- Deprecated
(Since version Akka 2.5.17) Use monitor() or monitorMat(combine) instead
- def throttleEven(cost: Int, per: FiniteDuration, costCalculation: (Out) => Int, mode: ThrottleMode): Repr[Out]
This is a simplified version of throttle that spreads events evenly across the given time interval.
This is a simplified version of throttle that spreads events evenly across the given time interval.
Use this operator when you just need to slow down a stream without worrying about the exact amount of time between events.
If you need to ensure that no time interval contains more than the specified number of events, use throttle with the maximumBurst attribute.
- def throttleEven(elements: Int, per: FiniteDuration, mode: ThrottleMode): Repr[Out]
This is a simplified version of throttle that spreads events evenly across the given time interval.
This is a simplified version of throttle that spreads events evenly across the given time interval. throttleEven uses a best-effort approach to meet the throttle rate.
Use this operator when you just need to slow down a stream without worrying about the exact amount of time between events.
If you need to ensure that no time interval contains more than the specified number of events, use throttle with the maximumBurst attribute.
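Since throttleEven is deprecated, a rough equivalent with plain throttle might look like this (the rate of 10 elements per second and maximumBurst = 1 are illustrative values, not exact replacements taken from this page):

```scala
import scala.concurrent.duration._
import org.apache.pekko.NotUsed
import org.apache.pekko.stream.ThrottleMode
import org.apache.pekko.stream.scaladsl.Flow

// A low maximumBurst spreads elements fairly evenly across the
// interval instead of allowing an initial burst.
val slowed: Flow[Int, Int, NotUsed] =
  Flow[Int].throttle(10, 1.second, maximumBurst = 1, ThrottleMode.Shaping)
```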
- def →[B](y: B): (FlowOpsMat[Out, Mat], B)
- Implicit
- This member is added by an implicit conversion from FlowOpsMat[Out, Mat] to ArrowAssoc[FlowOpsMat[Out, Mat]] performed by method ArrowAssoc in scala.Predef.
- Definition Classes
- ArrowAssoc
- Annotations
- @deprecated
- Deprecated
(Since version 2.13.0) Use -> instead. If you still wish to display it as one character, consider using a font with programming ligatures such as Fira Code.