object Consumer

Apache Pekko Stream connector for subscribing to Kafka topics.

Source
Consumer.scala
Linear Supertypes
AnyRef, Any

Type Members

  1. trait Control extends AnyRef

    Materialized value of the consumer Source.

    See Controlled shutdown

  2. final class DrainingControl[T] extends Control

    Combines a Control and a stream completion signal into one materialized value, so that the stream can be stopped in a controlled way without losing commits.

    See Controlled shutdown
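
    A minimal sketch of a controlled shutdown using DrainingControl, assuming a broker at localhost:9092 and a topic named topic1; the settings, group id and processing step are illustrative placeholders, not part of this API. The later sketches on this page reuse the consumerSettings, implicit system and dispatcher defined here.

      import scala.concurrent.Future
      import org.apache.kafka.common.serialization.StringDeserializer
      import org.apache.pekko.Done
      import org.apache.pekko.actor.ActorSystem
      import org.apache.pekko.kafka.{ CommitterSettings, ConsumerSettings, Subscriptions }
      import org.apache.pekko.kafka.scaladsl.{ Committer, Consumer }
      import org.apache.pekko.kafka.scaladsl.Consumer.DrainingControl

      implicit val system: ActorSystem = ActorSystem("consumer-sample")
      import system.dispatcher

      // Illustrative settings; adjust bootstrap servers and group id to your environment.
      val consumerSettings =
        ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
          .withBootstrapServers("localhost:9092")
          .withGroupId("group1")

      val control: DrainingControl[Done] =
        Consumer
          .committableSource(consumerSettings, Subscriptions.topics("topic1"))
          .mapAsync(1)(msg => Future.successful(msg.committableOffset)) // replace with real processing
          .toMat(Committer.sink(CommitterSettings(system)))(DrainingControl.apply)
          .run()

      // At shutdown: stop polling, wait for outstanding commits, then complete the stream.
      control.drainAndShutdown()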

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##: Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def atMostOnceSource[K, V](settings: ConsumerSettings[K, V], subscription: Subscription): Source[ConsumerRecord[K, V], Control]

    Convenience for "at-most once delivery" semantics. The offset of each message is committed to Kafka before being emitted downstream.
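
    A sketch of at-most-once consumption, assuming the consumerSettings, implicit system and imports from the DrainingControl sketch above; the println stands in for real processing.

      import org.apache.pekko.stream.scaladsl.Sink

      val atMostOnceControl =
        Consumer
          .atMostOnceSource(consumerSettings, Subscriptions.topics("topic1"))
          .map(record => println(s"offset committed before this line runs: ${record.value}"))
          .toMat(Sink.ignore)(DrainingControl.apply)
          .run()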

  6. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.CloneNotSupportedException]) @native()
  7. def commitWithMetadataPartitionedSource[K, V](settings: ConsumerSettings[K, V], subscription: AutoSubscription, metadataFromRecord: (ConsumerRecord[K, V]) => String): Source[(TopicPartition, Source[CommittableMessage[K, V], NotUsed]), Control]

    The same as #plainPartitionedSource but with offset commit with metadata support.

  8. def commitWithMetadataSource[K, V](settings: ConsumerSettings[K, V], subscription: Subscription, metadataFromRecord: (ConsumerRecord[K, V]) => String): Source[CommittableMessage[K, V], Control]

    The commitWithMetadataSource makes it possible to add additional metadata (in the form of a string) when an offset is committed based on the record. This can be useful (for example) to store information about which node made the commit, what time the commit was made, the timestamp of the record etc.
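
    A sketch of attaching commit metadata, assuming the consumerSettings, implicit system and imports from the DrainingControl sketch above; the hostname-plus-record-timestamp string is just one possible metadata choice.

      val metadataControl =
        Consumer
          .commitWithMetadataSource(
            consumerSettings,
            Subscriptions.topics("topic1"),
            record => s"${java.net.InetAddress.getLocalHost.getHostName} @ ${record.timestamp}")
          .mapAsync(1)(msg => Future.successful(msg.committableOffset)) // replace with real processing
          .toMat(Committer.sink(CommitterSettings(system)))(DrainingControl.apply)
          .run()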

  9. def committableExternalSource[K, V](consumer: ActorRef, subscription: ManualSubscription, groupId: String, commitTimeout: FiniteDuration): Source[CommittableMessage[K, V], Control]

    The same as #plainExternalSource but with offset commit support.

  10. def committablePartitionedManualOffsetSource[K, V](settings: ConsumerSettings[K, V], subscription: AutoSubscription, getOffsetsOnAssign: (Set[TopicPartition]) => Future[Map[TopicPartition, Long]], onRevoke: (Set[TopicPartition]) => Unit = _ => ()): Source[(TopicPartition, Source[CommittableMessage[K, V], NotUsed]), Control]

    The same as #plainPartitionedManualOffsetSource but with offset commit support.

  11. def committablePartitionedSource[K, V](settings: ConsumerSettings[K, V], subscription: AutoSubscription): Source[(TopicPartition, Source[CommittableMessage[K, V], NotUsed]), Control]

    The same as #plainPartitionedSource but with offset commit support.
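
    A sketch of committing per partition, assuming the consumerSettings, implicit system, dispatcher and imports from the sketches above; the parallelism bound and pass-through processing are placeholders.

      import org.apache.pekko.stream.scaladsl.Sink

      val partitionedControl =
        Consumer
          .committablePartitionedSource(consumerSettings, Subscriptions.topics("topic1"))
          .mapAsyncUnordered(16) { case (_, partitionSource) =>
            partitionSource
              .mapAsync(1)(msg => Future.successful(msg.committableOffset)) // replace with real processing
              .runWith(Committer.sink(CommitterSettings(system)))
          }
          .toMat(Sink.ignore)(DrainingControl.apply)
          .run()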

  12. def committableSource[K, V](settings: ConsumerSettings[K, V], subscription: Subscription): Source[CommittableMessage[K, V], Control]

    The committableSource makes it possible to commit offset positions to Kafka. This is useful when "at-least once delivery" is desired, as each message will likely be delivered one time but in failure cases could be duplicated.

    If you commit the offset before processing the message you get "at-most once delivery" semantics, and for that there is a #atMostOnceSource.

    Compared to auto-commit, this gives exact control over when a message is considered consumed.

    If you need to store offsets in anything other than Kafka, #plainSource should be used instead of this API.
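
    A sketch of at-least-once processing where the offset is only committed after a hypothetical business step has completed, assuming the consumerSettings, implicit system, dispatcher and imports from the DrainingControl sketch above.

      // Hypothetical processing step; completes when the message has been handled.
      def business(key: String, value: String): Future[Done] = Future.successful(Done)

      val committableControl =
        Consumer
          .committableSource(consumerSettings, Subscriptions.topics("topic1"))
          .mapAsync(1)(msg => business(msg.record.key, msg.record.value).map(_ => msg.committableOffset))
          .toMat(Committer.sink(CommitterSettings(system)))(DrainingControl.apply)
          .run()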

  13. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  14. def equals(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef → Any
  15. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.Throwable])
  16. final def getClass(): Class[_ <: AnyRef]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  17. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  18. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  19. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  20. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  21. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  22. def plainExternalSource[K, V](consumer: ActorRef, subscription: ManualSubscription): Source[ConsumerRecord[K, V], Control]

    Special source that can use an external KafkaAsyncConsumer. This is useful when you have a lot of manually assigned topic-partitions and want to keep only one Kafka consumer.
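
    A sketch of sharing one consumer actor between several manually assigned sources, assuming the consumerSettings, implicit system and imports from the sketches above; the topic name and partition numbers are illustrative.

      import org.apache.kafka.common.TopicPartition
      import org.apache.pekko.kafka.KafkaConsumerActor
      import org.apache.pekko.stream.scaladsl.Sink

      // One consumer actor backing several externally managed sources.
      val consumerActor = system.actorOf(KafkaConsumerActor.props(consumerSettings))

      val partition0Control =
        Consumer
          .plainExternalSource[String, String](consumerActor, Subscriptions.assignment(new TopicPartition("topic1", 0)))
          .toMat(Sink.foreach(record => println(record.value)))(DrainingControl.apply)
          .run()

      val partition1Control =
        Consumer
          .plainExternalSource[String, String](consumerActor, Subscriptions.assignment(new TopicPartition("topic1", 1)))
          .toMat(Sink.foreach(record => println(record.value)))(DrainingControl.apply)
          .run()

      // Remember to stop the shared consumer actor once all sources using it are done.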

  23. def plainPartitionedManualOffsetSource[K, V](settings: ConsumerSettings[K, V], subscription: AutoSubscription, getOffsetsOnAssign: (Set[TopicPartition]) => Future[Map[TopicPartition, Long]], onRevoke: (Set[TopicPartition]) => Unit = _ => ()): Source[(TopicPartition, Source[ConsumerRecord[K, V], NotUsed]), Control]

    The plainPartitionedManualOffsetSource is similar to #plainPartitionedSource but allows the use of an offset store outside of Kafka, while retaining the automatic partition assignment. When a topic-partition is assigned to a consumer, the getOffsetsOnAssign function will be called to retrieve the offset, followed by a seek to the correct spot in the partition.

    The onRevoke function gives the consumer a chance to store any uncommitted offsets and do any other cleanup that is required. It also gives the user access to the onPartitionsRevoked hook, which is useful for cleaning up any partition-specific resources being used by the consumer.
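
    A sketch with a hypothetical external offset store (offsetStore and its methods are not part of this API), assuming the consumerSettings, implicit system and imports from the sketches above.

      import org.apache.kafka.common.TopicPartition
      import org.apache.pekko.stream.scaladsl.Sink

      // Hypothetical external offset store.
      object offsetStore {
        def loadOffsets(tps: Set[TopicPartition]): Future[Map[TopicPartition, Long]] =
          Future.successful(Map.empty) // look up the last stored offset for each assigned partition
        def onRevoked(tps: Set[TopicPartition]): Unit = () // flush any in-flight offsets
      }

      val manualOffsetControl =
        Consumer
          .plainPartitionedManualOffsetSource(
            consumerSettings,
            Subscriptions.topics("topic1"),
            getOffsetsOnAssign = tps => offsetStore.loadOffsets(tps),
            onRevoke = tps => offsetStore.onRevoked(tps))
          .flatMapMerge(16, { case (_, source) => source })
          .toMat(Sink.foreach(record => println(record.value)))(DrainingControl.apply)
          .run()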

  24. def plainPartitionedSource[K, V](settings: ConsumerSettings[K, V], subscription: AutoSubscription): Source[(TopicPartition, Source[ConsumerRecord[K, V], NotUsed]), Control]

    The plainPartitionedSource is a way to track automatic partition assignment from Kafka. When a topic-partition is assigned to a consumer, this source will emit tuples with the assigned topic-partition and a corresponding source of ConsumerRecords. When a topic-partition is revoked, the corresponding source completes.
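
    A sketch of handling each assigned partition as a sub-source, assuming the consumerSettings, implicit system, dispatcher and imports from the sketches above; counting records is a placeholder for real processing.

      import org.apache.pekko.stream.scaladsl.Sink

      val perPartitionControl =
        Consumer
          .plainPartitionedSource(consumerSettings, Subscriptions.topics("topic1"))
          .mapAsyncUnordered(16) { case (topicPartition, source) =>
            // Each sub-source completes when its topic-partition is revoked.
            source
              .runWith(Sink.fold(0L)((count, _) => count + 1))
              .map(count => s"$topicPartition saw $count records")
          }
          .toMat(Sink.foreach(println))(DrainingControl.apply)
          .run()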

  25. def plainSource[K, V](settings: ConsumerSettings[K, V], subscription: Subscription): Source[ConsumerRecord[K, V], Control]

    The plainSource emits ConsumerRecord elements (as received from the underlying KafkaConsumer). It has no support for committing offsets to Kafka. It can be used when the offset is stored externally or with auto-commit (note that auto-commit is by default disabled).

    The consumer application doesn't need to use Kafka's built-in offset storage and can store offsets in a store of its own choosing. The primary use case for this is allowing the application to store both the offset and the results of the consumption in the same system, so that both the results and the offsets are stored atomically. This is not always possible, but when it is, it will make the consumption fully atomic and give "exactly once" semantics that are stronger than the "at-least once" semantics you get with Kafka's offset commit functionality.
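
    A sketch of resuming from an externally stored offset, assuming the consumerSettings, implicit system and imports from the sketches above; readOffsetFromStore, the topic name and the partition are hypothetical placeholders.

      import org.apache.kafka.common.TopicPartition
      import org.apache.pekko.stream.scaladsl.Sink

      // Hypothetical lookup of the offset last stored alongside the processing results.
      def readOffsetFromStore(): Long = 0L

      val plainControl =
        Consumer
          .plainSource(
            consumerSettings,
            Subscriptions.assignmentWithOffset(new TopicPartition("topic1", 0), readOffsetFromStore()))
          .map(record => record.value) // store the result and the new offset together in your own store
          .toMat(Sink.ignore)(DrainingControl.apply)
          .run()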

  26. def sourceWithOffsetContext[K, V](settings: ConsumerSettings[K, V], subscription: Subscription, metadataFromRecord: (ConsumerRecord[K, V]) => String): SourceWithContext[ConsumerRecord[K, V], CommittableOffset, Control]

    API MAY CHANGE

    This source emits ConsumerRecord together with the offset position as flow context, thus making it possible to commit offset positions to Kafka. This is useful when "at-least once delivery" is desired, as each message will likely be delivered one time but in failure cases could be duplicated.

    It is intended to be used with Apache Pekko's [flow with context](https://pekko.apache.org/docs/pekko/current/stream/operators/Flow/asFlowWithContext.html), Producer.flowWithContext and/or Committer.sinkWithOffsetContext.

    This variant makes it possible to add additional metadata (in the form of a string) when an offset is committed based on the record. This can be useful (for example) to store information about which node made the commit, what time the commit was made, the timestamp of the record etc.

    Annotations
    @ApiMayChange()
  27. def sourceWithOffsetContext[K, V](settings: ConsumerSettings[K, V], subscription: Subscription): SourceWithContext[ConsumerRecord[K, V], CommittableOffset, Control]

    API MAY CHANGE

    This source emits ConsumerRecord together with the offset position as flow context, thus making it possible to commit offset positions to Kafka. This is useful when "at-least once delivery" is desired, as each message will likely be delivered one time but in failure cases could be duplicated.

    It is intended to be used with Apache Pekko's [flow with context](https://pekko.apache.org/docs/pekko/current/stream/operators/Flow/asFlowWithContext.html), Producer.flowWithContext and/or Committer.sinkWithOffsetContext.
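
    A sketch of the offset-as-context variant combined with Committer.sinkWithOffsetContext, assuming the consumerSettings, implicit system, dispatcher and the hypothetical business function from the committableSource sketch above.

      val contextControl =
        Consumer
          .sourceWithOffsetContext(consumerSettings, Subscriptions.topics("topic1"))
          .mapAsync(1)(record => business(record.key, record.value))
          .toMat(Committer.sinkWithOffsetContext(CommitterSettings(system)))(DrainingControl.apply)
          .run()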

    Annotations
    @ApiMayChange()
  28. final def synchronized[T0](arg0: => T0): T0
    Definition Classes
    AnyRef
  29. def toString(): String
    Definition Classes
    AnyRef → Any
  30. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  31. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  32. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException]) @native()
  33. object DrainingControl
  34. object NoopControl extends Control

    An implementation of Control to be used as an empty value; all methods return a failed future.
