object Producer
Apache Pekko Stream connector for publishing messages to Kafka topics.
- Source: Producer.scala
- Linear Supertypes: AnyRef, Any
Value Members
- def committableSink[K, V](producerSettings: ProducerSettings[K, V], committerSettings: CommitterSettings): Sink[Envelope[K, V, Committable], Future[Done]]
Create a sink that is aware of the committable offset from a Consumer.committableSource. The offsets are batched and committed regularly.
It publishes records to Kafka topics conditionally:
- Message publishes a single message to its topic, and commits the offset
- MultiMessage publishes all messages in its records field, and commits the offset
- PassThroughMessage does not publish anything, but commits the offset
Note that there is a risk that something fails after publishing but before committing, so this sink provides "at-least-once delivery" semantics.
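For illustration, a minimal sketch of an at-least-once consume-transform-produce pipeline ending in this sink. The broker address, group id, and topic names are placeholder assumptions, not part of this API:
```scala
import org.apache.pekko.Done
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.kafka.{ CommitterSettings, ConsumerSettings, ProducerMessage, ProducerSettings, Subscriptions }
import org.apache.pekko.kafka.scaladsl.{ Consumer, Producer }
import org.apache.pekko.kafka.scaladsl.Consumer.DrainingControl
import org.apache.kafka.clients.producer.ProducerRecord
import org.apache.kafka.common.serialization.{ StringDeserializer, StringSerializer }

implicit val system: ActorSystem = ActorSystem("producer-sample")

// Placeholder connection settings (assumed broker address, group id)
val consumerSettings =
  ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
    .withBootstrapServers("localhost:9092")
    .withGroupId("sample-group")
val producerSettings =
  ProducerSettings(system, new StringSerializer, new StringSerializer)
    .withBootstrapServers("localhost:9092")
val committerSettings = CommitterSettings(system)

val control: DrainingControl[Done] =
  Consumer
    .committableSource(consumerSettings, Subscriptions.topics("source-topic"))
    .map { msg =>
      // Pair each outgoing record with the offset of the record it came from;
      // the sink commits the offset once the record has been published.
      ProducerMessage.single(
        new ProducerRecord[String, String]("target-topic", msg.record.key, msg.record.value),
        msg.committableOffset)
    }
    .toMat(Producer.committableSink(producerSettings, committerSettings))(DrainingControl.apply)
    .run()
```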
- def committableSinkWithOffsetContext[K, V](producerSettings: ProducerSettings[K, V], committerSettings: CommitterSettings): Sink[(Envelope[K, V, _], Committable), Future[Done]]
Create a sink that is aware of the committable offset passed as context from a Consumer.sourceWithOffsetContext. The offsets are batched and committed regularly.
It publishes records to Kafka topics conditionally:
- Message publishes a single message to its topic, and commits the offset
- MultiMessage publishes all messages in its records field, and commits the offset
- PassThroughMessage does not publish anything, but commits the offset
Note that there is a risk that something fails after publishing but before committing, so this sink provides "at-least-once delivery" semantics.
- Annotations
- @ApiMayChange()
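A sketch of the same pipeline with this context-propagating variant, reusing the settings values defined in the sketch above; topic names remain assumptions:
```scala
val control: DrainingControl[Done] =
  Consumer
    .sourceWithOffsetContext(consumerSettings, Subscriptions.topics("source-topic"))
    // The committable offset travels as stream context, so the Envelope
    // itself needs no pass-through element.
    .map(record =>
      ProducerMessage.single(
        new ProducerRecord[String, String]("target-topic", record.key, record.value)))
    .toMat(Producer.committableSinkWithOffsetContext(producerSettings, committerSettings))(
      DrainingControl.apply)
    .run()
```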
- def flexiFlow[K, V, PassThrough](settings: ProducerSettings[K, V]): Flow[Envelope[K, V, PassThrough], Results[K, V, PassThrough], NotUsed]
Create a flow to conditionally publish records to Kafka topics and then pass them on.
It publishes records to Kafka topics conditionally:
- Message publishes a single message to its topic, and continues in the stream as Result
- MultiMessage publishes all messages in its records field, and continues in the stream as MultiResult
- PassThroughMessage does not publish anything, and continues in the stream as PassThroughResult
The messages support passing through arbitrary data, for example a CommittableOffset or CommittableOffsetBatch that can be committed later in the flow.
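A sketch exercising the Envelope variants through flexiFlow, reusing producerSettings from the first sketch; the topic name and the even/odd split are arbitrary assumptions:
```scala
import scala.concurrent.Future
import org.apache.pekko.stream.scaladsl.{ Sink, Source }

val done: Future[Done] =
  Source(1 to 10)
    .map { n =>
      if (n % 2 == 0) // publish even numbers, pass odd ones through untouched
        ProducerMessage.single(
          new ProducerRecord[String, String]("sample-topic", n.toString, s"value-$n"), n)
      else
        ProducerMessage.passThrough[String, String, Int](n)
    }
    .via(Producer.flexiFlow(producerSettings))
    .runWith(Sink.foreach {
      case ProducerMessage.Result(metadata, _)   => println(s"published at offset ${metadata.offset}")
      case ProducerMessage.MultiResult(parts, _) => println(s"published ${parts.size} records")
      case ProducerMessage.PassThroughResult(n)  => println(s"passed through $n")
    })
```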
- def flowWithContext[K, V, C](settings: ProducerSettings[K, V]): FlowWithContext[Envelope[K, V, NotUsed], C, Results[K, V, C], C, NotUsed]
API MAY CHANGE
Create a flow to conditionally publish records to Kafka topics and then pass them on.
It publishes records to Kafka topics conditionally:
- Message publishes a single message to its topic, and continues in the stream as Result
- MultiMessage publishes all messages in its records field, and continues in the stream as MultiResult
- PassThroughMessage does not publish anything, and continues in the stream as PassThroughResult
This flow is intended to be used with Apache Pekko's [flow with context](https://pekko.apache.org/docs/pekko/current/stream/operators/Flow/asFlowWithContext.html).
- C
the flow context type
- Annotations
- @ApiMayChange()
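A sketch wiring this flow between a context-aware source and Committer.sinkWithOffsetContext, again reusing the assumed settings from the first sketch:
```scala
import org.apache.pekko.kafka.scaladsl.Committer

val control: DrainingControl[Done] =
  Consumer
    .sourceWithOffsetContext(consumerSettings, Subscriptions.topics("source-topic"))
    .map(record =>
      ProducerMessage.single(
        new ProducerRecord[String, String]("target-topic", record.key, record.value)))
    .via(Producer.flowWithContext(producerSettings))
    // The offsets carried as context are committed after publishing.
    .toMat(Committer.sinkWithOffsetContext(committerSettings))(DrainingControl.apply)
    .run()
```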
- def plainSink[K, V](settings: ProducerSettings[K, V]): Sink[ProducerRecord[K, V], Future[Done]]
Create a sink for publishing records to Kafka topics.
The Kafka ProducerRecord contains the topic name to which the record is being sent, an optional partition number, and an optional key and value.
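A minimal fire-and-forget sketch, reusing producerSettings and the stream imports from the sketches above; topic name and payloads are assumptions:
```scala
val done: Future[Done] =
  Source(1 to 100)
    .map(n => new ProducerRecord[String, String]("sample-topic", n.toString, s"value-$n"))
    .runWith(Producer.plainSink(producerSettings))
```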
Deprecated Value Members
- def committableSink[K, V](settings: ProducerSettings[K, V], producer: org.apache.kafka.clients.producer.Producer[K, V]): Sink[Envelope[K, V, Committable], Future[Done]]
Create a sink that is aware of the committable offset from a Consumer.committableSource. It will commit the consumer offset when the message has been published successfully to the topic.
It publishes records to Kafka topics conditionally:
- Message publishes a single message to its topic, and commits the offset
- MultiMessage publishes all messages in its records field, and commits the offset
- PassThroughMessage does not publish anything, but commits the offset
Note that there is always a risk that something fails after publishing but before committing, so this sink provides "at-least-once delivery" semantics.
Supports sharing a Kafka Producer instance.
- Annotations
- @deprecated
- Deprecated
(Since version Alpakka Kafka 2.0.0) use committableSink(ProducerSettings, CommitterSettings) instead
- def committableSink[K, V](settings: ProducerSettings[K, V]): Sink[Envelope[K, V, Committable], Future[Done]]
Create a sink that is aware of the committable offset from a Consumer.committableSource. It will commit the consumer offset when the message has been published successfully to the topic.
It publishes records to Kafka topics conditionally:
- Message publishes a single message to its topic, and commits the offset
- MultiMessage publishes all messages in its records field, and commits the offset
- PassThroughMessage does not publish anything, but commits the offset
Note that there is a risk that something fails after publishing but before committing, so this sink provides "at-least-once delivery" semantics.
- Annotations
- @deprecated
- Deprecated
(Since version Alpakka Kafka 2.0.0) use committableSink(ProducerSettings, CommitterSettings) instead
- def flexiFlow[K, V, PassThrough](settings: ProducerSettings[K, V], producer: org.apache.kafka.clients.producer.Producer[K, V]): Flow[Envelope[K, V, PassThrough], Results[K, V, PassThrough], NotUsed]
Create a flow to conditionally publish records to Kafka topics and then pass them on.
It publishes records to Kafka topics conditionally:
- Message publishes a single message to its topic, and continues in the stream as Result
- MultiMessage publishes all messages in its records field, and continues in the stream as MultiResult
- PassThroughMessage does not publish anything, and continues in the stream as PassThroughResult
The messages support passing through arbitrary data, for example a CommittableOffset or CommittableOffsetBatch that can be committed later in the flow.
Supports sharing a Kafka Producer instance.
- Annotations
- @deprecated
- Deprecated
(Since version Alpakka Kafka 2.0.0) Pass in external or shared producer using ProducerSettings.withProducerFactory or ProducerSettings.withProducer
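Since the deprecation notes point at ProducerSettings.withProducerFactory and ProducerSettings.withProducer, a migration sketch under the assumption of an externally created, externally closed producer instance:
```scala
import java.util.Properties
import org.apache.kafka.clients.producer.KafkaProducer

// An assumed, externally managed Kafka producer (closed by its owner, not the stream)
val props = new Properties()
props.put("bootstrap.servers", "localhost:9092")
val existingProducer =
  new KafkaProducer(props, new StringSerializer, new StringSerializer)

// Configure the shared instance on the settings instead of passing it to flexiFlow
val sharedSettings =
  ProducerSettings(system, new StringSerializer, new StringSerializer)
    .withProducer(existingProducer)

val flow = Producer.flexiFlow[String, String, Int](sharedSettings)
```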
- def flow[K, V, PassThrough](settings: ProducerSettings[K, V], producer: org.apache.kafka.clients.producer.Producer[K, V]): Flow[Message[K, V, PassThrough], Result[K, V, PassThrough], NotUsed]
Create a flow to publish records to Kafka topics and then pass them on.
The records must be wrapped in a Message and continue in the stream as Result.
The messages support passing through arbitrary data, for example a CommittableOffset or CommittableOffsetBatch that can be committed later in the flow.
Supports sharing a Kafka Producer instance.
- Annotations
- @deprecated
- Deprecated
(Since version 0.21) prefer flexiFlow over this flow implementation
- def flow[K, V, PassThrough](settings: ProducerSettings[K, V]): Flow[Message[K, V, PassThrough], Result[K, V, PassThrough], NotUsed]
Create a flow to publish records to Kafka topics and then pass them on.
The records must be wrapped in a Message and continue in the stream as Result.
The messages support passing through arbitrary data, for example a CommittableOffset or CommittableOffsetBatch that can be committed later in the flow.
- Annotations
- @deprecated
- Deprecated
(Since version 0.21) prefer flexiFlow over this flow implementation
- def flowWithContext[K, V, C](settings: ProducerSettings[K, V], producer: org.apache.kafka.clients.producer.Producer[K, V]): FlowWithContext[Envelope[K, V, NotUsed], C, Results[K, V, C], C, NotUsed]
API MAY CHANGE
Create a flow to conditionally publish records to Kafka topics and then pass them on.
It publishes records to Kafka topics conditionally:
- Message publishes a single message to its topic, and continues in the stream as Result
- MultiMessage publishes all messages in its records field, and continues in the stream as MultiResult
- PassThroughMessage does not publish anything, and continues in the stream as PassThroughResult
This flow is intended to be used with Apache Pekko's [flow with context](https://pekko.apache.org/docs/pekko/current/stream/operators/Flow/asFlowWithContext.html).
Supports sharing a Kafka Producer instance.
- C
the flow context type
- Annotations
- @deprecated @ApiMayChange()
- Deprecated
(Since version Alpakka Kafka 2.0.0) Pass in external or shared producer using ProducerSettings.withProducerFactory or ProducerSettings.withProducer
- def plainSink[K, V](settings: ProducerSettings[K, V], producer: org.apache.kafka.clients.producer.Producer[K, V]): Sink[ProducerRecord[K, V], Future[Done]]
Create a sink for publishing records to Kafka topics.
The Kafka ProducerRecord contains the topic name to which the record is being sent, an optional partition number, and an optional key and value.
Supports sharing a Kafka Producer instance.
- Annotations
- @deprecated
- Deprecated
(Since version Alpakka Kafka 2.0.0) Pass in external or shared producer using ProducerSettings.withProducerFactory or ProducerSettings.withProducer