IronMQ

The IronMQ connector provides an Apache Pekko Stream source and sink to connect to the IronMQ queue.

IronMQ is a simple point-to-point queue, but it is possible to implement fan-out semantics by configuring the queue as a push queue and setting other queues as its subscribers. More information can be found in the IronMQ documentation.

Project Info: Apache Pekko Connectors IronMQ
Artifact: org.apache.pekko : pekko-connectors-ironmq : 1.1.0-M1+154-6981eaa8-SNAPSHOT
JDK versions: OpenJDK 8, OpenJDK 11, OpenJDK 17, OpenJDK 21
Scala versions: 2.13.15, 2.12.20, 3.3.4
JPMS module name: pekko.stream.connectors.ironmq
License: Apache-2.0
Release notes: GitHub releases
Issues: GitHub issues
Sources: https://github.com/apache/pekko-connectors

Artifacts

sbt
val PekkoVersion = "1.1.2"
val PekkoHttpVersion = "1.1.0"
libraryDependencies ++= Seq(
  "org.apache.pekko" %% "pekko-connectors-ironmq" % "1.1.0-M1+154-6981eaa8-SNAPSHOT",
  "org.apache.pekko" %% "pekko-stream" % PekkoVersion,
  "org.apache.pekko" %% "pekko-http" % PekkoHttpVersion
)
Maven
<properties>
  <pekko.version>1.1.2</pekko.version>
  <pekko.http.version>1.1.0</pekko.http.version>
  <scala.binary.version>2.13</scala.binary.version>
</properties>
<dependencies>
  <dependency>
    <groupId>org.apache.pekko</groupId>
    <artifactId>pekko-connectors-ironmq_${scala.binary.version}</artifactId>
    <version>1.1.0-M1+154-6981eaa8-SNAPSHOT</version>
  </dependency>
  <dependency>
    <groupId>org.apache.pekko</groupId>
    <artifactId>pekko-stream_${scala.binary.version}</artifactId>
    <version>${pekko.version}</version>
  </dependency>
  <dependency>
    <groupId>org.apache.pekko</groupId>
    <artifactId>pekko-http_${scala.binary.version}</artifactId>
    <version>${pekko.http.version}</version>
  </dependency>
</dependencies>
Gradle
def versions = [
  PekkoVersion: "1.1.2",
  PekkoHttpVersion: "1.1.0",
  ScalaBinary: "2.13"
]
dependencies {
  implementation "org.apache.pekko:pekko-connectors-ironmq_${versions.ScalaBinary}:1.1.0-M1+154-6981eaa8-SNAPSHOT"
  implementation "org.apache.pekko:pekko-stream_${versions.ScalaBinary}:${versions.PekkoVersion}"
  implementation "org.apache.pekko:pekko-http_${versions.ScalaBinary}:${versions.PekkoHttpVersion}"
}


Consumer

IronMQ can be used either in the cloud or on-premise. Either way, you need an authentication token and a project ID. These can be set in the application.conf file.

# SPDX-License-Identifier: Apache-2.0

pekko.connectors.ironmq {

  // The IronMq endpoint. It may vary depending on availability zone and region.
  endpoint = "https://mq-aws-eu-west-1-1.iron.io"

  credentials {

    // The IronMq project id
    // project-id =

    // The IronMq auth token
    // token =
  }

  consumer {

    // This is the max number of messages to fetch from IronMq.
    buffer-max-size = 100

    // This is the threshold below which more messages are fetched from IronMq
    buffer-min-size = 25

    // This is the time interval between each poll loop
    fetch-interval = 250 milliseconds

    // This is the amount of time the IronMq client will wait for a message to be available in the queue
    poll-timeout = 0

    // This is the amount of time a fetched message will not be available to other consumers
    reservation-timeout = 30 seconds

  }
}
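
The sources and sinks shown below take an IronMqSettings instance as a parameter. A minimal sketch of obtaining one, assuming the IronMqSettings companion object provides a no-argument factory that reads the pekko.connectors.ironmq section above from the actor system's configuration:

Scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.connectors.ironmq.IronMqSettings

implicit val system: ActorSystem = ActorSystem("ironmq-example")

// Assumption: the no-argument factory reads endpoint and credentials
// from pekko.connectors.ironmq in application.conf.
val ironMqSettings: IronMqSettings = IronMqSettings()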

The consumer is poll-based. It polls every fetch-interval, waiting up to poll-timeout for new messages to become available, and pushes the fetched messages downstream.
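
For example, a more aggressive polling behaviour could be configured by overriding the consumer settings shown above in application.conf (the values here are purely illustrative):

pekko.connectors.ironmq.consumer {
  buffer-max-size = 200
  buffer-min-size = 50
  fetch-interval = 100 milliseconds
}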

It supports both at-most-once and at-least-once semantics. In the former case, messages are deleted straight away after being fetched. In the latter case, each message carries a Committable object that should be used to commit the message after successful processing. Committing the message deletes it from the queue.

At most once

The consumer source is instantiated using the IronMqConsumer.

Scala
import org.apache.pekko.stream.connectors.ironmq.Message

val source: Source[Message, NotUsed] =
  IronMqConsumer.atMostOnceSource(queueName, ironMqSettings)

val receivedMessages: Future[immutable.Seq[Message]] = source
  .take(100)
  .runWith(Sink.seq)
Java
import org.apache.pekko.stream.connectors.ironmq.*;
import org.apache.pekko.stream.connectors.ironmq.javadsl.*;

Source<Message, NotUsed> source = IronMqConsumer.atMostOnceSource(queueName, ironMqSettings);

CompletionStage<List<Message>> receivedMessages =
    source.take(5).runWith(Sink.seq(), materializer);

At least once

To ensure at-least-once semantics, CommittableMessages need to be committed after successful processing which will delete the message from IronMQ.

Scala
import org.apache.pekko
import pekko.stream.connectors.ironmq.scaladsl.CommittableMessage
import pekko.stream.connectors.ironmq.Message

val source: Source[CommittableMessage, NotUsed] =
  IronMqConsumer.atLeastOnceSource(queueName, ironMqSettings)

val businessLogic: Flow[CommittableMessage, CommittableMessage, NotUsed] =
  Flow[CommittableMessage] // do something useful with the received messages

val receivedMessages: Future[immutable.Seq[Message]] = source
  .take(100)
  .via(businessLogic)
  .mapAsync(1)(m => m.commit().map(_ => m.message))
  .runWith(Sink.seq)
Java
import org.apache.pekko.stream.connectors.ironmq.*;
import org.apache.pekko.stream.connectors.ironmq.javadsl.*;

Source<CommittableMessage, NotUsed> source =
    IronMqConsumer.atLeastOnceSource(queueName, ironMqSettings);

Flow<CommittableMessage, CommittableMessage, NotUsed> businessLogic =
    Flow.of(CommittableMessage.class); // do something useful with the received messages

CompletionStage<List<Message>> receivedMessages =
    source
        .take(5)
        .via(businessLogic)
        .mapAsync(1, m -> m.commit().thenApply(d -> m.message()))
        .runWith(Sink.seq(), materializer);

Producer

The producer is quite basic at this time: it does not provide any batching mechanism, but sends messages to IronMQ as soon as they arrive at the stage.

The producer is instantiated using the IronMqProducer. It provides methods to obtain either a Flow[PushMessage, Message.Id, NotUsed] (Scala) / Flow<PushMessage, String, NotUsed> (Java) or a Sink[PushMessage, NotUsed] (Scala) / Sink<PushMessage, NotUsed> (Java).

Flow

The PushMessage allows you to specify the delay per individual message. The message expiration is set at queue level.
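
For example, a per-message delay might be set like this (a sketch assuming PushMessage accepts a delay parameter alongside the message body):

Scala
import scala.concurrent.duration._
import org.apache.pekko.stream.connectors.ironmq.PushMessage

// This message becomes visible to consumers only 30 seconds after being pushed.
val delayedMessage = PushMessage("hello-later", delay = 30.seconds)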

When using the Flow, the returned Message.Id (Scala) / String (Java) contains the ID of the pushed message, which can be used to manipulate the message. For each PushMessage from upstream you will get exactly one Message.Id / String downstream, in the same order.

Scala
import org.apache.pekko.stream.connectors.ironmq.{ Message, PushMessage }

val messages: immutable.Seq[String] = (1 to messageCount).map(i => s"test-$i")
val producedIds: Future[immutable.Seq[Message.Id]] = Source(messages)
  .map(PushMessage(_))
  .via(IronMqProducer.flow(queueName, ironMqSettings))
  .runWith(Sink.seq)
Java
import org.apache.pekko.stream.connectors.ironmq.*;
import org.apache.pekko.stream.connectors.ironmq.javadsl.*;

CompletionStage<List<String>> producedIds =
    Source.from(messages)
        .map(PushMessage::create)
        .via(IronMqProducer.flow(queueName, ironMqSettings))
        .runWith(Sink.seq(), materializer);

The producer also provides a committable-aware Flow/Sink: Flow[(PushMessage, Committable), Message.Id, NotUsed] (Scala) / Flow<CommittablePushMessage<Committable>, String, NotUsed> (Java). It can be used to consume messages from an IronMQ consumer, or from any other source that provides a commit mechanism.

Scala
import org.apache.pekko
import pekko.stream.connectors.ironmq.{ Message, PushMessage }
import pekko.stream.connectors.ironmq.scaladsl.Committable

val pushAndCommit: Flow[(PushMessage, Committable), Message.Id, NotUsed] =
  IronMqProducer.atLeastOnceFlow(targetQueue, ironMqSettings)

val producedIds: Future[immutable.Seq[Message.Id]] = IronMqConsumer
  .atLeastOnceSource(sourceQueue, ironMqSettings)
  .take(messages.size)
  .map { committableMessage =>
    (PushMessage(committableMessage.message.body), committableMessage)
  }
  .via(pushAndCommit)
  .runWith(Sink.seq)
Java
import org.apache.pekko.stream.connectors.ironmq.*;
import org.apache.pekko.stream.connectors.ironmq.javadsl.*;

Flow<CommittablePushMessage<CommittableMessage>, String, NotUsed> pushAndCommit =
    IronMqProducer.atLeastOnceFlow(targetQueue, ironMqSettings);

CompletionStage<List<String>> producedIds =
    IronMqConsumer.atLeastOnceSource(sourceQueue, ironMqSettings)
        .take(messages.size())
        .map(CommittablePushMessage::create)
        .via(pushAndCommit)
        .runWith(Sink.seq(), materializer);

Sink

Scala
import org.apache.pekko.stream.connectors.ironmq.{ Message, PushMessage }

val messages: immutable.Seq[String] = (1 to messageCount).map(i => s"test-$i")
val producedIds: Future[Done] = Source(messages)
  .map(PushMessage(_))
  .runWith(IronMqProducer.sink(queueName, ironMqSettings))
Java
import org.apache.pekko.stream.connectors.ironmq.*;
import org.apache.pekko.stream.connectors.ironmq.javadsl.*;

CompletionStage<Done> producedIds =
    Source.from(messages)
        .map(PushMessage::create)
        .runWith(IronMqProducer.sink(queueName, ironMqSettings), materializer);