Class ClusterSharding
In this context sharding means that actors with an identifier, so-called entities,
can be automatically distributed across multiple nodes in the cluster. Each entity
actor runs in only one place, and messages can be sent to the entity without requiring
the sender to know the location of the destination actor. This is achieved by sending
the messages via a ShardRegion actor provided by this extension, which knows how
to route the message with the entity id to the final destination.
To use this extension, first register the supported entity types, typically at system startup
on each node in the cluster, with the init(org.apache.pekko.cluster.sharding.typed.javadsl.Entity<M, E>)
method, which returns the ShardRegion actor reference for a named entity type.
Messages to the entities are always sent via that ActorRef, i.e. the local ShardRegion.
Messages can also be sent via the EntityRef retrieved with entityRefFor(org.apache.pekko.cluster.sharding.typed.javadsl.EntityTypeKey<M>, java.lang.String),
which will also send via the local ShardRegion.
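As a minimal sketch of that usage, registration and message sends could look as follows. The Counter behavior, its Command messages, the entity id "counter-1", and the `system` reference are illustrative assumptions, not part of this API:

  import org.apache.pekko.actor.typed.ActorRef;
  import org.apache.pekko.cluster.sharding.typed.ShardingEnvelope;
  import org.apache.pekko.cluster.sharding.typed.javadsl.ClusterSharding;
  import org.apache.pekko.cluster.sharding.typed.javadsl.Entity;
  import org.apache.pekko.cluster.sharding.typed.javadsl.EntityRef;
  import org.apache.pekko.cluster.sharding.typed.javadsl.EntityTypeKey;

  // Key identifying the entity type and its message protocol
  EntityTypeKey<Counter.Command> typeKey =
      EntityTypeKey.create(Counter.Command.class, "Counter");

  // Register the entity type at startup; returns the local ShardRegion
  ActorRef<ShardingEnvelope<Counter.Command>> shardRegion =
      ClusterSharding.get(system)
          .init(Entity.of(typeKey, entityContext -> Counter.create(entityContext.getEntityId())));

  // Send via the local ShardRegion using the default envelope...
  shardRegion.tell(new ShardingEnvelope<>("counter-1", Counter.Increment.INSTANCE));

  // ...or via an EntityRef bound to a specific entity id
  EntityRef<Counter.Command> counterOne =
      ClusterSharding.get(system).entityRefFor(typeKey, "counter-1");
  counterOne.tell(Counter.Increment.INSTANCE);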
Some settings can be configured as described in the pekko.cluster.sharding
section of the reference.conf.
The ShardRegion actor is started on each node in the cluster, or group of nodes
tagged with a specific role. The ShardRegion is created with a ShardingMessageExtractor
to extract the entity identifier and the shard identifier from incoming messages.
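As an illustration, a custom extractor might look like the following sketch. The CounterEnvelope type, its fields, and the shard count of 50 are assumptions of this example; when no custom envelope is needed, the default ShardingEnvelope-based extraction can be used instead:

  import org.apache.pekko.cluster.sharding.typed.ShardingMessageExtractor;

  // Illustrative extractor for a hypothetical CounterEnvelope(entityId, payload)
  public final class CounterMessageExtractor
      extends ShardingMessageExtractor<CounterEnvelope, Counter.Command> {

    private static final int NUMBER_OF_SHARDS = 50;

    @Override
    public String entityId(CounterEnvelope envelope) {
      return envelope.entityId; // identifies the entity actor
    }

    @Override
    public String shardId(String entityId) {
      // Stable mapping from entity id to shard id; too few shards limits
      // distribution across nodes, too many adds coordination overhead
      return String.valueOf(Math.floorMod(entityId.hashCode(), NUMBER_OF_SHARDS));
    }

    @Override
    public Counter.Command unwrapMessage(CounterEnvelope envelope) {
      return envelope.payload; // the message delivered to the entity
    }
  }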
A shard is a group of entities that will be managed together. For the first message sent to a
specific shard, the ShardRegion requests the location of the shard from a central coordinator,
the pekko.cluster.sharding.ShardCoordinator. The ShardCoordinator decides which ShardRegion
owns the shard. The ShardRegion receives the decided home of the shard
and if that is the ShardRegion instance itself it will create a local child
actor representing the entity and direct all messages for that entity to it.
If the shard home is another ShardRegion instance, messages will be forwarded
to that ShardRegion instance instead. While the location of a shard is being
resolved, incoming messages for that shard are buffered and delivered once the
shard location is known. Subsequent messages to the resolved shard can be delivered
to the target destination immediately without involving the ShardCoordinator.
To make sure that at most one instance of a specific entity actor is running somewhere
in the cluster it is important that all nodes have the same view of where the shards
are located. Therefore the shard allocation decisions are taken by the central
ShardCoordinator, which is running as a cluster singleton, i.e. one instance on
the oldest member among all cluster nodes or a group of nodes tagged with a specific
role. The oldest member can be determined by pekko.cluster.Member#isOlderThan.
To be able to use newly added members in the cluster, the coordinator facilitates rebalancing
of shards, i.e. migrating entities from one node to another. In the rebalance process the
coordinator first notifies all ShardRegion actors that a handoff for a shard has started.
That means they will start buffering incoming messages for that shard, in the same way as if the
shard location is unknown. During the rebalance process the coordinator will not answer any
requests for the location of shards that are being rebalanced, i.e. local buffering will
continue until the handoff is completed. The ShardRegion responsible for the rebalanced shard
will stop all entities in that shard by sending the handOffMessage to them. When all entities have
been terminated the ShardRegion owning the entities will acknowledge the handoff as completed
to the coordinator. Thereafter the coordinator will reply to requests for the location of
the shard and thereby allocate a new home for the shard and then buffered messages in the
ShardRegion actors are delivered to the new location. This means that the state of the entities
is not transferred or migrated. If the state of the entities is of importance it should be
persistent (durable), e.g. with pekko-persistence, so that it can be recovered at the new
location.
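As a minimal sketch of such durable entity state, an event-sourced entity using pekko-persistence-typed could look as follows; the PersistentCounter name, the Incremented event, and the Integer state are illustrative assumptions. After a rebalance, the new incarnation replays its journal to recover the state:

  import org.apache.pekko.actor.typed.Behavior;
  import org.apache.pekko.persistence.typed.PersistenceId;
  import org.apache.pekko.persistence.typed.javadsl.CommandHandler;
  import org.apache.pekko.persistence.typed.javadsl.EventHandler;
  import org.apache.pekko.persistence.typed.javadsl.EventSourcedBehavior;

  public class PersistentCounter
      extends EventSourcedBehavior<PersistentCounter.Command, PersistentCounter.Incremented, Integer> {

    public interface Command {}
    public enum Increment implements Command { INSTANCE }
    public static final class Incremented {}

    public static Behavior<Command> create(String entityId) {
      // Unique, stable id so the journal can be replayed at the new location
      return new PersistentCounter(PersistenceId.of("Counter", entityId));
    }

    private PersistentCounter(PersistenceId persistenceId) {
      super(persistenceId);
    }

    @Override
    public Integer emptyState() {
      return 0; // starting point before replaying journaled events
    }

    @Override
    public CommandHandler<Command, Incremented, Integer> commandHandler() {
      return newCommandHandlerBuilder()
          .forAnyState()
          .onCommand(Increment.class, cmd -> Effect().persist(new Incremented()))
          .build();
    }

    @Override
    public EventHandler<Integer, Incremented> eventHandler() {
      return (state, event) -> state + 1; // reapplied on recovery
    }
  }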
The logic that decides which shards to rebalance is defined in a pluggable shard
allocation strategy. The default implementation LeastShardAllocationStrategy
picks shards for handoff from the ShardRegion with the highest number of previously allocated shards.
They will then be allocated to the ShardRegion with the lowest number of previously allocated shards,
i.e. new members in the cluster. This strategy can be replaced by an application-specific
implementation.
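For instance, a strategy instance can be plugged in when initializing the entity type. This sketch simply passes the configured default, where an application-specific ShardAllocationStrategy implementation could be supplied instead; typeKey, Counter, and system are the illustrative names from the earlier example:

  import org.apache.pekko.cluster.sharding.ShardCoordinator;
  import org.apache.pekko.cluster.sharding.typed.ClusterShardingSettings;
  import org.apache.pekko.cluster.sharding.typed.javadsl.ClusterSharding;
  import org.apache.pekko.cluster.sharding.typed.javadsl.Entity;

  ClusterSharding sharding = ClusterSharding.get(system);

  // The configured default; substitute a custom ShardAllocationStrategy here
  ShardCoordinator.ShardAllocationStrategy strategy =
      sharding.defaultShardAllocationStrategy(ClusterShardingSettings.create(system));

  sharding.init(
      Entity.of(typeKey, entityContext -> Counter.create(entityContext.getEntityId()))
          .withAllocationStrategy(strategy));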
The state of shard locations in the ShardCoordinator is stored with pekko-distributed-data or
pekko-persistence to survive failures. When a crashed or unreachable coordinator
node has been removed (via down) from the cluster a new ShardCoordinator singleton
actor will take over and the state is recovered. During such a failure period shards
with known location are still available, while messages for new (unknown) shards
are buffered until the new ShardCoordinator becomes available.
As long as a sender uses the same ShardRegion actor to deliver messages to an entity
actor, the order of the messages is preserved. As long as the buffer limit is not reached,
messages are delivered on a best-effort basis, with at-most-once delivery semantics,
in the same way as ordinary message sending. Reliable end-to-end messaging, with
at-least-once semantics, can be added by using AtLeastOnceDelivery in pekko-persistence.
Some additional latency is introduced for messages targeted to new or previously unused shards
due to the round-trip to the coordinator. Rebalancing of shards may also add latency. This
should be considered when designing the application-specific shard resolution, e.g. to avoid
too fine-grained shards.
The ShardRegion actor can also be started in proxy-only mode, i.e. it will not
host any entities itself, but knows how to delegate messages to the right location.
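One way this arises, sketched here with an illustrative role name: when the entity type is restricted to a role, calling init on a node without that role starts the region in proxy-only mode, as described for init below:

  // Entities are hosted only on nodes with the "counters" role; on other
  // nodes init starts the ShardRegion in proxy-only mode
  ClusterSharding.get(system)
      .init(
          Entity.of(typeKey, entityContext -> Counter.create(entityContext.getEntityId()))
              .withRole("counters"));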
If the state of the entities is persistent you may stop entities that are not used to
reduce memory consumption. This is done by the application-specific implementation of
the entity actors, for example by defining a receive timeout (context.setReceiveTimeout).
If a message is already enqueued to the entity when it stops itself, the enqueued message
in the mailbox will be dropped. To support graceful passivation without losing such
messages the entity actor can send ClusterSharding#Passivate to the ActorRef[ShardCommand]
that was passed in to the factory method when creating the entity.
The specified stopMessage message will be sent back to the entity, which is
then supposed to stop itself. Incoming messages will be buffered by the ShardRegion
between reception of Passivate and termination of the entity. Such buffered messages
are thereafter delivered to a new incarnation of the entity.
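A minimal sketch of this passivation flow, assuming an idle timeout and illustrative Counter message types (Idle as the receive-timeout signal and GoodbyeCounter as the stopMessage):

  import java.time.Duration;
  import org.apache.pekko.actor.typed.ActorRef;
  import org.apache.pekko.actor.typed.Behavior;
  import org.apache.pekko.actor.typed.javadsl.Behaviors;
  import org.apache.pekko.cluster.sharding.typed.javadsl.ClusterSharding;

  public static Behavior<Counter.Command> create(
      ActorRef<ClusterSharding.ShardCommand> shard, String entityId) {
    return Behaviors.setup(
        context -> {
          // Request passivation when no message has arrived for 30 seconds
          context.setReceiveTimeout(Duration.ofSeconds(30), Counter.Idle.INSTANCE);
          return Behaviors.receive(Counter.Command.class)
              .onMessageEquals(
                  Counter.Idle.INSTANCE,
                  () -> {
                    // From here on the ShardRegion buffers incoming messages
                    // and replies with the configured stopMessage
                    shard.tell(new ClusterSharding.Passivate<>(context.getSelf()));
                    return Behaviors.same();
                  })
              .onMessageEquals(Counter.GoodbyeCounter.INSTANCE, Behaviors::stopped)
              .build();
        });
  }

In such a setup the shard reference and entity id would come from the EntityContext passed to the factory function in Entity.of, and GoodbyeCounter would be registered with withStopMessage on the Entity; both Counter message types are assumptions of this sketch.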
This class is not intended for user extension other than for test purposes (e.g. stub implementation). More methods may be added in the future and that may break such implementations.
Nested Class Summary
static final class  ClusterSharding.Passivate<M>
    The entity can request passivation by sending the ClusterSharding.Passivate message to the ActorRef[ShardCommand] that was passed in to the factory method when creating the entity.
static class  ClusterSharding.Passivate$
static interface  ClusterSharding.ShardCommand
    When an entity is created an ActorRef[ShardCommand] is passed to the factory method.
Constructor Summary
Constructors:
ClusterSharding()
Method Summary
abstract ShardCoordinator.ShardAllocationStrategy  defaultShardAllocationStrategy(ClusterShardingSettings settings)
    The default ShardAllocationStrategy is configured by least-shard-allocation-strategy properties.
abstract <M> EntityRef<M>  entityRefFor(EntityTypeKey<M> typeKey, String entityId)
    Create an ActorRef-like reference to a specific sharded entity.
abstract <M> EntityRef<M>  entityRefFor(EntityTypeKey<M> typeKey, String entityId, String dataCenter)
    Create an ActorRef-like reference to a specific sharded entity running in another data center.
static ClusterSharding  get(ActorSystem<?> system)
abstract <M,E> ActorRef<E>  init(Entity<M,E> entity)
    Initialize sharding for the given entity factory settings.
abstract ActorRef<ClusterShardingQuery>  shardState()
    Actor for querying Cluster Sharding state
Constructor Details
ClusterSharding
public ClusterSharding()
Method Details
get
public static ClusterSharding get(ActorSystem<?> system)
init
public abstract <M,E> ActorRef<E> init(Entity<M,E> entity)
Initialize sharding for the given entity factory settings. It will start a shard region or a proxy depending on whether the settings require a role and whether this node has such a role.
entityRefFor
public abstract <M> EntityRef<M> entityRefFor(EntityTypeKey<M> typeKey, String entityId)
Create an ActorRef-like reference to a specific sharded entity.
You have to correctly specify the type of messages the target can handle via the typeKey.
Messages sent through this EntityRef will be wrapped in a ShardingEnvelope including the entityId provided here.
This can only be used if the default ShardingEnvelope is used; when using custom envelopes or in-message entity ids you will need to use the ActorRef<E> returned by sharding init for messaging with the sharded actors.
For in-depth documentation of its semantics, see EntityRef.
entityRefFor
public abstract <M> EntityRef<M> entityRefFor(EntityTypeKey<M> typeKey, String entityId, String dataCenter)
Create an ActorRef-like reference to a specific sharded entity running in another data center.
You have to correctly specify the type of messages the target can handle via the typeKey.
Messages sent through this EntityRef will be wrapped in a ShardingEnvelope including the provided entityId.
This can only be used if the default ShardingEnvelope is used; when using custom envelopes or in-message entity ids you will need to use the ActorRef<E> returned by sharding init for messaging with the sharded actors.
For in-depth documentation of its semantics, see EntityRef.
shardState
public abstract ActorRef<ClusterShardingQuery> shardState()
Actor for querying Cluster Sharding state
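For example, a query for the locally hosted shards could look like this sketch; replyTo is an assumed ActorRef<ShardRegion.CurrentShardRegionState> and typeKey is the illustrative key from the earlier example:

  import org.apache.pekko.cluster.sharding.ShardRegion;
  import org.apache.pekko.cluster.sharding.typed.GetShardRegionState;

  // Ask for the shards and entities hosted by the local ShardRegion of typeKey
  ClusterSharding.get(system)
      .shardState()
      .tell(new GetShardRegionState(typeKey, replyTo));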
defaultShardAllocationStrategy
public abstract ShardCoordinator.ShardAllocationStrategy defaultShardAllocationStrategy(ClusterShardingSettings settings)
The default ShardAllocationStrategy is configured by least-shard-allocation-strategy properties.