
Conversation

htemelski-redis

When issuing an ssubscribe command, the Cluster client creates a "regular" standalone Redis client to establish a connection to the correct shard. This doesn't work if there's a failover, since that client doesn't know how to correctly handle a MOVED response. This PR handles such events by propagating the ssubscribe MOVED error from the DataHandler, through the ClusterSubscriber and the ClusterSubscriberGroup, to the RedisCluster client, and forcing a slots refresh.
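The rough shape of that propagation is sketched below in TypeScript. The "moved" event name comes from the diff under review; everything else (the handler names, the refreshSlots method, the argument shapes) is an illustrative assumption rather than the client's actual internal API.

import { EventEmitter } from "node:events";

// DataHandler side (sketch): a MOVED reply to ssubscribe is surfaced as an
// event on the subscriber instead of being treated as a fatal command error.
function handleSsubscribeError(subscriber: EventEmitter, commandName: string, err: Error): void {
  if (commandName === "ssubscribe" && err.message.includes("MOVED")) {
    subscriber.emit("moved", err); // bubbles up through ClusterSubscriber / ClusterSubscriberGroup
    return;
  }
  throw err; // every other error keeps its existing path
}

// RedisCluster side (sketch): react to the propagated event by refreshing the
// slot map; refreshSlots() is a hypothetical stand-in for the real refresh call.
function wireMovedHandling(
  subscriberGroup: EventEmitter,
  cluster: { refreshSlots(): Promise<void> }
): void {
  subscriberGroup.on("moved", () => {
    void cluster.refreshSlots();
  });
}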

@nkaradzhov left a comment


LGTM

@htemelski-redis changed the title from "Force slots refresh on MOVED error when using ssubscribe" to "fix(ssubscribe): Force slots refresh on MOVED error when using ssubscribe" on Jun 16, 2025
Comment on lines +77 to +81
// ssubscribe runs on a standalone per-shard client; a MOVED reply means the
// slot migrated, so notify the cluster client instead of failing the command.
if (item.command.name === "ssubscribe" && err.message.includes("MOVED")) {
  this.redis.emit("moved");
  return;
}


Can you please add a comment here explaining how we use the standalone client for ssubscribe connections?


When a hashslot is moved to another shard, the message that is sent on the pub/sub channel is sunsubscribe, not moved. I suspect this is working because it's caught when the connection is retried. On issuing the ssubscribe command again, the response is MOVED.
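To make that distinction concrete, here is a small sketch of the two signals; the function shapes below are assumptions for illustration, not this client's actual interfaces.

type SlotMovedSignal = "resubscribe" | "refresh-slots" | "none";

// On the existing sharded pub/sub connection, a migrated slot shows up as a
// server-initiated sunsubscribe push, not as a "moved" message.
function classifyPushMessage(messageType: string): SlotMovedSignal {
  return messageType === "sunsubscribe" ? "resubscribe" : "none";
}

// Only when ssubscribe is issued again (e.g. after the connection is retried)
// does the reply carry a MOVED error naming the shard that now owns the slot.
function classifySsubscribeReply(err: Error | null): SlotMovedSignal {
  return err !== null && err.message.startsWith("MOVED") ? "refresh-slots" : "none";
}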

@elena-kolevska left a comment


We need tests for this. Please also fix and enable the existing tests that weren't running before.
