Awesome-claude-code message-queue-knowledge

Message Queue knowledge base. Provides broker comparison, delivery guarantees, consumer groups, and advanced RabbitMQ/Kafka patterns for messaging audits and generation.

Install

Source · clone the upstream repo:

```bash
git clone https://github.com/dykyi-roman/awesome-claude-code
```

Claude Code · install into ~/.claude/skills/:

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/dykyi-roman/awesome-claude-code "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/message-queue-knowledge" ~/.claude/skills/dykyi-roman-awesome-claude-code-message-queue-knowledge && rm -rf "$T"
```

Manifest: skills/message-queue-knowledge/SKILL.md

Source content

Message Queue Knowledge Base

Quick reference for message broker operations and advanced messaging patterns. Focuses on broker-level operations; for event-driven patterns, see eda-knowledge.

Broker Comparison

| Feature | RabbitMQ | Apache Kafka | Amazon SQS | Redis Streams |
|---|---|---|---|---|
| Model | Message queue | Event log | Message queue | Event log |
| Ordering | Per-queue FIFO | Per-partition | Best-effort (FIFO available) | Per-stream |
| Retention | Until consumed | Time/size-based | 4-14 days | Memory/size-based |
| Replay | No (once consumed) | Yes (offset seek) | No | Yes (ID-based) |
| Consumer groups | Competing consumers | Native support | Not built-in | Native (XREADGROUP) |
| Throughput | ~50K msg/s | ~1M msg/s | ~3K msg/s per queue | ~100K msg/s |
| Latency | Sub-millisecond | Low milliseconds | 10-100 ms | Sub-millisecond |
| Protocol | AMQP 0-9-1 | Custom binary (TCP) | HTTP (SQS API) | RESP |
| Clustering | Quorum queues | Built-in (ZooKeeper/KRaft) | Managed | Redis Cluster |
| Best for | Task queues, RPC | Event streaming, logs | Serverless, AWS-native | Lightweight streaming |

Message Delivery Guarantees

| Guarantee | Description | Implementation | Trade-off |
|---|---|---|---|
| At-most-once | Message may be lost | Fire-and-forget, no ack | Fastest; data loss possible |
| At-least-once | Message delivered 1+ times | Ack after processing | Requires idempotent consumers |
| Exactly-once | Message processed exactly once | Transactions + deduplication | Slowest, most complex |

Achieving At-Least-Once in PHP

```php
use PhpAmqpLib\Message\AMQPMessage;

// RabbitMQ: manual acknowledgment (php-amqplib, inside a consumer class)
$channel->basic_consume(
    queue: 'orders',
    no_ack: false,  // require explicit ack
    callback: function (AMQPMessage $msg) use ($channel): void {
        try {
            $this->handler->handle(json_decode($msg->getBody(), true, flags: JSON_THROW_ON_ERROR));
            $channel->basic_ack($msg->getDeliveryTag());
        } catch (\Throwable $e) {
            // Requeue for redelivery; pair with a dead-letter exchange
            // so a poison message cannot loop forever.
            $channel->basic_nack($msg->getDeliveryTag(), requeue: true);
        }
    },
);
```
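At-least-once delivery implies occasional redeliveries, so the handler above must be idempotent. One minimal sketch of a deduplication guard, assuming a connected phpredis client and producers that set a unique `message_id` property; the key prefix, TTL, and `handleOnce` helper are illustrative, not part of the skill:

```php
use PhpAmqpLib\Message\AMQPMessage;

// Deduplicating wrapper: run the handler only if this message ID is unseen.
// Assumes producers set a unique 'message_id' property and $redis is a
// connected phpredis client; the key prefix and 24h TTL are illustrative.
function handleOnce(\Redis $redis, AMQPMessage $msg, callable $handler): void
{
    $id = $msg->get('message_id');
    // SET key value NX EX 86400: returns false if the key already exists,
    // i.e. this is a redelivery of an already-processed message.
    if ($redis->set("processed:{$id}", '1', ['nx', 'ex' => 86400]) === false) {
        return; // duplicate: the caller should still ack it
    }
    $handler(json_decode($msg->getBody(), true));
}
```

Marking the ID before handling prevents duplicates but can lose a message if the worker crashes mid-handler; marking it after handling inverts the trade-off.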

Consumer Groups Overview

| Broker | Mechanism | How it works |
|---|---|---|
| RabbitMQ | Competing consumers | Multiple consumers on the same queue; broker distributes round-robin |
| Kafka | Consumer groups | Partitions assigned to group members; each partition read by one consumer |
| Redis Streams | XREADGROUP | Consumer group tracks the last delivered ID per consumer |
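As a concrete example of the Redis Streams row, a sketch of a group consumer with phpredis; the stream, group, and consumer names are placeholders, and `process()` stands in for real business logic:

```php
// Each consumer in the group receives a disjoint subset of entries;
// entries stay in the group's pending list (PEL) until XACK'd.
$redis = new \Redis();
$redis->connect('127.0.0.1', 6379);

// Create the group once; the final true (MKSTREAM) creates the stream too.
$redis->xGroup('CREATE', 'orders', 'billing', '$', true);

while (true) {
    // '>' asks for entries never delivered to this group; block up to 5s.
    $batch = $redis->xReadGroup('billing', 'worker-1', ['orders' => '>'], 10, 5000);
    if (!is_array($batch)) {
        continue; // timed out with no new entries
    }
    foreach ($batch['orders'] ?? [] as $id => $fields) {
        process($fields); // hypothetical handler
        $redis->xAck('orders', 'billing', [$id]);
    }
}
```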

Ordering Guarantees

| Broker | Scope | Guarantee |
|---|---|---|
| RabbitMQ | Per queue | Strict FIFO within a single queue |
| RabbitMQ | Across queues | No ordering guarantee |
| Kafka | Per partition | Strict ordering within a partition |
| Kafka | Across partitions | No ordering guarantee |
| SQS Standard | Queue | Best-effort ordering |
| SQS FIFO | Message group | Strict FIFO within a group |
| Redis Streams | Per stream | Strict ordering by entry ID |
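Kafka's per-partition guarantee only helps if related events land on the same partition, which is achieved by producing with a key. A sketch using the php-rdkafka extension; the broker address, topic, and payload are placeholders:

```php
// Messages with the same key hash to the same partition, so all events
// for one order are consumed in the order they were produced.
$conf = new \RdKafka\Conf();
$conf->set('bootstrap.servers', 'localhost:9092'); // placeholder broker
$producer = new \RdKafka\Producer($conf);
$topic = $producer->newTopic('orders');

$orderId = 'order-42'; // partition key: one entity, one partition
$topic->produce(RD_KAFKA_PARTITION_UA, 0, json_encode(['status' => 'paid']), $orderId);
$producer->flush(10000); // wait up to 10s for delivery reports
```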

When to Use Which Broker

| Scenario | Recommended | Why |
|---|---|---|
| Task distribution (email, image processing) | RabbitMQ | Flexible routing, competing consumers |
| Event streaming / audit log | Kafka | Immutable log, replay, high throughput |
| Simple async in AWS | SQS | Managed, no infrastructure |
| Lightweight pub/sub with low latency | Redis Streams | Already have Redis, minimal overhead |
| RPC / request-reply | RabbitMQ | Built-in reply-to, correlation ID |
| CDC (change data capture) | Kafka | Log compaction, connector ecosystem |
| Prioritized processing | RabbitMQ | Native priority queues |
| Cross-region replication | Kafka | MirrorMaker, built-in replication |

Detection Patterns

```
# RabbitMQ usage
Grep: "AMQPChannel|PhpAmqpLib|bunny|php-amqplib" --glob "**/*.php"
Grep: "RABBITMQ_|AMQP_" --glob "**/.env*"

# Kafka usage
Grep: "RdKafka|kafka|KafkaConsumer|KafkaProducer" --glob "**/*.php"
Grep: "KAFKA_" --glob "**/.env*"

# SQS usage
Grep: "SqsClient|aws/aws-sdk.*sqs" --glob "**/*.php"
Grep: "SQS_|AWS_SQS" --glob "**/.env*"

# Redis Streams
Grep: "XADD|XREAD|XREADGROUP|XACK" --glob "**/*.php"
Grep: "xAdd|xRead|xReadGroup" --glob "**/*.php"

# Consumer patterns
Grep: "basic_consume|consume\(|poll\(" --glob "**/*.php"
Grep: "basic_ack|basic_nack|commitAsync|xAck" --glob "**/*.php"

# Dead letter configuration
Grep: "dead.letter|x-dead-letter|DLQ|deadLetter" --glob "**/*.php"
```
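The `x-dead-letter-*` arguments those patterns match are set when a queue is declared: messages that are nacked with `requeue: false` (or that expire) are republished to the configured exchange. A php-amqplib sketch with illustrative exchange, queue, and TTL values:

```php
use PhpAmqpLib\Wire\AMQPTable;

// Dead-letter exchange plus the queue that collects failed messages.
$channel->exchange_declare('orders.dlx', 'fanout', durable: true, auto_delete: false);
$channel->queue_declare('orders.dead', durable: true, auto_delete: false);
$channel->queue_bind('orders.dead', 'orders.dlx');

// Main queue: rejected or TTL-expired messages are routed to the DLX.
$channel->queue_declare('orders', durable: true, auto_delete: false,
    arguments: new AMQPTable([
        'x-dead-letter-exchange' => 'orders.dlx',
        'x-message-ttl'          => 60000, // optional: expire after 60s
    ]),
);
```

With a DLX in place, consumers can safely nack with `requeue: false` and inspect failures from `orders.dead` instead of looping on poison messages.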

References

For detailed information, load these reference files:

  • references/rabbitmq-advanced.md — Queue types, exchange topologies, clustering, monitoring, PHP patterns
  • references/kafka-advanced.md — Partitioning, consumer groups, schema registry, exactly-once, PHP patterns