Mercure, the open-source Go server implementing the Mercure real-time protocol, has a high-severity cache key collision vulnerability in its TopicSelectorStore. Attackers can poison the match result cache, leaking private updates to unauthorized subscribers or blocking delivery to legitimate ones. This affects versions before 0.22.0. Upgrade immediately if you run Mercure hubs handling sensitive real-time data.
The flaw stems from a naive cache key construction: developers concatenated the topic selector and topic with underscores, like "m_" + topicSelector + "_" + topic. Underscores appear naturally in topics and selectors—think user IDs, room names, or API paths. This creates collisions.
Concrete example:
selector="foo_bar", topic="baz"     → key "m_foo_bar_baz"
selector="foo",     topic="bar_baz" → key "m_foo_bar_baz"
If an attacker subscribes with selector foo and publishes to bar_baz, the hub reuses the cached entry for the private selector foo_bar matching topic baz, bypassing authorization. Reverse the trick and a cached non-match blocks delivery to legitimate subscribers. Anyone with ordinary publish or subscribe access can exploit this; no auth bypass is needed beyond normal hub permissions.
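A few lines of Go make the collision concrete. Here naiveKey is a hypothetical stand-in for the vulnerable key construction described above, not Mercure's actual function:

```go
package main

import "fmt"

// naiveKey mirrors the vulnerable construction:
// "m_" + topicSelector + "_" + topic, with "_" as the only delimiter.
func naiveKey(topicSelector, topic string) string {
	return "m_" + topicSelector + "_" + topic
}

func main() {
	private := naiveKey("foo_bar", "baz") // private selector matching "baz"
	crafted := naiveKey("foo", "bar_baz") // attacker-controlled pair

	fmt.Println(private)            // m_foo_bar_baz
	fmt.Println(crafted)            // m_foo_bar_baz
	fmt.Println(private == crafted) // true: two distinct pairs, one cache key
}
```

Two unrelated (selector, topic) pairs map to the same cache slot, so whichever result lands first is served to the other.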
Why This Matters
Mercure powers real-time features in thousands of apps, especially Symfony ecosystems and Caddy setups. It handles pub/sub over HTTP/SSE for chats, notifications, collaborative editing. Private topics often carry user data, session tokens, or financial updates. A leak here exposes that directly via SSE streams.
Scale it up: in a SaaS dashboard, an attacker publishes junk to collide keys and spies on a competitor's private metrics, or denies service to premium users. No CVSS score has been assigned yet, but the "HIGH" label fits: confidentiality and availability impact with low attack complexity. Discovered via code audit; no exploits are known in the wild, but it is trivial to weaponize.
This is a classic bug class: composite keys built by plain string concatenation break as soon as the delimiter appears in real data, the same root cause behind header-splitting and cache-key-injection bugs. Mercure's cache speeds up selector matching, which is critical for high-throughput hubs (thousands of topics/sec). Disabling it tanks performance, so many operators won't until forced.
Fix and Mitigation
Version 0.22.0 patches it cleanly: swaps strings for a typed struct key.
type matchCacheKey struct {
    topicSelector string
    topic         string
}
No more collisions: Go hashes and compares struct keys field by field, so the selector/topic boundary is explicit. The patch also ditched the sharded caches for a single otter cache, simplifying the code. Pull from GitHub (dunglas/mercure) or Docker (dunglas/mercure:0.22.0); Caddy module users should update the module as well.
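A minimal sketch of why the struct key closes the hole, with a plain map standing in for the hub's actual otter cache:

```go
package main

import "fmt"

// matchCacheKey is the patched key shape: two distinct fields,
// so no delimiter can blur the selector/topic boundary.
type matchCacheKey struct {
	topicSelector string
	topic         string
}

func main() {
	// A plain map stands in for the real cache; the key semantics
	// are what matter.
	cache := map[matchCacheKey]bool{}

	// Cache the private selector's match result.
	cache[matchCacheKey{"foo_bar", "baz"}] = true

	// The attacker's pair is now a different key: no cache hit.
	_, hit := cache[matchCacheKey{"foo", "bar_baz"}]
	fmt.Println(hit) // false
}
```

Because the two strings live in separate fields, equality is decided per field and the old collision pair can never share an entry.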
Workaround if patching lags: disable the cache. In a Caddyfile, set topic_selector_cache -1; in library code, pass a cache size of 0. Performance drops, expect 10-50x slower selector evaluation on dense topic sets, so test under your real load first.
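For the Caddy route, a hypothetical Caddyfile sketch. The topic_selector_cache directive name is taken from the workaround above; the surrounding block and JWT directives will differ in your setup:

```caddyfile
example.com {
	mercure {
		# A negative size disables the match-result cache entirely.
		topic_selector_cache -1
		publisher_jwt {env.MERCURE_PUBLISHER_JWT_KEY}
		subscriber_jwt {env.MERCURE_SUBSCRIBER_JWT_KEY}
	}
}
```

Reload Caddy and confirm with a load test before shipping, since this trades the collision risk for raw matching cost on every update.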
Verify on a pre-patch hub: first prime the cache by letting a subscriber on the private selector foo_bar receive a publish to topic baz, then test the collision. curl suffices:
# Terminal 1: subscribe with the broad selector
curl -N 'http://localhost:8000/.well-known/mercure?topic=foo'
# Terminal 2: publish to the colliding topic (requires a publisher JWT)
curl -X POST 'http://localhost:8000/.well-known/mercure' \
  -H "Authorization: Bearer $PUBLISHER_JWT" \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'topic=bar_baz&data=leaked_private'
Pre-patch, the leaked update appears in Terminal 1; post-patch, it does not.
Bottom line: audit your hub's topic hygiene, and sanitize delimiters if you build custom cache keys. Mercure's maintainers moved fast; credit there. But this flags a pattern: real-time protocols prioritize speed over safe caching. If your app routes sensitive pushes, segment hubs or enforce JWT-scoped topics rigorously. Scan your dependencies with govulncheck, or check the mercure version pinned in go.mod. In production, this could cost trust, data, or uptime.
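If you do roll your own string cache keys, one collision-proof alternative to sanitizing delimiters is length-prefixing each component, so the boundary survives any content. safeKey here is illustrative, not Mercure's API:

```go
package main

import "fmt"

// safeKey length-prefixes each component, so no character inside a
// selector or topic can shift the boundary between them.
func safeKey(topicSelector, topic string) string {
	return fmt.Sprintf("m_%d:%s_%d:%s",
		len(topicSelector), topicSelector, len(topic), topic)
}

func main() {
	fmt.Println(safeKey("foo_bar", "baz"))                            // m_7:foo_bar_3:baz
	fmt.Println(safeKey("foo", "bar_baz"))                            // m_3:foo_7:bar_baz
	fmt.Println(safeKey("foo_bar", "baz") == safeKey("foo", "bar_baz")) // false
}
```

Because the encoding is unambiguous to decode, it is injective: distinct (selector, topic) pairs can never produce the same key, whatever characters they contain.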