
Use ctx.fireChannelRead and reorder packet handler #157

Closed

sanjomo wants to merge 1 commit into main from bug-156

Conversation

@sanjomo (Member) commented Mar 30, 2026

Replace pipeline.fireChannelRead(...) with ctx.fireChannelRead(...) in PollingTransport and WebSocketTransport so events are propagated from the current handler context (next handler) instead of from the pipeline head. Move PACKET_HANDLER registration in SocketIOChannelInitializer to after the transport handlers (but before the encoder) to ensure correct handler ordering and avoid double-processing. Changes affect SocketIOChannelInitializer.java, PollingTransport.java, and WebSocketTransport.java.
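The difference between the two calls can be sketched with a toy pipeline model (plain Java, not the real io.netty API; all names below are illustrative). Firing from the pipeline head re-runs every inbound handler, including ones that already saw the message, while firing from the current handler's context only reaches handlers added after it:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of Netty's inbound propagation semantics (hypothetical types,
// not io.netty classes). A transport handler wraps raw frames and forwards
// the wrapped message either from the pipeline head (old behaviour) or from
// its own position (new behaviour).
public class PropagationSketch {
    interface Handler { void channelRead(Ctx ctx, String msg); }

    static class Ctx {
        final List<Handler> pipeline;
        final int index;          // position of the current handler
        final List<String> log;
        Ctx(List<Handler> pipeline, int index, List<String> log) {
            this.pipeline = pipeline; this.index = index; this.log = log;
        }
        // models ctx.fireChannelRead(...): next handler only
        void fireChannelRead(String msg) {
            if (index + 1 < pipeline.size()) {
                pipeline.get(index + 1).channelRead(new Ctx(pipeline, index + 1, log), msg);
            }
        }
        // models ctx.pipeline().fireChannelRead(...): start again at the head
        void pipelineFireChannelRead(String msg) {
            pipeline.get(0).channelRead(new Ctx(pipeline, 0, log), msg);
        }
    }

    public static List<String> run(boolean fireFromHead) {
        List<String> log = new ArrayList<>();
        List<Handler> pipeline = new ArrayList<>();
        pipeline.add((ctx, msg) -> { log.add("decoder"); ctx.fireChannelRead(msg); });
        pipeline.add((ctx, msg) -> {            // the transport handler
            if (msg.startsWith("wrapped:")) {   // already-wrapped messages pass through
                ctx.fireChannelRead(msg);
                return;
            }
            log.add("transport");
            String wrapped = "wrapped:" + msg;
            if (fireFromHead) ctx.pipelineFireChannelRead(wrapped);  // old: head re-runs decoder
            else ctx.fireChannelRead(wrapped);                       // new: straight to next handler
        });
        pipeline.add((ctx, msg) -> log.add("packetHandler"));
        pipeline.get(0).channelRead(new Ctx(pipeline, 0, log), "msg");
        return log;
    }
}
```

In this model the head-fired variant sends the wrapped message through the decoder a second time, which is the double-processing the PR avoids; it also illustrates why PACKET_HANDLER must sit after the transports once ctx.fireChannelRead(...) is used, since handlers before the current one never see the forwarded message.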

Description

Brief description of the changes in this PR.

Type of Change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Documentation update
  • Code refactoring
  • Test improvements
  • Build/tooling changes

Related Issue

Closes #(issue number)

Changes Made

Testing

  • All existing tests pass
  • New tests added for new functionality
  • Tests pass locally with mvn test
  • Integration tests pass (if applicable)

Checklist

  • Code follows project coding standards
  • Self-review completed
  • Code is commented where necessary
  • Documentation updated (if needed)
  • Commit messages follow conventional format
  • No merge conflicts
  • All CI checks pass

Additional Notes

Any additional information, screenshots, or context that reviewers should know.

Summary by CodeRabbit

  • Refactor
    • Optimized Netty pipeline message propagation and handler ordering to improve message flow efficiency through transport handlers.

coderabbitai bot (Contributor) commented Mar 30, 2026

📝 Walkthrough

The pull request reorders the Netty pipeline handler initialization in SocketIOChannelInitializer by moving PACKET_HANDLER to execute after transport handlers. Additionally, the message propagation mechanism in transport handlers is simplified by changing from ctx.pipeline().fireChannelRead() to ctx.fireChannelRead().

Changes

  • Netty Pipeline Handler Ordering — netty-socketio-core/src/main/java/com/socketio4j/socketio/SocketIOChannelInitializer.java
    Reordered PACKET_HANDLER insertion to occur after the transport handlers (xhrPollingTransport, websocket compression, and webSocketTransport) instead of immediately after the HTTP encoder/compression block.
  • Pipeline Message Propagation — netty-socketio-core/src/main/java/com/socketio4j/socketio/transport/PollingTransport.java, netty-socketio-core/src/main/java/com/socketio4j/socketio/transport/WebSocketTransport.java
    Changed PacketsMessage propagation from ctx.pipeline().fireChannelRead(...) to ctx.fireChannelRead(...) in both transport handlers, forwarding messages from the current handler position rather than the pipeline entry point.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

Poem

🐰 Hop through the pipeline, reordered with care,
PACKET_HANDLER finds its rightful lair,
From pipeline's edge to ctx's gentle call,
Messages dance through handlers, one and all! ✨

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 25.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
  • Description check — ❓ Inconclusive: The PR description explains the changes in detail but leaves most template sections empty (Type of Change, Changes Made, Testing, and Checklist). Resolution: complete the required template sections.

✅ Passed checks (1 passed)

  • Title check — ✅ Passed: The title accurately and concisely summarizes the two main changes: using ctx.fireChannelRead and reordering the packet handler in the pipeline.



coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@netty-socketio-core/src/main/java/com/socketio4j/socketio/transport/WebSocketTransport.java`:
- Line 100: The PING_TIMEOUT scheduler key is inconsistent:
AuthorizeHandler.channelActive() schedules with ctx.channel() while
AuthorizeHandler.connect()/connectClient() cancels with a SchedulerKey built
from sessionId, so cancellation fails and leaves orphaned timeouts. Fix by
making the scheduling and cancellation use the same key: update
AuthorizeHandler.channelActive() (and channelRead() if it schedules
PING_TIMEOUT) to build the SchedulerKey using the sessionId (the same sessionId
used in connect()/connectClient()/PacketListener.onPacket()), or alternatively
change AuthorizeHandler.connect() to cancel using ctx.channel()—ensure
SchedulerKey.equals() matches by using the exact same type and sessionId symbol
(PING_TIMEOUT and sessionId) across AuthorizeHandler.channelActive(),
AuthorizeHandler.channelRead(), AuthorizeHandler.connect(), connectClient(), and
PacketListener.onPacket().

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 07df6160-fb66-4881-b24d-93ec7ba962e2

📥 Commits

Reviewing files that changed from the base of the PR and between e3acbd9 and 777e51b.

📒 Files selected for processing (3)
  • netty-socketio-core/src/main/java/com/socketio4j/socketio/SocketIOChannelInitializer.java
  • netty-socketio-core/src/main/java/com/socketio4j/socketio/transport/PollingTransport.java
  • netty-socketio-core/src/main/java/com/socketio4j/socketio/transport/WebSocketTransport.java

PacketsMessage packetsMessage = new PacketsMessage(client, frame.content().retain(), Transport.WEBSOCKET);
try {
-    ctx.pipeline().fireChannelRead(packetsMessage);
+    ctx.fireChannelRead(packetsMessage);

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Verify where PING_TIMEOUT is cancelled and whether that logic is present
# in handlers that still receive PacketsMessage after ctx.fireChannelRead(...)

echo "== PING_TIMEOUT references =="
rg -n --type=java -C3 'PING_TIMEOUT|SchedulerKey\.Type\.PING_TIMEOUT|scheduler\.cancel\('

echo
echo "== AuthorizeHandler channelRead implementation =="
rg -n --type=java -C6 'class AuthorizeHandler|void channelRead\s*\('

echo
echo "== InPacketHandler read path =="
rg -n --type=java -C6 'class InPacketHandler|channelRead0\s*\(|channelRead\s*\('

Repository: socketio4j/netty-socketio

Length of output: 50381


🏁 Script executed:

# Get full InPacketHandler implementation to check PING_TIMEOUT handling
cat -n netty-socketio-core/src/main/java/com/socketio4j/socketio/handler/InPacketHandler.java

Repository: socketio4j/netty-socketio

Length of output: 11667


🏁 Script executed:

# Check PacketListener for any timeout-related logic
rg -n --type=java -A10 'class PacketListener|schedulePingTimeout\(\)|cancelPingTimeout\(\)' netty-socketio-core/src/main/java/com/socketio4j/socketio/handler/PacketListener.java

Repository: socketio4j/netty-socketio

Length of output: 1938


🏁 Script executed:

# Check WebSocketTransport full implementation to understand the context
cat -n netty-socketio-core/src/main/java/com/socketio4j/socketio/transport/WebSocketTransport.java | head -150

Repository: socketio4j/netty-socketio

Length of output: 7492


🏁 Script executed:

# Verify handler pipeline configuration
rg -n --type=java -B3 -A3 'addLast.*InPacketHandler|addLast.*WebSocketTransport|pipeline.*InPacketHandler|pipeline.*WebSocketTransport' netty-socketio-core/src/main/java/com/socketio4j/socketio/handler/

Repository: socketio4j/netty-socketio

Length of output: 51


🏁 Script executed:

# Find ChannelInitializer or pipeline configuration
rg -n --type=java 'pipeline\.addLast|ChannelInitializer' netty-socketio-core/src/main/java/com/socketio4j/socketio/ | head -50

Repository: socketio4j/netty-socketio

Length of output: 3373


🏁 Script executed:

# Check ClientHead for ping timeout scheduling details
rg -n --type=java -B2 -A8 'public void schedulePingTimeout' netty-socketio-core/src/main/java/com/socketio4j/socketio/handler/ClientHead.java

Repository: socketio4j/netty-socketio

Length of output: 519


🏁 Script executed:

# Verify the exact SchedulerKey usage in both AuthorizeHandler and ClientHead
rg -n --type=java 'new SchedulerKey.*PING_TIMEOUT' netty-socketio-core/src/main/java/com/socketio4j/socketio/handler/

Repository: socketio4j/netty-socketio

Length of output: 903


🏁 Script executed:

# Find where AuthorizeHandler.connect() is called
rg -n --type=java 'authorizeHandler\.connect|\.connect\(' netty-socketio-core/src/main/java/com/socketio4j/socketio/handler/ | head -20

Repository: socketio4j/netty-socketio

Length of output: 51


🏁 Script executed:

# Check WebSocket upgrade/handshake path
rg -n --type=java -B5 -A5 'handshake|WebSocketServerHandshaker' netty-socketio-core/src/main/java/com/socketio4j/socketio/transport/WebSocketTransport.java | head -80

Repository: socketio4j/netty-socketio

Length of output: 3624


🏁 Script executed:

# Understand PacketListener.onPacket() to see when schedulePingTimeout is called
sed -n '35,150p' netty-socketio-core/src/main/java/com/socketio4j/socketio/handler/PacketListener.java

Repository: socketio4j/netty-socketio

Length of output: 3517


🏁 Script executed:

# Get connectClient implementation
sed -n '207,240p' netty-socketio-core/src/main/java/com/socketio4j/socketio/transport/WebSocketTransport.java

Repository: socketio4j/netty-socketio

Length of output: 1386


🏁 Script executed:

# Check ClientHead.getBaseClient() to understand the client relationship
rg -n --type=java 'getBaseClient|class ClientHead' netty-socketio-core/src/main/java/com/socketio4j/socketio/handler/ClientHead.java | head -5

Repository: socketio4j/netty-socketio

Length of output: 97


🏁 Script executed:

# Verify SchedulerKey equality to confirm key mismatch matters
cat -n netty-socketio-core/src/main/java/com/socketio4j/socketio/scheduler/SchedulerKey.java

Repository: socketio4j/netty-socketio

Length of output: 2470


Fix the PING_TIMEOUT scheduler key mismatch between initial scheduling and cancellation.

The PING_TIMEOUT is scheduled in AuthorizeHandler.channelActive() using ctx.channel() as the key, but cancelled in AuthorizeHandler.connect() using sessionId as the key. Since SchedulerKey.equals() requires both type AND sessionId to match, these keys do not refer to the same scheduled task. When connectClient() calls authorizeHandler.connect() during the WebSocket upgrade handshake (line 216 in WebSocketTransport.java), the cancellation silently fails, leaving an orphaned timeout scheduled with the channel object. When packets later arrive and PacketListener.onPacket() reschedules the timeout using sessionId, both timeouts remain active—the original channel-keyed timeout will eventually fire and trigger a premature disconnect.

Fix by consistently using sessionId as the scheduler key for PING_TIMEOUT in AuthorizeHandler.channelActive() and AuthorizeHandler.channelRead(), or ensure connect() uses the same ctx.channel() key when cancelling.
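The failure mode can be sketched with a minimal model of the scheduler (hypothetical names; the real classes live under com.socketio4j.socketio.scheduler). Because equality requires both the type and the key object to match, a key built from the Channel never cancels a task keyed by sessionId:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Minimal sketch of the key mismatch described above (illustrative, not the
// project's real SchedulerKey/CancelableScheduler implementation).
public class SchedulerKeySketch {
    enum Type { PING_TIMEOUT }

    static final class SchedulerKey {
        final Type type; final Object key;
        SchedulerKey(Type type, Object key) { this.type = type; this.key = key; }
        @Override public boolean equals(Object o) {
            if (!(o instanceof SchedulerKey)) return false;
            SchedulerKey k = (SchedulerKey) o;
            return type == k.type && Objects.equals(key, k.key);  // type AND key must match
        }
        @Override public int hashCode() { return Objects.hash(type, key); }
    }

    static final class Scheduler {
        private final Map<SchedulerKey, Runnable> tasks = new HashMap<>();
        void schedule(SchedulerKey key, Runnable task) { tasks.put(key, task); }
        boolean cancel(SchedulerKey key) { return tasks.remove(key) != null; }
        int pending() { return tasks.size(); }
    }

    // channelActive() schedules with the channel, connect() cancels with the
    // sessionId: the cancel silently misses and the timeout stays live.
    public static int orphanedTimeouts(Object channel, Object sessionId) {
        Scheduler scheduler = new Scheduler();
        scheduler.schedule(new SchedulerKey(Type.PING_TIMEOUT, channel), () -> {});
        scheduler.cancel(new SchedulerKey(Type.PING_TIMEOUT, sessionId));
        return scheduler.pending();
    }

    // With one consistent key for schedule and cancel, nothing is orphaned.
    public static int consistentCancel(Object sessionId) {
        Scheduler scheduler = new Scheduler();
        scheduler.schedule(new SchedulerKey(Type.PING_TIMEOUT, sessionId), () -> {});
        scheduler.cancel(new SchedulerKey(Type.PING_TIMEOUT, sessionId));
        return scheduler.pending();
    }
}
```

Either of the two fixes proposed above (keying everything by sessionId, or keying everything by the channel) restores the consistent-key behaviour shown in the second method.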


@sanjomo sanjomo closed this Mar 31, 2026
