
[None][fix] Guard CUDA event elapsed_time in perf_metrics_manager to prevent executor crash#12868

Open
yifjiang wants to merge 2 commits into NVIDIA:main from yifjiang:fix/perf-metrics-cuda-event-crash

Conversation

@yifjiang (Contributor)

@yifjiang yifjiang commented Apr 9, 2026

Summary

  • compute_batch_gpu_times() in PerfMetricsManager calls elapsed_time() on CUDA events without error handling
  • If a CUDA event was not recorded on the current stream, elapsed_time() raises RuntimeError
  • This exception propagates through _event_loop_wrapper → kills the executor thread
  • The main process and Dynamo runtime continue running (serving HTTP, responding to health probes), but with no executor thread, every inference request hangs forever — the server appears alive but is silently broken

Fix

Wrap elapsed_time() calls in try/except RuntimeError. On failure, log 0.0 for that batch's GPU timing metrics instead of crashing the executor. This is safe because:

  • Performance metrics are informational, not correctness-critical
  • The executor loop must not die from a metrics collection failure
  • The next batch will attempt to collect metrics normally
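The guarded call pattern described above can be sketched as follows. This is an illustrative sketch, not the actual TensorRT-LLM source: `FakeEvent` is a hypothetical stand-in for `torch.cuda.Event` so the pattern runs without a GPU, and `safe_elapsed_time` is an invented helper name.

```python
# Illustrative sketch of the guard; FakeEvent is a hypothetical
# stand-in for torch.cuda.Event so this runs without a GPU.
class FakeEvent:
    def __init__(self, completed=True, ms=1.5):
        self.completed = completed
        self.ms = ms

    def elapsed_time(self, end):
        # Mirrors torch.cuda.Event.elapsed_time, which raises if
        # either event has not completed on the current stream.
        if not (self.completed and end.completed):
            raise RuntimeError(
                "Both events must be completed before calculating elapsed time.")
        return self.ms


def safe_elapsed_time(start, end):
    """Return elapsed milliseconds, or 0.0 if timing failed."""
    try:
        return start.elapsed_time(end)
    except RuntimeError:
        # Metrics are informational; never let a timing failure
        # propagate up and kill the executor thread.
        return 0.0
```

With this shape, `safe_elapsed_time(FakeEvent(), FakeEvent())` returns the measured time, while `safe_elapsed_time(FakeEvent(completed=False), FakeEvent())` returns 0.0 instead of raising.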

Changed file

tensorrt_llm/_torch/pyexecutor/perf_metrics_manager.py, compute_batch_gpu_times() method

Test plan

  • Syntax verified (ast.parse)
  • Included in E2E-tested image (dynamo-trtllm:pr7391-trtllm12476-v4-x86-b200) — confirmed try/except RuntimeError is present via inspect.getsource

Signed-off-by: Yifan Jiang <19356972+yifjiang@users.noreply.github.com>

🤖 Generated with Claude Code

Summary by CodeRabbit

Release Notes

  • Bug Fixes
    • Improved GPU performance metrics reliability by gracefully handling timing computation failures. The system now continues operation instead of aborting metric collection when GPU event timing errors occur.

Stack Trace (reproduced on B200, disaggregated serving with tensorrtllm-runtime:1.0.0)

[TRT-LLM] [E] Error in event loop: Both events must be completed before calculating elapsed time.

Traceback (most recent call last):
  File "tensorrt_llm/_torch/pyexecutor/py_executor.py", line 579, in _event_loop_wrapper
    self.event_loop()
  File "tensorrt_llm/_torch/pyexecutor/py_executor.py", line 1930, in _executor_loop
    self.perf_manager.compute_batch_gpu_times(...)
  File "tensorrt_llm/_torch/pyexecutor/perf_metrics_manager.py", line 166, in compute_batch_gpu_times
    batch_gpu_forward_time = perf.gpu_forward_start_event.elapsed_time(...)
  File "torch/cuda/streams.py", line 233, in elapsed_time
    return super().elapsed_time(end_event)
RuntimeError: Both events must be completed before calculating elapsed time.

Exception in thread Thread-3 (_event_loop_wrapper) -> thread dies -> executor dead

The executor thread dies silently. The main process and Dynamo runtime continue running (HTTP server, health probes return 200), but no inference requests can be processed — they all hang indefinitely waiting for the dead executor.

Broader fix: executor resilience

The immediate fix in this PR guards compute_batch_gpu_times(). However, the deeper issue is that _event_loop_wrapper in py_executor.py re-raises all exceptions, killing the executor thread on any error:

# tensorrt_llm/_torch/pyexecutor/py_executor.py, line ~570
def _event_loop_wrapper(self):
    try:
        ...
        self.event_loop()
    except Exception as e:
        logger.error(f"Error in event loop: {e}")
        logger.error(traceback.format_exc())
        raise e          # <-- kills the thread on ANY exception
    finally:
        self._executor_loop_cleanup()

A more resilient approach would be to catch and log non-fatal errors (like metrics failures) while only terminating the executor for truly unrecoverable errors (OOM, CUDA device lost). Currently, a single metrics computation RuntimeError permanently kills the executor thread, leaving the server in a zombie state — HTTP alive, health probes passing, but zero inference throughput.
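One possible shape for such a wrapper is sketched below. This is illustrative only, not the actual py_executor.py code: the `FATAL_MARKERS` list, the step-function protocol, and the function names are all assumptions, and a real classification policy for fatal vs. non-fatal errors would need careful design.

```python
import logging
import traceback

logger = logging.getLogger("executor")

# Illustrative list of substrings treated as unrecoverable; a real
# policy would classify exceptions far more carefully.
FATAL_MARKERS = ("out of memory", "device-side assert", "CUDA error")


def event_loop_wrapper(event_loop_step, cleanup):
    """Run event_loop_step until it returns False; survive non-fatal errors."""
    try:
        while True:
            try:
                if event_loop_step() is False:  # loop signals shutdown
                    break
            except Exception as e:
                logger.error("Error in event loop: %s", e)
                logger.error(traceback.format_exc())
                if any(m in str(e) for m in FATAL_MARKERS):
                    raise  # truly unrecoverable: let the thread die
                # Non-fatal (e.g. a metrics failure): keep the loop alive.
    finally:
        cleanup()
```

Under this sketch, a one-off metrics RuntimeError is logged and the loop continues, while an error matching a fatal marker still terminates the thread after cleanup runs.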

… executor crash

Wrap the elapsed_time() calls in compute_batch_gpu_times() with
try/except RuntimeError. If a CUDA event was not recorded on the
current stream, elapsed_time() raises RuntimeError, which propagates
up through the executor event loop and kills the executor thread.

The main process and Dynamo runtime continue running (serving HTTP,
responding to health probes), but with no executor thread, every
inference request hangs forever.

With this fix, a CUDA event timing failure logs 0.0 for that batch's
metrics instead of crashing the executor.

Signed-off-by: Yifan Jiang <19356972+yifjiang@users.noreply.github.com>
@yifjiang yifjiang requested a review from a team as a code owner April 9, 2026 01:14
@yifjiang yifjiang requested a review from dongxuy04 April 9, 2026 01:14
@coderabbitai (Contributor)

coderabbitai bot commented Apr 9, 2026

📝 Walkthrough

Updated the compute_batch_gpu_times method to handle GPU event timing failures gracefully. When CUDA event timing computation fails, the method now sets batch GPU times to 0.0 instead of raising an exception, allowing metric calculation to continue.

Changes

Cohort: GPU Timing Error Handling
File(s): tensorrt_llm/_torch/pyexecutor/perf_metrics_manager.py
Summary: Wrapped GPU event elapsed_time() computations in a try/except RuntimeError block. On timing errors, sets batch_gpu_forward_time and batch_gpu_sample_time to 0.0 rather than propagating the exception.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

🚥 Pre-merge checks | ✅ 3 passed

  • Title check: ✅ Passed. The title clearly and specifically describes the main change: adding error handling (guarding) for CUDA event elapsed_time calls in perf_metrics_manager to prevent executor crashes.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.
  • Description check: ✅ Passed. The PR description thoroughly explains the issue, root cause, fix, rationale, changed file, and test plan with supporting evidence.


@coderabbitai (Contributor) bot left a comment

🧹 Nitpick comments (1)
tensorrt_llm/_torch/pyexecutor/perf_metrics_manager.py (1)

175-180: Optional: add low-noise logging for timing failures.

Consider emitting a throttled debug/warn log in this except path so persistent CUDA event-timing failures can be detected and diagnosed without impacting runtime behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tensorrt_llm/_torch/pyexecutor/perf_metrics_manager.py` around lines 175 -
180, In the except RuntimeError block (where batch_gpu_forward_time and
batch_gpu_sample_time are set to 0.0) add a low-noise, throttled log message
(debug or warn) that records the RuntimeError occurrence; implement throttling
via a simple rate-limit (e.g., last_log_time timestamp or an occurrence counter
with a reset interval) so repeated CUDA event-timing failures don’t spam logs,
and use the module/class logger (e.g., perf_metrics_manager logger) and include
minimal context like "CUDA event timing failed for batch" plus the exception
message when actually logging.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 8f32e4df-ac83-4bcf-b46a-7b7d57518c0b

📥 Commits

Reviewing files that changed from the base of the PR and between 2fe39c1 and 2240164.

📒 Files selected for processing (1)
  • tensorrt_llm/_torch/pyexecutor/perf_metrics_manager.py

@Tabrizian (Member)

@yifjiang I think seeing the error here doesn't necessarily mean it is a metrics error. It can be caused by other issues, but we only observe it here because this is where we synchronize the event. Given that there was a PR to fix the original issue (the assertion for not finding the block), is it possible to try that and see if you still run into the issue you were seeing? Please let me know if I'm missing any details, thanks. #12667

@yifjiang (Contributor, Author)

yifjiang commented Apr 9, 2026

@Tabrizian Thanks for the pointer. I will test with PR #12667 to see if it addresses the root cause.

That said, regardless of what triggers the RuntimeError, I think a defensive try/except here is still necessary — perf metrics shouldn't be able to take down the executor thread. If elapsed_time() fails for any reason, logging 0.0 and continuing is safer than letting the exception propagate through _event_loop_wrapper and silently kill the executor.

Member:

I think we should also set the error response for this request to make sure we are not swallowing the error.

@yifjiang (Contributor, Author) Apr 9, 2026

Fair point: we can't claim the inference result is correct here. But at this point in the code, the forward pass and sampling are already complete and the results are in the pipeline, so we can't conclude that the inference request failed solely because perf_metrics_manager threw an exception.

Added logger.warning in 7d065e9b7 so the error is visible in logs rather than silently zeroed. This lets operators see the RuntimeError details and investigate whether it indicates a deeper issue with the forward pass or stream synchronization.

@Tabrizian (Member) Apr 9, 2026

As I mentioned in the earlier comment, this failure doesn't necessarily mean that only metrics have failed. If any of the kernels fail between the start and end events when we synchronize the events here we would observe that failure. Please let me know if this makes sense to you or if I'm missing anything.

Log the RuntimeError details so the failure is visible in logs
rather than silently zeroing the metrics. This helps operators
diagnose whether the timing failure indicates a deeper issue
with the forward pass or stream synchronization.

Uses tensorrt_llm.logger (codebase convention) instead of
stdlib logging.getLogger.

Signed-off-by: Yifan Jiang <19356972+yifjiang@users.noreply.github.com>
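The resulting except path could look roughly like the sketch below. This is illustrative only: the real code lives in compute_batch_gpu_times and uses tensorrt_llm.logger, which is stubbed here with stdlib logging, and `elapsed_or_zero` is an invented helper name.

```python
import logging

# Stand-in for tensorrt_llm.logger in this sketch.
logger = logging.getLogger("tensorrt_llm")


def elapsed_or_zero(start_event, end_event):
    """Elapsed milliseconds between two events, or 0.0 with a warning."""
    try:
        return start_event.elapsed_time(end_event)
    except RuntimeError as e:
        # Visible in logs rather than silently zeroed, so operators can
        # investigate whether a deeper forward-pass or stream-sync issue
        # is hiding behind the timing failure.
        logger.warning("Failed to compute batch GPU times, recording 0.0: %s", e)
        return 0.0
```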
@yifjiang yifjiang force-pushed the fix/perf-metrics-cuda-event-crash branch from 7d065e9 to f6fb1d3 Compare April 9, 2026 22:03

Labels

Community want to contribute (PRs initiated from Community)

Projects

None yet

3 participants