Open · Labels: help wanted (Extra attention is needed)
Description
A CPU memory leak is observed when running inference on the GPU, even when NativeOps is not used (verified by removing libllm_sharp_ops.so).
CPU memory keeps growing during inference, while the counts from torch.Tensor.TotalCount and torch.Tensor.PeakCount remain stable across multiple turns of chat. GPU memory is also stable; no GPU memory leak is observed.
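For context, the numbers above can be gathered with a loop along these lines (a minimal sketch; `RunChatTurn` is a placeholder for one inference turn, not a real API, and the RSS reading uses System.Diagnostics rather than anything in llm-sharp):

```csharp
using System;
using System.Diagnostics;
using static TorchSharp.torch;

class LeakDiagnostics
{
    static void Main()
    {
        for (int turn = 0; turn < 100; turn++)
        {
            // Placeholder for one chat/inference turn.
            // RunChatTurn();

            // TorchSharp's static counters track live/peak tensor handles;
            // WorkingSet64 tracks the process's total (native + managed) memory.
            long rssMiB = Process.GetCurrentProcess().WorkingSet64 / (1024 * 1024);
            Console.WriteLine(
                $"turn {turn}: live tensors = {Tensor.TotalCount}, " +
                $"peak tensors = {Tensor.PeakCount}, CPU RSS = {rssMiB} MiB");
        }
    }
}
```

Since TotalCount stays flat while RSS grows, the growth appears to be in native (unmanaged) allocations rather than leaked Tensor handles.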
The program was profiled with valgrind massif and memcheck; the logs gave no obvious clues about the source of the leak.
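For reference, the profiling runs were presumably along these lines (the app name is a placeholder; the exact flags used may have differed):

```sh
# Heap profile over time (massif); inspect the output with ms_print.
valgrind --tool=massif dotnet LlmSharpApp.dll
ms_print massif.out.<pid>

# Leak check (memcheck) with full per-allocation details.
valgrind --tool=memcheck --leak-check=full dotnet LlmSharpApp.dll
```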