feat: add ray_serve as llm provider #178
Changes from all commits (commit c2bc691)
Catching a generic `Exception` can hide unexpected errors and make debugging difficult. It's better to catch more specific exceptions that you anticipate, or at least log the full traceback if a generic `Exception` must be caught.
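A minimal sketch of the second option, logging the full traceback when a broad catch is unavoidable; `query_llm` and the logger setup are placeholders, not the PR's actual code:

```python
import logging

logger = logging.getLogger(__name__)


def query_llm(handle, text: str):
    try:
        return handle.generate_answer.remote(text)
    except Exception:
        # logger.exception records the full traceback, so even a broad
        # catch does not silently swallow the root cause.
        logger.exception("Unexpected error while querying the LLM deployment")
        raise
```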
Similar to the previous comment, catching a generic `Exception` here can obscure the root cause of issues. Consider catching more specific exceptions related to `serve.get_deployment` or `get_handle` failures.
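For example, a sketch assuming the lookup goes through `serve.get_deployment(...).get_handle()` as described above; `RayServeException` is Ray Serve's base exception, and the exact exceptions raised by a missing deployment should be confirmed against the Ray version in use:

```python
import logging

from ray import serve
from ray.serve.exceptions import RayServeException

logger = logging.getLogger(__name__)


def get_llm_handle(deployment_name: str):
    try:
        return serve.get_deployment(deployment_name).get_handle()
    except (KeyError, RayServeException):
        # Only lookup/handle failures are treated as "deployment unavailable";
        # anything else propagates with its original traceback.
        logger.exception("Could not obtain a handle for %r", deployment_name)
        raise
```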
Imports should generally be placed at the top of the file (PEP 8 guideline). Moving `import uuid` to the top of the file improves readability and ensures consistency, even if Python allows conditional imports.
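For instance (the helper below is purely illustrative):

```python
import uuid  # at the top of the module with the other imports, per PEP 8


def _new_request_id() -> str:
    # No function-local `import uuid` needed.
    return uuid.uuid4().hex
```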
The comment indicates a potential issue with `kwargs` being passed directly. It's safer and clearer to explicitly construct a `config` dictionary containing only the parameters relevant to `LLMDeployment`, to avoid passing unintended arguments. This also makes the code more robust to future changes in `LLMDeployment`'s constructor.
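A sketch of the allow-list approach; the parameter names (`model_name`, `backend`, `max_tokens`) are illustrative, not necessarily what `LLMDeployment` actually accepts:

```python
# Forward only arguments that LLMDeployment is known to accept.
ALLOWED_DEPLOYMENT_KEYS = {"model_name", "backend", "max_tokens"}


def build_deployment_config(**kwargs) -> dict:
    unexpected = set(kwargs) - ALLOWED_DEPLOYMENT_KEYS
    if unexpected:
        # Fail loudly instead of silently passing unknown arguments through.
        raise ValueError(f"Unsupported LLMDeployment arguments: {sorted(unexpected)}")
    return {k: v for k, v in kwargs.items() if k in ALLOWED_DEPLOYMENT_KEYS}
```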
`serve.run()` is a blocking call by default. If `RayServeClient` is intended to be used within an asynchronous application, this call will block the event loop, potentially causing performance issues or deadlocks. Consider whether the Ray Serve application should be deployed out-of-band (e.g., as a separate script), or whether `serve.start()` and `serve.run_app()` should be used in a non-blocking manner if the client is responsible for the deployment's lifecycle within an async context.
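One out-of-band option, sketched under the assumption that the Serve app is exposed on the default HTTP port (8000) with a JSON body of `text`/`history` and an `answer` field in the response (all of these names are assumptions). The deployment is started separately, e.g. with the `serve run` CLI, and `RayServeClient` then only issues HTTP requests instead of calling `serve.run()` in-process:

```python
# Deploy separately, e.g.:  serve run ray_serve_llm:app_builder
# The client below never touches serve.run(), so it cannot block an event
# loop beyond the individual HTTP call.
from typing import Optional

import requests


def ask(text: str, history: Optional[list] = None, timeout: float = 60.0) -> str:
    resp = requests.post(
        "http://127.0.0.1:8000/",
        json={"text": text, "history": history or []},
        timeout=timeout,
    )
    resp.raise_for_status()
    return resp.json()["answer"]
```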
The tokenizer initialization logic is duplicated here and in `LLMServiceActor`. To improve maintainability and avoid redundancy, consider extracting this logic into a shared utility function or a common base class if applicable. This ensures consistent tokenizer handling across different LLM wrappers.
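A sketch of the shared-utility option, assuming the wrappers use Hugging Face tokenizers; the module path, function name, and keyword arguments are placeholders:

```python
# e.g. a shared utils/tokenizer.py (hypothetical location)
from functools import lru_cache

from transformers import AutoTokenizer, PreTrainedTokenizerBase


@lru_cache(maxsize=None)
def load_tokenizer(model_name: str) -> PreTrainedTokenizerBase:
    """Single place to configure tokenizer loading for every LLM wrapper."""
    return AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
```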
The `__call__` method, which serves as the HTTP entry point for the Ray Serve deployment, lacks any authentication or authorization checks. This allows any user with network access to the Ray Serve port to execute LLM queries, potentially leading to unauthorized resource consumption and abuse of the service.
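A minimal bearer-token check as one possible mitigation; the environment variable name and error shape are illustrative, and a production deployment might instead sit behind an authenticating reverse proxy:

```python
import os

from ray import serve
from starlette.requests import Request
from starlette.responses import JSONResponse


@serve.deployment
class LLMDeployment:  # only the auth check is sketched here
    async def __call__(self, request: Request):
        expected = os.environ.get("LLM_SERVE_API_TOKEN")
        provided = request.headers.get("authorization", "")
        if not expected or provided != f"Bearer {expected}":
            # Reject unauthenticated callers before any model work happens.
            return JSONResponse({"error": "unauthorized"}, status_code=401)
        payload = await request.json()
        ...
```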
The `__call__` method passes untrusted user input (`text` and `history`) directly to the LLM's `generate_answer` method. Several LLM backends in this repository (e.g., `HuggingFaceWrapper`, `SGLangWrapper`) use manual string concatenation to build prompts, making them highly susceptible to prompt injection attacks. An attacker could provide crafted input to manipulate the LLM's behavior or spoof conversation history.
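Validation will not eliminate prompt injection, but a sketch like the following (field names and limits are illustrative) at least bounds the input and prevents arbitrary structures from being spliced into the prompt; longer term, the safer fix is to build prompts with the tokenizer's chat template rather than manual concatenation:

```python
MAX_TEXT_CHARS = 8_000                 # illustrative limit
ALLOWED_ROLES = {"user", "assistant"}


def validate_request(payload: dict):
    text = payload.get("text")
    if not isinstance(text, str) or not 0 < len(text) <= MAX_TEXT_CHARS:
        raise ValueError("'text' must be a non-empty string within the size limit")

    history = payload.get("history", [])
    if not isinstance(history, list):
        raise ValueError("'history' must be a list of {'role', 'content'} entries")
    for turn in history:
        if (
            not isinstance(turn, dict)
            or turn.get("role") not in ALLOWED_ROLES
            or not isinstance(turn.get("content"), str)
        ):
            raise ValueError("each history entry needs a valid 'role' and string 'content'")
    return text, history
```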
The current implementation exposes sensitive internal information by returning raw exception details, which could be exploited by attackers to gain deeper insights into the system's architecture and vulnerabilities. It's crucial to prevent the leakage of stack traces, file paths, or configuration details. Instead, catch specific exceptions and return generalized error messages, logging full details internally without exposing them to the client.
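For example (a sketch; the response shape, the `run_query` callable, and the use of a request ID to correlate client errors with server logs are suggestions, not existing code):

```python
import logging
import uuid

from starlette.responses import JSONResponse

logger = logging.getLogger(__name__)


async def handle_query(run_query, payload: dict):
    request_id = uuid.uuid4().hex
    try:
        answer = await run_query(payload)
        return JSONResponse({"answer": answer, "request_id": request_id})
    except ValueError as exc:
        # Validation errors carry safe, user-facing messages only.
        return JSONResponse({"error": str(exc), "request_id": request_id}, status_code=400)
    except Exception:
        # The full traceback stays in the server logs; the client sees only an
        # opaque ID that operators can use to find the corresponding log entry.
        logger.exception("LLM query %s failed", request_id)
        return JSONResponse(
            {"error": "internal server error", "request_id": request_id},
            status_code=500,
        )
```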
The `app_builder` function defaults `backend` to `"vllm"` if not provided. This default might not align with the user's expectations or with `LLMDeployment`'s intended behavior if other backends are more common or desired as a default. Consider making `backend` explicit, or ensure the default is well documented and consistent with the overall system design.
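A sketch of the explicit variant, where the builder validates the choice instead of silently defaulting; the supported-backend set and builder signature are illustrative:

```python
SUPPORTED_BACKENDS = {"vllm", "sglang", "huggingface"}  # illustrative set


def app_builder(args: dict):
    backend = args.get("backend")
    if backend is None:
        raise ValueError(
            f"'backend' must be set explicitly; choose one of {sorted(SUPPORTED_BACKENDS)}"
        )
    if backend not in SUPPORTED_BACKENDS:
        raise ValueError(f"Unknown backend {backend!r}")
    ...
```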