[skyrl-train][inference] HTTP Inference Integration (Feature-Flagged) 4/N #931
Draft: kouroshHakha wants to merge 25 commits into NovaSky-AI:main from kouroshHakha:kh/inference-3
Conversation
Phase 2b-1: Refactors inference client creation into an overridable hook.

Changes:
- Add `get_inference_client() -> InferenceEngineInterface` hook in `BasePPOExp`
- Update `_setup_trainer()` to use the new hook
- Refactor `DAPOExp` to override `get_inference_client()` instead of duplicating `_setup_trainer()`
- Update `EvalOnlyEntrypoint.run()` to use the hook
- Update `TerminalBenchGenerateExp._setup_generator()` to use the hook
- Move strategy validation for FlashRL to `main()` for early failure
- Fix bug: add missing tokenizer arg in the `DAPOExp` remote engines path

This refactor eliminates code duplication and prepares for future `RemoteInferenceClient` integration (Phase 2b-2). A sketch of the hook pattern follows.
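A minimal sketch of the hook pattern this commit describes; the class and method bodies below are illustrative stand-ins, not the actual skyrl-train implementations.

```python
class InferenceEngineInterface:
    """Stand-in for the real inference client interface."""


class BasePPOExp:
    def get_inference_client(self) -> InferenceEngineInterface:
        # Single overridable hook: default client construction lives here.
        return InferenceEngineInterface()

    def _setup_trainer(self):
        # _setup_trainer() only calls the hook, so subclasses no longer need
        # to copy the whole trainer setup just to swap the inference client.
        client = self.get_inference_client()
        return client


class DAPOExp(BasePPOExp):
    def get_inference_client(self) -> InferenceEngineInterface:
        # Override just the hook (e.g., to wire up remote engines) and
        # inherit _setup_trainer() unchanged.
        return InferenceEngineInterface()
```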
Summary:

Integrates the HTTP-based inference layer with the training code behind a private feature flag, `_SKYRL_USE_HTTP_INFERENCE`. When enabled, training uses `RemoteInferenceClient` + `ServerGroup` + `InferenceRouter` instead of the legacy Ray actor-based inference. Both code paths remain fully functional; the flag allows gradual rollout and validation.
Key Changes:
- Feature Flag (`env_vars.py`): `_SKYRL_USE_HTTP_INFERENCE` env var (default: `0` = legacy path)
- New Config Options (`ppo_base_config.yaml`):
  - `generator.external_proxy_url` - External data plane URL (optional)
  - `generator.external_server_urls` - External control plane URLs (optional)
  - `generator.router_port` - Port for managed router (default: 8080)
- Config Validation (`utils.py`): `_validate_http_inference_cfg()` with routing logic
- Updated `get_inference_client()` (`main_base.py`); the client selection is sketched after this list:
  - HTTP path: `VLLMServerGroup` + `InferenceRouter` + `RemoteInferenceClient`
  - Legacy path: `InferenceEngineClient` (existing behavior)
- Weight Sync Integration (`worker.py`, `broadcast_strategy.py`, `transfer_strategy.py`, `cuda_ipc_strategy.py`); see the weight-sync sketch after this list:
  - `worker.py` fetches `inference_world_size` from `client.get_world_size()` for the HTTP path
  - `create_init_info()` accepts an optional `inference_world_size` parameter
- API Compatibility (`remote_inference_client.py`); see the client-method sketch after this list:
  - `init_weight_transfer` → `init_weight_update_communicator`
  - `update_weights` → `update_named_weights`
  - `tags` parameter added to `sleep()`/`wake_up()` for colocation
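A minimal sketch of the feature-flagged client selection described above (Feature Flag, New Config Options, and `get_inference_client()`). The helper below is hypothetical: the real constructors in `main_base.py` take richer arguments, and only the flag name, config field names, and defaults are taken from this PR.

```python
import os
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class GeneratorCfg:
    # Field names follow ppo_base_config.yaml as described above.
    external_proxy_url: Optional[str] = None                        # external data plane URL (optional)
    external_server_urls: List[str] = field(default_factory=list)   # external control plane URLs (optional)
    router_port: int = 8080                                         # port for the managed router


# Stand-in clients; the real classes live in skyrl-train.
@dataclass
class RemoteInferenceClient:
    proxy_url: str
    server_urls: List[str]


@dataclass
class InferenceEngineClient:
    engines: list


def get_inference_client(cfg: GeneratorCfg, engines: list):
    """Hypothetical dispatch mirroring the flag semantics (0 = legacy path)."""
    if os.environ.get("_SKYRL_USE_HTTP_INFERENCE", "0") == "1":
        # HTTP path: when no external endpoints are given, a managed
        # VLLMServerGroup + InferenceRouter would be started on router_port
        # (omitted here); the URL construction below is illustrative only.
        proxy_url = cfg.external_proxy_url or f"http://localhost:{cfg.router_port}"
        return RemoteInferenceClient(proxy_url=proxy_url, server_urls=cfg.external_server_urls)
    # Legacy path: Ray actor-based engines (existing behavior).
    return InferenceEngineClient(engines=engines)
```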
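A sketch of the change from the Weight Sync Integration item: on the HTTP path the worker asks the client for the inference world size and threads it into `create_init_info()`. `create_init_info()` here is a stub; only the parameter flow is taken from the PR.

```python
def create_init_info(inference_world_size=None):
    # Stub for the real helper in the weight-transfer strategies; it now
    # accepts inference_world_size as an optional parameter.
    return {"inference_world_size": inference_world_size}


def build_weight_sync_init_info(client, use_http_inference: bool):
    inference_world_size = None
    if use_http_inference:
        # HTTP path: the worker cannot derive the engine world size from
        # local Ray actors, so it queries the remote client instead.
        inference_world_size = client.get_world_size()
    return create_init_info(inference_world_size=inference_world_size)
```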
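A sketch of the surface from the API Compatibility item: `RemoteInferenceClient` exposes the method names the trainer already calls. The bodies are empty stubs; only the names and the `tags` parameter come from the PR.

```python
class RemoteInferenceClient:
    def init_weight_update_communicator(self, *args, **kwargs):
        # Renamed from init_weight_transfer.
        ...

    def update_named_weights(self, named_weights):
        # Renamed from update_weights.
        ...

    def sleep(self, tags=None):
        # tags selects what to release when training and inference are
        # colocated; the exact tag values depend on the engine backend.
        ...

    def wake_up(self, tags=None):
        ...
```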
Files Changed:
- `env_vars.py`: `_SKYRL_USE_HTTP_INFERENCE` feature flag
- `ppo_base_config.yaml`: `external_proxy_url`, `external_server_urls`, `router_port`
- `utils.py`: `_validate_http_inference_cfg()` with routing logic
- `main_base.py`: `get_inference_client()` to use HTTP path when flag enabled
- `remote_inference_client.py`
- `worker.py`: `inference_world_size` from client for HTTP path
- `broadcast_strategy.py`: `inference_world_size` parameter, validate for HTTP path
- `transfer_strategy.py`
- `cuda_ipc_strategy.py`: `inference_world_size` parameter

Testing: