Merged
20 changes: 16 additions & 4 deletions README.md
@@ -44,7 +44,7 @@ Parallax Windows CLI adopts modern C++ design patterns and mainly contains the f
- Unified command parsing and execution framework
- Command base class based on template method pattern
- Support for standardized parameter validation, environment preparation, and execution flow
- **Supported Commands**: check, install, config, run, join, cmd
- **Supported Commands**: check, install, config, run, join, chat, cmd

### 2. Environment Installer
- **Location**: `src/parallax/environment/`
@@ -108,9 +108,8 @@ Parallax Windows CLI adopts modern C++ design patterns and mainly contains the f

### 1. Download and Install
Download the latest installer from the Release page:
```
Gradient_Parallax_PC_Setup_v1.0.0.0.exe
```

[Parallax_Win_Setup.exe](https://github.com/GradientHQ/parallax_win_cli/releases/latest/download/Parallax_Win_Setup.exe)

### 2. Environment Check
```cmd
@@ -134,6 +133,9 @@ parallax check

# Start Parallax inference server (optional)
parallax run

# Access chat interface (optional)
parallax chat
```

## Command Reference
@@ -181,6 +183,12 @@ Join distributed inference cluster as a node
parallax join [args...]
```

### `parallax chat`
Access the chat interface from a non-scheduler computer
```cmd
parallax chat [args...]
```

### `parallax cmd`
Execute commands in WSL or Python virtual environment
```cmd
@@ -190,6 +198,7 @@ parallax cmd [--venv] <command> [args...]
**Command Descriptions**:
- `run`: Start Parallax inference server directly in WSL. You can pass any arguments supported by `parallax run` command. Examples: `parallax run -m Qwen/Qwen3-0.6B`, `parallax run --port 8080`
- `join`: Join distributed inference cluster as a worker node. You can pass any arguments supported by `parallax join` command. Examples: `parallax join -m Qwen/Qwen3-0.6B`, `parallax join -s scheduler-addr`
- `chat`: Access the chat interface from any non-scheduler computer. You can pass any arguments supported by `parallax chat` command. Examples: `parallax chat` (local network), `parallax chat -s scheduler-addr` (public network), `parallax chat --host 0.0.0.0` (allow external access). After launching, visit http://localhost:3002 in your browser.
- `cmd`: Pass-through commands to WSL environment, supports `--venv` option to run in parallax project's Python virtual environment

**Main Configuration Items**:
@@ -292,6 +301,9 @@ parallax config list
# Start inference server test
parallax run

# Access chat interface test
parallax chat

# Execute commands in WSL
parallax cmd "python --version"

8 changes: 8 additions & 0 deletions src/parallax/cli/command_parser.cpp
@@ -158,6 +158,14 @@ void CommandParser::InitializeBuiltinCommands() {
return static_cast<int>(result);
});

// Register chat command (access chat interface from non-scheduler computer)
RegisterCommand("chat", "Access chat interface from non-scheduler computer",
[](const std::vector<std::string>& args) -> int {
parallax::commands::ModelChatCommand chat_cmd;
auto result = chat_cmd.Execute(args);
return static_cast<int>(result);
});

// Register cmd command (pass-through command to WSL or virtual environment)
RegisterCommand("cmd",
"Execute commands in WSL or Python virtual environment",
9 changes: 9 additions & 0 deletions src/parallax/cli/commands/base_command.h
@@ -210,6 +210,15 @@ class WSLCommand : public BaseCommand<Derived> {
command);
}

// Build venv activation command with CUDA environment
std::string BuildVenvActivationCommand(const CommandContext& context) {
// Filter Windows /mnt/c mounts out of PATH and prepend the CUDA bin
// directory; $PATH stays unquoted so bash expands it inside the command
// substitution (single-quoting it would yield the literal text $PATH)
return "cd ~/parallax && "
"export PATH=/usr/local/cuda-12.8/bin:$(echo $PATH | tr ':' '\\n' | grep -v '/mnt/c' | paste -sd ':' -) && "
"source ./venv/bin/activate";
}
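The PATH-rewriting pipeline that `BuildVenvActivationCommand` emits can be exercised on its own in bash. A minimal sketch with an illustrative PATH value (the sample paths are made up; the filtering logic is the same):

```shell
# Illustrative PATH mixing WSL-native entries with /mnt/c Windows mounts
sample='/usr/bin:/mnt/c/Windows/System32:/usr/local/bin:/mnt/c/Program Files/Git/cmd'

# Split on ':', drop every /mnt/c entry, rejoin with ':' using paste
filtered=$(echo "$sample" | tr ':' '\n' | grep -v '/mnt/c' | paste -sd ':' -)

# Prepend the CUDA bin directory, mirroring the exported PATH above
echo "/usr/local/cuda-12.8/bin:$filtered"
# → /usr/local/cuda-12.8/bin:/usr/bin:/usr/local/bin
```

Dropping the `/mnt/c` mounts keeps slow Windows filesystem paths out of every lookup the venv performs, which is why the helper rebuilds PATH rather than simply prepending to it.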

// Escape arguments for safe passing through bash -c "..."
// This prevents command injection and correctly handles spaces/special chars
// Note: This is for WSL bash layer, not Windows PowerShell layer
12 changes: 6 additions & 6 deletions src/parallax/cli/commands/cmd_command.cpp
@@ -130,22 +130,22 @@ std::string CmdCommand::BuildCommand(const CommandContext& context,
std::string full_command;

if (options.use_venv) {
// Execute in virtual environment
full_command = "cd ~/parallax && source ./venv/bin/activate";
// Execute in virtual environment with CUDA PATH
full_command = BuildVenvActivationCommand(context);

// Add proxy support (similar to implementation in model_commands.cpp)
if (!context.proxy_url.empty()) {
full_command += " && HTTP_PROXY=\"" + context.proxy_url +
"\" HTTPS_PROXY=\"" + context.proxy_url + "\" " +
full_command += " && HTTP_PROXY='" + context.proxy_url +
"' HTTPS_PROXY='" + context.proxy_url + "' " +
command;
} else {
full_command += " && " + command;
}
} else {
// Execute directly in WSL
if (!context.proxy_url.empty()) {
full_command = "HTTP_PROXY=\"" + context.proxy_url +
"\" HTTPS_PROXY=\"" + context.proxy_url + "\" " +
full_command = "HTTP_PROXY='" + context.proxy_url +
"' HTTPS_PROXY='" + context.proxy_url + "' " +
command;
} else {
full_command = command;
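The proxy handling in `BuildCommand` relies on the POSIX `VAR=value command` form, which places the variable only in that one command's environment, and the PR's switch to single quotes keeps URL characters such as `&` literal through the bash layer. A small sketch with a placeholder proxy URL:

```shell
# VAR=value cmd exports VAR only into cmd's environment, not the shell's;
# single quotes keep special URL characters literal in bash.
# The proxy URL below is a placeholder, not a real endpoint.
HTTP_PROXY='http://127.0.0.1:7890' sh -c 'echo "child: $HTTP_PROXY"'

# The calling shell itself is untouched
echo "parent: ${HTTP_PROXY:-unset}"
```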
109 changes: 99 additions & 10 deletions src/parallax/cli/commands/model_commands.cpp
@@ -40,14 +40,13 @@ bool ModelRunCommand::RunParallaxScript(const CommandContext& context) {
// Build run command: parallax run [user parameters...]
std::string run_command = BuildRunCommand(context);

// Build complete WSL command: cd ~/parallax && source ./venv/bin/activate
// && parallax run [args...]
std::string full_command = "cd ~/parallax && source ./venv/bin/activate";
// Build complete WSL command with venv activation and CUDA environment
std::string full_command = BuildVenvActivationCommand(context);

// If proxy is configured, add proxy environment variables
if (!context.proxy_url.empty()) {
full_command += " && HTTP_PROXY=\"" + context.proxy_url +
"\" HTTPS_PROXY=\"" + context.proxy_url + "\" " +
full_command += " && HTTP_PROXY='" + context.proxy_url +
"' HTTPS_PROXY='" + context.proxy_url + "' " +
run_command;
} else {
full_command += " && " + run_command;
@@ -95,14 +94,13 @@ CommandResult ModelJoinCommand::ExecuteImpl(const CommandContext& context) {
// Build cluster join command: parallax join [user parameters...]
std::string join_command = BuildJoinCommand(context);

// Build complete WSL command: cd ~/parallax && source ./venv/bin/activate
// && parallax join [args...]
std::string full_command = "cd ~/parallax && source ./venv/bin/activate";
// Build complete WSL command with venv activation and CUDA environment
std::string full_command = BuildVenvActivationCommand(context);

// If proxy is configured, add proxy environment variables
if (!context.proxy_url.empty()) {
full_command += " && HTTP_PROXY=\"" + context.proxy_url +
"\" HTTPS_PROXY=\"" + context.proxy_url + "\" " +
full_command += " && HTTP_PROXY='" + context.proxy_url +
"' HTTPS_PROXY='" + context.proxy_url + "' " +
join_command;
} else {
full_command += " && " + join_command;
@@ -167,5 +165,96 @@ std::string ModelJoinCommand::BuildJoinCommand(const CommandContext& context) {
return command_stream.str();
}

// ModelChatCommand implementation
CommandResult ModelChatCommand::ValidateArgsImpl(CommandContext& context) {
// Check if it's a help request
if (context.args.size() == 1 &&
(context.args[0] == "--help" || context.args[0] == "-h")) {
ShowHelpImpl();
return CommandResult::Success;
}

// chat command can be executed without parameters (using default settings)
return CommandResult::Success;
}

CommandResult ModelChatCommand::ExecuteImpl(const CommandContext& context) {
// Build chat command: parallax chat [user parameters...]
std::string chat_command = BuildChatCommand(context);

// Build complete WSL command with venv activation and CUDA environment
std::string full_command = BuildVenvActivationCommand(context);

// If proxy is configured, add proxy environment variables
if (!context.proxy_url.empty()) {
full_command += " && HTTP_PROXY='" + context.proxy_url +
"' HTTPS_PROXY='" + context.proxy_url + "' " +
chat_command;
} else {
full_command += " && " + chat_command;
}

std::string wsl_command = BuildWSLCommand(context, full_command);

info_log("Executing chat interface command: %s", wsl_command.c_str());

// Use WSLProcess to execute command for real-time output
WSLProcess wsl_process;
int exit_code = wsl_process.Execute(wsl_command);

if (exit_code == 0) {
ShowInfo("Chat interface started successfully. Visit http://localhost:3002 in your browser.");
return CommandResult::Success;
} else {
ShowError("Failed to start chat interface with exit code: " +
std::to_string(exit_code));
return CommandResult::ExecutionError;
}
}

void ModelChatCommand::ShowHelpImpl() {
std::cout << "Usage: parallax chat [args...]\n\n";
std::cout << "Access the chat interface from any non-scheduler computer.\n\n";
std::cout << "This command will:\n";
std::cout << " 1. Change to ~/parallax directory\n";
std::cout << " 2. Activate the Python virtual environment\n";
std::cout << " 3. Set proxy environment variables (if configured)\n";
std::cout << " 4. Execute 'parallax chat' with your arguments\n";
std::cout << " 5. Start chat server at http://localhost:3002\n\n";
std::cout << "Arguments:\n";
std::cout << " args... Arguments to pass to parallax chat "
"(optional)\n\n";
std::cout << "Options:\n";
std::cout << " --help, -h Show this help message\n\n";
std::cout << "Examples:\n";
std::cout
<< " parallax chat # Execute: parallax "
"chat (local area network)\n";
std::cout << " parallax chat -s scheduler-addr # Execute: parallax "
"chat -s scheduler-addr (public network)\n";
std::cout
<< " parallax chat -s 12D3KooWLX7MWuzi1Txa5LyZS4eTQ2tPaJijheH8faHggB9SxnBu\n";
std::cout << " # Connect to specific scheduler\n";
std::cout << " parallax chat --host 0.0.0.0 # Allow API access from other machines\n\n";
std::cout << "Note: All arguments will be passed to the built-in "
"parallax chat script\n";
std::cout << " in the Parallax Python virtual environment.\n";
std::cout << " After launching, visit http://localhost:3002 in your browser.\n";
}

std::string ModelChatCommand::BuildChatCommand(const CommandContext& context) {
std::ostringstream command_stream;

// Built-in execution of parallax chat
command_stream << "parallax chat";

// If there are user parameters, append them
for (const auto& arg : context.args) {
command_stream << " " << EscapeForShell(arg);
}

return command_stream.str();
}

} // namespace commands
} // namespace parallax
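`BuildChatCommand` passes every user argument through `EscapeForShell` before splicing it into the bash command line. A bash sketch of the standard single-quote escaping such a helper typically performs (the exact C++ implementation in `base_command.h` may differ):

```shell
# Wrap in single quotes; rewrite each embedded ' as '\'' (close quote,
# escaped quote, reopen quote) so bash sees exactly one literal argument
escape() {
  printf "'%s'" "$(printf '%s' "$1" | sed "s/'/'\\\\''/g")"
}

escape 'hello world'; echo        # 'hello world'
escape "it's here"; echo          # 'it'\''s here'

# Round-trip check: bash re-parses the escaped text into the original
eval "set -- $(escape "it's here")"
echo "$1"                         # it's here
```

Escaping at this layer prevents command injection through user-supplied arguments while still letting spaces and quotes reach `parallax chat` intact.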
23 changes: 23 additions & 0 deletions src/parallax/cli/commands/model_commands.h
@@ -126,5 +126,28 @@ class ModelJoinCommand : public WSLCommand<ModelJoinCommand> {
std::string BuildJoinCommand(const CommandContext& context);
};

// Chat command - access chat interface from non-scheduler computer
class ModelChatCommand : public WSLCommand<ModelChatCommand> {
public:
std::string GetName() const override { return "chat"; }
std::string GetDescription() const override {
return "Access chat interface from non-scheduler computer";
}

EnvironmentRequirements GetEnvironmentRequirements() {
EnvironmentRequirements req;
req.need_wsl = true;
req.sync_proxy = true;
return req;
}

CommandResult ValidateArgsImpl(CommandContext& context);
CommandResult ExecuteImpl(const CommandContext& context);
void ShowHelpImpl();

private:
std::string BuildChatCommand(const CommandContext& context);
};

} // namespace commands
} // namespace parallax
22 changes: 5 additions & 17 deletions src/parallax/environment/software_installer2.cpp
@@ -257,23 +257,11 @@ ComponentResult ParallaxProjectInstaller::Install() {
true);

if (!is_update_mode) {
// Only install sgl_kernel during first installation (use real-time
// output)
std::string install_sgl_cmd =
"cd ~/parallax && source ./venv/bin/activate && pip install "
"https://github.com/sgl-project/whl/releases/download/v0.3.7/"
"sgl_kernel-0.3.7+cu128-cp310-abi3-manylinux2014_x86_64.whl "
"--force-reinstall";
if (!proxy_url.empty()) {
install_sgl_cmd =
"cd ~/parallax && source ./venv/bin/activate && HTTP_PROXY=\"" +
proxy_url + "\" HTTPS_PROXY=\"" + proxy_url +
"\" pip install "
" https://github.com/sgl-project/whl/releases/download/v0.3.7/"
"sgl_kernel-0.3.7+cu128-cp310-abi3-manylinux2014_x86_64.whl "
"--force-reinstall";
}
commands.emplace_back("install_sgl_kernel", install_sgl_cmd, 600, true);
// Add the CUDA bin directory to the user's ~/.bashrc (only during first installation)
std::string add_cuda_env_cmd =
"grep -q '/usr/local/cuda-12.8/bin' ~/.bashrc || "
"echo 'export PATH=/usr/local/cuda-12.8/bin:$PATH' >> ~/.bashrc";
commands.emplace_back("add_cuda_env", add_cuda_env_cmd, 30, false);
}
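The grep-or-append command above is a common idempotency idiom: re-running the installer never duplicates the export line. A standalone sketch against a temporary file instead of `~/.bashrc`:

```shell
# Sketch of the grep-or-append idiom against a temp file; the
# single-quoted line keeps $PATH literal, exactly as the installer
# writes it into ~/.bashrc
profile=$(mktemp)
line='export PATH=/usr/local/cuda-12.8/bin:$PATH'

add_once() {
  # Append only when the marker string is not already present
  grep -q '/usr/local/cuda-12.8/bin' "$profile" || echo "$line" >> "$profile"
}

add_once   # first call appends the export line
add_once   # every later call is a no-op
grep -c '/usr/local/cuda-12.8/bin' "$profile"   # → 1
rm -f "$profile"
```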

// Execute command sequence