From fd9c44ea618ddcb6a46795e3482e11f4b64a1f41 Mon Sep 17 00:00:00 2001 From: openhands Date: Sat, 6 Dec 2025 08:00:50 +0000 Subject: [PATCH 1/3] docs: Add comprehensive GPU support documentation - Add GPU configuration section to configuration-options.mdx with enable_gpu and cuda_visible_devices parameters - Enhance gui-mode.mdx with advanced GPU configuration examples - Update environment-variables.mdx with detailed GPU variable descriptions - Add complete GPU support guide to SDK docker-sandbox.mdx with code examples - Include prerequisites, verification steps, and use cases for GPU-enabled workspaces Co-authored-by: openhands --- .../usage/advanced/configuration-options.mdx | 53 +++++++ openhands/usage/environment-variables.mdx | 4 +- openhands/usage/run-openhands/gui-mode.mdx | 20 +++ sdk/guides/agent-server/docker-sandbox.mdx | 140 ++++++++++++++++++ 4 files changed, 215 insertions(+), 2 deletions(-) diff --git a/openhands/usage/advanced/configuration-options.mdx b/openhands/usage/advanced/configuration-options.mdx index 13f3523b..c2ef0ea9 100644 --- a/openhands/usage/advanced/configuration-options.mdx +++ b/openhands/usage/advanced/configuration-options.mdx @@ -413,6 +413,59 @@ All sandbox configuration options can be set as environment variables by prefixi - Default: `""` - Description: BrowserGym environment to use for evaluation +### GPU Support +- `enable_gpu` + - Type: `bool` + - Default: `false` + - Description: Enable GPU support in the runtime container + - Note: Requires NVIDIA Container Toolkit (nvidia-docker2) installed on the host + +- `cuda_visible_devices` + - Type: `str` + - Default: `""` + - Description: Specify which GPU devices to make available to the container + - Examples: + - `""` (empty) - Mounts all available GPUs + - `"0"` - Mounts only GPU 0 + - `"0,1"` - Mounts GPUs 0 and 1 + - `"2,3,4"` - Mounts GPUs 2, 3, and 4 + +**Example GPU Configuration:** +```toml +[sandbox] +# Enable GPU support with all GPUs +enable_gpu = true + +# Or 
enable GPU support with specific GPU IDs (uncomment to use instead)
+# enable_gpu = true
+# cuda_visible_devices = "0,1"
+
+# Use a CUDA-enabled base image for GPU workloads
+base_container_image = "nvidia/cuda:12.2.0-devel-ubuntu22.04"
+```
+
+**Prerequisites for GPU Support:**
+1. NVIDIA GPU with drivers installed on the host
+2. NVIDIA Container Toolkit installed:
+   ```bash
+   # For Ubuntu/Debian
+   curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
+     sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
+   curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
+     sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
+     sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
+   sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
+   sudo nvidia-ctk runtime configure --runtime=docker
+   sudo systemctl restart docker
+   ```
+
+**Verifying GPU Access:**
+
+After enabling GPU support, you can verify GPU access in OpenHands by asking the agent to run:
+```bash
+nvidia-smi
+```
+
+This should display your GPU information if GPU support is properly configured.
+
 ## Security Configuration
 
 The security configuration options are defined in the `[security]` section of the `config.toml` file.
 
diff --git a/openhands/usage/environment-variables.mdx b/openhands/usage/environment-variables.mdx index cf4efde3..616d76cb 100644 --- a/openhands/usage/environment-variables.mdx +++ b/openhands/usage/environment-variables.mdx @@ -112,8 +112,8 @@ These variables correspond to the `[sandbox]` section in `config.toml`: | `SANDBOX_PAUSE_CLOSED_RUNTIMES` | boolean | `false` | Pause instead of stopping closed runtimes | | `SANDBOX_CLOSE_DELAY` | integer | `300` | Delay before closing idle runtimes (seconds) | | `SANDBOX_RM_ALL_CONTAINERS` | boolean | `false` | Remove all containers when stopping | -| `SANDBOX_ENABLE_GPU` | boolean | `false` | Enable GPU support | -| `SANDBOX_CUDA_VISIBLE_DEVICES` | string | `""` | Specify GPU devices by ID | +| `SANDBOX_ENABLE_GPU` | boolean | `false` | Enable GPU support (requires NVIDIA Container Toolkit) | +| `SANDBOX_CUDA_VISIBLE_DEVICES` | string | `""` | Specify GPU devices by ID (e.g., `"0"`, `"0,1"`, `"2,3,4"`). Empty string mounts all GPUs | | `SANDBOX_VSCODE_PORT` | integer | auto | Specific port for VSCode server | ### Sandbox Environment Variables diff --git a/openhands/usage/run-openhands/gui-mode.mdx b/openhands/usage/run-openhands/gui-mode.mdx index 5a0ffaa6..42c92095 100644 --- a/openhands/usage/run-openhands/gui-mode.mdx +++ b/openhands/usage/run-openhands/gui-mode.mdx @@ -57,6 +57,26 @@ openhands serve --gpu --mount-cwd - NVIDIA GPU drivers must be installed on your host system - [NVIDIA Container Toolkit (nvidia-docker2)](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html) must be installed and configured +**Advanced GPU Configuration:** + +For more control over GPU access, you can use environment variables or a `config.toml` file: + +```bash +# Enable GPU with specific GPUs via environment variable +export SANDBOX_ENABLE_GPU=true +export SANDBOX_CUDA_VISIBLE_DEVICES="0,1" # Only use GPUs 0 and 1 +openhands serve +``` + +Or in your `config.toml`: +```toml +[sandbox] +enable_gpu = true 
+cuda_visible_devices = "0,1" # Specify which GPUs to use +``` + +See the [Configuration Options](/openhands/usage/advanced/configuration-options#gpu-support) for more details on GPU configuration. + #### Requirements Before using the `openhands serve` command, ensure that: diff --git a/sdk/guides/agent-server/docker-sandbox.mdx b/sdk/guides/agent-server/docker-sandbox.mdx index e1b2ca6d..d8b5fb0a 100644 --- a/sdk/guides/agent-server/docker-sandbox.mdx +++ b/sdk/guides/agent-server/docker-sandbox.mdx @@ -605,6 +605,146 @@ http://localhost:8012/vnc.html?autoconnect=1&resize=remote --- +## 4) GPU Support in Docker Sandbox + + +GPU support requires NVIDIA Container Toolkit (nvidia-docker2) to be installed on the host system. + + +The Docker sandbox supports GPU acceleration for compute-intensive tasks like machine learning, data processing, and GPU-accelerated applications. Enable GPU support by setting the `enable_gpu` parameter when creating a `DockerWorkspace` or `DockerDevWorkspace`. + +### Basic GPU Configuration + +```python +from openhands.workspace import DockerWorkspace + +with DockerWorkspace( + server_image="ghcr.io/openhands/agent-server:latest-python", + host_port=8010, + platform="linux/amd64", + enable_gpu=True, # Enable GPU support +) as workspace: + # GPU is now available in the workspace + result = workspace.execute_command("nvidia-smi") + print(result.stdout) +``` + +### GPU with Custom Base Image + +When using GPU-accelerated workloads, you may want to use a CUDA-enabled base image: + +```python +from openhands.workspace import DockerDevWorkspace + +with DockerDevWorkspace( + base_image="nvidia/cuda:12.2.0-devel-ubuntu22.04", + host_port=8010, + platform="linux/amd64", + enable_gpu=True, + target="source", +) as workspace: + # Workspace has CUDA toolkit and GPU access + result = workspace.execute_command("nvcc --version && nvidia-smi") + print(result.stdout) +``` + +### Prerequisites for GPU Support + +1. 
**NVIDIA GPU** with drivers installed on the host system +2. **NVIDIA Container Toolkit** (nvidia-docker2) installed: + +```bash +# For Ubuntu/Debian +distribution=$(. /etc/os-release;echo $ID$VERSION_ID) +curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - +curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \ + sudo tee /etc/apt/sources.list.d/nvidia-docker.list +sudo apt-get update && sudo apt-get install -y nvidia-docker2 +sudo systemctl restart docker +``` + +3. **Docker runtime** configured with NVIDIA runtime support + +### Verifying GPU Access + +After enabling GPU support, verify GPU access in the workspace: + +```python +from openhands.workspace import DockerWorkspace + +with DockerWorkspace( + server_image="ghcr.io/openhands/agent-server:latest-python", + enable_gpu=True, +) as workspace: + # Check GPU availability + result = workspace.execute_command("nvidia-smi") + + if result.exit_code == 0: + print("✅ GPU is accessible:") + print(result.stdout) + else: + print("❌ GPU not accessible:") + print(result.stderr) +``` + +### GPU-Enabled Use Cases + +**Machine Learning Training:** +```python +from openhands.sdk import LLM, Conversation +from openhands.tools.preset.default import get_default_agent +from openhands.workspace import DockerWorkspace +from pydantic import SecretStr +import os + +llm = LLM( + usage_id="agent", + model="anthropic/claude-sonnet-4-5-20250929", + api_key=SecretStr(os.getenv("LLM_API_KEY")), +) + +with DockerWorkspace( + server_image="ghcr.io/openhands/agent-server:latest-python", + enable_gpu=True, +) as workspace: + agent = get_default_agent(llm=llm, cli_mode=True) + conversation = Conversation(agent=agent, workspace=workspace) + + conversation.send_message( + "Install PyTorch with CUDA support and verify GPU is available. " + "Then create a simple neural network training script that uses GPU." 
+ ) + conversation.run() + conversation.close() +``` + +**GPU-Accelerated Data Processing:** +```python +with DockerWorkspace( + server_image="ghcr.io/openhands/agent-server:latest-python", + enable_gpu=True, +) as workspace: + agent = get_default_agent(llm=llm, cli_mode=True) + conversation = Conversation(agent=agent, workspace=workspace) + + conversation.send_message( + "Install RAPIDS cuDF and process the CSV file using GPU acceleration. " + "Compare performance with pandas CPU processing." + ) + conversation.run() + conversation.close() +``` + +### Notes + +- When `enable_gpu=True`, the workspace mounts **all available GPUs** into the container +- Currently, the SDK does not support selective GPU mounting (e.g., mounting only specific GPU IDs) +- For selective GPU control, consider using the main OpenHands application with `SANDBOX_CUDA_VISIBLE_DEVICES` configuration +- GPU support adds the `--gpus all` flag to the Docker container runtime +- Ensure your Docker daemon has proper permissions to access NVIDIA devices + +--- + ## Next Steps - **[Local Agent Server](/sdk/guides/agent-server/local-server)** From dccede7bdb65fc4d7b14c0a24918dd552d27dab1 Mon Sep 17 00:00:00 2001 From: Graham Neubig Date: Thu, 18 Dec 2025 16:41:42 -0500 Subject: [PATCH 2/3] Update openhands/usage/run-openhands/gui-mode.mdx --- openhands/usage/run-openhands/gui-mode.mdx | 20 -------------------- 1 file changed, 20 deletions(-) diff --git a/openhands/usage/run-openhands/gui-mode.mdx b/openhands/usage/run-openhands/gui-mode.mdx index 42c92095..5a0ffaa6 100644 --- a/openhands/usage/run-openhands/gui-mode.mdx +++ b/openhands/usage/run-openhands/gui-mode.mdx @@ -57,26 +57,6 @@ openhands serve --gpu --mount-cwd - NVIDIA GPU drivers must be installed on your host system - [NVIDIA Container Toolkit (nvidia-docker2)](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html) must be installed and configured -**Advanced GPU Configuration:** - -For more control over GPU 
access, you can use environment variables or a `config.toml` file: - -```bash -# Enable GPU with specific GPUs via environment variable -export SANDBOX_ENABLE_GPU=true -export SANDBOX_CUDA_VISIBLE_DEVICES="0,1" # Only use GPUs 0 and 1 -openhands serve -``` - -Or in your `config.toml`: -```toml -[sandbox] -enable_gpu = true -cuda_visible_devices = "0,1" # Specify which GPUs to use -``` - -See the [Configuration Options](/openhands/usage/advanced/configuration-options#gpu-support) for more details on GPU configuration. - #### Requirements Before using the `openhands serve` command, ensure that: From 110dabea2514c70943597227accfaacca9b4f0f2 Mon Sep 17 00:00:00 2001 From: Graham Neubig Date: Thu, 18 Dec 2025 16:41:49 -0500 Subject: [PATCH 3/3] Update sdk/guides/agent-server/docker-sandbox.mdx --- sdk/guides/agent-server/docker-sandbox.mdx | 97 ---------------------- 1 file changed, 97 deletions(-) diff --git a/sdk/guides/agent-server/docker-sandbox.mdx b/sdk/guides/agent-server/docker-sandbox.mdx index d8b5fb0a..e4496a53 100644 --- a/sdk/guides/agent-server/docker-sandbox.mdx +++ b/sdk/guides/agent-server/docker-sandbox.mdx @@ -648,103 +648,6 @@ with DockerDevWorkspace( print(result.stdout) ``` -### Prerequisites for GPU Support - -1. **NVIDIA GPU** with drivers installed on the host system -2. **NVIDIA Container Toolkit** (nvidia-docker2) installed: - -```bash -# For Ubuntu/Debian -distribution=$(. /etc/os-release;echo $ID$VERSION_ID) -curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - -curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \ - sudo tee /etc/apt/sources.list.d/nvidia-docker.list -sudo apt-get update && sudo apt-get install -y nvidia-docker2 -sudo systemctl restart docker -``` - -3. 
**Docker runtime** configured with NVIDIA runtime support - -### Verifying GPU Access - -After enabling GPU support, verify GPU access in the workspace: - -```python -from openhands.workspace import DockerWorkspace - -with DockerWorkspace( - server_image="ghcr.io/openhands/agent-server:latest-python", - enable_gpu=True, -) as workspace: - # Check GPU availability - result = workspace.execute_command("nvidia-smi") - - if result.exit_code == 0: - print("✅ GPU is accessible:") - print(result.stdout) - else: - print("❌ GPU not accessible:") - print(result.stderr) -``` - -### GPU-Enabled Use Cases - -**Machine Learning Training:** -```python -from openhands.sdk import LLM, Conversation -from openhands.tools.preset.default import get_default_agent -from openhands.workspace import DockerWorkspace -from pydantic import SecretStr -import os - -llm = LLM( - usage_id="agent", - model="anthropic/claude-sonnet-4-5-20250929", - api_key=SecretStr(os.getenv("LLM_API_KEY")), -) - -with DockerWorkspace( - server_image="ghcr.io/openhands/agent-server:latest-python", - enable_gpu=True, -) as workspace: - agent = get_default_agent(llm=llm, cli_mode=True) - conversation = Conversation(agent=agent, workspace=workspace) - - conversation.send_message( - "Install PyTorch with CUDA support and verify GPU is available. " - "Then create a simple neural network training script that uses GPU." - ) - conversation.run() - conversation.close() -``` - -**GPU-Accelerated Data Processing:** -```python -with DockerWorkspace( - server_image="ghcr.io/openhands/agent-server:latest-python", - enable_gpu=True, -) as workspace: - agent = get_default_agent(llm=llm, cli_mode=True) - conversation = Conversation(agent=agent, workspace=workspace) - - conversation.send_message( - "Install RAPIDS cuDF and process the CSV file using GPU acceleration. " - "Compare performance with pandas CPU processing." 
- ) - conversation.run() - conversation.close() -``` - -### Notes - -- When `enable_gpu=True`, the workspace mounts **all available GPUs** into the container -- Currently, the SDK does not support selective GPU mounting (e.g., mounting only specific GPU IDs) -- For selective GPU control, consider using the main OpenHands application with `SANDBOX_CUDA_VISIBLE_DEVICES` configuration -- GPU support adds the `--gpus all` flag to the Docker container runtime -- Ensure your Docker daemon has proper permissions to access NVIDIA devices - ---- - ## Next Steps - **[Local Agent Server](/sdk/guides/agent-server/local-server)**