53 changes: 53 additions & 0 deletions openhands/usage/advanced/configuration-options.mdx
@@ -413,6 +413,59 @@ All sandbox configuration options can be set as environment variables by prefixing
- Default: `""`
- Description: BrowserGym environment to use for evaluation

### GPU Support
- `enable_gpu`
- Type: `bool`
- Default: `false`
- Description: Enable GPU support in the runtime container
- Note: Requires the NVIDIA Container Toolkit (formerly `nvidia-docker2`) to be installed on the host

- `cuda_visible_devices`
- Type: `str`
- Default: `""`
- Description: Specify which GPU devices to make available to the container
- Examples:
- `""` (empty) - Mounts all available GPUs
- `"0"` - Mounts only GPU 0
- `"0,1"` - Mounts GPUs 0 and 1
- `"2,3,4"` - Mounts GPUs 2, 3, and 4

**Example GPU Configuration:**
```toml
[sandbox]
# Enable GPU support (an empty cuda_visible_devices mounts all GPUs)
enable_gpu = true

# Optionally restrict the container to specific GPU IDs
cuda_visible_devices = "0,1"

# Use a CUDA-enabled base image for GPU workloads
base_container_image = "nvidia/cuda:12.2.0-devel-ubuntu22.04"
```

**Prerequisites for GPU Support:**
1. NVIDIA GPU with drivers installed on the host
2. NVIDIA Container Toolkit installed (successor to the deprecated `nvidia-docker2` package):
```bash
# For Ubuntu/Debian: add NVIDIA's apt repository and install the toolkit
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
  sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
# Register the NVIDIA runtime with Docker and restart the daemon
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```

**Verifying GPU Access:**

After enabling GPU support, you can verify GPU access in OpenHands by asking the agent to run:
```bash
nvidia-smi
```

This should display your GPU information if GPU support is properly configured.
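
Beyond eyeballing the default table, `nvidia-smi` can also emit machine-readable output (for example `nvidia-smi --query-gpu=index,name --format=csv,noheader`). A small sketch of parsing that form follows; the sample output is illustrative, not from a real run:

```python
def parse_gpu_query(output: str) -> list[tuple[int, str]]:
    """Parse `nvidia-smi --query-gpu=index,name --format=csv,noheader` output
    into (index, name) pairs."""
    gpus = []
    for line in output.strip().splitlines():
        index, name = line.split(",", 1)
        gpus.append((int(index), name.strip()))
    return gpus


# Illustrative sample of the CSV query output
sample = "0, NVIDIA A100-SXM4-40GB\n1, NVIDIA A100-SXM4-40GB"
print(parse_gpu_query(sample))
# [(0, 'NVIDIA A100-SXM4-40GB'), (1, 'NVIDIA A100-SXM4-40GB')]
```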

## Security Configuration

The security configuration options are defined in the `[security]` section of the `config.toml` file.
4 changes: 2 additions & 2 deletions openhands/usage/environment-variables.mdx
@@ -112,8 +112,8 @@ These variables correspond to the `[sandbox]` section in `config.toml`:
| `SANDBOX_PAUSE_CLOSED_RUNTIMES` | boolean | `false` | Pause instead of stopping closed runtimes |
| `SANDBOX_CLOSE_DELAY` | integer | `300` | Delay before closing idle runtimes (seconds) |
| `SANDBOX_RM_ALL_CONTAINERS` | boolean | `false` | Remove all containers when stopping |
| `SANDBOX_ENABLE_GPU` | boolean | `false` | Enable GPU support (requires NVIDIA Container Toolkit) |
| `SANDBOX_CUDA_VISIBLE_DEVICES` | string | `""` | Specify GPU devices by ID (e.g., `"0"`, `"0,1"`, `"2,3,4"`). Empty string mounts all GPUs |
| `SANDBOX_VSCODE_PORT` | integer | auto | Specific port for VSCode server |
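
How such variables typically map onto the `[sandbox]` options can be sketched as follows. This is a hypothetical coercion helper shown only to illustrate the `SANDBOX_` prefix and boolean conventions; the real loader may differ:

```python
def load_sandbox_overrides(environ: dict[str, str]) -> dict[str, object]:
    """Collect SANDBOX_* variables into lowercase option names,
    coercing "true"/"false" strings to booleans."""
    options: dict[str, object] = {}
    for key, raw in environ.items():
        if not key.startswith("SANDBOX_"):
            continue
        name = key[len("SANDBOX_"):].lower()
        if raw.lower() in ("true", "false"):
            options[name] = raw.lower() == "true"
        else:
            options[name] = raw
    return options


env = {"SANDBOX_ENABLE_GPU": "true", "SANDBOX_CUDA_VISIBLE_DEVICES": "0,1"}
print(load_sandbox_overrides(env))
# {'enable_gpu': True, 'cuda_visible_devices': '0,1'}
```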

### Sandbox Environment Variables
43 changes: 43 additions & 0 deletions sdk/guides/agent-server/docker-sandbox.mdx
@@ -605,6 +605,49 @@ http://localhost:8012/vnc.html?autoconnect=1&resize=remote

---

## 4) GPU Support in Docker Sandbox

<Note>
GPU support requires the NVIDIA Container Toolkit (formerly `nvidia-docker2`) to be installed on the host system.
</Note>

The Docker sandbox supports GPU acceleration for compute-intensive workloads such as machine learning and large-scale data processing. Enable it by setting the `enable_gpu` parameter when creating a `DockerWorkspace` or `DockerDevWorkspace`.

### Basic GPU Configuration

```python
from openhands.workspace import DockerWorkspace

with DockerWorkspace(
server_image="ghcr.io/openhands/agent-server:latest-python",
host_port=8010,
platform="linux/amd64",
enable_gpu=True, # Enable GPU support
) as workspace:
# GPU is now available in the workspace
result = workspace.execute_command("nvidia-smi")
print(result.stdout)
```

### GPU with Custom Base Image

When using GPU-accelerated workloads, you may want to use a CUDA-enabled base image:

```python
from openhands.workspace import DockerDevWorkspace

with DockerDevWorkspace(
base_image="nvidia/cuda:12.2.0-devel-ubuntu22.04",
host_port=8010,
platform="linux/amd64",
enable_gpu=True,
target="source",
) as workspace:
# Workspace has CUDA toolkit and GPU access
result = workspace.execute_command("nvcc --version && nvidia-smi")
print(result.stdout)
```

## Next Steps

- **[Local Agent Server](/sdk/guides/agent-server/local-server)**