How the installer finds your GPU, how to override it, and how to prove the transcode is actually running on the hardware you expect.
Relevant services: `jellyfin` and `tdarr` both carry `hwaccelSupport: true` in the catalog.
The doctor's vendor probe lives in `src/platform/gpu.ts`. On wizard start and on every `arrstack doctor` run, it does three things:

- `lspci -nn`, parsed for display-class devices (`[0300]`, `[0302]`). The last `[xxxx:yyyy]` on each matching line is the PCI vendor:device pair.
- Vendor mapping of the PCI vendor ID: `8086` -> intel, `1002` -> amd, `10de` -> nvidia, anything else -> unknown.
- Device probes:
  - Intel / AMD: `existsSync('/dev/dri/renderD128')` to confirm the kernel exposed a render node.
  - NVIDIA: `nvidia-ctk --version` to confirm `nvidia-container-toolkit` is installed.
The result gets written to `state.gpu` as `{ vendor, device_name, render_gid?, video_gid? }`, and the Jellyfin service gets the right device mounts and group memberships from `src/renderer/jellyfin-encoding.ts`.
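If you want to preview the decision the wizard will make, the same logic can be approximated in shell. This is a minimal sketch, not the real `src/platform/gpu.ts` code; it only looks at the first display device:

```bash
# Approximate the doctor's vendor probe by hand (first display device only).
line=$(lspci -nn | grep -E '\[03(00|02)\]' | head -n 1)
pair=$(grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' <<<"$line" | tail -n 1)
case "${pair:1:4}" in          # vendor half of [xxxx:yyyy]
  8086) echo "vendor: intel"   ;;
  1002) echo "vendor: amd"     ;;
  10de) echo "vendor: nvidia"  ;;
  *)    echo "vendor: unknown ($pair)" ;;
esac
# Device probes, mirroring the doctor's checks:
[ -e /dev/dri/renderD128 ] && echo "render node: present"
nvidia-ctk --version >/dev/null 2>&1 && echo "nvidia-container-toolkit: installed"
```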
Verify what the probe found:
```bash
jq .gpu ~/arrstack/state.json
lspci -nn | grep -E '\[03(00|02)\]'
ls -l /dev/dri/
nvidia-ctk --version 2>/dev/null || echo "no nvidia-container-toolkit"
```

If detection got it wrong, or you want to disable acceleration:
```bash
arrstack install --fresh
```

The GPU step in the wizard lets you pick `intel`, `amd`, `nvidia`, or `none` explicitly. For an existing install, hand-edit `state.json`:
```bash
jq '.gpu.vendor = "intel"' ~/arrstack/state.json > /tmp/s && mv /tmp/s ~/arrstack/state.json
arrstack install --resume
```

Setting `"none"` drops every GPU-specific mount and group from Jellyfin/Tdarr. It is the right choice if you want to keep things simple and you have CPU headroom.
The fastest single check: is ffmpeg running with a hardware acceleration flag? Start a stream that forces a transcode (e.g. pick a lower bitrate in the Jellyfin web player), then:
```bash
docker exec jellyfin ps auxf | grep ffmpeg
```

If Jellyfin's container is named differently on your install (the compose default is `arrstack-jellyfin-1`), substitute the name. `ps auxf` prints the process tree with full argv; `grep ffmpeg` narrows it to the active transcodes. One ffmpeg process per concurrent stream is expected.
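If you are not sure of the container name, `docker ps` can filter for it:

```bash
# List running containers whose name contains "jellyfin":
docker ps --filter name=jellyfin --format '{{.Names}}'
```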
Scan the command line for the first `-hwaccel <vendor>` flag, which appears before the `-i <input>` argument. The flag tells you exactly which code path ffmpeg took:
| Flag in argv | What it means | Expected on |
|---|---|---|
| `-hwaccel vaapi` + `-vaapi_device /dev/dri/renderD128` | VAAPI decode on the iGPU; encode uses `h264_vaapi` or `hevc_vaapi`. | Intel Quick Sync, AMD |
| `-hwaccel qsv` | Intel Media SDK path (newer Jellyfin builds). Also uses `/dev/dri/renderD128`. | Intel only |
| `-hwaccel cuda` or `-hwaccel nvdec` | NVDEC decode on the NVIDIA GPU; encode uses `h264_nvenc` or `hevc_nvenc`. | NVIDIA |
| No `-hwaccel` flag, plus `-c:v libx264` or `libx265` | Software transcode. GPU is not involved. | Any (fallback) |
Also look at the `-c:v` argument near the end of the argv: it names the encoder. `h264_vaapi`, `h264_nvenc`, and `h264_qsv` are hardware encoders. `libx264` and `libx265` are CPU-only.
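The full argv is long; a one-liner like this pulls out just the two flags that matter (adjust the container name to yours):

```bash
# Print only the -hwaccel and -c:v arguments of running ffmpeg processes.
docker exec jellyfin ps -eo args | grep ffmpeg | \
  grep -oE '(-hwaccel [^ ]+|-c:v [^ ]+)'
```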
If the grep returns nothing, no transcode is running: either the client is direct-playing (no transcode needed) or the stream has not started yet. Force a transcode by picking a quality lower than the source in the web player, then re-run the command.
If the flags look right but CPU usage stays high, the driver initialized the context but fell back to software mid-stream. Check the transcode log:

```bash
docker exec jellyfin tail -n 200 /config/log/FFmpeg.Transcode-$(date +%Y-%m-%d).log
```

Look for `Failed to initialise` or `No usable` lines. The later sections on Intel/AMD/NVIDIA cover the common root causes.
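A narrower grep saves scrolling through 200 lines; this assumes today's log file at the default path shown above:

```bash
# Surface only the failure lines from today's transcode log.
docker exec jellyfin sh -c \
  'grep -inE "failed|error|no usable" "/config/log/FFmpeg.Transcode-$(date +%Y-%m-%d).log" | tail -n 20'
```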
Intel first. The prerequisites:

- A recent Intel iGPU (Broadwell or later for full QSV, Skylake or later for H.265).
- `/dev/dri/renderD128` exists on the host.
- The user running Docker is in the `render` or `video` group.
Check them:
```bash
ls -l /dev/dri/
# crw-rw---- 1 root render 226, 128 ...   <- note "render" group
getent group render video
id -nG $USER   # must include 'render' (or 'video' on some distros)
```

If the `render` group is missing:
```bash
sudo usermod -aG render $USER
newgrp render
```

Start a transcode (play a file at a lower quality on a browser client). Then:
```bash
docker exec arrstack-jellyfin-1 ps -ef | grep ffmpeg
```

Expect to see flags like `-hwaccel vaapi -hwaccel_output_format vaapi -vaapi_device /dev/dri/renderD128`. VAAPI is what Jellyfin uses for Intel on Linux.
Watch the iGPU from the host:
```bash
sudo apt install -y intel-gpu-tools   # Ubuntu/Debian
sudo dnf install -y intel-gpu-tools   # Fedora
sudo intel_gpu_top
```

The "Video" and "VideoEnhance" engines should sit above 0% while the transcode runs. If they stay at zero, ffmpeg is decoding on the CPU.
- `VAAPI hwaccel requested but none available`: render node missing or permission denied. Run `ls -l /dev/dri/renderD128` from inside the container, as shown below.
- `Failed to initialise VAAPI connection: -1`: the `intel-media-va-driver` (or `intel-media-va-driver-non-free` for newer codecs) package is missing on the host. The driver is bind-mounted into the container.
- Black video / green frames: your CPU is too old for the codec you enabled. Disable HEVC 10-bit decode in Jellyfin's playback settings.
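Checking from inside the container separates a mount problem from a host problem; the container name assumes the compose default:

```bash
# The render node must be visible inside the container with group read/write.
docker exec arrstack-jellyfin-1 ls -l /dev/dri/renderD128
# The container user's groups should include the render node's GID.
docker exec arrstack-jellyfin-1 id
```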
AMD prerequisites mirror Intel's: a render node plus the Mesa VAAPI userspace. On Ubuntu/Debian:

```bash
sudo apt install -y mesa-va-drivers libva-drm2 libva2 vainfo
ls -l /dev/dri/      # renderD128 should exist
vainfo | head -n 20  # profiles list should mention VAProfileH264* etc.
```

Then in Jellyfin Dashboard -> Playback:
- Hardware acceleration: `VAAPI`
- VA API device: `/dev/dri/renderD128`
- Enable your codecs.
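Once a transcode has run with these settings, the transcode log should confirm the VAAPI path; this assumes the log location used earlier:

```bash
# List recent transcode logs that mention vaapi (case-insensitive).
docker exec arrstack-jellyfin-1 sh -c \
  'grep -il vaapi /config/log/FFmpeg.Transcode-*.log | tail -n 3'
```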
AMD on Fedora needs the RPM Fusion `mesa-va-drivers-freeworld` package for non-free codec support:
```bash
sudo dnf install -y https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm
sudo dnf swap mesa-va-drivers mesa-va-drivers-freeworld
vainfo | grep VAProfile
```

On SELinux-enforcing Fedora, the Jellyfin bind mounts must carry the `:Z` label (the installer does this). If you bypassed it:
```bash
sudo chcon -R -t container_file_t ~/arrstack/config/jellyfin
```

Same process as Intel: `docker exec arrstack-jellyfin-1 ps -ef | grep ffmpeg` and look for `-hwaccel vaapi`. From the host:
```bash
sudo dnf install -y radeontop   # Fedora
sudo apt install -y radeontop   # Debian/Ubuntu
sudo radeontop
```

The "VGT" (Vertex Grouper / Tessellator) and "EE" (Event Engine) bars stay busy during a transcode.
NVIDIA needs `nvidia-container-toolkit` on the host. Ubuntu / Debian:
```bash
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
  sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -fsSL https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt update
sudo apt install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```

Fedora:
```bash
curl -s -L https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo \
  | sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo
sudo dnf install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```

Confirm the toolkit is visible to Docker:
```bash
docker run --rm --gpus all nvidia/cuda:12.4.0-base-ubuntu22.04 nvidia-smi
```

If that prints the GPU, the host is ready.
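As a secondary check, the Docker daemon should now list the runtime that `nvidia-ctk runtime configure` registered:

```bash
# The Runtimes map should contain an "nvidia" entry.
docker info --format '{{json .Runtimes}}' | grep -o nvidia
```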
Now update the stack so Jellyfin gets the `deploy.resources.reservations.devices` block:

```bash
arrstack update
docker compose -f ~/arrstack/docker-compose.yml up -d --force-recreate jellyfin
```

Then verify from inside the container:

```bash
docker exec arrstack-jellyfin-1 nvidia-smi            # should list the GPU
docker exec arrstack-jellyfin-1 ps -ef | grep ffmpeg  # look for -hwaccel cuda or nvdec
```

From the host, watch GPU utilization while a transcode runs:
```bash
nvidia-smi dmon -s u -c 30   # 30 samples of utilization
# or:
watch -n 1 nvidia-smi
```

The `enc` and `dec` columns from `nvidia-smi dmon -s u` correspond to NVENC (encode) and NVDEC (decode). Both should move above zero during a transcode.
- `could not select device driver "" with capabilities: [[gpu]]`: toolkit not configured. Re-run `sudo nvidia-ctk runtime configure --runtime=docker && sudo systemctl restart docker`.
- `Failed to initialise NVENC`: driver too old for the codec. `nvidia-smi` shows the driver version; cross-reference it against the NVIDIA Video Codec SDK support matrix.
- `No NVIDIA GPU found` inside the container despite `nvidia-smi` working on the host: the `deploy.resources.reservations.devices` block is missing from Jellyfin's service. Run `arrstack update`.
- Consumer GeForce cards have a concurrent NVENC session limit (typically 3 or 5). Hitting the cap makes new sessions fall back to CPU; see the session-count query below.
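To see whether you are bumping into the session cap, `nvidia-smi` can report the live encoder session count from the host:

```bash
# Live NVENC session count and average FPS across all encode sessions.
nvidia-smi --query-gpu=encoder.stats.sessionCount,encoder.stats.averageFps --format=csv
```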
Is the transcode really running on the hardware you expect? This is the question that matters. Three ways to confirm, in order of certainty:
1. ffmpeg command line. Inside the container:

   ```bash
   docker exec arrstack-jellyfin-1 ps -ef | grep -v grep | grep ffmpeg
   ```

   Intel/AMD wants `-hwaccel vaapi`. NVIDIA wants `-hwaccel cuda` or `-hwaccel nvdec` and a `-c:v h264_nvenc` (or `hevc_nvenc`) somewhere after the `-i`. A CPU transcode would show `libx264`/`libx265` with no hwaccel flag.

2. Jellyfin dashboard. Dashboard -> Playback -> active streams. Each stream row lists the encoder used (e.g. `h264_nvenc` or `h264_vaapi`). If it says `libx264`, the GPU is not involved.

3. GPU utilization from the host while the stream runs:
   - Intel: `sudo intel_gpu_top` (Video engine > 0)
   - AMD: `sudo radeontop` (EE/VGT movement)
   - NVIDIA: `nvidia-smi dmon -s u -c 10` (enc/dec > 0)
If ffmpeg shows the hwaccel flag but GPU utilization stays at zero, the driver userspace is falling back silently. Check the Jellyfin ffmpeg log for a `Failed to initialise` line:

```bash
docker exec arrstack-jellyfin-1 \
  tail -n 200 /config/log/FFmpeg.Transcode-$(date +%Y-%m-%d).log
```

Hardware transcoding is not always worth the debugging effort:
- Direct Play covers most cases if your clients support the source codec. Match client capabilities instead of forcing a transcode.
- A modern CPU can comfortably transcode one or two 1080p streams with `libx264 veryfast`. The GPU path pays off at 3+ concurrent streams, 4K sources, or HDR tone-mapping.
- On a laptop / mini-PC with a single iGPU, GPU transcoding warms the entire package and can throttle other services.
Set `gpu.vendor: "none"` in state and run `arrstack install --resume` to opt out cleanly.
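Mirroring the earlier `jq` edit, the opt-out looks like this:

```bash
jq '.gpu.vendor = "none"' ~/arrstack/state.json > /tmp/s && mv /tmp/s ~/arrstack/state.json
arrstack install --resume
```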