@@ -40,23 +40,6 @@ Remember that there is no one-size-fits-all answer, and the right GPU Instance t
 
 ### Scaleway GPU Instance types overview
 
-| | **[RENDER-S](https://www.scaleway.com/en/gpu-render-instances/)** | **[H100-1-80G](https://www.scaleway.com/en/h100-pcie-try-it-now/)** | **[H100-2-80G](https://www.scaleway.com/en/h100-pcie-try-it-now/)** |
-|----------------------------|----------------------------|----------------------------|----------------------------|
-| GPU Type | 1x [P100](https://www.nvidia.com/en-us/data-center/tesla-p100/) PCIe3 | 1x [H100](https://resources.nvidia.com/en-us-tensor-core/nvidia-tensor-core-gpu-datasheet) PCIe5 | 2x [H100](https://resources.nvidia.com/en-us-tensor-core/nvidia-tensor-core-gpu-datasheet) PCIe5 |
-| NVIDIA architecture | Pascal (2016) | Hopper (2022) | Hopper (2022) |
-| Tensor Cores | No | Yes | Yes |
-| Performance (FP16 Tensor Core training) | No Tensor Cores (9.3 TFLOPS FP32) | 1513 TFLOPS | 2x 1513 TFLOPS |
-| VRAM | 16 GB CoWoS HBM2 (Memory bandwidth: 732 GB/s) | 80 GB HBM2E (Memory bandwidth: 2 TB/s) | 2x 80 GB HBM2E (Memory bandwidth: 2 TB/s) |
-| CPU Type | Intel Xeon Gold 6148 (2.4 GHz) | AMD EPYC™ 9334 (2.7 GHz) | AMD EPYC™ 9334 (2.7 GHz) |
-| vCPUs | 10 | 24 | 48 |
-| RAM | 42 GB DDR3 | 240 GB DDR5 | 480 GB DDR5 |
-| Storage | Block/Local | Block | Block |
-| [Scratch Storage](/gpu/how-to/use-scratch-storage-h100-instances/) | No | Yes (3 TB NVMe) | Yes (6 TB NVMe) |
-| [MIG compatibility](/gpu/how-to/use-nvidia-mig-technology/) | No | Yes | Yes |
-| Bandwidth | 1 Gbps | 10 Gbps | 20 Gbps |
-| Better used for | Image / video encoding (4K) | 7B LLM fine-tuning / inference | 70B LLM fine-tuning / inference |
-| What they are not made for | Large models (especially LLMs) | Graphics or video encoding use cases | Graphics or video encoding use cases |
-
 | | **[B300-SXM-8-288G](https://www.scaleway.com/en/b300-sxm/)** |
 |----------------------------|----------------------------|
 | GPU type | 8x [B300-SXM](https://www.nvidia.com/en-us/data-center/dgx-b300/) |
@@ -73,8 +56,25 @@ Remember that there is no one-size-fits-all answer, and the right GPU Instance t
 | Bandwidth | 20 Gbps |
 | Network technology | [NVLink](/gpu/reference-content/understanding-nvidia-nvlink/) |
 | Better used for | Large-scale AI model training and inference workloads, especially large LLMs, multimodal AI, and heavy HPC tasks |
-
 | What they are not made for | Real-time graphics, video editing, or game-graphics workloads |
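+
+For multi-GPU Instance types like B300-SXM-8-288G, a quick post-provisioning sanity check is to confirm that every GPU is visible and that direct peer-to-peer access (the path [NVLink](/gpu/reference-content/understanding-nvidia-nvlink/) accelerates) is available. A minimal sketch, assuming PyTorch with CUDA support is installed on the Instance:
+
+```python
+import torch
+
+# List every GPU the runtime can see, with its name and VRAM.
+n = torch.cuda.device_count()
+print(f"GPUs visible: {n}")
+for i in range(n):
+    props = torch.cuda.get_device_properties(i)
+    print(f"cuda:{i}: {props.name}, {props.total_memory / 1e9:.0f} GB")
+
+# NVLink-connected GPUs support direct peer-to-peer copies; False here
+# means direct GPU-to-GPU transfers between this pair are unavailable.
+if n >= 2:
+    print("GPU0 <-> GPU1 peer access:", torch.cuda.can_device_access_peer(0, 1))
+```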
+
+| | **[RENDER-S](https://www.scaleway.com/en/gpu-render-instances/)** | **[H100-1-80G](https://www.scaleway.com/en/h100-pcie-try-it-now/)** | **[H100-2-80G](https://www.scaleway.com/en/h100-pcie-try-it-now/)** |
+|----------------------------|----------------------------|----------------------------|----------------------------|
+| GPU Type | 1x [P100](https://www.nvidia.com/en-us/data-center/tesla-p100/) PCIe3 | 1x [H100](https://resources.nvidia.com/en-us-tensor-core/nvidia-tensor-core-gpu-datasheet) PCIe5 | 2x [H100](https://resources.nvidia.com/en-us-tensor-core/nvidia-tensor-core-gpu-datasheet) PCIe5 |
+| NVIDIA architecture | Pascal (2016) | Hopper (2022) | Hopper (2022) |
+| Tensor Cores | No | Yes | Yes |
+| Performance (FP16 Tensor Core training) | No Tensor Cores (9.3 TFLOPS FP32) | 1513 TFLOPS | 2x 1513 TFLOPS |
+| VRAM | 16 GB CoWoS HBM2 (Memory bandwidth: 732 GB/s) | 80 GB HBM2E (Memory bandwidth: 2 TB/s) | 2x 80 GB HBM2E (Memory bandwidth: 2 TB/s) |
+| CPU Type | Intel Xeon Gold 6148 (2.4 GHz) | AMD EPYC™ 9334 (2.7 GHz) | AMD EPYC™ 9334 (2.7 GHz) |
+| vCPUs | 10 | 24 | 48 |
+| RAM | 42 GB DDR3 | 240 GB DDR5 | 480 GB DDR5 |
+| Storage | Block/Local | Block | Block |
+| [Scratch Storage](/gpu/how-to/use-scratch-storage-h100-instances/) | No | Yes (3 TB NVMe) | Yes (6 TB NVMe) |
+| [MIG compatibility](/gpu/how-to/use-nvidia-mig-technology/) | No | Yes | Yes |
+| Bandwidth | 1 Gbps | 10 Gbps | 20 Gbps |
+| Better used for | Image / video encoding (4K) | 7B LLM fine-tuning / inference | 70B LLM fine-tuning / inference |
+| What they are not made for | Large models (especially LLMs) | Graphics or video encoding use cases | Graphics or video encoding use cases |
+
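+One way to read the "Better used for" rows is as a rough memory budget: at FP16/BF16 precision, model weights alone need about 2 bytes per parameter, before counting KV cache, activations, or optimizer state. A back-of-envelope sketch (illustrative arithmetic only, not a sizing guarantee):
+
+```python
+# Rough weight-memory estimate for an LLM served in FP16/BF16.
+# Real VRAM usage is higher: KV cache, activations, and framework
+# overhead all add on top of the raw weights.
+def weights_gb(params_billion: float, bytes_per_param: float = 2.0) -> float:
+    return params_billion * bytes_per_param  # 1e9 params x bytes, over 1e9 B/GB
+
+for params, vram in [(7, 80), (70, 160)]:
+    print(f"{params}B model: ~{weights_gb(params):.0f} GB weights vs {vram} GB VRAM")
+
+# 7B  -> ~14 GB of weights, comfortable on one 80 GB H100 (H100-1-80G).
+# 70B -> ~140 GB of weights, hence two 80 GB H100s (H100-2-80G) or quantization.
+```
+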
 | | **[H100-SXM-2-80G](https://www.scaleway.com/en/h100-pcie-try-it-now/)** | **[H100-SXM-4-80G](https://www.scaleway.com/en/h100-pcie-try-it-now/)** | **[H100-SXM-8-80G](https://www.scaleway.com/en/h100-pcie-try-it-now/)** |
 |----------------------------|----------------------------|----------------------------|----------------------------|
 | GPU Type | 2x [H100-SXM](https://www.nvidia.com/en-us/data-center/h100/) | 4x [H100-SXM](https://www.nvidia.com/en-us/data-center/h100/) | 8x [H100-SXM](https://www.nvidia.com/en-us/data-center/h100/) |
@@ -125,7 +125,6 @@ Remember that there is no one-size-fits-all answer, and the right GPU Instance t
 | [MIG compatibility](/gpu/how-to/use-nvidia-mig-technology/) | No | No | No | No |
 | Bandwidth | 2.5 Gbps | 5 Gbps | 10 Gbps | 20 Gbps |
 | Use cases | GenAI (image/video) | GenAI (image/video) | 7B text-to-image model fine-tuning / inference | 70B text-to-image model fine-tuning / inference |
-| What they are not made for | | | | |
 
 <Message type="note">
   The service level objective (SLO) for all GPU Instance types is 99.5% availability. [Read the SLA](https://www.scaleway.com/en/virtual-instances/sla/).