NVIDIA H200 GPU - 141 GB HBM3e, 4.8 TB/s memory bandwidth. One of the most powerful GPUs available for LLM training.
Industry standard for AI training and inference
Proven performance for ML workloads
Optimized for AI inference and visualization
AI inference and graphics in one GPU
Mainstream GPU for mixed AI and HPC workloads
Enterprise-grade GPUs for AI training, inference, and HPC workloads
NVIDIA RTX for visualization, rendering, and AI development
Professional visualization with ray tracing
Maximum memory for complex 3D models
Compare our most popular data center GPUs side by side.
| Feature | H200 | H100 | A100 | L40S |
|---|---|---|---|---|
| Architecture | Hopper | Hopper | Ampere | Ada Lovelace |
| VRAM | 141 GB | 80 GB | 80 GB | 48 GB |
| Memory Type | HBM3e | HBM3 | HBM2e | GDDR6 |
| Bandwidth | 4.8 TB/s | 3.35 TB/s | 2.0 TB/s | 864 GB/s |
| FP16 Tensor Core (with sparsity) | 1,979 TFLOPS | 1,979 TFLOPS | 624 TFLOPS | 733 TFLOPS |
| Best For | LLM Training | AI Training | ML Workloads | Inference + Graphics |

Ampere professional GPU for AI and rendering
Ada Lovelace for next-gen professional workloads
Blackwell professional GPU with 96GB GDDR7
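One practical way to use the VRAM figures above is a back-of-the-envelope check of whether a model's weights fit on a single card. A minimal sketch, assuming FP16/BF16 weights and a fixed headroom factor; the function names, model sizes, and 90% headroom are illustrative assumptions, not vendor guidance (real deployments also need memory for activations, KV cache, and framework overhead):

```python
# VRAM sizes (GB) from the comparison table above.
GPU_VRAM_GB = {"H200": 141, "H100": 80, "A100": 80, "L40S": 48}

def weights_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """Memory for model weights alone; 2 bytes/param = FP16 or BF16."""
    return params_billions * 1e9 * bytes_per_param / 1e9

def fits(model_b: float, gpu: str, headroom: float = 0.9) -> bool:
    """True if the weights fit within ~90% of the GPU's VRAM.
    Deliberately ignores activation and KV-cache memory."""
    return weights_gb(model_b) <= GPU_VRAM_GB[gpu] * headroom

print(fits(8, "L40S"))   # 16 GB of FP16 weights in 48 GB -> True
print(fits(70, "H200"))  # 140 GB of FP16 weights vs ~127 GB usable -> False
```

This is why a 70B-parameter model in FP16 is typically sharded across multiple 80 GB cards or quantized to 8-bit or lower before it runs on a single GPU.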