Latest Generation

NVIDIA H200

The world's most powerful GPU for large language model training, with 141 GB of HBM3e memory and 4.8 TB/s of memory bandwidth.

Model Specifications

  • Architecture: Hopper
  • VRAM: 141 GB HBM3e
  • Memory Bandwidth: 4.8 TB/s
  • CUDA Cores: 16,896
  • Tensor Cores: 528 (4th generation)
  • FP16 Performance: 1,979 TFLOPS
  • FP8 Performance: 3,958 TFLOPS
  • TDP: 700 W
  • Interconnect: NVLink 4.0 (900 GB/s)
  • PCIe: Gen5 x16
  • Form Factor: SXM5
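To make the 141 GB figure concrete, here is a rough weights-only sizing check (a sketch; real deployments also need room for activations, KV cache, and framework overhead, and `max_params_billions` is a hypothetical helper, not part of any NVIDIA tooling):

```python
# Rough model-size check for a single H200 (weights only; a sketch —
# activations, KV cache, and framework overhead come on top of this).
VRAM_GB = 141

def max_params_billions(bytes_per_param: float, vram_gb: float = VRAM_GB) -> float:
    """Upper bound on parameter count (in billions) whose weights fit in VRAM."""
    return vram_gb * 1e9 / bytes_per_param / 1e9

fp16 = max_params_billions(2)   # 2 bytes per FP16 weight
fp8  = max_params_billions(1)   # 1 byte per FP8 weight

print(f"FP16 weights only: ~{fp16:.1f}B parameters")
print(f"FP8  weights only: ~{fp8:.1f}B parameters")
```

By this bound, FP16 weights for a model of roughly 70B parameters fit on a single card, and FP8 roughly doubles that headroom.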

Pricing Plans

Flexible pricing options to match your workload requirements.

On-Demand

Pay as you go with no commitment

₹800/hour
  • 1x NVIDIA H200 GPU
  • 64 vCPUs
  • 512 GB RAM
  • 2 TB NVMe SSD
  • No minimum commitment
  • Start/stop anytime
Reserved 1 Month (Most Popular)

Save 15% with monthly commitment

₹340,000/month
  • 1x NVIDIA H200 GPU
  • 64 vCPUs
  • 512 GB RAM
  • 2 TB NVMe SSD
  • 15% discount
  • Priority support

Reserved 1 Year

Maximum savings with annual commitment

₹240,000/month
  • 1x NVIDIA H200 GPU
  • 64 vCPUs
  • 512 GB RAM
  • 2 TB NVMe SSD
  • 40% discount
  • Dedicated support
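To compare the plans above, here is a simple break-even calculation (a sketch that assumes the listed INR prices; check current rates before deciding):

```python
# Break-even between on-demand and reserved pricing (sketch; uses the
# listed INR prices — always confirm current rates with the provider).
ON_DEMAND_HOURLY = 800       # INR/hour
RESERVED_1MO     = 340_000   # INR/month
RESERVED_1YR     = 240_000   # INR/month, with an annual commitment

def monthly_cost(active_hours: int) -> dict:
    """Monthly INR cost of each plan for a given number of active GPU hours."""
    return {
        "on-demand":    ON_DEMAND_HOURLY * active_hours,
        "reserved-1mo": RESERVED_1MO,
        "reserved-1yr": RESERVED_1YR,
    }

# Reserved 1-month beats on-demand past 340,000 / 800 = 425 hours/month.
print(monthly_cost(425))
```

In short: below roughly 425 GPU-hours a month, on-demand is cheaper; above that, a reservation wins.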
Key Features

Why Choose NVIDIA H200

Unprecedented Memory

141GB HBM3e - 76% more memory than H100 for larger models and batch sizes.

Extreme Bandwidth

4.8 TB/s memory bandwidth for faster data movement and reduced bottlenecks.
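Bandwidth matters most for autoregressive decoding, where each generated token must stream the full set of weights from memory. A bandwidth-only ceiling can be sketched as follows (the 70B model size is an illustrative assumption; the estimate ignores KV-cache reads and kernel overhead):

```python
# Upper bound on single-stream decode throughput when generation is
# memory-bandwidth-bound (sketch; ignores KV-cache reads and overhead).
BANDWIDTH_TBS = 4.8  # H200 HBM3e bandwidth, TB/s

def max_tokens_per_sec(params_billions: float, bytes_per_param: float) -> float:
    """Each decoded token streams all weights once: tokens/s <= BW / model size."""
    model_bytes = params_billions * 1e9 * bytes_per_param
    return BANDWIDTH_TBS * 1e12 / model_bytes

print(f"70B @ FP16: ~{max_tokens_per_sec(70, 2):.0f} tok/s ceiling")
print(f"70B @ FP8:  ~{max_tokens_per_sec(70, 1):.0f} tok/s ceiling")
```

This is why halving the weight precision (FP16 to FP8) roughly doubles the decode ceiling on the same hardware.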

NVLink 4.0

900 GB/s GPU-to-GPU bandwidth for efficient multi-GPU scaling.

Transformer Engine

Automatic mixed precision with FP8 support for 2x throughput on transformers.

Use Cases

Large Language Model Training

Train models with hundreds of billions of parameters using 141GB of HBM3e memory.
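Training needs far more memory than inference because optimizer state sits alongside the weights. A common rule of thumb for mixed-precision Adam is about 16 bytes per parameter (a sketch; activation memory, which dominates at long sequence lengths, is excluded):

```python
# Rough per-model training-state memory for mixed-precision Adam
# (sketch; excludes activations, which dominate at long sequence lengths).
def training_state_gb(params_billions: float) -> float:
    """FP16 weights (2 B) + FP16 grads (2 B) + FP32 master weights and
    two Adam moments (4+4+4 = 12 B) ~= 16 bytes per parameter."""
    return params_billions * 1e9 * 16 / 1e9

for p in (7, 13, 70):
    verdict = "fits" if training_state_gb(p) <= 141 else "needs sharding"
    print(f"{p}B model: ~{training_state_gb(p):.0f} GB of state -> {verdict} on one H200")
```

So a 7B model trains comfortably on one card, while 13B and larger require sharding the state across multiple GPUs.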

Multi-Modal AI

Build vision-language models requiring massive memory for image and text processing.

Distributed Training

Scale across multiple H200 GPUs with 900 GB/s NVLink interconnect.

Inference at Scale

Deploy large models with sufficient memory for extended context windows.

Ready to Deploy NVIDIA H200?

Next-generation GPU with 141GB HBM3e for large-scale AI training.

NVIDIA H200 GPU | HOST360