Ideal for AI training and inference workloads
| Instance | vCPU | RAM | Storage | Price/hr |
|---|---|---|---|---|
| gpu.a100.1x | 12 | 120 GB | 500 GB NVMe | ₹75 |
| gpu.a100.2x | 24 | 240 GB | 1 TB NVMe | ₹150 |
| gpu.a100.4x | 48 | 480 GB | 2 TB NVMe | ₹295 |
| gpu.a100.8x | 96 | 960 GB | 4 TB NVMe | ₹580 |
High-memory variant: extended memory for large model training
| Instance | vCPU | RAM | Storage | Price/hr |
|---|---|---|---|---|
| gpu.a100-80.1x | 12 | 120 GB | 500 GB NVMe | ₹95 |
| gpu.a100-80.2x | 24 | 240 GB | 1 TB NVMe | ₹185 |
| gpu.a100-80.4x | 48 | 480 GB | 2 TB NVMe | ₹365 |
| gpu.a100-80.8x | 96 | 960 GB | 4 TB NVMe | ₹720 |
Latest generation for cutting-edge AI workloads
| Instance | vCPU | RAM | Storage | Price/hr |
|---|---|---|---|---|
| gpu.h100.1x | 16 | 200 GB | 1 TB NVMe | ₹165 |
| gpu.h100.2x | 32 | 400 GB | 2 TB NVMe | ₹325 |
| gpu.h100.4x | 64 | 800 GB | 4 TB NVMe | ₹640 |
| gpu.h100.8x | 128 | 1.6 TB | 8 TB NVMe | ₹1,250 |
Cost-effective for inference and light training
| Instance | vCPU | RAM | Storage | Price/hr |
|---|---|---|---|---|
| gpu.t4.1x | 4 | 16 GB | 200 GB NVMe | ₹25 |
| gpu.t4.2x | 8 | 32 GB | 400 GB NVMe | ₹48 |
| gpu.t4.4x | 16 | 64 GB | 800 GB NVMe | ₹92 |
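To see how the hourly rates above translate into monthly spend, here is a minimal sketch of the arithmetic. The rates are copied from the tables; the 730-hour month and 24×7 usage are assumptions, and any committed-use or volume discounts are ignored.

```python
# Rough monthly-cost estimator for the hourly rates listed above.
# Prices (in ₹/hr) are taken from the tables; 730 is the average
# number of hours in a month (assumed continuous 24x7 usage).
HOURLY_RATES = {
    "gpu.t4.1x": 25,
    "gpu.a100-80.1x": 95,
    "gpu.h100.1x": 165,
}

HOURS_PER_MONTH = 730


def monthly_cost(instance: str, hours: int = HOURS_PER_MONTH) -> int:
    """Estimated cost in rupees for running one instance for `hours` hours."""
    return HOURLY_RATES[instance] * hours


for name in HOURLY_RATES:
    print(f"{name}: ₹{monthly_cost(name):,}/month")
```

For example, a single gpu.t4.1x running around the clock comes to roughly ₹18,250/month at the listed rate.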
- High-bandwidth GPU-to-GPU communication for distributed training.
- Ready-to-use with CUDA, cuDNN, and popular ML frameworks.
- One-click Jupyter notebook access for interactive development.
- Save up to 70% with interruptible GPU instances for training.
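As a sketch of what the "up to 70%" interruptible discount means in practice (the exact discount varies with instance type and capacity, so 70% is taken here as the best case):

```python
# Effective hourly rate for an interruptible instance at the maximum
# advertised discount of 70%. On-demand rates come from the tables above.
SPOT_DISCOUNT = 0.70


def spot_rate(on_demand_rate: float) -> float:
    """Hourly rate (₹) after applying the interruptible discount."""
    return round(on_demand_rate * (1 - SPOT_DISCOUNT), 2)


# e.g. gpu.h100.1x at ₹165/hr on demand drops to ₹49.50/hr:
print(spot_rate(165))
```

Interruptible instances can be reclaimed at any time, so this rate only pays off for training jobs that checkpoint regularly and can resume after an interruption.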
- Train large language models with multi-GPU support.
- Image classification, object detection, segmentation.
- Real-time rendering and ray-tracing workloads.
- Molecular dynamics, weather simulation, HPC.
- Transcoding, AI upscaling, real-time processing.
- Deploy ML models for production inference.