NVIDIA H100 NVL
Delivery and Shipping
Shipping costs are calculated at checkout based on the delivery speed you select and the size and weight of the items in your order. Please note that our daily cutoff for same-day shipping is 12 PM Eastern Standard Time. If you need rush or same-day processing after this cutoff, please contact us; same-day shipping may still be possible. You can reach us by phone at 781-272-0967 or by email at frontdesk@c3aero.com.
Please see our Shipping Policy for more information.
Please contact us for availability.
The NVIDIA® H100 NVL supercharges large language model inference in mainstream PCIe-based server systems. With increased raw performance, bigger, faster HBM3 memory, and NVIDIA NVLink™ connectivity via bridges, mainstream systems with H100 NVL outperform NVIDIA A100 Tensor Core systems by up to 5X on Llama 2 70B.
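For teams planning a deployment, a quick way to confirm what the card actually exposes to software is to query the CUDA runtime. The sketch below is illustrative only and assumes a host with the NVIDIA driver and CUDA toolkit installed; the memory figure the driver reports is typically slightly below the marketed 94 GB because of ECC and driver reservations.

```cuda
// Hypothetical sanity check: list visible GPUs and print the properties
// most relevant to LLM inference sizing (name, memory, SM count).
// Build with: nvcc query_gpu.cu -o query_gpu
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // An H100 NVL reports compute capability 9.0 and roughly 94 GB of HBM3;
        // the exact usable figure depends on ECC and driver reservations.
        printf("GPU %d: %s\n", i, prop.name);
        printf("  Compute capability: %d.%d\n", prop.major, prop.minor);
        printf("  Global memory:      %.1f GB\n", prop.totalGlobalMem / 1e9);
        printf("  Multiprocessors:    %d\n", prop.multiProcessorCount);
    }
    return 0;
}
```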
Technical Specifications
| Specification | Value |
| --- | --- |
| FP64 | 30 TFLOPS |
| FP64 Tensor Core | 60 TFLOPS |
| FP32 | 60 TFLOPS |
| TF32 Tensor Core | 835 TFLOPS (with sparsity) |
| BFLOAT16 Tensor Core | 1,671 TFLOPS (with sparsity) |
| FP16 Tensor Core | 1,671 TFLOPS (with sparsity) |
| FP8 Tensor Core | 3,341 TFLOPS (with sparsity) |
| INT8 Tensor Core | 3,341 TOPS |
| GPU Memory | 94 GB HBM3 |
| GPU Memory Bandwidth | 3.9 TB/s |
| Maximum Thermal Design Power | 350–400 W (configurable) |
| NVIDIA AI Enterprise | Included |
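As a rough illustration of what these figures mean for LLM serving, the sketch below runs back-of-the-envelope arithmetic with the numbers from the table: whether FP8 weights for a hypothetical 70B-parameter model fit in 94 GB of HBM3, and the roofline break-even point implied by the FP8 throughput and memory bandwidth. The 1-byte-per-parameter assumption and the omission of KV-cache and activation memory are simplifications, not vendor guidance.

```cuda
// Back-of-the-envelope sizing from the spec table above (illustrative only;
// real inference frameworks add KV-cache, activation, and runtime overhead).
// Build with: nvcc sizing.cu -o sizing
#include <cstdio>

int main() {
    const double params_billion  = 70.0;   // assumed model size, e.g. a 70B LLM
    const double bytes_per_param = 1.0;    // FP8 weights
    const double hbm_gb          = 94.0;   // GPU memory from the spec table
    const double fp8_tflops      = 3341.0; // FP8 Tensor Core, with sparsity
    const double bandwidth_tbs   = 3.9;    // HBM3 bandwidth from the spec table

    double weights_gb = params_billion * bytes_per_param; // ~70 GB of weights
    printf("FP8 weights: %.0f GB of %.0f GB HBM3 (%.0f GB left over)\n",
           weights_gb, hbm_gb, hbm_gb - weights_gb);

    // Roofline break-even: FP8 ops per byte moved before compute becomes the limit.
    double flops_per_byte = (fp8_tflops * 1e12) / (bandwidth_tbs * 1e12);
    printf("Compute-bound above ~%.0f FP8 ops per byte of HBM traffic\n",
           flops_per_byte);
    return 0;
}
```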
Payment & Security
Your payment information is processed securely. We do not store credit card details nor have access to your credit card information.