PNY  |  SKU: NVH100TCGPU-KIT

NVIDIA H100

$29,999.00

Delivery and Shipping

The cost of shipping your order is determined at checkout based on the delivery speed you select and the size and weight of the items in your order. Please note that our daily cutoff time for same-day shipping is 12 PM Eastern Time. If you need rush or same-day processing after this cutoff, please contact us, as same-day shipping may still be possible. You can reach us by phone at 781-272-0967 or by email at frontdesk@c3aero.com.

Please see our Shipping Policy for more information.

Please contact us for availability.

The NVIDIA® H100 Tensor Core GPU enables an order-of-magnitude leap for large-scale AI and HPC, with unprecedented performance, scalability, and security for every data center, and includes the NVIDIA AI Enterprise software suite to streamline AI development and deployment. H100 accelerates exascale workloads with a dedicated Transformer Engine for trillion-parameter language models. For smaller jobs, H100 can be partitioned down into right-sized Multi-Instance GPU (MIG) partitions.
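
The Multi-Instance GPU (MIG) partitioning mentioned above is exposed through NVIDIA's standard driver tooling (nvidia-smi and the NVML library). As a minimal sketch only, assuming the nvidia-ml-py (pynvml) Python bindings are installed and the H100 is device index 0, the following checks the installed GPU's name, memory, and whether MIG mode is currently enabled:

import pynvml

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)    # assumes the H100 is GPU 0
    name = pynvml.nvmlDeviceGetName(handle)
    if isinstance(name, bytes):                      # older bindings return bytes
        name = name.decode()
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)     # total/used/free in bytes
    print(f"GPU: {name}, {mem.total / 1e9:.0f} GB")

    try:
        current, pending = pynvml.nvmlDeviceGetMigMode(handle)
        enabled = current == pynvml.NVML_DEVICE_MIG_ENABLE
        print(f"MIG mode: {'enabled' if enabled else 'disabled'}")
    except pynvml.NVMLError:
        print("MIG mode not supported on this device/driver")
finally:
    pynvml.nvmlShutdown()

Creating the actual MIG instances (for example, splitting the 80 GB card into several smaller partitions) is done separately with the nvidia-smi mig commands or the NVML MIG APIs; consult NVIDIA's MIG documentation for the profiles supported by your driver version.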

Technical Specifications

FP64: 26 TFLOPS
FP64 Tensor Core: 51 TFLOPS
FP32: 51 TFLOPS
TF32 Tensor Core: 756 TFLOPS (with sparsity)
BFLOAT16 Tensor Core: 1513 TFLOPS (with sparsity)
FP16 Tensor Core: 1513 TFLOPS (with sparsity)
FP8 Tensor Core: 3026 TFLOPS (with sparsity)
INT8 Tensor Core: 3026 TOPS (with sparsity)
GPU Memory: 80 GB HBM2e
GPU Memory Bandwidth: 2.0 TB/sec
Maximum Power Consumption: 350 W
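
A note on reading the figures above: values marked "(with sparsity)" follow NVIDIA's convention of quoting Tensor Core throughput with 2:4 structured sparsity enabled, so dense throughput for those formats is roughly half the listed number. A small illustrative calculation (this halving convention is the assumption; the dense values below are derived, not additional datasheet entries):

# Dense Tensor Core throughput is roughly half the quoted with-sparsity figure.
with_sparsity_tflops = {"TF32": 756, "BF16": 1513, "FP16": 1513, "FP8": 3026}
dense_tflops = {fmt: v / 2 for fmt, v in with_sparsity_tflops.items()}
print(dense_tflops)   # {'TF32': 378.0, 'BF16': 756.5, 'FP16': 756.5, 'FP8': 1513.0}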

Payment & Security

Payment methods

  • Amazon
  • American Express
  • Apple Pay
  • Diners Club
  • Discover
  • Google Pay
  • Mastercard
  • Shop Pay
  • Visa

Your payment information is processed securely. We do not store credit card details nor have access to your credit card information.
