GB200 NVL4

Category: Data Center
Architecture: NVIDIA Blackwell
Tags: HPC, AI, data-center, enterprise, LLM, supercomputing

NVIDIA's revolutionary Grace Blackwell Superchip that unlocks the future of converged HPC and AI, delivering exceptional performance through four NVIDIA NVLink-connected Blackwell GPUs unified with two Grace CPUs over NVLink-C2C.

Launch Date: March 18, 2024

Launch MSRP: $40,000

Performance

Delivers 30X faster real-time LLM inference performance for trillion-parameter language models compared to H100, and 4X faster training for large language models at scale.
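Real-time LLM decode is largely memory-bandwidth-bound: each generated token must stream every model weight from memory once. A back-of-envelope roofline illustrates the scale involved, using the per-GPU bandwidth from the spec sheet below; the trillion-parameter model size and FP4 quantization are illustrative assumptions, not measured figures:

```python
# Back-of-envelope decode-throughput roofline for a bandwidth-bound LLM.
# All inputs are illustrative assumptions, not NVIDIA benchmark results.

params = 1e12              # hypothetical 1-trillion-parameter model (assumption)
bytes_per_param = 0.5      # FP4 quantization: 4 bits per weight (assumption)
gpus = 4                   # four Blackwell GPUs in the NVL4 module
bw_per_gpu = 8_000e9       # 8,000 GB/s HBM3e bandwidth per GPU (spec sheet)

model_bytes = params * bytes_per_param   # 0.5 TB of weights
aggregate_bw = gpus * bw_per_gpu         # 32 TB/s across the module

# Each decoded token reads every weight once, so the bandwidth ceiling is:
tokens_per_s = aggregate_bw / model_bytes
print(f"{tokens_per_s:.0f} tokens/s upper bound")  # 64 tokens/s
```

The real ceiling is lower once interconnect traffic, KV-cache reads, and compute are accounted for, but the sketch shows why HBM bandwidth, not FLOPS, is the headline figure for this workload.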

Technical Specifications
  • CUDA Cores: 18,500
  • Tensor Cores: 720
  • Ray Tracing Cores: N/A
  • Base Clock: 1.9 GHz
  • Boost Clock: 2.5 GHz
  • Memory: 192 GB HBM3e
  • Memory Bus: 6144-bit
  • Memory Bandwidth: 8,000 GB/s
  • Transistors: 208 billion
  • Die Size: 850 mm²
  • TDP: 1,200 W
Key Features
  • HBM3e memory with unprecedented bandwidth
  • 5th-generation NVLink: 1.8 TB/s
  • PCIe Gen6: 256 GB/s
  • Second-generation Transformer Engine with FP4/FP8 precision
  • Dedicated decompression engines
  • Multi-Instance GPU (MIG) support

Join the Future of Tokenized AI

Be part of the revolution in AI infrastructure. Join our whitelist to access exclusive tokenized AI resources and early investment opportunities.

1. Access to tokenized AI compute resources
2. Early investment in AI infrastructure tokens
3. Exclusive governance rights in the RWAi ecosystem

Limited spots available. Secure your position in the future of decentralized AI infrastructure.