
NVIDIA A100 80GB SXM

A-Series · Ampere Architecture

The NVIDIA A100 80GB SXM is the data center GPU that defined the modern AI era. Built on the Ampere architecture, it introduced Multi-Instance GPU (MIG) technology and 3rd-gen Tensor Cores. It remains widely deployed across cloud providers and enterprises for AI training, inference, and HPC workloads.

Key Features

3rd-gen Tensor Cores · NVLink 3.0 · Multi-Instance GPU (MIG) · Structural Sparsity · TF32 precision
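Of these features, Multi-Instance GPU partitioning is the most operationally involved: it splits one A100 into up to seven isolated GPU instances. A minimal setup sketch using standard nvidia-smi commands (assuming an A100 with a MIG-capable driver and root access; profile names are those published for the 80GB card):

```shell
# Enable MIG mode on GPU 0 (takes effect after a GPU reset)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this card supports
nvidia-smi mig -lgip

# Carve the 80GB GPU into two 1g.10gb instances and one 3g.40gb instance,
# creating the matching compute instances in the same step (-C)
sudo nvidia-smi mig -cgi 1g.10gb,1g.10gb,3g.40gb -C

# Verify: the MIG devices now appear as separate entries
nvidia-smi -L
```

Each resulting instance has its own memory slice and SM allocation, so workloads on one instance cannot contend with another.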

Full Specifications

Compute

Architecture Ampere
Process Node 7nm TSMC
CUDA Cores 6,912
Tensor Cores 432
Base Clock 1065 MHz
Boost Clock 1410 MHz
FP32 Performance 19.49 TFLOPS
FP16 Performance 312 TFLOPS (Tensor Core, dense)
BF16 Performance 312 TFLOPS (Tensor Core, dense)
INT8 Performance 624 TOPS (Tensor Core, dense)
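The Tensor Core figures above are dense numbers; Ampere's 2:4 structural sparsity skips half the multiply-accumulates when weights are pruned to the two-of-four pattern, doubling effective throughput. A quick sanity check of that relationship, using only the spec values above:

```python
# Dense Tensor Core throughput from the spec table above
fp16_dense_tflops = 312
int8_dense_tops = 624

# 2:4 structural sparsity doubles effective throughput
# on weights pruned to the 2-of-4 pattern
fp16_sparse_tflops = fp16_dense_tflops * 2
int8_sparse_tops = int8_dense_tops * 2

print(fp16_sparse_tflops, int8_sparse_tops)  # 624 1248
```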

Memory

Memory Size 80 GB
Memory Type HBM2e
Memory Bus 5120-bit
Memory Bandwidth 2039 GB/s
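Together, the compute and memory figures fix the card's roofline ridge point, the arithmetic intensity below which a kernel is bandwidth-bound rather than compute-bound. A back-of-envelope calculation from the numbers above:

```python
# Peak dense FP16 Tensor Core throughput and HBM2e bandwidth (spec tables above)
fp16_flops = 312e12   # 312 TFLOPS
mem_bw = 2039e9       # 2039 GB/s

# Ridge point: FLOPs a kernel must perform per byte moved
# from memory before it stops being bandwidth-bound
ridge = fp16_flops / mem_bw
print(round(ridge, 1))  # 153.0 FLOPs/byte
```

Kernels with lower arithmetic intensity than this (most inference-time matrix–vector work, for example) are limited by the 2039 GB/s of HBM2e rather than by the Tensor Cores.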

Power & Physical

TDP 400W
Form Factor SXM4
Power Connectors SXM4 connector

Features & Connectivity

PCIe Version PCIe 4.0
NVLink Support Yes
Multi-GPU Support Yes

Availability

MSRP (USD) $15,000
Release Date Jun 2021
Status Available

Use Cases

AI Training · HPC · Data Analytics · Deep Learning · Molecular Dynamics


Related GPUs

GPU                Memory        FP32           FP16            TDP      Status
NVIDIA H100 SXM    80GB HBM3     66.91 TFLOPS   989.4 TFLOPS    700W     Available
NVIDIA H100 PCIe   80GB HBM3     51.22 TFLOPS   756 TFLOPS      350W     Available
NVIDIA H200 SXM    141GB HBM3e   66.91 TFLOPS   989.4 TFLOPS    700W     Available
NVIDIA B200        192GB HBM3e   90 TFLOPS      1800 TFLOPS     1000W    Available