

NVIDIA Data Center Roadmap with GX200NVL, GX200, X100, and X40 AI Chips in 2025

By Patrick Kennedy - October 10, 2023

NVIDIA GH200 Refresh

      

NVIDIA recently showed off its roadmap, going well beyond what it normally shares at GTC. The company is pushing hard in the data center space, with expectations that it will be at least 3x the gaming market, and 4.5x if one includes Enterprise AI and its DGX Cloud offerings. As such, the roadmap has accelerated with new parts like the H200, B100, X100, B40, X40, GB200, GX200, GB200NVL, GX200NVL, and more. There is even a path to 1.6T Ethernet in the next two years.

NVIDIA Data Center Roadmap


One major change is that NVIDIA now breaks out its Arm-based products and its x86-based products, with Arm on top. For some reference, a normal customer cannot even buy an NVIDIA Grace or Grace Hopper today, so showing it atop the stack for a 2023-2025 roadmap is an important detail. Here is the roadmap NVIDIA presented:

NVIDIA Roadmap 2023 10

On the Arm side, we have the GH200NVL in 2024, the GB200NVL in 2024, and then in 2025 the GX200NVL. We have already seen the x86 NVL line launch with the NVIDIA H100 NVL, but these are Arm-based solutions. The GH200NVL is coming in 2024, with a fast-follow GB200NVL and then a GX200NVL after that. There are also the non-NVL versions. We covered the GH200 (non-NVL) with 142GB/144GB of memory in NVIDIA Announces a New NVIDIA Hopper 144GB HBM3e Model, including a dual configuration that may end up being the GH200NVL. The GB200 would be the next-generation accelerator in 2024 and the GX200 in 2025.

Dual NVIDIA GH200 Refresh

For the x86 market, there is the H200 in 2024 that we would expect to be a refresh with more memory, still on the Hopper architecture. The B100 and B40 are the next-gen architecture parts, followed by the X100 and X40 in 2025. Given the B40 and X40 are on the “enterprise” swimlane, and the current L40S is a PCIe card, these might be the PCIe cards.

NVIDIA L40S Supernova

On the networking side, both InfiniBand and Ethernet are going to progress from 400Gbps to 800Gbps in 2024 and then to 1.6Tbps in 2025. Given we had already taken a look at the Broadcom Tomahawk 4 and switches built on it in early 2023, and have seen partner 800G Broadcom Tomahawk 5 switches this year, it feels a bit like NVIDIA’s portfolio is notably behind in Ethernet. Broadcom’s 2022-2023 800G line seems to align with a 2024 upgrade from NVIDIA, with NVIDIA announcing Spectrum 4 in mid-2023, whereas Tomahawk 5 was announced about 21-22 months earlier. In the industry, there is generally a significant gap between a chip announcement and when it goes into production switches.
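To put that link-rate progression in perspective, here is a minimal back-of-the-envelope sketch (ours, not NVIDIA’s) of how long a single fully utilized port would take to move a hypothetical 1TB checkpoint at each generation’s line rate, ignoring protocol overhead:

```python
# A minimal sketch, not NVIDIA data: rough single-port transfer times for a
# hypothetical 1 TB model checkpoint at each link rate on the roadmap. The
# 400G/800G/1.6T figures come from the roadmap; the 1 TB payload and the
# assumption of a fully utilized port with no protocol overhead are ours.

PAYLOAD_BYTES = 1e12  # hypothetical 1 TB checkpoint

link_rates_gbps = {
    "2023 (400G)": 400,
    "2024 (800G)": 800,
    "2025 (1.6T)": 1600,
}

for generation, gbps in link_rates_gbps.items():
    bytes_per_second = gbps * 1e9 / 8  # line rate in bytes per second
    seconds = PAYLOAD_BYTES / bytes_per_second
    print(f"{generation}: ~{seconds:.0f} s per 1 TB at line rate")
```

Each step halves the transfer time, which is a big part of why the doubling cadence matters for multi-node AI clusters.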

NVIDIA Spectrum 4 Switch Chip At Computex 2023

On the InfiniBand side, NVIDIA is alone. Something missing from the roadmap is the NVSwitch/NVLink roadmap.

Final Words
Other AI hardware companies should be freaked out by
NVIDIA’s enterprise AI roadmap. Playing in the AI
training and inference space is going to mean a refresh
generation of the current Hopper in 2024, then a
transition to the Blackwell generation later in 2024 with
another architecture in 2025. On the CPU side, we have
seen sleepy update cadences give way to a core count
war on the x86 side with massive jumps lately. For
example, Intel’s top-of-the-line Xeon core count is
expected to jump more than 10x from early Q2 2021 to
Q2 2024. NVIDIA seems to be on that pace in the data
center. For AI startups building silicon, this is now a
race given NVIDIA’s new roadmap pace.
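As a quick sanity check on that more-than-10x figure, here is a small calculation. The endpoints are our assumption of what the comparison refers to: a 28-core top-of-stack Xeon in early Q2 2021, just before the Ice Lake launch, versus an expected 288-core Xeon 6 class part around Q2 2024.

```python
# Back-of-the-envelope check of the ">10x" Xeon core count claim. The specific
# parts are our assumption: a 28-core top-bin Xeon (early Q2 2021, pre-Ice Lake
# launch) versus an expected 288-core E-core Xeon 6 part around Q2 2024.
early_q2_2021_top_cores = 28
q2_2024_expected_top_cores = 288
growth = q2_2024_expected_top_cores / early_q2_2021_top_cores
print(f"Top Xeon core count growth: ~{growth:.1f}x")  # ~10.3x
```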

For Intel, AMD, and perhaps Cerebras, the goalposts are going to be moving as NVIDIA is selling big high-margin chips. It is also putting its Arm-based solutions in the top swimlane so it can get those high margins not just on the GPU/accelerator side, but also on the CPU side.

The one notable laggard seems to be the Ethernet side, which feels strange given that in the STH lab we use NVIDIA BlueField-2 DPUs for Ethernet daily, and the fastest NICs we have used thus far are the NVIDIA ConnectX-7 400GbE cards we reviewed.


