London Tribune
Technology

Europe joins US as exascale superpower after Jupiter clinches Top500 run

By London Tribune | November 17, 2025 | 7 min read

SC25 Europe has officially entered exascale orbit. On Monday, EuroHPC’s Jupiter supercomputer became the fourth such machine on the Top500 list of publicly known systems to exceed a million-trillion floating point operations a second in the time-honored High-Performance Linpack (HPL) benchmark.

Built by Eviden and powered by Nvidia’s Grace-Hopper GH200 superchips, the Jupiter Booster made its maiden flight at the International Supercomputing Conference in June, where it managed 793 petaFLOPS of HPL performance in a partial run that established it as Europe’s most powerful super.

Jupiter exascale computer

Less than six months later, Jupiter became the first public system outside the US to exceed one exaFLOPS of double precision performance. We’ll note that China is believed to possess several exascale-class systems, but has become increasingly secretive about their existence.

This fall’s HPL run puts Jupiter within spitting distance of America’s third most powerful supercomputer, Argonne National Laboratory’s Aurora, which leads Jupiter by less than 12 petaFLOPS.

However, there’s still time to close the gap. Jupiter remains under construction. So far we’ve only seen benchmark runs from the Booster section of the system, which comprises roughly 6,000 of Eviden’s BullSequana XH3000 nodes, each of which is equipped with four GH200 superchips.

In total, the Booster section is equipped with roughly 24,000 superchips which are theoretically capable of churning out somewhere between 872 petaFLOPS (vector) and 1.6 exaFLOPS (matrix) performance.
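Those figures can be sanity-checked with a bit of arithmetic. A sketch, assuming roughly 34 teraFLOPS of FP64 vector and 67 teraFLOPS of FP64 tensor-core throughput per GH200 (Nvidia’s published H100 numbers; the quoted 872 petaFLOPS vector figure likely also counts the Grace CPU side):

```python
# Rough theoretical FP64 peak for the Jupiter Booster.
# Per-chip rates are assumptions based on published H100 figures:
# ~34 TFLOPS FP64 vector, ~67 TFLOPS FP64 via tensor cores.
NODES = 6_000
CHIPS_PER_NODE = 4
FP64_VECTOR_TFLOPS = 34
FP64_MATRIX_TFLOPS = 67

chips = NODES * CHIPS_PER_NODE
vector_pflops = chips * FP64_VECTOR_TFLOPS / 1_000       # petaFLOPS
matrix_eflops = chips * FP64_MATRIX_TFLOPS / 1_000_000   # exaFLOPS

print(f"{chips:,} superchips: ~{vector_pflops:,.0f} PFLOPS vector, "
      f"~{matrix_eflops:.1f} EFLOPS matrix")
```

The GPU-only vector figure lands a little under the quoted 872 petaFLOPS, while the tensor-core number matches the 1.6 exaFLOPS ceiling.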

Jupiter’s smaller CPU partition, known as the Universal Cluster, is still under active development. This section is based around French chip designer SiPearl’s Rhea1 processor, which taped out in July and will feature 80 of Arm’s Neoverse V1 cores along with 64 GB of high-bandwidth memory.

When the Universal Cluster comes online late next year, it’ll boast 2,600 Rhea1 processors spread across 1,300 nodes, which are expected to contribute another five petaFLOPS of compute to the overall machine.

Compared to GPUs, CPUs aren’t nearly as performant, but they remain relevant because of their flexibility. Not all software is well suited to running on GPUs, and some simply hasn’t been optimized for the architecture – hence the name Universal Cluster.

That means Jülich Supercomputing Center’s (JSC) boffins in Germany will only need to squeeze a few more petaFLOPS of HPL performance from the combined system to overtake Aurora as the number three system on the Top500 – assuming that another system doesn’t beat them to it.

These kinds of post-run optimizations aren’t unusual either. Lawrence Livermore National Laboratory has done just that with its number one ranked El Capitan supercomputer. In its latest HPL run, the behemoth squeezed another 67 petaFLOPS from its 44,544 AMD Instinct MI300A accelerated processing units (APUs), bringing its total output to 1.809 exaFLOPS.

El Capitan’s gains aren’t all that surprising. In systems this big, even small optimizations can have major implications for the overall system.

Take, for example, Oak Ridge National Laboratory’s (ORNL) Frontier supercomputer, which claimed the top spot on the Top500 in early 2022 with 1.1 exaFLOPS of HPL performance. By the end of 2024, ORNL had managed to extract another 250 petaFLOPS from the system, bringing its total output to 1.35 exaFLOPS. Today, Frontier remains the second most powerful system on the Top500.

Besides Jupiter and El Capitan’s performance gains, the upper end of the lineup remains largely unchanged from spring.

Top500 Nov. 2025 – High-Performance Linpack

Rank  System           Cores       Rmax (PFLOPS)  Rpeak (PFLOPS)  Power (kW)
1     El Capitan       11,340,000  1,809.00       2,821.10        29,685
2     Frontier         9,066,176   1,353.00       2,055.72        24,607
3     Aurora           9,264,128   1,012.00       1,980.01        38,698
4     Jupiter Booster  4,801,344   1,000.00       1,226.28        15,794
5     Eagle            2,073,600   561.20         846.84          ?
6     HPC6             3,143,520   477.90         606.97          8,461
7     Fugaku           7,630,848   442.01         537.21          29,899
8     Alps             2,121,600   434.90         574.84          7,124
9     LUMI             2,752,704   379.70         531.51          7,107
10    Leonardo         1,824,768   241.20         306.31          7,494
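One thing the ranking makes easy to compute is HPL efficiency – delivered Rmax as a fraction of theoretical Rpeak – which varies widely across the four exascale machines. A sketch using the petaFLOPS figures from the table above:

```python
# HPL efficiency (Rmax / Rpeak) for the four exascale systems,
# using the PFLOPS figures from the Top500 table above.
systems = {
    "El Capitan":      (1809.00, 2821.10),
    "Frontier":        (1353.00, 2055.72),
    "Aurora":          (1012.00, 1980.01),
    "Jupiter Booster": (1000.00, 1226.28),
}

for name, (rmax, rpeak) in systems.items():
    print(f"{name:16s} {100 * rmax / rpeak:5.1f}% of peak")
```

Jupiter’s unusually high ratio (about 81 percent versus Aurora’s 51) may partly reflect how each vendor quotes Rpeak, so the comparison is indicative rather than apples-to-apples.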

The changing face of high-performance computing

HPL remains the gold standard by which the scientific community measures its computational grunt, but as data access patterns evolve and lower-precision data types become more common in the wake of the AI boom, that is slowly changing.

Back in 2017, the Top500 introduced the High Performance Conjugate Gradients benchmark to its lineup as a complement to its longstanding HPL ranking.

The impetus behind the project was simple. While HPL is still representative of many scientific computing workloads, the benchmark isn’t always indicative of real-world performance. Because of this, systems that dominate the HPL-based Top500 ranking can struggle in HPCG to outpace machines that look far less powerful on paper.

For instance, the El Capitan supercomputer manages just 17.41 petaFLOPS in the HPCG benchmark, putting it just ahead of Riken’s Arm-based Fugaku supercomputer at 16 petaFLOPS, despite the American system holding a roughly 4x lead in HPL.

Top500 Nov. 2025 – High-Performance Conjugate Gradients

HPCG Rank  Top500 Rank  System       Cores       HPL Rmax (PFLOPS)  HPCG (PFLOPS)
1          1            El Capitan   11,340,000  1,809.00           17.41
2          7            Fugaku       7,630,848   442.01             16.00
3          2            Frontier     9,066,176   1,353.00           14.05
4          3            Aurora       9,264,128   1,012.00           5.61
5          9            LUMI         2,752,704   379.70             4.59
6          17           CHIE-4       662,256     135.40             3.76
7          8            Alps         2,121,600   434.90             3.67
8          10           Leonardo     1,824,768   241.20             3.11
9          15           Discovery 6  806,208     164.20             2.58
10         16           ABCI 3.0     479,232     145.10             2.45

We’ve also seen a similar trend toward alternative benchmarks as chipmakers have continued to adopt lower-precision 32-, 16-, and 8-bit data types.

While El Capitan, Frontier, Aurora, and Jupiter all manage more than 1 exaFLOPS of double precision performance in the HPL benchmark, the accelerators powering these machines are capable of far more than that if you’re willing to turn down the resolution a bit.

Each of the MI300As powering El Capitan and the GH200s powering the Jupiter Booster is capable of churning out roughly 2 petaFLOPS of dense FP8 performance. At this precision, El Capitan’s theoretical performance is closer to 90 exaFLOPS, while Jupiter’s is roughly half that.
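Those headline numbers check out as rough arithmetic, assuming roughly 2 petaFLOPS of dense FP8 per accelerator (an assumption for illustration; real per-chip rates vary with clocks and sparsity settings):

```python
# Sanity-check the FP8 estimates: accelerator count x assumed
# per-chip dense FP8 rate (~2 PFLOPS, an assumption).
FP8_PER_CHIP_PFLOPS = 2

el_capitan_eflops = 44_544 * FP8_PER_CHIP_PFLOPS / 1_000  # MI300A APUs
jupiter_eflops = 24_000 * FP8_PER_CHIP_PFLOPS / 1_000     # GH200 superchips

print(f"El Capitan ~{el_capitan_eflops:.0f} EFLOPS FP8, "
      f"Jupiter ~{jupiter_eflops:.0f} EFLOPS FP8")
```

That works out to roughly 89 exaFLOPS for El Capitan against 48 for Jupiter – in line with the “roughly half” characterization above.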

While 8- or 16-bit data types may not be appropriate for all scientific computing workloads, they can provide helpful speedups where 64 bits of precision aren’t necessary or wouldn’t render a result in time. For this reason, scientific workloads increasingly take advantage of a mixture of precisions.

Nvidia has already demonstrated how lower precision data types can be used for life-saving climate science with AI models designed to improve tornado and typhoon warning times. Meanwhile, before airgapping El Capitan, researchers employed its computational grunt to develop a digital twin capable of predicting tsunamis.

While not necessarily the focal point of the Top500’s biannual ranking, the project has been tracking this trend going back to the introduction of the HPL-MxP benchmark in 2019. Unlike HPL, where all operations are conducted with 64 bits of precision, the MxP benchmark allows some of those operations to be conducted at lower precision, dramatically increasing the system’s computational potential.
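The trick HPL-MxP formalizes is mixed-precision iterative refinement: do the expensive solve in low precision, then recover double-precision accuracy with cheap FP64 residual corrections. A minimal NumPy sketch of the idea, using FP32 as the stand-in for the low-precision arithmetic:

```python
import numpy as np

# Mixed-precision iterative refinement: solve in FP32, refine in FP64.
rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test matrix
b = rng.standard_normal(n)

# Initial solve entirely in low precision.
A32, b32 = A.astype(np.float32), b.astype(np.float32)
x = np.linalg.solve(A32, b32).astype(np.float64)

# Each correction computes the residual in FP64 but solves it in FP32.
for _ in range(5):
    r = b - A @ x
    x += np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)

print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))  # tiny relative residual
```

In the real benchmark the low-precision step is an LU factorization reused across refinement sweeps, with FP16/FP8 tensor cores doing the heavy lifting – which is where the multi-exaFLOPS scores come from.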

Top500 Nov. 2025 – High-Performance Linpack Mixed Precision

Rank  Site                  System           HPL Rmax (EFLOPS)  Top500 Rank  HPL-MxP (EFLOPS)  Speedup
1     DoE LLNL, USA         El Capitan       1.809              1            16.70             9.2
2     DoE ANL, USA          Aurora           1.012              3            11.60             11.5
3     DoE ORNL, USA         Frontier         1.353              2            11.40             8.4
4     EuroHPC/FZJ, Germany  Jupiter Booster  1.000              4            6.25              6.3
5     SoftBank, Japan       CHIE-4           0.135              17           3.30              24.4
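The speedup column is simply the ratio of the two benchmark scores, which is easy to verify from the figures above:

```python
# Verify the HPL-MxP speedup column: MxP score / HPL Rmax, both in
# exaFLOPS, using the published figures from the table above.
rows = {
    "El Capitan":      (1.809, 16.70),
    "Aurora":          (1.012, 11.60),
    "Frontier":        (1.353, 11.40),
    "Jupiter Booster": (1.000, 6.25),
    "CHIE-4":          (0.135, 3.30),
}

for name, (hpl, mxp) in rows.items():
    print(f"{name:16s} {mxp / hpl:4.1f}x")
```

Jupiter comes out at about 6.2x here versus the published 6.3, a small discrepancy that most likely reflects rounding in the quoted HPL figure.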

In this fall’s ranking, the four exascale systems unsurprisingly take the lead, but not necessarily in the same order as we see them on the Top500. While El Capitan still holds the number one spot with 16.7 exaFLOPS of HPL-MxP performance, Argonne’s Aurora supercomputer claims the number two spot with 11.6 exaFLOPS versus ORNL’s Frontier system at 11.4.

Jupiter, meanwhile, takes the number four spot with 6.25 exaFLOPS of HPL-MxP performance, and SoftBank’s Blackwell B200 based CHIE-4 system comes in fifth with 3.3 exaFLOPS.

With the US Department of Energy, EuroHPC, and other national labs increasingly exploring the applications of machine learning and AI to accelerate scientific breakthroughs, it wouldn’t be surprising to see HPL-MxP take center stage.

© 2025 London Tribune. All rights reserved.
