#1—Frontier
An HPE Cray EX system run by the US Department of Energy, Frontier combines 3rd Gen AMD EPYC™ CPUs optimized for high-performance computing (HPC) and AI with AMD Instinct™ MI250X accelerators and Slingshot-11 interconnects, for a total of 8,730,112 cores. Its HPL benchmark score was 1.194 EFLOPS.
#2—Fugaku
Supercomputer Fugaku, housed at the RIKEN Center for Computational Science in Kobe, Japan, scored 442.01 PFLOPS in the HPL test. It is built on the Fujitsu A64FX microprocessor and has 7,630,848 cores.
#3—LUMI
LUMI is an HPE Cray EX system at the EuroHPC center at CSC in Kajaani, Finland, with a performance of 309.1 PFLOPS. It relies on AMD processors and boasts 2,220,288 cores.
#4—Leonardo
Leonardo, which resides in Bologna, Italy, is an Intel/Nvidia system with 1,463,616 cores and a maximum speed of 238.7 PFLOPS.
#5—Summit
An IBM system at Oak Ridge National Laboratory in Tennessee, Summit scored 148.6 PFLOPS on the HPL benchmark. It has 4,356 nodes, each with two 22-core Power9 CPUs and six Nvidia Tesla V100 GPUs, each GPU containing 80 streaming multiprocessors (SMs). The nodes are linked by a Mellanox dual-rail EDR InfiniBand network. It has 2,414,592 cores.
#6—Sierra
Similar in architecture to Summit, Sierra reached 94.64 PFLOPS. It has 4,320 nodes with two Power9 CPUs and four Nvidia Tesla V100 GPUs and a total of 1,572,480 cores. It is housed at the Lawrence Livermore National Laboratory, in California.
#7—Sunway TaihuLight
Sunway TaihuLight is a machine developed by the National Research Center of Parallel Computer Engineering & Technology (NRCPC) in China and is installed in the city of Wuxi. It reached 93.01 PFLOPS on the HPL benchmark. It has 10,649,600 cores.
#8—Perlmutter
The Perlmutter system is based on the HPE Cray Shasta platform and is a heterogeneous system with both AMD EPYC-based nodes and 1,536 Nvidia A100-accelerated nodes. It has 761,856 cores. It achieved 70.87 PFLOPS. That’s an improvement of about 6 PFLOPS over last year’s score, but still not enough to catch Sunway TaihuLight.
#9—Selene
Selene is an Nvidia DGX A100 SuperPOD based on AMD EPYC processors with Nvidia A100 GPUs for acceleration and a Mellanox HDR InfiniBand network. It has 555,520 cores. It achieved 63.46 PFLOPS and is installed in-house at Nvidia facilities in the US.
#10—Tianhe-2A (Milky Way-2A)
Powered by Intel Xeon CPUs and NUDT’s Matrix-2000 DSP accelerators, Tianhe-2A has 4,981,760 cores and achieves 61.44 PFLOPS. It was developed by China’s National University of Defense Technology (NUDT) and is deployed at the National Supercomputer Center in Guangzhou, China.
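The scores above mix two units (EFLOPS and PFLOPS), which makes side-by-side comparison awkward. A minimal sketch that tabulates the ten systems with everything normalized to PFLOPS and derives a rough per-core HPL throughput (the scores and core counts are copied from the list; the GFLOPS-per-core column is derived arithmetic, not a figure from the source):

```python
# Top-10 HPL results from the list above, normalized to PFLOPS
# (1 EFLOPS = 1,000 PFLOPS). Tuples: (name, HPL score in PFLOPS, cores).
top10 = [
    ("Frontier",          1194.0, 8_730_112),
    ("Fugaku",             442.01, 7_630_848),
    ("LUMI",               309.1,  2_220_288),
    ("Leonardo",           238.7,  1_463_616),
    ("Summit",             148.6,  2_414_592),
    ("Sierra",              94.64, 1_572_480),
    ("Sunway TaihuLight",   93.01, 10_649_600),
    ("Perlmutter",          70.87,   761_856),
    ("Selene",              63.46,   555_520),
    ("Tianhe-2A",           61.44, 4_981_760),
]

for name, pflops, cores in top10:
    # Average HPL throughput per core in GFLOPS (1 PFLOPS = 1e6 GFLOPS);
    # a crude measure, since GPU-accelerated and CPU-only systems count
    # "cores" differently.
    per_core = pflops * 1e6 / cores
    print(f"{name:<18} {pflops:8.2f} PFLOPS  {per_core:8.1f} GFLOPS/core")
```

The per-core column makes the architectural spread visible: accelerator-heavy systems such as Selene deliver far more HPL throughput per counted core than the many-core Sunway TaihuLight, even though the latter ranks higher in absolute terms.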
===
Intel is an obsolete high-tech company. Even a camel starved to death is bigger than a horse, but don't expect Intel to develop any leading high technology anymore.
说真话讨人嫌 commented on 2023-07-15 17:52:18
Scientists change the world, capitalists plunder the world, politicians wreck the world...
roliepolieolie commented on 2023-07-15 17:42:00
Keeping cutting-edge technologies off-limits to China, while selling the obsolete or degraded versions to China, will ensure that the West maintains an edge and uses the Chinese market to help fund continued development of next-gen technologies. It's a virtuous circle.
tesuji commented on 2023-07-15 17:40:11
Intel announced that it is working with Inspur Group - the world's second-largest AI server manufacturer, based in eastern Shandong province - to build new Gaudi2-powered machines for the mainland market.
Habana Labs, Intel's data centre team focused on AI deep learning processor technologies, initially launched Gaudi2 last May in the US, where it said the processor's training throughput performance was twice that achieved by Nvidia's 80-gigabyte A100 GPU for the ResNet-50 computer vision model and the BERT natural language processing model.