America's Summit Overtakes China's Sunway TaihuLight as the World's Fastest Supercomputer

According to a report posted on the Oak Ridge National Laboratory (ORNL) website on June 8, 2018 (local time), Summit, a supercomputer designed and built under the laboratory's direction, delivers a peak floating-point performance of 200 petaflops (figure from the ORNL website). That surpasses the previous world No. 1, China's Sunway TaihuLight, whose peak performance is 125.436 PFlops with a measured sustained performance of 93.015 PFlops (figures from the website of China's National Supercomputing Center in Wuxi).

Comparing peak floating-point performance, Summit is about 59.44% faster than Sunway TaihuLight.
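The percentage above follows directly from the two peak figures; a quick sketch of the arithmetic, using only the numbers quoted in this post:

```python
# Peak-performance comparison between Summit and Sunway TaihuLight,
# using the figures quoted above (illustrative arithmetic only).
summit_peak_pflops = 200.0        # Summit peak, per ORNL
taihulight_peak_pflops = 125.436  # TaihuLight peak, per NSCC-Wuxi

speedup_pct = (summit_peak_pflops / taihulight_peak_pflops - 1) * 100
print(f"Summit is {speedup_pct:.2f}% faster at peak")  # → 59.44%
```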

The Summit system comprises 4,608 compute nodes, each fitted with two IBM Power9 CPUs and six NVIDIA Tesla V100 GPUs, for a total of 9,216 CPUs and 27,648 GPUs across the whole machine.
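The system-wide totals are just the per-node counts multiplied out; a minimal check using the configuration stated above:

```python
# Totals implied by Summit's node configuration (figures from the article).
nodes = 4608
cpus_per_node = 2   # IBM Power9
gpus_per_node = 6   # NVIDIA Tesla V100

total_cpus = nodes * cpus_per_node
total_gpus = nodes * gpus_per_node
print(total_cpus, total_gpus)  # → 9216 27648
```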


Photos of the Summit supercomputer:

Picture From ornl.gov

Revisions and updates:

June 12, 2018, 13:10: Added some data on Sunway TaihuLight and compared the peak speeds of Summit and Sunway TaihuLight.


Appendix:

Sunway TaihuLight system specifications:

(Data from the website of China's National Supercomputing Center in Wuxi)

Peak system performance: 125.436 PFlops
Measured sustained performance: 93.015 PFlops
Processor model: SW26010 ("Shenwei 26010") many-core processor
Total processors: 40,960
Total processor cores: 10,649,600
Total system memory: 1,310,720 GB
Operating system: Raise Linux
Programming languages: C, C++, Fortran
Parallel languages and environments: MPI, OpenMP, OpenACC, etc.
SSD storage: 230 TB
Online storage: 10 PB, 288 GB/s bandwidth
Nearline storage: 10 PB, 32 GB/s bandwidth


The article about Summit published by Oak Ridge National Laboratory:

Original link: ORNL Launches Summit Supercomputer | ORNL

ORNL Launches Summit Supercomputer

New 200-Petaflops System Debuts as America’s Top Supercomputer for Science

Media Contact

Morgan McCorkle, Communications
mccorkleml@ornl.gov, 865.574.7308

OAK RIDGE, Tenn., June 8, 2018—The U.S. Department of Energy’s Oak Ridge National Laboratory today unveiled Summit as the world’s most powerful and smartest scientific supercomputer.

With a peak performance of 200,000 trillion calculations per second (200 petaflops), Summit will be eight times more powerful than ORNL’s previous top-ranked system, Titan. For certain scientific applications, Summit will also be capable of more than three billion billion mixed precision calculations per second, or 3.3 exaops. Summit will provide unprecedented computing power for research in energy, advanced materials and artificial intelligence (AI), among other domains, enabling scientific discoveries that were previously impractical or impossible.

“Today’s launch of the Summit supercomputer demonstrates the strength of American leadership in scientific innovation and technology development. It’s going to have a profound impact in energy research, scientific discovery, economic competitiveness and national security,” said Secretary of Energy Rick Perry. “I am truly excited by the potential of Summit, as it moves the nation one step closer to the goal of delivering an exascale supercomputing system by 2021. Summit will empower scientists to address a wide range of new challenges, accelerate discovery, spur innovation and above all, benefit the American people.”

The IBM AC922 system consists of 4,608 compute servers, each containing two 22-core IBM Power9 processors and six NVIDIA Tesla V100 graphics processing unit accelerators, interconnected with dual-rail Mellanox EDR 100Gb/s InfiniBand. Summit also possesses more than 10 petabytes of memory paired with fast, high-bandwidth pathways for efficient data movement. The combination of cutting-edge hardware and robust data subsystems marks an evolution of the hybrid CPU–GPU architecture successfully pioneered by the 27-petaflops Titan in 2012.

ORNL researchers have figured out how to harness the power and intelligence of Summit’s state-of-the-art architecture to successfully run the world’s first exascale scientific calculation. A team of scientists led by ORNL’s Dan Jacobson and Wayne Joubert has leveraged the intelligence of the machine to run a 1.88 exaops comparative genomics calculation relevant to research in bioenergy and human health. The mixed precision exaops calculation produced identical results to more time-consuming 64-bit calculations previously run on Titan.

“From its genesis 75 years ago, ORNL has a history and culture of solving large and difficult problems with national scope and impact,” ORNL Director Thomas Zacharia said. “ORNL scientists were among the scientific teams that achieved the first gigaflops calculations in 1988, the first teraflops calculations in 1998, the first petaflops calculations in 2008 and now the first exaops calculations in 2018. The pioneering research of ORNL scientists and engineers has played a pivotal role in our nation’s history and continues to shape our future. We look forward to welcoming the scientific user community to Summit as we pursue another 75 years of leadership in science.”

In addition to scientific modeling and simulation, Summit offers unparalleled opportunities for the integration of AI and scientific discovery, enabling researchers to apply techniques like machine learning and deep learning to problems in human health, high-energy physics, materials discovery and other areas. Summit allows DOE and ORNL to respond to the White House Artificial Intelligence for America initiative.

“Summit takes accelerated computing to the next level, with more computing power, more memory, an enormous high-performance file system and fast data paths to tie it all together. That means researchers will be able to get more accurate results faster,” said Jeff Nichols, ORNL associate laboratory director for computing and computational sciences. “Summit’s AI-optimized hardware also gives researchers an incredible platform for analyzing massive datasets and creating intelligent software to accelerate the pace of discovery.”

Summit moves the nation one step closer to the goal of developing and delivering a fully capable exascale computing ecosystem for broad scientific use by 2021.

Summit will be open to select projects this year while ORNL and IBM work through the acceptance process for the machine. In 2019, the bulk of access to the IBM system will go to research teams selected through DOE’s Innovative and Novel Computational Impact on Theory and Experiment, or INCITE, program.

In anticipation of Summit’s launch, researchers have been preparing applications for its next-generation architecture, with many ready to make effective use of the system on day one. Among the early science projects slated to run on Summit:

Astrophysics

Exploding stars, known as supernovas, supply researchers with clues related to how heavy elements—including the gold in jewelry and iron in blood—seeded the universe.

The highly scalable FLASH code models this process at multiple scales—from the nuclear level to the large-scale hydrodynamics of a star’s final moments. On Summit, FLASH will go much further than previously possible, simulating supernova scenarios several thousand times longer and tracking about 12 times more elements than past projects.

“It’s at least a hundred times more computation than we’ve been able to do on earlier machines,” said ORNL computational astrophysicist Bronson Messer. “The sheer size of Summit will allow us to make very high-resolution models.”

Materials

Developing the next generation of materials, including compounds for energy storage, conversion and production, depends on subatomic understanding of material behavior. QMCPACK, a quantum Monte Carlo application, simulates these interactions using first-principles calculations.

Up to now, researchers have only been able to simulate tens of atoms because of QMCPACK’s high computational cost. Summit, however, can support materials composed of hundreds of atoms, a jump that aids the search for a more practical superconductor—a material that can transmit electricity with no energy loss.

“Summit’s large, on-node memory is very important for increasing the range of complexity in materials and physical phenomena,” said ORNL staff scientist Paul Kent. “Additionally, the much more powerful nodes are really going to help us extend the range of our simulations.”

Cancer Surveillance

One of the keys to combating cancer is developing tools that can automatically extract, analyze and sort existing health data to reveal previously hidden relationships between disease factors such as genes, biological markers and environment. Paired with unstructured data such as text-based reports and medical images, machine learning algorithms scaled on Summit will help supply medical researchers with a comprehensive view of the U.S. cancer population at a level of detail typically obtained only for clinical trial patients.

This cancer surveillance project is part of the CANcer Distributed Learning Environment, or CANDLE, a joint initiative between DOE and the National Cancer Institute.

“Essentially, we are training computers to read documents and abstract information using large volumes of data,” ORNL researcher Gina Tourassi said. “Summit enables us to explore much more complex models in a time efficient way so we can identify the ones that are most effective.”

Systems Biology

Applying machine learning and AI to genetic and biomedical datasets offers the potential to accelerate understanding of human health and disease outcomes.

Using a mix of AI techniques on Summit, researchers will be able to identify patterns in the function, cooperation and evolution of human proteins and cellular systems. These patterns can collectively give rise to clinical phenotypes, observable traits of diseases such as Alzheimer’s, heart disease or addiction, and inform the drug discovery process.

Through a strategic partnership project between ORNL and the U.S. Department of Veterans Affairs, researchers are combining clinical and genomic data with machine learning and Summit’s advanced architecture to understand the genetic factors that contribute to conditions such as opioid addiction.

“The complexity of humans as a biological system is incredible,” said ORNL computational biologist Dan Jacobson. “Summit is enabling a whole new range of science that was simply not possible before it arrived.”

Summit is part of the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility located at ORNL. UT-Battelle manages ORNL for the Department of Energy’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit http://science.energy.gov.

###

Image: https://www.ornl.gov/sites/default/files/2018-P01537.jpg

Caption: Oak Ridge National Laboratory launches Summit supercomputer.

Photos, b-roll and additional resources are available at http://olcf.ornl.gov/summit.

Access Summit Flickr Photos at https://flic.kr/s/aHsmmTwKLg.

Videos of Summit available at https://www.dropbox.com/sh/fy76ppz7cvjblia/AAC0m93xBWk4poM-rRwJbiZza?dl=0.


The article about Summit published by TOP500 Supercomputer Sites:

Original link: Summit Up and Running at Oak Ridge, Claims First Exascale Application | TOP500 Supercomputer Sites

Summit Up and Running at Oak Ridge, Claims First Exascale Application

The Department of Energy’s 200-petaflop Summit supercomputer is now in operation at Oak Ridge National Laboratory (ORNL).  The new system is being touted as “the most powerful and smartest machine in the world.”

And unless the Chinese pull off some sort of surprise this month, the new system will vault the US back into first place on the TOP500 list when the new rankings are announced in a couple of weeks. Although the DOE has not revealed Summit’s Linpack result as of yet, the system’s 200-plus-petaflop peak number will surely be enough to outrun the 93-petaflop Linpack mark of the current TOP500 champ, China’s Sunway TaihuLight.

Even though the general specifications for Summit have been known for some time, it’s worth recapping them here: The IBM-built system comprises 4,608 nodes, each one housing two Power9 CPUs and six NVIDIA Tesla V100 GPUs. The nodes are hooked together with a Mellanox dual-rail EDR InfiniBand network, delivering 200 Gbps to each server.

Assuming all those nodes are fully equipped, the GPUs alone will provide 215 peak petaflops at double precision. Also, since each V100 also delivers 125 teraflops of mixed-precision Tensor Core operations, the system’s peak rating for deep learning performance is something on the order of 3.3 exaflops.
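These GPU-only figures can be reconstructed from NVIDIA's published per-V100 numbers (roughly 7.8 TFLOPS at FP64 and 125 TFLOPS for Tensor Core operations); a rough sketch of the arithmetic, noting that the nominal Tensor Core total comes out slightly above the 3.3-exaflop figure quoted, which likely reflects sustained rather than nominal clocks:

```python
# Rough GPU-only peak estimates for Summit, assuming NVIDIA's published
# per-V100 figures: ~7.8 TFLOPS FP64 and 125 TFLOPS Tensor Core throughput.
nodes, gpus_per_node = 4608, 6
fp64_tflops_per_gpu = 7.8
tensor_tflops_per_gpu = 125.0

total_gpus = nodes * gpus_per_node                          # 27,648 GPUs
fp64_pflops = total_gpus * fp64_tflops_per_gpu / 1000       # ~215.7 PF
tensor_exaflops = total_gpus * tensor_tflops_per_gpu / 1e6  # ~3.46 EF nominal
print(f"{fp64_pflops:.1f} PF FP64, {tensor_exaflops:.2f} EF tensor")
```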

Those exaflops are not just theoretical either. According to ORNL director Thomas Zacharia, even before the machine was fully built, researchers had run a comparative genomics code at 1.88 exaflops using the Tensor Core capability of the GPUs. The application was rummaging through genomes looking for patterns indicative of certain conditions. “This is the first time anyone has broken the exascale barrier,” noted Zacharia.

Of course, Summit will also support the standard array of science codes the DOE is most interested in, especially those having to do with things like fusion energy, alternative energy sources, material science, climate studies, computational chemistry, and cosmology. But since this is an open science system available to all sorts of research that frankly has nothing to do with energy, Summit will also be used for healthcare applications in areas such as drug discovery, cancer studies, addiction, and research into other types of diseases. In fact, at the press conference announcing the system’s launch, Zacharia expressed his desire for Oak Ridge to be “the CERN for healthcare data analytics.”

The analytics aspect dovetails nicely with Summit’s deep learning propensities, inasmuch as the former is really just a superset of the latter. When the DOE first contracted for the system back in 2014, the agency probably only had a rough idea of what they would be getting AI-wise.  Although IBM had been touting its data-centric approach to supercomputing prior to pitching its Power9-GPU platform to the DOE, the AI/machine learning application space was in its early stages. Because NVIDIA made the decision to integrate the specialized Tensor Cores into the V100, Summit ended up being an AI behemoth, as well as a powerhouse HPC machine.

As a result, the system is likely to be engaged in a lot of cutting-edge AI research, in addition to its HPC duties. For the time being, Summit will only be open to select projects as it goes through its acceptance process. In 2019, the system will become more widely available, including its use in the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program.

At that point, Summit’s predecessor, the Titan supercomputer, is likely to be decommissioned. Summit has about eight times the performance of Titan, with five times better energy efficiency. When Oak Ridge installed Titan in 2012, it was the most powerful system in the world, and until now it was still the fastest supercomputer in the US (well, now the second-fastest). Titan has NVIDIA GPUs too, but these are K20X graphics processors, whose machine learning capacity is limited to four single-precision teraflops per device. Fortunately, all the GPU-enabled HPC codes developed for Titan should port over to Summit pretty easily and should be able to take advantage of the much greater computational horsepower of the V100.

For IBM, Summit represents a great opportunity to showcase its Power9-GPU AC922 server to other potential HPC and enterprise customers. At this point, the company’s principal success with its Power9 servers has been with systems sold to enterprise and cloud clients, but generally without GPU accelerators. IBM’s only other big win for its Power9/GPU product is the identically configured Sierra supercomputer being installed at Lawrence Livermore National Lab. The company seems to think its biggest opportunity with its V100-equipped server is with enterprise customers looking to use GPUs for database acceleration or developing deep learning applications in-house.

Summit will also fulfill another important role – that of a development platform for exascale science applications. As the last petascale system at Oak Ridge, the 200-petaflop machine will be a stepping stone for a bunch of HPC codes moving to exascale machinery over the next few years. And now with Summit up and running, that doesn’t seem like such a far-off prospect. “After all, it’s just 5X from where we are,” laughed Zacharia.

Top image: Summit supercomputer; Bottom image: Interior view of node. Credit: ORNL