Here’s the first look at the configurations of the ASC19 Student Cluster Competition teams. The first thing to understand is that Inspur generously provides every piece of hardware, so this is a lot like a stock car race in which everyone is essentially running the same gear.
So if everyone has the same hardware to choose from, why is there such wide variation among the clusters in the competition? Creativity, that’s why. Students have spent months researching and testing different cluster options given the components they were allotted. Some believe that ‘small is beautiful,’ especially when accompanied by a slew of GPUs, while others sought a better balance between CPUs, node memory, and GPU accelerators.
It’s interesting to look at the trends. Systems are getting bigger: the average node count at ASC19 is a full node larger than the 2018 average, and the median has grown as well. Average and median CPU and CPU core counts have also risen sharply, both because Intel increased its core counts (nice work, Intel) and because of the greater number of nodes in this year’s systems.
We see a massive increase in memory per node, from 201 GB per node in 2018 to a whopping 384 GB per node in 2019. This is due to new large-memory systems from Inspur, a great job on their part. And check out the memory-per-cluster figures: 1,017 GB in 2018 versus 2,688 GB in 2019, a big jump over the course of a single year.
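To put those numbers in perspective, here is a quick back-of-the-envelope comparison using only the averages quoted above (a rough decomposition, not official competition math):

\[
\frac{384\ \text{GB}}{201\ \text{GB}} \approx 1.9\times \ \text{(per node)}
\qquad
\frac{2688\ \text{GB}}{1017\ \text{GB}} \approx 2.6\times \ \text{(per cluster)}
\]

The per-cluster figure grew faster than the per-node figure (a gap of roughly 1.4x), which is consistent, at least directionally, with the clusters also picking up nodes this year.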
What’s interesting is that the number of accelerators/GPUs per cluster has dropped slightly. Some competition observers have assumed that the student cluster competition race is decided by how many accelerators you can cram into a machine, but that isn’t necessarily true according to my data. It all depends on the applications.
The applications this year are a mixed bag when it comes to GPU-centricity (a new word I just coined). Linpack and HPCG are certainly affected by how many GPUs you have in a system, but those benchmarks aren’t weighted very heavily in the overall scoring.
Nvidia is touting its GPUs for PyTorch workloads. GPUs will probably be useful in the SR task, although, according to the students, you don’t need a whole lot of GPUs to complete that assignment.
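For readers who haven’t touched this kind of workload, here is a minimal sketch of what “it runs on however many GPUs you have” looks like in PyTorch. This is purely illustrative: it assumes the SR task is an ordinary PyTorch training job and uses a hypothetical stand-in model, not the teams’ actual competition code.

import torch
import torch.nn as nn

# Pick the GPU if one is present, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hypothetical stand-in network; the real SR model is not reproduced here.
model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(64, 3, kernel_size=3, padding=1),
)

# DataParallel splits each batch across all visible GPUs; with one GPU
# (or none) the plain model is used as-is.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model = model.to(device)

print(f"Running on {max(torch.cuda.device_count(), 1)} device(s): {device}")

The point is simply that the framework scales down gracefully: a team that skipped a full load-out of accelerators can still run the same script on one or a few V100s.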
This brings us to CESM, which isn’t GPU-centric at all, according to the students. Since it is a major part of the scoring, students have in some cases eschewed GPUs in their clusters in favor of more CPUs. The truth is that Nvidia V100s are quite expensive and in short supply, even though Inspur supplies four V100s to any team that is interested. Good job, Inspur.
Now that we know the configurations, we can move on to meeting the students via video interviews and getting the first real results of the competition. Stay tuned!