Maximum PC

COMPUTE THIS

As anyone who’s ever tried to work out a restaurant bill, including drinks, taxes, and tip, already knows, some math is difficult. Scale that up by several orders of magnitude, to simulating the effects of a nuclear bomb, folding proteins, or calculating how many oil rigs to send up the Bering Strait before winter, and your needs go beyond mere computers. You need a supercomputer.

Established in the 1960s, supercomputers initially relied on vector processors before evolving into the massively parallel machines we see today, in the form of Japan’s Fugaku (7,630,848 ARM processor cores producing 442 petaflops) and IBM’s Summit (202,752 POWER9 CPU cores, plus 27,648 Nvidia Tesla V100 GPUs, producing 200 petaflops).

But how did we get to these monsters? And what are we using them for? The answer to the second question used to lie in physics, especially the explodey kind that can level a city. More recently, however, fields such as organic chemistry and climate modeling have taken precedence. The computers themselves are on a knife-edge, as the last drops of performance are squeezed out of traditional architectures and materials, and the search begins for new ones.

This, then, is the story of the supercomputer, and its contribution to human civilization.

DEFINE SUPER

What exactly is a supercomputer? Apple tried to market its G4 line as ‘personal supercomputers’ around the turn of the millennium, but there’s more to it than merely having multiple cores (although that certainly helps). Supercomputers are defined as large, expensive machines whose performance hugely outstrips the mainstream.

Apple’s claim starts to make more sense when you compare the 20 gigaflops of performance reached by the hottest, most expensive, dual-processor, GPU-equipped G4 PowerMac to the four gigaflops of the average early-2000s Pentium 4. For context, Control Data’s CDC Cyber supercomputer ran at 16 gigaflops in 1981, a figure reached by ARMv8 chips in today’s high-end cell phones.
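
To get a feel for those orders of magnitude, here’s a rough back-of-the-envelope sketch in Python using only the figures quoted in this article; treat the machine names and numbers as the article’s round peak figures rather than measured benchmark results.

```python
# Back-of-the-envelope comparison of the peak figures quoted in this article,
# expressed in FLOPS. These are quoted peak numbers, not measured benchmarks.
machines = {
    "Early-2000s Pentium 4": 4e9,     # ~4 gigaflops
    "Dual-CPU Power Mac G4": 20e9,    # ~20 gigaflops
    "IBM Summit": 200e15,             # ~200 petaflops
    "Fugaku": 442e15,                 # ~442 petaflops
}

baseline = machines["Early-2000s Pentium 4"]
for name, flops in machines.items():
    # Show each machine's throughput and how many Pentium 4s it equates to.
    print(f"{name:>22}: {flops:9.3g} FLOPS  ({flops / baseline:,.0f}x a Pentium 4)")
```

Run as written, the loop reports Fugaku at roughly 110 million times the quoted Pentium 4 figure, which is the scale gap the rest of this story is about.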

Before supercomputers there were simply computers, though some of them were definitely super. After World War II, many countries found ways to automate code-breaking and other intensive mathematical tasks, such as those involved in building nuclear weapons. So let’s begin in 1945, and the ENIAC.

This programmable mass of valves and relays was designed to compute artillery trajectories, and it could do a calculation in 30 seconds that would take a human 20 hours. Its first test run, however, was commandeered by John von Neumann of the Los Alamos National Laboratory and consisted of calculations for producing a hydrogen bomb. ENIAC took its input and delivered its results on punch cards, and a single Los Alamos run used a million of them.
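
Taken at face value, those two figures imply a speedup of roughly 2,400 times over a human computer; a trivial Python sketch of that arithmetic:

```python
# The article's ENIAC figures: one calculation in 30 seconds that would
# take a human 20 hours. The implied speedup is simple division.
human_seconds = 20 * 60 * 60   # 20 hours of hand calculation
eniac_seconds = 30             # the same calculation on ENIAC
print(f"ENIAC speedup: roughly {human_seconds / eniac_seconds:,.0f}x")  # ~2,400x
```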

ENIAC was upgraded throughout its life, and when finally switched off in 1956 (having run continuously since 1947, pausing only to replace
