IF YOU’RE THE TYPE OF PERSON who ever contemplates what more you could have done with your life, I have some advice: Don’t talk to Rick Stevens. Just 15 minutes into a conversation with him, I already feel like an idiot. Outwardly, I’m making direct eye contact, taking notes, putting my fingers to my lips to signal I’m hanging on his every word; inwardly, I’m thinking about how many hours I’ve spent on YouTube rewatching clips from The Sopranos.
Stevens is the associate laboratory director for computing, environment, and life sciences at Argonne National Laboratory in southwest suburban Lemont. The title wordily obscures his accomplishments. Stevens, who started computer programming at age 14, has been at Argonne (the country’s first national laboratory, established in 1946 and jointly operated by the U.S. Department of Energy and the University of Chicago) since 1982, when he was still an undergrad at Michigan State. After he joined Argonne, he got a doctorate in computer science at Northwestern. Over the past 40 years, he’s been a key figure in Argonne’s significant advancements in supercomputing.
On a sunny day in November, I’m sitting in Stevens’s office to learn more about the Aurora supercomputer, Argonne’s next big leap in computational speed and power. The lab has been laboring over supercomputers for nearly its entire history, in a constant state of conceptualizing, formulating, fundraising, designing, constructing, testing, and operating. But even within that decades-long span of inexorable innovation, Aurora is a singular milestone. When the machine is fully constructed and operational — Argonne officials are hoping for early spring — it will be one of the first supercomputers in the world to operate at exascale, an unprecedented stage of computing: a quintillion calculations per second.
And this is why I came to talk to Stevens. He’s over six feet tall, with fantastic long brown hair hanging past his shoulders, and a wide frame, like he could have played football. On the day I meet him, he’s wearing glasses, Birkenstock sandals with socks, flowy black yoga pants, and a loose-fitting sweatshirt.
The first question I ask him: What’s the impact Aurora will have on our everyday life?
“What’s the impact?” Stevens replies, rhetorically and a little wearily. “Well, you can get a hint of it, maybe, from the impact that supercomputing has had on the world in the last 20 years. Everything we know about large-scale climate comes from climate simulations on supercomputers. What we know about the human genome comes from massive data analysis on big computers. Everything that’s happening in AI right now is happening on large-scale computers. Just the idea that you could build a system that might be able to drive a car is a result of huge amounts of computing. Our ability to design reactors, our ability to come up with new batteries — all that is a result of computing.”
You know, just the climate, the human genome, nuclear power, robots.
“The exascale machine is the latest version of that,” Stevens continues, “and an exascale machine is a million times faster than the machines we had at the turn of the century.”
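Stevens’s “million times faster” is plain arithmetic. A rough sketch of the comparison, using round order-of-magnitude figures (the specific numbers here are my assumptions for illustration, not Argonne’s): leadership-class machines around the year 2000 ran at roughly a teraflop, a trillion operations per second, while an exascale machine runs at an exaflop, a quintillion operations per second.

```python
# Rough orders of magnitude, assumed for illustration:
teraflop = 10**12   # ops/sec -- a top supercomputer circa 2000
exaflop = 10**18    # ops/sec -- an exascale machine like Aurora

# A simulation that once took years of machine time fits into seconds.
speedup = exaflop // teraflop
print(speedup)  # 1000000 -- Stevens's "million times faster"
```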
Still, how could we witness a “million times faster” empirically? How would we be able to see that materially in our everyday lives? I don’t want to repeat my initial question, so I ask it in the form of a follow-up: Exascale computing is going to perform functions that we can’t execute now, right?
“Yeah, it’s a million times faster,” Stevens answers, another way of saying,