It was 31 years ago that Alan Karp, then an IBM employee, decided to put up $100 of his own money in hopes of settling a vexing question for him and others in the computing field. Looking at the HPC space, he saw supercomputers armed with eight powerful processors and designed to run the biggest applications of the day, but he also saw people putting 1,000 wimpy chips into machines that leveraged parallelism to run workloads, a rarity at the time.
According to Amdahl’s Law, even if 95 percent of a workload runs in parallel, the speedup in execution of the workload can be no more than 20X, Karp said. So in 1986 he put up the $100 to encourage people to demonstrate faster speedups for general-purpose applications running in parallel, and thereby to prove the worth of building machines powered by many smaller processors.
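For readers who want the arithmetic behind that 20X figure, the standard textbook form of Amdahl’s Law (this is the usual statement of the law, not something Karp presented on stage) gives the speedup of a workload whose parallel fraction is p when run on N processors as

S(N) = \frac{1}{(1-p) + p/N}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1-p},

so with p = 0.95 the speedup is capped at 1/0.05 = 20, no matter how many processors are thrown at the problem.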
“So Gordon heard about my challenge and said that nobody is going to take Karp’s money, so what he’s going to do is he’s going to offer $1,000 a year for the best speedup, one year after the next, beating the previous year by a factor of two,” Karp said during a talk this week at the SC17 supercomputing conference in Denver. “‘Just to keep things interesting,’ I think were his exact words.”
“Gordon” was Gordon Bell, who by 1986 already had a long and storied career in technology that saw him work at DEC, teach at Carnegie Mellon University and co-found Encore Computer, which built shared-memory, multi-processor computers. Bell also was a believer in parallel computing and was willing to give $1,000 in award money every year to help drive innovation that would advance those ideals. A year later, the first award was given out, marking the beginning of what would become the Gordon Bell Prize, a prestigious award given for achievements in the HPC field. This year’s SC conference marks the 30th anniversary of the award – which will be given out Nov. 16 – and Bell, now 83, and Karp reunited on stage at the show to talk about the last three decades in HPC, including the rise of parallel computing.
At the time, Bell was with the National Science Foundation’s newly created Computer and Information Science and Engineering (CISE) directorate, where he continued his evangelism of parallel computing.
“I said parallelism is the issue. We’re going to have these machines that are going to be highly parallel and nobody knows how to program it,” he said, adding that his ideas ran against those of three other scientists who said that programming for sequential computing was difficult enough.
Bell’s $1,000 award had an almost immediate effect, according to Karp. The first year, there were seven entries, including one from scientists at Sandia National Laboratories, who had three applications running at a speedup of 600X.
“They made the very simple … observation that when you get a bigger computer, you run a bigger problem, and a bigger problem has a higher percentage that can be parallelized, and that makes the speedup essentially unbounded,” he said.
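The Sandia argument is what became known as Gustafson’s Law, which Bell cited later in the talk: instead of fixing the problem size, fix the run time and let the problem grow with the machine. In the textbook formulation (again, the standard statement rather than a quote from the session), with s the serial fraction of the time spent on the parallel machine and N processors, the scaled speedup is

S_{\text{scaled}}(N) = s + (1-s)N = N - s(N-1),

which grows nearly linearly with N rather than saturating at 1/s as Amdahl’s fixed-size bound does.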
Bell talked about the evolution of his eponymous award and about changes in the HPC field, which has seen the drive toward parallel computing continue to accelerate with the development of such technologies as GPU accelerators. He also spoke about the years 1983 to 1993, noting developments that would lead to the transition to multi-computers. Those included the break from single-memory computers, powerful and inexpensive CMOS microprocessors that caught up with proprietary TTL and ECL processors, Gustafson’s Law, which he said proved parallel systems worked, and the introduction of the Top500 list.
He also said he believes the Gordon Bell Prize itself helped.
“I think that was really the thing that really moved clusters and our current mode of computing forward for probably five years,” Bell said. “Without that proof, without that stimulation, I don’t think anyone would have been convinced that you could actually use that, and it would have taken much longer. That put an end to the notion that there was a limit there.”
At the beginning, the award “was very loosely specified in those days, and I think that was a really good thing because it let people be rewarded for different kinds of improvements or breakthroughs in computing,” he said.
It started out looking at peak performance and scaling, then grew to include such areas as price/performance, sustained performance, scalability, compilers, speedup, special-purpose machines, and languages. It also eventually included a Special Lifetime Achievement award.
“The Special Lifetime Achievement was a prize to give the people at the University of Tokyo who had been building a thing called the GRAPE computer [and were] winning every year for price performance because they had been putting more and more FPGAs together,” Bell said.
Now, 30 years after the first awards were given out, massive, highly parallel supercomputers are running complex workloads that look at the oceans as well as the stars. The Sunway TaihuLight supercomputer in China, which sits atop the Top500 list for the fourth time, holds more than 10 million processing cores. And more is on the way, Bell noted, including the expected use of FPGAs in more systems and ongoing improvements in algorithms. And the role these massive supercomputers play in the world will continue.
“I like to point out, if supercomputing hadn’t been there, there would have [been] some question whether climate change is man-made or not,” he said. “We would really be arguing climate [change] being man-made or not. No question about it. Did I say that? What makes me so smug about it is it’s all based on a belief here in computing.”
Toward the end of the talk, Karp asked Bell if there was anything he would change in his life.
“The fact that I don’t have the attention span now to do programming is something everybody needs to pay attention to,” he said. “Whatever you’re doing, you have to keep doing. I’d like to be able to get enthusiastic about building a computer now, but I just don’t have the tools to do that. Just don’t fall into that crack.”