Bridging the divide between supercomputers and the Cloud
Pushing the boundaries of computing capabilities
Jack Lange, Assistant Professor
Supercomputers are being used to predict the chance of earthquake damage, shorten the chemistry phase of drug development from three or four years to six months, and support the U.S. intelligence community by providing scientific expertise on cybersecurity. These are just a few examples of how scientists are harnessing the computational power of supercomputers. But how many people realize that research on supercomputer operating systems has reshaped our daily lives? If you have ever used Netflix, Comcast, Airbnb, Zillow or Yelp, you’re benefiting from supercomputer research.
Jack Lange designed the first virtual machine monitor specifically for high performance computing
For over a decade, Lange has been developing software specifically for supercomputing platforms. He led the design of the Palacios Virtual Machine Monitor (VMM), the first VMM architecture built specifically for High Performance Computing (HPC).
Virtualization allows a user to run an entire operating system as a self-contained software application on another machine. It has numerous advantages, such as server consolidation, fault tolerance, disaster recovery, and easier debugging.
His Prognostic Lab is currently working on the Hobbes Project, led by Sandia National Laboratories in collaboration with four national laboratories and eight universities. Their goal is to “deliver an operating system for future extreme-scale parallel computing platforms that will address the major technical challenges of energy efficiency — managing massive parallelism and deep memory hierarchies, and providing resilience in the presence of increasing failures.”
Simply put, he is researching how to make supercomputers more energy efficient without sacrificing performance. His research is at the forefront of how we should design the operating systems of massively complex machines.
While data center architectures, the foundation of cloud computing, run different types of applications than supercomputers do, the two environments share underlying similarities.
According to Lange, “Our focus is figuring out how to take what we’ve done to make this HPC system work, with tight constraints and very specific requirements, and take the lessons learned and apply them to data center architecture.”
As supercomputer environments and cloud computing environments continue to converge, Lange says, “a lot of the work we do is at that convergence point.”
Jack Lange is an assistant professor at the University of Pittsburgh where he leads The Prognostic Lab.