John Michael Levesque
Biography

John Levesque is the Director of Cray's Supercomputer Center of Excellence, based at Oak Ridge National Laboratory (ORNL), where he is responsible for the group performing application porting and optimization for breakthrough science projects. Levesque has worked in high performance computing for over 40 years. He is also a member of Cray's Chief Technology Office, heading the company's efforts in application performance. ORNL was the first site to install a petaflop Cray XT5 system, Jaguar, which as of June 2010 was the fastest computer in the world according to the TOP500 list; in October 2012 ORNL will install the largest hybrid system, built with Nvidia Kepler GPUs, which has a chance of taking the number one spot again.
Before joining Cray Inc., Levesque was the Director of the Advanced Computing Technology Center at IBM Research in Yorktown Heights, New York. At IBM Research, he headed a group that concentrated HPC expertise within the company and supplied users with application porting and optimization solutions for IBM SP hardware.
Prior to joining IBM, Levesque ran Applied Parallel Research (APR), a small California software company that developed tools for parallelizing applications. The company also completed several high performance computing software development contracts for both government and industry clients. While at APR, and previously at Pacific Sierra Research, Levesque headed a team that developed FORGE, the first and, currently, only "Whole Program Analysis" package for Fortran 77.
His early experience with high performance computing began with optimizing nuclear effects applications for several research organizations.
Levesque is well known as a lecturer and author within the scientific and technical computing community. CRC recently published his book "High Performance Computing – Programming and Applications".
He holds a double Masters degree in Mathematics and Physics from the University of New Mexico at Albuquerque.
By: John Michael Levesque
For the past 20 years, high performance computing has
benefited from a significant reduction in the clock cycle
time of the basic processor. Going forward, trends indicate that the
clock rate of the most powerful processors in the world may stay
the same or decrease slightly. When the clock rate decreases, the
chip runs at a slower speed. At the same time, the amount of
physical space that a computing core occupies is still trending
downward. This means more processing cores can be contained within the chip.
With this paradigm shift in chip technology, caused by the amount of electrical power required to run the device, additional performance is being delivered by increasing the number of processors on the chip and (re)introducing SIMD/vector processing. The goal is to deliver more floating-point operations per second per watt. Interestingly, these evolving chip technologies are being used on scientific systems as small as a single workstation and as large as the systems on the Top 500 list.
Within this book are techniques to
effectively utilize these new node architectures.
Efficient threading on the node, vectorization to
utilize the powerful SIMD units, and effective
memory management will be covered, along with examples that allow the
typical application developer to apply these techniques to their own programs.
Performance-portable techniques will be shown that run
efficiently on all HPC nodes.
The principal target systems will be Intel's latest multicore Xeon processors and the latest Intel Knights Landing (KNL) chip, with discussion of and comparison to the latest hybrid, accelerated systems using NVIDIA's Pascal accelerator.