High Performance Computing: What History Can Teach Us About the Present and Future
This course, in partnership with IEEE Future Directions, is based on a very popular lecture that vividly illustrates a profound fact about the High Performance Computing ecosystem: the architecture of today's and upcoming exascale systems is not new at all. Time and again, engineers waste effort re-inventing the wheel, while software grows slower faster than hardware grows faster. The observation that "software is getting slower more rapidly than hardware becomes faster" was made by Niklaus Wirth as early as 1995. We will study key examples of the "big iron" computers of the 70s, 80s, and 90s and the component work done on those machines. HPC professionals will find this material extremely valuable today because it will help them avoid duplicating that effort.

The industry is experiencing a sort of reverse déjà vu, more correctly called jamais vu: perceiving as unfamiliar something we in fact know very well. This makes the history especially relevant now. In this course, we will also see how application developers stopped worrying about optimization in the 90s and early 2000s, and how this limited their applications' ability to take advantage of the hardware when things changed. We will conclude by explaining how applications need to be structured so that the compiler can convert them into optimized code that takes full advantage of the hardware.
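As a small taste of that concluding topic, the sketch below (not taken from the course material; the function and array names are illustrative) contrasts a loop structure a modern compiler can map onto the hardware's vector units with one whose loop-carried dependency forces scalar execution:

```c
#include <stddef.h>

/* Vectorizable: every iteration is independent, and the restrict
 * qualifiers promise the arrays do not overlap, so the compiler can
 * emit SIMD instructions for this loop at typical optimization levels. */
void saxpy(size_t n, float a, const float *restrict x, float *restrict y)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* Not vectorizable as written: iteration i reads the result produced by
 * iteration i - 1 (a loop-carried dependency), so the compiler cannot
 * safely execute iterations in parallel lanes. */
void prefix_sum(size_t n, float *s)
{
    for (size_t i = 1; i < n; i++)
        s[i] = s[i] + s[i - 1];
}
```

Restructuring applications so that their hot loops look like the first form rather than the second is exactly the kind of skill the course aims to teach.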
What you will learn:
- Review the advent of supercomputers in the 60s, 70s, and 80s, from the "real vector machines" to contemporary high-performance computers
- Discuss the "attack of the killer micros" in the 90s and the benefits of on-processor parallelism
- Examine the evolution of microprocessors from the late 90s to 2005, the decreasing clock cycle, and its effects on application developers
- Review the introduction of GPUs and multi-core nodes in high-performance computers from 2005 to today
- Discuss the re-introduction of vectors in CPUs and the challenges they pose for application developers
This course is part of the following course program:
High Performance Computing: Technologies, Solutions to Exascale Systems, and Beyond
Instructor
John Levesque
John Levesque is a high-performance computing evangelist who holds advanced degrees in both physics and mathematics and has worked in the field of high-performance computing for over 50 years. He has worked with every significant HPC architecture, from the earliest machines to the latest, and has led teams at IBM Research, Applied Parallel Research, and Pacific Sierra Research. For many years he was the director of the Cray Supercomputer Center of Excellence based at Los Alamos National Laboratory in New Mexico. Today he is a member of the Hewlett Packard Mission Critical Systems Chief Technology Office. In the national and international scientific and technical computing community, he is a well-known lecturer and the author of three books: A Guidebook to Fortran on Supercomputers, High Performance Computing: Programming and Applications, and Programming for Hybrid Multi/Manycore MPP Systems.
Publication Year: 2022
ISBN: 978-1-7281-7826-4