Invited Talks, a premier component of the SC Conference Program, are designed to complement the regular Technical Papers Program. Talks previously featured as Masterworks, Plenary, and State-of-the-Field are combined at SC15 under the single banner of “Invited Talks.” This November’s Invited Talks will feature leaders in high performance computing, networking, analysis, and storage.
According to the SC15 Web site, “Invited Talks will typically concern innovative technical contributions and their applications to address the critical challenges of the day. Additionally, these talks will often concern the development of a major area through a series of important contributions and provide insights within a broader context and from a longer-term perspective. At all Invited Talks, you should expect to hear about pioneering technical achievements, the latest innovations in supercomputing and data analytics, and broad efforts to answer some of the most complex questions of our time.”
SC15 Invited Talks
- Superscalar Programming Models: Making Applications Platform Agnostic | Rosa M. Badia, Barcelona Supercomputing Center
Programming models play a key role in providing abstractions of the underlying architecture and systems to the application developer, enabling the exploitation of the available computing resources through a suitable programming interface. For complex systems characterized by large scale, distribution, heterogeneity, and variability, it is all the more important to offer programming paradigms that simplify the programmer’s life while still delivering competitive performance. StarSs (Star superscalar) is a family of task-based programming models built on the idea of writing sequential code that is executed in parallel at runtime, taking into account the data dependences between tasks. The talk will describe the evolution of this programming model and the challenges addressed in supporting different underlying platforms, from the heterogeneous platforms used in HPC to distributed environments such as federated clouds and mobile systems. Learn more
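The dependence-driven execution idea can be illustrated with a toy runtime (a hypothetical sketch in Python; the actual StarSs/OmpSs models annotate sequential C/Fortran code with pragmas, not this API). Tasks declare which data they read and write, and the runtime infers dependences from submission order and runs independent tasks concurrently:

```python
# Toy StarSs-style task runtime: dependences are inferred from the
# data each task reads and writes; independent tasks run in parallel.
from concurrent.futures import ThreadPoolExecutor

class TaskRuntime:
    def __init__(self, workers=4):
        self.pool = ThreadPoolExecutor(workers)
        self.last_writer = {}  # datum name -> future that produces it

    def submit(self, fn, reads=(), writes=()):
        # A task depends on the last writer of everything it touches.
        deps = [self.last_writer[d] for d in reads + writes
                if d in self.last_writer]
        def run():
            for f in deps:           # wait for producers of our inputs
                f.result()
            return fn()
        fut = self.pool.submit(run)
        for d in writes:             # later readers must wait on us
            self.last_writer[d] = fut
        return fut

    def barrier(self):
        for f in list(self.last_writer.values()):
            f.result()

rt = TaskRuntime()
data = {}
# The first two tasks are independent and may run in parallel;
# the third waits for both, exactly as in the sequential program order.
rt.submit(lambda: data.update(a=1), writes=('a',))
rt.submit(lambda: data.update(b=2), writes=('b',))
rt.submit(lambda: data.update(c=data['a'] + data['b']),
          reads=('a', 'b'), writes=('c',))
rt.barrier()
```

After the barrier, `data['c']` holds 3: the runtime serialized only the true data dependence, not the independent producers.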
- The National Strategic Computing Initiative | Randal E. Bryant, William T. Polk, Executive Office of the President, Office of Science and Technology Policy
U.S. President Obama signed an Executive Order creating the National Strategic Computing Initiative (NSCI) on July 29, 2015. In the order, he directed agencies to establish and execute a coordinated Federal strategy for high-performance computing (HPC) research, development, and deployment. The NSCI is a whole-of-government effort, executed in collaboration with industry and academia, to maximize the benefits of HPC for the United States. The Federal Government is moving forward aggressively to realize that vision. This presentation will describe the NSCI, its current status, and some of its implications for HPC in the U.S. over the coming decade.
- Revealing the Hidden Universe with Supercomputer Simulations of Black Hole Mergers | Manuela Campanelli, Rochester Institute of Technology
Supermassive black holes at the centers of galaxies power some of the most energetic phenomena in the Universe. Their observations have numerous exciting consequences for our understanding of galactic evolution, black hole demographics, plasma dynamics in strong-field gravity, and general relativity. When they collide, they produce intense bursts of gravitational and electromagnetic energy and launch powerful relativistic jets. Understanding these systems requires solving the highly nonlinear and highly coupled field equations of General Relativity and Relativistic Magnetohydrodynamics. This problem is tractable only with sophisticated numerical techniques for simulation, data extraction, and visualization, running on petascale supercomputers with tens to hundreds of thousands of CPUs working simultaneously. This talk will review some of the new developments in numerical relativity and relativistic astrophysics that allow us to successfully simulate and visualize the innermost workings of these violent astrophysical phenomena.
- Trends and Challenges in Computational Modeling of Giant Hydrocarbon Reservoirs | Ali H. Dogru, Fellow and Chief Technologist of Computational Modeling Technology, Saudi Aramco
Giant oil and gas reservoirs continue to play an important role in providing energy to the world. Today, state-of-the-art technologies are used to further explore and produce these reservoirs, since even a slight increase in recovery amounts to discovering a mid-size reservoir somewhere else. Mathematical modeling and numerical simulation on large supercomputers play a major role in managing these reservoirs and predicting their behavior. With the aid of evolving measurement technologies, a vast amount of geoscience, fluid, and dynamic data is now being collected, and consequently ever more high-resolution, high-fidelity numerical models are being constructed. Yet challenges remain in model construction and in simulating the dynamic behavior of these reservoirs: determining rock property variation between wells, locating faults accurately, and effectively simulating multicomponent, multiphase transient flow in fractures, complex wells, and the rock matrix. Computational challenges include effective parallelization of simulator algorithms, cost-effective large-scale sparse linear solvers, discretization, handling multiscale physics, complex well shapes, and fractures, keeping the software compliant with rapidly evolving supercomputer architectures, and effective visualization of very large data sets. This presentation will cover examples of giant reservoir models with a billion-plus elements, model calibration to historical data, current challenges, and future trends in computational reservoir modeling. Learn more
- Supercomputing, High-Dimensional Snapshots, and Low-Dimensional Models: A Game Changing Computational Technology for Design and Virtual Test | Charbel Farhat, Stanford University
During the last two decades, giant strides have been achieved in many aspects of computational engineering. Higher-fidelity mathematical models and faster numerical algorithms have been developed for an ever increasing number of applications. Linux clusters are now ubiquitous, GPUs continue to shatter computing speed barriers, and exascale machines will increase computational power by at least two orders of magnitude. More importantly, the potential of high-fidelity physics-based simulations for providing deeper understanding of complex systems and enhancing their performance has been recognized in almost every field of engineering. Yet, in many applications, high-fidelity numerical simulations remain so computationally intensive that they cannot be performed as often as needed, and are performed in special circumstances rather than routinely. Consequently, the impact of supercomputing on time-critical operations such as engineering design, optimization, control, and test support has not yet fully materialized. To this end, this talk will argue that there is a pressing need for a game-changing computational technology that combines the power of supercomputing with the ability of low-dimensional computational models to perform in real time. It will also present a candidate approach for such a technology, based on projection-based nonlinear model reduction, and demonstrate its potential for parametric engineering problems using real-life examples from the naval, automotive, and aeronautics industries. Learn more
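The snapshot-to-low-dimensional-model pipeline can be sketched in its simplest linear form, proper orthogonal decomposition (POD): collect high-fidelity snapshots, extract a small basis by SVD, and approximate new states by projection. This is only the linear core of the idea, not the speaker’s full nonlinear reduction method; the example problem below is illustrative:

```python
# Minimal POD sketch: compress high-dimensional snapshots into a
# low-dimensional basis, then approximate unseen states by projection.
import numpy as np

def pod_basis(snapshots, k):
    """Leading left singular vectors of the snapshot matrix = POD modes."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :k]

def project(V, x):
    """Reduce a full state x to k coordinates, then reconstruct it."""
    q = V.T @ x          # low-dimensional coordinates (the "reduced model" state)
    return V @ q         # approximation back in the full space

# Snapshots of a smooth parameterized field u(x; mu) = sin(mu * x):
x = np.linspace(0, np.pi, 200)
S = np.column_stack([np.sin(mu * x) for mu in np.linspace(1.0, 2.0, 30)])

V = pod_basis(S, k=5)                 # 200-dimensional states -> 5 coordinates
u_new = np.sin(1.55 * x)              # state for a parameter not in the snapshots
err = np.linalg.norm(u_new - project(V, u_new)) / np.linalg.norm(u_new)
```

Because the parameterized family is smooth, five modes already capture unseen states to small relative error; that compression is what makes real-time evaluation of the reduced model feasible.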
- Fast and Robust Communication Avoiding Algorithms: Current Status and Future Prospects | Laura Grigori, INRIA
In this talk I address one of the main challenges in high performance computing: the increased cost of communication relative to computation, where communication refers to data transferred either between processors or between levels of the memory hierarchy, possibly including NVMs. I will give an overview of novel communication avoiding numerical methods and algorithms that reduce communication to a minimum for operations at the heart of many calculations, in particular numerical linear algebra algorithms. These algorithms range from iterative methods used in numerical simulations to low-rank matrix approximations used in data analytics. I will also discuss the algorithm/architecture matching of these algorithms and their integration into several applications.
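One standard member of this algorithm family, shown here as an illustrative example rather than as the talk’s specific contribution, is TSQR: a QR factorization of a tall-skinny matrix in which each processor factors its local row block and only the small triangular factors are combined, replacing many rounds of column-by-column communication with a single reduction. The row blocks below stand in for per-processor data:

```python
# TSQR sketch: communication-avoiding QR for tall-skinny matrices.
import numpy as np

def tsqr_r(A, nblocks=4):
    """Return the n-by-n R factor of a tall-skinny A via block-wise QR."""
    blocks = np.array_split(A, nblocks, axis=0)          # one block per "processor"
    local_rs = [np.linalg.qr(b, mode='r') for b in blocks]  # local, communication-free QRs
    stacked = np.vstack(local_rs)        # only the small n-by-n R's are "communicated"
    return np.linalg.qr(stacked, mode='r')  # one combine step yields the global R

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 8))       # tall and skinny: m >> n
R = tsqr_r(A)
# R is a valid R factor of A: R'R equals A'A (signs of rows may differ
# from those of a direct QR, which is the usual convention ambiguity).
```

The payoff is in the data volume: each reduction step moves only n-by-n triangles instead of the full m-by-n columns, which is exactly the communication-versus-computation trade the abstract describes.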
- System Software in the Post-K Supercomputer | Yutaka Ishikawa, RIKEN AICS
The next flagship supercomputer in Japan, the successor to the K computer, is being designed for general operation in 2020. Compute nodes based on a manycore architecture, connected by a 6-D mesh/torus network, are under consideration, along with a three-level hierarchical storage system. A heterogeneous operating system, combining Linux and a light-weight kernel, is being designed to provide suitable environments for applications. Designing system software that makes maximum use of the compute and storage resources is not possible without co-design with applications. After a brief introduction to the post-K supercomputer architecture, the design issues of the system software will be presented. Two big-data applications, genome processing and meteorological and global environmental prediction, will be sketched out as target applications driving the system software design, and it will be shown how their demands affect the system software.
- Societal Impact of Earthquake Simulations at Extreme Scale | Thomas H. Jordan, Southern California Earthquake Center, University of Southern California
The highly nonlinear, multiscale dynamics of large earthquakes is a wicked physics problem that challenges HPC systems at extreme computational scales. This presentation will summarize how earthquake simulations at increasing levels of scale and sophistication have contributed to our understanding of seismic phenomena, focusing on the practical use of simulations to reduce seismic risk and enhance community resilience. Milestones include the terascale simulations of large San Andreas earthquakes that culminated in the landmark 2008 ShakeOut planning exercise and the recent petascale simulations that have created the first physics-based seismic hazard models. From the latter it is shown that accurate simulations can potentially reduce the total hazard uncertainty by about one-third relative to empirical models, which would lower the exceedance probabilities at high hazard levels by orders of magnitude. Realizing this gain in forecasting probability will require enhanced computational capabilities, but it could have a broad impact on risk-reduction strategies, especially for critical facilities such as large dams, nuclear power plants, and energy transportation networks. Learn more
- The Power of Visual Analytics: Unlocking the Value of Big Data | Daniel A. Keim, University of Konstanz
Never before in history has data been generated and collected at such high volumes as it is today. As the volumes of multidimensional data available to businesses, scientists, and the public increase, their effective use becomes more challenging. Visual analytics seeks to provide people with effective ways to understand and analyze large multidimensional data sets, while also enabling them to act upon their findings immediately. It integrates the analytic capabilities of the computer with the abilities of the human analyst, allowing novel discoveries and empowering individuals to take control of the analytical process. The talk presents the potential of visual analytics and discusses the role of automated versus interactive visual techniques in dealing with big data. A variety of application examples, ranging from news analysis and network security to SC performance analysis, illustrate the exciting potential of visual analysis techniques as well as their limitations. Learn more
- Reproducibility in High Performance Computing | Victoria Stodden, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign
Ensuring reliability and reproducibility in computational research raises unique challenges in the supercomputing context. Specialized architectures, extensive and customized software, and complex workflows all raise barriers to transparency, while established concepts such as Validation, Verification, and Uncertainty Quantification point ways forward. The topic has attracted national attention: President Obama’s July 29, 2015 Executive Order “Creating a National Strategic Computing Initiative” includes accessibility and workflow capture as objectives; an XSEDE14 workshop released a report “Standing Together for Reproducibility in Large-Scale Computing”; on May 5, 2015 ACM Transactions on Mathematical Software released a “Replicated Computational Results Initiative”; and this conference is host to a new workshop, “Numerical Reproducibility at Exascale,” to name but a few examples. In this context I will outline a research agenda to establish reproducibility and reliability as a cornerstone of scientific computing. Learn more
- Virtual and Real Flows: Challenges for Digital Special Effects | Nils Thuerey, Technische Universität München
Physics simulations of virtual smoke, explosions, or water are by now crucial tools for special effects in feature films. Despite their widespread use, central challenges remain in making these simulations controllable, fast enough for practical use, and believable. In this talk I will explain simulation techniques for fluids in movies, and why “art directability” is crucial in these settings. A central challenge for virtual special effects is to make them faster: ideally, previews should be interactive. At the same time, interactive effects are highly interesting for games and training simulators. I will highlight current research in flow capture and data-driven simulation, which aims at shifting the computational load from run-time into a pre-computation stage, and give an outlook on future developments in this area.
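The abstract leaves the specific techniques to the talk itself; as a flavor of the kind of building block movie fluid solvers rely on, here is the classic semi-Lagrangian advection step (Stam’s unconditionally stable “Stable Fluids” scheme), shown in one dimension: each grid point is traced backward along the velocity field, and the previous density is sampled there:

```python
# Semi-Lagrangian advection in 1D: stable even for large time steps,
# which is why variants of it are popular in visual-effects solvers.
import numpy as np

def advect(density, velocity, dt, dx):
    """Advect a 1D density field through one time step."""
    n = density.size
    x = np.arange(n) * dx
    back = x - velocity * dt                 # backtrace departure points
    idx = back / dx                          # departure points in grid units
    i0 = np.clip(np.floor(idx).astype(int), 0, n - 1)
    i1 = np.clip(i0 + 1, 0, n - 1)
    t = np.clip(idx - i0, 0.0, 1.0)
    return (1 - t) * density[i0] + t * density[i1]   # linear interpolation

# A Gaussian density blob carried to the right by a uniform velocity field:
rho = np.exp(-((np.arange(100) - 30) ** 2) / 20.0)   # peak at cell 30
vel = np.ones(100)
rho_new = advect(rho, vel, dt=5.0, dx=1.0)           # peak moves 5 cells right
```

Because the scheme samples upstream values instead of integrating fluxes forward, it never blows up for large `dt`, at the cost of some numerical smoothing, which artists often tolerate (or even exploit) in production.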
- The European Supercomputing Research Programme | Panagiotis Tsarchopoulos, Future and Emerging Technologies, European Commission
Over the last couple of years, through a number of policy and research initiatives, the European Union has worked to put together an ambitious supercomputing research programme. As part of this effort, in autumn 2015 the European Commission launched several new supercomputing projects covering supercomputing hardware, software, and applications. This launch marks an important milestone in European supercomputing research and development. The talk will provide a detailed overview of the European supercomputing research programme, its current status, and its future perspectives towards exascale.
- LIQUi|> and SoLi|>: Simulation and Compilation of Quantum Algorithms | Dave Wecker, Quantum Architectures and Compilation (QuArC), Microsoft Research
Languages, compilers, and computer-aided design tools will be essential for scalable quantum computing, which promises an exponential leap in our ability to execute complex tasks. LIQUi|> and SoLi|> provide a modular software architecture for the simulation of quantum algorithms and the control of quantum hardware. They provide a high-level interface and are independent of any specific quantum architecture. This talk will focus on the simulation of quantum algorithms in quantum chemistry and materials, as well as factoring, quantum error correction, and compilation for hardware implementations (http://arxiv.org/abs/1402.4467). Learn more
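LIQUi|> itself is an F#-embedded toolkit; purely as a rough sketch of what a quantum-algorithm simulator does under the hood (and not LIQUi|>’s actual API), here is a minimal statevector simulation in Python that prepares a Bell state with a Hadamard and a CNOT:

```python
# Toy statevector simulation: gates are unitary matrices acting on the
# amplitude vector over basis states |00>, |01>, |10>, |11>.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                 # control = first qubit
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

state = np.zeros(4)
state[0] = 1.0                                 # start in |00>
state = np.kron(H, I2) @ state                 # H on the first qubit
state = CNOT @ state                           # entangle the two qubits
# state is now the Bell state (|00> + |11>) / sqrt(2)
```

Classical simulation like this is exactly what makes such tools costly at scale: the statevector doubles with every qubit, which is why algorithm- and architecture-aware compilation matters.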