This article is the second of a two-part series on seismic imaging; it looks at HPC seismic imaging advances and full waveform inversion (FWI) analysis performed by Imperial College, the Intel Parallel Computing Center (Intel PCC) for Open Performance portable Seismic Imaging (OPESCI), and SENAI CIMATEC in Brazil.
Reverse time migration (RTM) has been used for seismic imaging since the 1990s, and its use has steadily widened as computational power has increased. Because of its computational intensity, each major step in its development, such as moving from 2-D to 3-D imaging or to higher resolution, was tied to a significant increase in computational power. Full waveform inversion (FWI) goes much further than traditional RTM: it tries to account for more of the physics involved in the propagation of sound waves, using the so-called adjoint method to invert for the subsurface image. However, it is even more computationally expensive than RTM, and it has only recently become feasible for 3-D imaging, albeit at a much lower resolution.
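In broad terms (and not specific to any one code base), FWI can be stated as a PDE-constrained least-squares problem; a standard acoustic formulation looks like this, with the gradient of the misfit computed by the adjoint method mentioned above:

```latex
% Standard acoustic FWI formulation (illustrative, not project-specific)
\min_{m}\; J(m) \;=\; \frac{1}{2}\sum_{s,r}\big\| u_s(\mathbf{x}_r,t;m) - d_{s,r}(t) \big\|^2
\quad\text{subject to}\quad
m(\mathbf{x})\,\frac{\partial^2 u_s}{\partial t^2} \;-\; \nabla^2 u_s \;=\; q_s ,
```

where m = 1/c^2 is the squared slowness being inverted for, u_s is the simulated wavefield for source s, d_{s,r} are the recorded traces at receivers r, and q_s is the source term. The adjoint method yields the gradient of J by cross-correlating forward and adjoint wavefields, rather than by perturbing each model parameter separately.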
The ultimate goal of accurate imaging appears much further away when you consider that all of this is done with an acoustic approximation to sound propagation in rock, while the collected data contains a large component of shear waves, which would require an elastic model of the earth. Thus, this valuable signal becomes, in effect, noise. The obvious solution is simply to use an elastic wave model for seismic inversion, but because shear waves have a much shorter wavelength than pressure waves, the computational cost can increase by anything from a factor of 20 to 1,000. It is safe to say this is a tall order.
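A rough back-of-the-envelope argument (ours, not from the project) shows where that factor comes from: the grid spacing of an explicit finite difference scheme must resolve the shortest wavelength present, and the time step shrinks with the grid spacing through the CFL condition:

```latex
% Illustrative cost scaling of elastic vs. acoustic finite difference modeling
h \;\propto\; \lambda_{\min} = \frac{v_{\min}}{f_{\max}}, \qquad
\Delta t \;\propto\; h \;\;(\text{CFL}), \qquad
\text{cost} \;\propto\; \frac{1}{h^{3}}\cdot\frac{1}{\Delta t}
\;\Rightarrow\;
\frac{\text{cost}_{\text{elastic}}}{\text{cost}_{\text{acoustic}}} \;\sim\; \left(\frac{v_p}{v_s}\right)^{4}.
```

With v_p/v_s typically between about 2 in hard rock and 5 or more in soft sediments, this ratio alone spans roughly 16 to around 1,000, and the extra wavefield components of the elastic equations push the cost higher still, consistent with the range quoted above.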
Changing the traditional seismic imaging methodology
If elastic-wave-based FWI is going to be possible in the foreseeable future, disruptive technology changes are required. Dr. Gerard Gorman, Principal Investigator of OPESCI, states, “Entering the exascale era of computing, disruptive changes to computer architectures offer many opportunities; however, they also demand disruptive changes in software to achieve the full potential of the new hardware. The fact is that many disruptive changes are already underway in software, ranging from automated code generation to big data, and the question is whether we are creative and agile enough to take advantage of them.”
The first major challenge is to provide the flexibility to explore numerical methods for modeling: for example, quickly developing new finite difference models that solve different approximations to the wave equation with different orders of accuracy, or, much more drastically, moving to unstructured grids and possibly adaptive methods where resolution can be tuned to local properties. In either case, higher-order methods are leveraged to avoid becoming severely memory bound.
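To make this first challenge concrete, here is a minimal NumPy sketch (ours, not OPESCI-FD output) of a 1-D acoustic time step in which the spatial order of accuracy is just a parameter; in a code generation workflow, stencils like this are produced automatically rather than written and re-optimized by hand for every variant.

```python
import numpy as np

# Standard central-difference coefficients for d2u/dx2 at 2nd and 4th order.
LAPLACIAN_COEFFS = {
    2: np.array([1.0, -2.0, 1.0]),
    4: np.array([-1.0 / 12, 4.0 / 3, -5.0 / 2, 4.0 / 3, -1.0 / 12]),
}

def acoustic_step(u_prev, u_curr, c, dx, dt, order=4):
    """One explicit leapfrog step of u_tt = c^2 u_xx at a chosen spatial order."""
    w = LAPLACIAN_COEFFS[order]
    half = len(w) // 2
    lap = np.zeros_like(u_curr)
    for k, coeff in enumerate(w):            # accumulate the stencil terms
        lap[half:-half] += coeff * u_curr[k:len(u_curr) - 2 * half + k]
    lap /= dx ** 2
    return 2.0 * u_curr - u_prev + (c * dt) ** 2 * lap

# Example: propagate a Gaussian pulse on a 1-D grid (c*dt/dx = 0.3, stable).
nx, dx, dt, c = 401, 5.0, 0.001, 1500.0
x = np.arange(nx) * dx
u_prev = np.exp(-((x - 1000.0) / 50.0) ** 2)
u_curr = u_prev.copy()
for _ in range(200):
    u_prev, u_curr = u_curr, acoustic_step(u_prev, u_curr, c, dx, dt, order=4)
```

Swapping `order=2` for `order=4` changes only the stencil coefficients here, but in a hand-optimized production code the same change ripples through vectorization, blocking and halo-exchange logic, which is exactly the burden code generation aims to remove.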
The second challenge is to make the code run fast. Code optimization requires a great deal of specialist knowledge and hard work. Computational efficiency is maximized by exploiting the available parallelism at all levels of the computer architecture: SIMD vectorization at the level of the processing units; thread parallelism at the level of a multi-core shared-memory compute node and many-core accelerators (such as Intel Xeon Phi coprocessors); and message passing with MPI to distribute work across multiple hosts. It is already difficult enough to optimize just one given implementation for a target architecture, but the question arises of how to achieve an acceptable degree of portability across different (and rapidly changing) architectures.
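The coarsest of those levels can be sketched very simply; the following is a schematic (using mpi4py, with a placeholder per-shot kernel invented for illustration) of distributing independent shots across hosts, with the SIMD and threading layers hidden inside the kernel or in generated code:

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_shots = 1000                                 # hypothetical survey size
my_shots = range(rank, n_shots, size)          # round-robin shot distribution

def shot_gradient(shot_id):
    """Placeholder for one shot's forward + adjoint solve; in practice this
    would call a vectorized, threaded stencil kernel (generated or hand-tuned)."""
    return np.zeros((100, 100))                # dummy gradient contribution

local = sum(shot_gradient(s) for s in my_shots)
total = comm.allreduce(local, op=MPI.SUM)      # combine partial gradients across hosts
if rank == 0:
    print("gradient norm:", np.linalg.norm(total))
```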
These two challenges form the two ends of a Chinese finger trap — as you increase the sophistication of your numerical method, the complexity of the software implementation and optimization also rapidly increases.
The usual dogma is that you can have performance, or you can have the flexibility of a high-level language, but you cannot have both. However, a growing community focused on automation, with themes such as domain-specific languages and source code generation, is now challenging this view. An important aspect of this is the notion of multi-layered abstractions and a separation of concerns between those who want to write the mathematics of the problem (e.g., geophysicists) and those who capture parallel patterns and map them onto efficient implementations for different architectures (computer scientists).
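A toy illustration of that separation of concerns, using SymPy rather than the project's own toolchain: the wave equation is written symbolically, the derivatives are replaced by finite differences automatically, and the resulting update stencil is what a lower layer would lower to optimized, architecture-specific code.

```python
import sympy as sp

x, t, h, dt, c = sp.symbols('x t h dt c', positive=True)
u = sp.Function('u')

# The physics, written the way it appears on paper: u_tt = c**2 * u_xx
pde = sp.Eq(u(x, t).diff(t, 2), c**2 * u(x, t).diff(x, 2))

# Replace both derivatives by central finite differences, automatically.
disc = pde.subs({
    u(x, t).diff(t, 2): u(x, t).diff(t, 2).as_finite_difference([t - dt, t, t + dt]),
    u(x, t).diff(x, 2): u(x, t).diff(x, 2).as_finite_difference([x - h, x, x + h]),
})

# Solve for the wavefield at the next time level: this expression is the
# update stencil a lower layer would translate into vectorized, threaded C code.
update = sp.solve(disc, u(x, t + dt))[0]
sp.pprint(sp.simplify(update))
```

The domain specialist edits only the first equation; everything below it is derived.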
FWI Implementation and Software Recommendations
Domain-specific language approaches to seismic imaging needs
OPESCI uses a number of different domain-specific languages (DSLs) to meet changing seismic imaging needs. Whenever possible, open source software is used in the model. “At the highest level of abstraction, application developers should be able to write algorithms in a clear and concise manner, similar to how the algorithm might be written mathematically on paper, while at the lower levels, source-to-source compilers explore a rich implementation space to transform this DSL code into highly optimized code that can be compiled for a target platform. Not only does this strategy result in performance portability and enhanced programmability (and hopefully productivity), but the high-level abstraction makes it possible to apply automatic differentiation tools to the model, allowing more efficient inversion algorithms to be implemented,” states Gerard Gorman. In addition, this approach would provide a route to bridge the divide between domain specialists (often geophysicists) and specialists in parallel programming and software optimization.
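As a toy example of why that high-level abstraction matters for inversion (symbolic differentiation standing in for true automatic differentiation, applied to a deliberately trivial one-parameter model of our own): the gradient code needed by an inversion loop is derived from the misfit rather than hand-written.

```python
import sympy as sp

# Deliberately trivial one-parameter 'model': travel time through one layer
# of thickness z with slowness m, compared against an observed time d_obs.
m, z, d_obs = sp.symbols('m z d_obs', positive=True)
d_pred = m * z                                   # predicted travel time
J = sp.Rational(1, 2) * (d_pred - d_obs) ** 2    # least-squares misfit

# The gradient used by an inversion loop is derived, not hand-written.
grad = sp.diff(J, m)
grad_fn = sp.lambdify((m, z, d_obs), grad, 'numpy')

print(grad)                           # z*(m*z - d_obs)
print(grad_fn(0.5, 1000.0, 480.0))    # 1000*(500 - 480) = 20000.0
```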
Within OPESCI, there are currently two streams of research on DSLs and code generation — one focusing on traditional finite difference methods (code-named OPESCI-FD), and the second on unstructured grid based methods. The vision is that, in the near term, code generation can quickly make an impact on existing RTM and FWI codes that are based entirely on finite differences, while unstructured grid-based methods developed using Firedrake (an automated finite element system developed at Imperial College) are a longer-term objective that will require more re-engineering of existing seismic imaging codes.
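For a flavor of the unstructured-grid stream, here is a minimal Firedrake-style snippet, assuming the standard Firedrake/UFL interface and a made-up Helmholtz-type problem rather than the project's actual seismic formulation; the point is that the weak form is stated at a high level and the low-level assembly kernels are generated for the target mesh and architecture.

```python
# Requires Firedrake (https://www.firedrakeproject.org); illustrative only.
from firedrake import (UnitSquareMesh, FunctionSpace, TrialFunction, TestFunction,
                       Function, Constant, SpatialCoordinate, exp, inner, grad,
                       dx, solve)

mesh = UnitSquareMesh(64, 64)              # stand-in for an unstructured mesh
V = FunctionSpace(mesh, "CG", 2)           # piecewise-quadratic elements

u, v = TrialFunction(V), TestFunction(V)
c = Constant(1500.0)                       # acoustic velocity (illustrative)
omega = Constant(2.0 * 3.14159 * 5.0)      # angular frequency (illustrative)

xx, yy = SpatialCoordinate(mesh)
f = Function(V).interpolate(exp(-100.0 * ((xx - 0.5)**2 + (yy - 0.5)**2)))

# A Helmholtz-type weak form: the assembly kernels are generated and compiled.
a = (inner(grad(u), grad(v)) - (omega / c)**2 * u * v) * dx
L = f * v * dx

uh = Function(V)
solve(a == L, uh)
```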
OPESCI optimizes seismic imaging code
OPESCI is creating the FWI abstraction model and working on efficient optimization of code to minimize the computing time required in FWI seismic imaging. Gorman notes that, even though the OPESCI-FD compiler is still at an early stage of development, it can already generate significantly faster code for both the Intel Xeon processor and the Intel Xeon Phi coprocessor than the reference implementation, which had already been threaded and had received basic hand optimizations.
“Early results indicate that automatically generated code can outperform hand-written code by a specialist, and it can be developed in a fraction of the time, and therefore at a fraction of the cost. This is important because development groups do not normally have the option to change the physics or numerical method (order of accuracy) of their models due to the cost of development. The fact is that HPC developer time is a scarce and expensive resource. The question is not always if your code will scale — but does your development effort scale?” states Gorman.
Gorman indicates that, using source code generation and compiler technologies, they can go far beyond any optimizations that would be feasible to implement by hand. “We opted for revolution rather than evolution, because implementation choices get rooted in legacy codes in a way that makes them difficult to optimize meaningfully — remember, we really need to consider what choices are necessary for a speedup of an order of magnitude or more; a 10 percent speedup here or there just doesn’t cut it. We focus on end-to-end code optimization, changing both the numerical algorithms and the implementation together, so the result is much better suited to getting the most out of the resources on the processor,” adds Gorman.
SENAI CIMATEC optimizes seismic imaging on Latin America’s fastest supercomputer
According to Renato Miceli, Technical Director at SENAI CIMATEC Supercomputing Center for Industry Innovation, “Brazil has a large amount of regulation in terms of environmental risks relating to oil and gas exploration. These risks can be diminished by better seismic imaging and reservoir modeling.”
The BG Group made a significant investment in SENAI CIMATEC, which now runs the fastest supercomputer in Latin America dedicated exclusively to geophysical research. The BG Group’s commitment to the Brazilian Science without Borders program will ensure mobility of researchers at all levels between the international partners and Brazil. “SENAI CIMATEC will be a key enabler in creating the local skill base to support this vision,” states Miceli.
Miceli indicates, “We aid the BG Group because our team does extensive code generation; we have the ability to run experiments and create code that meets their specific needs. For example, by using libraries generated at Imperial College, our team can avoid rewriting hundreds of lines of source code. One key way to gain parallelism is to leverage the libraries developed by Imperial College and SENAI, which already optimize the code. The work we are doing with libraries can directly benefit oil and gas customers.”
Recommendations for future work in oil and gas exploration
Gorman indicates that, while many code optimization issues are being tackled, other areas still need research. “Right now, we are focused on optimizing the compute aspect of FWI. However, you need to remember that FWI involves running thousands of these workloads in parallel in a task farm-type configuration, processing petabytes of data. Resilience and scalability are issues here. So, we are already working to ensure that OPESCI can be used in conjunction with big data frameworks, such as Apache Spark. There are already quite a few groups looking into the feasibility of this, so I wouldn’t be surprised if we saw big changes over the next couple of years in how FWI is carried out.”
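What such an integration might look like, schematically, is a resilient map-reduce over shots; the function below is a hypothetical placeholder, not part of OPESCI or any announced Spark integration:

```python
from pyspark import SparkContext
import numpy as np

sc = SparkContext(appName="fwi-gradient-sketch")

def shot_gradient(shot_id):
    """Hypothetical per-shot workload: read the shot gather, run the forward
    and adjoint solves with generated kernels, return a gradient contribution."""
    return np.zeros((200, 200))             # dummy result for the sketch

shot_ids = list(range(10000))               # thousands of independent shots
gradient = (sc.parallelize(shot_ids, numSlices=500)
              .map(shot_gradient)
              .reduce(lambda a, b: a + b))  # failed tasks are simply re-run by Spark
```

Because each shot is independent, a failed task can be re-executed by the framework, which is the resilience property Gorman refers to.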
Miceli states, “Our Intel PCCs create technology that BG Group can take and plug into their software for seismic imaging, as well as using our code to do image analysis. The BG Group has also been great in encouraging us to consider high-risk/high-reward approaches, such as the use of unstructured meshes to conform to complex geological features. Today this approach may be viewed with some suspicion, but there is a wealth of experience from other fields of science and engineering to draw upon.”
Miceli also notes that “OPESCI is open source software (OSS), and the Intel PCCs are developing library codes that are OSS and available on GitHub — this allows us to share and co-develop code. The libraries are generic, so they can more easily be used to build commercial applications. So, feel free to get in touch as a user or a developer.”
- Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit http://www.intel.com/performance.
Linda Barney is the founder and owner of Barney and Associates, a technical/marketing writing, training and web design firm in Beaverton, OR.
Read Part 1: How Supercomputers aid Oil and Gas Seismic Research