This image shows an array of geodetic instruments at Earth's surface and the fault activity modeled beneath them. Yellow colors indicate the highest speeds of slippage between plates along the San Andreas Fault, reddish colors represent slower slip, and bluish colors indicate slippage at velocities close to the long-term slip rate of the San Andreas Fault. The dark color marks a portion of the fault where the velocity is so small that it appears completely locked. Image: Sylvain Barbot/Caltech
For those who study earthquakes, one major challenge has
been trying to understand all the physics of a fault—both during an earthquake
and at times of “rest”—in order to know more about how a particular
region may behave in the future. Now, researchers at the California Institute
of Technology (Caltech) have developed the first computer model of an
earthquake-producing fault segment that reproduces, in a single physical
framework, the available observations of both the fault’s seismic (fast) and
aseismic (slow) behavior.
“Our study describes a methodology to assimilate
geologic, seismologic, and geodetic data surrounding a seismic fault to form a
physical model of the cycle of earthquakes that has predictive power,”
says Sylvain Barbot, a postdoctoral scholar in geology at Caltech and lead
author of the study.
A paper describing their model—the result of a Caltech
Tectonics Observatory (TO) collaborative study by geologists and geophysicists
from the Institute’s Division of Geological and Planetary Sciences and
engineers from the Division of Engineering and Applied Science—appears in Science.
“Previous research has mostly either concentrated on
the dynamic rupture that produces ground shaking or on the long periods between
earthquakes, which are characterized by slow tectonic loading and associated
slow motions—but not on both at the same time,” explains study coauthor
Nadia Lapusta, professor of mechanical engineering and geophysics at Caltech.
Her research group developed the numerical methods used in making the new
model. “In our study, we model the entire history of an
earthquake-producing fault and the interaction between the fast and slow
deformation phases.”
Using previous observations and laboratory findings, the
team—which also included coauthor Jean-Philippe Avouac, director of the
TO—modeled an active region of the San Andreas Fault called the Parkfield
segment. Located in central California,
Parkfield produces magnitude-6 earthquakes every 20 years on average. They
successfully created a series of earthquakes (ranging from magnitude 2 to 6)
within the computer model, producing fault slip before, during, and after the
earthquakes that closely matched the behavior observed in the past fifty years.
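To give a sense, in very reduced form, of what such a simulation does, the sketch below integrates a single "spring-slider": one fault patch loaded elastically at the plate rate and governed by a standard rate-and-state friction law with radiation damping. It is an illustration only; the Caltech model resolves slip on a full three-dimensional fault segment, and every parameter value here (a, b, Dc, stiffness, normal stress, loading rate) is a generic textbook number chosen for demonstration, not a value from the Parkfield study.

```python
# Minimal quasi-dynamic spring-slider with rate-and-state friction (aging law).
# Illustrative sketch only: a single slider standing in for a fault patch,
# with generic parameter values, not those of the Parkfield model.
import numpy as np
from scipy.integrate import solve_ivp

# Frictional and elastic parameters (assumed, order-of-magnitude values)
a, b  = 0.010, 0.015       # rate-and-state constants (b > a: velocity-weakening)
Dc    = 0.01               # characteristic slip distance [m]
V0    = 1e-6               # reference slip rate [m/s]
sigma = 50e6               # effective normal stress [Pa]
Vpl   = 1e-9               # tectonic loading rate, ~30 mm/yr [m/s]
k     = 1e7                # spring stiffness [Pa/m], below sigma*(b-a)/Dc -> stick-slip
eta   = 30e9 / (2*3000.0)  # radiation-damping coefficient G/(2*cs) [Pa s/m]

def rhs(t, y):
    """y = [log(V), theta]; returns time derivatives of slip rate and state."""
    V, theta = np.exp(y[0]), y[1]
    dtheta = 1.0 - V*theta/Dc                              # aging-law state evolution
    # Elastic loading rate balanced against frictional strengthening and damping:
    dV = (k*(Vpl - V) - sigma*b*dtheta/theta) / (sigma*a/V + eta)
    return [dV/V, dtheta]

# Start slightly off steady state and integrate over several seismic cycles (~300 yr)
y0 = [np.log(Vpl), 0.9*Dc/Vpl]
sol = solve_ivp(rhs, [0.0, 300*3.15e7], y0, method="LSODA", rtol=1e-8, atol=1e-10)

t_yr = sol.t/3.15e7
V = np.exp(sol.y[0])
events = np.flatnonzero((V[1:] > 1e-3) & (V[:-1] <= 1e-3))  # onsets of fast slip
print(f"{len(events)} simulated stick-slip events in {t_yr[-1]:.0f} yr; "
      f"peak slip rate {V.max():.2e} m/s")
```

Because the spring stiffness is below the critical value sigma*(b - a)/Dc, the slider spontaneously alternates between long stretches of slow, nearly locked creep and brief bursts of fast slip, the same interplay of slow and fast deformation described above, only without the spatial complexity of a real fault.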
“Our model explains some aspects of the seismic
cycle at Parkfield that had eluded us, such as what causes changes in the
amount of time between significant earthquakes and the jump in location where
earthquakes nucleate, or begin,” says Barbot.
The paper also demonstrates that a physical model of
fault-slip evolution, based on laboratory experiments that measure how rock
materials deform in the fault core, can explain many aspects of the earthquake
cycle—and does so on a range of time scales. “Earthquake science is on the
verge of building models that are based on the actual response of the rock
materials as measured in the lab—models that can be tailored to reproduce a
broad range of available observations for a given region,” says
Lapusta. “This implies we are getting closer to understanding the
physical laws that govern how earthquakes nucleate, propagate, and
arrest.”
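For readers who want the form of such a laboratory-derived law: the description most commonly used in fault-slip models of this kind, and the one assumed in the sketch above, is rate-and-state friction, in which the friction coefficient depends on the slip rate V and a state variable theta that tracks the evolving fault contact:

\[
\mu(V,\theta) = \mu_0 + a \ln\frac{V}{V_0} + b \ln\frac{\theta V_0}{D_c},
\qquad
\frac{d\theta}{dt} = 1 - \frac{V\theta}{D_c},
\]

where mu_0, V_0, a, b, and D_c are constants measured in rock-friction experiments. Fault patches with b > a weaken as slip accelerates and can host earthquakes, while patches with a > b tend to creep aseismically. The article does not spell out the exact constitutive law used in the published model, so the equation is offered only as representative of the laboratory-based laws it describes.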
She says that they may be able to use models much like
the one described in the Science
paper to forecast the range of potential earthquakes on a fault segment, which
could be used to further assess seismic hazard and improve building designs.
Avouac agrees. “Currently, seismic hazard studies
rely on what is known about past earthquakes,” he says. “However, the
relatively short recorded history may not be representative of all
possibilities, especially rare extreme events. This gap can be filled with
physical models that can be continuously improved as we learn more about
earthquakes and laws that govern them.”
“As computational resources and
methods improve, dynamic simulations of even more realistic earthquake
scenarios, with full account for dynamic interactions among faults, will be
possible,” adds Barbot.