Gregory Beroza, Stanford University
Gregory Beroza is the Wayne Loel Professor of Earth Science at Stanford University. His interest is in analyzing seismograms to understand how earthquakes work and to quantify the hazards they pose. He studies earthquake source processes for shallow earthquakes, intermediate-depth earthquakes, induced earthquakes, and slow earthquakes. He leads the Earthquake Seismology Group, which improves earthquake monitoring in all settings by applying data-mining and machine-learning techniques to large volumes of continuous seismic waveform data. For the past 12 years, he has been Co-Director of the Southern California Earthquake Center. He was elected a member of the National Academy of Sciences in 2022.
Abstract: Recent advances in deep learning have led to "deep" earthquake catalogs that typically contain an order of magnitude more earthquakes than those found by conventional processing. This increase is attributable to the ability of neural-network models to identify wave arrivals in noisy data more effectively than conventional methods, their effectiveness at picking S-wave arrivals, and the fact that Gutenberg-Richter statistics imply a large population of events detectable near the detection threshold. The higher information density of deep catalogs brings the potential to resolve fault structures in far greater detail and to discern new relationships among earthquakes; however, it also raises some interesting challenges. Deep catalogs are difficult to explore: how can we extract new understanding from them? Exploring geometric and stress-based relationships requires higher-order information, such as focal mechanisms and stress drops, for large populations of small earthquakes: how can we obtain it given that small earthquakes generate weak signals? The detection threshold will vary in time and space: how do we document that variability, and how do we account for it in interpretations? Despite finding many more earthquakes, machine-learning models perform inconsistently: how do we overcome these inconsistencies? Finally, machine-learning models are developing rapidly, so the best available methods can be expected to keep improving: how do we stay current with the state of the art? In this talk I will pose these questions, attempt to address them with examples from real-world data, and try to anticipate developments that improved sensor technology will enable.
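As a minimal illustration of the Gutenberg-Richter point above, the sketch below (Python; the function name catalog_growth_factor and the default b-value of 1 are illustrative assumptions, not material from the talk) works out why lowering the completeness magnitude by one unit yields roughly a tenfold larger catalog.

```python
# Gutenberg-Richter law: log10 N(>= M) = a - b*M, with b ~ 1 for most
# tectonic seismicity, so the cumulative event count grows roughly
# tenfold for each unit decrease in magnitude.

def catalog_growth_factor(mc_old: float, mc_new: float, b: float = 1.0) -> float:
    """Expected ratio of catalog sizes when the magnitude of completeness
    drops from mc_old to mc_new: N_new / N_old = 10 ** (b * (mc_old - mc_new))."""
    return 10.0 ** (b * (mc_old - mc_new))

# Lowering the detection threshold by one magnitude unit gives ~10x more
# events, consistent with the order-of-magnitude growth of deep catalogs.
print(catalog_growth_factor(mc_old=1.5, mc_new=0.5))  # -> 10.0
```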
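One common way to document the space-time variability of the detection threshold raised above is to estimate the magnitude of completeness, Mc, in sliding windows; a sketch using the standard maximum-curvature method follows (NumPy assumed; the function names and window sizes are hypothetical choices for illustration, not the speaker's method).

```python
import numpy as np

def mc_max_curvature(magnitudes, bin_width=0.1):
    """Estimate the magnitude of completeness as the peak of the
    non-cumulative frequency-magnitude distribution (maximum-curvature
    method; in practice a small positive correction is often added)."""
    m = np.asarray(magnitudes, dtype=float)
    edges = np.arange(m.min(), m.max() + bin_width, bin_width)
    counts, _ = np.histogram(m, bins=edges)
    return edges[np.argmax(counts)] + 0.5 * bin_width  # center of modal bin

def mc_time_series(times, magnitudes, window=500, step=100):
    """Mc in overlapping windows of `window` events advanced by `step`
    events, to track how the detection threshold drifts over time."""
    return [
        (times[i + window - 1], mc_max_curvature(magnitudes[i:i + window]))
        for i in range(0, len(magnitudes) - window + 1, step)
    ]
```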
Mar. 22, 10:00 AM
Location: D3 Lecture Hall