Forensic Data Processing
by
Joe Dellinger
Our seismic data are a kind of digital palimpsest: a manuscript written on imperfectly erased, reused paper that contains multiple overlapping layers of writing. Each layer of writing has its own story; it may be fresh and distinct, easy to read, or broken up into fragments and faded into near invisibility. In our processing we typically pay the most attention to the data layer that we plan to use for imaging, and ignore the others to the extent that we can get away with doing so. The deeper layers become “noise”.

However, the better we can understand the various layers of the data, the better we can turn each layer into either additional useful signal or structured (i.e. predictable) noise. Preserving the structure of the noise is important: it allows us to do better, possibly much better, than we could by treating it as Gaussian random noise. For random noise the best we can typically do is stacking. While stacking is a powerful tool, its noise-suppression abilities are effective only up to a point. Each incremental increase in the stack size N costs the same to acquire, yet yields ever less S/N improvement, since S/N grows only as the square root of N. Even worse, noise in real data often contains statistical outliers that overwhelm the noise-suppression power of the stack as N becomes large.

We can damage our data at every step of the process, from acquisition to final delivered product. If you are not checking for problems, you may be unaware that anything is amiss. The next game-changing improvement in our ability to guide business decisions by producing higher-quality Earth images may only happen if we can treat our seismic data with greater scientific rigor than has been standard practice. The goal of this book is to teach you the skills that you will need to do that.
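The stacking argument above can be made concrete with a small numerical experiment. The Python sketch below is not taken from the book; the Cauchy noise model, the median stack, and all parameter values are illustrative assumptions. It stacks N synthetic traces and compares well-behaved Gaussian noise, where S/N grows roughly as the square root of N, with heavy-tailed outlier noise, where a straight mean stack barely improves and only a robust alternative such as a median stack recovers some of the benefit.

    # A minimal sketch (not code from the book; all parameter choices are
    # illustrative) of the stacking argument above: with independent Gaussian
    # noise, the S/N of an N-fold mean stack grows roughly as sqrt(N); with
    # heavy-tailed (Cauchy) "outlier" noise, the mean stack gains almost
    # nothing, while a robust median stack still improves.
    import numpy as np

    rng = np.random.default_rng(0)
    nsamp = 2000
    signal = np.sin(np.linspace(0.0, 8.0 * np.pi, nsamp))  # stand-in signal trace

    def snr(stack):
        """Amplitude S/N of a stacked trace, measured against the known signal."""
        return np.std(signal) / np.std(stack - signal)

    for n in (1, 4, 16, 64, 256):
        gauss = signal + rng.standard_normal((n, nsamp))    # well-behaved noise
        cauchy = signal + rng.standard_cauchy((n, nsamp))   # outlier-ridden noise
        print(f"N={n:4d}  Gaussian mean stack S/N: {snr(gauss.mean(axis=0)):7.2f}  "
              f"Cauchy mean stack S/N: {snr(cauchy.mean(axis=0)):6.2f}  "
              f"Cauchy median stack S/N: {snr(np.median(cauchy, axis=0)):6.2f}")

Running this should show the Gaussian column climbing steadily with N while the Cauchy mean-stack column stalls; that is the sense in which outliers can dominate the noise-suppression power of the stack.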
Publication Date: September 24, 2024