Interpreting Subsurface Seismic Data
Ebook, 661 pages (6 hours)
About this ebook

Interpreting Subsurface Seismic Data presents recent advances in methodologies for seismic imaging and interpretation across multiple applications in geophysics, including exploration, marine geology, and hazards. It provides foundational information for context, focuses on recent advances and future challenges, and offers detailed methodologies for interpreting the increasingly vast quantity of data extracted from seismic volumes.

Organized into three parts covering foundational context, case studies, and future considerations, Interpreting Subsurface Seismic Data offers a holistic view of seismic data interpretation to ensure understanding while also applying cutting-edge technologies. This view makes the book valuable to researchers and students in a variety of geoscience disciplines, including geophysics, hydrocarbon exploration, applied geology, and hazards.

  • Presents advanced seismic detection workflows utilizing cutting-edge technologies
  • Integrates geophysics and geology for a variety of applications, using detailed examples
  • Provides an overview of recent advances in methodologies related to seismic imaging and interpretation
Language: English
Release date: May 27, 2022
ISBN: 9780128196922


    Interpreting Subsurface Seismic Data - Rebecca Bell

    Chapter 1: Introduction

    Rebecca Bell¹, David Iacopini², and Mark Vardy³
    ¹Department of Earth Science and Engineering, Imperial College London, United Kingdom
    ²DISTAR, Università di Napoli Federico II, Napoli, Italy
    ³SAND Geophysics, Southampton, United Kingdom

    1. Brief history of seismic exploration

    1.1 Data acquisition and processing

    1.2 Data interpretation

    2. Book overview

    2.1 Interpretation of seismic data in complex systems

    2.2 Reprocessing of vintage 2D data

    2.3 Quantitative imaging

    3. Future outlook

    References

    1. Brief history of seismic exploration

    1.1. Data acquisition and processing

    Seismic exploration of the Earth has a long history, dating back almost 200 years. Working by generating propagating waves of alternating pressure that cause localized regions of compression and rarefaction, the seismic method is dependent upon the elastic properties of the host medium. With the development of elastic theory in the 17th and 18th centuries, experimentalists such as Mallet (1848, 1851), Mallet and Mallet (1859), Rayleigh (1885), Stoneley (1924), and Love (1927) were able to investigate the elastic properties of the Earth. While these early experiments looked at the gross properties of the Earth using surface or refracted waves, it was not until the 1920s that people such as J.A. Udden considered the idea of using reflections from discrete impedance boundaries to map subsurface structures (Sheriff, 1988). In 1921, William P. Haseman, J. Clarence Karcher, Irving Perrine, and Daniel W. Ohern used a dynamite charge with an early seismograph to collect the first seismic reflection profile by recording reflections from a buried stratigraphic interface in the Vines Branch area of south-central Oklahoma (Dragoset, 2005).

    Seismic imaging is ultimately constrained by the acquisition geometry, and early experiments, such as Vines Branch, suffered from the small number of recording channels causing low signal-to-noise (S/N) ratios. Even when technological advancements permitted multireceiver arrays, it was not until the advent of digital field systems in the 1960s that seismic reflection imaging became a truly useful tool (Sheriff, 1988). The increased computational power allowed the use of statistically developed filters to clean the recorded wavelets (Wiener, 1950), and geographically coincident traces to be stacked at common midpoints (CMP – originally CRP, Common Reflection Point) (Mayne, 1962).
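The S/N benefit of CMP stacking described above can be sketched numerically. This is an illustrative toy example, not a historical workflow: it assumes a synthetic gather whose reflection is already flat (i.e., NMO-corrected) and simply averages the coincident traces.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic CMP gather: the same reflection wavelet recorded on 24 channels
# (assumed already NMO-corrected, so the event is flat across the gather),
# each trace contaminated by independent random noise.
n_traces, n_samples = 24, 500
wavelet = np.zeros(n_samples)
wavelet[200:220] = np.hanning(20)          # a simple 20-sample "reflection"
gather = wavelet + rng.normal(0.0, 0.5, size=(n_traces, n_samples))

# CMP stack: average the geographically coincident traces.
stack = gather.mean(axis=0)

def snr(trace):
    signal = trace[200:220].std()
    noise = trace[300:].std()              # window with no reflection energy
    return signal / noise

# Stacking N traces with independent noise improves S/N by roughly sqrt(N).
print(f"single trace S/N ~ {snr(gather[0]):.2f}, stack S/N ~ {snr(stack):.2f}")
```

The sqrt(N) improvement is why multichannel recording and CMP stacking, once digital systems made them practical, transformed the usefulness of reflection imaging.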

    The very early experiments, such as Vines Branch, although not 3D in the true sense of the word, were acquired in such a manner as to enable the application of 3D processing techniques (referred to as Dip Shooting). However, with the increasing number of channels, and therefore better raw S/N ratios, interpreters moved to using continuous sections rather than an individual cross-spread of geophones for each shot (Sheriff, 1988). While 2D data acquired in this manner is limited to only along-track dips and therefore can only migrate energy correctly in the along-track direction, 3D data volumes contain dipping information for all azimuths. This enables dipping events to be correctly imaged regardless of acquisition direction, and the complex diffraction hyperbolae formed around point sources to be collapsed back to their true location, with the additional benefit of increased S/N (Claerbout, 1992).

    The first cross-spread acquisition experiment can be traced back to 1964 (Galbraith, 2001); however, the early development of true 3D seismic acquisition is shrouded in a thick corporate veil. Dragoset (2005) attributes the first 3D survey to Exxon, in 1967, while the first published trial took place at Bell Lake Field, Lea County, New Mexico, in 1972 (Schneider, 1998, 2001). Today, however, 3D surveys have become the norm, particularly in the marine environment, where the first recorded survey was in the North Sea, in 1975 (Davies et al., 2004). Survey ships are capable of towing 16 streamers or more, each >10 km in length, providing multi-azimuth coverage of >1 km wide swaths (Charron et al., 2022). With the development of sophisticated shot deblending and other processing methods, data density and survey areas have grown rapidly over the past decade, improving the cost/productivity ratio of seismic acquisition and pushing the limits of what it is possible to image.

    This advancement in acquisition methodologies has been matched, if not outpaced, by parallel developments in data processing that leverage the rapid growth and availability of high performance computing. Computationally expensive techniques, such as Reverse Time Migration (RTM) and Full Waveform Inversion (FWI), have now become practically tractable for many datasets (Jones, 2019a) and can explicitly include complex wavefield properties such as loss (e.g., Q-RTM; Zhao et al., 2018) and/or anisotropy (e.g., TTI-PreSDM; Jones, 2019b). The broad application of these advanced processing techniques has significantly changed the complexity and fidelity of subsurface structure that can be imaged using seismic reflection data, providing both challenges and opportunities for subsequent data interpretation.

    1.2. Data interpretation

    Interpreting seismic data is a crucial skill and a necessary step in unraveling subsurface information from the impressive seismic reflection images that can now be achieved. Techniques supporting the interpretation of processed seismic data have evolved alongside the rapid development of seismic imaging methods (2D, 3D, and 4D). Despite the impressive advancements in data manipulation and storage, however, interpreting subsurface data still requires a good balance of geological and geophysical knowledge, coupled with a certain dose of creative interpolation. Seismic interpretation could, in fact, be defined as the thoughtful procedure of separating the continuity and variability of reflections (related to the geologic structure, stratigraphy, fluids, and reservoir fabric) from the recorded seismic wavelet (ideally the pulse of seismic energy defined as the minimum phase of some frequency bandwidth) and from noise of various kinds and image artifacts (Brown, 2011). To constrain and integrate information from the reflective signal, seismic data characterization often requires the combination of borehole and microseismicity data. Prior to the advent of digital data in the 1980s, seismic interpretation was predominantly done on paper. The subsequent replacement of paper by digital data on computer workstations has led to an interpretative approach that increasingly adopts image-processing methods (Liner, 2008). Nowadays, the use of virtual data enhancement (Purves et al., 2016; Rashed and Atef, 2020) and deep learning approaches (Waldeland et al., 2018) is providing the seismic interpreter with more interactive, advanced computational approaches to add to the traditional tool kit.

    Alongside this rapidly developing process, which has seen the evolution of digital tools to visualize, interpret, and manipulate the reflective and diffractive data from the subsurface, there has been a parallel development of digital techniques that have changed our capability to extract and invert petrophysical and geological information from reflection seismic surveys (Chopra and Marfurt, 2005). The introduction of the principles of seismic stratigraphy by Peter Vail and others in 1977 (Vail et al., 1977) provided a framework to define, explore for, and exploit stratigraphic traps at a time when most seismic data were 2D and interpreted in pencil (Claerbout, 1992). Early interactive workstations started with variable-area and then wiggle-trace displays. The use of color bars (eight colors in the late 1970s, with the Genisco providing 16 colors in the mid-1980s; see Chopra and Marfurt, 2005) to map and estimate relative acoustic impedance dramatically changed our ability to apply recursive inversion and extract waveform properties (amplitude, phase, and frequency). Seismic attribute analysis, including acoustic impedance (Lavergne, 1975; Lindseth, 1976, 1979) and complex trace attributes such as the envelope, instantaneous phase, and instantaneous frequency (Taner et al., 1979; Bodine, 1984), became crucial and was routinely applied to map seismic properties from stacked data. The concepts of bright spots and Amplitude Variation with Offset (AVO) were also introduced in the 1980s and 1990s.
    With the assimilation of 3D seismic volumes, which provide data at fine spatial sampling and lead to more accurate 3D representations of subsurface reflectivity, single-trace seismic attributes were advanced by the introduction of horizon dip and azimuth attributes (Daley et al., 1989), and the use of coherence (Bahorich and Farmer, 1995), spectral decomposition (Partyka et al., 1999), as well as wavelet transform and impedance optimization tools, became routinely included in the main software packages conceived for the digital interpretation of seismic data (e.g., Schlumberger, Halliburton). Inversion methods have also improved our prediction of petrophysical properties (see Chapter 11, Mazzotti and Aleardi, 2022). In the same period, with increasing computational demand leading most major oil companies to enter the supercomputer world, time and depth migration techniques became increasingly sophisticated, producing 3D depth-migrated data (Biondi, 2006).
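The complex-trace attributes mentioned above (Taner et al., 1979) can be computed in a few lines via the Hilbert transform. The sketch below is illustrative only: the synthetic trace, its 30 Hz carrier, and the window lengths are assumptions, not values from the text.

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic trace: a Gaussian-tapered 30 Hz burst sampled at 1 ms.
dt = 0.001
t = np.arange(1000) * dt
trace = np.exp(-((t - 0.5) / 0.05) ** 2) * np.cos(2 * np.pi * 30.0 * (t - 0.5))

# The analytic signal x(t) + i*H[x](t) underpins the classic complex-trace
# attributes: envelope (reflection strength), instantaneous phase, and
# instantaneous frequency.
analytic = hilbert(trace)
envelope = np.abs(analytic)                         # reflection strength
inst_phase = np.unwrap(np.angle(analytic))          # instantaneous phase (rad)
inst_freq = np.diff(inst_phase) / (2 * np.pi * dt)  # instantaneous frequency (Hz)

# Near the centre of the burst the instantaneous frequency recovers ~30 Hz.
print(f"peak envelope = {envelope.max():.2f}")
print(f"inst. frequency at burst centre = {inst_freq[500]:.1f} Hz")
```

These single-trace attributes are the building blocks that later multitrace attributes (coherence, spectral decomposition) extended across 3D volumes.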

    By the 21st century, the exponentially increasing demand for interpretation and integrated data visualization forced the subsurface oil and gas industry to adopt methodologies to improve 3D visualization (color blending, opacity analysis, and geobody construction), and led interpreters to routinely cluster alternative seismic measurements as attributes (Claerbout, 2014; Strecker and Uden, 2002) using artificial and probabilistic neural network approaches. Today these are gearing toward deep and machine learning approaches, with the aim of better risking a given prospect or characterizing a reservoir (Van der Baan and Jutten, 2000).

    2. Book overview

    Within this book, we seek to provide an overview of how seismic reflection data is currently being used to image the subsurface, combining nine papers that delve into different aspects of data processing, reprocessing, and/or interpretation. These papers are collected into three sections, each dealing with a different aspect of the overall process.

    2.1. Interpretation of seismic data in complex systems

    In the first section of this book, we look at examples of the use of seismic reflection data not only to image geological features but to better understand fundamental geological processes. Seismic reflection data, and in particular 3D seismic reflection data, have revolutionized our understanding of sedimentary systems and how they have been affected by climate and tectonics. Challenges still remain: interpreting subsurface fluids in seismic data (e.g., Fawad et al., 2020); understanding deposition from a source-to-sink perspective (e.g., Watkins et al., 2019) as opposed to a focus only on the sink; and the high seismic velocity and density of carbonates, which still cause difficulties in their interpretation (e.g., Hendry et al., 2021). On a more philosophical level, we must also accept that seismic interpretation conducted by human interpreters is subjective and will be affected by our own inherent biases (e.g., Bond et al., 2007). Only once we acknowledge these biases can we take steps to consider uncertainty in our interpretations.

    Cook and Portnov give us an introduction to natural gas hydrate systems and the famous 'bottom simulating reflection' (BSR) and explain cases where the BSR breaks the rules of the original definition by Shipley et al. (1979). The BSR is another example of the great importance of seismic data: it was hypothesized to relate to free gas in 1974 before being confirmed by drilling two decades later, in 1995. Dissociating gas hydrates may play a critical role in climate change and in acidification and oxygen depletion in the oceans. This contribution explains why it is still challenging to quantify gas hydrate using seismic data and describes the prospecting tools currently available to identify natural gas hydrate systems in seismic reflection data.

    A potential cause of gas hydrate dissociation is slope failure, which can also lead to destruction of seabed infrastructure and tsunamis. Scarselli et al. use seismic reflection data to reveal slope collapse systems in the Exmouth Plateau, NW Australia, and discuss their potential as petroleum systems. This contribution provides a masterclass in the interpretation and characterization of slope failure deposits and their sources using 3D seismic reflection images and root mean square (RMS) amplitude attributes. Modern-day analogs are used to put the features interpreted from seismic data into context for the reader (Fig. 1.1). The paper argues that thick, stacked talus wedges with reservoir potential in the hanging walls of normal faults, whose slip in earthquakes likely triggers footwall slope failure, may diversify potential play types in rifted margins.
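The RMS amplitude attribute used by Scarselli et al. can be illustrated with a minimal windowed-RMS sketch. Everything here (the synthetic section, the "bright body", and the window length) is an assumption for illustration, not data from the chapter.

```python
import numpy as np

def rms_amplitude(traces, win):
    """Windowed RMS amplitude along the time axis of a (traces, samples) array."""
    squared = traces.astype(float) ** 2
    kernel = np.ones(win) / win
    # Sliding mean of the squared amplitudes, then square root.
    smoothed = np.apply_along_axis(
        lambda x: np.convolve(x, kernel, mode="same"), 1, squared
    )
    return np.sqrt(smoothed)

rng = np.random.default_rng(1)
section = rng.normal(0, 0.1, size=(50, 400))            # quiet background
section[20:30, 150:200] += rng.normal(0, 1.0, (10, 50)) # bright "talus wedge"

rms = rms_amplitude(section, win=21)
print(f"background RMS ~ {rms[:10, :100].mean():.2f}, "
      f"bright body RMS ~ {rms[20:30, 160:190].mean():.2f}")
```

A high-amplitude package stands out sharply in the RMS map against quieter background reflectivity, which is why the attribute is a standard tool for mapping deposits such as slope failure bodies.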

    Alvarenga et al. take on the notoriously difficult challenge of interpreting presalt carbonates. These deposits in areas like the Campos and Santos basins, South Atlantic, have not been well imaged by 3D seismic volumes, so their interpretation relies on grids of 2D seismic profiles. In this study, Alvarenga et al. used core data to develop a supervised seismic facies analysis to identify carbonates in the Campos basin, utilizing RMS amplitude, cosine of phase, and relative acoustic impedance. Their analysis results in the identification of 10 carbonate geobodies mapped across the 2D seismic network. They find that some of these carbonates are likely in situ and others resedimented. All of the likely resedimented carbonates occur during a time of high tectonic activity, supporting a tectonic trigger for collapse and resedimentation.

    One of the first things we learn as seismic interpreters is that there is considerable uncertainty in our interpretations, and interpreters must employ heuristics (rules of thumb). Alcalde and Bond, using powerful everyday examples, explain that these heuristics lead to unwanted, and often unknown, cognitive biases that influence our interpretations. They describe four of the key biases in seismic interpretation: (1) anchoring, when fixation on an initial concept or idea affects all subsequent decisions; (2) availability bias, when easily retrievable information is used more frequently in decision-making; (3) herding, when a group's decision moves toward the views of a dominant individual; and (4) framing bias, when the way the data is presented affects decision-making. Alcalde and Bond discuss various strategies for mitigating these biases.

    Figure 1.1  Example of a modern seismic interpretation study combining reflectivity data, depth horizons, seismic attributes, and modern analogs. Cf. Fig. 12 from Scarselli et al.

    2.2. Reprocessing of vintage 2D data

    The second section of the book deals with the reprocessing of public vintage reflection data using modern processing approaches, to extract further subsurface geological information in geographic areas where new data cannot be acquired or where recent industry data are not publicly accessible. The three chapters focus on distinct targets and therefore propose distinct processing workflows, which include: producing digital SEG-Y data from public paper or A0 image formats (Blake and Hewlett, 1993); modern broadband processing sequences (Yilmaz, 2001); velocity model building using both inversion techniques (Claerbout, 1992) and grid-based tomography; and wave equation datuming (WED) to correct and attenuate noise in complex geological settings.

    Conti et al. propose a workflow for digitizing vintage paper or image seismic reflection data into SEG-Y format using free MATLAB scripts, show the potential of these newly digitized data to improve our understanding of the eastern Tyrrhenian margin, and discuss the persistence of fault activity in the central-eastern sector across the Plio-Pleistocene sequences.

    On a different matter, Brancatelli et al. propose an interesting processing workflow to depth-convert a seismic profile crossing complex structure between Puglia and Albania over the Otranto channel. The proposed processing experiment (Fig. 1.2) includes the full sequence (editing, deghosting, multiple attenuation, Q correction, deconvolution, prestack time migration) and a depth migration built on a velocity-depth model derived using the coherence inversion technique (Landa et al., 1988; Yilmaz, 2001), with two refinement loops using layer-based and grid-based tomography. The approach shows clear improvements in imaging diffuse gas and the carbonate platform.

    Figure 1.2  Example of reprocessed seismic data. Modified from Fig. 12 of Brancatelli et al.

    Giustiniani et al., instead, propose a processing workflow that uses Wave Equation Datuming (Barison et al., 2011; Berryhill, 1979, 1984) to correct and attenuate perturbations affecting energy propagation by removing time shifts associated with topography variations as well as with near-surface velocity changes. The technique shows very good results both for land investigations of deep geological structure and for high-resolution marine data focusing on shallow seismic imaging in areas with irregular topography and important lithological heterogeneities. In geothermal areas, WED improved the S/N ratio and preserved reflection amplitudes, differentiating distinct seismic facies and offering a better image of the variable lithology and petrophysical properties.

    2.3. Quantitative imaging

    The final section of this book includes three chapters that discuss various approaches for deriving a quantitative characterization of ground conditions from seismic data. Together, these chapters cover a broad range of quantitative imaging approaches: from long-standing intercept-gradient cross-plots (Aki and Richards, 1980), through petrophysics-based inversions that condense the classical two-step inversion/prediction approach into a single (often probabilistic) prediction (e.g., de Figueiredo et al., 2018), to full waveform methods that are rapidly becoming the go-to technique as computational power increases (e.g., Zimmer, 2019).

    Connolly provides a detailed overview of Amplitude versus Offset (AVO) theory, including multiple practical examples. Starting with the fundamental equations, the impacts of measurement errors and anisotropy are discussed in detail, forming an excellent grounding in the potential uncertainty of any predictions, a consistent thread throughout the chapter. With these uncertainties in mind, the application of intercept-gradient cross-plots and scaled reflectivity projections is presented alongside extended elastic impedance, with the practical strengths and weaknesses of each approach outlined in such a way as to give practitioners confidence when applying these useful tools to future datasets.
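The intercept-gradient idea Connolly builds on can be sketched as follows: under the two-term Shuey approximation, R(theta) ~ A + B*sin^2(theta), so the intercept A and gradient B follow from a linear least-squares fit to angle-dependent amplitude picks. The angles, amplitudes, and noise level below are illustrative assumptions, not values from the chapter.

```python
import numpy as np

# Amplitude picks at several incidence angles for one reflector, generated
# from assumed "true" intercept/gradient values plus measurement noise.
theta = np.radians([5, 10, 15, 20, 25, 30])
A_true, B_true = 0.08, -0.20               # illustrative values only
rng = np.random.default_rng(2)
picks = A_true + B_true * np.sin(theta) ** 2 + rng.normal(0, 0.002, theta.size)

# Design matrix [1, sin^2(theta)] -> solve for intercept A and gradient B.
G = np.column_stack([np.ones_like(theta), np.sin(theta) ** 2])
(A_est, B_est), *_ = np.linalg.lstsq(G, picks, rcond=None)

print(f"intercept A ~ {A_est:.3f}, gradient B ~ {B_est:.3f}")
```

Plotting A against B for many picks gives the classic intercept-gradient cross-plot, in which anomalous fluid responses deviate from the background trend; the measurement-error sensitivity Connolly emphasizes enters directly through the noise on the picks.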

    Mazzotti and Aleardi discuss the application of Amplitude versus Angle (AVA) inversions for petrophysical characterization. The problem is cast as a seismic-petrophysical inversion, whereby rock physics relations are built directly into the AVA inversion rather than being used as a secondary step after traditional elastic inversion, and two case studies are presented: one from the Nile Delta and one from a terrestrial dataset. Three inversion approaches of varying complexity are outlined, whereby fractional porosity, shaliness, and water saturation are predicted directly from the seismic data for target intervals, allowing the occurrence of shale, brine sand, and gas sand to be mapped (e.g., Fig. 1.3). Cast within a probabilistic framework, these inversions have the benefit of capturing prediction uncertainty that can be carried forward into reservoir net pay evaluation.

    Figure 1.3  Prediction of gas sand, brine sand, and shale. Cf. Mazzotti and Aleardi.

    Agudo et al. provide an in-depth overview of practical Full Waveform Inversion (FWI) applied to a 3D field dataset. The complexities of cycle-skipping and of generating an appropriate starting model are explored, along with the impact of structure at both shallow and intermediate depths above the target reservoir. The benefits of FWI in terms of a high-fidelity velocity model and reflectivity imaging are presented, along with a detailed discussion of future applications of FWI, both within geoscience and beyond, as well as the role(s) that machine learning might play over the coming years.

    3. Future outlook

    The way we interpret seismic reflection data has changed considerably over the past 50 years, evolving from a discipline that relied on drawing on paper, to mapping horizons and faults digitally on a workstation, to the use of image-processing techniques (i.e., single and then multiple seismic attributes) that allow us to reveal geological features before we interpret them. In effect, seismic interpretation has become 'data driven and interpreter guided' (Paton and Henderson, 2015). The future of seismic interpretation will almost certainly involve developing even closer ties between geology, geophysics, and data science. Artificial intelligence and machine learning promise exciting opportunities to speed up the conventional parts of seismic interpretation (e.g., horizon and fault mapping) and allow much larger volumes of seismic data to be interpreted in reasonable timeframes (e.g., mapping of mass-transport deposits by Kumar and Sain, 2020, and mapping of faults and seismic facies by Wrona et al., 2021). Rather than making seismic interpreters redundant, this would free up time to place these interpretations into better geological context and deepen our understanding of the Earth. Artificial intelligence excels at finding patterns across multiple datasets, beyond the scope of what we can do as humans, opening the opportunity to identify geological information that would otherwise have gone undetected.

    Similarly, the future of interpretation software will depend on the development of graphics processing units (GPUs), or their future successors, as they become integrated into most commercial interpretation software products, rendering fast, truly interactive interpretation of large, multiattribute data volumes. Our capability to manipulate big data will increasingly require High Performance Computing (HPC), potentially simplifying the shift toward the full use of multicomponent data and opening the door to time-lapse data interpretation, which some 10 years ago was considered too memory- and computationally demanding (Shea, 2020).

    One of the current barriers to the widespread use of data science and machine learning techniques in seismic interpretation is knowledge. Seismic interpreters, for the most part, have backgrounds in geoscience, have relied on commercial software for their interpretations, and do not have the data science background to code and train machine learning algorithms themselves. This situation is changing, with successful collaborations between geoscientists and data scientists, upskilling of current geoscientists in the data science arena (e.g., the excellent hackathons run by Agile Scientific: https://events.agilescientific.com), improved training for undergraduate and Masters-level geoscience students in computation and data science, and companies working to integrate machine learning features into their software.

    Perhaps some of the most exciting developments, however, are in the open-source coding arena. GitHub is a wonderful resource for open-source codes and tutorials for geoscientists learning data science skills (e.g., Hall, 2018; also see https://wiki.seg.org/wiki/Geophysical_tutorials). One of the limitations to the use of codes like these is the vast size of SEG-Y volumes, which leads to memory limitations when desktop PCs or laptops are used rather than HPC facilities. We know of some studies in development to provide a cloud repository for seismic data that can be queried to request pieces of seismic data when needed, without having to clog up memory on desktop computers and laptops (e.g., oneseismic from Equinor: https://github.com/equinor/oneseismic). A frustration of many working with seismic data is the difficulty of finding out what datasets exist, who owns them, and whether they are open source. Some countries have excellent data repositories (e.g., Geoscience Australia; see Scarselli et al.), but this is not universally the case. In a future where data science is used to process vast volumes of seismic data, some kind of global cloud storage where seismic data can be queried will likely be required.

    Finally, in a world where we are moving away from classic oil and gas exploration toward a low-carbon energy future, one could ask: where does this leave seismic reflection data? Much of our existing seismic reflection data, particularly 3D, comes from oil and gas exploration, with a smaller proportion coming from academia. Many sectors of the low-carbon energy industry will require or benefit from high-resolution images of the subsurface. This includes characterizing the subsurface to site wind turbines (e.g., Abubakar et al., 2021). Geothermal energy production will benefit from knowledge of fault networks, which can be provided by seismic reflection data (e.g., Siler et al., 2019). Perhaps most obviously, carbon sequestration and storage (CSS) requires much of the same data and analysis techniques as oil and gas exploration, just in reverse: locating reservoirs and traps to keep fluids in the ground rather than extracting them (e.g., Alcalde et al., 2014). As these industries grow and develop, we anticipate their appetite for seismic data and interpretation will also grow. One thing is for sure: the interpretation of seismic data is likely to undergo a revolution in the next decade, and this book provides a starting point for those interested in the current state of the art of the discipline.

    References

    1. Abubakar A, Juncker Brædstrup M, Di H, Diaz A.T, Freeman S, Hviid S, Karkov K.H, Kriplani S, Manikani S, Salun G, Zhao T. Deep learning applications for wind farms site characterization and monitoring. In:  First International Meeting for Applied Geoscience & Energy, Expanded Abstracts . September 2021:3009–3013.

    2. Aki K, Richards P.G.  Quantitative Seismology: Theory and Methods . W. H. Freeman and Co; 1980.

    3. Alcalde J, Marzán I, Saura E, Martí D, Ayarza P, Juhlin C, Pérez-Estaún A, Carbonell R. 3D geological characterization of the Hontomín CO2 storage site, Spain: multidisciplinary approach from seismic, well-log and regional data.  Tectonophysics . 2014;627:6–25.

    4. Bahorich M.S, Farmer S.L. 3D seismic discontinuity for faults and stratigraphic features: the coherence cube. In:  65th Annual International Meeting, SEG Expanded Abstracts . 1995:93–96.

    5. Barison E, Brancatelli G, Nicolich R, Accaino F, Giustiniani M, Tinivella U. Wave equation datuming to marine OBS data and to land high resolution seismic profiling.  J. Appl. Geophys.  2011;73:267–277. doi: 10.1016/j.jappgeo.2011.01.009.

    6. Berryhill J.R. Wave-equation datuming.  Geophysics . 1979;44:1329–1344. doi: 10.1190/1.1441010.

    7. Berryhill J.R. Wave-equation datuming before stack.  Geophysics . 1984;49:2064–2066.

    8. Biondi B.  3D Seismic Imaging .  SEG Investigations in Geophysics Series No. 14 . 2006. ISBN 9781560801375.

    9. Blake N, Hewlett C. Digital information recovery from paper seismic sections for workstation loading.  Jeofizik . 1993;7:3–14.

    10. Bodine J.H. Waveform analysis with seismic attributes. In:  54th annual international meeting, SEG Expanded Abstracts . 1984:505–509.

    11. Bond C.E, Gibbs A.D, Shipton Z.K, Jones S. What do you think this is? Conceptual uncertainty in geoscience interpretation.  GSA Today . 2007;17(11):4.

    12. Brown A.  Interpretation of Three-Dimensional Seismic Data . seventh ed.  SEG Investigations in Geophysics . vol. 9. 2011.

    13. Charron P, L'Arvor E, Fasterling J, Richard G. Super-sparse marine 3D: a game changer for seismic exploration.  Lead. Edge . 2022:19–26.

    14. Chopra S, Marfurt K. Seismic attributes – a historical perspective.  Geophysics . 2005;70:3SO–28SO.

    15. Claerbout J.F.  Earth Soundings Analysis: Processing versus Inversion . Blackwell Scientific Publications; 1992.

    16. Claerbout J.  Geophysical Image Estimation by Example . Lulu.com; 2014.

    17. Daley R.M, Gevers E.C.A, Stampfli G.M, Davies D.J, Gastaldi C.N, Ruijtenberg P.A, Vermeer G.J.O. Dip and azimuth displays for 3-D seismic interpretation.  First Break
