
Coding and Decoding: Seismic Data: The Concept of Multishooting
Ebook: 1,281 pages (19 hours)

About this ebook

Coding and Decoding Seismic Data: The Concept of Multishooting, Volume One, Second Edition, offers a thorough investigation of modern techniques for collecting, simulating, and processing multishooting data. Currently, seismic surveys are acquired as a sequential operation in which shots are fired separately, one after the other. The cost of performing several shots simultaneously is almost identical to that of one shot; thus, the benefits of using the multishooting approach for acquiring and simulating seismic surveys are enormous.

By using this approach, the longstanding problem of simulating a three-dimensional seismic survey can be reduced to a matter of weeks. Providing both theoretical and practical explanations of the multishooting approach, including case histories, this book is an essential resource for exploration geophysicists and practicing seismologists.

  • Investigates how to collect, simulate, and process multishooting data
  • Addresses the improvements in seismic characterization and resolution that can be expected from multishooting data
  • Provides information for the oil and gas exploration and production business that will influence day-to-day surveying techniques
  • Covers robust decoding methods of undetermined mixtures, nonlinear decoding, the use of constraints in decoding processes, and nonlinear imaging of undecoded data
  • Includes access to a companion site with answers to questions posed in the book
Language: English
Release date: December 7, 2017
ISBN: 9780128111116
Author

Luc T. Ikelle

Dr. Luc Ikelle is a Professor in Geology and Geophysics at Texas A&M University. He received his PhD in Geophysics from Paris 7 University in 1986 and has since cultivated expertise in seismic data acquisition, modeling, processing, and interpretation for conventional and unconventional energy production; inverse problem theory; signal processing; linear and nonlinear elastic wave propagation; linear and nonlinear optics; and continuum and fracture mechanics. His research interests include a combined analysis of petroleum systems, earthquakes, and volcanic eruptions based on geology, geophysics, statistical modeling, and control theory. He is a founding member of Geoscientists Without Borders, for which he received an award from SEG in 2010. He is a member of the editorial board of the Journal of Seismic Exploration and has published 107 refereed publications in international journals.


    Book preview

    Coding and Decoding - Luc T. Ikelle


    Preface (First Edition)

Seismic surveys remain the fundamental technology for exploring oil and gas reservoirs and for delineating detailed structures of the subsurface. They are carried out with man-made sources consisting of explosives or other man-made sudden deformations. At one of the predefined locations on the surface of the earth, or just below the sea surface, the source generates waves that propagate through the subsurface. When the wave encounters an interface with different physical properties (e.g., velocity and/or density), such as a fault or lithological change, a percentage of the generated energy is reflected toward the source position, and the remaining energy is transmitted through to the next interface. Sensors located in places accessible to man (like the surface of the earth, the water column in the sea, and boreholes, when they are available) record this reflected and/or transmitted energy (i.e., seismic data). The recorded seismic data are called shot gathers. The source, and sometimes the sensors as well, are then moved to another predefined location, where the same process is performed, resulting in a new shot gather. This process is repeated 50,000 times or more over a period of weeks or even months for a single seismic survey, at a cost of several million US dollars.

    Significant savings in time and money in acquiring seismic data, even in processing and storing them, can be achieved by generating waves from several source locations simultaneously instead of one single-source location at a time, as is currently the case. In fact, the cost of performing several shots simultaneously is almost identical to that of performing one shot. The simultaneous multishooting concept in seismic acquisition and processing, which we are concerned with in this book, is based on this property. We will call this concept multishooting, and data resulting from one multishot will be called a multishot gather.

    The multishooting concept is not limited to field-data acquisition. It is also useful for generating synthetic data (i.e., computer-generated data), which are needed for testing imaging algorithms and interpreting real data. One of the most successful numerical techniques for generating seismic synthetic data is the finite-difference modeling (FDM) technique. It consists of solving the differential equations that control wave propagation in the earth by numerically approximating derivatives of these equations. When an adequate discretization in space and time, which permits an accurate computation of derivatives of the wave equations, is possible, the finite-difference technique is by far the most accurate tool for simulating elastic wave propagation through geologically complex models such as the ones confronted today by the hydrocarbon exploration and production industry. Moreover, the FDM technique is very often easy to use. However, the use of FDM by engineers and interpreters in field operations to simulate seismic surveys is still predominantly limited to its two-dimensional version (2D-FDM). In 2D-FDM, the geological model is assumed to be invariant against parallel translation along one of two spatial axes, and the data are generated by a line source instead of point sources. Yet for FDM to become fully reliable for oil and gas exploration and production, we must develop its 3D cost-effective versions.
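The FDM idea described above can be sketched in a few lines. The following Python example time-steps the 1D constant-density acoustic wave equation with second-order finite differences in space and time; the grid size, velocity, and source wavelet are illustrative assumptions, not values from the text.

```python
import numpy as np

# Illustrative 1D model: 201 grid points, 5 m spacing, 1 ms timestep.
nx, dx, dt, nt = 201, 5.0, 0.001, 500
c = np.full(nx, 2000.0)            # assumed P-wave velocity (m/s)
assert c.max() * dt / dx <= 1.0    # CFL stability condition for 1D

# Ricker wavelet source signature with an assumed 25 Hz peak frequency.
t = np.arange(nt) * dt
f0 = 25.0
arg = (np.pi * f0 * (t - 1.0 / f0)) ** 2
src = (1.0 - 2.0 * arg) * np.exp(-arg)

p_prev = np.zeros(nx)              # pressure field at timestep n-1
p_curr = np.zeros(nx)              # pressure field at timestep n
isrc = nx // 2                     # source located mid-grid

for it in range(nt):
    # Centered second-order approximation of the spatial derivative d2p/dx2.
    lap = np.zeros(nx)
    lap[1:-1] = (p_curr[2:] - 2.0 * p_curr[1:-1] + p_curr[:-2]) / dx**2
    # Explicit second-order time stepping of p_tt = c^2 * p_xx + source.
    p_next = 2.0 * p_curr - p_prev + (c * dt) ** 2 * lap
    p_next[isrc] += dt**2 * src[it]
    p_prev, p_curr = p_curr, p_next
```

Note that injecting several delayed sources at different grid points inside the same loop costs essentially nothing extra per timestep, which is the property the multishooting concept exploits.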

Consider, for example, a representative 3D model discretized into a large number of grid cells, with the waveform computed for 4,000 timesteps. We have estimated that it would take more than 12 years of computation time on an SGI Origin 2000 with 20 CPUs to produce a 3D survey of 50,000 shots, well beyond the lifetime of some petroleum reservoirs. Because the finite-difference technique has the ability to generate elastic waves from several locations, just as in field-data acquisition, the multishooting concept can be used to reduce the CPU time of simulating 3D seismic surveys to a matter of months or even weeks, as we discuss in Chapter 5.

    The end products of seismic-data acquisition and processing are images of the subsurface. When seismic data are acquired based on the concept of multishooting, there are two possible ways to obtain images of the subsurface. One way consists of decoding multishot data before imaging them; that is, the multishot data are first converted to a new dataset corresponding to the standard acquisition technique, in which one single shot at a time is generated and acquired. Second, imaging algorithms are applied to the new dataset. Actually, all seismic data-processing packages available today require that multishot data be decoded before imaging them because they all assume that data have been collected sequentially. Therefore, five of the seven chapters (2 to 6) of this book are dedicated to the decoding of multishot data.

    An alternative way is to directly image multishot data without decoding them. The benefits of directly imaging multishot data, instead of decoding before imaging, include a reduction in memory and the CPU time needed for imaging processes and an improvement in the signal-to-noise ratio of the resulting images of the subsurface. Methods for directly imaging multishot data without decoding them are described in the last chapter of this book.

    Coding and decoding processes are generally associated with communication theory, especially with the fact that several independent messages can be simultaneously passed through a single channel, such as telephone lines, thus improving the economics of the channel. These processes are widely used in cellular communications today so that several subscribers¹ can share the same channel. Moreover, these processes are becoming increasingly attractive in scientific areas far afield, such as neurobiology and biophysics. In this book, we are concerned with coding and decoding in petroleum seismology; we basically describe the coding and decoding techniques for seismic data that result from elastic-wave-scattering experiments. When these experiments are performed with several sources firing simultaneously, we characterize the resulting multishot gathers as coded data. The process of reconstructing the shot gathers (as if the experiments were performed sequentially from one shot location at a time) that compose a multishot gather is characterized as decoding.

    Compared to the decoding problems in communication theory, the decoding of seismic multishooting data is a much more difficult problem. In communication, the input signals (i.e., voice signals) are coded and combined into a single signal, which is then transmitted through a relatively homogeneous medium that has known properties. Although the input signals are very complex, the decoding process in communication is quite straightforward because the coding process is well known to the decoders, as are most changes to the signals during the transmission process. In seismics, we have almost the opposite problem. The input signals generated by the seismic sources are generally simple. But they pass through the subsurface, which is a very complex heterogeneous, anisotropic, and anelastic medium. It sometimes exhibits even nonlinear elastic behaviors. Moreover, this medium is unknown. Signals received after wave propagation in the subsurface are also as complex as those in communication. However, they contain the information about the subsurface that we are interested in reconstructing. The decoding process in this case consists of recovering the impulse response of the earth that corresponds to each source of the multiple sources of the multishooting experiment.
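As a toy illustration of why decoding in communication is straightforward when the coding scheme is known, here is a minimal frequency-division sketch in Python. The signals, sample rate, and band split are all assumptions for illustration: two "voice" signals occupying disjoint frequency bands are combined into one channel signal and then recovered exactly with frequency masks.

```python
import numpy as np

fs, n = 8000, 8000                       # assumed: 1 s of signal at 8 kHz
t = np.arange(n) / fs
# Two stand-in "voice" signals confined to disjoint frequency bands.
s1 = np.sin(2 * np.pi * 300 * t)         # user 1: energy below 1 kHz
s2 = np.sin(2 * np.pi * 1700 * t)        # user 2: energy above 1 kHz
mixed = s1 + s2                          # coding: one combined channel signal

# Decoding: the band allocation (the "code") is known to the decoder, so
# simple frequency masks recover each user's signal.
spec = np.fft.rfft(mixed)
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
r1 = np.fft.irfft(np.where(freqs < 1000, spec, 0), n)   # user 1's band
r2 = np.fft.irfft(np.where(freqs >= 1000, spec, 0), n)  # user 2's band
```

In seismics, by contrast, the "channel" (the subsurface) is unknown and heterogeneous, so no such pre-agreed mask exists; this is what makes the decoding of multishot data so much harder.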

    About five years ago, we began studying the concept of multishooting for seismic exploration and production. Our investigations have basically addressed the following four issues:

    •  how to generate multishooting data

    •  how to simulate multishooting data on the computer

    •  how to decode multishooting data

    •  how to image multishooting data without decoding them

    It will take time and effort from many research groups before comprehensive answers to these questions can be found. Our objective in this book is to record our attempts in the last five years to answer these questions. If this book serves as a catalyst to the seismology community to transform the concept of multishooting into a day-to-day routine in oil and gas exploration and production in the coming years, it will have served its purpose.

The book is partly a textbook and partly a monograph. It is a textbook because it gives a detailed introduction to the concept of multishooting and a detailed description of basic coding and decoding models and algorithms (Chapters 1 to 5). It is a monograph because it presents several new results, ideas, and developments, along with an explanation of existing coding and decoding algorithms (Chapters 1 to 5). Moreover, research results previously scattered across many scientific journals (e.g., the Journal of Machine Learning Research, Neural Computation, and the Annual Review of Psychology) and conference proceedings (especially the proceedings of the fifth and sixth international conferences on independent component analysis and blind signal separation) have been collected and presented in the book in a unified form. So this book is likely to be of interest to graduate and postgraduate students, and to engineers and scientists working in the fields of neurobiology, biophysics, communications, signal and image processing, computer science, and, of course, seismology. Furthermore, the book may be of interest to researchers working on various aspects of the famous cross-discipline problem known as the cocktail party problem.² A number of other concepts and results have been included that may be useful in future research.

    The chapters in this book are kept as independent as possible, so some redundancy has been introduced to avoid requiring a linear progression through the book. Although the material has been developed from first principles wherever possible, the book will be of greatest benefit to those who are familiar with basic college calculus, the Fourier transform, elementary matrix algebra, and probability theory. As we mentioned earlier, this book is also suitable for a graduate-level university course in signal processing, and seismology in particular. Exercises, problems, and computer assignments are given at the end of each chapter to facilitate the use of the book for courses.

    Bibliography

    E.C. Cherry, Some experiments on the recognition of speech with one and two ears, Journal of the Acoustical Society of America 1953;25:975–979.


    ¹  Through, for example, a frequency division in which the voice signal of each user is allocated a separate frequency bandwidth, or through an orthogonalization of voice signals in which the voice signal of each user is associated with a different and unique code, multiple voice signals can be combined into one signal (coding process) in such a way that they can easily be recovered. The combined signal is then transmitted through the telephone line. The uniqueness of the code associated with each user, or the disjointing bandwidths associated with the voice signal of each user, are then used at the receiving end of the telephone to recover the original voice signals (decoding process).

    ²  How do we recognize what one person is saying when others are speaking at the same time (the cocktail party problem)? On what logical basis could one design a machine (technique) for carrying out such an operation? (Colin Cherry, 1953).

    Preface (Second Edition)

In most seismology studies, the cost of data collection far outweighs the cost of data analysis, so it is important to use the most efficient and accurate techniques to collect seismic and EM data. In 1998, when we set up the CASP consortium at Texas A&M University, we started working on the topic of near-simultaneous multiple shooting (or multishooting), which is based on the idea of generating waves nearly simultaneously from several source locations instead of from one single-source location at a time. We went on to file the first US patent on multiple-shooting acquisition and processing, which was granted in 2001, and we published the first book on the topic in 2010.

    Since then, many other authors have added their contributions, as the table below shows (Table 1). For reasons that I am still unable to comprehend, some people in the E&P industry switched to blending¹ and deblending in 2011. In any event, simultaneous multishooting (Fig. 1) is just one aspect of the famous cross-disciplinary cocktail-party problem. Graduate and postgraduate students, along with engineers and scientists working in the fields of neurobiology, biophysics, machine learning, communications, signal and image processing, computer science, and, more recently, seismology (including earthquake seismology, with the recognition of the occurrence of nearly simultaneous earthquakes), have been wrestling with this problem for many decades. The classic words used in these studies are coding, encoding, decoding, mixtures, mixing, and demixing. If we call the data resulting from nearly simultaneous multishooting acquisition multishot data or multishot gathers, and those resulting from the current acquisition approach, in which waves are generated from one location at a time, single-shot data or single-shot gathers, the multishot data are mixtures of single-shot data. Alternatively, the multishot data are a coded (or an encoded) version of single-shot data. Decoding is the process of reconstructing single-shot data from multishot data.

    Table 1

A representative list of references to near-simultaneous multiple shooting. Some people in the E&P industry switched to blending and deblending in 2011. Moreover, based on this list, some even operate as if the topic itself started in 2011.

    Figure 1 Snapshots of wave propagation in which four shots are fired simultaneously from four points spaced 50 m apart. The source signature is the same for the four shots, but their initial firing times are different (Ikelle and Amundsen, Introduction to Petroleum Seismology, 2005).

    Because the end products of seismic-data acquisition and processing are images of the subsurface rather than decoded data, additional terminology based on an imaging approach can be introduced. One approach is to decode multishot seismic data and then use the current seismic-imaging technology to recover the model of the subsurface. We can naturally call this approach the imaging (inversion or migration) of decoded data. Another approach is to image multishot data without decoding them. We can call this approach direct imaging (inversion or migration) of multishot data or direct multishot imaging.

    This new edition includes (i) more decoding solutions of linear instantaneous and convolutive mixtures (Chapters 2, 3, and 4) than the first edition; (ii) the new topic of decoding nonlinear mixtures (Chapter 5) with applications to upgoing–downgoing wavefield separation, P-and-S-wave separation, PRM (permanent reservoir monitoring) data matching, and rock-physics regression analysis; (iii) an improved description of direct linearized inversion of multishot data without decoding them (Chapter 6); (iv) the new topic of the direct nonlinear inversion of multishot data without decoding them (Chapter 6); and (v) new applications of multishooting to tectonic and volcanic earthquake data (Chapters 4, 5, and 6).

    Bibliography

    L.T. Ikelle, L. Amundsen, An Introduction to Petroleum Seismology: Investigations in Geophysics. Tulsa, OK: Society of Exploration Geophysics; 2005.


    ¹  Blending and coding are not synonymous; neither are deblending and decoding.

    Chapter 1

    Introduction to Multishooting: Challenges and Rewards


    Keywords

    Physical laws; Continuum mechanics; Piecewise-continuous elastic medium

    Chapter Outline

    1.1  Dimensions and Notation Conventions

    1.1.1  Coordinate systems

    1.1.2  Dimensions of heterogeneous media

    1.1.3  Notation conventions

    1.1.4  The f-x and f-k domains

    1.2  Scattering Experiments in Petroleum Seismology

    1.2.1  Principles of seismic acquisition

    1.2.2  Seismic data

    1.2.3  Shot, receiver, midpoint, and offset gathers

    1.2.4  Multiazimuthal data

    1.3  Acquisition of Multishot Data

    1.3.1  Multiazimuth surveys

    1.3.2  Flip-flop acquisition

    1.3.3  Source encoding: the marine style

    1.3.4  Source encoding: the land-vibroseis style

    1.3.5  The challenges of multishooting acquisition

    1.4  Processing of Multishot Data

    1.4.1  Reciprocity theorems

    1.4.2  Cross-talk and challenges of the decoding process

    1.4.3  The challenges of imaging multishot data without decoding

    1.5  The Nonlinear Elasticity and the Superposition Principle

    1.5.1  Linear and nonlinear media

    1.5.2  Second-order nonlinear media

    1.5.3  Volterra series

    1.6  The Cocktail-Party Problem: A Multidisciplinary Problem

    1.6.1  A brief review of the cocktail-party problem

    1.6.2  Coding and decoding in communication theory

    1.6.3  Processing of multishot data without decoding

    1.6.4  Nearly simultaneous earthquakes

    1.6.5  Nearly simultaneous sources of volcanic activities

    1.7  Scope and Content of This Book

    Exercises

    How do we understand what one person is saying when others are speaking at the same time (the cocktail-party problem)? On what logical basis could one design a machine (technique) for carrying out such an operation (Cherry, 1953)? These two questions are the essence of the famous cross-discipline problem known as the cocktail-party problem, which was formulated by Colin Cherry and his coworkers more than 60 years ago. Fig. 1.1 provides an illustration of this problem, which involves several people speaking simultaneously in a room containing two microphones that represent human ears [(Cherry, 1953, 1957, 1961); (Cherry and Taylor, 1954); (Cherry and Sayers, 1956, 1959); and (Sayers and Cherry, 1957)]. In all cases, the output of each microphone is a mixture of several voice signals.

    Figure 1.1 Cocktail-party problem. If I people speak at the same time in a room containing two microphones, then the output of each microphone is a mixture of the I voice signals. Given these two signal mixtures, a decoding process aims at recovering the original I voice signals, just as the decoding process of multishooting data does.
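The two-microphone situation of Fig. 1.1 can be sketched numerically. In this minimal Python illustration, random waveforms stand in for voices and the mixing matrix is an assumed example; when the mixing is known, decoding is a simple matrix inversion.

```python
import numpy as np

rng = np.random.default_rng(0)
# Random waveforms stand in for the I = 2 voice signals.
s = rng.standard_normal((2, 1000))

# Each microphone records a different weighted sum of the two voices.
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])    # assumed mixing matrix (unknown in practice)
x = A @ s                     # the two microphone outputs (the mixtures)

# If the mixing were known, decoding would be a simple matrix inversion.
s_hat = np.linalg.inv(A) @ x
```

The difficulty of the actual cocktail-party problem is that A is not known: blind source separation methods such as independent component analysis must estimate it from the statistics of the mixtures alone.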

    Graduate and postgraduate students, along with engineers and scientists working in the fields of neurobiology, biophysics, communications, signal and image processing, computer science, and, more recently, seismology, have been wrestling with this problem for many decades.

    In the early snapshots of Fig. 1.2, the waves associated with each of the four shots are still distinguishable; at later propagation times, it is more difficult to distinguish them because multiple reflections and diffractions have significantly distorted the wavefronts. Similar observations can be made for the gathers shown in Fig. 1.3. Early-arrival events, such as the direct waves associated with the four shots, are clearly distinguishable and can easily be decoded. It is more difficult, at least visually, to establish the association of late-arrival events with particular shot points.

    Figure 1.2 (A) and (C) Snapshots of wave propagation of a single shot. (B) and (D) Snapshots of wave propagation in which four shots are fired nearly simultaneously from four points spaced 50 m apart. The source signature is the same for the four shots, but their initial firing times are different, i.e., τ1 = 0 ms, τ2 = 100 ms, τ3 = 200 ms, and τ4 = 300 ms. The propagation times are the same as those of the snapshots of the single shot.

    Figure 1.3 An example of a multishot gather corresponding to the experiment described in Fig. 1.2.

    We call the concept of generating waves simultaneously from several locations simultaneous multishooting, or simply multishooting. The data resulting from multishooting acquisition will be called multishot data, and those resulting from the current acquisition approach, in which waves are generated from one location at a time, will be called single-shot data. So multishot data are the coded data, and the decoding process aims to reconstruct single-shot data.
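For sources that differ only in their firing times, the coding step can be sketched as a superposition of time-delayed single-shot gathers. In the hedged Python sketch below, random traces stand in for real gathers, and the firing delays follow the values quoted for Fig. 1.2 (0, 100, 200, and 300 ms); the trace count and sampling interval are assumptions.

```python
import numpy as np

nt, nx, dt = 1000, 120, 0.004        # assumed samples, traces, sampling (s)
rng = np.random.default_rng(1)

# Random traces stand in for four single-shot gathers (nt samples x nx traces).
single_shots = rng.standard_normal((4, nt, nx))

# Initial firing-time delays as in Fig. 1.2: 0, 100, 200, and 300 ms.
delays_ms = [0, 100, 200, 300]

multishot = np.zeros((nt, nx))
for gather, tau in zip(single_shots, delays_ms):
    shift = int(round(tau / 1000.0 / dt))         # delay in samples
    delayed = np.zeros_like(gather)
    delayed[shift:, :] = gather[: nt - shift, :]  # apply the firing delay
    multishot += delayed                          # coding: superposition
```

Decoding is the inverse task: recovering the four single-shot gathers from the one multishot gather, which is far harder because the superposition is not invertible without further assumptions.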

    Significant savings in time and money in acquiring seismic data, even in processing and storing them, can be achieved by generating waves from several source locations simultaneously (multishooting) instead of from one single-source location at a time (single-shooting), as is currently the case. In fact, the cost of performing several shots simultaneously is almost identical to that of performing one shot. Again, the simultaneous multishooting concept in seismic acquisition and processing is based on this property. Multishooting can also be used to improve the ways in which we acquire data. For instance, it can be used to improve the spacing between shot points, especially the azimuthal distribution of shot points, and therefore to collect true 3D data. The multishooting concept can also be used to speed up the process of generating synthetic data (i.e., computer-generated data) and snapshots, which are needed for imaging and interpreting real data.

    The end products of seismic-data acquisition and processing are images of the subsurface. As depicted in Fig. 1.4, one solution is to decode multishot seismic data. The decoded data can then be imaged to recover the model of the subsurface using current seismic-imaging technology. Another solution is to image multishot data without decoding them. The benefits of directly imaging multishot data, instead of decoding them before imaging, include a reduction in memory and the CPU time needed for imaging processes and an improvement in the signal-to-noise ratio of the resulting images of the subsurface. In the case in which the nonlinear inversion techniques [also known as a full waveform inversion, from the work of Tarantola (1987)] are used to image seismic data, we can significantly reduce the cost of the forward problem by using multishot data instead of the current single-shot data, hence removing one of the major impediments of the application of the nonlinear inversion to seismic data.

    Figure 1.4 Two possible ways (routes) of processing multishooting data. Route 2 consists of directly processing the multishooting data without decoding, whereas Route 1 requires that multishooting data be decoded before imaging them.

    Our key objective in this book is to address the following four issues: (i) how to generate multishooting data, (ii) how to simulate multishooting data on the computer, (iii) how to decode multishot data, and (iv) how to image multishot data without decoding. The decoding and imaging of multishot data, including wavefield decomposition and demultiple, will be discussed in the next chapters. Our focus in this chapter is to review the benefits and challenges of multishooting for the E&P industry, and the basic physical principles behind the concept of multishooting.

    1.1 Dimensions and Notation Conventions

    1.1.1 Coordinate systems


    To define these positions properly, let us consider the configuration in Fig. 1.5, in which positions are specified with respect to a fixed orthonormal Cartesian reference frame with origin O; the position vector x is used to label a particle throughout its entire history.

    Figure 1.5 Configuration of the rectangular Cartesian coordinates.

    1.1.2 Dimensions of heterogeneous media

    Under the continuous-medium assumption, a rock formation can be characterized as either homogeneous or heterogeneous. A rock formation is homogeneous if its physical properties are invariant with regard to position. Otherwise it is heterogeneous.

    Four particular cases of heterogeneous media are commonly cited in petroleum seismology studies:

    •  The 1D case, in which the physical properties are invariant along the x- and y-axes and with time, and are functions only of the depth z;

    •  The 2D case, in which the physical properties are invariant along the y-axis and with time, and are functions only of x and z;

    •  The 3D case, in which the physical properties are invariant only with time, and are functions of x, y, and z;

    •  The 4D case, in which the physical properties vary with time as well as position (i.e., they are functions of x, y, z, and t).

    Our derivations and discussions in this book are limited mainly to 1D, 2D, and 3D media.

    1.1.3 Notation conventions

    Lowercase Latin subscripts (e.g., i, j, k) denote the components of vectors and tensors; they are to be assigned the values 1, 2, and 3 unless specified otherwise. The lowercase Latin subscripts r and s are symbols reserved for indicating receivers and sources, respectively. Boldface symbols (e.g., v, τ) will be used to indicate vectors or tensors.

    1.1.4 The f-x and f-k domains

    It is sometimes desirable to Fourier-transform seismic wavefields with respect to time and/or space to take advantage of the computational efficiency of FFTs (fast Fourier transforms) and of the fact that differentiations in time and space (t and x) can be converted into simple multiplications by functions of frequency and wavenumber. If the Fourier transform is limited to time only, the transform domain is characterized as frequency-space (f-x). If the Fourier transform is performed with respect to both time and space, the transformed domain is characterized as frequency-wavenumber (f-k).

    The Fourier transform of a wavefield $P(\mathbf{x}, t)$ with respect to time is given by

    $$P(\mathbf{x}, \omega) = \int_{-\infty}^{+\infty} \mathrm{d}t \, e^{i\omega t} \, P(\mathbf{x}, t), \qquad (1.1)$$

    and the inverse Fourier transform is given by

    $$P(\mathbf{x}, t) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} \mathrm{d}\omega \, e^{-i\omega t} \, P(\mathbf{x}, \omega), \qquad (1.2)$$

    where $\omega = 2\pi f$ is the temporal radian frequency and f is the temporal frequency. Notice that rather than defining a new symbol to express this physical quantity after it has been Fourier-transformed, we have used the same symbol with different arguments, as the context unambiguously indicates the quantity currently under consideration. This convention will be used through the rest of the book unless specified otherwise. Notice also that the condition for the existence of a unique solution to the inverse Fourier transform is that (Bracewell, 1978)

    $$\int_{-\infty}^{+\infty} |P(\mathbf{x}, t)| \, \mathrm{d}t < \infty. \qquad (1.3)$$

    So Eq. (1.1) takes a wavefield from the time-space (t-x) domain to the frequency-space (f-x) domain, and Eq. (1.2) takes it from the (f-x) domain back to the (t-x) domain.

    We can also transform a wavefield $P(x, y, z, t)$ in the (t-x) domain into the (f-k) domain by taking its Fourier transform with respect to both time and the horizontal spatial coordinates, as follows:

    $$P(k_x, k_y, z, \omega) = \int \mathrm{d}t \int \mathrm{d}x \int \mathrm{d}y \, e^{i(\omega t - k_x x - k_y y)} \, P(x, y, z, t). \qquad (1.4)$$

    The inverse is defined as follows:

    $$P(x, y, z, t) = \frac{1}{(2\pi)^3} \int \mathrm{d}\omega \int \mathrm{d}k_x \int \mathrm{d}k_y \, e^{-i(\omega t - k_x x - k_y y)} \, P(k_x, k_y, z, \omega). \qquad (1.5)$$

    Here k_x and k_y are the horizontal wavenumbers. Notice that we have again followed the convention introduced earlier; that is, the same symbol is used for both the (t-x) and (f-k) representations of the wavefield P, as the arguments of these representations unambiguously indicate the quantity currently under consideration.
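The (t-x) to (f-k) transform can be sketched with NumPy's FFTs. The wavefield, sampling interval, and trace spacing below are illustrative assumptions; also note that NumPy's forward FFT uses the exp(−i·) convention on both axes, which may differ in sign from the convention adopted in the text, so phases should be interpreted accordingly.

```python
import numpy as np

nt, nx = 512, 128
dt, dx = 0.004, 25.0                   # assumed 4 ms sampling, 25 m spacing
rng = np.random.default_rng(2)
p_tx = rng.standard_normal((nt, nx))   # stand-in wavefield P(t, x)

# Forward transform: time axis -> frequency f, space axis -> wavenumber k_x.
p_fk = np.fft.fft2(p_tx)
f = np.fft.fftfreq(nt, d=dt)           # temporal frequencies (Hz)
kx = np.fft.fftfreq(nx, d=dx)          # horizontal wavenumbers (cycles/m)

# Inverse transform returns the wavefield to the (t-x) domain.
p_back = np.fft.ifft2(p_fk).real
```

The round trip is exact to numerical precision, mirroring Eqs. (1.4) and (1.5); only the differentiation-to-multiplication property depends on the chosen sign convention.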

    1.2 Scattering Experiments in Petroleum Seismology

    Throughout this book, we have illustrated our coding, decoding, and imaging results with synthetic data (i.e., computer-generated data) corresponding to the two models shown in Figs. 1.6 and 1.7. The model in Fig. 1.6 is quite simple, thus permitting a close analysis of our coding, decoding, and imaging results. The second model in Fig. 1.7 is much more complicated; it has a structure similar to those of the complex geologies which petroleum seismologists are now trying to model, image, and interpret. This second model is included here to ensure that our conclusions in this book will also hold in real-data applications.

    Figure 1.6 (left) Snapshots of wave propagation through a model made of three dipping reflectors and (right) the corresponding seismic data for a horizontal array of sensors. The sensor positions vary from (x = 0 m, z = 8 m) to (x = 3000 m, z = 8 m), spaced every 25 m. The shot point is located at (x = 1500 m, z = 8 m). D indicates the direct wave; P1 and P2 indicate primary reflections; and FM1 indicates a free-surface-reflection event. The scattering diagrams of these events are shown in Fig. 1.12. (A) Two seismic datasets corresponding to waves recorded up to the snapshot times of 500 ms and 750 ms. (B) Two seismic datasets corresponding to waves recorded up to the snapshot times of 1000 ms and 1250 ms. (C) Two seismic datasets corresponding to waves recorded up to the snapshot times of 1500 ms and 1750 ms.

    Figure 1.7 (top) Snapshots of wave propagation through a model which contains two salt bodies and (bottom) the corresponding seismic data for a horizontal array of sensors. The sensor positions vary from (x = 500 m, z = 8 m) to (x = 4500 m, z = 8 m), spaced every 25 m. The shot point is located at (x = 2500 m, z = 8 m). (B) and (D) are the seismic data corresponding to waves recorded up to the times of snapshots (A) and (C), respectively.

    As we can see in these two examples, geological models of the subsurface are predominantly layered, with some body-type structures like salt bodies sandwiched between layers. Each layer or body-type structure can be anisotropic, anelastic, and even nonlinearly elastic. For the fundamental discussion of the coding and decoding process that we are carrying out in this chapter and in Chapters 2, 3, and 4, we can limit ourselves to linearly isotropic acoustic media without loss of generality. Thus the wave propagation through these media includes only compressional waves (generally known in seismology as P-waves); shear waves (generally known in seismology as S-waves) are not included in the wave propagation. So the layers and salt bodies in the two models can be completely described by (1) their mass densities, which we will denote by ρ, and (2) their P-wave velocities, which we will denote by V_P.

    Our description of seismic acquisition and seismic data in this section is essentially based on these two models (Figs. 1.6 and 1.7), with the assumptions based on the physical properties we have just discussed. Again, these assumptions do not affect a general understanding of seismic acquisition and seismic data in the context of coding and decoding. More-comprehensive models that include shear waves will be discussed from Chapter 4 onward.

    1.2.1 Principles of seismic acquisition

    Figs. 1.6 and 1.7 also show seismic data recorded by horizontal arrays of sensors. Notice that the various reflections and transmissions of energy in the snapshots are also captured by seismic data.

    The source is then moved to another location, where the entire process of generating and recording waves is repeated. The seismic data recorded in this process are then imaged, based on arrival time and the magnitude of the reflection energy, to obtain a model of the subsurface. The time it takes for the wave to travel from the source to the receivers is recorded in the seismic data. From these traveltimes we can reconstruct the depth of the reflector at which the recorded energy has been reflected. Furthermore, the magnitude of the reflected wave allows us to determine the contrast in physical properties that caused the reflection. Thus we reconstruct the locations of the various discontinuities of our geological model and the contrasts of physical properties which characterize these discontinuities. Examples of such reconstructions are discussed in Chapter 6.
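
    The simplest version of this traveltime-to-depth reconstruction, assuming a constant-velocity medium and normal incidence (simplifications well short of the imaging schemes discussed in Chapter 6), is a one-line calculation:

```python
def reflector_depth(two_way_time_s, velocity_m_s):
    """Depth of a flat reflector from its two-way (down-and-back)
    traveltime, assuming a constant velocity and normal incidence --
    a toy sketch, not the book's actual imaging scheme."""
    return velocity_m_s * two_way_time_s / 2.0

# e.g., a reflection arriving at 2.0 s through water (~1500 m/s):
print(reflector_depth(2.0, 1500.0))  # 1500.0 (meters)
```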

    In Figs. 1.6 and 1.7, we have assumed that our data acquisition is carried out at sea (offshore) with sources and receivers located just below the sea surface. Actually, there are several possible source-and-receiver distributions at sea. The ones commonly used for petroleum exploration and production are (i) the towed-streamer experiment, in which pressure sources and pressure receivers (known as hydrophones) are distributed horizontally in the water column near the sea surface; (ii) the ocean-bottom seismic (OBS) experiment, in which the pressure sources are in the water column and the receivers [measuring pressure (hydrophones) and particle velocity (geophones)] are at the sea floor; (iii) the vertical-cable experiment, in which the pressure sources are in the water near the sea surface, just as in towed-streamer and OBS experiments, except that hydrophones are distributed in the water in the form of a vertical array; and (iv) the walkaway VSP (vertical seismic profile) experiment, in which the sources are in the water, just as in the previous experiment, but with receivers (hydrophones and geophones) inside a borehole. These four experiments are illustrated in Fig. 1.8. For more details on the logistics and operations of these experiments, and also on the land (onshore) alternative, the reader is referred to Ikelle and Amundsen (2005, 2017).

    Figure 1.8 Some examples of the source and receiver distributions: (A) the towed-streamer experiment, (B) the ocean-bottom seismic (OBS) experiment, (C) the vertical cable experiment, and (D) the walkaway VSP (vertical seismic profile) experiment.

    Let us expand a bit more on the towed-streamer acquisition because most of the examples in this book are based on this acquisition. Furthermore, more than 90% of marine-data acquisitions in the oil and gas industry are still carried out as towed-streamer experiments. Fig. 1.9A shows an aerial view of a seismic vessel during seismic-data acquisition, which is known as 3D seismic acquisition (we will contrast 3D acquisition with 2D acquisition later). It shows a ship towing a set of cables containing receivers to record signals generated by seismic sources as the vessel maneuvers across potential petroleum reservoirs. The cables of these receivers, which are more clearly illustrated in Fig. 1.9B, are generally called streamers. They are towed at a depth of between 5 and 10 m below the sea surface. A typical streamer today is 5000 m to 10,000 m long. It carries several hundred sensors, known as hydrophones, which record pressure changes. In conventional acquisition, each seismic receiver is composed of 12 to 24 hydrophones, which are summed before or after recording, depending on the accuracy expected from the seismic imaging [see Ikelle and Amundsen (2005, 2017) for details]. The spacing between receivers (i.e., the center of a group of hydrophones) is generally 12.5 m. Typical acquisition vessels today can tow 8 to 32 streamers spaced 50 to 200 m apart.
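
    The acquisition figures quoted above imply a very large channel count per shot. A back-of-envelope sketch follows; the specific streamer length, streamer count, and separation chosen below are arbitrary values within the quoted ranges.

```python
# Back-of-envelope channel count for a towed-streamer swath, using values
# within the ranges quoted in the text (5000-10,000 m streamers, 12.5 m
# receiver spacing, 8-32 streamers, 50-200 m streamer separation).
streamer_length_m = 8000.0
receiver_spacing_m = 12.5
n_streamers = 12
streamer_separation_m = 100.0

channels_per_streamer = int(streamer_length_m / receiver_spacing_m)
total_channels = channels_per_streamer * n_streamers
swath_width_m = (n_streamers - 1) * streamer_separation_m

print(channels_per_streamer, total_channels, swath_width_m)
# prints: 640 7680 1100.0
```

    Even this mid-range configuration records thousands of traces per shot, which is what makes the productivity of MS/MS acquisition (discussed below) possible.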

    Figure 1.9 (A) An illustration of towed-streamer acquisition in action. The vessel tows an array of airguns and streamers of hydrophones behind the boat while traveling at a roughly constant speed (it takes about 15 seconds for a typical seismic boat to move 50 m). (B) A schematic diagram of a towed-streamer acquisition with six streamers. S indicates seismic sources, and R indicates streamers of hydrophones.

    Several types of sources can be used in towed-streamer data acquisition. The most common involves the use of an array of airguns, which can operate as an exploding source [see Chapters 2 and 8 of Ikelle and Amundsen (2005) for details]. Like the receivers, the typical seismic source is an array composed of subarrays, each containing up to six airguns about 3 m apart (see Fig. 1.10A). The airgun arrays are generally towed at a depth of 5 to 10 m, but they are usually located above the streamers (in other words, they are usually located at a shallower depth than the streamers). Fig. 1.10B shows a typical time signal generated by a source of airgun arrays. This signal represents what we will call the source signature. Notice that the duration of the source signature (<100 ms) is very small compared to the length of data recordings, which is generally between 6 and 10 s.
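
    A Ricker wavelet is often used as a simple stand-in for a source signature in synthetic studies; real airgun signatures are more complex (bubble oscillations, array effects), but the sketch below reproduces the key property just noted: a signature duration (< 100 ms) that is tiny compared with the 6 to 10 s record length. The sampling values are assumptions for illustration.

```python
import numpy as np

def ricker(duration_s=0.1, dt_s=0.001, peak_freq_hz=30.0):
    """A Ricker wavelet as a stand-in source signature (an assumption for
    illustration, not the airgun signature of Fig. 1.10B)."""
    n = int(round(duration_s / dt_s))
    t = (np.arange(n) - n // 2) * dt_s        # centered time axis
    a = (np.pi * peak_freq_hz * t) ** 2
    return t, (1.0 - 2.0 * a) * np.exp(-a)

t, w = ricker()
print(len(t), w.max())  # 100 1.0
# 100 samples at 1 ms = a 0.1 s signature, versus a 6-10 s data record.
```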

    Figure 1.10 (A) A typical marine-seismic source made of three 18-airgun arrays. Two examples of marine-seismic sources are illustrated here. These sources are generally fired in an alternating fashion, to allow the other arrays to recharge and to improve the acquisition time. (B) A typical time function (known as a source signature) generated by a marine-airgun source. This picture also shows the amplitude spectrum of this source signature.

    Marine-data acquisition, as depicted in Figs. 1.9 and 1.10, is known as multisource and multistreamer (MS/MS) acquisition. Actually, 3D marine-data acquisition became affordable in the 1990s due, in large part, to the high productivity (square kilometers acquired per day) of MS/MS acquisition technology. As illustrated in Figs. 1.9 and 1.10, MS/MS acquisition generally corresponds to a boat towing two sources and eight streamers (2/8) or 16 streamers (2/16), with a streamer spacing of between 50 and 100 m. The sources are fired alternately (in flip-flop) every 25 m (i.e., each individual source is fired every 50 m; the spacing between shot points in one shooting line is 50 m), and the sail-line spacing varies between 200 and 800 m. Fig. 1.11 illustrates a typical sailing path of a 3D survey; the vessel travels back and forth, shooting and collecting data along many parallel lines, resulting in seismic data generated along lines 25 to 50 meters apart. Note that it takes about nine hours to turn from one sailing line to another for a vessel carrying 10-km-long streamers.
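
    The shot timing implied by these numbers can be checked directly, using the vessel speed of roughly 50 m per 15 s quoted in the caption of Fig. 1.9:

```python
# Shot timing implied by the quoted geometry: flip-flop shot points every
# 25 m, each individual source firing every 50 m, vessel speed ~50 m / 15 s.
vessel_speed_m_s = 50.0 / 15.0                        # ~3.33 m/s
time_between_shots_s = 25.0 / vessel_speed_m_s        # consecutive flip-flop shots
time_between_same_source_s = 50.0 / vessel_speed_m_s  # firings of one source

print(round(time_between_shots_s, 1), round(time_between_same_source_s, 1))
# 7.5 15.0 -- so a 6-10 s record fits between firings of the same source
```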

    Figure 1.11 An example of the sailing path of a marine vessel in a towed-streamer survey. Note that the time for turning from one sailing line to another is about nine hours for vessels carrying streamers that are 10 km long. The dotted line indicates the turning legs of the sailing path.

    So towed-streamer acquisitions are generally performed as a series of parallel shooting lines. The lines shot during the survey are called inline sections or rows. Lines perpendicular to these lines are called crossline sections or columns. The crosslines can be generated from inline data (y in our figures is considered the crossline direction, and x the inline direction). Sometimes two or more 3D surveys are recorded over the same area in succession, but at different angles (or azimuths) to one another. The goal of additional 3D surveys is to illuminate the potential petroleum reservoir from many azimuthal angles. We will discuss the advantages of these additional surveys later.

    Finally, seismic data consisting of a single inline acquisition are known as 2D seismics (see Fig. 1.8A). The model of the earth in this acquisition and in the subsequent seismic-imaging process is assumed to be invariant along the crossline. This, of course, is not true. Because of the structural features of the sea bottom or other reflectors below the sea bottom, reflected events from outside the vertical plane will be recorded in the 2D data. When the source is detonated, seismic energy propagates outward in an expanding wavefront. In the absence of any variation in the geologic structure outside the vertical plane of the 2D acquisition line, only reflections returning from within the vertical plane would be recorded. If, however, there were structural features such as scattering points outside the acquisition plane, out-of-plane reflections (sideswipes) from these scattering points would also be recorded. The presence of sideswipe reflections from out of the plane is one of the inherent problems with 2D seismic data. It is usually difficult to distinguish reflection events from within the vertical plane of the 2D seismic line from out-of-plane reflections. If these reflections are ignored in our imaging schemes, we will end up with inaccurate representations of the subsurface.

    In most of this book, our examples are limited to 2D datasets because such datasets are sufficient to prove the applicability of most of the coding and decoding processes, and even of the imaging processes, described here. These 2D datasets are generated using 2D models of the earth, with source and receiver points in the 2D models.

    1.2.2 Seismic data

    Seismic data are generally described using the concept of an event. An event is a coherent packet of seismic energy corresponding to one of the wave types that have traveled from the source to a receiver via some path through the subsurface. In Fig. 1.6, we have indicated examples of seismic events; their scattering diagrams (i.e., wave-propagation paths) are shown in Fig. 1.12A. These events can be grouped into three categories: primaries, free-surface-reflection events (ghosts and free-surface multiples), and internal multiples. Primaries are seismic events which reflect or diffract only once in the subsurface, but not at the free surface, before being recorded. Free-surface-reflection events (ghosts and free-surface multiples) are events with at least one reflection at the sea surface in their wave-propagation path. When the first and/or last reflection in the wave-propagation path of a free-surface-reflection event is at the sea surface, the event is characterized as a ghost. All the other free-surface-reflection events are characterized as free-surface multiples. Internal multiples are seismic events with no reflection at the free surface but with reflections between two interfaces other than the free surface. Two types of events in seismic data do not readily fall into any of these categories: head waves (turning rays from a thin layer) and direct waves (the expanding energy that moves from the source point to the receiver without hitting any interface).
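
    This taxonomy can be summarized as a small decision rule. In the sketch below, an event is represented by the ordered list of interfaces at which its raypath reflects ("FS" standing for the free surface); the function and its interface labels are illustrative assumptions, not part of any standard library.

```python
def classify_event(reflections):
    """Classify a seismic event from the ordered list of interfaces at
    which its raypath reflects ('FS' = free surface; any other label =
    a subsurface reflector). A sketch of the taxonomy in the text."""
    if not reflections:
        return "direct wave"
    fs_hits = [r == "FS" for r in reflections]
    if not any(fs_hits):
        # No free-surface reflection: primary if it reflects once,
        # internal multiple otherwise.
        return "primary" if len(reflections) == 1 else "internal multiple"
    # At least one free-surface reflection: a ghost if the first and/or
    # last bounce is at the free surface, a free-surface multiple otherwise.
    return "ghost" if (fs_hits[0] or fs_hits[-1]) else "free-surface multiple"

print(classify_event([]))                                  # direct wave
print(classify_event(["sea floor"]))                       # primary
print(classify_event(["FS", "sea floor"]))                 # ghost
print(classify_event(["sea floor", "FS", "sea floor"]))    # free-surface multiple
print(classify_event(["salt top", "sea floor", "salt top"]))  # internal multiple
```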

    Figure 1.12 (A) Examples of scattering diagrams of events. These examples include a direct wave (D), two primaries (P1 and P2), a source ghost (GS), two free-surface multiples (FM1 and FM2), a receiver ghost (GR), and an internal multiple (IM). Notice that these displays ignore the effect of Snell's law; it is common practice in petroleum seismology to draw these diagrams without the effects of Snell's law. (B) A scattering diagram without the effect of Snell's law. (C) A scattering diagram with the effect of Snell's law.

    Notice that, for clarity, most scattering diagrams describing wave-propagation paths of seismic events ignore Snell's law (ray bending). The difference between drawing raypaths with and without the effects of Snell's law is shown in Figs. 1.12B and 1.12C. Actually, it is a common practice in petroleum seismology to draw raypaths without using Snell's law. Except when stated otherwise, these simplifications of raypaths are limited only to our drawings; in other words, our data include the effects of Snell's law. We will follow this convention in this book.

    The key processes of marine-seismic imaging include (1) removing free-surface-reflection events from the data (also known as deghosting and free-surface multiple attenuation) and leaving primaries and internal multiples, (2) removing internal multiples from the data and leaving primaries (also known as internal-multiple attenuation), and then (3) locating the scattering points and reflectors in the subsurface, which are the sources of primaries and internal multiples in particular. This last process is generally carried out in two steps, with the first step being the estimation of a velocity model, which allows us to backpropagate the wavefield recorded near the sea surface to the scattering points and reflectors in the subsurface. The second step allows us to reconstruct the structures of the reflectors in the subsurface. This second step is migration [see Ikelle and Amundsen (2005, 2017)]. One of the key objectives of this book is to show that all these processes can be performed without decoding. These developments are discussed in Chapter 6.

    1.2.3 Shot, receiver, midpoint, and offset gathers

    Seismic data are functions of time, the source position, and the receiver position. The pressure wavefield can then be written

    P = P(x_r, t; x_s),   (1.6)

    where x_s denotes the source position and x_r the receiver position. A shot gather is a cross-section of this wavefield in which the source position x_s is held constant. Each cross-section corresponds to a limited experiment with one shot and all receivers available. Similarly, a receiver gather is a cross-section in which the receiver position x_r is held constant. We will discuss the more general case later. Fig. 1.14 shows two receiver gathers corresponding to the wave propagation in Figs. 1.6 and 1.7. As we can see in Fig. 1.14, receiver gathers and shot gathers are quite similar in an experiment in which source and receiver positions can be interchanged based on the reciprocity theorem. Note that the difference between the source and receiver depths is negligible here (it is less than 3 m, whereas the typical seismic wavelength is 50 m or larger).
