An Introduction to Wavelets and Other Filtering Methods in Finance and Economics
Ebook, 661 pages


About this ebook

An Introduction to Wavelets and Other Filtering Methods in Finance and Economics presents a unified view of filtering techniques with a special focus on wavelet analysis in finance and economics. It emphasizes the methods and the explanation of the theory that underlies them, and it concentrates on exactly what wavelet analysis (and filtering methods in general) can reveal about a time series. It also covers testing that can be performed with wavelets in conjunction with multiresolution analysis. The descriptive focus of the book avoids proofs and provides easy access to a wide spectrum of parametric and nonparametric filtering methods. Examples and empirical applications show readers the capabilities, advantages, and disadvantages of each method.
  • The first book to present a unified view of filtering techniques
  • Concentrates on exactly what wavelet analysis (and filtering methods in general) can reveal about a time series
  • Provides easy access to a wide spectrum of parametric and non-parametric filtering methods
Language: English
Release date: Oct 12, 2001
ISBN: 9780080509228
Author

Ramazan Gençay

Ramazan Gençay is a professor in the economics department at Simon Fraser University. His areas of specialization are financial econometrics, nonlinear time series, nonparametric econometrics, and chaotic dynamics. His publications appear in finance, economics, statistics and physics journals. His work has appeared in the Journal of the American Statistical Association, Journal of Econometrics, and Physics Letters A.


    Book preview

    An Introduction to Wavelets and Other Filtering Methods in Finance and Economics - Ramazan Gençay


    PREFACE

    This book presents a unified view of filtering techniques with a special focus on wavelet analysis in finance and economics. It is designed for those who might be starting research in these areas as well as for those who are interested in appreciating some of the statistical theory that underlies parametric and nonparametric filtering methods. The targeted audience includes finance professionals; research professionals in the public and private sector; those taking graduate courses in finance, economics, econometrics, statistics, and time series analysis; advanced MBA students; and students in other applied sciences, such as engineering, physics, medicine, biology and oceanography. Regardless of one’s profession, this book assumes a basic understanding of mathematics, including such topics as trigonometry, basic linear algebra, calculus, and the Fourier transform. A certain level of statistical background is also needed, including a basic understanding of probability theory, statistical inference, and time series analysis.

    Many techniques are discussed in the book, including parametric recursive and nonrecursive filters, Kalman filters, Wiener filters, and wavelet and neural network filters. The emphasis is on the methods and the explanation of the theory that underlies them. Our approach concentrates on what exactly wavelet analysis (and filtering methods in general) can tell us about a time series. In addition, the presentation covers testing that can be performed using wavelets in conjunction with multiresolution analysis. For neural network methods, there is emphasis on dynamic architectures (such as recurrent networks) in addition to simple feedforward networks. Recurrent networks together with multistream learning provide a rich filtering framework for long-memory processes.

    This book contains numerous empirical applications from economics and finance. These applications not only illustrate the current use of filtering techniques but also point toward potential application areas. Some of the finance applications use high-frequency financial time series. These provide a platform for demonstrating the usefulness of wavelet methods in the analysis of intraday seasonality, in the identification of trader behavior at different trading horizons, and in the cross-correlation analysis of two high-frequency series through the decomposition of each signal into its high- and low-frequency components.

    The focus of this book is descriptive, and proofs are avoided as much as possible. This focus provides easy access to a wide spectrum of parametric and nonparametric filtering methods. Some of these filtering methods are widely known, whereas others, such as the wavelet methods, are fairly new to economics and finance. Our aim is to provide an introduction to these methods that can be easily followed.

    Ramazan Gençay, Faruk Selçuk and Brandon Whitcher

    1

    INTRODUCTION

    The fundamental reason for writing this book is that we believe the basic premise of wavelet filtering provides insight into the dynamics of economic/financial time series beyond that of current methodology. A number of concepts, such as nonstationarity, multiresolution, and approximate decorrelation, emerge from wavelet filters. Wavelet filtering provides a natural platform to deal with the time-varying characteristics found in most real-world time series, and thus the assumption of stationarity may be avoided. Wavelet filters provide an easy vehicle to study the multiresolution properties of a process. It is important to realize that economic/financial time series need not follow the same relationship as a function of time horizon (scale). Hence, a transform that decomposes a process into different time horizons is appealing as it differentiates seasonalities, reveals structural breaks and volatility clusters, and identifies local and global dynamic properties of a process at these timescales.¹ Last but not least, wavelet filters provide a convenient way of dissolving the correlation structure of a process across timescales. That is, the wavelet coefficients at one scale are not (much) associated with coefficients at other scales or within their own scale. This is convenient when performing tasks such as simulation, estimation, and testing, since it is always easier to deal with an uncorrelated process than with one of unknown correlation structure. These issues are studied in Chapter 5.

    1.1 FOURIER VERSUS WAVELET ANALYSIS

    At this point, a natural question to ask would be why not use traditional spectral tools, such as Fourier analysis, rather than exploring wavelet methods? A Fourier series is a linear combination of sines and cosines. Each of these sines and cosines is itself a function of frequency, and therefore the Fourier transform may be seen as a decomposition on a frequency-by-frequency basis. The Fourier basis functions (sines and cosines) are very appealing when working with stationary time series (see Section 4.1.1 for a definition of a stationary time series). However, restricting ourselves to stationary time series is not very appealing since most economic/financial time series exhibit quite complicated patterns over time (e.g., trends, abrupt changes, and volatility clustering). The Fourier transform cannot efficiently capture these events. In fact, if the frequency components are not stationary, so that they may appear, disappear, and then reappear over time, traditional spectral tools (such as Fourier analysis) may miss such frequency components.

    The Fourier transform is an alternative representation of the original time series: it summarizes information in the data as a function of frequency and therefore does not preserve information in time. This is the opposite of the original time series, which preserves information in time but provides no frequency resolution. The Gabor transform, or short-time Fourier transform (STFT), was developed to achieve a balance between time and frequency by sliding a window across the time series and taking the Fourier transform of the windowed series. The resulting expansion is a function of two parameters: frequency and time shift. Since the STFT simply applies the Fourier transform to pieces of the time series of interest, a drawback of the STFT is that it cannot resolve events more finely than the width of its window.
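    The windowing idea can be sketched in a few lines. The example below is our own illustration, not taken from the book: it uses a plain rectangular window rather than a Gabor window, and a synthetic series containing a transient 10 Hz tone, to show how the STFT localizes an event that a single full-length Fourier transform would spread over all of time.

```python
import numpy as np

def stft(x, window_len, hop):
    """Short-time Fourier transform: slide a window across x and take
    the DFT of each windowed segment (rectangular window for simplicity)."""
    frames = []
    for start in range(0, len(x) - window_len + 1, hop):
        frames.append(np.fft.rfft(x[start:start + window_len]))
    return np.array(frames)  # rows: time shifts, columns: frequencies

# A transient burst: a 10 Hz tone present only in the middle of the series.
t = np.arange(0, 4.0, 1.0 / 128)           # 4 s sampled at 128 Hz
x = np.where((t >= 1.5) & (t < 2.5), np.sin(2 * np.pi * 10 * t), 0.0)

S = stft(x, window_len=128, hop=64)
power = np.abs(S) ** 2
# With a 128-sample window at 128 Hz, DFT bin k corresponds to k Hz, so
# the 10 Hz bin lights up only in the frames that overlap the burst.
burst_frame = int(np.argmax(power[:, 10]))
```

A full-length DFT of `x` would report energy at 10 Hz but say nothing about when the tone occurred; the STFT recovers that timing at the cost of frequency resolution within each window.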

    To overcome the fixed time-frequency partitioning, a new set of basis functions is needed. The wavelet transform utilizes a basis function (called the mother wavelet) that is stretched and shifted to capture features that are local in time and local in frequency. Figure 1.1a introduces a square-wave function, based on the Haar wavelet filter, and a shifted version of the same function backward in time (Figure 1.1b). The wavelet filter is long in time when capturing low-frequency events (Figure 1.1c), and hence has good frequency resolution. Conversely, the wavelet is short in time when capturing high-frequency events (Figure 1.1d) and therefore has good time resolution for these events. By combining shifted and stretched versions of the mother wavelet, the wavelet transform is able to capture all the information in a time series and associate it with specific time horizons and locations in time.

    FIGURE 1.1 Application of translation and dilation to the square-wave function. (a) Square-wave function. (b) Square-wave function shifted backward (negatively translated) in time. (c) Square-wave function stretched (positively dilated) to twice its original length in time. (d) Square-wave function compressed (negatively dilated) to half its original length.
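    The translation and dilation operations of Figure 1.1 can be written down directly for the Haar case. The sketch below is our own minimal illustration, with the mother wavelet supported on the unit interval and the usual 1/sqrt(scale) normalization; the function names are ours, not the book's.

```python
def haar(t):
    """Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere."""
    if 0.0 <= t < 0.5:
        return 1.0
    if 0.5 <= t < 1.0:
        return -1.0
    return 0.0

def wavelet(t, shift=0.0, scale=1.0):
    """Translated and dilated wavelet psi((t - shift) / scale) / sqrt(scale).

    scale > 1 stretches the wavelet in time (Figure 1.1c: good frequency
    resolution at low frequencies); scale < 1 compresses it (Figure 1.1d:
    good time resolution at high frequencies)."""
    return haar((t - shift) / scale) / scale ** 0.5

# Stretching to twice the original length doubles the support:
assert wavelet(1.9, scale=2.0) != 0.0   # t = 1.9 is inside [0, 2)
assert haar(1.9) == 0.0                 # but outside the unit support
```

The wavelet transform is then built from inner products of the series with this family over many (shift, scale) pairs.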

    The wavelet transform intelligently adapts itself to capture features across a wide range of frequencies and thus has the ability to capture events that are local in time. This makes the wavelet transform an ideal tool for studying nonstationary or transient time series. The following examples demonstrate the convenient usage of wavelet-based methods in seasonality filtering, denoising, identification of structural breaks, scaling, separating observed data into timescales (so-called multiresolution analysis) and comparing multiple time series.

    1.2 SEASONALITY FILTERING

    The presence of seasonalities (periodicities) in a persistent process may obscure the underlying low-frequency dynamics. Specifically, the periodic component pulls the calculated autocorrelations down, giving the impression that there is no persistence other than particular periodicities. Consider the following AR(1) process with periodic components:

    yt = 0.95 yt-1 + εt + ∑s [3 sin(2πt/Ps) + 0.9 vst]   (1.1)

    for t = 1, …, N, P1 = 3, P2 = 4, P3 = 5, and P4 = 6. The process has three-, four-, five-, and six-period stochastic seasonalities. The random variables εt and vst are uncorrelated Gaussian disturbance terms with mean zero and unit variance.

    Figure 1.2 presents the autocorrelation functions (ACFs) from a length N = 1000 simulated AR(1) process in Equation 1.1 with and without periodic components. The ACF of the AR(1) process without seasonality (excluding ∑[3 sin (2πt/Ps) + 0.9vst] from the simulated process) starts from a value of 0.95 and decays geometrically.² However, the ACF of the AR(1) process with the seasonality starts from 0.40 and fluctuates between positive and negative values. The seasonality is evident in the peaks at lags that are multiples of 6 (i.e., at lags 12, 24, 36, etc.). The underlying persistence of the AR(1) process in the absence of the seasonality component is entirely obscured by these periodic components.

    FIGURE 1.2 Sample autocorrelation function for the simulated AR(1) process (straight line), AR(1) plus seasonal process (dashed line), and wavelet smooth of the AR(1) plus seasonal process (dotted line).

    A well-designed seasonal adjustment procedure should therefore remove the seasonal components from the data and leave the underlying nonseasonal structure intact. In Figure 1.2 the solid line is the ACF of the nonseasonal AR(1) dynamics and the dotted line is the ACF of the seasonally adjusted series obtained with a wavelet multiresolution analysis. As Figure 1.2 displays, using a multiresolution analysis to selectively filter a time series successfully uncovers the nonseasonal dynamics without inducing any spurious persistence into the filtered series. Chapter 4 provides a detailed exposition of the types of wavelet filters used in this example.

    1.3 DENOISING

    A convenient model for a uniformly sampled process yt is that of the standard signal plus noise model; that is,

    yt = st + εt   (1.2)

    For now let us assume st is a deterministic function of t and εt is noise. If we want the probability of any noise appearing in our estimate of st to be as small as possible as the number of samples goes to infinity, then applying the wavelet transform to yt and thresholding the resulting coefficients is a good strategy. Utilizing such a threshold, one may then remove (hard thresholding) or shrink toward zero (soft thresholding) wavelet coefficients at each level of the decomposition in an attempt to eliminate the noise from the signal. Inverting the wavelet transform yields a nonparametric estimate of the underlying signal st. Thresholding wavelet coefficients is appealing, since they capture information at different combinations of time and frequency; thus the wavelet-based estimate is locally adaptive.

    Figure 1.3 provides two estimates of an example function

    (1.3)

    One estimate uses the universal threshold; the other uses the minimax estimator, which is discussed in Section 6.3.2. In each case a soft thresholding rule was utilized; essentially, all wavelet coefficients less than the threshold are set to zero and all remaining coefficients are moved toward zero by the amount of the threshold. Even though the example function varies with time, the wavelet-based estimate is able to resolve both the bump and the linear portions simultaneously.

    FIGURE 1.3 Universal and minimax estimators for sampled versions of the example function st (N = 1024) with additive noise. The true function (dotted line) is drawn in the bottom two panels for comparison with the estimate.
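    A minimal version of the thresholding recipe can be sketched with a Haar transform. This is our own illustration, not the book's code: the example function (a linear trend plus a bump, standing in for Equation 1.3), the noise level, and the Haar filter are all assumptions; the threshold used is the Donoho-Johnstone universal threshold sigma*sqrt(2 log N).

```python
import numpy as np

def haar_dwt(x):
    """Full orthonormal Haar DWT of a length-2^J signal: a list of detail
    coefficient arrays (finest scale first) plus the final scaling coefficient."""
    details = []
    while len(x) > 1:
        even, odd = x[0::2], x[1::2]
        details.append((even - odd) / np.sqrt(2))
        x = (even + odd) / np.sqrt(2)
    return details, x

def haar_idwt(details, approx):
    """Invert haar_dwt by undoing each averaging/differencing step."""
    x = approx
    for d in reversed(details):
        even, odd = (x + d) / np.sqrt(2), (x - d) / np.sqrt(2)
        x = np.empty(2 * len(d))
        x[0::2], x[1::2] = even, odd
    return x

def denoise(y, sigma):
    """Soft thresholding with the universal threshold sigma*sqrt(2 ln N):
    detail coefficients below the threshold are zeroed, the rest are
    shrunk toward zero by the threshold amount."""
    lam = sigma * np.sqrt(2 * np.log(len(y)))
    details, approx = haar_dwt(y)
    details = [np.sign(d) * np.maximum(np.abs(d) - lam, 0.0) for d in details]
    return haar_idwt(details, approx)

rng = np.random.default_rng(1)
N = 1024
t = np.arange(N) / N
signal = t + np.exp(-((t - 0.5) ** 2) / 0.002)    # linear trend plus a bump
noisy = signal + 0.1 * rng.standard_normal(N)
estimate = denoise(noisy, sigma=0.1)
# The estimate suppresses the noise while keeping both the smooth trend
# and the localized bump, because large coefficients at any scale survive.
```

Hard thresholding would instead keep surviving coefficients unchanged; soft thresholding trades a small bias for a smoother estimate.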

    1.4 IDENTIFICATION OF STRUCTURAL BREAKS

    When developing time series models, a natural assumption is that of (second-order) stationarity. That is, the time series model assumes that the mean and covariance of the process do not vary over time. For quite a few observed time series, this assumption is suspect and statistical hypothesis testing is useful in detecting and locating deviations from stationarity at specific points. This is one example of what is known as a structural break.

    Consider, as a simple example, a sequence of uncorrelated random variables xt with mean zero for all t whose variance is σ0² for t < k and σ1² for t ≥ k. To overcome the restrictive assumption of a white noise process on the observed time series, one common approach is to fit an ARMA time series model and test the residuals. This is adequate only if the true process really is an ARMA process. An interesting wavelet-based approach is to test the wavelet coefficients on a level-by-level basis. Two possible scenarios are as follows:

      • If the structural break of interest is a sudden change in variance, then the low-level wavelet coefficients (which are associated with the high-frequency content of the time series) should retain this sudden shift in variability while the high-level coefficients should be stationary.

      • If the structural break of interest is a possible change in the long-range dependence of the series, then all levels of wavelet coefficients should exhibit a structural change, since long memory is associated with all scales—especially the low-frequency ones.

    We are not proposing the same test for both of these scenarios, but instead argue that the multiscale decomposition of the wavelet transform allows for a straightforward testing procedure to be applied to each level of the transform instead of developing customized procedures to deal with each type of structural break.

    Figure 1.4 shows the results of testing the daily IBM volatility series (absolute returns) for a single change in variance at an unknown time. This volatility series was computed from daily IBM stock prices spanning May 17, 1961, to November 2, 1962 (Box and Jenkins, 1976). We chose the IBM volatility series because it exhibits a slowly decaying ACF and therefore cannot be modeled as a sequence of uncorrelated Gaussian random variables (required by CUSUM procedures), nor can it be effectively modeled by an ARMA process with few parameters. The null hypothesis of constant variance is rejected for the first three scales of the wavelet transform. The normalized cumulative sum of squares is displayed for the first three levels of wavelet coefficients in Figure 1.4. Since the level 1 coefficients (second row from the top in Figure 1.4) are associated with the highest frequencies, we use the location of their maximum deviation as the estimated time of the variance change. Here the wavelet transform allowed for a rigorous test of homogeneity of variance, with only mild assumptions on the underlying spectrum of the process.

    FIGURE 1.4 The IBM stock price volatility (top panel) along with the normalized cumulative sum of squares for its wavelet decomposition. The top row is the original IBM volatility series, and the following three rows are the normalized cumulative sum of squares (NCSS) of the first three levels of its wavelet decomposition. These three levels are associated with changes on longer and longer timescales; specifically, from top to bottom, changes on the order of one, two, and four days. The dotted vertical line denotes the location of the maximum for the one-day-scale wavelet coefficients (observation 237).
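    The normalized cumulative sum of squares statistic itself is easy to sketch. The example below is our own illustration on a synthetic white noise sequence whose standard deviation triples partway through; in the book the same statistic is computed level by level on the wavelet coefficients of the IBM volatility series.

```python
import numpy as np

def ncss_break(w):
    """Normalized cumulative sum of squares of a coefficient sequence w.

    Under constant variance, P_k = sum_{t<=k} w_t^2 / sum_t w_t^2 stays
    close to the diagonal k/N; the largest deviation from the diagonal
    locates a candidate variance change point."""
    N = len(w)
    P = np.cumsum(w ** 2) / np.sum(w ** 2)
    D = np.abs(P - np.arange(1, N + 1) / N)
    k = int(np.argmax(D))
    return k, float(D[k])

rng = np.random.default_rng(2)
# White noise whose standard deviation triples at observation 300.
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(0, 3, 200)])
k, stat = ncss_break(x)
# k lands near the true change point and stat is far above the small
# fluctuations seen under homogeneous variance.
```

Applied to each level of a wavelet decomposition, this single procedure handles the different break scenarios listed above without customized tests.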

    1.5 SCALING

    Realized volatility plays an essential role in measuring volatility, so it is important to understand the limitations of the scaling laws that relate volatility estimates across time horizons. There are two limitations to the precision of the estimation of realized volatility. For long time intervals (a year or more), it becomes difficult to assess the statistical significance of the volatility estimate, since there are no more than a handful of independent observations. The number of observations grows and the noise shrinks as the return measurement intervals shrink, but then the measurement bias starts to grow.

    Until now, the only choice was a clever trade-off between the noise and the bias, which led to typical return intervals of about an hour. The goal is to define a superior realized volatility that combines the low noise of short return intervals with the low bias of long return intervals.

    Instead of calculating realized volatilities at different data frequencies, we proceed with a multiscale approach. The studied data sets are the 20-min Deutsche Mark – U.S. Dollar (DEM-USD) and Japanese Yen – U.S. Dollar (JPY-USD) price series for the period from December 1, 1986, to December 1, 1996. Here the volatility is defined as the absolute value of the returns.

    Our results provide evidence that the scaling behavior of volatility breaks at scales higher than one day. Figure 1.5 reports the decomposition of the variance on a scale-by-scale basis through a wavelet multiresolution analysis. For example, the first wavelet scale is associated with 20-min changes, the second wavelet scale with 40-min changes, and so on. An apparent break in the scaling law is observed in the variance at the seventh wavelet scale for both series. Since the seventh scale corresponds to 1280-min changes and there are 1440 minutes in one day, it amounts to 0.89 day. Therefore, the seventh and higher scales are taken to be associated with dynamics of one day and longer.

    FIGURE 1.5 Multiscale variance for 20-min absolute returns of (a) DEM-USD and (c) JPY-USD. In (a) and (c), the estimated wavelet variances are plotted. In (b) and (d), the results are plotted on a log-log scale. The stars are the estimated variances for each wavelet scale and the straight lines are ordinary least squares (OLS) estimates. Each wavelet scale is associated with a particular time period. For instance, the first wavelet scale is associated with 20-min changes, the second wavelet scale with 40-min changes, the third wavelet scale with 80-min changes, and so on. The seventh wavelet scale is associated with 1280-min changes. Since there are 1440 minutes per day, the seventh scale corresponds to approximately 1 day. Notice that there is a break at the 1-day scale. The last wavelet scale is associated with changes of approximately 28 days. The sample period is December 1, 1986, through December 1, 1996. Data source: Olsen & Associates.
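    The scale-by-scale variance decomposition can be sketched with a Haar filter. The example below is our own illustration on Gaussian white noise, for which the wavelet variance is flat across scales (log-log slope near zero); for the FX series the analogous OLS fit on the log-log plot is what reveals the break in the scaling law at the seventh scale. The Haar choice and the white-noise input are assumptions, not the book's setup.

```python
import numpy as np

def haar_details(x, levels):
    """Detail coefficients of an orthonormal Haar DWT, one array per level
    (level j is associated with changes over 2^(j-1) sampling intervals)."""
    out = []
    for _ in range(levels):
        even, odd = x[0::2], x[1::2]
        out.append((even - odd) / np.sqrt(2))
        x = (even + odd) / np.sqrt(2)
    return out

def wavelet_variances(x, levels):
    """Sample variance of the detail coefficients at each level."""
    return [float(np.var(d)) for d in haar_details(x, levels)]

rng = np.random.default_rng(3)
x = rng.standard_normal(4096)        # stand-in for an absolute-return series
v = wavelet_variances(x, 6)

# Fit log2(variance) against the scale index by OLS, as in Figure 1.5(b)/(d).
j = np.arange(1, 7)
slope = np.polyfit(j, np.log2(v), 1)[0]
# White noise has a flat wavelet-variance spectrum, so the fitted slope is
# near zero; a kink in an empirical fit signals a break in scaling behavior.
```

For real volatility series the fitted line has a nonzero slope, and a change in that slope beyond some scale (here, one day) is exactly the break discussed above.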

    1.6 AGGREGATE HETEROGENEITY AND TIMESCALES

    Consider the participants of financial markets, who consist of traders with different trading horizons. At the heart of the trading mechanisms are the market makers. The next level up is composed of the intraday traders, who carry out trades only within a given trading day but do not carry overnight positions. Then there are day traders, who may carry positions overnight, followed by short-term traders and long-term traders. Each of these classes of traders may have its own trading tool set consistent with its trading horizon and may appear homogeneous within its own class. Overall, it is the sum of the activities of all traders over all horizons that generates the market prices. Therefore, market activity is heterogeneous, with each trading horizon (trader class) dynamically providing feedback across all trader classes.

    In such a heterogeneous market, a low-frequency shock to the system penetrates through all layers, reaching the market maker by penetrating the entire market. High-frequency shocks, however, would be short lived and may have no impact outside their boundaries. This apparent aggregate heterogeneity (as a sum of homogeneous trader classes) requires econometric methods that can simultaneously learn and forecast the underlying structure at different timescales (horizons). This involves the separation of the local dynamics from the global ones, and the transitory from the permanent dynamics. Wavelet methods provide a natural platform to distinguish these effects from one another by decomposing a time series into different timescales. Furthermore, wavelet methods are localized in time, so that they can easily identify nonstationary events, such as sudden regime shifts and transient shocks to a system. Figure 1.6 illustrates such a decomposition by separating the intraday variations of volatility from high-frequency data sampled at 20-min frequency and looking at the volatility at a scale of one day. Here, it is evident that the volatility burst during the first half of the sample is not an intraday phenomenon. More applications of multiresolution analysis for separating timescales may be found in Section
