Financial Instrument Pricing Using C++

About this ebook

An integrated guide to C++ and computational finance

This complete guide to C++ and computational finance is a follow-up and major extension to Daniel J. Duffy's 2004 edition of Financial Instrument Pricing Using C++. Both C++ and computational finance have evolved and changed dramatically in the last ten years and this book documents these improvements. Duffy focuses on these developments and the advantages for the quant developer by:

  • Delving into a detailed account of the new C++11 standard and its applicability to computational finance.
  • Using de-facto standard libraries, such as Boost and Eigen to improve developer productivity.
  • Developing multiparadigm software using the object-oriented, generic, and functional programming styles.
  • Designing flexible numerical algorithms: modern numerical methods and multiparadigm design patterns.
  • Providing a detailed explanation of the Finite Difference Methods through six chapters, including new developments such as ADE, Method of Lines (MOL), and Uncertain Volatility Models.
  • Developing applications, from financial model to algorithmic design and code, through a coherent approach.
  • Generating interoperability with Excel add-ins, C#, and C++/CLI.
  • Using random number generation in C++11 and Monte Carlo simulation.

Duffy adopted a spiral model approach while writing each chapter of Financial Instrument Pricing Using C++ 2e: analyse a little, design a little, and code a little. Each cycle ends with a working prototype in C++ and shows how a given algorithm or numerical method works. Additionally, each chapter contains non-trivial exercises and projects that discuss improvements and extensions to the material.

This book is for designers and application developers in computational finance, and assumes the reader has some fundamental experience of C++ and derivatives pricing.

HOW TO RECEIVE THE SOURCE CODE

Once you have purchased a copy of the book please send an email to the author dduffyATdatasim.nl requesting your personal and non-transferable copy of the source code. Proof of purchase is needed. The subject of the mail should be “C++ Book Source Code Request”.  You will receive a reply with a zip file attachment.

Language: English
Publisher: Wiley
Release date: Sep 5, 2018
ISBN: 9781119170488
    Book preview

    Financial Instrument Pricing Using C++ - Daniel J. Duffy

    CHAPTER 1

    A Tour of C++ and Environs

    riverrun, past Eve and Adam's, from swerve of shore to bend of bay, brings us by a commodius vicus of recirculation back to Howth Castle and Environs

    —Joyce (1939)

    1.1 Introduction and Objectives

    This book is the second edition of Financial Instrument Pricing Using C++, also written by the author (Duffy, 2004B). The most important reason for writing this hands-on book is to reflect the many changes and improvements to the C++ language, in particular due to the announcement of the new standard C++11 (and to a lesser extent C++14 and C++17). It feels like a new language compared to C++03 and in a sense it is. First, C++11 improves and extends the syntax of C++03. Second, it has become a programming language that supports the functional programming model in addition to the object-oriented and generic programming models.

    We apply modern C++ to design and implement applications in computational finance, in particular option pricing problems using partial differential equation (PDE)/finite difference method (FDM), Monte Carlo and lattice models. We show the benefits of using C++11 compared to similar solutions in C++03. The resulting code tends to be more maintainable and extendible, especially if the software system has been properly designed. We recommend spending some time on designing the software system before jumping into code. To this end we include a defined process that takes a problem description, designs a solution and implements it in such a way that the resulting product satisfies the requirements and is delivered on time and within budget.

    This book is a detailed exposition of the language features in C++, how to use these features and how to design applications in computational finance. We discuss modern numerical methods to price plain and American options and the book is written in a hands-on, step-by-step fashion.

    1.2 What is C++?

    C++ is a general-purpose systems programming language that was originally designed as an extension to the C programming language. Its original name was ‘C with classes' and its object-oriented roots can be traced to the programming language Simula, which was one of the first object-oriented languages. C++ was standardised by the International Organization for Standardization (ISO) in 1998 (the C++98 standard, with a minor revision in 2003 known as C++03) and C++14 is the standard at the moment of writing. C++14 can be seen as a minor extension to C++11, which in turn is a major update to the language.

    C++ was designed primarily for applications in which performance, efficiency and flexibility play a vital role. In this sense it is a systems programming language and early applications in the 1990s were in telecommunications, embedded systems, medical devices and Computer Aided Design (CAD) as well as first-generation option pricing risk management systems in computational finance. The rise in popularity continued well into the late 1990s as major vendors such as Microsoft, Sun and IBM began to endorse object-oriented technology in general and C++ in particular. It was also in this period that the Java programming language appeared which in time became a competitor to C++.

    C++ remains one of the most important programming languages at the moment of writing. It is evolving to support new hardware such as multicore processors, GPUs (graphics processing units) and heterogeneous computing environments. It also has a number of mathematical libraries that are useful in computational finance applications.

    1.3 C++ As a Multiparadigm Programming Language

    We give an overview of the programming paradigms that C++ supports. In general, a programming paradigm is a way to classify programming languages according to the style of computer programming. Features of various programming languages determine which programming paradigms they belong to. C++ is a multiparadigm programming language because it supports the following styles:

    Procedural: organises code around functions, as typically seen in programs written in C, FORTRAN and COBOL. The style is based on structured programming in which a function or program is decomposed into simpler functions.

    Object-oriented: organises code around classes. A class is an abstract entity that encapsulates functions and data into a logical unit. We instantiate a class to produce objects. Furthermore, classes can be grouped into hierarchies. It is probably safe to say that this style is the most popular one in the C++ community.

    Generic/template: templates are a feature of C++ that allow functions and classes to operate with generic types. A function or class can then work on different data types.

    Functional: treats computation as the evaluation of mathematical functions. It is a declarative programming paradigm; this means that programming is done with expressions and declarations instead of statements. The output value of a function depends only on its input arguments.

    The generic programming style is becoming more important and pronounced in C++, possibly at the expense of the traditional object-oriented model which is based on class hierarchies and subtype (dynamic) polymorphism. Template code tends to perform better at run-time while many errors are caught at compile-time, in contrast to object-oriented code where the errors tend to be caught by the linker or even at run-time.
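
    As a small illustration of the generic style (our own sketch; the names are not from the book), the following function template works for any type that supports the required operation, and misuse is rejected at compile time rather than at run time:

    #include <iostream>
    #include <string>

    // A generic 'Sum' that works for any type supporting operator +
    template <typename T>
    T Sum(const T& a, const T& b)
    {
        return a + b;   // requirement on T is checked at compile time
    }

    int main()
    {
        std::cout << Sum(1, 2) << '\n';                                 // 3
        std::cout << Sum(std::string("C+"), std::string("+")) << '\n';  // C++
        // Sum(1, std::string("x")); // does not compile: T cannot be deduced consistently
        return 0;
    }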

    The most recent style that C++ has (some) support for is functional programming. This style predates both structured and object-oriented programming. Functional programming has its origins in lambda calculus, a formal system developed by Alonzo Church in the 1930s to investigate computability, function definition, function application and recursion. Many functional programming languages can be viewed as elaborations on the lambda calculus. C++ supports the notion of lambda functions. A lambda function in C++ is an unnamed function, but it has all the characteristics of a normal function. Here is an example of defining a stored lambda function (which we can define in place in code) and then calling it as a normal function:

    // TestLambda101.cpp
    //
    // Simple example of a lambda function
    //
    // (C) Datasim Education BV 2018
    //

    #include <iostream>
    #include <string>

    int main()
    {
        // Captured variable
        std::string cVar("Hello");

        // Stored lambda function, with captured variable
        auto hello = [&cVar](const std::string& s)
        { // Return type automatically deduced
            std::cout << cVar << " " << s << '\n';
        };

        // Call the stored lambda function
        hello(std::string("C"));
        hello(std::string("C++"));

        return 0;
    }

    In this case we see that the lambda function has a formal input string argument and it uses a captured variable cVar. Lambda functions are simple but powerful and we shall show how they can be used in computational finance.
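
    As a preview of how we use lambdas later in the book (the payoff and the data below are our own illustration, not code from a later chapter), a lambda can capture option parameters and be handed to an STL algorithm:

    #include <algorithm>
    #include <iostream>
    #include <vector>

    int main()
    {
        double K = 100.0;   // strike (illustrative value)

        // Call payoff as a stored lambda capturing the strike
        auto payoff = [K](double S) { return std::max(S - K, 0.0); };

        std::vector<double> spots{ 90.0, 100.0, 110.0, 120.0 };
        std::vector<double> payoffs(spots.size());

        // Apply the payoff to each spot price
        std::transform(spots.begin(), spots.end(), payoffs.begin(), payoff);

        for (double p : payoffs) std::cout << p << ' ';   // 0 0 10 20
        std::cout << '\n';

        return 0;
    }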

    C++11 is a major improvement on C++03 and it has a number of features that facilitate the design of software systems based on a combination of Structured Analysis and object-oriented technology. In general, we have a defined process to decompose a system into loosely coupled subsystems (Duffy, 2004). We then implement each subsystem in C++11. We discuss this process in detail in this book.

    1.4 The Structure and Contents of this Book: Overview

    This book examines C++ from a number of perspectives. In this sense it differs from other C++ literature because it discusses the full software lifecycle, starting with the problem description and eventually producing a working C++ program. In this book the topics are based on numerical analysis and its applications to computational finance (in particular, option pricing). In order to design and implement maintainable and efficient software systems we discuss each of the following building blocks in detail:

    A1: The new and improved syntax and language features in C++.

    A2: Integrating object-oriented, generic and functional programming styles in C++ code.

    A3: Replacing and upgrading the traditional Gang-of-Four software design patterns to fit into a multiparadigm design methodology.

    A4: Analysing and designing large and complex software systems using a combination of top-down system decomposition and bottom-up object assembly.

    A5: When writing applications, determining how much of the features in A1, A2, A3 and A4 to use.

    The chapters can be categorised into those that deal with modern C++ syntax and language features, those that focus on system design and finally those chapters that discuss applications. In general, the first ten chapters introduce new language features. Chapters 11 to 19 focus on using C++ to create numerical libraries, visualisation software in Excel and lattice option pricing code. Chapters 20 to 29 are devoted to the finite difference method on the one hand and to multithreading and parallel processing on the other hand. The last three chapters of the book deal with Monte Carlo methods. For easy reference, we give a one-line summary of each chapter in the book:

    Chapter 2: Smart pointers, move semantics, r-value references.

    Chapter 3: All kinds of function types; lambda functions, std::bind, functional programming fundamentals.

    Chapter 4: Advanced templates, variadic templates, decltype, template metaprogramming.

    Chapter 5: Tuples A–Z and their applications.

    Chapter 6: Type traits and compile-time introspection of template types.

    Chapter 7: Fundamental C++ syntax improvements.

    Chapter 8: IEEE 754 standard: operations on floating-point types.

    Chapter 9: A defined process to decompose systems into software components.

    Chapter 10: Useful data types: static and dynamic bitsets, fractions, date and time, fixed-sized arrays, matrices, matrix solvers.

    Chapter 11: Fundamental software design and data structures for lattice models.

    Chapter 12: Option pricing with lattice models. Both plain and early-exercise cases are considered.

    Chapter 13: Essential numerical linear algebra and cubic spline interpolation.

    Chapter 14: A C++ package to visualise data in Excel (for example, a matrix or array of option prices from a finite difference solver). This package also allows us to use Excel for simple data storage.

    Chapter 15: Univariate statistical distributions in C++ and Boost. We also discuss some applications.

    Chapter 16: The different ways to compute the bivariate cumulative normal (BVN) distribution accurately and efficiently using the Genz algorithm and by solving a hyperbolic PDE. Applications to computing the analytic solution of two-factor asset option pricing problems are given.

    Chapter 17: STL algorithms A–Z. Part I.

    Chapter 18: STL algorithms A–Z. Part II.

    Chapter 19: The solution of nonlinear equations and optimisation. The scope is restricted to the univariate case.

    Chapter 20: A mathematical background to convection–diffusion–reaction and Black–Scholes PDEs.

    Chapter 21: A software framework for the Black–Scholes PDE using the finite difference method.

    Chapter 22: Extending the functionality of the framework in Chapter 21; computing option sensitivities; an analysis of traditional software design patterns. We also discuss opportunities to upgrade software patterns to their multiparadigm extensions.

    Chapter 23: Path-dependent option problems using the finite difference method.

    Chapter 24: Ordinary differential equations (ODEs); theory and numerical approximations.

    Chapter 25: The method of lines (MOL) for PDEs.

    Chapter 26: Random number generation; some numerical linear algebra solvers.

    Chapter 27: Interoperability between ISO C++ and the Microsoft .NET software framework.

    Chapter 28: C++ Concurrency: threads.

    Chapter 29: C++ Concurrency: tasks.

    Chapter 30: Introduction to the Parallel Patterns Library (PPL).

    Chapter 31: Single-threaded Monte Carlo simulation.

    Chapter 32: Multithreaded Monte Carlo simulation.

    Appendix 1: Multiprecision data types in C++.

    Appendix 2: Computing implied volatility.

    This is quite a list of topics. The first ten chapters are essential reading as they lay the foundation for the rest of the book. In particular, Chapters 2, 3, 4, 5, 7 and 8 introduce the most important syntax and language features. Chapters 11 to 19 are more or less independent of each other and we recommend that you read Chapter 9 before embarking on Chapters 11, 12 and 19. Chapters 17 and 18 discuss STL algorithms in great detail. Chapters 20 to 25 are devoted to PDEs and their numerical approximation using the finite difference method. They should be read sequentially. The same advice holds for Chapters 28 to 30 and Chapters 31 to 32.

    We have put some effort into creating exercises for each chapter. Reading them and understanding their intent is crucial in our opinion. Even better, actually programming these exercises is proof that you really understand the material.

    1.5 A Tour of C++11: Black–Scholes and Environs

    Since this is a hands-on book we introduce a simple and relevant example to show some of the new features in C++. It is a kind of preview or trailer. In particular, we discuss the Black–Scholes option pricing formula and its sensitivities. We focus on the analytical solutions for stock options, futures contracts, futures options and currency options (see Haug, 2007). The approach that we take in this section is similar to how mathematicians solve problems. We quote the famous mathematician Paul Halmos:

    …the source of all great mathematics is the special case, the concrete example. It is frequent in mathematics that every instance of a concept of generality is, in essence, the same as a small and concrete special case.

    We now describe a mini-system that mirrors many of the design techniques and C++ language features that we will discuss in the other 31 chapters of this book. Of course, it goes without saying that we could implement this problem in a few lines of C++ code, but the point of the exercise is to trace the system lifecycle from beginning to end by doing justice to each stage in the software process, no matter how small these stages are.

    We use the following data type:

    using value_type = double;

    1.5.1 System Architecture

    This is the first stage in which we scope the problem (‘what are we trying to solve?') by defining the system scope and decomposing the system into loosely coupled subsystems each of which has a single major responsibility (Duffy, 2004). The subsystems cooperate to satisfy the system's core process, which is to compute plain call and put option prices and their sensitivities. The architecture is based on a dataflow metaphor in which each subsystem processes input data and produces output data. Data is transferred between subsystems using a plug-and-socket architecture (Leavens and Sitarman, 2000). In general, a system delivers a certain service to other systems. A service has a type and it can be connected to the service of another system if the other service is of dual type. We sometimes say that a service is a plug and the dual service is called a socket.

    We represent the architectural model for this problem by the UML (Unified Modelling Language) component diagram in Figure 1.1. Each system does one job well and it interfaces with other systems by means of plugs and sockets. We first define the data that is exchanged between systems:

    // Option data {K, T, r, sig/v} from Input system
    template <typename T>
        using OptionData = std::tuple<T, T, T, T>;

    // Return type of Algorithm system
    // We compute V, delta and gamma
    template <typename T>
        using ComputedData = std::tuple<T, T, T>;

    Figure 1.1 Context diagram: the Input, BSEngine, Algorithm and Output subsystems are connected by the plugs and sockets I1 (Input to BSEngine), I2 (BSEngine to Algorithm) and I3 (BSEngine to Output).

    We also define the interface to compute option price and sensitivities based on option data. To this end, we use type-safe function pointers:

    // The abstract interface to compute V, delta and gamma
    template <typename T> using IAlgorithm
        = std::function<ComputedData<T> (const OptionData<T>& optData, const T& S)>;

    Having defined the data structures we now need to design the classes in Figure 1.1 that use them. To this end, we describe how to design these classes.

    1.5.2 Detailed Design

    In this case we apply the policy-based design idiom (Alexandrescu, 2001) to model the classes in Figure 1.1. We are not necessarily endorsing this design as being the best one in general, but it does show some of the important design features that we wish to highlight. We model the input and output systems as template parameters of a template class. We use private inheritance and template–template parameters to model this class that we call SUD (System Under Discussion) while we use a signature-based approach to model the algorithms to compute option prices and sensitivities:

    template <typename T, template <typename> class Source,
                          template <typename> class Sink>
    class SUD : private Source<T>, private Sink<T>
    { // System under discussion, in this case for the Black-Scholes equation

    private:
        // Define 'provides'/'requires' interfaces of satellite systems
        using Source<T>::getData;                // Get input
        using Sink<T>::SendData;                 // Produce output
        using Sink<T>::end;                      // End of program

        // Conversion
        IAlgorithm<T> convert;
    public:
        SUD(const IAlgorithm<T>& conversion) : convert(conversion) {}
        void run(const T& S)
        {
            // The main process in the application
            OptionData<T> t1 = getData();        // Source
            ComputedData<T> t2 = convert(t1, S); // Processing
            SendData(t2);                        // Sink

            end();                               // Notification to Sink
        }
    };

    Thus, this class inherits from its source and sink classes and it is composed of an algorithm.

    The member function run() ties in the participating systems to produce the desired output. We should have a clear idea of the data flow in the system.

    1.5.3 Libraries and Algorithms

    Examining Figure 1.1 we see that the Algorithm subsystem computes option prices from option data. It has the same signature as the interface IAlgorithm and this means that we can configure these algorithms with any other callable object (for example, a free function, function object or lambda function) that has the same signature as IAlgorithm. This means that we do not need to create class hierarchies to achieve this level of flexibility. Furthermore, we can switch algorithms at run-time more easily than with traditional object-oriented technology.
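
    For example (a sketch of our own that assumes the OptionData, ComputedData and IAlgorithm aliases of Section 1.5.1), a lambda with the matching signature can be stored in IAlgorithm just as easily as the Processing function objects that we define later:

    // A trivial 'algorithm' with the IAlgorithm<double> signature; it returns
    // the intrinsic value only, but any callable with this signature will do.
    // (Needs <tuple> and <algorithm> in addition to the aliases above.)
    IAlgorithm<double> intrinsic = [](const OptionData<double>& optData, const double& S)
    {
        double K = std::get<0>(optData);
        return std::make_tuple(std::max(S - K, 0.0), 0.0, 0.0);
    };

    // It can then configure the system in the same way as a function object:
    // SUD<double, Input, Output> pricer(intrinsic); pricer.run(60.0);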

    C++ has support for a number of mathematical functions that are useful in computational finance. In this section we introduce the error function that allows us to compute the univariate cumulative normal distribution:

    // Normal variates etc.
    double n(double x)
    {
        const double A = 1.0 / std::sqrt(2.0 * 3.14159265358979323846);
        return A * std::exp(-x*x*0.5);
    }

    // C++11 supports the error function
    auto cndN = [](double x)
        { return 0.5 * (1.0 - std::erf(-x / std::sqrt(2.0))); };

    double N(double x)
    { // The approximation to the cumulative normal distribution

        return cndN(x);
    }
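
    As an aside (our remark, not the book's), C++11 also provides std::erfc; the equivalent formulation below avoids the cancellation that 1.0 - std::erf(...) suffers for large negative x, which matters when pricing far out-of-the-money options:

    // Equivalent formulation using the complementary error function
    double N2(double x)
    {
        return 0.5 * std::erfc(-x / std::sqrt(2.0));
    }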

    We now use these functions to compute the analytical solution of plain call and put option prices and their sensitivities. The aggregated value is placed in a tuple which is a new data type in C++11:

    // Option Pricing; give price+delta+gamma
    template <typename V>
        ComputedData<V> CallValues(const OptionData<V>& optData, const V& S)
    {
        // Extract data
        V K = std::get<0>(optData); V T = std::get<1>(optData);
        V r = std::get<2>(optData); V v = std::get<3>(optData);
        V b = r; // Stock option

        // Common functionality
        V tmp = v * std::sqrt(T);
        V d1 = (std::log(S / K) + (b + (v*v)*0.5) * T) / tmp;
        V d2 = d1 - tmp;

        V t1 = std::exp((b - r)*T); V t2 = std::exp(-r * T);
        V Nd1 = N(d1); V Nd2 = N(d2);

        V price = (S * t1 * Nd1) - (K * t2 * Nd2);
        V delta = t1 * Nd1;
        V gamma = (n(d1) * t1) / (S * tmp);

        return std::make_tuple(price, delta, gamma);
    }

    // Option Pricing; give price+delta+gamma
    template <typename V>
        ComputedData<V> PutValues(const OptionData<V>& optData, const V& S)
    {
        // Extract data
        V K = std::get<0>(optData); V T = std::get<1>(optData);
        V r = std::get<2>(optData); V v = std::get<3>(optData);
        V b = r; // Stock option

        // Common functionality
        V tmp = v * std::sqrt(T);
        V d1 = (std::log(S / K) + (b + (v*v)*0.5) * T) / tmp;
        V d2 = d1 - tmp;

        V t1 = std::exp((b - r)*T); V t2 = std::exp(-r * T);
        V Nmd2 = N(-d2); V Nmd1 = N(-d1);

        V price = (K * t2 * Nmd2) - (S * t1 * Nmd1);
        V delta = t1 * (Nmd1 - 1.0);
        V gamma = (n(d1) * t1) / (S * tmp);

        return std::make_tuple(price, delta, gamma);
    }

    We see how useful tuples are as return types of functions. This is a more efficient solution than creating a separate function for each of an option's price, delta and gamma.
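
    For completeness (a small sketch of our own, to be placed inside a function), the caller can unpack such a tuple in one statement with std::tie, or read individual elements with std::get:

    // For example, a (price, delta, gamma) tuple as returned by CallValues
    std::tuple<double, double, double> result = std::make_tuple(2.13337, 0.372483, 0.0420428);

    double price, delta, gamma;
    std::tie(price, delta, gamma) = result;                  // unpack all three at once

    std::cout << "delta: " << std::get<1>(result) << '\n';   // element-wise access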

    1.5.4 Configuration and Execution

    We now discuss how to configure the objects and interfaces in Figure 1.1. This usually takes place by either creating the needed objects directly in the body of main() or by outsourcing this process to creational design patterns and builders (see GOF, 1995). In general, we need to choose what we want. As an example, we consider the following hard-coded source and sink classes:

    template <typename T> class Input
    {
    public:

        static OptionData<T> getData()
        { // Hard-coded option data

            T K = 65.0; T expiration = 0.25;
            T r = 0.08; T v = 0.3;
            OptionData<T> optData(K, expiration, r, v);

            return optData;
        }
    };

    template <typename T> class Output
    {
    public:

        void SendData(const ComputedData<T>& tup) const
        {
            ThreadSafePrint(tup);
        }

        void end() const
        {
            std::cout << "end" << std::endl;
        }
    };

    Multiple threads can write to the console in a non-deterministic way. For this reason we create a lock on the console if and when we port the single-threaded code to a multithreaded program. The corresponding thread-safe code is:

    template <typename T>
        void ThreadSafePrint(const ComputedData<T>& tup)
    { // Function to avoid garbled output on the console (requires <mutex>)

        static std::mutex my_mutex; // shared by all calls so the lock actually serialises output
        std::lock_guard<std::mutex> guard(my_mutex);
        std::cout << "(" << std::get<0>(tup) << "," << std::get<1>(tup)
                  << "," << std::get<2>(tup) << ")\n";
    }

    We create classes to model calls and puts as follows:

    template <typename T> class Processing
    {
    public:

        ComputedData<T> convert(const OptionData<T>& optData,
                                const T& S) const
        {
            return CallValues(optData, S);
        }

        ComputedData<T> operator () (const OptionData<T>& optData,
                                     const T& S) const
        {
            return CallValues(optData, S);
        }
    };

    template <typename T> class ProcessingII
    {
    public:

        ComputedData<T> convert(const OptionData<T>& optData,
                                const T& S) const
        {
            return PutValues(optData, S);
        }

        ComputedData<T> operator () (const OptionData<T>& optData,
                                     const T& S) const
        {
            return PutValues(optData, S);
        }
    };

    Having created the objects that we need we are now in a position to run the application:

    Processing<value_type> converter;

    // Calls
    SUD<value_type, Input, Output> callPricer(converter);
    value_type S = 60.0;
    callPricer.run(S);

    // Puts
    ProcessingII<value_type> converter2;
    SUD<value_type, Input, Output> putPricer(converter2);
    value_type S2 = 60.0;
    putPricer.run(S2);

    The output in this case is:

    (2.13337,0.372483,0.0420428)

    end

    (5.84628,-0.372483,0.0420428)

    end

    1.6 Parallel Programming in C++ and Parallel C++ Libraries

    The C++ Concurrency library supports both multithreading and multitasking, that is creating programs that are executed by multiple independent threads of control. Multithreading is pre-emptive in the sense that the scheduler allocates a fixed amount of time (a quantum) to each thread after which time the thread reverts to sleep or to wait-to-join mode. The library also supports the creation of programs and algorithms by decomposing them into components that can potentially run in parallel with little interaction between them. In general terms, potential or exploitable concurrency involves our being able to structure code to permit a problem's subproblems to run on multiple processors. Each subproblem is implemented by a task. A task (Quinn, 2004) is a program in local memory in combination with a collection of I/O ports. Tasks send data to other tasks through their output ports and they receive data from other tasks through their input ports.

    We take a simple example. In this case we parallelise the code in Section 1.5.4. Both of the following solutions are special cases of a more general fork-join idiom in which a single (main) thread creates two child threads. Each child thread executes code independently of the other child threads. Since the shared data is read-only in this special case there is no danger of non-deterministic behaviour. For both solutions the main thread or task must wait on its children to complete before it can proceed.

    We now describe the solution using C++ Concurrency. We first encapsulate the algorithms in stored lambda functions:

    // Parallel execution
    auto fn1 = [&converter](value_type S)
    {
        SUD<value_type, Input, Output> callPricer(converter);
        callPricer.run(S);
    };

    auto fn2 = [&converter2](value_type S)
    {
        SUD<value_type, Input, Output> putPricer(converter2);
        putPricer.run(S);
    };

    The stock value is:

    value_type stock = 60.0;

    The solution using C++ threads is:

    // Threads

    std::thread t1(fn1, stock);

    std::thread t2(fn2, stock);

     

    // Wait on threads to complete

    t1.join(); t2.join();

    For the task-based solution we use C++ asynchronous futures:

    // Asynchronous tasks
    std::future<void> task1(std::async(fn1, stock));
    std::future<void> task2(std::async(fn2, stock));

    // Wait on tasks to complete
    task1.wait(); task2.wait();

    // Get results from tasks (here void; get() also rethrows any stored exception)
    task1.get(); task2.get();

    Our final example is to show how to parallelise this code using the OpenMP library (Chapman, Jost and Van der Pas, 2008). We do not discuss this library in this book, but we recommend it as a good way of learning how to write multithreaded applications before moving to C++ Concurrency. In this case we create an array of threads and we execute them using loop-level parallelism:

    // OMP solution
    std::vector<std::function<void (value_type)>> tGroupFunctions
                = { fn1, fn2 };

    value_type stock = 60.0;

    // Note: Visual C++ implements OpenMP 2.0, which requires a signed loop index
    #pragma omp parallel for
    for (int i = 0; i < static_cast<int>(tGroupFunctions.size()); ++i)
    {
        tGroupFunctions[i](stock);
    }

    The output produced from this code is:

    (2.13337,0.372483,0.0420428)

    end

    (5.84628,‐0.372483,0.0420428)

    end

    (2.13337,0.372483,0.0420428)

    end

    (5.84628,‐0.372483,0.0420428)

    end

    (2.13337,0.372483,0.0420428)

    end

    (5.84628,‐0.372483,0.0420428)

    end

    In general, writing parallel applications using tasks is easier than using threads, because tasks hide many of the tricky synchronisation and notification use cases that arise when using threads. It is also possible to apply the system decomposition technique (Duffy, 2004) to help create task-dependency graphs that we then implement in C++.
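
    One concrete reason why tasks are convenient (our own illustration, using only standard C++): a std::future can carry a task's return value, so results flow back to the caller without shared state or explicit locking:

    #include <algorithm>
    #include <future>
    #include <iostream>

    int main()
    {
        // Launch a computation as a task; the result travels back via the future
        auto payoffTask = std::async(std::launch::async,
                              [](double S) { return std::max(S - 100.0, 0.0); }, 110.0);

        // ... the main thread is free to do other work here ...

        double result = payoffTask.get();   // blocks until the task has finished
        std::cout << result << '\n';        // 10

        return 0;
    }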

    1.7 Writing C++ Applications; Where and How to Start?

    This book centres around the modern multiparadigm language features in C++11 and C++14. We discuss most of the essential language features in the first ten chapters. Later chapters introduce a number of topics that build on these first ten chapters. These are shown in Figure 1.2 in the form of a concept map. The main goal of this book is to develop tools to show how to design and implement software systems for computational finance applications. Based on this remark we show in Figure 1.2 how we will achieve the goal by displaying the main concepts and their relationship with C++. In general, we use standard C++ in most of the examples unless otherwise mentioned. The book is self-contained in the sense that we prefer to use standard libraries (such as those in C++ as well as Boost and Quantlib) rather than proprietary libraries. The code for the numerical methods in the book is self-contained and has been developed by the author.

    Figure 1.2 C++ and environs: a concept map relating C++ to applications (Monte Carlo and PDE), numerical methods (e.g. Cholesky), Boost C++ (e.g. uBLAS), parallel computing (threads and tasks), design (patterns and top-down decomposition) and .NET interop (C++/CLI).

    We give a short description of the concepts in Figure 1.2 and their relationship with the applications in this book:

    A: We develop code for well-known numerical methods such as numerical differentiation, interpolation, numerical quadrature, matrix algebra, finding the zeroes of nonlinear equations and optimisation problems.

    B: We use the Boost C++ libraries when we need functionality that is not (yet) in C++. Much of the new functionality in C++ had its roots in Boost. In general, the Boost libraries tend to be reasonably well documented. We recommend the Boost Math Toolkit for numerical applications. We also use the Boost odeint library to numerically solve systems of ordinary differential equations (ODEs).

    C: This book is unique in our opinion because it introduces a defined process to analyse, design and implement any kind of software system in a step-by-step fashion. The process is based on the author's experience as a requirements analyst and software architect in several application domains (see Duffy, 2004 where these domains have been documented). The process is a fusion of Structured Analysis (De Marco, 1978) and the object-oriented paradigm. We apply the process to creating design blueprints for FDM and Monte Carlo applications.

    D: Modern laptop and desktop computers have multiple processors on board. This opens the door to developing multithreaded and parallel code to increase the speedup of programs. C++ offers support in the C++ Concurrency library.

    E: We decided to include a chapter on the .NET language C++/CLI that bridges the (native/ISO) C++ and .NET worlds. C++ is and remains a systems programming language which makes it suitable for certain kinds of applications. The C++/CLI language then allows us to create C++ applications that can call .NET functionality on the one hand while it is possible to create .NET wrappers for native C++ classes and call them from C# code. This approach promotes code reusability and helps C# developers because they do not have to learn C++. All they need is to wrap C++ code in .NET wrappers.

    F: In this book, we design option pricing software using lattice methods, Monte Carlo and PDE/FDM methods. We propose a range of numerical methods, design patterns and design styles. Furthermore, we use C++ Concurrency to parallelise the corresponding algorithms.

    1.8 For Whom is this Book Intended?

    C++ is probably one of the most difficult programming languages to master. The learning curve is much steeper than that of other object-oriented languages such as C# and Java, for example. The only real way to learn C++ is to program in C++ and it is for this reason we say that you need to have a number of years of solid C++ experience writing code and applications. In other words, this is not a beginner's book to learn C++.

    The primary focus of this book is to design and implement maintainable and extendible applications and it is suitable for front-office and middle-office quant developers who use C++ in their daily work. The book is also useful for software architects and project managers who wish to understand and manage software projects.

    This book is also useful for MSc students in finance. We would hope that the step-by-step approach will help them structure their theses.

    The first ten chapters can be read by a range of C++ developers because the topics are application independent and to our knowledge this is the first book on the new features in C++11. Chapters 20 to 25 could also be of interest to mathematical physicists.

    1.9 Next-Generation Design and Design Patterns in C++

    The approach taken in this book is special in the sense that we find it important to have an idea of the software system that we wish to build before developing the code to implement it. This is in the interest of developer productivity. We need design blueprints that describe the system at a level higher than raw C++ code. Computer science is not yet an engineering discipline and there are few standardised design processes and standards to help developers analyse, design and implement C++ applications. One possible exception is the famous software design patterns that were first published in GOF (1995) although, when used on their own, they do not ensure that the software system will be stable.

    Design patterns have become very popular in the last 20 years as witnessed by the number of books devoted to them for object-oriented languages such as Java, C#, C++ and others. Neither the structure nor the number of patterns has changed much in the last 20 years, as most of the literature seems to imitate the 23 patterns in GOF (1995). In a sense, these patterns were invented when C++ was still in its infancy and when it only supported the traditional object-oriented technology based on subtype polymorphism (virtual functions) and class hierarchies. Our basic premise is that the GOF patterns represent knowledge that has not adapted to improvements in software, hardware and development methods.

    The GOF design patterns are based on the object model which means that the patterns are implemented using objects, classes and class hierarchies in combination with subtype polymorphism. This means that an abstract requirement regarding the flexibility of a software design must be turned into an explicit data model by introducing a proxy for a non-computable concept. This process is called reification and it allows any aspect of a programming language to be expressed in the language itself.

    It is possible to upgrade the GOF patterns in a number of ways:

    S1: Keep and use the patterns in their current form without change.

    S2: Improve the patterns by using new improved C++ functionality such as shared pointers and the syntax that we discuss in the first ten chapters of this book.

    S3: Re-engineer those patterns that can be implemented more easily and correctly using the generic and functional programming models, for example.

    S4: Do not (yet) use design patterns but instead postpone their use in the design trajectory for as long as possible. Instead, we use the system decomposition techniques of Chapter 9 and we hope to achieve the same (and improved) levels of flexibility as with GOF patterns but then by other means, specifically by defined standardised interfaces between components.

    1.10 Some Useful Guidelines and Developer Folklore

    We conclude this chapter with some guidelines on the estimation, planning and management of software projects. The size of a project can range from a one-person software endeavour lasting three months to a 30-person application with a lifetime of five years, for example. Some of the principles underlying our design approach can be summarised by the steps that György Pólya described when solving a mathematical problem (Pólya, 1990):

    First, you have to understand the problem.

    After understanding, then make a plan.

    Carry out the plan.

    Look back at your work. How could it be better?

    We see these steps as being applicable to the software development process in general and to the creation of software systems for computational finance in particular. Getting each step right saves time and money. In short, we take the following tactic regarding software projects: get it working, then get it right and only then get it optimised (in that order). In the current context we translate these steps into a defined software process (as explained in Chapter 9) in order to avoid some scary outcomes, for example:

    A Big Ball of Mud is a haphazardly structured, sprawling, sloppy, duct-tape-and-baling-wire, spaghetti-code jungle. These systems show unmistakable signs of unregulated growth, and repeated, expedient repair. Information is shared among distant elements of the system, often to the point where nearly all the important information becomes global or duplicated. The overall structure of the system may never have been well defined. If it was, it may have eroded beyond recognition. Programmers with a shred of architectural sensibility shun these quagmires. Only those who are unconcerned about architecture, and, perhaps, are comfortable with the inertia of the day-to-day chore of patching the holes in these failing dikes, are content to work on such systems.

    We use the following general principles when developing software systems:

    Understand the problem as soon as possible. Can you explain the problem to non-developers?

    Can you develop a software prototype in a few days?

    Scope the problem by identifying the boundaries of the software system.

    Decompose the system into loosely coupled subsystems.

    Use a suitable combination of the object-oriented, generic and functional programming styles.

    Each developer has her own way to design software systems. There is no silver bullet (Brooks, 1995).

    1.11 About the Author

    Daniel J. Duffy is the author of both the first and second editions of Financial Instrument Pricing Using C++. He started his company Datasim in 1987 to promote C++ as a new object-oriented language for developing applications. He played the roles of developer, architect and requirements analyst to help clients design and analyse software systems in areas such as CAD, process control and hardware–software systems, logistics, holography (optical technology) and computational finance. He used a combination of top-down functional decomposition and bottom-up object-oriented programming techniques to create stable and extendible applications (for a discussion, see Duffy, 2004 where we have grouped applications into domain categories). He also worked on engineering applications in oil and gas and semiconductor industries using a range of numerical methods (for example, the finite element method (FEM)).

    Daniel Duffy has BA, MSc and PhD degrees in pure and applied mathematics (from Trinity College, the University of Dublin) and has been active in promoting PDE/FDM for applications in computational finance. He was responsible for the introduction of the Fractional Step (Soviet Splitting) method and the Alternating Direction Explicit (ADE) method in computational finance.

    He is the originator of two popular C++ online courses on www.quantnet.com in cooperation with Quantnet LLC and Baruch College (CUNY), NYC. He also trains quant developers around the world. He can be contacted at: dduffy@datasim.nl. The official Datasim site is www.datasimfinancial.com.

    1.12 The Source Code and Getting the Source Code

    The C++ code in this book is based on the C++11 standard (and later versions). The only exception is the code in Chapter 14 where we introduce the Excel Driver library and in Chapter 27 where we discuss interfacing between C++ and Microsoft's .NET Framework. Furthermore, the code that is presented in each chapter is machine readable and has been tested beforehand. Our development environment is Visual Studio C++ which supports all the C++ functionality that we present in this book. We have not tested the code using other compilers but we would not expect major issues. It is the responsibility of the reader to know how to install the compiler, Boost C++ libraries and Quantlib.

    Regarding copyright, legitimate owners of the book are entitled to the source code which can be used for personal use provided you do not remove the copyright notice in the source code (C)Datasim Education BV 2018.

    For further queries concerning training and support, please contact me directly at dduffy@datasim.nl. The corresponding website is www.datasim.nl.

    CHAPTER 2

    New and Improved C++ Fundamentals

    2.1 Introduction and Objectives

    In this chapter we introduce new syntax and functionality in C++ that adds value to the language as a worthy successor to C++03. We discuss the features that promote the run-time efficiency, reliability and usability of code, namely:

    Automatic memory management: avoiding memory disasters that the use of raw pointers can lead to.

    Move semantics: allowing the compiler to replace expensive copying operations with less expensive move operations. In some cases, move semantics are the only option because some classes do not support copy construction at all; typical examples are smart pointers and the classes for multithreading and multitasking (a short sketch follows this list).

    New fundamental data types.
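
    The following minimal sketch (our own) shows both points: moving a vector transfers its internal buffer instead of copying the elements, and a std::thread, which has no copy constructor, can only be handed over by moving:

    #include <iostream>
    #include <thread>
    #include <utility>
    #include <vector>

    int main()
    {
        std::vector<double> source(1000000, 1.0);

        // Move: the internal buffer is transferred, no element-wise copy
        std::vector<double> target = std::move(source);
        std::cout << "source size after move: " << source.size() << '\n'; // typically 0

        // std::thread is movable but not copyable
        std::thread t([] { std::cout << "worker thread\n"; });
        std::thread t2 = std::move(t);   // transfer ownership of the thread of execution
        t2.join();

        return 0;
    }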

    The features in this chapter are crucial as they represent best practices when writing C++ code. For this reason we introduce these features early on in the book. We also strongly recommend that you do the exercises in this chapter to become acquainted with the new features as soon as possible.

    2.2 The C++ Smart Pointers

    In this chapter we first introduce (as background) the Boost Smart Pointer library that makes object lifecycle management easier when compared to using the new and delete operators in C++. In particular, the responsibility for removing objects from memory is taken from the developer's shoulders. To this end, we discuss a number of classes that improve the reliability of C++ code. The two most important classes are:

    Scoped pointer: ensures proper deletion of dynamically allocated objects. These are objects having a short lifetime (for example, factory objects) and they are typically created and deleted in a single scope. In other words, these objects are not needed outside the code block in which they are defined and used.

    Shared pointer: ensures that a dynamically allocated object is deleted only when no other objects are referencing it. The shared pointer class eliminates the need to write code to explicitly control the lifetime of objects. It enables shared ownership of objects.

    The four other classes in Boost are:

    Scoped array: ensures proper deletion of dynamically allocated arrays in a scope.

    Shared array: enables shared ownership of arrays. It is similar to the shared pointer class except that it is used with arrays instead of with single objects.

    Weak pointer: this is an observer of a shared pointer. It does not interfere with the ownership of the object that the shared pointer shares. Its main use is to avoid dangling pointers, because a weak pointer cannot hold a dangling pointer. For completeness we note that a valid weak pointer can be promoted to a strong (shared) pointer, and hence the lifetime of the managed resource might be extended by that promotion (see the sketch after this list).

    Intrusive pointer: this is a special kind of smart pointer and is used in code that has already been written with an internal reference counter. You can write your own smart pointer class when you are not happy with the performance of shared pointers or when the software that you are using requires it.
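
    Below is a minimal sketch (our own) of the weak pointer promotion mentioned above: lock() yields a shared pointer if the resource is still alive, and an empty shared pointer otherwise:

    #include <iostream>
    #include <memory>

    int main()
    {
        std::shared_ptr<double> sp = std::make_shared<double>(3.1415);
        std::weak_ptr<double> wp = sp;                    // observe, do not own

        if (std::shared_ptr<double> locked = wp.lock())   // promotion to a shared pointer
        {
            std::cout << "still alive: " << *locked << '\n';
        }   // 'locked' released here; the resource lifetime was extended while it existed

        sp.reset();                                       // last owner gone, resource destroyed
        std::cout << "expired? " << std::boolalpha << wp.expired() << '\n';   // true

        return 0;
    }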

    2.2.1 An Introduction to Memory Management

    In this section we give a short overview of object lifecycles. In particular, we are interested in heap-based memory allocation of objects. Such memory allocation and deallocation in C++ is the responsibility of the developer. But knowing when memory is no longer needed is not easy to determine and this uncertainty can lead to a number of problems:

    Dangling pointers: these occur when an object is deleted or deallocated without modifying the value of the pointer. In this case the pointer continues to point to the location of the deallocated memory.

    Wild pointers: these are pointers that are used before they are initialised.

    Memory leak: in this case the program is unable to release memory that it has acquired. This situation can be caused by a pointer when it goes out of scope. In other words, the dynamically allocated memory is unreachable and lost forever.

    Double free bugs: this refers to the case when we try to delete memory that has already been deleted.

    In order to resolve (or avoid) these problems we have a number of options open to us. We can allow automatic memory management by using garbage collection (GC) that is supported by some languages such as C# and Java. The garbage collector attempts to reclaim memory used by objects that will never be accessed again in an application. Garbage collection is the opposite of manual memory management. There are various kinds of garbage collectors, the most common being tracing garbage collectors. These first determine which objects are reachable (or potentially reachable) and then discard the remaining ‘dead' objects.

    There is also the reference counting technique (used in C++) that stores the number of pointers or handles to a dynamically allocated object. When an object is no longer referenced it will be deleted from memory. Reference counting is a form of garbage collection in which each object contains a count of the number of references to it that are held by other objects. Reference counting can entail frequent updates because the reference count needs to be incremented or decremented when an object is referenced or dereferenced.

    Some of the advantages of reference counting are:

    Objects are reclaimed as soon as they are no longer referenced and then in an incremental fashion without incurring long waits on collection cycles.

    Reference counting is one of the simplest forms of garbage collection to implement.

    It is a useful technique for the management of non-memory resource objects (such as file and database handles).

    Some disadvantages of reference counting are:

    Frequent updates are a source of inefficiency because objects are being continually accessed. Furthermore, each memory-managed object must reserve space for a reference count.

    Some reference-counting algorithms cannot resolve reference cycles (objects that refer directly or indirectly to themselves).
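
    The classic example of such a cycle (a sketch of our own, anticipating the C++ classes discussed below) is two objects that hold shared pointers to each other; neither reference count can reach zero, so neither object is ever destroyed. Replacing one of the links with a weak pointer breaks the cycle:

    #include <iostream>
    #include <memory>

    struct B;   // forward declaration

    struct A
    {
        std::shared_ptr<B> b;
        ~A() { std::cout << "A destroyed\n"; }
    };

    struct B
    {
        std::shared_ptr<A> a;   // making this std::weak_ptr<A> would break the cycle
        ~B() { std::cout << "B destroyed\n"; }
    };

    int main()
    {
        auto a = std::make_shared<A>();
        auto b = std::make_shared<B>();
        a->b = b;
        b->a = a;     // cycle: each object keeps the other alive

        return 0;     // no destructor output: both objects are leaked
    }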

    We now discuss the smart pointer library in C++11. We note that classical garbage collection is not implemented in C++. There is a need in C++ for some kind of automatic (or semi-automatic) memory management mechanism and to this end C++ provides template classes to help developers create reliable and robust code. In general, we use smart pointers in the following situations:

    Avoiding the errors that we discussed in this section.

    Creating objects with well-defined lifetimes.

    Shared ownership of resources.

    Resolving a number of exception-unsafe problems when using raw pointers.

    2.3 Using Smart Pointers in Code

    We discuss three classes of smart pointers in C++. In the next sections we focus on the syntax of each class and we give some simple examples to show what they do.

    2.3.1 Class std::shared_ptr

    This smart pointer class implements the concept of shared ownership. A resource or object (a piece of memory on the heap or a file handle, for example) is shared among a number of shared pointers. Only when the resource is no longer needed is it deleted. This is when the reference count becomes zero.

    We discuss shared pointers in some detail. First, we show how to create empty shared pointers and shared pointers that are coupled to resources. We also show how a shared pointer gives up ownership of one resource and how it becomes owner of another resource. We can see how many shared pointers own a resource by using the member function use_count():

    #include <iostream>
    #include <memory>

    // Handy alias
    template <typename T>
        using SP = std::shared_ptr<T>;
    using value_type = double;

    // Creating shared pointers with default deleters
    SP<value_type> sp1;                           // empty shared ptr
    SP<value_type> sp2(nullptr);                  // empty shared ptr for
                                                  // C++11 nullptr

    SP<value_type> sp3(new value_type(148.413));  // ptr owning raw ptr
    SP<value_type> sp4(sp3);                      // share ownership with sp3
    SP<value_type> sp5(sp4);                      // share ownership with sp4
                                                  // and sp3

    // The number of shared owners
    std::cout << "sp2 shared # " << sp2.use_count() << '\n';
    std::cout << "sp3 shared # " << sp3.use_count() << '\n';
    std::cout << "sp4 shared # " << sp4.use_count() << '\n';

    sp3 = sp2;  // sp3 now shares ownership with sp2;
                // sp3 no longer has ownership of its previous resource
    std::cout << "sp3 shared # " << sp3.use_count() << '\n';
    std::cout << "sp4 shared # " << sp4.use_count() << '\n';

    In the above cases the last owner of the resource is responsible for destroying it, and by default this is achieved by a call to operator delete. We now show how to create shared pointers in combination with a user-defined deleter. This is a useful option when you wish to execute a command just before destroying a resource, for example notifying clients or printing a message. We can implement a deleter using a function object, a lambda function or a stored lambda function (which we shall discuss in Chapter 3); the examples below use a function object, an in-place lambda and a stored lambda function:

    // Memory deleters
    template <typename T>
        struct Deleter
    {
        void operator () (T* t) const
        {
            std::cout << "delete memory from function object\n";
            delete t;
        }
    };

    // Creating shared pointers with user-defined deleters

    // Deleter as function object
    SP<value_type> sp(new value_type(148.413), Deleter<value_type>());

    // Deleter as lambda function
    SP<value_type> sp2(new value_type(148.413), [](value_type* p)
                                                { std::cout << "bye\n";
                                                  delete p; });

    // Stored lambda function as deleter
    auto deleter = [](value_type* p)
                   { std::cout << "bye\n"; delete p; };
    SP<value_type> sp32(new value_type(148.413), deleter);

    Continuing, we now discuss more ways to construct shared pointers. They are:

    std::make_shared<T>: construct an instance of T and wrap it in a shared pointer, using the arguments as the parameter list for the constructor of T.

    std::allocate_shared: construct an instance of T and wrap it in a shared pointer using arguments as the parameter list for the constructor of T. An explicit memory allocator object is one of the arguments to this function.

    In the second case above there is an option to give a memory allocator as one of the arguments (in this case we show examples of both C++ and Boost C++ allocators):

    struct Point2d
    {
        double x, y;
        Point2d() : x(0.0), y(0.0) {}
        Point2d(double xVal, double yVal) : x(xVal), y(yVal) {}
        void print() const { std::cout << "(" << x << "," << y << ")\n"; }
        ~Point2d() { std::cout << "point destroyed\n"; }
    };

    // More efficient ways to construct shared pointers
    auto sp = std::make_shared<int>(42);
    (*sp)++;
    std::cout << "sp: " << *sp << '\n'; // 43

    auto sp2 = std::make_shared<Point2d>(-1.0, 2.0);
    (*sp2).print(); // (-1,2)

    auto sp3 = std::make_shared<Point2d>();
    (*sp3).print(); // (0,0)

    // Shared pointers with explicit allocators
    auto sp4 = std::allocate_shared<int>(std::allocator<int>(), 42);
    (*sp4)++;
    std::cout << "sp4: " << *sp4 << '\n'; // 43

    auto sp5 = std::allocate_shared<Point2d>(std::allocator<Point2d>(), -1.0, 2.0);
    (*sp5).print(); // (-1,2)

    // Use a Boost pool allocator (requires <boost/pool/pool_alloc.hpp>)
    auto sp6 = std::allocate_shared<Point2d>(boost::pool_allocator<Point2d>(),
                                             14.45, 28.45);
    (*sp6).print(); // (14.45,28.45)

    There are four overloaded versions of the function reset() that give up ownership by a shared pointer in some way:

    Give up ownership and reset to an empty shared pointer.

    Give up ownership and reinitialise the pointer (with default and user-defined deleters).

    Give up ownership and reinitialise the pointer using a memory allocator and a user-defined deleter.

    We can determine if a shared pointer sp is the only owner of a resource by calling the predicate unique() (which is semantically equivalent to sp.use_count() == 1). We now give some examples on how to reset a shared pointer:

    // Reset

    std::cout << "Reset\n";
    SP<value_type> sp1(new value_type(148.413));
    SP<value_type> sp2(sp1);
    SP<value_type> sp3(sp2);

    std::cout << "sp3 shared # " << sp3.use_count() << '\n';  // 3

    SP<value_type> sp4(new value_type(42.0));
    SP<value_type> sp5(sp4);

    std::cout << "sp5 shared # " << sp5.use_count() << '\n';  // 2

    sp3.reset();
    std::cout << "sp3 shared # " << sp3.use_count() << '\n';  // 0
    std::cout << "sp2 shared # " << sp2.use_count() << '\n';  // 2

    sp3.reset(new value_type(3.1415));
    std::cout << "sp3 shared # " << sp3.use_count() << '\n';  // 1
    std::cout << "sp2 shared # " << sp2.use_count() << '\n';  // 2

    sp2.reset(new value_type(3.1415), Deleter<value_type>());
    std::cout << "sp2 shared # " << sp2.use_count() << '\n';  // 1

    std::cout << "sp2 sole owner? " << std::boolalpha << sp2.unique() << '\n';

    // true

    2.3.2 Class std::unique_ptr

    Whereas std::shared_ptr allows a resource to be shared among several shared pointers, in the case of std::unique_ptr there is only one transferable owner of a resource. In this case we speak of exclusive or strict ownership. Its main added value is in avoiding resource leaks (for example, missing calls to delete when using raw pointers) and for this reason it can be called an exception-safe pointer. Its main member functions are:

    Constructors (similar to those in std::shared_ptr).

    Assign a unique pointer.

    Release; return a pointer to the resource and release ownership.

    Reset; replace the resource.

    Operator overloading (==, !=, < and other comparison operators).

    The interface is similar to that of std::shared_ptr, which means that most of the code will be easy to understand. Finally, std::unique_ptr supersedes auto_ptr, which was deprecated in C++11 and removed in C++17.

    Our first example entails creating a unique pointer in a scope. Under normal circumstances when the pointer goes out of scope the corresponding resource is cleaned up but in this case we (artificially) throw an exception before the end of the scope. What happens? When we run the code we see that the resulting exception is caught and the resource is automatically destroyed:

    template <typename T, typename D = std::default_delete<T>>
        using UP = std::unique_ptr<T, D>;

    try
    {
        // Unique pointers

        // Stored lambda function as deleter
        auto deleter = [](value_type* p)
                       { std::cout << "bye, bye unique pointer\n"; delete p; };

        // The deleter's type is part of the unique_ptr type
        UP<value_type, decltype(deleter)> sp32(new value_type(148.413), deleter);

        throw -1;
    }
    catch (int& n)
    {
        std::cout << "error but memory is cleaned up\n";
    }

    This code also works when we use shared pointers instead of unique pointers. In C++14 we can create unique pointers using std::make_unique<T>() for non-array types:

    // Other examples with unique pointers

    // More efficient ways to construct unique pointers

    auto up = std::make_unique<int>(42);
    (*up)++;
    std::cout << "up: " << *up << '\n';   // 43

    auto up2 = std::make_unique<Point2d>(-1.0, 2.0);
    (*up2).print();                       // (-1, 2)

    auto up3 = std::make_unique<Point2d>();
    (*up3).print();                       // (0,0)
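    Since ownership is exclusive but transferable, a unique pointer created in this way cannot be copied; it can only be moved. The following is a minimal sketch of our own (not one of the chapter's examples, and assuming C++14 for std::make_unique) showing the transfer of ownership with std::move:

    #include <iostream>
    #include <memory>

    int main()
    {
        auto up1 = std::make_unique<double>(148.413);

        // auto up2 = up1;            // compile-time error: unique_ptr cannot be copied
        auto up2 = std::move(up1);    // ownership transferred from up1 to up2

        std::cout << std::boolalpha << (up1 == nullptr) << '\n';  // true: up1 is now empty
        std::cout << *up2 << '\n';                                // 148.413
    }

    Exactly the same mechanism is used when a unique pointer is returned from a factory function: the resource moves out to the caller without any copying.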

    Finally, we can reset a unique pointer (as we saw with shared pointers) and we can also release ownership and give it back to the caller. We show these functions by way of the following user-defined type that models two-dimensional points:

    struct Point2d
    {
        double x, y;
        Point2d() : x(0.0), y(0.0) {}
        Point2d(double xVal, double yVal) : x(xVal), y(yVal) {}
        void print() const { std::cout << "(" << x << "," << y << ")\n"; }
        ~Point2d() { std::cout << "point destroyed\n"; }
    };

     

    // Reset of unique pointers

    UP<value_type> up1(new value_type(148.413));
    up1.reset();
    assert(up1 == nullptr);
    // std::cout << "reset: " << *up1 << '\n';   // undefined: up1 is now empty

    up1.reset(new value_type(3.1415));
    std::cout << "reset: " << *up1 << '\n';

    // Give ownership back to caller without calling deleter
    std::cout << "Release unique pointer\n";
    auto up = std::make_unique<Point2d>(42.0, 44.5);
    Point2d* fp = up.release();

    assert(up.get() == nullptr);
    std::cout << "No longer owned by unique_ptr...\n";

    (*fp).print();

    delete fp; // Destructor of Point2d called

    2.3.3 std::weak_ptr

    The third smart pointer class holds a non-owning (weak) reference to an object (resource) that is managed by a shared pointer. It must be converted to a shared pointer in order to access the resource. It is a helper class to std::shared_ptr and it is needed when the latter's behaviour does not work as intended, namely:

    Resolving cyclical dependencies between shared pointers: dependencies that occur when two objects refer to each other using shared pointers. We cannot release the objects because each has a use count of 1. It is like a deadlock.

    Situations in which you wish to share an object but you do not own it. In this case we define a reference to a resource and that reference outlives the resource.

    The class std::weak_ptr is used in both of the above cases. Sharing is allowed but ownership is not required. The operations in std::weak_ptr can be characterised as follows:

    Constructors (default, from a shared pointer, from a weak pointer).

    Assignment operators.

    Swap two weak pointers.

    Reset a weak pointer.

    Check if a managed object (resource) has expired.

    Create a shared pointer from a weak pointer by locking it. The shared pointer will share ownership if the weak pointer has not expired.

    Some examples are:

    // Create a default weak pointer
    std::weak_ptr<double> wp;
    std::cout << "Expired wp? " << wp.expired() << '\n';              // true

    // Create a weak pointer from a shared pointer
    std::shared_ptr<double> sp(new double(3.1415));
    std::cout << "Reference count: " << sp.use_count() << std::endl;  // 1

    // Assign a shared pointer to a weak pointer
    wp = sp;
    std::cout << "Reference count: " << sp.use_count() << std::endl;  // 1

    std::weak_ptr<double> wp2(sp);
    std::cout << "Reference count: " << sp.use_count() << std::endl;  // 1

    wp = sp;
    std::shared_ptr<double> sp2(wp);
    std::cout << "Reference count, sp2: " << sp2.use_count() << '\n';       // 2
    std::cout << std::boolalpha << "Expired wp? " << wp.expired() << '\n';  // false

    std::shared_ptr<double> sp3 = wp.lock();
    std::cout << "Reference count: " << sp3.use_count() << std::endl; // 3
    std::cout << "Reference count: " << sp.use_count() << std::endl;  // 3

    // Event notification (Observer) pattern and weak pointers
    std::shared_ptr<double> spA(new double(3.1415));

    std::weak_ptr<double> wA(spA);
    std::weak_ptr<double> wB(spA);

    spA.reset();
    std::cout << "wA expired: " << wA.expired() << std::endl;
    std::cout << "wB expired: " << wB.expired() << std::endl;
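    To make the first use case above (cyclical dependencies) concrete, here is a minimal sketch of our own; the classes A and B are hypothetical and serve only to illustrate how a weak pointer breaks the cycle:

    #include <iostream>
    #include <memory>

    struct B; // forward declaration

    struct A
    {
        std::shared_ptr<B> b;
        ~A() { std::cout << "A destroyed\n"; }
    };

    struct B
    {
        std::weak_ptr<A> a;   // weak: breaks the cycle (a shared_ptr<A> here would keep both objects alive forever)
        ~B() { std::cout << "B destroyed\n"; }
    };

    int main()
    {
        auto a = std::make_shared<A>();
        auto b = std::make_shared<B>();
        a->b = b;
        b->a = a;

        // When a and b go out of scope both destructors run;
        // with std::shared_ptr<A> inside B neither object would ever be destroyed.
    }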

    2.3.4 Should We Use Smart Pointers and When?

    It is clear that smart pointers are a big improvement on raw pointers. Their use promotes the reliability of code in general but at what cost? For example, a shared pointer object is a wrapper for an ordinary pointer as it contains both this pointer and a reference counter that is shared by all shared pointers that refer to the same object. The situation becomes even more complicated if we use weak pointers because they also need another counter. These features may hinder certain compiler optimisations.

    For unique pointers with the default deleter there is no run-time penalty compared to raw pointers. Unique pointers are typically used for factory objects and object engines that create objects in a given scope: once the application objects have been constructed, the factory objects are cleaned up by going out of scope. For problems that cannot be resolved directly in C++11 it is possible to resort to the Boost Smart Pointer library, which has more smart pointer classes than C++11 (see Demming and Duffy, 2010). We note that shared pointers are not thread-safe: the reference count is updated atomically, but concurrent access to the same shared pointer instance from several threads must be synchronised. This has major consequences when porting single-threaded code to multithreaded code. There are functions in C++11 to perform atomic and thread-safe operations on shared pointers, which we discuss in Chapter 28. In short, you need to determine what smart pointers can and cannot do and what the consequences are of using them in code. Issues such as performance and maintainability are central. A special project would be to upgrade legacy code that uses raw pointers to code that uses smart pointers, as sketched below.
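    As a flavour of such an upgrade project, the following before-and-after sketch (our own hypothetical example, not taken from the chapter's code) replaces an owning raw pointer by std::unique_ptr so that an exception can no longer cause a leak:

    #include <memory>
    #include <numeric>
    #include <stdexcept>
    #include <vector>

    // Hypothetical helper: throws for empty input
    double riskyComputation(const std::vector<double>& data)
    {
        if (data.empty()) throw std::runtime_error("no data");
        return std::accumulate(data.begin(), data.end(), 0.0);
    }

    // Legacy style: leaks *p if riskyComputation throws
    double legacyVersion(const std::vector<double>& data)
    {
        double* p = new double(0.0);
        *p = riskyComputation(data);   // if this throws, p is leaked
        double result = *p;
        delete p;
        return result;
    }

    // Upgraded style: the unique_ptr destructor always releases the memory
    double upgradedVersion(const std::vector<double>& data)
    {
        auto p = std::make_unique<double>(0.0);
        *p = riskyComputation(data);   // exception-safe: no leak possible
        return *p;
    }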

    2.4 Extended Examples of Smart Pointers Usage

    In the previous sections we discussed the new smart pointer classes in C++. The next question to answer is how they represent an improvement on raw pointers. To this end, we discuss how to design user-defined assemblies and aggregations containing embedded pointers, and we re-engineer the popular Factory Method design pattern (GOF, 1995), which traditionally uses raw pointers.
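    As a preview of that re-engineering, the following is a minimal sketch of our own; the Payoff and CallPayoff classes are hypothetical stand-ins for the classes developed later, and the point is simply that the factory returns a smart pointer instead of a raw pointer:

    #include <iostream>
    #include <memory>

    // Hypothetical payoff hierarchy used only for this sketch
    struct Payoff
    {
        virtual double operator()(double S) const = 0;
        virtual ~Payoff() = default;
    };

    struct CallPayoff : Payoff
    {
        double K;
        explicit CallPayoff(double strike) : K(strike) {}
        double operator()(double S) const override { return S > K ? S - K : 0.0; }
    };

    // Classic Factory Method returning a raw pointer (the caller must remember to delete)
    Payoff* createPayoffRaw(double strike) { return new CallPayoff(strike); }

    // Re-engineered factory: ownership is explicit and exception-safe
    std::unique_ptr<Payoff> createPayoff(double strike)
    {
        return std::make_unique<CallPayoff>(strike);
    }

    int main()
    {
        auto payoff = createPayoff(100.0);
        std::cout << (*payoff)(105.0) << '\n';   // 5
    }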

    2.4.1 Classes with Embedded Pointers

    A common occurrence in software development is when we create composite and whole–part objects consisting of collections whose components are of built-in or user-defined types (a discussion can be found in POSA, 1996). Our interest here is the variant in which the components can be accessed by clients external to the container. This is the shared parts variant. They are pointers that have been initialised elsewhere, possibly in dedicated
