Financial Analysis and Risk Management: Data Governance, Analytics and Life Cycle Management

About this ebook

The Global Financial Crisis and the Eurozone crisis that has followed have drawn attention to weaknesses in financial records, information and data. These weaknesses have led to operational risks in financial institutions, flawed bankruptcy and foreclosure proceedings following the Crisis, and inadequacies in financial supervisors’ access to records and information for the purposes of a prudential response. Research is needed to identify the practices that will provide the records, information and data needed to support more effective financial analysis and risk management.  The unique contribution of this volume is in bringing together researchers in distinct domains that seldom interact to identify theoretical, technological, policy and practical issues related to the management of financial records, information and data.  The book will, therefore, appeal to researchers or advanced practitioners in the field of finance and those with an interest in risk management, computer science, cognitive science, sociology, management information systems, information science, and archival science as applied to the financial domain.

Language: English
Publisher: Springer
Release date: Oct 20, 2012
ISBN: 9783642322327

    Book preview


    Victoria Lemieux (ed.), Financial Analysis and Risk Management: Data Governance, Analytics and Life Cycle Management, 2013. DOI: 10.1007/978-3-642-32232-7_1. © Springer-Verlag Berlin Heidelberg 2013

    1. Records and Information Management for Financial Analysis and Risk Management

    An Introduction

    Victoria L. Lemieux¹

    (1) School of Library, Archival and Information Studies, University of British Columbia, Vancouver, Canada

    Email: vlemieux@interchange.ubc.ca

    Abstract

    This chapter provides an introduction to the papers in this volume, which are based on the workshop on Records and Information Management for Financial Risk Analysis and Management, held in Vancouver, Canada, 24–25 August 2011. The chapter contextualizes the papers presented at the workshop, and included in this volume, in relation to international initiatives to improve the transparency of global financial markets and to address information gaps identified as having contributed to or exacerbated the global financial crisis of 2007–2009. The chapter highlights the novel contributions made by each of the papers, and concludes with an examination of future research directions for an emergent field of financial records and information management.

    1.1 Financial Decision Process: Theory and Practice

    This volume brings together papers representing novel research and analyses on a topic that received relatively little attention before the financial crisis of 2007–2009,¹ when markets seemed strong: the day-to-day management of financial records, information, and data. The everyday techne² of global finance has traditionally been of less interest because it is common. Yet, on the very first day of the workshop from which this volume’s papers originate, the Financial Times featured a story on data management, signalling just how much more attention is now being paid to the subject of financial records, information, and data management in the post-crisis era (Stafford 2011).

    The financial crisis and the Eurozone crisis that followed have drawn attention to how weaknesses in the quality and management of financial records, information, and data led to operational risks in financial institutions that prevented effective risk management. The U.S. Office of Financial Research has plainly stated that “Data management in most financial firms is a mess,” going on to note that the absence of standard reference data, including common standardized designations for firms and their subsidiaries and for financial instruments, has hindered the way transactions are handled and recorded, and thus wasted large amounts of resources in the manual correction and reconciliation of millions upon millions of trades per year per firm (CE-NIF 2010). During the crisis, inadequacies in financial supervisors’ access to records and information hindered an effective response. Indeed, the need for improved data and information was specifically recognized in a Financial Stability Board (FSB) and International Monetary Fund (IMF) report on The Financial Crisis and Information Gaps (2009), which noted that “… the recent crisis has reaffirmed an old lesson—good data and good analysis are the lifeblood of effective surveillance and policy responses at both the national and international levels.” Furthermore, following the crisis, poor record-keeping and improperly executed documents led to flawed bankruptcy and foreclosure proceedings in the U.S., as ongoing litigation around the Mortgage Electronic Registration System (MERS) and the robo-signing of mortgage foreclosure documents illustrates only too well (Yoon 2012).

    The financial crisis of 2007–2009 has taught that good quality and well-managed records, information, and data are cornerstones of the transparency needed to effectively monitor financial risks, make good risk decisions, and hold individuals and institutions accountable when rules on risk tolerance levels are breached. This finding raises many questions about the characteristics of records, information, and data that produce a good result versus a bad one. Research into the conditions that contribute to good or bad results for the creation, management, and use of financial records, information, and data has implications and applications for the development of standards, best practices, and tools intended to secure a more stable financial future.

    Conducting such research is no small task. Fields that study records, information, and data—archival science, information science, and computer science—each have their own perspectives, research questions, and communities that usually lead separate scientific lives. It is rare, moreover, for these researchers to join forces with finance and economic researchers, as is necessary for effective research on the use of records, information, and data in the context of financial analysis and risk management. Rarer still is the collaboration between academic researchers and financial regulators, industry experts, and financial technology developers to translate theory into practical and socially beneficial results. The Workshop on Records and Information Management for Financial Risk Analysis and Management, held 24–25 August 2011, was one such unique instance, as it brought together all of these groups with a focus on a single challenge: the challenge of effectively dealing with records, information, and data for improved financial analysis and risk management. This proved to be a very topical issue, as indicated by the Financial Times article, and one on which many aspects of future financial regulation will depend.

    The workshop was inspired by the Workshop on Knowledge Representation and Information Management for Financial Risk Management organized by Mark Flood, Louiqa Raschid, and Pete Kyle, on 21–22 July 2010 (Flood et al. 2010). The goal of that workshop was to initiate a research discussion on the knowledge representation challenges for effective financial information management. Participants brought expertise and diverse perspectives from economics, computer science, finance, and information science to an interdisciplinary dialogue on a range of topics, including financial risk management, ontologies for knowledge representation, formal logics, schema mapping, systemic risk, constraint languages, networks, simulation techniques, data integrity, operational risk, and data security.

    The smaller-scale Workshop on Records and Information Management for Financial Analysis and Risk Management sought to continue the conversation on three themes in particular: governance, analytics, and long-term digital preservation. In doing so, it introduced two new themes to the conversation—visual analytics and long-term digital preservation—and provided a platform to expand on the themes of ontologies for knowledge representation, formal logics, schema mapping, data integrity, and operational risk. This workshop also broadened the interdisciplinary exchange of ideas on the subject by introducing the perspectives of records managers, archivists, cognitive psychologists, behavioural economists, and sociologists to the ongoing dialogue on records and information management for financial risk analysis and management.

    1.2 Terminology

    A brief word about terminology is in order before proceeding further, especially since one of the aims of the workshop was to contribute to the development of a common language and understanding among diverse fields concerned with the management of financial records, information, and data.

    As important as records, information, and data are to the healthy functioning of global financial markets, there is surprisingly little consensus as to exactly what these terms mean. Many coming from the world of information management view records, information, and data as much the same thing; that is, as ‘assets’ of an organization (Coleman et al. 2011), while those from a knowledge management perspective may understand information as taking two distinct forms: tacit and nontacit, or explicit, with records and data falling into the latter category (Alavi and Leidner 2001). Still others distinguish between records and information or data by the fixity of their attributes; records are stable over time and across space (or need to be thus rendered by capture in a record-keeping system), while information and data may be more mutable (Duranti and Preston 2008).

    Legal professionals often prefer to use the term ‘documents’ over records, applying the term to a wide range of materials (Coleman et al. 2011). In other fields, such as computer science, the term ‘documents’ carries a much narrower meaning, often referring only to ‘unstructured’ text in hard copy or digital form. Computer scientists prefer to concern themselves with ‘data,’ rather than records, information, or documents. Coleman et al. (2011) note that, within financial institutions, many who use e-mail and database applications will confidently tell you that they produce data, but not records. Still others speak of ‘traces’ or ‘capta,’ which, as Alexandros-Andreas Kyrtsis explains in his paper in this volume with reference to the work of Checkland and Howell (1998), are defined as subsets of data that we pay attention to and use in order to generate information. Within the records management community, records are usually associated with the business functions, activities, and transactions of an organization or group of organizations, and intended to provide trustworthy evidence of those activities in order to substantiate a claim or to assert rights at some future time. Information and knowledge, on the other hand, may be extracted from records, as may discrete pieces of raw data, which can then be given structure or combined in different systems (e.g. reference data systems) to support risk analytics and other functions. Records may take a documentary form (i.e. narrative and textual), but equally may take other forms (e.g. combinations of data in transactional systems).

    Some sociologists and records theorists speak of records as sociotechnical constructs, thereby accounting for different conceptualizations and meanings as exemplifying processes of social negotiation and technical construction within diverse communities (Latour 1979, 1986; Lemieux 2001). This is a view found in the paper by Kyrtsis who writes of the ‘messiness’ and ‘fogginess’ of records arising from their ‘embeddedness’ in social processes. According to Kyrtsis, financial bubbles and collapses must therefore be understood not solely as problems of market risk (i.e. of inflated values which are at risk of collapse because of unfavourable market events), but equally as organizational problems resulting from the way financial technologies are developed and applied to operations of financial information management, and in subsequent decision-making processes. While we understand the need for records, information, and data to possess certain attributes in order to support effective risk analysis and management (e.g. quality, integrity, authenticity), the value of conceptualizing records as socio-technical constructs is often overlooked. However, acknowledgement and appreciation of the dynamic processes (human, social, and systemic) causing records to fall short of the ideal can inform our way of engineering better records that produce more desirable outcomes (e.g. sustainable levels of risk, stable financial systems, etc.).

    The concept of records and information ‘management’ also has a wide variety of definitions. According to the formal definition of records management in ISO 15489-1 (clause 3.16), it is the field of management responsible for the systematic control of the creation, receipt, maintenance, use and disposition of records, including the processes for capturing and maintaining evidence of and information about business activities and transactions in the form of records (ISO 2001). Other communities prefer the term ‘information management,’ understood as the collection and management of information from one or more sources and the distribution of that information to one or more audiences. This term encompasses both the organization and control of the structuring, processing, and delivery of information (AIIM 2012). Still other communities focus on ‘data management,’ defined as the development, execution, and supervision of plans, policies, programs, and practices that control, protect, deliver, and enhance the value of data and information assets (Mosley 2007).

    Though slightly different, all of these definitions essentially boil down to the same thing: establishing governance over records, information, and data in service of organizational and societal objectives.

    1.3 Governance

    With regard to governance—one of the main themes of the workshop—there is a global consensus that records, information, and data management must be improved to ensure that the new institutions³ established to govern a recalibrated post-crisis global financial system are able to provide effective financial risk analysis and management. One example of the types of initiatives being undertaken comes from the U.S. Commodity Futures Trading Commission (CFTC), which has developed standards for describing, communicating, and storing data on complex financial products (e.g. swaps). Explaining the plan, CFTC Commissioner Scott O’Malia has said:

    The data and reporting mandates of the Dodd-Frank Act place the CFTC in the centre of the complex intersection of data, finance and the law. There is a need and desire to go beyond legal entity identifiers and lay the foundation for universal data terms to describe transactions in an electronic format that are identifiable as financial instruments and recognizable as binding legal documents (Grant 2011).

    Worldwide, members of the financial community are in the process of identifying new records, information, and data requirements and standards to provide for better governance of the global financial system.

    While all of the papers in this volume touch on the theme of governance in one way or another, the paper by Mark Flood, Allan Mendelowitz, and Bill Nichols is particularly relevant to this theme as it offers a ‘tour d’horizon’ of the records, information, and data management challenges facing post-crisis macroprudential supervisors. In their paper, they note that traditional financial oversight has been very firm-centric and that financial information has been expanding much faster than traditional technologies can track; they suggest that financial stability supervisors will therefore require specialized techniques for risk measurement and data capture and an expanded capacity for risk analysis. They call for oversight to shift its focus from individual firms to the relationships among firms and markets across the financial system, in particular the contractual relationships created by financial transactions. Key to this task, they argue, is collecting data that are not currently available, including contractual terms and conditions and identifiers for legal entities, to enable forward-looking cash-flow and risk analysis.

    Collection of the data required for enhanced risk analysis and management is a huge undertaking and has given rise to many challenging discussions concerning governance of proposed records and information infrastructures, such as the need for a global legal entity identifier. Writing from a firm-centric viewpoint in an article entitled ‘What is Information Governance and Why is it So Hard?’, Gartner analyst Logan (2010) argues that the root of all of our informational problems is the lack of accountability for information. Once the challenge of identifying the appropriate accountability structures for effective information governance moves beyond the traditional boundaries of the financial firm to encompass networks of international financial relationships, the ‘hard problem’ of information governance becomes acutely compounded. The issue of governance raises broad questions about the potential of policy to strike the right balance between creating and keeping records and ensuring that the operations of financial institutions remain nimble enough to respond to changing market imperatives. But good governance is in itself difficult to implement: over-regulate and operations run the risk of bogging down the delivery of financial services in overly bureaucratic regimes; under-regulate and senior management, boards, shareholders, clients, and regulators of financial institutions lack the transparency required to properly assess an institution’s levels of risk. These are the questions that many in the financial industry around the world are wrestling with today as they seek to put in place new governance frameworks following the financial crisis of 2007–2009.

    1.4 Analytics

    The workshop also explored the knowledge structures, semantics, and logic needed to support enhanced financial risk analysis and management. In their paper, Flood, Mendelowitz, and Nichols argue that contracts, more than any other form of financial record, information, or data, possess several characteristics that make them desirable as the basis of macroprudential oversight: contracts connect the individual entities in the system as counterparties; there are strong incentives to make contracts valid, complete, and unambiguous statements of promises and commitments; many financial contracts already exist in digital representations, which makes them easier to incorporate into analytic systems; and contracts define the contingent cash flows that derive from contractual relationships and are necessary for calculating valuations and understanding risk exposures.
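
    A minimal worked example may help make the valuation point concrete: once a contract's cash flows are known under a given scenario, a valuation is simply the discounted sum of those flows. The sketch below is illustrative only and is not drawn from the paper; real contingent cash flows depend on market events, so in practice the calculation would be repeated across many scenarios.

```python
# Illustrative only: value a contract from its cash flows under one scenario.
def present_value(cash_flows, rate):
    """cash_flows: iterable of (years_from_now, amount); rate: annual discount rate."""
    return sum(amount / (1 + rate) ** t for t, amount in cash_flows)

# e.g. a simple bond paying a 5.0 coupon after one year and 105.0 (final coupon
# plus principal) after two years, discounted at 3% per year:
print(round(present_value([(1, 5.0), (2, 105.0)], 0.03), 2))  # -> 103.83
```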

    The paper by Willi Brammertz builds on the themes raised by Flood, Mendelowitz, and Nichols. In his paper, Brammertz tackles the subject of how the Office of Financial Research (OFR) itself will be inundated by large amounts of data from financial institutions. This, he says, calls for new approaches that combine the mechanical (e.g. value, liquidity, etc.) and the subjective (e.g. market condition, etc.) components of finance to provide a model that the OFR can use to look through the windshield, in order to identify and address risks that could threaten financial stability. Brammertz is critical of attribute-only approaches to specifying the unique features of financial contracts and instead calls for an innovative solution which classifies financial contracts by their cash-flow patterns. In this system, two contracts with the same cash flows are treated as identical contracts, regardless of their other attributes. Brammertz next suggests that the limited number of contract types distinguished by their contractual cash-flow patterns can be used as building blocks to assemble more complicated patterns, so that the cash-flow obligations from the vast majority of financial contracts can be handled in a standardized and manageable manner. Flood, Mendelowitz, and Nichols are supportive of Brammertz’s approach in their paper, while noting that it would take a long-term sustained effort to execute as well as careful design and structuring to avoid overwhelming the macroprudential supervisor with data storage, security, and validation burdens.
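
    The classification idea can be sketched in a few lines of code. The example below is hypothetical: the class and function names are invented here and do not reflect Brammertz's actual specification or any regulatory standard. Contracts are keyed purely by their contractual cash-flow schedule, so two contracts with identical cash flows fall into the same bucket regardless of counterparty or product label.

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch: a contract is characterised purely by its schedule of
# contractual cash flows; every other attribute is ignored for classification.

@dataclass(frozen=True)
class CashFlow:
    due: date       # when the payment is contractually due
    amount: float   # positive = receive, negative = pay

@dataclass
class Contract:
    contract_id: str
    counterparty: str   # ignored when classifying
    cash_flows: tuple   # the contractual cash-flow pattern

def cash_flow_key(contract):
    """Two contracts with the same key generate identical cash flows."""
    return tuple(sorted((cf.due, cf.amount) for cf in contract.cash_flows))

def group_by_pattern(contracts):
    """Bucket contracts whose cash-flow patterns are identical."""
    groups = defaultdict(list)
    for c in contracts:
        groups[cash_flow_key(c)].append(c.contract_id)
    return dict(groups)

# A loan and a bond with identical cash flows land in the same bucket even
# though their counterparties and labels differ.
loan = Contract("L-1", "Bank A", (CashFlow(date(2013, 1, 1), -100.0),
                                  CashFlow(date(2014, 1, 1), 105.0)))
bond = Contract("B-7", "Fund B", (CashFlow(date(2013, 1, 1), -100.0),
                                  CashFlow(date(2014, 1, 1), 105.0)))
print(group_by_pattern([loan, bond]))  # one group containing both contracts
```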

    As Flood, Mendelowitz, and Nichols also observe, monitoring risks across the financial system implies comparing and aggregating seemingly disparate exposures, such as a structured mortgage-backed security and a subordinated corporate debenture. As they point out, undertaking this task for individual portfolios is one thing, but to do so at the scale and scope of the full financial system requires new analytic approaches and enhanced analytic capabilities. The papers by Kafui Monu, Victoria Lemieux, Lior Limonad, and Carson Woo; Anya Savikhin; and Thomas Dang and Victoria Lemieux tackle this issue.

    In their paper, Monu, Lemieux, Limonad, and Woo examine and employ conceptual modelling, a technique associated with systems development and understood as a ‘technology of thinking’ (Stenning 2002), to represent records creation, transmission, and management in the processes along the private-label residential mortgage-backed securities (MBS) originate-and-distribute supply chain. Conceptual modelling is the act of representing the physical and social domain through specific abstractions. Their study discusses how they used three different conceptual modelling techniques—the Instrument Centric Analysis (INCA) model, the Organizational Actor (OA) model, and Dependency Network Diagrams (DND)—to explore the relationship between records and risk in the context of the financial crisis. The authors suggest that using conceptual models can have significant advantages over other forms of analysis one might apply to information problems along the MBS originate-and-distribute supply chain and other similarly complex financial processes. Unlike textual analysis, conceptual models can provide the analytical robustness necessary to track interconnections and triggers with the kind of precision that would support the development of predictive models. At the other extreme, conceptual models can reveal the implicit logic in complex financial algorithms. From their experiences with these different conceptual modelling approaches, the authors conclude that conceptual modelling can be a valuable tool for understanding and modelling relationships and dynamics in financial risk management and for generating new insights about complex financial relationships that would otherwise be difficult for financial risk analysts to see.

    Having user interface tools that can handle the massive amounts of data that flow across global financial networks is essential for risk analysts and risk managers to make sense of the data at their disposal and make fact-based decisions. Many observers of the financial crisis have criticized traditional computational approaches to risk analysis, with Taleb (2011) arguing that the problem comes, in part, from the way in which the tools at our disposal can be tricked into producing erroneous results based on observations of data from a finite sample.

    This problem prompted an exploration of visual analytics (VA) as an approach to supporting exploratory data analysis (Tukey 1977) with the potential to overcome some of the limitations of traditional computational approaches cited by Taleb (2011). VA is defined as the science of analytical reasoning facilitated by interactive visual interfaces (Thomas and Cook 2005). Typically, VA is used when an analytic problem is not sufficiently well-defined for a computer to handle it algorithmically. VA is premised upon human input remaining a key component of the analytic process that is then combined with computational analysis. By conducting VA, people use dynamic interaction with visual representations of datasets to generate new hypotheses related to completely unfamiliar datasets, to confirm existing hypotheses for a partially understood dataset, or to communicate information about a known dataset to an audience (Munzner 2009). In general, then, VA is well-suited to analytic tasks that require the analyst to process masses of complex data, in order to: answer an array of often ambiguous questions; maintain a human component in the process of analysis; blend computational analysis with interactive visualization of the results of that analysis; provide quick answers with on-demand improvement of analytical results; incorporate presentation linked with analysis; and export easy to understand representations of results in real time (VisMaster 2010).

    Both the paper by Savikhin and that by Dang and Lemieux address the application of visual analytics to large sets of financial data for the purpose of financial analysis and risk management. Writing from the disciplinary perspective of behavioural economics, Savikhin discusses VA’s application to the interactive discovery of information from large information sets in order to improve personal financial decision-making. She concludes that VA reduces the cost of obtaining information, improves decisions, and increases confidence levels of consumers in decision tasks involving risk, such as choosing assets for a portfolio and identifying asset price bubbles. Dang and Lemieux present a design and evaluation framework for visual analytics tools in their paper and demonstrate its application to a boutique asset management firm. The authors are able to demonstrate the value of VA in reducing complexity, facilitating easy human visual parsing of data that would otherwise be too large for the human cognitive system to process, and communicating key information with simplicity and impact. Both papers suggest that VA presents many opportunities to see data in new ways that are supportive of more effective risk analytics and risk management. Both papers identify many promising avenues of future research in the theory and application of VA in financial decision-making.

    While new analytic approaches, tools, and techniques, such as visual analytics and the use of conceptual modelling to unravel the complexities of global financial interconnectedness, show promise, Kyrtsis’ paper serves as a reminder of the need to be vigilant and wary of creating new gestalts and technological black boxes (sometimes identified as ‘model risks’) that obscure the hazards within financial institutions and within the global financial system. Understanding the processes contributing to gestalts, as outlined by Kyrtsis, may ultimately encourage ongoing critical examination of how we shape tools and techniques and how these tools and techniques, in turn, shape us. With this awareness, we may hope to avoid the erroneous application of tools and techniques as has happened in the past [e.g. the use of Value-at-Risk (VaR) models criticized by Taleb 2011].

    1.5 Long-Term Digital Preservation

    The final theme of the workshop is long-term digital preservation. Often a forgotten aspect of the management of records, information, and data, long-term digital preservation is critical to creating the capacity for longitudinal studies of market dynamics and risk in financial institutions and financial systems. It is important to note that ‘long-term’ does not, as one might expect, refer only to preserving records, information, and data so that they will be accessible hundreds of years from now, although this is an important objective. Many institutions experience trouble retrieving and accessing data as little as three to five years after the point of creation. This is due to technological obsolescence and change, as well as to a failure on the part of institutions to take measures to ensure that digital records, information, and data are created in forms that will persist over time and be properly maintained. Institutions have generally relied upon backup tapes to archive data, but this has proved to be a universally bad strategy: backup tapes are susceptible to loss and deterioration, and their format makes it particularly difficult to retrieve specific and demonstrably reliable records over time. In recent years, largely in response to financial regulation (e.g. SEC Rule 17a-4, 2001) and high-profile litigation (e.g. Zubulake v. UBS Warburg, 2003), many financial institutions have introduced new technologies to ensure that archived records, information, and data can be reliably maintained and rapidly retrieved when needed. However, relatively little work has been done to address the risk factors that can lead to the long-term deterioration of reliable and authentic digital records: file format obsolescence, changes to data structures that render records unreadable, inadequate system documentation, records stranded in decommissioned systems, weak audit trails in data migration, and the uncontrolled accumulation of records. Consequently, financial institutions and macroprudential supervisors cannot assume that they will have access to critical information beyond even a three-year window. Furthermore, as we move to establish new standards for domain representation, we need to consider how data and systems created using earlier versions of these standards can be accessed or interpreted once newer versions have been released. Arguably, at every stage of managing the life cycle of records, information, and data, we must consider how our present choices affect the future accessibility and reliability of records and information.
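
    One of the most basic controls supporting the kind of verifiable long-term retention described above is a fixity check: recording checksums for archived records and re-verifying them over time and across migrations. The sketch below is a generic, purely illustrative example of that pattern; it is not a standard proposed in this volume, and the directory layout and function names are assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical sketch of a basic fixity control: write a checksum manifest
# when records are archived, then re-verify it later (for example after a
# system migration) to detect silent alteration or loss. Paths are illustrative.

def sha256_of(path):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(archive_dir, manifest_path):
    """Capture a checksum for every archived record under archive_dir."""
    manifest = {
        "created": datetime.now(timezone.utc).isoformat(),
        "records": {str(p.relative_to(archive_dir)): sha256_of(p)
                    for p in sorted(archive_dir.rglob("*")) if p.is_file()},
    }
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))

def verify_manifest(archive_dir, manifest_path):
    """Return the records that are missing or whose content has changed."""
    manifest = json.loads(Path(manifest_path).read_text())
    problems = []
    for rel_name, expected in manifest["records"].items():
        p = Path(archive_dir) / rel_name
        if not p.is_file():
            problems.append("missing: " + rel_name)
        elif sha256_of(p) != expected:
            problems.append("altered: " + rel_name)
    return problems

# Example usage (paths are hypothetical):
# write_manifest(Path("/archive/trade-records"), "/archive/manifest.json")
# print(verify_manifest(Path("/archive/trade-records"), "/archive/manifest.json"))
```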

    Much of the archival research on long-term preservation of digital records has focussed on unstructured data, with many of the approaches to structured data arising from research communities (e.g. data science) that are often not concerned with the preservation of digital objects as impartial and reliable forms of evidence of business transactions for future decision-makers and risk analysts. A merging of diverse approaches is needed in order to develop practical and workable standards and strategies that financial institutions and financial regulators can use to preserve financial records, information, and data in digital form for the long haul. The paper by archival science scholar Sherry Xie discusses International Research on Permanent Authentic Records in Electronic Systems (the InterPARES Project), a 12-year-long international research initiative on digital preservation for records. According to one of the key findings of the InterPARES project, a deep understanding of the processes and practices of record-keeping—the everyday techne of organizational records making and keeping practices—is necessary for the development of effective standards and strategies which form the foundation for …
