Measuring and Managing Information Risk: A FAIR Approach

Ebook · 857 pages · 35 hours

About this ebook

Using the factor analysis of information risk (FAIR) methodology developed over ten years and adopted by corporations worldwide, Measuring and Managing Information Risk provides a proven and credible framework for understanding, measuring, and analyzing information risk of any size or complexity. Intended for organizations that need to either build a risk management program from the ground up or strengthen an existing one, this book provides a unique and fresh perspective on how to do a basic quantitative risk analysis. Covering such key areas as risk theory, risk calculation, scenario modeling, and communicating risk within the organization, Measuring and Managing Information Risk helps managers make better business decisions by understanding their organizational risk.

  • Uses factor analysis of information risk (FAIR) as a methodology for measuring and managing risk in any organization.
  • Carefully balances theory with practical applicability and relevant stories of successful implementation.
  • Includes examples from a wide variety of businesses and situations presented in an accessible writing style.
Language: English
Release date: Aug 23, 2014
ISBN: 9780127999326
Author

Jack Freund

Dr. Jack Freund is a leading voice in cyber risk measurement and management. As VP, Head of Cyber Risk Methodology for BitSight, Jack has overall responsibility for the systemic development and application of frameworks, algorithms, and quantitative and qualitative methods to measure cyber risk. Previously, Jack was Director of Risk Science at quantitative risk management startup RiskLens and Director of Cyber Risk for TIAA. Jack holds a Ph.D. in Information Systems from Nova Southeastern University, a Masters in Telecommunication and Project Management, and a BS in CIS. Jack has been named a Senior Member of the IEEE and ACM, a Fellow of the IAPP and FAIR Institute, and a Distinguished Fellow of the ISSA. He is the 2020 recipient of the (ISC)2 Global Achievement Award, 2018 recipient of ISACA’s John W. Lainhart IV Common Body of Knowledge Award, and the FAIR Institute’s 2018 FAIR Champion Award.


Measuring and Managing Information Risk

A FAIR Approach

Jack Freund

Jack Jones

Table of Contents

Cover image

Title page

Copyright

Acknowledgments by Jack Jones

About the Authors

Preface by Jack Jones

Preface by Jack Freund

Chapter 1. Introduction

How much risk?

The bald tire

Assumptions

Terminology

The bald tire metaphor

Risk analysis vs risk assessment

Evaluating risk analysis methods

Risk analysis limitations

Warning—learning how to think about risk just may change your professional life

Using this book

Chapter 2. Basic Risk Concepts

Possibility versus probability

Prediction

Subjectivity versus objectivity

Precision versus accuracy

Chapter 3. The FAIR Risk Ontology

Decomposing risk

Loss event frequency

Threat event frequency

Contact frequency

Probability of action

Vulnerability

Threat capability

Difficulty

Loss magnitude

Primary loss magnitude

Secondary risk

Secondary loss event frequency

Secondary loss magnitude

Ontological flexibility

Chapter 4. FAIR Terminology

Risk terminology

Threat

Threat community

Threat profiling

Vulnerability event

Primary and secondary stakeholders

Loss flow

Forms of loss

Chapter 5. Measurement

Measurement as reduction in uncertainty

Measurement as expressions of uncertainty

But we don’t have enough data…and neither does anyone else

Calibration

Equivalent bet test

Chapter 6. Analysis Process

The tools necessary to apply the FAIR risk model

How to apply the FAIR risk model

Process flow

Scenario building

The analysis scope

Expert estimation and PERT

Monte Carlo engine

Levels of abstraction

Chapter 7. Interpreting Results

What do these numbers mean? (How to interpret FAIR results)

Understanding the results table

Vulnerability

Percentiles

Understanding the histogram

Understanding the scatter plot

Qualitative scales

Heatmaps

Splitting heatmaps

Splitting by organization

Splitting by loss type

Special risk conditions

Unstable conditions

Fragile conditions

Troubleshooting results

Chapter 8. Risk Analysis Examples

Overview

Inappropriate access privileges

Privileged insider/snooping/confidentiality

Privileged insider/malicious/confidentiality

Cyber criminal/malicious/confidentiality

Unencrypted internal network traffic

Privileged insider/confidentiality

Nonprivileged insider/malicious

Cyber criminal/malicious

Website denial of service

Analysis

Basic attacker/availability

Chapter 9. Thinking about Risk Scenarios Using FAIR

The boyfriend

Security vulnerabilities

Web application risk

Contractors

Production data in test environments

Password security

Basic Risk Analysis

Project prioritization

Smart compliance

Going into business

Chapter summary

Chapter 10. Common Mistakes

Mistake categories

Checking results

Scoping

Data

Variable confusion

Mistaking TEF for LEF

Mistaking response loss for productivity loss

Confusing secondary loss with primary loss

Confusing reputation damage with Competitive Advantage loss

Vulnerability analysis

Chapter 11. Controls

Overview

High-level control categories

Asset-level controls

Variance controls

Decision-making controls

Control wrap up

Chapter 12. Risk Management

Common questions

What we mean by risk management

Decisions, decisions

Solution selection

A systems view of risk management

Chapter 13. Information Security Metrics

Current state of affairs

Metric value proposition

Beginning with the end in mind

Missed opportunities

Chapter 14. Implementing Risk Management

Overview

A FAIR-based risk management maturity model

Governance, risks, and compliance

Risk frameworks

Root cause analysis

Third-party risk

Ethics

In closing

Index

Copyright

Acquiring Editor: Brian Romer

Editorial Project Manager: Keira Bunn

Project Manager: Poulouse Joseph

Designer: Matthew Limbert

Butterworth-Heinemann is an imprint of Elsevier

The Boulevard, Langford Lane, Kidlington, Oxford, OX5 1GB, UK

225 Wyman Street, Waltham, MA 02451, USA

Copyright © 2015 Elsevier Inc. All rights reserved

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means electronic, mechanical, photocopying, recording or otherwise without the prior written permission of the publisher

Permissions may be sought directly from Elsevier’s Science & Technology Rights Department in Oxford, UK: phone (+44) (0) 1865 843830; fax (+44) (0) 1865 853333; email: permissions@elsevier.com. Alternatively you can submit your request online by visiting the Elsevier web site at http://elsevier.com/locate/permissions, and selecting Obtaining permission to use Elsevier material

Notices

Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary.

Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.

To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.

Library of Congress Cataloging-in-Publication Data

Application submitted

British Library Cataloguing in Publication Data

A catalogue record for this book is available from the British Library

ISBN: 978-0-12-420231-3

For information on all Butterworth-Heinemann publications visit our web site at http://store.elsevier.com/

This book has been manufactured using Print on Demand technology. Each copy is produced to order and is limited to black ink. The online version of this book will show color figures where appropriate.

Acknowledgments by Jack Jones

Something like FAIR doesn’t come about in a vacuum, and there are a lot of people who deserve my deepest gratitude for the role they played in its development. Sometimes their role was subtle and unintentional; perhaps an offhand comment that spurred deeper thinking or a twist in thinking that unlocked some conceptual obstacle I faced. In other cases, the role was explicit and obvious; perhaps as a sounding board, support in the face of skeptics, or mentoring me through political minefields that litter the information security and risk management landscape. Regardless, the following list (in alphabetical order except for the last two entries) is inevitably incomplete, and I beg the forgiveness of anyone who feels I have left them out.

• Dr. Henry Beker—whose deep wisdom and strong support have been so crucial to the ongoing success of FAIR and CXOWARE. It is a true privilege to know someone like Henry, let alone have the opportunity to work with him.

• The team at CXOWARE—how lucky can one person get, to be surrounded by such great energy, intelligence, and skill. These people seem able to work magic, both in building a business and taking my sometimes half-baked ideas and turning them into truly remarkable software.

• Jack Freund—whose mental quickness may be unmatched in my experience. Jack has been a dear friend, great colleague, and outstanding partner in writing this book. In fact, without his gentle persistence this book likely would not exist.

• Mike Keller and Susan Gueli—two amazing people, both of whom I had the privilege of working for during my tenure as CISO at Nationwide. It is entirely accurate to say that without their support my career would have been quite different and far less successful than it has been. I am deeply indebted to both of them.

• Cindi Hart—who was my right hand (and very often my saving grace) in each of my CISO roles. I hold no other professional in higher regard, and her friendship has been a true blessing.

• Kirk Herath—whose support and friendship have been so important over the years. You will not encounter a more courageous professional, or anyone more expert in the field of privacy.

• Jim Hietala and Ian Dobson—whose support for FAIR within the Open Group has been so critical over the years. These gentlemen define the word class, and it has been a privilege to work with them.

• Douglas Hubbard—perhaps unmatched as a risk guru, Douglas’ books and insights continue to stoke my internal flame for trying to get this right.

• My team and colleagues at Huntington Bank—as with Nationwide, there simply are too many amazing people to list. Here again, my success was largely due to them, and I am deeply grateful for their support and hard work.

• Alex Hutton—great friend, tireless sounding board, and truly remarkable risk professional. It was his hard work in the early years that kept FAIR alive long beyond what would have happened if I had been trying to do it alone.

• Ryan Jones—whose exceptional work developing and providing FAIR training was responsible for keeping CXOWARE afloat in the early days. His unique combination of creativity, critical thinking, work ethic, and pragmatism makes him a privilege to work with.

• Marty Miracle—another great friend, deep thinker, and brilliant risk professional. Few people have provided more honest feedback, and fewer yet can match the quality of Marty’s analyses.

• Brooke Paul—great advocate and amazing businessman. Brooke’s business advice in the early days, though not always followed by me, was always spot-on.

• My team and colleagues at Nationwide Insurance—any success I realized while at Nationwide was largely a function of the amazing team of professionals around me. There are simply too many to list here, but in my mind and heart they all stand out.

• Eddie Schwartz—easily one of the sharpest minds I have ever encountered. Despite this, he seemed to believe there was something worthwhile in me and mentored me in many ways. I learned an awful lot from Eddie, and am truly grateful for his friendship, guidance, and the opportunities he gave me.

• Steve Tabacek—dear friend and phenomenal business partner. I can’t imagine a harder-working, more ethical person, and FAIR would certainly have died on the vine without his tireless support and exceptional business acumen.

• Chad Weinman—another great friend and outstanding colleague. I’ve never worked with anyone so completely dedicated to the customer. This, combined with Chad’s energy and positive attitude, continues to be critical to CXOWARE’s success.

• I am also deeply indebted to all of the early adopters who found value in FAIR and advocated for it even in the face of criticism. These are the people who had the guts to advocate for something that sometimes ran counter to conventional wisdom. Without their timely support I would have likely given up somewhere along the path.

• Last and most important: my wife Jill, son Ryan, and daughter Kristen. They are my inspiration, my heroes, and my reason for being. Their support has meant everything to me. With them I am truly blessed.

About the Authors

Dr. Jack Freund is an expert in IT risk management specializing in analyzing and communicating complex IT risk scenarios in plain language to business executives. Jack has been conducting quantitative information risk modeling since 2007. He currently leads a team of risk analysts at TIAA-CREF. Jack has over 16 years of experience in IT and technology, working and consulting for organizations such as Nationwide Insurance, CVS/Caremark, Lucent Technologies, Sony Ericsson, AEP, Wendy’s International, and The State of Ohio.

He holds a BS in CIS, a Masters in telecommunication and project management, a PhD in information systems, and the CISSP, CISA, CISM, CRISC, CIPP, and PMP certifications. Jack is a visiting professor at DeVry University and a senior member of the ISSA, IEEE, and ACM. Jack chairs a CRISC subcommittee for ISACA and has participated as a member of the Open Group’s risk analyst certification committee. Jack’s writings have appeared in the ISSA Journal, Bell Labs Technical Journal, and Columbus CEO magazine, and he currently writes a risk column for @ISACA. You can follow all Jack’s work and writings at riskdr.com.

Jack Jones, CISM, CISA, CRISC, CISSP, has been employed in technology for the past 30 years, and has specialized in information security and risk management for 24 years. During this time, he has worked in the United States military, government intelligence, and consulting, as well as the financial and insurance industries. Jack has over 9 years of experience as a CISO with three different companies, with five of those years at a Fortune 100 financial services company. His work there was recognized in 2006, when he received the ISSA Excellence in the Field of Security Practices award at that year’s RSA conference.

In 2007, he was selected as a finalist for the Information Security Executive of the Year, Central United States, and in 2012 he was honored with the CSO Compass award for leadership in risk management. He is also the author and creator of the Factor Analysis of Information Risk (FAIR) framework. Currently, Jack is cofounder and president of CXOWARE, Inc.

Preface by Jack Jones

Two questions and two lame answers. Those were the catalyst in 2001 for developing FAIR. At the time, I was the newly minted CISO for Nationwide Insurance, and I was presenting my proposed security strategy to senior executives in hopes of getting additional funding. One of the executives listened politely to what I had to say, and asked two simple questions:

1. How much risk do we have?

2. How much less risk will we have if we spend the millions of dollars you’re asking for?

If he had asked me to talk more about the vulnerabilities¹ we had or the threats we faced, I could have talked all day. Unfortunately (or, I guess, fortunately), he didn’t. He wanted to understand what he was going to get in return for his money. To his first question, I answered, Lots. To his second question, Less. Both of my answers were accompanied by a shrug of my shoulders—tacit admission that I didn’t have a leg to stand on (he knew when he asked the questions that I wouldn’t have a useful answer). The good news was that I got most of the money I was asking for, apparently out of blind faith. The even better news was that I left the meeting determined to find a defensible answer to those questions.

When I began working on FAIR, I had absolutely no idea that an international standards consortium like The Open Group would adopt it as a standard, that people would be building software to implement it, or that organizations would pay to have their people trained in it. Nor had the idea of a book crossed my mind. It also never crossed my mind that what I was developing could be used to evaluate other forms of risk beyond information security. All I wanted was to never have to shrug my shoulders and mutter lame responses to those questions again. This, I have accomplished.

What this book is not, and what it is

If you are looking for a book that spoon-feeds you answers to the daily questions and challenges you encounter as an information security professional, you’ve come to the wrong place. This book doesn’t provide much in the way of checklists. You will likewise be disappointed if you’re looking for a book based on deep academic research, complete with references to scores of scholarly resources. There are only a handful of references to other works, very few of which would probably qualify as scholarly in nature. If you’re looking for highly sophisticated math and formulas that make the average person’s eyes roll back in their heads—my apologies again. FAIR simply is not that complicated.

First and foremost, this is a book about critical thinking. And if you get nothing else out of it, I hope it helps you to think critically and perhaps differently about risk and risk management. It represents the current state of my exploration into risk and risk management, and how it’s being applied today. And as with many explorations, the path has been anything but simple and straight. The experience has been like trying to unravel a tangled ball of twine. You pull on a thread for a while, thinking you are on the right track, only to realize you created a nasty knot that you have to pick apart—and then start over. Some of those knots were due to my own logical failures or limited background. Other times, too many times, the knots existed in large part due to risk-related fallacies I had (and the industry still has) bought into for years. You will find in here that I take square aim at a number of sacred risk management cows, which is certain to have me labeled a heretic by some folks. I’m more than comfortable with that. This book attempts to lay out before you the current state of this twine, which is now much less tangled. There are still strings to pull though, and knots to unravel, and there always will be. Maybe you will take what you read here and do some pulling and unraveling of your own. And if you find and unravel knots that I inadvertently created, so much the better.

A snippet from a T.S. Eliot poem does a great job of capturing my experience with FAIR:

… and the end of all our exploring will be to arrive where we started and know the place for the first time.

T.S. Eliot

That pretty much nails it. My exploration may not be over, but there is no question that I know risk and risk management far better than if I hadn’t bothered to explore. I hope you’ll feel the same way after you’ve read this book.

Cheers,

Jack Jones

March 2014


¹ You will see later in the book why I put the word vulnerabilities in quotes.

Preface by Jack Freund

While writing this book, Jack Jones and I had a conversation about some of the difficulties faced by those in this profession, and especially those who are interested in bringing quantitative methods into common practice. During this discussion I did what I always do when I’m full of myself and waxing eloquent: I used the Socratic method to summarize and build analogies that illustrate key points. I have one friend who called me The Great Distiller (with tongue firmly planted in cheek). Jack liked the point I made, and suggested that I write about it here to help frame the book and the work being done on FAIR. Essentially, the point I made went something like this.

What is one of the first things that a new leader in IT risk and security needs to do? Well, there are a lot of tasks to be sure: building relationships, hiring staff, diagnosing problem areas, and building out new and/or enhanced processes. This list could be written about most leadership jobs in any profession. However, one task that will show up on that list is something like identify risk assessment methodology. How unique that is to our profession! Think about that for a minute: you could have a fully implemented risk function that is rating issues and risk scenarios every day. Yet, when a new leader joins your organization, they may wipe all of that away because they disagree with the method being used. And this may be for reasons as simple as it’s unfamiliar to them, they prefer another method, or a little from column A and a little from column B.

I was discussing this with someone who runs a chemistry lab. She has a PhD in organic chemistry, runs a peptide laboratory, and modestly refers to herself simply as a chemist. I asked her if this is a routine practice in chemistry. Does one of the early tasks of a new lab manager involve choosing the method of chemical interaction they are going to use? Do they define their own approach and methodology for handling volatile chemicals? Certainly not, she replied. Once the type of chemistry they are going to be doing is determined (organic, inorganic, nuclear, etc.), they will need to supply the lab with the materials necessary to do their job. She said there are five basic chemicals she uses in her peptide lab, and once those are selected, it is a matter of outfitting the lab with the correct safety devices and handling precautions (fume hoods, storage containers, etc.). Do any of these tasks involve explaining to your staff your view on how these chemicals interact? Do you have to have conversations to get their minds right on how to do chemistry? I asked. She told me this is not the case (although we had a good chuckle over those who still insist on pipetting by mouth). There are well-known principles that govern how these chemicals work and interact. In areas where there is dispute or cutting-edge work, those involved in its practice use the scientific method to gain a better understanding of what truth looks like and present their work for peer review.

We may never get to the equivalent of a periodic table of risk, but we need to try. We need to set stakes in the ground on what truth looks like, and begin to use the scientific method to engage each other on those areas where we disagree. I genuinely want to get better at the practice of IT risk, and I know that Jack Jones does too. It is for this reason that FAIR has been publicly reviewed and vetted for several years now, and why Jack Jones placed the basic FAIR taxonomy discussed in Chapter 3 in the hands of a neutral standards body (The Open Group). By all means, let us have an open dialogue about what works and what does not. But let us also use impartial, unbiased evidence to make these decisions.

I wrote this book to accomplish several things. First, it is a great honor to be able to author a book with one’s mentor. It is an even bigger honor to help your mentor write a book about their life’s work. That really is significant to me, but it is also a weighty responsibility. I learned FAIR from Jack early on in the part of my career where I was beginning to do Governance, Risk, and Compliance (GRC) work in earnest. By that time, I had been studying, training in, and writing about various methods of risk assessment, and it was becoming clear to me that what passed for a method was more process than calculation. Indeed, if you compare most major risk assessment methods, they all bear a striking resemblance: you should consider your assets, threats to them, vulnerabilities, and the strength of the controls. Somehow (although rarely explicitly identified), you should relate them to one another. The end result is some risk rankings, and there you go. Except that is the problem: no one tells you how to do this exactly, and oftentimes you are encouraged to make up your own solution, as if we all know the right way to go about doing that.

What I learned from Jack was simple and straightforward. The relationship between the variables was well reasoned and well designed. It was easy to understand and explain. It also included some sophisticated math, yet was still easy for me to use (I always scored higher on verbal than math sections on any standardized test). I have often been accused of knowing only a single method for assessing risk (a statement that is wildly inaccurate). I know many methods for assessing risk, yet only one that seeks to calculate and analyze risk in a defensible way. Knowing how to do that gives you a sense of composure, and perhaps even some bravado. You do not shy away from difficult problems because you have learned how to model these scenarios even when you do not have the best data available. This can be off-putting to some people. But you will come back to the FAIR taxonomy and calculation method over and over again. It is like learning the quadratic formula after years of solving quadratic equations using factoring. Why go back to something that is harder to do and takes longer to complete? I will often tease Jack by saying that he has ruined me for other types of risk analysis methods. He takes my good-natured ribbing well. What I mean is that he has shown me the right way to do it, and it is difficult for me to go back to other approaches since their flaws have been laid bare before me. So to that end, yes, I only know one (good) method for practicing risk, and I have been thoroughly ruined for all the other (not as good) methods for doing risk assessments. And for that, I thank you, Jack Jones.

The second major reason I decided to write this book is that I believe we are on the precipice of something really amazing in our profession. IT risk is really starting to become its own distinct function that is slowly separating from Information Security proper while simultaneously becoming more intertwined with it. In my role as an educator, I often have discussions with students who are looking to break into the risk and security profession. I tell them that these jobs are really IT specialties and what they really need is to gain some experience in a reference discipline; they need a strong foundation in networking or application development, for example. Only after a few years of work in these roles will they be able to provide useful security work to a future employer. This used to be the way that people entered the security function. Often it was only after many years of work administering servers or working on network routing tables that you were given the chance to be a security practitioner full time. The industry is changing now, and more and more I find that there are paths into risk and security that do not involve even a moderate level of knowledge of something else first.

This is not necessarily bad; however, it has some implications. First, since we can no longer depend on someone having a solid skillset to draw upon, they may not know much about the environments they are now charged with assessing. Second, if they were trained with specific security knowledge, that often means they missed some of the foundational elements of a core liberal arts education (critical thinking and the scientific method, for example). It is also important to learn how to be more autodidactic (a word I learned while being an autodidact).

This book is written in part to help fill out the knowledge gap that a lot of people have when faced with a job that is primarily risk-based. I often draw a diagram for people, which I think adequately reflects the real nature of the skills necessary for working in this job (Figure P.1):

FIGURE P.1   IT risk job skills.

By and large, most of the job is talking to people. You have to learn how to perform technical interviews of IT people and business process reviews with business people. You have to learn how to talk with the person running backups on mainframes, as well as to be able to present risk information to the board of directors. Do not forget the importance of being able to write: risk communication also includes the ability to write e-mails and reports. Essentially, you have to develop a skillset that includes general soft skills and some specialized techniques. This book will aid with some of this.

Good risk practitioners also have technical knowledge. Most of this is not covered here. Like my (aging) advice to college kids, find some way to gain that knowledge either by education or practice. This is probably the easiest of the three to get better at, given the proliferation of free and near-free training available today.

Lastly are the risk skills necessary for success. Addressing this is a big reason we wrote this book. We will go over the definitions for risk variables, how to apply them, how to gather data, taxonomies, ontologies, range estimation, the Monte Carlo method, metrics, and reporting. But do not be thrown off by this: our approach is to focus on applied techniques and methods necessary to do the job of an IT risk practitioner today. This is not a book that is heavy with theoretical and complex notions of risk and mathematics. There are many other great books out there that cover this, written by authors far more qualified to discuss the topic than we are. My focus in the practice of risk has been on its practical application. It is an irony not lost on me: the guy with the PhD ends up constantly complaining about being practical to those around him. However beautiful and elegant the theory may be, I personally find practical application to be a much better indicator of reality. That, and I like to get my hands dirty. To this end, I have always found motivation in this quote from Albert Einstein:

Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius—and a lot of courage—to move in the opposite direction.

I do not want risk to be complicated. It should be no more complicated than it needs to be, and I have been on record for a long time that I do not believe the people charged with running the businesses and organizations where you work are incapable of understanding IT risk (when properly explained). It is easy to make it complicated; it takes dedicated practice to make it easy.

That brings me to the third and final reason that I wanted to write this book. In my family, it is generally understood that I do something with computers. And although I imagine many in the various and sundry nooks and crannies of the practice of Information Technology have similar experiences, I have always wanted to find a better (simpler) way to describe to the layperson the work that I, and we as a profession, do every day. This explanation I have sought alongside the rapidly changing profession that we labor in daily. As I have outlined above, it has changed enough that it is no longer sufficient to simply say Information Security is our job. The well-informed will find it obtuse (what kind of InfoSec do you do?) and the IT civilian still will not really know what you are talking about. Over the years, I have practiced with certain words and phrases to help make it clearer; I tend to use some tactile examples, such as how I protect people’s bank and retirement accounts from hackers (but even that is not precisely true). Given the proliferation of these kinds of attacks and their growing impact, there is usually something in my descriptions that is sufficient to satisfy the cursory inquiry of most. However, I have found that some of those closest to me are routinely curious enough that they deserve a more complete and full answer as to what exactly I do at work every day (and most evenings); Mom, this is for you.

Jack Freund, PhD

March 2014

Chapter 1

Introduction

Abstract

This chapter makes the case for the need for quantitative risk management. It begins with the Bald Tire thought experiment to show the need to articulate assumptions and discuss terminology, and to make plain the factors of risk we care about modeling and how to communicate them effectively to management. This section also discusses the difference between risk assessment and risk analysis, and details the deficiencies in current approaches that treat the two the same. Lastly, the chapter spells out the progression of topics for the remainder of the book and offers some words of advice on how thinking about risk will impact your ability to make better decisions in all aspects of your life.

Keywords

Analysis; Assessment; Assumptions; Bald tire; Risk; Threat; Vulnerability

How much risk do we have?

How much risk is associated with…?

How much less (or more) risk will we have if…?

Which of our risk management options are likely to be most cost-effective?

What benefit are we getting for our current risk management expenditures?

These are the core questions that FAIR (Factor Analysis of Information Risk) is intended to help us answer. But how is FAIR any different from what the profession has been doing for years? That’s what this chapter is about—describing the reasons why being FAIR about risk is different and, in many ways, better.

How much risk?

Pick a risk issue. Doesn’t matter what the issue is. It could be an information security issue like cloud computing, a business issue like entering a new market, or a personal issue like buying a used car. Now, if someone asked you how much loss exposure (risk) is associated with that issue, can you answer him or her? Let us rephrase that. Can you answer them and stand behind your answer? With a straight face? If you are using FAIR effectively, you can.

The reason you can rationally answer questions like these using FAIR is because it provides a well-reasoned and logical evaluation framework made up of the following elements:

• An ontology of the factors that make up risk and their relationships to one another. This ontology provides a foundational understanding of risk, without which we could not reasonably do the rest. It also provides a set of standard definitions for our terms.

• Methods for measuring the factors that drive risk

• A computational engine that derives risk by mathematically simulating the relationships between the measured factors (a minimal sketch of such an engine follows this list)

• A scenario modeling construct that allows us to apply the ontology, measurements, and computational engine to build and analyze risk scenarios of virtually any size or complexity
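
To make the computational engine idea concrete, here is a minimal sketch in Python. It is our illustration rather than anything prescribed by the book or any particular FAIR tool: the PERT-style distribution, the function names, and the example ranges are all assumptions chosen for demonstration. It simulates annualized loss exposure from calibrated ranges for loss event frequency (LEF) and loss magnitude (LM):

import random

def pert(low, likely, high, lam=4.0):
    # Sample once from a modified-PERT distribution (a rescaled beta), a
    # common way to turn min/most-likely/max estimates into a distribution.
    alpha = 1 + lam * (likely - low) / (high - low)
    beta = 1 + lam * (high - likely) / (high - low)
    return low + random.betavariate(alpha, beta) * (high - low)

def simulate_ale(lef, lm, trials=10_000):
    # Each trial draws a loss event frequency (events/year) and a loss
    # magnitude (dollars/event); their product is one simulated year's loss.
    draws = sorted(pert(*lef) * pert(*lm) for _ in range(trials))
    return draws[trials // 10], draws[trials // 2], draws[9 * trials // 10]

# Hypothetical calibrated estimates: 0.1-2 loss events/year (most likely 0.5),
# $50k-$500k per event (most likely $150k).
p10, p50, p90 = simulate_ale((0.1, 0.5, 2.0), (50_000, 150_000, 500_000))
print(f"ALE 10th/50th/90th percentiles: ${p10:,.0f} / ${p50:,.0f} / ${p90:,.0f}")

The point is not the particular distribution, but that risk is derived by simulating over ranges rather than from point estimates, which lets the results carry their uncertainty with them.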

TALKING ABOUT RISK

How complicated is FAIR? Not very. The concepts and ontology are quite straightforward, and measuring the variables can often be downright simple. Unfortunately, the world in which we have to apply FAIR (or any other analysis framework) is often very complex, and that is where we face the learning curve in analyzing risk. What FAIR does is simplify the problem by providing a relatively noncomplex lens through which to view and evaluate the complex risk landscape.

The bald tire

Those of you who have had some exposure to FAIR in the past might expect this section to be old news. Well, yes and no. Although you may already know how it turns out, some of the discussion and terms have evolved. As a result, it is probably worth your time to read it. If you have never heard of FAIR or the Bald Tire, then know that this is a thought experiment to help us uncover some of the challenges with the current state of risk analysis.

The bullets below describe a risk scenario in four simple stages. As you proceed through each of the stages, ask yourself how much risk is associated with what’s being described:

1. Picture in your mind a bald car tire. Imagine that it is so bald you can hardly tell that it ever had tread. How much risk is there?

2. Next, imagine that the bald tire is tied to a rope hanging from a tree branch. Now how much risk is there?

3. Next, imagine that the rope is frayed about halfway through, just below where it’s tied to the tree branch. How much risk is there?

4. Finally, imagine that the tire swing is suspended over an 80-foot cliff—with sharp rocks below. How much risk is there?

Now, identify the following components within the scenario. What was the:

• Threat,

• Vulnerability, and

• Risk?

Most people believe the risk is high at the last stage of the scenario. The answer, however, is that there is very little risk given the scenario exactly as described. Who cares if an empty, old bald tire falls to the rocks below? But, but…what about the person using the tire swing?! Ah, what person? We never mentioned any person.

Assumptions

Was our question about the amount of risk unfair? Perhaps, and we’ve heard the protests before…But what if someone climbs on the swing? and, The tire’s purpose is to be swung on, so of course we assumed that somebody would eventually climb on it! Both are reasonable arguments. Our point is that it is easy to make assumptions in risk analysis. In fact, assumptions are unavoidable because the world is infinitely complex.

That is our first point—assumptions are unavoidable. Assumptions are also the most likely source of problems within most analyses because, too often, people do not examine their assumptions or even recognize when they are making them. They just shoot from the hip. That scenario is high risk! they’ll say. Unfortunately, the person next to them may be making a different set of assumptions and react with, Are you nuts? Clearly that scenario is low risk! Most of the time, the disagreement is based on different assumptions.

One of the significant advantages to using FAIR is that its ontology and analysis process help you to identify and clarify your assumptions. That way, when someone questions your results, you are in a position to explain the assumptions underlying the analysis.

Terminology

Our second point is that from any group going through the Bald Tire scenario, we will typically get several different descriptions of what constitutes the threat, vulnerability, and risk within the scenario. We’ve heard the frayed rope described as threat, vulnerability, and risk. We have also heard the cliff and rocks described as threat and risk.

The simple fact is that much of the risk profession (including the operational risk discipline) has not adopted standard definitions for these terms. Compare this to other professions. Physicists do not confuse terms like mass, weight, and velocity, and financial professionals do not confuse debit and credit—even in informal discussions—because to do so significantly increases the opportunity for confusion and misunderstanding. It also makes it tough to normalize data. This is important to keep in mind when we’re trying to communicate to those outside our profession—particularly to executives who are very familiar with the fundamental concepts of risk—where the misuse of terms and concepts can damage our credibility as professionals and reduce the effectiveness of our message. After all, our goal is not to create a secret risk language that only we can talk about—effectively isolating ourselves from other risk professionals.

TALKING ABOUT RISK

Go to your local bookstore or library and pick up two different books on risk by two different authors. There is a very good chance that you will find that they have used foundational terminology differently from one another. For that matter, there’s a decent chance that each author uses fundamental risk terminology inconsistently within his or her own book.

So, what are the threat, vulnerability, and risk components within the Bald Tire scenario? The definitions themselves are described in Chapter 3, but within this scenario:

• The threat is the earth and the force of gravity that it applies to the tire and rope.

• The frayed rope introduces some amount of vulnerability, but is NOT a vulnerability itself. Huh? Vulnerability (in FAIR) is a probability statement. In other words, it is the probability that a threat event (e.g., the force of gravity on the tire and rope) will become a loss event (e.g., the rope breaks). In FAIR, we use the term vulnerability in a way that answers the question, How vulnerable are we? In answer to that question, a FAIR practitioner may say something like, between 30 and 45%. Simply stated, the rope represents a control, and its frayed condition increases the probability of a loss event. If this still doesn’t entirely make sense to you, it will later as we get into the framework in more detail (a brief sketch of this probability view also follows this list).

• What about risk? Which part of the scenario represents risk? Well, the fact is, there isn’t a single component within the scenario that we can point to and say, Here is the risk. Risk is not a thing. We can’t see it, touch it, or measure it directly. Similar to speed, which is derived from distance divided by time, risk is a derived value that represents loss exposure. It’s derived from the combination of factors described in FAIR’s ontology.
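
To illustrate vulnerability as a probability statement, here is a small sketch of ours in Python. It borrows the threat capability and difficulty factors named in the Chapter 3 ontology, but the 0–100 scale, the function name, and the ranges are assumptions made for this example, not the book’s prescribed method:

import random

def vulnerability(tcap, diff, trials=10_000):
    # Vulnerability = probability that a threat event becomes a loss event.
    # Estimated here as the fraction of simulated threat events in which the
    # sampled threat capability exceeds the sampled difficulty (both 0-100).
    hits = sum(random.uniform(*tcap) > random.uniform(*diff) for _ in range(trials))
    return hits / trials

# Hypothetical ranges: threat community capability of 40-80 against a
# frayed-rope-grade control whose difficulty we'd estimate at 30-60.
print(f"Vulnerability: {vulnerability((40, 80), (30, 60)):.0%}")

Running it yields a statement of the same form as the between 30 and 45% answer above: a value derived from the conditions in the scenario, not a thing you can point to.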

Having made an issue of terminology, the following paragraphs introduce and briefly discuss some basic definitions.

Threat

A reasonable definition for threat is anything (e.g., object, substance, human, etc.) that is capable of acting in a manner that can result in harm. A tornado is a threat, as is someone driving a car, as is a hacker. The key consideration is that threats are the actors (a.k.a., agents) that can cause a loss event to occur. This is in contrast to some common usage, where conditions may be referred to as threats. For example, someone might refer to a puddle of water on the floor as a threat, when in fact it is not. It is passive, inanimate, and not an actor in any meaningful way. A person stepping on it is the threat actor. The water simply increases the probability that the threat action (stepping on the wet floor) results in a loss event (a slip and fall accident).

Vulnerability

Although the word vulnerability commonly refers to a weakness that may be exploited by a threat, that isn’t how we view things in FAIR. Similar to the term risk, vulnerability is considered a value rather than a thing. As a result, you can’t point to something and say, Here is a vulnerability! What you can do is point to a control (like a frayed rope) and declare, Here is a condition that increases our vulnerability. It may feel like a fine point, but trust us, it is a critical distinction. The relevance of this distinction will become clearer as you progress through this book.

Risk

The following definition applies regardless of whether you are talking about investment risk, market risk, credit risk, information risk, or any of the other common risk domains:

Risk—The probable frequency and probable magnitude of future loss.

In other words—how often losses are likely to happen, and how much loss is likely to result. Rest assured, we’ll get into much more detail on this in Chapter 3.
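
As a toy illustration of why risk is a derived value (our numbers, purely hypothetical), the two probable quantities multiply out to a loss exposure range:

# Hypothetical ranges for a single scenario.
lef_low, lef_high = 2, 4             # probable loss events per year
lm_low, lm_high = 10_000, 50_000     # probable dollars per event

# Risk (loss exposure) bounds follow from frequency x magnitude.
print(f"${lef_low * lm_low:,} to ${lef_high * lm_high:,} per year")
# -> $20,000 to $200,000 per year

In practice, FAIR-style analyses simulate full distributions (as in the earlier sketch) rather than multiplying interval endpoints, but the derivation is the same: frequency combined with magnitude.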

The bald tire metaphor

In large part, risk management today (particularly operational risk management) is practiced as an art rather than a science. What is the difference? Science begins by analyzing the nature of the subject—forming a definition and determining the scope of the problem. Once this is accomplished, you can begin to form and then substantiate (or not) theories and hypotheses. The resulting deeper understanding provides the means to explain and more effectively manage the subject.

Art, on the other hand, does not operate within a clearly defined framework or definition. Consequently, it isn’t possible to consistently explain or calculate based upon an artistic approach. In fact, it doesn’t even seek to explain. It just is. A useful example is shamanism. The shaman rolls his bones or confers with his gods. He then prescribes a remedy based upon what his ancestors have passed down to him (analogous to best practices). Now, some shamans may be extremely intuitive and sensitive to the conditions within a scenario and may be able to select a reasonable solution on most occasions. Nevertheless, the shaman can’t rationally explain his analysis, nor can he credibly explain why the cure works (or sometimes doesn’t work). In addition, while we would like to believe that best practices are generally effective (as we strive to reuse what we think has been successful in the past), this can be a dangerous assumption. Best practices are often based on long-held shamanistic solutions, tend to be one-size-fits-all, may evolve more slowly than the conditions in which they are used, and can too often be used as a crutch—e.g., I can’t explain why, so I’ll just point to the fact that everyone else is doing it this way.

There is, however, no question that intuition and experience are essential components of how we do our jobs. The same is true for any profession. Yet these alone do not provide much traction in the face of critical examination, and are not strong formulas for consistency.

In order for the risk management profession to evolve, we have to begin to approach our problem space more rigorously and scientifically (note that we did not say purely scientifically). This means we have to seek to understand why and how, we have to measure in a meaningful way, and we have to be able to explain things consistently and rationally.

Risk analysis vs risk assessment

Many people don’t differentiate assessment from analysis, but there is an important difference. From a FAIR perspective, risk analysis is often a subcomponent of the larger risk assessment process.

The broader risk assessment process typically includes:

• Identification of the issues that contribute to risk,

• Analyzing their significance (this is one place where FAIR fits in),

• Identifying options for dealing with the risk issue,

• Determining which option is likely to be the best fit (another opportunity to apply FAIR), and

• Communicating results and recommendations to decision-makers.

As you can see, analysis is about evaluating significance and/or enabling the comparison of options. Unfortunately, much of what you see today in risk management is assessment without meaningful (or accurate) analysis. The result is poorly informed prioritization and cost-ineffective decisions.

Bottom line—The purpose of any risk analysis is to provide a decision-maker with the best possible information about loss exposure and their options for dealing with it.

Evaluating risk analysis methods

FAIR is just one approach to risk analysis. There are many people working to develop other methods with the same goals in mind. This is terrific news to those of us who are trying to be as effective as possible in managing risk. However, regardless of the methods you consider using, we encourage you to evaluate any risk analysis method on at least three points:

1. Is it useful?

2. Is it practical?

3. Are the results defensible?

A methodology is useful when the results are accurate and meaningful to decision-makers. If an analysis method provides results that are expressed in qualitative or ordinal scales, then meaningfulness must be questioned. Why? Well, what does a risk score of 3.5 mean? We suppose it’s better than a risk score of 4.1 (assuming one is good and five is bad), but ultimately, how do you compare that kind of value statement, or something like medium, against the other more quantitative considerations (like revenue projections or expenses) that inevitably play a role in decisions? As for accuracy, a precise ordinal rating like 3.5 or 4.1 is just begging to be picked apart because it implies a level of precision that is unrealistic in risk analysis. Why 3.5 and not 3.6 or 3.7? Or, if the risk rating is qualitative (e.g., medium), then accuracy is often a matter of what assumptions were being made and how rigorous the thinking was that underlies the analysis (typically not very rigorous).

Now, some decision-makers find qualitative or ordinal results acceptable. That’s fine. Fine, that is, until you have to defend those risk statements when you are asking for significant funding or telling them that their pet project can’t proceed because it has high risk. (Good luck, by the way, defending 3.5 or high to a strong critical thinker.) In our experience, the underpinnings of those kinds of risk values are arrived at through very little rigor, through questionable methods, or both. The executives might not push back, but don’t count on it.

Practicality is a crucial matter. It does little good to apply rocket science when all you really need to do is cross the street. Don’t get us wrong, though. You can go deep into the weeds with FAIR if you want or need to. However, you don’t have to. The same should be true for any practical risk analysis approach—i.e., you should be able to apply it quick-and-dirty if need be, or go deep.

There are risk analysis methods in use today whose logic falls apart under close examination. Often, these methods call themselves risk analysis when what they are really analyzing is a subcomponent of risk (e.g., control conditions). Keep in mind, however, that these methods may be excellent at what they actually do, so don’t disregard their value. It’s simply a matter of recognizing what they do and do not provide. A simple way of identifying a bona fide risk analysis method is to determine whether it includes an analysis of threat frequency, vulnerability, and loss magnitude, and whether it treats the problem probabilistically. If one or more of these components is missing, or if the problem is not treated from a probabilistic perspective, then it likely can’t be defended as a true risk analysis method.

Let’s look at an example risk analysis against this three-point litmus test:

Risk issue: Absence of segregation of duties for system administrators.

Asset at risk: Corporate financial data that supports financial reporting (a SOX concern for publicly traded companies in the United States).

Risk rating: High risk. The assessing individual felt that it was possible for system administrators to manipulate data feeding the financial reporting process, which would require restatement of finances. This could result in significant penalties and reputational damage.

My, that does sound scary.

In this case, executives are presented with a situation they may only superficially understand (i.e., they may understand the concept of segregation of duties, but they couldn’t explain it in this technology context). They do know that they do not want to be
