
Risk Assessment: Theory, Methods, and Applications
Ebook, 1,113 pages, 11 hours


About this ebook

An introduction to risk assessment that utilizes key theory and state-of-the-art applications

With its balanced coverage of theory and applications along with standards and regulations, Risk Assessment: Theory, Methods, and Applications serves as a comprehensive introduction to the topic. The book serves as a practical guide to current risk analysis and risk assessment, emphasizing the possibility of sudden, major accidents across various areas of practice from machinery and manufacturing processes to nuclear power plants and transportation systems.

The author applies a uniform framework to the discussion of each method, setting forth clear objectives and descriptions, while also shedding light on applications, essential resources, and advantages and disadvantages. Following an introduction that provides an overview of risk assessment, the book is organized into two sections that outline key theory, methods, and applications.

  • Introduction to Risk Assessment defines key concepts and details the steps of a thorough risk assessment along with the necessary quantitative risk measures. Chapters outline the overall risk assessment process, and a discussion of accident models and accident causation offers readers new insights into how and why accidents occur to help them make better assessments.

  • Risk Assessment Methods and Applications carefully describes the most relevant methods for risk assessment, including preliminary hazard analysis, HAZOP, fault tree analysis, and event tree analysis. Here, each method is accompanied by a self-contained description as well as workflow diagrams and worksheets that illustrate the use of discussed techniques. Important problem areas in risk assessment, such as barriers and barrier analysis, human errors, and human reliability, are discussed along with uncertainty and sensitivity analysis.

Each chapter concludes with a listing of resources for further study of the topic, and detailed appendices outline main results from probability and statistics, related formulas, and a listing of key terms used in risk assessment. A related website features problems that allow readers to test their comprehension of the presented material and supplemental slides to facilitate the learning process.

Risk Assessment is an excellent book for courses on risk analysis and risk assessment at the upper-undergraduate and graduate levels. It also serves as a valuable reference for engineers, researchers, consultants, and practitioners who use risk assessment techniques in their everyday work.

Language: English
Publisher: Wiley
Release date: June 12, 2013
ISBN: 9781118281109


    Book preview

    Risk Assessment - Marvin Rausand

    CHAPTER 2

    THE WORDS OF RISK ANALYSIS

    …the defining of risk is essentially a political act.

    —Roger E. Kasperson

    2.1 INTRODUCTION

    This chapter has the same title as Stan Kaplan’s talk to the Annual Meeting of the Society for Risk Analysis in 1996 when he received the Society’s Distinguished Award. The talk was later published in the journal Risk Analysis and has become one of the most influential articles on risk concepts (Kaplan, 1997).

    The risk concept was introduced briefly in Chapter 1. When we ask What is the risk?, we really ask three questions: (i) What can go wrong? (ii) What is the likelihood of that happening? and (iii) What are the consequences?

    Inger Lise Johansen, NTNU, has made important contributions to this chapter.

    To be able to answer these questions, we first need to clearly define what we mean by the words used in the questions, and we also need to define several other, associated terms.

    2.2 EVENTS AND SCENARIOS

    To be able to answer the first question, we need to specify what we mean by What can go wrong? The answer must be one or more events, or sequences of events. An event is defined as:

    Event: Incident or situation which occurs in a particular place during a particular interval of time (AS/NZS 4360, 1995).

    In this book, the term event is used to denote a future occurrence. The duration of the event may range from very short (e.g., an instantaneous shock) to a rather long period.

    We distinguish between two types of events that can go wrong: (i) hazardous events, and (ii) initiating events. When the answer to the question What can go wrong? is a sequence of events, we call this sequence of events an accident scenario, or simply a scenario.

    The terms hazardous event, initiating event, and accident scenario are defined and discussed later in this chapter. For a more thorough discussion, see Johansen (2010b).

    2.2.1 Hazardous Event

    A hazardous event is defined as:

    Hazardous event: The first event in a sequence of events that, if not controlled, will lead to undesired consequences (harm) to some assets.

    A hazardous event may alternatively be defined as the point at which control of the hazard is lost. This is the point from which further barriers (safeguards) can only mitigate the consequences of the event.

    The term hazard is defined and discussed in Chapter 3. At this stage it is sufficient to say that a hazard is often linked to energy of some type and that a hazardous event occurs when the energy is released. In the process industry, two main categories of hazardous events are loss of containment and loss of physical integrity.

    Several alternative terms for hazardous event are used in the literature. Among these are accident initiator, accident initiating event (e.g., used by Stamatelatos et al., 2002a), accidental event, critical event, undesired event, initiating event, TOP event, process deviation, potential major incident, and process demand. We have chosen the term hazardous event since it is used in the important standards ISO 12100 (2010) and IEC 60300-3-9 (1995).

    Figure 2.1 Accident scenario illustrated in a bow-tie diagram.

    2.2.2 Initiating Event

    Some of the methods for risk analysis (see Part II) need a more flexible concept than that permitted by our definition of hazardous event. We will therefore also use the term initiating event or initiator, which is defined as:

    Initiating event (or initiator): An identified event that upsets the normal operations of the system and may require a response to avoid undesirable outcomes (adapted from IAEA, 2002).

    In contrast to a hazardous event, an initiating event may be defined anywhere in the event sequence from the first deviation until harm takes place. The initiating event may even represent the hazardous event. This is because an initiating event is an analytical concept, which is entirely up to the analyst to choose, depending on the hazard and barriers under study. In all cases, the initiating event will require some actions from the system safety functions.

    2.2.3 Accident Scenario

    An accident scenario is defined as:

    Accident scenario: A specific sequence of events from an initiating event to an undesired consequence (or harm) (adapted from IMO, 2002).

    The concept of accident scenario is illustrated, in principle, as a particular path in the bow-tie diagram in Figure 2.1. The sequence of events is considered to start when the initiating event occurs and is terminated at a uniquely defined end event. The end event may be an undesired consequence, situation, or event. In Figure 2.1, the initiating event develops into a hazardous event where the energy is released.

    The path in an accident scenario is diverted as barriers are activated. An accident scenario will usually involve a sequence of events and several barriers, but may also be a single event. The latter case happens if an initiating event gives harm to an asset directly without the involvement of barriers. The concept of accident scenario is discussed further by Khan and Abbasi (2002) and is also a central element in the ARAMIS methodology (ARAMIS, 2004).

    EXAMPLE 2.1 Accident scenario in a process plant

    A possible accident scenario which starts with a gas leak in a petroleum process plant might proceed as follows:

    1. A gas leak from flange A occurs (i.e., the hazardous event).

    2. The gas is detected and the alarm goes off.

    3. The process shutdown system fails to shut off the gas flow to the flange.

    4. The gas is ignited and a fire occurs.

    5. The firefighting system functions as intended and the fire is extinguished within approximately 1 hour.

    6. No persons are injured, but the accident leads to significant material damage and a 20-day production stoppage (i.e., the end event).

    Note that defining a hazardous event or an accident scenario is not to say that it will indeed occur. For the purpose of describing an accident that has already occurred, accident course is considered a more appropriate term.

    Special Categories of Accident Scenarios. In many risk analyses, it will require too much time and too many resources to study all the possible accident scenarios. A set of representative scenarios is therefore selected for detailed analysis. These are often called reference scenarios.

    Reference accident scenario: An accident scenario that is considered to be representative of a set of accident scenarios that are identified in a risk analysis, where the scenarios in the set are considered to be likely to occur.

    In some applications, it may be relevant to consider the worst possible scenarios:

    Worst-case accident scenario: The accident scenario with the highest consequence that is physically possible regardless of likelihood (Kim et al., 2006).

    A worst-case release scenario may, for example, involve the release of the maximum quantity of some dangerous material during worst-case weather conditions. Worst-case scenarios are often used in establishing emergency plans, but should not be used in, for example, land use planning (see Chapter 4).

    Since a worst-case accident scenario will often have a remote probability of occurrence, a more credible accident scenario may be more relevant.

    Worst credible accident scenario: The highest-consequence accident scenario identified that is considered plausible or reasonably believable (Kim et al., 2006).

    2.3 PROBABILITY AND FREQUENCY

    Probability is the most important concept in modern science, especially as nobody has the slightest notion of what it means.

    —Bertrand Russell

    To answer the second question in the triplet definition of risk, What is the likelihood of that happening?, we need to use concepts from probability theory.

    A brief introduction to probability theory is given in Appendix A. Essentially, the probability of an event E is a number between 0 and 1 (i.e., between 0% and 100%) that expresses the likelihood that the event will occur in a specific situation, and is written as Pr(E). If Pr(E) = 1, we know with certainty that event E will occur, while for Pr(E) = 0 we are certain that event E will not occur.

    2.3.1 Probability

    Probability is a complex concept about whose meaning many books and scientific articles have been written. There are three main approaches to probability: (i) the classical approach, (ii) the frequentist approach, and (iii) the Bayesian or subjective approach.

    People have argued about the meaning of the word probability for at least hundreds of years, maybe thousands. So bitter, and fervent, have the battles been between the contending schools of thought, that they’ve often been likened to religious wars. And this situation continues to the present time (Kaplan, 1997).

    Classical Approach. The classical approach to probability is applicable in only a limited set of situations where we consider experiments with a finite number n of possible outcomes, and where each outcome has the same likelihood of occurring. This is appropriate for many simple games of chance, such as tossing coins, rolling dice, dealing cards, and spinning a roulette wheel.

    We use the following terminology: An outcome is the result of a single experiment, and a sample space S is the set of all the possible outcomes. An event E is a set of (one or more) outcomes in S that have some common properties. When an outcome that is a member of E occurs, we say that the event E occurs. These and many other terms are defined in Appendix A.

    Since all n possible outcomes have the same likelihood of occurring, we can find the likelihood that event E will occur as the number nE of outcomes that belong to E divided by the number n of possible outcomes. The outcomes that belong to E are sometimes called the favorable outcomes for E. The likelihood of getting an outcome from the experiment that belongs to E is called the probability of E:

    Pr(E) = nE / n    (2.1)

    The event E can also be a single outcome. The likelihood of getting a particular outcome is then called the probability of the outcome and is given by 1/n.

    When—as in this case—all the outcomes in S have the same probability of occurrence, we say that we have a uniform model.

    EXAMPLE 2.2 Flipping a coin

    Consider an experiment where you flip a coin. The experiment has n = 2 possible outcomes and the sample space of the experiment is S = {H, T}, where H means that the outcome is a head, and T that it is a tail. We assume that the coin is fair in the sense that both outcomes have the same probability of occurring. We want to find the probability of getting the event E = {H}. Only one of the n = 2 outcomes is favorable for this event (i.e., nE = 1), and the probability of E is therefore

    Pr(E) = nE / n = 1/2

    EXAMPLE 2.3 Rolling two dice

    Consider an experiment where you roll two dice, one red and one blue. In this case, there are n = 36 possible outcomes and the sample space is

    S = {(1, 1), (1, 2), …, (1, 6), (2, 1), (2, 2), …, (6, 6)}

    where the first number is the outcome of the red die and the second of the blue die.

    We assume that the dice are fair such that all outcomes in S have the same probability of occurring. The event E1 = same number of eyes on both dice has nE1 = 6 favorable outcomes and the probability of E1 is

    Pr(E1) = nE1 / n = 6/36 = 1/6

    For the event E2 = the sum of the eyes is 9, we have nE2 = 4 favorable outcomes, namely (3, 6), (4, 5), (5, 4), and (6, 3), and the probability of E2 is

    Pr(E2) = nE2 / n = 4/36 = 1/9
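    Under the classical (uniform) model, probabilities such as Pr(E1) and Pr(E2) can also be checked mechanically by enumerating the finite sample space and counting the favorable outcomes. The Python sketch below does this for the two-dice experiment of Example 2.3; it is an illustration only, with the event definitions taken from the example.

```python
from fractions import Fraction
from itertools import product

# Sample space for rolling a red and a blue die: 36 equally likely outcomes.
S = list(product(range(1, 7), repeat=2))

def classical_probability(event):
    """Pr(E) = (number of favorable outcomes) / (number of possible outcomes)."""
    favorable = [outcome for outcome in S if event(outcome)]
    return Fraction(len(favorable), len(S))

# E1: same number of eyes on both dice; E2: the sum of the eyes is 9.
pr_E1 = classical_probability(lambda o: o[0] == o[1])
pr_E2 = classical_probability(lambda o: o[0] + o[1] == 9)

print(pr_E1)  # 1/6
print(pr_E2)  # 1/9
```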

    Frequentist Approach. The frequentist approach restricts our attention to phenomena that are inherently repeatable under essentially the same conditions. We call each repetition an experiment, and assume that each experiment may or may not give the event E. The experiment is repeated n times, and we count the number nE of the n experiments that end up in the event E. The relative frequency of E is defined as

    fn(E) = nE / n

    Since the conditions are the same for all experiments, the relative frequency will approach a limit when n → ∞. This limit is called the probability of E and is denoted by Pr(E)

    Pr(E) = lim(n→∞) fn(E) = lim(n→∞) nE / n    (2.2)

    If we do a single experiment, we say that the probability of getting the outcome E is Pr(E) and consider this probability a property of the experiment.

    EXAMPLE 2.4 Repeated experiments

    Reconsider the experiment in Example 2.2. If you repeat the experiment several times and after each experiment calculate the relative frequency of heads (E = {H}), you will note that the relative frequency fn(E) fluctuates significantly for small values of n. When n increases, fn(E) will come closer and closer to 1/2. The probability of getting the event E (head) is therefore

    Pr(E) = lim(n→∞) fn(E) = 1/2

    If we flip the coin once, we say that the probability of getting a head (E) is Pr(E) = 1/2. We note that this probability is the same as the one we got by using the classical interpretation in Example 2.2.

    Notice that if the coin is not fair, we cannot use the classical approach, but the frequentist approach is still appropriate.

    EXAMPLE 2.5 Flipping a thumbtack

    Consider an experiment where you flip a classical thumbtack (drawing pin). The experiment has two possible outcomes and the sample space is S = {U, D}, where U means that the thumbtack will land point up and D that it will land point down. In this case there is no symmetry between the two outcomes. We can therefore not use the classical approach to determine Pr(U), but have to carry out a large number of experiments to find Pr(U). The probability that we find may be considered a property of this particular thumbtack. If we choose another type of thumbtack, we will most likely get another probability.
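    The frequentist interpretation can be illustrated by simulation: repeat the experiment many times and watch the relative frequency fn(E) stabilize. The sketch below does this for a thumbtack-like experiment; the value 0.6 for Pr(U) is an arbitrary assumption chosen only for illustration.

```python
import random

random.seed(1)

def relative_frequency(p_up, n):
    """Simulate n flips of a thumbtack that lands point up with probability p_up,
    and return the relative frequency fn(U) = nU / n."""
    n_up = sum(1 for _ in range(n) if random.random() < p_up)
    return n_up / n

# The relative frequency fluctuates for small n and approaches Pr(U) as n grows.
for n in (10, 100, 1_000, 10_000, 100_000):
    print(n, relative_frequency(p_up=0.6, n=n))
```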

    Bayesian Approach. In a risk analysis, we almost never have a finite sample space of outcomes that will occur with the same probability. The classical approach to probability is therefore not appropriate. Furthermore, to apply the frequentist approach, we must at least be able to imagine that experiments can be repeated a large number of times under nearly identical conditions. Since this is rarely possible, we are therefore left with a final option, the Bayesian approach. In this approach, the probability is considered to be subjective and is defined as:

    Subjective probability: A numerical value in the interval [0,1] representing an individual’s degree of belief about whether or not an event will occur.

    In the Bayesian approach, it is not necessary to delimit probability to outcomes of experiments that are repeatable under the same conditions. It is fully acceptable to give the probability of an event that can only happen once. It is also acceptable to talk about the probability of events that are not the outcomes of experiments, but rather are statements or propositions. This can be a statement about the value of a nonobservable parameter, often referred to as a state of nature. To avoid a too complicated terminology, we will also use the word event for statements, saying that an event occurs when a statement is true.

    The degree of belief about an event E is not arbitrary but is the analyst’s best guess based on her available knowledge about the event. The analyst’s (subjective) probability of the event E, given that her knowledge is K, should therefore be expressed as

    Pr(E | K)    (2.3)

    The knowledge K may come from knowledge about the physical properties of the event, earlier experience with the same type of event, expert judgment, and many other information sources. For simplicity, we often suppress K and simply write Pr(E). We should, however, never forget that this is a conditional probability depending on K.

    In a risk analysis, the word subjective may have a negative connotation. For this reason, some analysts prefer to use the word personal probability, since the probability is a personal judgment of an event that is based on the analyst’s best knowledge and all the information she has available. The word judgmental probability is also sometimes used. To stress that the probability in the Bayesian approach is subjective (or personal or judgmental), we refer to the analyst’s or her/his/your/my probability instead of the probability.

    EXAMPLE 2.6 Your subjective probability

    Assume that you are going to do a job tomorrow at 10:00 o’clock and that it is very important that it not be raining when you do this job. You want to find your (subjective) probability of the event E: rain tomorrow between 10:00 and 10:15. This has no meaning in the frequentist (or classical) approach, because the experiment cannot be repeated. In the Bayesian approach, your probability Pr(E) is a measure of your belief about the weather between 10:00 and 10:15. When you quantify this belief and, for example, say that Pr(E) = 0.08, this is a measure of your belief about E. To come up with this probability, you may have studied historical weather reports for this area, checked the weather forecasts, looked at the sky, and so on. Based on all the information you can get hold of, you believe that there is an 8% chance that event E occurs and that it will be raining between 10:00 and 10:15 tomorrow.

    The Bayesian approach can also be used when we have repeatable experiments. If we flip a coin and we know that the coin is symmetric, we believe that the probability of getting a head is 1/2. The frequentist and the Bayesian approach will in this case give the same result.

    An attractive feature of the Bayesian approach is the ability to update the subjective probability when more evidence becomes available. Assume that an analyst considers an event E and that her initial or prior belief about this event is given by her prior probability Pr(E):

    Prior probability: An individual’s belief in the occurrence of an event E prior to any additional collection of evidence related to E.

    Later, the analyst gets access to the data D1, which contains information about event E. She can now use Bayes formula to state her updated belief, in light of the evidence D1, expressed by the conditional probability

    Pr(E | D1) = Pr(D1 | E) Pr(E) / Pr(D1)    (2.4)

    which is a simple consequence of the multiplication rule for probabilities

    Pr(E ∩ D1) = Pr(D1 | E) Pr(E) = Pr(E | D1) Pr(D1)

    The analyst’s updated belief about E, after she has access to the evidence D1, is called the posterior probability Pr(E | D1).

    Posterior probability: An individual’s belief in the occurrence of the event E based on her prior belief and some additional evidence D1.

    Initially, the analyst’s belief about the event E is given by her prior probability Pr(E). After having obtained the evidence D1, her probability of E is, from (2.4), seen to change by a factor of Pr(D1 | E) /Pr(D1).

    Bayes formula (2.4) can be used repetitively. Having obtained the evidence D1 and her posterior probability Pr(E | D1), the analyst may consider this as her current prior probability. When additional evidence D2 becomes available, she may update her current belief in the same way as above, and obtain her new posterior probability:

    Pr(E | D1 ∩ D2) = Pr(D2 | E ∩ D1) Pr(E | D1) / Pr(D2 | D1)    (2.5)

    Further updating of her belief about E can be done sequentially as she obtains more and more evidence.
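    Bayes formula (2.4), and its sequential use as in (2.5), can be written out in a few lines of code. The sketch below updates a prior probability for an event E given the probability of observing each piece of evidence when E is true and when E is false; all numerical values are invented for illustration, and the second update treats D2 as conditionally independent of D1 given E, which is a simplifying assumption.

```python
def bayes_update(prior, p_evidence_given_e, p_evidence_given_not_e):
    """Return the posterior Pr(E | D) from Bayes formula (2.4).

    Pr(D) is obtained by the law of total probability:
    Pr(D) = Pr(D | E) Pr(E) + Pr(D | not E) Pr(not E).
    """
    p_evidence = p_evidence_given_e * prior + p_evidence_given_not_e * (1 - prior)
    return p_evidence_given_e * prior / p_evidence

# Prior belief about E, followed by two pieces of evidence D1 and D2
# (illustrative numbers only).
belief = 0.10                              # prior Pr(E)
belief = bayes_update(belief, 0.8, 0.3)    # posterior Pr(E | D1)
belief = bayes_update(belief, 0.7, 0.2)    # posterior after also seeing D2
print(round(belief, 3))
```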

    Remark: Thomas Bayes (1702–1761) was a British Presbyterian minister who has become famous for formulating the formula that bears his name—Bayes’ formula (often written as Bayes formula). His derivation was published (posthumously) in 1763 in the paper An essay towards solving a problem in the doctrine of chances (Bayes, 1763). The general form of the formula was formulated in 1774 by the French mathematician Pierre-Simon Laplace (1749–1825).

    Likelihood. By the posterior probability Pr(E | D1) in (2.4), the analyst expresses her belief about the unknown state of nature E when the evidence D1 is given and known. The interpretation of Pr(D1 | E) in (2.4) may therefore be a bit confusing since D1 is known. Instead, we should interpret Pr(D1 | E) as the likelihood that the (unknown) state of nature is E, when we know that we have got the evidence D1.

    In our daily language, likelihood is often used with the same meaning as probability. There is, however, a clear distinction between the two concepts in statistical usage. In statistics, likelihood is a distinctive concept that is used, for example, when we estimate parameters (maximum likelihood principle) and for testing hypotheses (likelihood ratio test).

    Remark: In Chapter 1 and the first part of Chapter 2, we have used the word likelihood as a synonym for probability, as we often do in our daily parlance. The reason for this rather imprecise use of the word likelihood is that we wanted to avoid using the word probability until it was properly introduced—and also because we wanted to present the main definitions of risk concepts with the same wording that is used in standards and guidelines.

    2.3.2 Controversy

    The debate between the frequentists and the Bayesians, or subjectivists, has been going on for more than 200 years. The subjectivist position is aptly summarized by de Finetti (1974):

    My thesis, paradoxically, and a little provocating, but nevertheless genuinely, is simply this

    PROBABILITY DOES NOT EXIST.

    The abandonment of superstitious beliefs about the existence of Phlogiston, the Cosmic Ether, Absolute Space and Time,…or Fairies and Witches, was an essential step along the road to scientific thinking. Probability, too, if regarded as something endowed with some kind of objective existence, is no less a misleading misconception, an illusory attempt to exteriorize or materialize our true probabilistic beliefs.

    In risk analysis, it is not realistic to assume that the events are repeatable under essentially the same conditions. We cannot, for example, have the same explosion over and over again under the same conditions. This means that we need to use the Bayesian approach. Although most risk analysts agree on this view, there is an ongoing controversy about the interpretation of the subjective probability. There are two main schools:

    1. The first school claims that the subjective probability is subjective in a strict sense. Two individuals will generally come up with two different numerical values of the subjective probability of an event, even if they have exactly the same knowledge. This view is, for example, advocated by Lindley (2007), who claims that individuals will have different preferences and hence judge information in different ways.

    2. The second school claims that the subjective probability is dependent only on knowledge. Two individuals with exactly the same knowledge will always give the same numerical value of the subjective probability of an event. This view is, for example, advocated by Jaynes (2003), who states:

    A probability assignment is subjective in the sense that it describes a state of knowledge rather than any property of the real world, but is objective in the sense that it is independent of the personality of the user. Two rational human beings faced with the same total background of knowledge must assign the same probabilities [also quoted and supported by Garrick (2008)].

    The quotation from Jaynes (2003) also touches on another controversy: that is, whether the probability of an event E is a property of the event E, the experiment producing E, or a subjective probability that exists only in the individual’s mind.

    EXAMPLE 2.7 Constant failure rate

    A specific valve is claimed to have a constant failure rate λ (see Appendix A). The failure rate is a parameter of the time to failure distribution of the valve and it is not possible to observe the value of λ. The value of λ can sometimes be estimated based on recorded times-to-failure of several valves of the same type. We may consider a statement (event) E = {λ > λ0}, where λ0 is some specified value. In this case, the controversy concerns whether the probability Pr(E) is a property of the valve or just a value that exists in the analyst’s mind.

    EXAMPLE 2.8 Ignition of a gas leak

    Consider a gas leak in a process plant. Let p denote the probability that the gas leak will ignite. If the ignition probability p is considered to be a property of this situation, the analyst’s job is to try to reveal the true value of p. After gathering all the information she can get hold of, she may specify her degree of belief about this ignition probability by the estimate p̂. In this case, it gives meaning to a discussion of whether or not p̂ is an accurate estimate of p, and she may try to assess the uncertainty of p̂. This is discussed further in Chapter 16.

    If, however, we believe that the ignition probability exists only in our minds, it would be meaningless to try to determine the accuracy of her estimate p̂, since a true value of p does not exist. She may obviously discuss the uncertainty related to the knowledge, data, and information she has used to come up with her estimate, but not the accuracy of the value p̂ as such.

    The mathematical rules for manipulating probabilities are well understood and are not controversial. A nice feature of probability theory is that we can use the same symbols and formulas whether we choose the frequentist or the Bayesian approach, and whether or not we consider probability as a property of the situation. The interpretation of the results will, however, be different.

    Remark: Some researchers claim that the frequentist approach is objective and therefore the only feasible approach in many important areas: for example, when testing the effects of new drugs. According to their view, such a test cannot be based on subjective beliefs. This view is probably flawed since the frequentist approach also applies models that are based on a range of assumptions, most of which are subjective.

    2.3.3 Frequency

    When an event E occurs more or less frequently, we often talk about the frequency of E rather than the probability of E. We may ask, for example: "How frequently does event E occur?"

    Fatal traffic accidents occur several times per year, and we may record the number nE(t) of such accidents during a period of length t. A fatal traffic accident is understood as an accident where one or more persons are killed. The frequency of fatal traffic accidents in the time interval (0, t) is given by

    fE(t) = nE(t) / t    (2.6)

    The time t may be given as calendar time, accumulated operational time (e.g., the accumulated number of hours that cars are on the road), accumulated number of kilometers driven, and so on.

    In some cases, we may assume that the situation is kept unchanged and that the frequency will approach a constant limit when t → ∞. We call this limit the rate of the event E and denote it by λE:

    λE = lim(t→∞) fE(t) = lim(t→∞) nE(t) / t    (2.7)

    Table 2.1 Some types of assets.

    In the frequentist interpretation of probability, parameters like λE have a true, albeit unknown value. The parameters are estimated based on observed values, and confidence intervals are used to quantify the variability in the parameter estimators.

    Models and formulas for the analysis may be found in Appendix A.
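    As a small numerical illustration of the frequentist estimation described above, the rate λE can be estimated by nE(t)/t, and the variability of the estimator summarized by a confidence interval. The sketch below uses the normal approximation to the Poisson distribution for an approximate 95% interval; the counts and exposure are invented, and the exact models are those of Appendix A.

```python
import math

def rate_estimate(n_events, exposure, z=1.96):
    """Point estimate and approximate 95% confidence interval for a constant
    event rate, assuming the number of events is Poisson distributed.

    lambda_hat = nE(t) / t, with standard error sqrt(nE(t)) / t.
    """
    lam = n_events / exposure
    se = math.sqrt(n_events) / exposure
    return lam, (max(lam - z * se, 0.0), lam + z * se)

# Example: 24 recorded events over 3.0 million vehicle-kilometers (made-up data).
lam, ci = rate_estimate(n_events=24, exposure=3.0e6)
print(lam)   # estimated rate per vehicle-kilometer
print(ci)    # approximate 95% confidence interval
```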

    2.4 ASSETS AND CONSEQUENCES

    To answer the third question in the triplet definition of risk What are the consequences?, we first have to identify who—or what—might be harmed. In this book, these objects are called assets.

    Asset: Something we value and want to preserve.

    Assets are also called targets, vulnerable targets, victims, recipients, receptors, and risk-absorbing items. Examples of assets are listed in Table 2.1. Note that the sequence of the assets in Table 2.1 does not imply any priority or ranking.

    2.4.1 Categories of Human Victims

    In a risk analysis, humans are usually considered to be the most important assets. The possible victims of an accident are sometimes classified according to their proximity to and influence on the hazard (Perrow, 1984):

    1. First-party victims. These are people directly involved in the operation of the system.

    2. Second-party victims. These are people who are associated with the system as suppliers or users, but exert no influence over it. Even though such exposure may not be entirely voluntary, these people are not innocent bystanders, because they are aware of (or could be informed about) their exposure. Passengers on airplanes, ships, and railways, for example, are considered to be second-party victims.

    3. Third-party victims. These are innocent bystanders who have no involvement in the system: for example, people living in the neighborhood of a plant.

    4. Fourth-party victims. These are victims of yet-unborn generations. The category includes fetuses that are carried while their parents are exposed to radiation or toxic materials, and all those people who will be contaminated in the future by residual substances, including substances that become concentrated as they move up the food chain.

    EXAMPLE 2.9 Victims of railway accidents

    The railway industry sometimes classifies human assets in five categories:

    (a) Passengers

    (b) Employees

    (c) People on the road or footpath crossings of the line

    (d) Trespassers (who are close to the line without permission)

    (e) Other persons

    2.4.2 Consequences and Harm

    A consequence involves specific damage to one or more assets and is also called adverse effect, impact, impairment, or loss. The term harm is used in several important standards, including IEC 60300-3-9 and ISO 12100.

    Harm: Physical injury or damage to health, property, or the environment (IEC 60300-3-9, 1995).

    Consequence Categories. The adverse effects of an accident may be classified into several categories related to the assets, such as:

    – Loss of human life

    – Personal injury

    – Reduction in life expectancy

    – Damage to the environment (fauna, flora, soil, water, air, climate, landscape)

    – Damage to material assets

    – Investigation and cleanup costs

    – Business-interruption losses

    – Loss of staff productivity

    – Loss of information

    – Loss of reputation (public relations)

    – Insurance deductible costs

    – Fines and citations

    – Legal action and damage claims

    – Business-sustainability consequences

    – Societal disturbances

    – Consequences related to so-called soft values, such as reduction of human well-being and loss of freedom.

    Remark: The consequences we consider here are mainly unwanted (negative) consequences that represent some kind of loss. It is, however, possible that a hazardous event may also lead to wanted or positive consequences. Such positive consequences are not considered in this book.

    For harm to people, it is common to distinguish between:

    Temporary harm. In this case the person is harmed but will be totally restored and able to work within a period after the accident.

    Permanent disability. In this case the person will get permanent illness or disability. The degree of disability is sometimes given as a percentage.

    Fatality. The person will die from the harm, either immediately or because of complications. The fatality may sometimes occur a long time after the accident: for example, due to cancer caused by radiation after a nuclear accident.

    2.4.3 Severity

    It is often useful to classify the possible consequences of a hazardous event according to their severity:

    Severity: Seriousness of the consequences of an event expressed either as a financial value or as a category.

    Figure 2.2 Consequence spectrum for a hazardous event.

    It is most common to express the severity as categories: for example, as catastrophic, severe loss, major damage, damage, or minor damage. Each category has to be described. This is discussed further in Section 4.4.

    2.4.4 Consequence Spectrum

    A hazardous event may lead to a number of potential consequences C1, C2, …, Cn. The probability pi that consequence Ci will occur depends on the physical situation and whether or not the barriers are functioning. The possible consequences and the associated probabilities resulting from the hazardous event are illustrated in Figure 2.2.

    The diagram in Figure 2.2 is called a consequence spectrum, a risk picture, or a risk profile related to the hazardous event. The consequence spectrum may also be written as a vector:

    C = [(C1, p1), (C2, p2), …, (Cn, pn)]    (2.8)

    In Figure 2.2 and in the vector (2.8), we have tacitly assumed that the consequences can be classified into a finite number (n) of discrete consequences.

    An activity may lead to several potential hazardous events. It may therefore be relevant to set up the consequence spectrum for an activity rather than for a single hazardous event. Each hazardous event will then have a consequence spectrum, as illustrated in Figure 2.2. If we combine all the relevant hazardous events, we can establish a consequence spectrum for the activity that is similar to that of a hazardous event. The consequence spectrum may also be presented in a table, as shown in Table 2.2.

    In some cases, it may be possible to measure the consequences in a common unit (e.g., in U.S. dollars). Let ℓ(Ci) denote the loss in dollars if consequence Ci occurs, for i = 1, 2, …, n. The loss spectrum for the hazardous event can then be pictured as in Figure 2.3.

    In this case, it may also be meaningful to talk about the mean consequence or mean loss if the hazardous event should occur:

    Mean loss = p1 ℓ(C1) + p2 ℓ(C2) + ⋯ + pn ℓ(Cn)    (2.9)

    Note that (2.9) is the conditional mean loss given that the specified hazardous event has occurred.
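    When every consequence Ci in the spectrum can be priced in a common unit, the conditional mean loss in (2.9) is a simple probability-weighted sum. The sketch below evaluates it for an invented consequence spectrum; the consequence labels, probabilities, and dollar losses are illustrative only.

```python
# Consequence spectrum for a hazardous event: (label, probability pi, loss in dollars).
# The probabilities should sum to 1 over the consequences considered.
consequence_spectrum = [
    ("minor damage", 0.70, 10_000),
    ("major damage", 0.25, 250_000),
    ("severe loss", 0.05, 2_000_000),
]

def mean_loss(spectrum):
    """Conditional mean loss given that the hazardous event has occurred, eq. (2.9)."""
    return sum(p * loss for _, p, loss in spectrum)

print(mean_loss(consequence_spectrum))  # 169500.0
```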

    Table 2.2 Consequence spectrum for an activity (example).

    Figure 2.3 Loss spectrum for a hazardous event.

    2.4.5 Time of Recording Consequences

    Some of the consequences of an accident may occur immediately, while others may not materialize until years after the accident. People are, for example, still dying of cancer in 2011 as a consequence of the Chernobyl disaster in 1986. A large quantity of nuclear fallout was released and spread as far as northern Norway. During the accident, only a few persons were harmed physically, but several years after the accident, a number of people developed cancer and died from the fallout. The same applies for other accidents involving dangerous materials. When we assess the consequences of an accident, it is therefore important not only to consider the immediate consequences, but also to consider the delayed effects.

    2.5 RISK

    According to Timmerman (1986), the word risk entered the English language in the 1660s from the Italian word riscare, which means to navigate among dangerous rocks. As pointed out in Chapter 1, we will use the term risk only in relation to future events that may, or may not happen. We never use the term risk when we talk about our more or less dangerous past. The term is also restricted to negative consequences—even if risk refers to both gains and losses in economic theory.

    Table 2.3 Risk related to a system (example).

    Remark: One should, however, note that the labeling of a consequence as positive or negative represents social judgments and cannot be derived from the nature of the hazardous events (Klinke and Renn, 2002).

    As indicated in Chapter 1, there is no universally accepted definition of risk. In this book, we define risk as the answer to the three questions of Kaplan and Garrick (1981):

    Risk: The combined answer to three questions: (1) What can go wrong? (2) What is the likelihood of that happening? and (3) What are the consequences?

    The answers to the three questions may be presented as in Table 2.3, where the column "Event i" denotes an initiating event, a hazardous event, or an accident scenario. For most applications, we recommend that the events be defined as hazardous events. The frequency fi is the frequency of event i. Instead of frequency, it may sometimes be relevant to use probability. The consequence spectrum Ci describes the consequences that may occur if event i has occurred. The probabilities will depend on the capability and reliability of the reactive barriers that are available in the system.

    The risk related to a specified event may be illustrated by a bow-tie diagram such as the one in Figure 1.1. If we move the event to the left in the diagram, the number of possible event sequences will increase. In most cases, the frequency analysis will then be simpler and the consequence analysis will be more complex. On the other hand, if we move the event to the right in the bow-tie diagram, the frequency analysis will be more complicated and the consequence analysis will be simpler. To define the event as a complete accident scenario will be the extreme point in this respect, for which the consequence spectrum will be reduced to the consequences of a single end event. Which of these approaches gives the best and most complete result will depend on the system.

    In the following, we assume that the event is a hazardous event and denote the various events by s1, s2, …, sn. Kaplan and Garrick (1981) express the risk R related to the system by the set of triplets

    R = {⟨si, fi, Ci⟩},  i = 1, 2, …, n

    If all relevant hazardous events are included, the set of triplets can be considered to be complete and hence to represent the risk. The consequence spectrum Ci is a multidimensional vector which includes damage to people, property, the environment, and so on. Ci may also be time-dependent if the magnitude of damage varies with time. A nuclear meltdown, for example, will have different consequences depending on the point in time at which the damage is measured.
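    The set-of-triplets representation, and the tabular layout of Table 2.3, can be mirrored directly in a small data structure: one record per hazardous event, holding the event description, its frequency, and its consequence spectrum. The sketch below is only an illustration; the events, frequencies, and consequence spectra are invented.

```python
from dataclasses import dataclass

@dataclass
class RiskTriplet:
    """One row of Table 2.3: event si, frequency fi, and consequence spectrum Ci."""
    event: str          # hazardous event si
    frequency: float    # fi, e.g., occurrences per year
    consequences: list  # Ci as (consequence, probability) pairs

risk = [
    RiskTriplet("Gas leak from flange A", 0.02,
                [("no ignition", 0.95), ("fire", 0.04), ("explosion", 0.01)]),
    RiskTriplet("Falling load from crane", 0.005,
                [("material damage only", 0.90), ("personal injury", 0.10)]),
]

for triplet in risk:
    print(triplet.event, triplet.frequency, triplet.consequences)
```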

    The probability p associated with each consequence lies between 0 and 1, where p = 0 means that the event is impossible and p = 1 signals that it is always true. Both extremities correspond to a fatalistic world view in which the future is conceived of as independent of human activities. According to Rosa (1998), the term risk would be of no use in such a world of predetermined outcomes. At the heart of the concept of risk is thus the idea that the consequences admit to some degree of uncertainty.

    2.5.1 Alternative Definitions of Risk

    Several alternative definitions of risk have been suggested in the literature. Among these are:

    (a) Combination of the frequency, or probability, of occurrence and the consequences of a specified hazardous event (IEC 60300-3-9, 1995).

    In this definition, risk is linked to one specific hazardous event. We may, for example, talk about the risk related to a specific gas leak or related to falling loads from a crane. This is different from the definition we are using, where the risk is related to a system or an activity for which several hazardous events may occur.

    (b) The possibility that human actions or events lead to consequences that harm aspects of things that human beings value (Klinke and Renn, 2002).

    (c) Situation or event where something of human value (including humans themselves) has been put at stake and where the outcome is uncertain (Rosa, 1998).

    (d) Uncertainty about and severity of the consequences (or outcomes) of an activity with respect to something that humans value (Aven and Renn, 2009a).

    (e) The probability that a particular adverse event occurs during a stated period of time, or results from a particular challenge (Royal Society, 1992, p. 22).

    (f) Risk refers to the uncertainty that surrounds future events and outcomes. It is the expression of the likelihood and impact of an event with the potential to influence the achievement of an organization’s objectives (Treasury Board, 2001).

    Thorough discussions of the various definitions and aspects of risk are given, for example, by Lupton (1999) and Johansen (2010b).

    2.5.2 Safety Performance

    In this book we use the word risk to describe our uncertainty about adverse events that may occur in the future. Sometimes, decision-makers may be wondering whether the estimated risk in the coming period (e.g., five years) is higher or lower than the risk was in the past period. With our definition of risk, speaking of risk in the past has no meaning. This is because when a period is over, there is no uncertainty related to what happened in that period. We therefore need another term that can be used to describe what happened in a past period—and will use the term safety performance.

    Safety performance: An account of all accidents that occurred in a specified (past) time period, together with frequencies and consequences observed for each type of accident.

    The frequencies can be given relative to many different exposure measures: for example, per unit of calendar time, per hour in operation, per kilometer driven, and so on. In this way, the estimated risk in the coming period can be compared to the safety performance in the past period.

    We should remember that the occurrence of hazardous events and accidents is—at least partly—a random process. If the risk in the coming period is estimated to be rather high, and by the end of that period we find that the safety performance in the period showed no accidents, this does not mean that the risk analysis was wrong.

    EXAMPLE 2.10 Helicopter transport risk

    SINTEF, a Norwegian research organization, has for more than 20 years carried out a sequence of safety studies of helicopter transport to and from Norwegian offshore oil and gas installations. This transport has resulted in several accidents, and the main objectives of the studies have been to identify the main contributors to the high risk and to propose risk-reducing measures.

    Data from accidents have been collected and the safety performance has been analyzed to identify causal factors and trends. During these 20 years, there have been many changes related to helicopter design, equipment, and maintenance, to air traffic control, to heliports and helidecks, and so on.

    A goal of the Norwegian petroleum authorities is to reduce the helicopter transport risk and they therefore ask: What is the risk in the coming period compared to the risk in the previous period? Risk analyses have been carried out to estimate the risk in the coming period (e.g., five years), and the risk estimates have been compared with the safety performance in previous periods.

    2.5.3 Risk-Influencing Factors

    Hokstad et al. (2001) define a risk-influencing factor as:

    Risk influencing factor (RIF): A relatively stable condition that influences the risk.

    A RIF is not an isolated event but an enduring condition that influences the occurrence of hazardous events and the performance of the barriers. RIFs may therefore be categorized as either frequency-influencing factors or consequence-influencing factors. The RIFs represent average conditions that can be improved by certain actions and hence are under the influence of risk managers.

    EXAMPLE 2.11 Risk-influencing factors in helicopter transport

    As part of the helicopter safety studies (see Example 2.10), SINTEF has identified a set of RIFs that influence the risk of helicopter transport. The RIFs are classified into three main categories:

    (a) Operational RIFs, which relate to the activities that are necessary to ensure that helicopter transport is safe and efficient on a day-to-day basis: for example, operations, procedures, and maintenance.

    (b) Organizational RIFs, which comprise the organizational basis, support, and control of helicopter activities concerning helicopter manufacturers, operators, and so on.

    (c) Regulatory and customer-related RIFs, which concern requirements and controlling activities from national and international aviation authorities and customers.

    RIFs play a central role in the risk assessment methodology BORA (barrier and operational risk analysis), which models the effect of operational and organizational factors on risk in offshore activities (Sklet, 2006b). Following this methodology, the performance of safety barriers is assessed by use of risk influence diagrams and weighting of relevant RIFs. BORA is discussed further in Chapter 12.

    2.5.4 Desired Risk

    Within the field of risk analysis, risk is usually considered to be an unwanted side effect of technological activities. However, in our modern society, risk is sometimes deliberately sought and desired. Machlis and Rosa (1990) refer to this as desired risk:

    Desired risk: Risk that is sought, not avoided, because of the thrill and intrinsic enjoyment it brings.

    Gambling, high-speed driving, drug use, and recreational activities such as hang-gliding, skydiving, and mountain climbing are high-risk activities that people engage in for reasons of sensation-seeking, social reward, and mastery. Common to all is that risk is attractive and essential to the experience. In the eye of the performer, it is therefore not a principal aim to minimize desired risk, but rather to seek it out.

    2.5.5 Risk Homeostasis

    Wilde (1982) claims that every person has her or his own fixed level of acceptable risk. If the perception of danger increases, people behave more cautiously. On the other hand, people tend to behave less cautiously in situations where they feel safer or more protected. This theory is called risk homeostasis. Wilde (1982) argues that the same is also true for larger human systems: for example, a population of car drivers. When the technical safety of cars increases, the drivers will tend to drive less cautiously.

    2.5.6 Residual Risk

    Risk analysis provides an estimate of the risk associated with a particular set of hazardous events. After the analysis has been completed, it must be determined whether the risk is acceptable or if risk-reducing measures are necessary. The risk that remains after introducing such measures is the residual risk:

    Residual risk: The risk that remains after engineering, administrative, and work practice controls have been implemented (SEMATECH, 1999).

    Residual risk is closely related to risk acceptance. If acceptable risk is defined according to the ALARP principle (see Chapter 4), then residual risk means the risk that is evaluated to be as low as reasonably practicable. By reference to deterministic risk acceptance criteria, the residual risk is, instead, acceptable at the cutoff limit and no further evaluation is required. Note that this also holds for unmitigated consequences that are considered acceptable without implementation of risk-reducing measures.

    2.5.7 Perceived Risk

    When people without professional expertise make intuitive judgments about risk, it is not called risk assessment, but risk perception.

    Risk perception: Subjective judgment about the characteristics and severity of risk.

    Risk perception consists basically of our attitudes and mental images of the severity of risks. It is influenced by a broad set of phenomena that go beyond the mere technical conception of risk as a combination of accident scenarios, probabilities, and adverse outcomes. Central to risk perception are factors such as voluntariness, novelty, controllability, and dread (Slovic, 1987).

    Several theories have been proposed to explain why different people make different judgments of risks. Psychological approaches are concerned with cognitive biases (called heuristics) and negative affect, while anthropologists explain risk perception as a product of cultural belonging and institutional mistrust. Interdisciplinary approaches, such as the social amplification of risk framework (Kasperson et al., 1988), focus on information processes, institutional structures, and social responses.

    Studying risk perception is important because it explains how people behave in hazardous situations and make decisions in the face of risk. Carefully designed risk management programs, such as antismoking campaigns, are of little value if people perceive the risk as low and do not see the need to comply. On the other hand, the public resistance toward nuclear power plants in the United States and Europe in the 1970s demonstrated that risk perceptions may cause a riot even though risk assessments conclude that the risk is low.
