Reliability and Probabilistic Safety Assessment in Multi-Unit Nuclear Power Plants
Ebook · 503 pages · 4 hours


About this ebook

Reliability and Probabilistic Safety Assessment in Multi-Unit Nuclear Power Plants presents the risk contributions from single and multi-unit nuclear power plants to help aggregate the risks that may arise from the applicable hazards and operating states. The book combines the key features of multi-unit risk assessment in one resource, reviewing the practices adopted in various countries around the globe to exemplify the dependencies between units on a site. These dependencies include multi-unit interactions, environmental stresses, the sharing of systems, and the sharing of human resources in a control room, factors which can all introduce an increased potential for heightened accident conditions. This book helps readers systematically identify events and evaluate possible accident outcomes within multi-unit sites. It serves as a ready reference for PSA analysts in identifying a suitable site and the sharing of resources while carrying out multi-unit risk assessments to ensure the safety of the public and the environment. It will also be valuable for nuclear researchers, designers and regulators of nuclear power plants, nuclear regulatory agencies, PSA engineers and practicing safety professionals.
  • Provides a framework for nuclear and PSA researchers and professionals on the design and operation of multi-unit risk assessments
  • Reviews practices adopted in various regions around the globe to analyze dependencies between units
  • Includes modeling techniques of inter-connections and shared resources, as well as risk aggregation
Language: English
Release date: Feb 9, 2023
ISBN: 9780128193938
Author

Senthil C. Kumar

Dr. Senthil C. Kumar has over 28 years of research experience at the Indian Atomic Energy Regulatory Board (AERB) and is presently Head of the Risk Assessment and GIS Section of the AERB-Safety Research Institute at Kalpakkam. His areas of specialization include reliability engineering, risk assessment studies, probabilistic safety assessment, software reliability, seismic safety and statistical analysis. Dr. Kumar was involved in the Level-1 probabilistic safety assessment of the Prototype Fast Breeder Reactor, specifically in the reliability analysis of the shutdown system, decay heat removal system and power supply system, and in passive system reliability analysis. He has contributed significantly to the seismic re-evaluation of the Fast Breeder Test Reactor at Kalpakkam. His post-doctoral work covers software reliability for computer-based systems in safety-critical operations, real-time scheduling for adaptive fault tolerance in multiprocessor systems, software testing, fault injection and mutation studies. After his Ph.D. in Reliability from Anna University in 2005, Dr. Kumar did his post-doctoral research in Sweden in 2009-10. He has guided 5 Ph.D. students, two MS students and several research fellows. He is a reviewer of journal articles for Elsevier publications and has authored or co-authored more than 26 international peer-reviewed journal publications and several national publications.


    Book preview

    Reliability and Probabilistic Safety Assessment in Multi-Unit Nuclear Power Plants - Senthil C. Kumar

    Preface

    Accidents at more than one unit at a multi-unit nuclear site are a serious concern for regulators, especially after the Fukushima accident. Safety experts are making continuous efforts to develop guidance documents and establish risk assessment methodologies for multi-unit sites, as there is growing concern about the impact of dependencies among the nuclear power plants co-located at a site. Various methods are being developed to model these dependencies and identify the critical contributors to the site risk. Efforts are also underway to aggregate the risk associated with each unit and develop a holistic approach to address the factors that are important to risk assessment across multiple units. Further, the International Atomic Energy Agency is strengthening and increasing its peer review and advisory missions to member states with the aim of harmonizing the different approaches adopted for multi-unit risk assessment.

    The aim of this book is to give an overview of the various aspects involved in multi-unit risk assessment, with emphasis on new developments, general trends, and points of interest regarding multiple accidents at a nuclear site. It is recognized that probabilistic safety assessment (PSA) is a systematic methodology used to evaluate risks and to obtain insights into the weak links in the design and operation of a nuclear power plant. PSA is also a very useful tool to identify and prioritize safety improvements. A detailed discussion of the PSA method and risk assessment strategies is therefore presented before the discussion of multi-unit PSA methodologies in the book.

    Internationally, enough expertise is available on the application of PSA to single units, yet it is challenging to identify the unique factors to be modelled in a multi-unit PSA. Moreover, at a multi-unit site, the operation of different types of reactors, site-specific internal and external hazards, different combinations of reactor operating states, common and shared safety systems, and interaction effects are some of the major issues to be considered, and hence it is difficult to arrive at a consensus on a uniform multi-unit PSA approach. The international developments in these areas are compiled to give an overview of the evaluation of site-level risk, integrating the various risk contributions from the different radiological sources at a site into the multi-unit risk assessment.

    Though the focus of this book is the multi-unit risks at sites with multiple large-scale nuclear reactors, its importance for small modular reactor sites and for the chemical and petrochemical industries is also highlighted. It is expected that the content of the book will be useful for both academics and experts in safety-critical industries with multi-unit operations.

    C. Senthil Kumar

    Chapter 1

    Reliability modeling

    Contents

    1.1 Reliability mathematics

    1.1.1 Set theory

    1.1.2 Fundamentals of Boolean algebra

    1.2 Probability theory

    1.2.1 Conditional probability

    1.2.2 Bayes theorem

    1.3 Probability distributions

    1.3.1 Discrete probability distribution

    1.3.2 Continuous probability distribution

    1.4 System reliability

    Further readings

    The fundamental concepts of reliability and PSA are derived from and built upon reliability mathematics. System and component reliability modelling involves the theory of statistical distributions, which is explained in this chapter.

    1.1 Reliability mathematics

    1.1.1 Set theory

    A set is a collection of objects called elements. For example, the numbers 1, 2 and 3 can form a set, A = {1, 2, 3}. A set can be finite or infinite. The set of positive integers and the set of natural numbers are examples of infinite sets. The empty set is a set with no elements, ϕ = {}.
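
    For illustration, these set operations can be tried directly in Python, whose built-in set type mirrors the notation used here (the particular sets below are arbitrary examples):

        # Illustrative sketch: basic set operations with Python's built-in set type.
        A = {1, 2, 3}          # a finite set
        B = {2, 3, 4, 5}
        empty = set()          # the empty set, written {} in the text

        print(A | B)           # union: {1, 2, 3, 4, 5}
        print(A & B)           # intersection: {2, 3}
        print(A - B)           # difference: {1}
        print(len(empty))      # 0 elements in the empty set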

    1.1.2 Fundamentals of Boolean algebra

    Boolean algebra is widely used in the field of reliability. In Boolean algebra, the possible values for a variable are either 1 or 0. Similarly, in reliability, systems and components can be in either a success or a failure state. Consider a variable X that denotes the state of a component, and assume 1 represents success and 0 represents failure. Then the probability that X is equal to 1, P(X = 1), is called the reliability of that particular component. Depending upon the configuration of the system, the system will also be in either a success or a failure state. Based on this binary state assumption, Boolean algebra can be conveniently used.

    Boolean algebra is the set of rules used to simplify or minimize a given logic expression without changing its functionality.
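
    For illustration, such simplification rules can be checked by enumerating all possible 0/1 assignments of the variables. The short Python sketch below verifies the absorption law X + X·Y = X in this way (the law is quoted here only as an example of a Boolean identity):

        # Illustrative sketch: verify the Boolean absorption law X + X*Y = X
        # by enumerating every 0/1 assignment of the variables.
        from itertools import product

        def OR(a, b):  return a | b
        def AND(a, b): return a & b

        for x, y in product([0, 1], repeat=2):
            lhs = OR(x, AND(x, y))   # X + X*Y
            rhs = x                  # X
            assert lhs == rhs
        print("X + X*Y = X holds for all 0/1 assignments")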

    Logic gates: Logic gates are the building blocks of any digital system. A logic gate has one or more inputs but only one output. The types of logic gates are as follows:

    a. Basic Gates: AND gate, OR gate, NOT gate

    b. Universal Gates: NAND gate, NOR gate

    c. Special Gates: XOR gate, XNOR gate

    AND gate: An AND gate has two or more inputs but only one output which is the product of all the inputs.

    To relate the inputs to the outputs, a truth table is used.

    Truth table for AND gate:

        A  B | A·B
        0  0 |  0
        0  1 |  0
        1  0 |  0
        1  1 |  1

    OR gate: Like an AND gate, OR gate also has two or more inputs but only one output which is the sum of all the inputs.

    Truth table for OR gate:

        A  B | A+B
        0  0 |  0
        0  1 |  1
        1  0 |  1
        1  1 |  1

    NOT gate: A NOT gate is also called an inverter gate. It has a single input and single output and the output is the complement of the input.

    Truth table for NOT gate:

        A | A'
        0 | 1
        1 | 0

    A universal gate is a gate that can be used to implement any other gate without using any other type of gate.

    NAND gate: It has two or more inputs but only one output. The output is the complement of the product of the inputs. In simple terms, a NAND gate is a combination of NOT and AND gate.

    Truth table for NAND gate:

        A  B | (A·B)'
        0  0 |   1
        0  1 |   1
        1  0 |   1
        1  1 |   0

    NOR gate: It has two or more inputs but only one output. The output is the complement of the sum of the inputs. In simple terms, a NOR gate is a combination of NOT and OR gate.

    Truth table for NOR gate:

        A  B | (A+B)'
        0  0 |   1
        0  1 |   0
        1  0 |   0
        1  1 |   0

    Special gates are exclusive gates used in particular digital circuits.

    XOR gate: It has two or more inputs and only one output. XOR is also called the exclusive OR gate; for two inputs, its output is 1 only when the inputs differ.

    Truth table for XOR gate:

        A  B | A⊕B
        0  0 |  0
        0  1 |  1
        1  0 |  1
        1  1 |  0

    XNOR gate: It has two or more inputs and only one output. XNOR is also called the exclusive NOR gate and is the complement of the XOR gate; for two inputs, its output is 1 when the inputs are equal.

    Truth table for XNOR gate:

        A  B | (A⊕B)'
        0  0 |   1
        0  1 |   0
        1  0 |   0
        1  1 |   1
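
    For illustration of the universality property mentioned above, the following Python sketch builds NOT, AND, OR and XOR entirely from two-input NAND gates and checks the results against the corresponding truth tables:

        # Illustrative sketch: building other gates from the universal NAND gate.
        from itertools import product

        def NAND(a, b): return 1 - (a & b)

        def NOT(a):     return NAND(a, a)
        def AND(a, b):  return NOT(NAND(a, b))
        def OR(a, b):   return NAND(NOT(a), NOT(b))
        def XOR(a, b):  return AND(OR(a, b), NAND(a, b))

        for a, b in product([0, 1], repeat=2):
            assert AND(a, b) == (a & b)
            assert OR(a, b)  == (a | b)
            assert XOR(a, b) == (a ^ b)
        print("NAND-built AND, OR and XOR match their truth tables")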

    1.2 Probability theory

    In probability theory, an experiment is an operation which results in a well-defined outcome. Experiments are of two types: deterministic and probabilistic (random).

    If an experiment, when repeated under identical conditions, results in the same outcome every time, it is called a deterministic experiment.

    If an experiment, when repeated under identical conditions, does not result in the same outcome every time, but the result is one of several possible outcomes, it is called a probabilistic or random experiment.

    Sample space: The set of all possible outcomes of a random experiment is called the sample space of the experiment. The sample space is denoted by S. If a coin is tossed, S = {H, T}.

    Consider an experiment in which you measure the thickness of a material. The possible values for the thickness depend on the resolution of the measuring instrument, and they also depend on upper and lower bounds for the thickness. It is easiest to define the sample space as the positive real line,

    S = {x | x > 0},

    because a negative value cannot occur.

    If it is known that the thickness will be between 10 and 15 mm, the sample space could be

    S = {x | 10 < x < 15}.

    If the objective of the analysis is to consider only whether a particular part is low, medium, or high in thickness, the sample space might be taken to be the set of three outcomes, S = {low, medium, high}.

    If the objective of the analysis is to consider only whether or not a particular part conforms to the required specifications, the sample space might be simplified to the set of two outcomes, S = {yes, no}, that indicate whether or not the part meets the requirement.

    A sample space is discrete if it consists of a finite or countably infinite set of outcomes. A sample space is continuous if it contains an interval (either finite or infinite) of real numbers.

    Event: An event is a subset of the sample space associated with a random experiment. It is denoted by E or A, B, C, etc.

    Axioms of probability: Probability is a number that is assigned to each member of a collection of events from a random experiment that satisfies the following properties. If S is the sample space and E is any event in a random experiment:

    1. P(S) = 1

    2. 0 ≤ P(E) ≤ 1

    3. For two events E1 and E2 with E1 ∩ E2 = ϕ, P(E1 ∪ E2) = P(E1) + P(E2)

    The property that 0 ≤ P(E) ≤ 1 is equivalent to the requirement that a relative frequency must be between 0 and 1. The property that P(S) = 1 is a consequence of the fact that an outcome from the sample space occurs in every trial of an experiment. Consequently, the relative frequency of S is 1. Property 3 implies that if the events E1 and E2 have no outcomes in common, the relative frequency of outcomes is the sum of the relative frequencies of the outcomes in E1 and E2.

    Elementary event: If a random experiment is performed, then each of its outcomes is known as an elementary event. That is, an event which contains only one element is called an elementary event.

    Impossible event: A null set (ϕ) is a subset of every set. An event E = ϕ = {} is called an impossible event. If a coin is tossed, an event which results in neither head nor tail, E = {}, is an impossible event.

    Compound event: An event which contains more than one element is called a compound event.

    Probability is used to quantify the likelihood, or chance, that an outcome of a random experiment will occur. "The chance of rain today is 30%" is a statement that quantifies our feeling about the possibility of rain. The likelihood of an outcome is quantified by assigning a number from the interval [0, 1] to the outcome (or a percentage from 0% to 100%). Higher numbers indicate that the outcome is more likely. A probability of 0 indicates that an outcome will not occur. A probability of 1 indicates that an outcome will occur with certainty. Different individuals will no doubt assign different probabilities to the same outcomes.

    Another interpretation of probability is based on the conceptual model of repeated replications of the random experiment. The probability of an outcome is interpreted as the limiting value of the proportion of times the outcome occurs in n repetitions of the random experiment as n increases beyond all bounds. For example, if we assign probability 0.2 to the outcome that there is a corrupted pulse in a digital signal, we might interpret this assignment as implying that, if we analyze many pulses, approximately 20% of them will be corrupted. This example provides a relative frequency interpretation of probability. The proportion, or relative frequency, of replications of the experiment that result in the outcome is 0.2.

    When the model of equally likely outcomes is assumed, the probabilities are chosen to be equal.

    Whenever a sample space consists of N possible outcomes that are equally likely, the probability of each outcome is 1/N.
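
    For illustration, the relative frequency interpretation can be seen in a short simulation: as the number of rolls of a fair six-sided die grows, the proportion of rolls showing any one face approaches 1/N = 1/6. A minimal Python sketch:

        # Illustrative sketch: relative frequency of one outcome of a fair die
        # approaching 1/N = 1/6 as the number of trials grows.
        import random

        random.seed(1)
        for n in (100, 10_000, 1_000_000):
            hits = sum(1 for _ in range(n) if random.randint(1, 6) == 6)
            print(n, hits / n)      # tends towards 0.1667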

    A random experiment can result in one of the outcomes {a, b, c, d} with probabilities 0.2, 0.1, 0.4, and 0.3, respectively. Let A denote the event {a, b}, B the event {b, c, d}, and C the event {d}. Then,

    P(A) = 0.2 + 0.1 = 0.3

    P(B) = 0.1 + 0.4 + 0.3 = 0.8

    P(C) = 0.3

    P(A') = 0.7, P(B') = 0.2 and P(C') = 0.7

    P(A ∩ B) = 0.1

    P(A ∪ B) = 1 and P(A ∩ C) = 0
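
    For illustration, the same results can be reproduced by treating each event as a set of outcomes and summing the outcome probabilities; a minimal Python sketch:

        # Illustrative sketch: the probabilities above, computed by treating
        # each event as a set of outcomes and summing their probabilities.
        p = {"a": 0.2, "b": 0.1, "c": 0.4, "d": 0.3}

        def P(event):
            return round(sum(p[o] for o in event), 10)

        A, B, C = {"a", "b"}, {"b", "c", "d"}, {"d"}
        S = set(p)                                 # sample space {a, b, c, d}

        print(P(A), P(B), P(C))                    # 0.3 0.8 0.3
        print(P(S - A), P(S - B), P(S - C))        # complements: 0.7 0.2 0.7
        print(P(A & B), P(A | B), P(A & C))        # 0.1 1.0 0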

    Mutually exclusive events: Two events are said to be mutually exclusive if the occurrence of one event prevents the occurrence of the other event. When two events cannot happen simultaneously, they are said to be mutually exclusive. If events A and B are mutually exclusive, then the probability of A and B occurring together is zero. That is, P(A and B) = 0.

    Independent events: Two events are said to be independent if the occurrence of one does not affect the probability of occurrence of the other event. Event A is said to be independent of event B if the probability of occurrence of event A is not affected by the occurrence of B.

    If events A and B are independent, then P(A and B) = P(A) x P(B).

    In contrast, two events A and B are mutually exclusive if and only if

    P(A ∪ B) = P(A) + P(B),

    i.e., P(A ∩ B) = 0.

    Problem:

    The following circuit operates only if there is a path of functional devices from left to right. Each device functions with probability 0.95, and the devices fail independently. What is the probability that the circuit operates?

    Let T and B denote the events that the top and bottom devices operate, respectively. There is a path if at least one device operates. The probability that the circuit operates is

    P(T or B) = 1 − P(T' and B')

    P(T' and B') = P(T')P(B') = (1 − 0.95)² = 0.05²

    And so,

    P(T or B) = 1 − 0.05² = 0.9975
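
    For illustration, the result can be checked both directly and by a crude Monte Carlo simulation in which each device is sampled independently; a minimal Python sketch (the sample size is arbitrary):

        # Illustrative sketch: reliability of the two-device parallel path,
        # computed directly and checked with a crude Monte Carlo simulation.
        import random

        r = 0.95                                   # reliability of each device
        direct = 1 - (1 - r) ** 2                  # 1 - P(both fail) = 0.9975

        random.seed(2)
        n = 200_000
        works = sum(1 for _ in range(n)
                    if random.random() < r or random.random() < r)
        print(direct, works / n)                   # ~0.9975 in both cases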

    Exercise:

    1.2.1 Conditional probability

    The concept of conditional probability is the most important in all of probability theory. Given two events A and B associated with a random experiment, P(A|B) is defined as the conditional probability of A given that B has occurred: P(A|B) = P(A ∩ B)/P(B). If A and B are independent events, then P(A|B) = P(A). Thus, when A and B are independent, the probability that A and B occur together is simply the product of the probabilities that A and B occur individually.

    In a production process, 10% of the items contain flaws and 25% of the items with flaws are functionally defective. However, only 5% of items without flaws are defective. If D denotes the event that an item is defective and F the event that an item has a flaw, then the probability that an item is defective given that it has a flaw is P(D|F) = 0.25. F' denotes the event that an item does not have a flaw, and because 5% of the items without flaws are defective, we have P(D|F') = 0.05.
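
    From the same figures, the overall probability that an item is defective follows from the total probability rule, P(D) = P(D|F)P(F) + P(D|F')P(F'); a minimal Python sketch of the arithmetic:

        # Illustrative sketch: total probability of a defective item,
        # P(D) = P(D|F)P(F) + P(D|F')P(F'), using the figures in the example.
        P_F = 0.10           # item has a flaw
        P_D_given_F = 0.25   # defective given a flaw
        P_D_given_Fc = 0.05  # defective given no flaw

        P_D = P_D_given_F * P_F + P_D_given_Fc * (1 - P_F)
        print(round(P_D, 3))     # 0.025 + 0.045 = 0.07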

    1.2.2 Bayes theorem

    In many situations, we might know one conditional probability but need to calculate a different one. This need was the basis for developing Bayes theorem. Statistically speaking, or in reliability terms, if we know the prior distribution of a component parameter and obtain new evidence or information about it, Bayes theorem helps to obtain a posterior distribution by updating the prior with the new evidence.

    Consider a semiconductor for which we know the probability of failure for various levels of contamination in manufacturing:

    If the semiconductor chip in the product fails, what is the probability that the chip was exposed to high levels of contamination?

    From the definition of conditional probability,

    P(A ∩ B) = P(A|B)P(B) = P(B∩A) = P(B|A)P(A)

    Now considering the second and last terms in the expression above, we can write

    P(A|B) = P(B|A)P(A)/P(B), for P(B) > 0.

    This is a useful result that enables us to solve for P(A|B) in terms of P(B|A).

    If P(B) in the above equation is expressed in its full form using the total probability rule, P(B) = P(B|E1)P(E1) + P(B|E2)P(E2) + … + P(B|En)P(En) for mutually exclusive and exhaustive events E1, E2, …, En, we obtain the Bayes equation:

    P(Ei|B) = P(B|Ei)P(Ei) / [P(B|E1)P(E1) + P(B|E2)P(E2) + … + P(B|En)P(En)].

    Another example: In a tire manufacturing factory, machines A, B, and C produce 20%, 30%, and 50% of the tires, respectively. Of the tires manufactured, 4%, 5%, and 2% are defective from machines A, B, and C, respectively. If a tire drawn at random is found to be defective, what is the probability that it was manufactured by machine B?

    Let

    E1: tire manufactured from machine A, P(E1) = 20/100 = 0.20

    E2: tire manufactured from machine B, P(E2) = 30/100 = 0.30

    E3: tire manufactured from machine C, P(E3) = 50/100 = 0.50

    Since the defective probability is also known,

    P(defective tire from machine A) = P(D|E1) = 4/100 = 0.04

    P(defective tire from machine B) = P(D|E2) = 5/100 = 0.05

    P(defective tire from machine C) = P(D|E3) = 2/100 = 0.02

    Now, the probability that a defective tire was manufactured by machine B is obtained from Bayes theorem:

    P(E2|D) = P(D|E2)P(E2) / [P(D|E1)P(E1) + P(D|E2)P(E2) + P(D|E3)P(E3)]

    = (0.05 × 0.30) / (0.04 × 0.20 + 0.05 × 0.30 + 0.02 × 0.50) = 0.015/0.033 ≈ 0.455.
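
    For illustration, the same calculation can be written as a few lines of Python, which makes it easy to obtain the posterior probability for any of the machines:

        # Illustrative sketch: Bayes theorem for the tire example.
        prior  = {"A": 0.20, "B": 0.30, "C": 0.50}   # P(machine)
        defect = {"A": 0.04, "B": 0.05, "C": 0.02}   # P(defective | machine)

        # Total probability of drawing a defective tire.
        p_defective = sum(prior[m] * defect[m] for m in prior)    # 0.033

        # Posterior probability that a defective tire came from machine B.
        posterior_B = prior["B"] * defect["B"] / p_defective
        print(round(p_defective, 3), round(posterior_B, 3))       # 0.033 0.455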

    Random variables

    Because the particular outcome of the experiment is not known in advance, the resulting value of our variable is not known in advance. For this reason, the variable that associates a number with the outcome of a random experiment is referred to as a random variable.

    A random variable is a function that assigns a real number to each outcome in the sample space of a random experiment. A random variable is denoted by an uppercase letter such as X. After an experiment is conducted, the measured value of the random variable is denoted by a lowercase letter such as x = 8 inches.

    Random variables can be classified into two categories, namely, discrete and continuous random variables. A random variable is said to be discrete if its sample space is countable. The number of power outages in a plant in a specified time is a discrete random variable. A discrete random variable is a random variable with a finite (or countably infinite) range; examples include the number of scratches on a surface, the proportion of defective parts among 1000 tested, and the number of transmitted bits received in error. Whenever the measurement is limited to discrete points on the real line, the random variable is said to be a discrete random variable.
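
    For illustration, a random variable is simply a function on the sample space; the minimal Python sketch below takes X to be the number of heads in two tosses of a fair coin and tabulates its (discrete) distribution:

        # Illustrative sketch: a random variable as a function on the sample space.
        from collections import Counter

        sample_space = ["HH", "HT", "TH", "TT"]      # two coin tosses

        def X(outcome):                              # X = number of heads
            return outcome.count("H")

        # Equally likely outcomes give the distribution of the discrete variable X.
        counts = Counter(X(o) for o in sample_space)
        dist = {x: n / len(sample_space) for x, n in counts.items()}
        print(dist)                                  # {2: 0.25, 1: 0.5, 0: 0.25}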
