Reliability Assessment of Safety and Production Systems: Analysis, Modelling, Calculations and Case Studies
Ebook · 1,956 pages · 13 hours

About this ebook

This book provides, as simply as possible, sound foundations for an in-depth understanding of reliability engineering with regard to qualitative analysis, modelling, and probabilistic calculations of safety and production systems.

Drawing on the authors’ extensive experience within the field of reliability engineering, it addresses and discusses a variety of topics, including:

• Background and overview of safety and dependability studies;

• Explanation and critical analysis of definitions related to core concepts;

• Risk identification through qualitative approaches (preliminary hazard analysis, HAZOP, FMECA, etc.);

• Modelling of industrial systems through static (fault tree, reliability block diagram), sequential (cause-consequence diagrams, event trees, LOPA, bowtie), and dynamic (Markov graphs, Petri nets) approaches;

• Probabilistic calculations through state-of-the-art analytical or Monte Carlo simulation techniques;

• Analysis, modelling, and calculations of common cause failures and uncertainties;

• Linkages and combinations between the various modelling and calculation approaches;

• Reliability data collection and standardization.

The book features illustrations, explanations, examples, and exercises to help readers gain a detailed understanding of the topic and apply it in their own work. Further, it analyses the production availability of production systems and the functional safety of safety systems (SIL calculations), showcasing specific applications of the general theory discussed. Given its scope, this book is a valuable resource for engineers, software designers, standard developers, professors, and students.

Language: English
Publisher: Springer
Release date: Mar 23, 2021
ISBN: 9783030647087

    Book preview

    Reliability Assessment of Safety and Production Systems - Jean-Pierre Signoret

    © Springer Nature Switzerland AG 2021

    J.-P. Signoret, A. Leroy, Reliability Assessment of Safety and Production Systems, Springer Series in Reliability Engineering, https://doi.org/10.1007/978-3-030-64708-7_1

    1. Introduction

    Jean-Pierre Signoret¹ and Alain Leroy²

    (1) Total Professeurs Associés, Sedzère, France
    (2) Montreuil, France

    Jean-Pierre Signoret
    Email: j-p.signoret@orange.fr

    Alain Leroy (Corresponding author)
    Email: alain.leroylyon@gmail.com

    1.1 Human Enterprises Involve Risks

    Since ancient times, human beings and even their predecessors have had to rely on their tools and weapons to survive in a wild and dangerous natural environment. This is why they designed simple but effective artefacts made of wood, bone and stone. Plenty of objects from the Stone Age have been exhumed from archaeological sites spread all over the planet, and this testifies to an intensive lithic craft industry in which a kind of standardization can even be observed (raw material, form, size). The Stone Age was followed by the Bronze and Iron Ages, during which artefacts were improved. At this point, it is interesting to notice that the lifetime of artefacts began to decrease: this is mainly observable for iron objects, which are rapidly destroyed by corrosion; a centuries-old iron sword is less likely to be intact when exhumed than a flint biface several millennia old. Agriculture remained the dominant activity throughout the historical ages, but dedicated corporations and guilds of craftsmen continued to develop and improve the production of objects until the industrial revolution of the nineteenth century, when industry as known nowadays was born. Since that time, the world seems to have entered the Anthropocene era, in which industrial systems become more and more complex and more and more likely to produce new artificial hazards from which people and the environment have to be protected (safety). At the same time, the economic point of view becomes more and more prevalent, and designing efficient and cost-effective (i.e. dependable) systems also becomes more and more important (dependability).

    Thus, any human enterprise implies some risks; those risks increase with the complexity of the systems developed nowadays, and numerous events have occurred to remind us of that:

    Fukushima (Japan, 2011), Chernobyl (Ukraine, then USSR, 1986), Three Mile Island (USA, 1979) for the nuclear risk.

    Boeing 737 Max 8 (Ethiopia, 2019), Concorde (France, 2000), Tenerife (Spain, 1977) for the aeronautic risk.

    Ariane 5 (French Guiana, 1996), Challenger (USA, 1986), Apollo 13 (USA, 1970) for the space risk.

    Bhopal (India, 1984), Seveso (Italy, 1976), Flixborough (United Kingdom, 1974) for the chemical risk.

    Elgin (North Sea, United Kingdom, 2012), Macondo (Gulf of Mexico, 2010), Piper Alpha (North Sea, Scotland, 1988) for the oil and gas risk.

    Grande America (Bay of Biscay, 2019), Prestige (Spain, 2002), Exxon Valdez (Alaska, 1989), Torrey Canyon (Scilly Isles, United Kingdom, 1967) for the oil spill risk.

    Lac-Mégantic (Canada, 2013), Santiago de Compostela (Spain, 2013), Eschede (Germany, 1998) for the railway risk.

    Costa Concordia (Italy, 2012), Estonia (Baltic Sea, 1994), Herald of Free Enterprise (Belgium, 1987), RMS Titanic (North Atlantic Ocean, 1912) and several ferries in Indonesia or Korea for the naval transportation risk.

    The various blackouts observed throughout the world (New York, France), the stock exchange crashes (New York, worldwide), the pandemics (plague, Spanish influenza, AIDS, COVID-19) and climate disruption could be added to this list, which validates the commonly claimed assertion that any human enterprise involves risks and that zero risk does not exist.

    As a matter of fact, many managers were very reluctant to accept this assertion in the seventies, because they claimed that applying rules and regulations was necessary and sufficient to make risk disappear. Nowadays, some of the same managers have completely changed their minds, as they have realized that they can use the assertion as an alibi to explain away accidents: after all, everyone knows that zero risk does not exist!

    1.2 Philosophy to Master the Risks

    While zero risk is a utopia, reducing risk to an acceptable level has been a quest since the beginning of human history. From the trial-and-error approach of the early days to the sophisticated approaches used nowadays, the key point is to use past experience to improve the future.

    In this regard, the thought of the French positivist philosopher Auguste Comte, who said about philosophy that, "like plain common sense, the true philosophical mind consists in knowing what is, to predict what will be, in order to improve it as far as possible", can be adopted as a way of thinking by reliability engineers/risk managers:

    $${\text{knowing}} \Rightarrow {\text{forecasting}} \Rightarrow {\text{improving}}.$$

    The various approaches developed in the reliability field, including those proposed in this book, have been developed to help do exactly that, provided that they are combined with a minimum of the analysts' plain common sense.

    © Springer Nature Switzerland AG 2021

    J.-P. Signoret, A. Leroy, Reliability Assessment of Safety and Production Systems, Springer Series in Reliability Engineering, https://doi.org/10.1007/978-3-030-64708-7_2

    2. Background

    Jean-Pierre Signoret¹ and Alain Leroy²

    (1) Total Professeurs Associés, Sedzère, France
    (2) Montreuil, France

    Jean-Pierre Signoret
    Email: j-p.signoret@orange.fr

    Alain Leroy (Corresponding author)
    Email: alain.leroylyon@gmail.com

    2.1 A Short History of Reliability Analysis

    2.1.1 Premises

    Before looking at the future, it can be very instructive to look at the past. Throughout history, authorities have tried to master risks by issuing laws and regulations, generally after some detrimental event had been observed.

    A good example of the way risks were managed in the nineteenth century is given by the orders from Napoléon the 1st to the Prefect of Var (a department in the South East of France) about the forest fires occurring there: "Shoot on the spot whoever is suspected of having lit the fires, and you will be replaced by a new Prefect in case of new fires". History has not recorded what actually happened, but this is likely to have had only a small influence, as fires still occur every year in this area! Even if the strong repression described above is no longer in use, rules or regulations continue to be issued almost every time new serious detrimental events occur. Deterministic in nature, this very old trial-and-error approach is expected to prevent the unwanted events from occurring again and is still the basis for designing safe systems. Nevertheless, rules and regulations are effective only to some extent, as the unwanted events are seldom completely prevented and, in addition, they have no impact on events which have never been observed. Probability-oriented techniques have therefore been introduced as a complement, through a relatively slow process which started at the end of the First World War.

    At that time, the idea of quantitatively comparing systems designed to do the same things was born in the minds of aeronautics specialists. Having noticed that twin-engine aircraft seemed less prone to crashes than single-engine aircraft, they decided to calculate an indicator equal to the ratio of the number of crashes to the number of flight hours. As expected, this indicator was higher for single-engine than for twin-engine aircraft; this confirmed the specialists' intuition and also proved the usefulness of redundancy with regard to reliability.
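
    As an illustration, this indicator is simple enough to compute directly; the sketch below does so in Python for two hypothetical fleets (all figures are invented for illustration, not historical data):

        # Crash-rate indicator: number of crashes divided by cumulated flight hours.
        def crash_rate(crashes: int, flight_hours: float) -> float:
            return crashes / flight_hours

        # Invented fleet figures, purely illustrative:
        single_engine = crash_rate(crashes=30, flight_hours=200_000)  # 1.5e-04 per hour
        twin_engine = crash_rate(crashes=5, flight_hours=150_000)     # ~3.3e-05 per hour
        print(f"single-engine: {single_engine:.1e} crashes per flight hour")
        print(f"twin-engine:   {twin_engine:.1e} crashes per flight hour")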

    This simple statistical approach was used to compare aircraft from a crash point of view until the thirties, when a new idea emerged: beyond comparing aircraft on the basis of past events, this indicator could be used to predict future crashes. This was the starting point of a new discipline, recognized and named reliability theory later on, in the fifties.

    2.1.2 The Beginning

    It was during the Second World War that reliability theory really arose. The most famous example is, unfortunately, its use in developing the German V1 flying bomb and V2 rocket. At the beginning, all the V1s exploded on their launch pads or fell into the Channel.

    This lasted until the aircraft engineer Robert Lusser got involved in the project and found what was wrong in the V1 design. Making an analogy with a simple chain, he brought to light that "a chain cannot be stronger than its weakest link". Translated into the reliability field, this led to the first and fundamental reliability property: a system made of several components in series cannot be more reliable than the least reliable of these components. Translated in turn into probabilistic form, this led to the famous Lusser theorem: "The probability of success of a series of components is equal to the product of the probabilities of success of each of these components".
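
    A minimal sketch of Lusser's product law, with made-up component reliabilities, is:

        import math

        # Lusser's law: the reliability (probability of success) of a series
        # system is the product of its component reliabilities.
        component_reliabilities = [0.99, 0.95, 0.98]  # illustrative values

        system_reliability = math.prod(component_reliabilities)
        print(f"series system reliability = {system_reliability:.4f}")  # 0.9217

        # A series system is never more reliable than its weakest component:
        assert system_reliability <= min(component_reliabilities)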

    From this point on, it was understood that identifying and improving weak points is of utmost importance for improving the reliability of a given system. Even if this seems mere common sense, applying this simple idea likely improved the probability of successful V1 and then V2 launches… although this was certainly not seen as progress by those exposed to these bombs.

    Redundancy is a technique widely used to increase reliability, and nowadays the hunt for weak points often consists in identifying possible common cause failures between redundant elements.
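
    A rough numerical illustration of why common cause failures matter so much for redundancy can be given with the classical beta-factor model (the rates, the beta value and the mission time below are all assumed):

        import math

        # Beta-factor model: a fraction beta of the failure rate lam is assumed
        # to affect both redundant channels at once. All values are illustrative.
        lam = 1e-6    # component failure rate per hour (assumed)
        beta = 0.10   # fraction of failures that are common cause (assumed)
        t = 8760.0    # mission time: one year, in hours

        q = 1 - math.exp(-lam * t)                   # single-component failure probability
        q_ind = 1 - math.exp(-(1 - beta) * lam * t)  # independent contribution
        q_ccf = 1 - math.exp(-beta * lam * t)        # common-cause contribution

        p_ideal = q ** 2             # 1oo2 pair with fully independent failures
        p_ccf = q_ind ** 2 + q_ccf   # rare-event approximation (cross terms neglected)

        print(f"ideal redundant pair: {p_ideal:.2e}")  # ~7.6e-05
        print(f"with 10% CCF:         {p_ccf:.2e}")    # ~9.4e-04, dominated by the CCF term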

    In the forties and fifties, the dissemination of the reliability approach took place mainly in the aeronautic, nuclear and military industries, and mainly in the United States. During the Second World War, 50% of the spares and equipment in storage became unserviceable before use. At the beginning of the Korean War, about 70% of US Navy electronic gear did not operate properly, and the failure analysis of items was initiated later on (Kececioglu 2002). When the US Department of Defense noticed that, for every $1 of electronic equipment, $2 of maintenance was spent per year (Villemeur 1988 or 1992), it became clear that equipment should be reliable by design: the science of failure was born. Reliability requirements were introduced into calls for tender for electronic components but, at that time, the analysis techniques were not available and the engineering know-how was not sufficient to meet the requirements. Therefore, the providers willing to bid had to demonstrate the reliability of their products with statistics obtained by undertaking long-duration tests.

    Later on, the results were used to issue the very famous reliability data handbook Military Standard 217, Reliability prediction of electronic equipment, which was used for decades (first issued in 1965) to assess the reliability of electronic systems. The latest issues of this document, MIL-HDBK-217F (1995) and Quanterion 217Plus (2015), might be replaced by the standard IEC 63142, based on the FIDES project (UTE 2011), when it is issued.

    It was also in the forties that FMECA (failure modes, effects and criticality analysis) was originally developed, by the US Army. This bottom-up (inductive) approach is still widely used nowadays and has been standardized in IEC 60812 (2019).

    At the same time as the science of failure began, the science of restoring failed equipment (maintainability) also began to develop, with the aim of decreasing maintenance costs. Then, at the end of the fifties, the importance of human failures was brought to light and the first attempts to take them into account were made, mainly in aeronautics.

    Reliability engineering started to develop into a separate discipline in 1952; the first National Symposium on Reliability and Quality Control was organized as early as 1954, and the first Annual Reliability and Maintainability Symposium was held in 1962, in the United States (Kececioglu 2002).

    2.1.3 A Step Forward of the Reliability Approach

    In the sixties, the work carried out in the fifties was extended to systems made of electrical, mechanical or hydraulic components. This required the development of new analysis techniques, better adapted than those devised for electronic components. It was in this context that H. A. Watson of the Bell Telephone Laboratories designed the so-called FTA (fault tree analysis) method, which was used to assess the safety of the launching of the Minuteman missiles (Watson 1961). Very quickly, this top-down (deductive) approach was disseminated throughout industry because it made it possible to describe how complex (or rather, complicated) systems can fail. From the beginning it was used by NASA for the Mercury, Gemini and Apollo programs (after the Apollo 1 launch pad fire) and by Boeing to design commercial aircraft.

    Since the sixties, fault tree analysis has been widely used, often in combination with FMECA. It is still widely used by reliability engineers and has been standardized in IEC 61025 Ed. 3.0 (in progress); it is invaluable, being the single top-down approach used to model and analyse system reliability in its broad acceptation.
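
    To give a flavour of the quantitative side of FTA, here is a minimal sketch in which a top event probability is computed bottom-up through OR and AND gates; the gate structure and basic-event probabilities are invented, and the basic events are assumed independent:

        # Minimal fault tree evaluation with independent basic events.
        def gate_or(*probs: float) -> float:
            """P(at least one input event occurs), events independent."""
            p_none = 1.0
            for q in probs:
                p_none *= (1.0 - q)
            return 1.0 - p_none

        def gate_and(*probs: float) -> float:
            """P(all input events occur), events independent."""
            p_all = 1.0
            for q in probs:
                p_all *= q
            return p_all

        # Hypothetical top event: loss of cooling = pump fails OR both valves stuck.
        p_pump, p_valve_a, p_valve_b = 1e-3, 5e-3, 5e-3
        p_top = gate_or(p_pump, gate_and(p_valve_a, p_valve_b))
        print(f"top event probability: {p_top:.3e}")  # ~1.02e-03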

    It has to be noted that RBD (reliability block diagram) models were likely already in use at that time, but history has not kept track of by whom and when they were invented.

    It was in the sixties that HAZOP (hazard and operability study) was developed, in the chemical industry. Like FMECA, it is a bottom-up approach; it has been standardized in IEC 61882 (2016) and is still widely used nowadays.

    The first comprehensive textbook exclusively devoted to reliability engineering was published by Igor Bazovsky in 1961 (Bazovsky 2004), and MIL-STD-882, System Safety Program Requirements, was first issued in 1969. The PHA (preliminary hazard analysis) was formally instituted and promulgated by the developers of MIL-STD-882A. Like FTA and HAZOP, it is still widely used.

    In parallel with the technological side, A. D. Swain developed, in 1963, the THERP method (technique for human error rate prediction) (Guttmann and Swain 1983) to assess the safety of nuclear weapons. This is the ancestor of the HRA (human reliability assessment) approach (Bell et al. 2009) used for evaluating and reducing the probability of human error during a specific task.

    With the arrival of the Boeing 747, a wide-body aircraft, airline operators realized that their maintenance activity would require considerable change due to a large increase in scheduled maintenance costs. They jointly organized the so-called Maintenance Steering Group (MSG), which issued the MSG-1 document in 1968. The term "reliability centered maintenance" (RCM), which first appeared in the civil aviation industry, is now commonly used in all industries (IEC 60300-3-11 2017).

    At the same time as reliability modelling was being improved, the need for reliability data was increasing: no probabilistic calculation is possible without reliability data. Therefore, the first tables providing reliability data for components (pumps, valves) or human factors were established and issued from the beginning of the sixties.

    2.1.4 Consolidation of the Reliability Approach

    Following the lead of the aerospace industry, the nuclear power industry began to use these methods in the design and development of nuclear power plants in the seventies, and the main improvements then took place. Due to its original sin, Hiroshima, the nuclear industry was obliged, for the first time in industry, to prove that the risk was acceptable even before the plants were built, that is to say, without experience from previously operated installations. The lack of field feedback therefore had to be compensated for by modelling, analysing and calculating, and many engineers were mobilized for this purpose. It was in this context that N. Rasmussen and his teams, working for the US Nuclear Regulatory Commission, achieved and issued the first comprehensive risk analysis of nuclear power plants: PWR (pressurized water reactor) and BWR (boiling water reactor). The FTAs invented in the sixties were used to analyse system failures, and these were then combined through event trees to identify and calculate the probabilities of scenarios leading to various detrimental consequences. The THERP method was also used to assess the influence of the human factor with regard to nuclear safety. This resulted in the large WASH-1400 report (Rasmussen 1975) which, beyond its results about nuclear safety, consolidated the success of FTA and made the success of ETA (event tree analysis). It also brought to light the importance of common cause failures. The ETA approach has been widely used since the seventies and is still in use nowadays. The success of event trees, now standardized in IEC 62502 (2010), has unfortunately almost completely obliterated the cause-consequence diagram approach (Nielsen 1971), which had been developed earlier and was easier to handle: it could be interesting to re-discover this approach.
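
    A minimal sketch of the event tree idea follows; the initiating event frequency and the branch probabilities are invented, and each scenario frequency is simply the initiating frequency multiplied by the probabilities along its branches:

        # Minimal event tree: an initiating event followed by two protection layers.
        # All figures are illustrative.
        f_init = 1e-2          # initiating event frequency (per year)
        p_alarm_fails = 1e-2   # probability that the alarm layer fails on demand
        p_esd_fails = 1e-3     # probability that the emergency shutdown fails on demand

        f_safe = f_init * (1 - p_alarm_fails)                     # alarm works
        f_mitigated = f_init * p_alarm_fails * (1 - p_esd_fails)  # alarm fails, ESD works
        f_accident = f_init * p_alarm_fails * p_esd_fails         # both layers fail

        print(f"safe outcome:      {f_safe:.2e} per year")
        print(f"mitigated outcome: {f_mitigated:.2e} per year")
        print(f"accident scenario: {f_accident:.2e} per year")    # 1.0e-07 per year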

    In France, the first methods and tools allowing probabilistic safety studies to be performed were introduced in the sixties, in telecommunications by Schwob and Peyrache (1969), then in aeronautics by Lievens (1976) and in the nuclear industry by Pagès and Gondran (1986) and by Villemeur (1988). A bureau devoted to probabilistic safety studies was created in the French atomic energy commission (CEA, Commissariat à l'énergie atomique) at the beginning of the seventies. The approaches developed at that time included FMECA, FTA, ETA, cause-consequence diagrams and Markov graphs. A great effort has been made since 1974 to develop software packages dealing with probabilistic calculations: analytical calculations based on fault trees or Markov graphs, and Monte Carlo simulation based on specific models.

    It has to be noted that the Markov approach, developed in 1906, is certainly the oldest approach used for reliability calculations. It is very much used for academic work. Like that of Monte Carlo simulation, the use of Markov calculations has increased thanks to the increasing calculation power of computers. Nowadays it is often used in combination with fault trees, and it has been standardized in IEC 61165 (2006).
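
    For a single repairable component, a minimal Markov sketch (with assumed failure and repair rates) is shown below; the steady-state availability obtained by solving the state equations matches the classical formula μ/(λ + μ):

        import numpy as np

        # Two-state Markov model of a repairable component (illustrative rates):
        # state 0 = working, state 1 = failed.
        lam = 1e-4   # failure rate per hour
        mu = 1e-1    # repair rate per hour (mean repair time: 10 h)

        # Transition rate matrix (generator): each row sums to zero.
        A = np.array([[-lam,  lam],
                      [  mu,  -mu]])

        # The steady-state vector p satisfies p @ A = 0 with p summing to 1.
        M = np.vstack([A.T, np.ones(2)])
        b = np.array([0.0, 0.0, 1.0])
        p, *_ = np.linalg.lstsq(M, b, rcond=None)

        print(f"availability (state equations): {p[0]:.6f}")
        print(f"availability (mu/(lam+mu)):     {mu / (lam + mu):.6f}")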

    At the end of the seventies, the UKAEA (United Kingdom Atomic Energy Authority) applied the techniques developed in the nuclear industry to perform a complete safety analysis of the Canvey Island petrochemical complex (Canvey 1978). This study, which was almost entirely published, was the first of its kind performed on non-nuclear installations.

    No casualties but huge economic losses were reported from the Three Mile Island accident (Rogovin et al. 1979), which occurred at the end of the seventies. As usual, this stimulated research and development in the domain of nuclear safety in general and reliability in particular.

    Within the same period, the Aerospatiale company (now Airbus) developed new reliability analysis approaches (Lievens 1976) adapted to aircraft development and based on the combination of summarized significant failures. They were systematically used to design the supersonic aircraft Concorde and then for the Airbus program, and they have certainly contributed to the high degree of reliability of these aircraft. Nowadays, air transportation is one of the safest means of transportation even if, unfortunately, accidents occur from time to time; this consolidates the idea that zero risk does not exist and confirms that designers have to be vigilant at all times about reliability and field feedback.

    It was during the development of Concorde that the first probabilistic regulations were promulgated. Failures were classified as minor, significant, critical and catastrophic, and risk objectives were set in terms of failures per flight hour. For example, at that time the objective was set to 10⁻⁷ per hour for catastrophic failures (i.e. crash of the aircraft). The aim was to divide by 10 what had been observed on the previous aircraft generation, in order to greatly increase air flight safety and to keep the crash frequency at an acceptable level. This was done for obvious safety reasons but also to avoid economic problems due to flight bans from safety authorities or to user rejection of air transportation: the single accident of the Concorde quickly led to its abandonment.

    2.1.5 Dissemination in All the Industry Sectors

    The link between the seventies and the eighties was made by the Seveso accident, which occurred in Italy in 1976. This accident suddenly made European citizens aware of the risks linked to chemical processes: a cloud of dioxin had been released into the atmosphere and, in many respects, this was similar to a nuclear accident, with toxic fallout that the exposed inhabitants were not able to detect. They could not know whether they were contaminated or not, because some experts explained in the media that there was absolutely no risk while, at the same time, others explained that dioxin was one of the most dangerous products ever produced by human beings! This frightened the European population and, although no casualties were recorded, the accident is still considered to have been very serious. It led the European Union to issue, in 1982, the Seveso directive, imposing the identification of hazards related to major-risk plants and the production of safety reports assessing the probability and consequences of those identified hazards (Seveso III 2012). It has to be noted that the 7,575 official immediate deaths (and certainly many more) of the Bhopal accident (India) in 1984 (Wikipedia 2020) did not have such an impact on regulations … but it was far away. The Seveso 1 directive was the first regulation in which probabilistic assessment and fault trees were explicitly mentioned and, at the present time, the third version of the directive has been in use since 2015.

    Beyond the aeronautics, space and nuclear sectors already mentioned, this directive, as well as the need to improve economic performance, has progressively led to the dissemination of reliability approaches in most industry sectors: oil and gas, chemistry, petrochemistry, railways, automotive, etc.

    Further work has been undertaken to improve the previous approaches, new approaches have been developed, and this process is still in progress nowadays. It has been boosted by the huge increase in the calculation power of personal computers, which in turn has allowed the development of powerful reliability software packages helping analysts to perform, at low cost, accurate analyses and exact calculations on large industrial systems. This has even opened the way to the use of Monte Carlo simulation which, previously, was too time-consuming and costly to be commonly used for reliability studies. In parallel with these improvements, an intensive standardization effort has been undertaken toward civil industry.

    It is difficult to mention all the developments achieved from the eighties until now, but the following can be mentioned:

    improvement in Boolean model calculations (fault trees, reliability block diagrams, event trees): implementation of binary decision diagrams (BDD) (Bryant et al. 1986);

    improvement in Markov model calculations: increase of the model size and use in combination with fault trees (FT-driven Markov processes);

    developments related to functional safety (IEC 61508 (2010); ISO/TR 12489 (2013)): safety integrity level (SIL) requirements for safety instrumented systems (SIS) and techniques like risk graphs, LOPA (layer of protection analysis) or bowtie models (CCPS 2001; Torres-Echeveria 2014; IEC 61511 (2016); ISO/IEC 31010 (2019); Wikipedia Bowtie 2020);

    developments related to maintenance: reliability centred maintenance (RCM) (IEC 60300-3-11 (2017)), reliability-based inspection (RBI), integrated logistic support (ILS) (IEC 60300-3-12 (2011));

    developments related to dynamic systems: dynamic reliability block diagrams (DRBD), dynamic fault trees (DFT) and especially Petri nets (IEC 62551 (2012)) used in combination with Monte Carlo simulation;

    extension to economic aspects (dependability), e.g. production availability of production systems (ISO 20815 (2018));

    development of high-level formal languages to model both correct functioning and system failures: specialized languages (e.g. AltaRica (Batteux et al. 2019)) or languages deriving from model-based engineering (e.g. SysML, UML, AADL) (Roques 2013; SysML 2020; Wikipedia SysML 2020; UML 2020; Wikipedia UML 2020; SAE 2012; Wikipedia AADL 2020);

    development of the link with LCC (life cycle cost) (IEC 60300-3-3 (2017); ISO 15663 Ed. 1.0 (2021));

    development of new approaches to take the human factor into account: human cognitive reliability (HCR), or human error assessment and reduction technique (HEART) (Williams 1985).

    A complete corpus of methods and tools is now available for reliability studies, and this implies that more and more reliability or operational data are needed to feed the models and to perform accurate calculations. The effort devoted to reliability data collection has been weaker than that devoted to improving models and, in addition, the existing databases are generally dedicated to a given industry sector and difficult to use outside it. This is why analysts are often starved of data; nevertheless, the following can be mentioned:

    databases developed within the military equipment framework (MIL-HDBK-217F 1995);

    databases developed within the regulatory framework of the nuclear industry (e.g. SRDF (Aupied and Procaccia 1984));

    the OREDA database, developed in the oil and gas industry since 1982 (OREDA 2020);

    PERD (Process Equipment Reliability Database), developed in the chemical industry (CCPS 2020);

    FIDES (IEC 63142, to be issued; UTE 2011), already mentioned, developed for electronic components;

    etc.

    The standards IEC 60300-3-2 (2004) (Collection of dependability data from the field), ISO 7385 (1983) (Nuclear power plants: Guidelines to ensure quality of collected data on reliability), ISO 14224 (2016) (Collection and exchange of reliability and maintenance data for equipment) and ISO 6527 (1982) (Nuclear power plants: Reliability data exchange) can be used as bases for undertaking effective reliability data collection.

    2.2 Why, When and How to Implement Reliability Studies

    2.2.1 Why

    As shown in the short history above, the safety of industrial systems was long ensured by the know-how of engineers (the state of the art) and by the rules and regulations promulgated after serious accidents occurred. This practice was effective for a long time but, since the nineteenth century, the complexity of systems has continuously increased, as has the severity of the consequences of accidents. At some point it became clear that, while the application of the state of the art, rules and regulations was still of utmost importance, it was no longer able to ensure an acceptable level of risk for high-risk activities: the application of this approach, deterministic in nature, remains necessary but is no longer sufficient.

    In addition, nowadays and beyond safety, the increased commercial pressure on companies also requires accurate assessments of economically oriented risks (e.g. plant economic effectiveness based on availability calculations). This is the aim of dependability analyses.

    New complementary approaches being obviously needed, reliability and risk analyses have been developed to close the gap: they have proven very effective in identifying, analysing, managing and mitigating risks.

    2.2.2 When

    Performing a reliability study is relatively costly and, like an insurance premium, it seems too expensive … as long as no problem occurs. Therefore, the decision to perform such an analysis should be taken with great care; the main reasons for deciding to launch a study are the following:

    Novelty: past experience gained from similar systems in operation is the main basis for designing industrial systems. Therefore, when designing a new system for which such experience is not available, it is necessary to anticipate the events which are likely to occur when it is actually in operation. The use of reliability models is an easy way to help analysts create this missing experience on paper. They can play with the models and perform calculations to identify the strengths and weaknesses of the system under study, and implement measures preventing the unwanted events (related to safety or economic losses) from occurring even before they can be observed on the system in operation.

    Complexity: the increasing size of systems, the participation of more and more specialized disciplines (electronics, electrotechnics, hydraulics, software, …) and the introduction of more and more automation make system reliability more and more difficult to design. From Aristotle it is well known that the whole is not the sum of its parts, and this was confirmed more recently by Bellman's theorem (Wikipedia Bellman 2020), which says that a system made of optimized parts is not necessarily optimal itself (see Fig. 2.1). Therefore, even when each part of a system is designed at its best (according to the specific discipline), this does not guarantee that the whole system is going to work well, especially when it is made of various interacting subsystems whose interactions can be difficult to identify and take into account. As there is no specialist to stick the parts and subsystems back together, difficulties often occur when they are gathered to form the final system. Fortunately, the analysis of failures does not care about the breakdown of systems between various disciplines and different subsystems. The reliability approaches naturally direct analysts toward a systemic point of view and provide efficient means to cope with the above difficulties.

    Fig. 2.1  Non optimum system made of optimum parts

    Competing risks: safety and dependability generally lead to antagonistic risks: improving safety without regard to economic aspects is likely to degrade dependability, and reciprocally. With increasing system complexity, other competing risks are emerging: for example, closing a valve to prevent an overpressure (a safety action) can in turn lead to safety problems when the valve is reopened. Therefore, many trade-offs have to be made, and reliability and risk analyses prove very effective as decision aids for determining the best ones.

    Severity of consequences: the application of the traditional deterministic approach based on know-how, rules and regulations is sufficient when the expected consequences are low, whereas it is not sufficient for high-risk activities. This is why, in addition, reliability studies are commonly performed in industry sectors where events with potentially severe consequences for safety, the environment, assets or production can occur. This makes it possible to identify and implement preventive actions, to decrease the probability of occurrence of detrimental events, and to implement means of mitigating the consequences when, unfortunately, detrimental events do occur.

    Occurrence of detrimental events: the occurrence of a detrimental event with severe consequences for safety, the environment, assets or production is always an opportunity for operators of similar systems to wonder what to do to prevent this event from occurring again. This generally triggers changes in internal company rules and a significant increase in the number of reliability studies/risk analyses performed in the industry sector concerned. This was observed in the nuclear field after the Three Mile Island (1979), Chernobyl (1986) and Fukushima (2011) accidents, as well as in the oil and gas field after the Piper Alpha (1988) and Deepwater Horizon (2010) accidents. When the detrimental event is symptomatic of a deeper problem, new regulations are also often promulgated and/or new standards developed, as was the case in Europe after the Seveso accident and in the oil and gas field after the Deepwater Horizon accident.

    2.2.3 How

    As previously written, know-how, rules and regulations are the bases for designing systems. In fact, it would not be realistic to try to design a new system from scratch using only risk analyses/reliability studies.

    Figure 2.2 illustrates how deterministic and probabilistic approaches can be combined to design a system according to safety and operational performance requirements. The safety requirements come from regulations (from safety authorities), standards (international or sectoral) and company rules, whereas the operational requirements are established only to satisfy company needs. The design itself is generally not a one-shot process but an iterative one where, at each stage, the safety and operational performances are compared to the criteria to be fulfilled.

    Fig. 2.2  Complementarity between deterministic and probabilistic approaches

    The design process stops when all the criteria are fulfilled and goes on as long as they are not. In the latter case, the design has to be improved and the result compared again to the criteria. When the criteria are easy to fulfil using the deterministic approach, no complementary approach is needed. When they are difficult to fulfil, it is often worthwhile switching to the probabilistic approach to determine which improvements are really needed to reach an acceptable risk level while satisfying the operational requirements.

    A problem often arises when multiple deterministic safety criteria, independent of each other and related to different purposes, have to be applied. This can lead to overly conservative and, sometimes, contradictory requirements. In this case, only a systemic approach can help the designers decide what to do. In addition, the deterministic approach is Manichean in nature, and there is no room for an intermediate answer between "the rule is applied and the system is OK" and "the rule is not applied and the system is not OK". To cope with this problem, a more nuanced and flexible approach is useful. This is precisely what can be expected from the probabilistic approach (risk/reliability analyses) which, as shown in Fig. 2.2, has proven very useful as a complement to the deterministic approach. In this case, probabilistic criteria (e.g. probability of occurrence of unwanted events) can be used instead of deterministic criteria to reach the final design.

    The probabilistic approach can also be used to verify which risk level can actually be expected from a system designed in a deterministic way. In addition, when contradictory rules are encountered, it is a way to prove to a safety authority that the safety level is acceptable even if not all the regulations have been implemented.

    2.3 Name for the New Discipline

    Depending on the industry sector, the probabilistic approaches mentioned above have been gathered under various denominations such as reliability studies, risk analysis, probabilistic risk assessment, risk management, cindynics (from the Greek kindynos: danger/hazard), aleatics (from the Latin alea: dice game/randomness), etc.

    In France, the term adopted has been "sûreté de fonctionnement" (SdF), which means something like "functioning sureness", where "sure" is used with the same meaning as in "assurance". It relates to confidence in the good functioning of systems and encompasses a wide range of points of view, ranging from the impact of events on safety to their impact on economic aspects. This is a good point of view, as those aspects should be considered at the same time in order to find the best trade-offs between them.

    SdF has no exact equivalent in English, but it is close to the meaning of the acronym RAMS (reliability, availability, maintainability and safety), which is often used. The main difficulty comes from the fact that safety and economic aspects (dependability) are considered separately in the standardization field, where dependability and safety are treated in different technical committees: the IEC TC56 committee alone for dependability and numerous other committees for safety. French and English being the two languages used for international standardization, equivalent terms should be used in the two languages: this is not the case for "dependability", which has been improperly translated as "sûreté de fonctionnement". This is quite confusing, as safety is, in principle, excluded from the scope of IEC TC56 although it is the main topic addressed by French reliability engineers working in the SdF field. Even if the gap has closed a little in recent years (safety is no longer completely taboo in TC56), this terminology problem remains, and analysts should be aware of and cautious about it.

    It is to avoid these confusing terms that "Reliability Assessment of Safety and Production Systems" has been adopted as the title of this book, which aims to cover both safety and dependability.

    2.4 Notion of Risk

    2.4.1 Etymology. Danger Versus Peril, Risk and Hazard

    Until now in this book, the terms risk, reliability, etc. have been used in their vernacular acceptation, as most people would use them. In fact, according to the needs of various industries, various disciplines, various people or various standards, these terms and many others have been defined in different ways and have become very polysemic: this is especially the case for the term "risk", for which dozens of definitions can be found (e.g. in the ISO standards).

    To grasp the very meaning of the term "risk", it is necessary to look at the origin of this term. English dictionaries indicate that risk comes from the French word risque. This is interesting, as it implies that the word has, in principle, the same meaning in English and French. About the etymology of these words, French dictionaries indicate that they come from the old Italian risco which, in turn, comes from the Latin risicus, from resecare (to cut). This Latin etymology (i.e. what cuts) led to the acceptation of "steep rock", kept in the Spanish risco, meaning reef. This last term is clearly associated with the risks incurred by goods transported by boat. Therefore, the present-day meaning is the result of a long semantic process starting from the origin. It has always been associated with a detrimental (negative) aspect and never with a beneficial (positive) one. According to dictionaries, it means: "danger/losses more or less foreseeable".

    When looking at the usual dictionaries to distinguish between risk, hazard, danger and peril, a kind of vicious circle is entered, because the meanings are very close and each of them is used to define the others. In the end, they are presented as if they were quasi-synonyms. This is not accurate enough for our purpose but, fortunately, a key is given in the Littré, a reference dictionary for the usage of the French language. It indicates that "risk may be easily distinguished from danger/hazard as it contains less the idea of peril than that of random chance, but considered on the wrong side". Although this was written at the end of the nineteenth century, it clearly identifies the characteristics of the risk concept:

    random chance: i.e. probability or frequency of occurrence of an event

    detrimental consequences of this event if it occurs.

    Danger in itself is simply related to the list of hazardous events that a system is able to generate; there is no link with their probability of occurrence.

    2.4.2 Safety Versus Risk Management Definitions

    Among the dozens of definitions of risk, two are very important as they are used as bases to develop standards:

    ISO/IEC guide 51 (2014): "Combination of the probability of occurrence of harm and the severity of that harm"

    ISO guide 73 (2009): "Effect of uncertainty on objectives".

    The ISO/IEC guide 51 is used for developing safety standards whereas the ISO guide 73 is used for developing risk management standards.

    The ISO/IEC guide 51 is fully in line with the acceptation of the term risk explained in Subsection 2.4.1: something detrimental/unpleasant (harm) which can occur (probability of occurrence).

    This would be perfect, except that this definition relates more to the measure of risk than to the concept of risk itself. This is why risk managers designed the definition found in the ISO guide 73, which intends to overcome the shortcomings of the definition found in the ISO/IEC guide 51. The problem is that several difficulties arise when using this definition:

    uncertainty: this concept is not defined in the ISO guide 73; therefore, according to the rules for building consistent terminology, the concept of risk is not properly defined either;

    objective: this implies that no risk exists when no objective is defined;

    positive risk: as the effect can be positive or negative, users of this definition tend to consider, against the sound meaning of the concept, that risk is no longer only negative but can also be positive.

    Uncertainties:

    They are usually classified into two different categories: aleatory uncertainties and epistemic uncertainties.

    The aleatory uncertainties are linked to randomness which, according to B. Mandelbrot (Wikipedia 2020), can be split into mild randomness and wild randomness. Wild randomness cannot be managed because it leads to chaotic behaviours.

    The epistemic uncertainties are linked to ignorance of the phenomena. Obviously, something which is completely unknown cannot be managed. As a consequence of the incompleteness theorems developed by Kurt Gödel (Wikipedia 2020) in 1931 within the formal logic framework, the introduction of smart components can lead to unpredictable behaviours. As such components become more and more widely used, this is likely to become an important source of epistemic uncertainties in the near future.

    Finally, only mild randomness, i.e. uncertainty linked to the usual probability distributions, can actually be managed, and this comes back to the usual probability framework mentioned in the ISO/IEC guide 51.

    Objective:

    With the ISO guide 73 definition, the risk disappears when no objective is defined: no objective implies no risk!

    This would be an easy way to a head-in-the-sand policy! But beyond the joke, the definition is problematic, as it obviously excludes the risks against which nothing can be done, like natural risks (climate problems, earthquakes, meteorites, giant electromagnetic pulses, …), or risks which are incurred but not yet identified (unknown diseases, exposure to substances with unknown toxic effects, …).

    Positive Risk:

    According to its definition, risk is deeply linked to detrimental potential events, and claiming that it can be positive is a little puzzling. This mistake is often made by people who say that they have taken the risk of winning the lottery, when they have actually taken the risk of losing their bet!

    Following those people, the term "positive risk" is often used in casual conversation among users of the risk management standards (ISO 31000 2018). However, it does not appear in the ISO guide 73 (2009), which only mentions that the effects and the consequences can be positive or negative. There is nothing new in that, as deviation from expected values is the direct result of the random aspects of both the occurrence and the consequences of the related events. More interestingly, and beyond natural randomness, the aim seems to be to cope with changing situations which can become less or more favourable than expected. This is a part of the epistemic uncertainties which are revealed and which require decisions to be taken in a reactive manner. ISO 31000 talks about threats and opportunities, and this explains why the term "opportunity" is often associated with the "positive" (sic) risk. Anyway, even if an opportunity is properly seized, this changes the objective and, once again, the risk is that this new objective is not reached.

    The philosophy of risk/reliability analysis is to systematically use conservative or, at least, best-estimate assumptions. Therefore, if the results are better than expected, this is the result of rational reasoning and voluntary actions, not of a stroke of good fortune as the term positive risk may imply.

    Contingency and Reconciliation between Definitions:

    Considering the concept of contingency is a way to solve the "positive risk" issue.

    Contingency relates to what can happen or not happen. It may therefore be considered the source of the uncertainties mentioned in the ISO guide 73 definition, and this allows the following definitions to be introduced:

    Risk: potentiality of detrimental contingent effects.

    Opportunity: potentiality of beneficial contingent effects.

    Of course, the fact that effects are detrimental or beneficial is a matter of point of view and depends on the stakeholders.

    Potentiality can be measured in terms of chance, probability or likelihood, and effects in terms of consequences. Therefore, a way of reconciliation can be found in NOTE 4 of the ISO guide 73, which indicates that "risk is often expressed in terms of a combination of the consequences of an event and the associated likelihood of occurrence", which is almost the same definition as in the ISO/IEC guide 51 (2014). The two guides thus agree on the way to measure the risk and, if not in letter, then beyond the terminology problems raised above, there is no real discrepancy in spirit between them.

    2.4.3 Risk Overview in Industrial Context

    Risk as a Two-Dimensional Concept

    Events occur rather frequently to remind us that every human enterprise implies risks (see Chap. 1). This was precisely the case when these lines were written (March 2019): the media reported that the Yancheng chemical plant had exploded in China (78 deaths and hundreds of casualties) and that a cruise ship, the Viking Sky, was stuck in a very rough Norwegian Sea (see Chap. 5). She risked capsizing due to the breakdown of her four propulsion engines and, while watching the rescue of the passengers on TV, the idea of having caught a beautiful common cause failure for the book was gently gaining ground in our minds! Fortunately, all the passengers were rescued and the consequences were only economic.

    This consolidates the points of view adopted by the ISO/IEC guide 51 (for the definition of risk) and by the ISO guide 73 (for the measure of risk), and it also confirms the pragmatic point of view adopted by reliability engineers since the origin of reliability studies, long before the guides were issued. Risk is a two-dimensional concept and its severity is related to these two dimensions:

    Chance for a detrimental event to occur. This can be expressed in terms of frequency, probability or likelihood.

    Consequence when the detrimental event has occurred.

    This implies that the risk can be represented using a system of coordinates with two axes, for example with the chance (probability/frequency) as ordinate and the consequence severity as abscissa. This makes it possible to divide the risk space into an acceptable and a non-acceptable zone, as illustrated in Fig. 2.3 for various cases:

    Fig. 2.3  Example of various risk matrices

    On the left-hand side of the figure, risks are considered equivalent provided that the product probability × consequence is the same: this leads to splitting the space between the acceptable and the non-acceptable zone with a linear curve. This is the most simplistic approach.

    In the middle of the figure, it is considered that high-probability, low-consequence risks (e.g. car accidents) are more acceptable to society than low-probability, high-consequence risks (e.g. aircraft crashes, nuclear power plant accidents), and this leads to a non-linear curve splitting the space between the acceptable and the non-acceptable zone. This kind of curve was introduced by F. R. Farmer for analysing nuclear power plant risks (e.g. in the WASH-1400 report, Rasmussen (1975)) and is known as the Farmer curve.

    On the right-hand side of the figure, the chances and the consequences have been split into discrete classes, leading to a risk matrix where three zones are identified instead of only two: an acceptable zone where there is nothing to do, a non-acceptable zone where the design imperatively has to be improved, and a tolerable zone where the risk has to be analysed in more detail to determine whether it can really be accepted as it is (e.g. as low as reasonably practicable, ALARP) or whether further improvements are needed (HSE 2020; Wikipedia ALARP 2020). The classes of probability are generally described in qualitative terms (e.g. certain, likely, possible, unlikely), as are the consequence severity classes (e.g. negligible, marginal, critical, catastrophic). Such matrices are very useful for discussions between safety engineers, and the wording can change according to the industrial domain and the users. They allow more flexible decision-making than the simple curves splitting the space between an acceptable and a non-acceptable zone, and figures can be associated with the classes to characterize them in more rigorous ways; a minimal coded sketch of such a matrix is given after the next paragraph. The use of such matrices is illustrated in Chap. 36 to handle the risk linked to instrumented safety system failures.

    This also implies that there are two levers for bringing a risk from the non-acceptable into the acceptable zone: decreasing its probability of occurrence (1), mitigating its consequences (2), or both (3) (see Fig. 2.3).
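
    The right-hand-side matrix lends itself to a very simple implementation; in the sketch below, the class labels, the scoring rule and the zone boundaries are all invented for illustration and are not taken from any standard:

        # Minimal risk matrix: qualitative probability and severity classes
        # mapped to acceptability zones. All labels and thresholds are invented.
        PROBABILITY = ["unlikely", "possible", "likely", "certain"]        # increasing
        SEVERITY = ["negligible", "marginal", "critical", "catastrophic"]  # increasing

        def risk_zone(probability: str, severity: str) -> str:
            """Classify a (probability, severity) pair into a matrix zone."""
            score = PROBABILITY.index(probability) + SEVERITY.index(severity)
            if score <= 1:
                return "acceptable"      # nothing to do
            if score <= 3:
                return "tolerable"       # analyse further (e.g. ALARP)
            return "non-acceptable"      # the design has to be improved

        print(risk_zone("unlikely", "marginal"))      # acceptable
        print(risk_zone("unlikely", "catastrophic"))  # tolerable -> detailed analysis
        print(risk_zone("likely", "critical"))        # non-acceptable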

    Safety-oriented versus Dependability-oriented Risks

    No assumption is made above about the nature of the consequences, and therefore the combination of chance and consequence provides a very broad definition encompassing all kinds of risks. Consequences are specific to each industry sector but can nevertheless be split according to their impact:

    safety impact: safety-oriented risks, e.g. safety properly speaking (including functional safety) and environmental issues;

    economic impact: dependability-oriented risks, e.g. production availability, profitability and asset issues.

    Thus, the risk issues related to a given system cannot be reduced to a single indicator, and this is why, as already mentioned, the acronym RAMS (reliability, availability, maintainability and safety) is often used when various risks have to be considered at the same time. This is always the case in industry, where safety and dependability are associated in order to operate systems safely and with maximum benefit.

    Unfortunately, safety and dependability objectives are antagonistic most of the time: improving one is likely to be detrimental for the other and vice versa.

    This universal problem should not be forgotten by designers, especially when dealing with safety systems whose safety action can be either inhibited or untimely triggered by component failures. The first case is dangerous because the safety action is not performed when needed and an accident can occur. The second case is safe since, the protected system being shut down, any danger is supposed to disappear. However, real life is more complicated, because a safe failure is safe only with regard to a given situation and can be dangerous with regard to another one. For example, the spurious closure of an emergency shutdown valve devoted to protecting a plant against overpressure can produce a water hammer detrimental to the installation piping. It can also lead to increasing pressure upstream of the valve, inducing a new risk when the valve is reopened. On the other hand, too many spurious actions are likely to increase the probability of human failure, and the safety system can even be, purely and simply, taken out of service if the frequency of spurious actions leads to unacceptable production losses. Indeed, in this last case, the safe failure is transformed into a very dangerous one! This is why achieving safety to the detriment of dependability should be avoided. For example, it is wise to be cautious about an indiscriminate use of the SFF (safe failure fraction) encouraged by the functional safety standard IEC 61508 (2010), which assumes that increasing the probability of safe failure is always beneficial for safety (see Chap. 36).

    Taken to the extreme, considering only safety without taking dependability into account leads to a super-safe installation that is not even able … to start, while considering only dependability without taking safety into account leads to a super-profitable plant … between accidents. Therefore, between these two extremes, trade-offs are needed, and designers should be vigilant about obtaining a good balance between safe and dangerous failures. This is why, for example, the implementation of 2-out-of-3 majority vote logic is often adopted: two dangerous failures are needed to inhibit the safety action, but also two safe failures are needed to untimely trigger it. This is a good compromise, improving safety and dependability at the same time, as the sketch below illustrates.
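
    The following sketch shows this 2-out-of-3 trade-off numerically, using the binomial law with invented per-channel failure probabilities:

        from math import comb

        def at_least_k_of_n(k: int, n: int, p: float) -> float:
            """P(at least k of n identical, independent channels have failed)."""
            return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

        p_dangerous = 1e-3   # per-channel dangerous failure probability (assumed)
        p_safe = 1e-2        # per-channel safe failure probability (assumed)

        # With 2oo3 voting, two channel failures are needed to defeat the logic,
        # whether they inhibit the safety action or trigger it spuriously.
        print(f"single channel, dangerous: {p_dangerous:.1e}")
        print(f"2oo3 logic, dangerous:     {at_least_k_of_n(2, 3, p_dangerous):.1e}")  # ~3e-06
        print(f"single channel, spurious:  {p_safe:.1e}")
        print(f"2oo3 logic, spurious:      {at_least_k_of_n(2, 3, p_safe):.1e}")       # ~3e-04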

Consequently, safety and dependability issues should be considered jointly (see for example Ciliberti et al. 2019) and by the same analysts but, unfortunately, this is seldom observed in industry. Nevertheless, the systemic probabilistic approaches described in this book are useful to design systems which are both safe and dependable.

    Safety versus Dependability from a Probability Point of View

This book being devoted to reliability modelling and calculations, it is useful to analyse the differences between the safety and dependability aspects which have to be considered during the studies. These differences are twofold:

    Probability and consequence point of view:

Safety studies generally deal with events with a low probability of occurrence (rare events which, hopefully, are not really expected to be observed during the life of the installation) but with heavy consequences in terms of casualties, environmental damage or asset losses.

Dependability studies generally deal with frequent events (expected to be seen several times during the life of the installation) with small consequences each time they occur.

End user point of view:

Safety studies are generally performed for safety authorities in order to be allowed to operate a given installation. Therefore, they aim to provide conservative estimates in order to prove that the risk is lower than the threshold set by the safety authority.

Dependability studies are generally performed for internal use as an aid to taking balanced decisions. Therefore, they aim to provide best estimates as close as possible to the actual risk level. A numerical sketch contrasting these two estimation postures is given after this list.
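As a rough numerical sketch of these two postures (the failure count, observation time and confidence level below are hypothetical): from k failures observed over a cumulated operating time T, a best estimate of the failure rate is the point estimate k/T, whereas a conservative estimate is an upper confidence bound such as the classical chi-squared bound:

from scipy.stats import chi2

k, T = 2, 5.0e5  # hypothetical: 2 failures over 5e5 cumulated operating hours

best_estimate = k / T  # ~4.0e-6 /h: best estimate, suited to dependability studies
# 90% upper confidence bound (chi-squared, time-truncated observation):
conservative = chi2.ppf(0.90, 2 * (k + 1)) / (2 * T)  # ~1.06e-5 /h, suited to safety studies

print(best_estimate, conservative)

Here the conservative figure retained by the safety analyst is about 2.7 times higher than the best estimate retained by the dependability analyst.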

Therefore, the two types of analysis are very different from the probabilistic point of view. For safety, the probabilities being low, the usual analytic approximations work well and simplifying assumptions can be used, provided that they are proven to be conservative and that the target risk level is reached. For dependability studies, the situation is completely the opposite: the events are frequent, the usual analytic approximations do not work, and the assumptions have to be as close as possible to real life. The result is that dependability studies imply the use of more detailed models than safety studies, and that analytical calculations are generally not manageable and must be replaced by Monte Carlo simulation. Nevertheless, for safety studies with complex interrelationships, it may also be necessary to use Monte Carlo simulation. This is increasingly easy thanks to the growing calculation power of present-day computers (personal computers or mainframes).
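As a minimal illustration of the Monte Carlo approach (a sketch only, assuming a single repairable component with exponential failure and repair times; actual production availability models are, of course, far richer):

import random

def mc_average_availability(lam: float, mu: float, horizon: float,
                            n_histories: int = 20_000) -> float:
    """Crude Monte Carlo estimate of the average availability of a single
    repairable component (failure rate lam, repair rate mu) over [0, horizon]."""
    total_uptime = 0.0
    for _ in range(n_histories):
        t, up = 0.0, True
        while t < horizon:
            duration = random.expovariate(lam if up else mu)
            duration = min(duration, horizon - t)
            if up:
                total_uptime += duration
            t += duration
            up = not up  # alternate between failure and repair
    return total_uptime / (n_histories * horizon)

# Hypothetical rates: one failure per ~1000 h, repairs lasting ~10 h on average.
# The analytic asymptotic availability mu/(lam + mu) ≈ 0.990 provides a check.
print(mc_average_availability(lam=1e-3, mu=1e-1, horizon=8760.0))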

As a side effect, the increasing calculation power of computers leads to increasingly demanding requirements from the project leaders ordering the studies. This is why increasingly powerful models and tools have been developed and are now available. Unfortunately, reliability data collection lags behind and struggles to feed them.

Safety and Dependability Related Constituent Parts

As mentioned above, the traditional constituent parts of safety and dependability studies are reliability, availability, maintainability and safety (RAMS, see Chap. 4), but numerous other topics are now commonly considered, such as production availability, integrated logistic support (ILS), reliability centred maintenance (RCM), risk based inspection (RBI), life cycle cost (LCC), security (computer abuse, hacking), legal risk (penalties due to regulation violations), fault tolerance (redundancy), confidentiality, …

    Indeed, safety and dependability studies are also closely related to risk management, asset management and quality.

In fact, all the approaches mentioned above pursue a common goal: mastering risks. They are interlinked, and when addressing one of them, provided that the analysis is accurate enough, all the others are likely to be involved to some extent at some point.

    References

    Aupied JR, Procaccia H (1984) SRDF: a system for collecting reliability data from French PWR power plants. Method of failure analysis. Application to the processing of valves data. Nuclear Eng Des 81(1):127–137, Elsevier

Batteux M, Prosvirnova T, Rauzy A (2019) AltaRica 3.0 in 10 modeling patterns. Int J Crit Comput Based Syst 9(1–2):133–165. Inderscience Publishers. https://doi.org/10.1504/ijccbs.2019.098809

    Bazovsky I (2004) Reliability theory and practice, Dover Publications Inc

    Bell J, Holroyd J (2009) Review of human reliability assessment methods. RR679. HSE. Buxton, UK

    Bryant R (1986) Graph based algorithms for Boolean functions manipulation. IEEE Trans Comput 35(8):677–691. IEEE, USA

Canvey Island (1978) An investigation of potential hazards from operations in the Canvey Island/Thurrock area. HMSO, London

CCPS (2001) Layer of protection analysis—simplified process risk assessment. American Institute of Chemical Engineers, Center for Chemical Process Safety, New York, USA

CCPS (2020) Process Equipment Reliability Database (PERD): https://www.aiche.org/ccps/resources/process-equipment-reliability-database-perd. Accessed 18 Apr 2020

Ciliberti V, Ostebo R, Selvik J, Alhanati F (2019) Optimize safety and profitability by use of the ISO 14224 standard and big data analytics. OTC-19634-MS. Houston, USA

    Guttmann H, Swain A (1983) Handbook of human reliability analysis with emphasis on nuclear power plant application, NUREG/CR-1278. USNRC, Washington

HSE (2020) ALARP at a glance. https://www.hse.gov.uk/risk/theory/alarpglance.htm. Accessed September 2020

    IEC 61025 Ed. 3 (in progress) Fault tree analysis (FTA). International Electrotechnical Commission (IEC), Geneva, Switzerland

    IEC 61165 Ed. 2 (2006) Application of Markov techniques, International Electrotechnical Commission (IEC). Geneva, Switzerland

    IEC 61508 Ed. 2.0 (2010) Functional safety. Safety of electrical / electronic / programmable electronic safety-related systems (7 parts). International Electrotechnical Commission (IEC), Geneva, Switzerland

    IEC 61511 Ed. 2.0 (2016) Functional safety. Safety instrumented systems for the process safety sector (3 parts). International Electrotechnical Commission (IEC), Geneva, Switzerland

    IEC 61882 Ed.2 (2016) Hazard and operability studies (HAZOP studies)—application guide. International Electrotechnical Commission (IEC), Geneva, Switzerland

    IEC 62502 Ed. 1.0 (2010) Analysis techniques for dependability. Event tree analysis (ETA). International Electrotechnical Commission (IEC), Geneva, Switzerland

    IEC 60300-3-12 (2011) Dependability management: application guide—integrated logistic support. International Electrotechnical Commission (IEC). Geneva, Switzerland

    IEC 62551 Ed. 1.0 (2012) Analysis techniques for dependability. Petri net techniques. International Electrotechnical Commission (IEC), Geneva, Switzerland

    IEC 60300-3-11 (2017) Dependability management: application guide—reliability centred maintenance. International Electrotechnical Commission (IEC). Geneva, Switzerland

    IEC 60300-3-2 Ed. 2.0 (2004) Dependability management, Part 3-2: Application guide—collection of dependability data from the field. International Electrotechnical Commission (IEC), Geneva, Switzerland

    IEC 60300-3-3 Ed. 3.0 (2017) Dependability management, Part 3-3: Application guide, Life Cycle Costing, International Electrotechnical Commission, Geneva, Switzerland

    IEC 60812 Ed. 3.0 (2019) Failure modes and effects analysis (FMEA and FMECA), International Electrotechnical Commission (IEC), Geneva, Switzerland

    IEC 63142 (in progress) A global methodology for reliability data prediction of electronic components. International Electrotechnical Commission (IEC), Geneva, Switzerland

    ISO 31000 Ed. 2.0 (2018) Risk management. Guidelines. International organization for standardization (ISO), Geneva, Switzerland

    ISO 14224 Ed. 3.0 (2016) Petroleum, petrochemical and natural gas industries. Collection and exchange of reliability and maintenance data for equipment. International organization for standardization (ISO), Geneva, Switzerland

ISO 15663 Ed. 1.0 (2021) Petroleum, petrochemical and natural gas industries. Life cycle costing. International Organization for Standardization (ISO), Geneva, Switzerland

    ISO 20815 Ed. 2.0 (2018) Petroleum, petrochemical and natural gas industries. Production assurance and reliability management. International organization for standardization (ISO), Geneva, Switzerland

    ISO 6527 Ed. 1.0 (1982) Nuclear power plants. Reliability data exchange. General guidelines. International organization for standardization (ISO), Geneva, Switzerland

    ISO 7385 Ed. 1.0 (1983) Nuclear power plants. Guidelines to ensure quality of collected data on reliability. International organization for standardization (ISO), Geneva, Switzerland

ISO Guide 73 Ed. 1.0 (2009) Risk management—vocabulary. International Organization for Standardization (ISO), Geneva, Switzerland

    ISO/IEC 31010 (2019) Risk management–risk assessment techniques. International Organization for Standardization and International Electrotechnical Commission. Geneva, Switzerland

    ISO/IEC Guide 51 Ed. 3.0 (2014) Safety aspects. Guidelines for their inclusion in standards. International organization for standardization (ISO) and International Electrotechnical Commission (IEC), Geneva, Switzerland

    ISO/TR 12489 Ed. 1.0 (2013) Petroleum, petrochemical and natural gas industries. Reliability modelling and calculation of safety systems. International organization for standardization (ISO), Geneva, Switzerland

Kececioglu D (2002) Reliability engineering handbook, revised edn. DEStech Publications Inc, Lancaster

    Lievens C (1976) Sécurité des systèmes. Cepadues-Editions, Toulouse, France

    MIL-HDBK 217 F notice 2 (1995) Military handbook: reliability prediction of electronic equipment, Department of Defense, Washington DC, USA

    MIL-STD-882E (2012) Standard practice: system Safety, US Department of Defense, Washington, USA

MSG-1 (1968) Maintenance evaluation and program development. Air Transport Association Steering Group and US Federal Aviation Administration, USA

Nielsen DS (1971) The cause-consequence diagram method as a basis for quantitative accident analysis. RISO-M-1374. AEK Riso, Roskilde, Denmark

OREDA (2020) https://www.oreda.com/. Accessed September 2020

    Pagès A, Gondran M (1986) System reliability: evaluation and prediction in engineering, Springer

    Quanterion 217Plus (2015) Handbook of 217Plus. Reliability prediction models, Quanterion Solutions Inc. Utica NY. USA

Rasmussen C (1975) Reactor safety study. An assessment of accident risks in U.S. commercial nuclear power plants. WASH-1400 (NUREG-75/014). U.S. Nuclear Regulatory Commission, Washington, USA

Rogovin M, Frampton GF (1979) Three Mile Island: a report to the commissioners and to the public, Vols 1–3. NUREG/CR-1250. USNRC, USA

    Roques P (2013) Modélisation des systèmes complexes avec SysML. Eyrolles, France

SAE Aerospace standard (2012) Architecture analysis and design language (AADL). AS 5506. http://www.sae.org

    Schowb M, Peyrache G (1969) Traité de fiabilité. Masson & Cie editeurs, Paris

    SEVESO III (2012) Directive 2012/18/EU of the European parliament and of the council of 4 July 2012 on the control of major-accident hazards involving dangerous substances, amending and subsequently repealing Council Directive 96/82/EC

SysML (2020) Open Source Project. https://sysml.org/. Accessed September 2020

    Torres-Etcheverria A (2014) On the use of LOPA and Risk Graph for SIL determination. TEES 17th annual international symposium. College Station, Texas, USA

UML (2020) https://www.uml.org/. Accessed September 2020

    UTE C80-811A (2011) Reliability methodology for electronic systems, FIDES guide, Issue A, AFNOR éditions. France

    Villemeur A (1988) Sûreté de fonctionnement des systèmes industriels. Collection de la Direction des Etudes et Recherche d’Electricité de France, Eyrolles

Villemeur A (1992) Reliability, availability, maintainability and safety assessment. Wiley, Chichester
