Risk Management in Life-Critical Systems
Ebook, 599 pages, 6 hours

About this ebook

Risk management deals with prevention, decision-making, action taking, crisis management and recovery, taking into account the consequences of unexpected events. The authors of this book are interested in ecological processes and human behavior, as well as the control and management of life-critical systems, which are potentially highly automated. Three main attributes define life-critical systems: safety, efficiency and comfort. Such systems typically lead to complex and time-critical issues and can belong to domains such as transportation (trains, cars, aircraft), energy (nuclear, chemical engineering), health, telecommunications, manufacturing and services.

The topics covered relate to risk management principles, methods and tools; reliability assessment of human errors as well as system failures; socio-organizational issues of crisis occurrence and management; cooperative work, including human–machine cooperation and computer-supported cooperative work (CSCW); task and function allocation; authority sharing; interactivity; situation awareness; networking and management evolution; and lessons learned from human-centered design.

Language: English
Publisher: Wiley
Release date: October 10, 2014
ISBN: 9781118639368

    Book preview

    Risk Management in Life-Critical Systems - Patrick Millot

    Introduction

    Introduction written by Patrick MILLOT.

    Life-critical systems are characterized by three main attributes: safety, efficiency and comfort. They typically lead to complex and time-critical issues. They belong to domains such as transportation (trains, cars, aircraft, air traffic control), space exploration, energy (nuclear and chemical engineering), health and medical care, telecommunication networks, cooperative robot fleets, manufacturing, and services.

    Risk management deals with prevention, decision-making, action taking, crisis management and recovery, taking into account the consequences of unexpected events. We are interested in ecological processes, human behavior, as well as the control and management of life-critical systems, which are potentially highly automated. Our approach focuses on "human(s) in the loop" systems and simulations, taking advantage of the human ability to cope with unexpected dangerous events on the one hand, and attempting to recover from human errors and system failures on the other. Our competences have been developed both in Human–Computer Interaction and Human–Machine Systems. Interactivity and human-centered automation are our main focuses.

    The approach consists of three complementary steps: prevention, where any unexpected event could be blocked or managed before it propagates; recovery, when the event results in an accident and protective measures become mandatory to avoid damage; and, once an accident has occurred, management of consequences to minimize or remove the most severe ones. Global crisis management methods and organizations are also considered.

    Prevention can be achieved by enhancing both system and human capabilities to guarantee optimal task execution:

    – by defining procedures, system monitoring devices, control and management methods;

    – by taking care of the socio-organizational context of human activities, in particular the adequacy between system demands and human resources. Where this adequacy is lacking, assistance tools must be introduced, drawing on current developments in information technologies and engineering sciences.

    The specialties of our community and the originality of our approaches lie in combining these technologies with cognitive science knowledge and skills in human-in-the-loop systems. Our main related research topics are: the impact of new technology on human situation awareness (SA); cooperative work, including human–machine cooperation and computer-supported cooperative work (CSCW); and responsibility and accountability (task and function allocation, authority sharing).

    Recovery can be enhanced:

    – by providing technical protective measures, such as barriers, which prevent erroneous actions;

    – by developing reliability assessment methods for detecting human errors as well as system failures;

    – and by improving humans' detection and recovery of their own errors, thus enhancing system resilience; human–machine or human–human cooperation is one way to enhance resilience.

    Crisis management consists:

    – of developing dedicated methods;

    – of coping with socio-organizational issues using a multiagent approach, for instance through an adaptive control organization.

    The themes developed in this book relate to complementary topics addressed through pluridisciplinary approaches: some relate more to prevention, others to recovery and the last ones to global crisis management, but all are grounded in concrete application fields among life-critical systems.

    Seventeen chapters contribute to addressing these important issues. We chose to gather them in this book into three complementary parts: (1) general approaches for crisis management, (2) risk management and human factors and (3) managing risks via human–machine cooperation.

    Part 1 is composed of the first six chapters, dedicated to general approaches for crisis management:

    – Chapter 1, written by Guy A. Boy, criticizes the theories, methods and tools developed several years ago, based on linear approaches to engineering systems that consider unexpected and rare events as exceptions, instead of including them in the flow of everyday events handled by well-trained and experienced experts in specific domains. Consequently, regulations, clumsy automation and operational procedures are still accumulated in the short term instead of integrating long-term experience feedback. This results in the concept of quality assurance and human–machine interfaces (HMI) instead of focusing on human–system integration. The author promotes human-centered processes such as creativity, adaptability and problem solving, and the need to be better acquainted with risk taking, preparation, maturity management, complacency emerging from routine operations and educated common sense.

    – Chapter 2, written by Eric Chatelet, starts with well-known concepts in risk analysis but introduces the emerging use of the resilience concept. The vulnerability concept is one of the starting points for extending risk analysis approaches. The author gives an overview of approaches dedicated to the resilience assessment of critical or catastrophic events concerning infrastructures and/or networks.

    – Chapter 3, written by Jean René Ruault, deals with a case study on emergency system management, from an architectural and a system-of-systems engineering perspective. It gives an overview of all the dimensions to take into account when providing a geographical area with the capacity to manage crisis situations – in the present case road accidents – in order to reduce accidental mortality and morbidity and to meet the golden hour challenge. This case study shows how the operational, technical, economic and social dimensions are interlinked, both in the practical use of products and in service provision. Based on a reference operational scenario, the author shows how to define the perimeter and functions of a system of systems.

    – Chapter 4, written by Lucas Stephane, provides an overview of state-of-the-art approaches to critical operations and proposes a solution based on the integration of several visual concepts within a single interactive 3D scene intended to support situated visualization of risk in crisis situations. The author first presents approaches to critical operations and synthesizes risk approaches. He then proposes the 3D integrated scene and develops user-test results and feedback.

    – Chapter 5, written by Stephane Romei, shows the high level of performance attained by the European railway system. It results from several success factors, among which three are of particular importance: (1) expertise and innovation in the design, operation and maintenance of safety-critical technologies, (2) competences in project management and system integration and (3) procedures for risk management. Illustrations are taken from Very High Speed Train technology.

    – Finally, Chapter 6, written by Morten Lind, deals with system complexity, another dimension that influences decisions made by system designers and that may affect the vulnerability of systems to disturbances, their efficiency, the safety of their operations and their maintainability. The author describes a modeling methodology capable of representing industrial processes and technical infrastructures from multiple perspectives. The methodology, called Multilevel Flow Modeling (MFM), has a particular focus on semantic complexity but also addresses syntactic complexity. MFM uses means-end and part-whole concepts to distinguish between different levels of abstraction representing selected aspects of a system. MFM is applied to process and automation design and to reasoning about fault management and the supervision and control of complex plants.

    Part 2 comprises the following five chapters and is related to human factors, the second dimension besides the technical and methodological aspects of risk management:

    – Chapter 7, written by Pietro Carlo Cacciabue, presents a well-formalized and consolidated methodology called Risk-Based Design (RBD) that systematically integrates risk analysis into the design process with the aim of prevention, reduction and/or containment of the hazards and consequences embedded in systems as the design process evolves. Formally, it identifies the hazards of the system and continuously optimizes design decisions to mitigate them or limit the likelihood of the associated consequences, i.e. the associated risk. The author first discusses the specific theoretical problem of handling dynamic human–machine interactions in a safety- and risk-based design perspective. A development for the automotive domain is then considered and a case study complements the theoretical discussion.

    – Chapter 8, written by Frederic Vanderhaegen, presents a new, original approach to analyzing risks based on the dissonance concept. A dissonance occurs when there is a conflict between individual or collective knowledge. A theoretical framework is then proposed to control dissonances, based on the Dissonance Management (DIMAGE) model and the human–machine learning concept. The dissonance identification, evaluation and reduction functions of DIMAGE are supported by automated tools that analyze human behavior and knowledge. Three examples illustrate the approach.

    – Chapter 9, written by René Van Paassen, deals with the influence of human errors on the reliability of systems, illustrated by examples in aviation. While technical developments have increased the reliability of aircraft, it cannot be expected that the human component in a complex technical system has undergone similar advances in reliability. This chapter contains a designer's view on the creation of combined human–machine systems that provide safe, reliable and flexible operation. A common approach in design is to break down a complete system into subsystems and to focus on the design of the individual components. This can, up to a point, be used in the design of safe systems. However, the adaptive nature of the human component, which is precisely the reason for having humans in complex systems, is such that it is not practical to isolate the human as a single component and assume that the synthesis of the human with the other components yields the complete system. Rather, humans merge with the complete system to a far greater extent than often imagined, and a designer needs to be aware of that. The author explores – through reflection on a number of incidents and accidents – the nature of mishaps in human–machine systems and the factors that might have influenced these events. The chapter begins with a brief introduction of the events and an overview of the different ways of analyzing them.

    – Chapter 10, written by Kara Schmitt, challenges the assumption of the US nuclear industry that strict adherence to procedures increases safety, to see if it is still valid and holds true. The author reviews what has changed within the industry, and verifies that the industry does have strict adherence to procedures and a culture of rigid compliance. She offers an application, based on an experimental protocol and expert judgment, showing that strict procedure adherence is not sufficient for overall system safety.

    – Chapter 11, written by Serge Boverie, shows that in Organization for Economic Cooperation and Development (OECD) countries about 90% of accidents are due to intentional or unintentional driver behavior: poor perception or poor knowledge of the driving environment (obstacles, etc.), physiological conditions (drowsiness and sleepiness), poor physical condition (elderly drivers), etc. The author shows how the development of increasingly intelligent advanced driver assistance systems (ADASs) should partly solve these problems. New functions will improve the driver's perception of the environment (night vision, blind spot detection and obstacle detection). In critical situations, they can substitute for the driver (e.g. autonomous emergency braking). The new generation of ADASs will be able to provide the driver with the possibility of adapting the level of assistance to his or her comprehension, needs, aptitudes, capacities and availability. For instance, real-time diagnosis of the driver's state (sleepiness, drowsiness, head orientation or extra-driving activity) is now under development.

    Finally, Part 3 groups together the last six chapters, dedicated to managing risk via human–machine cooperation:

    – Chapter 12, written by Marie Pierre Pacaux-Lemoine, presents a model of human–machine cooperation drawn from different disciplines: human engineering, automation science, computer science, and cognitive and social psychology. The model aims to enable humans and machines to work as partners while supporting interactions between them, i.e. making it easier to perceive and understand other agents' viewpoints and behavior. Such a support is called a common work space (CWS), which we will see again in the following chapters. These principles aim to evaluate the risk of a human–machine system reaching an unstable and unrecoverable state. Several application domains, including car driving, air traffic control, fighter aircraft and robotics, illustrate this framework.

    – Chapter 13, written by Patrick Millot, shows how organizations that improve SA enhance human–machine safety. Indeed, people involved in the control and management of life-critical systems play two kinds of roles: a negative one, with their ability to make errors, and a positive one, with their unique involvement and capacity to deal with the unexpected. The human–machine system designer therefore remains faced with a drastic dilemma: how to combine both roles, a procedure-based automated behavior versus an innovative behavior that allows humans to be aware of and cope with unknown situations. SA, which characterizes the human presence in the system, becomes a crucial concept for that purpose. The author reviews some of the weaknesses of SA and proposes several improvements, especially regarding the effect of the organization and of task distribution among agents, in order to construct an SA distribution and a support for collective work. This issue derives from the human–machine cooperation framework, and the support for collective SA is once again the CWS.

    – Chapter 14, written by Donald Platt, looks at a human-centered design approach to develop a tool that improves SA and cooperation in a remote and possibly hostile environment. The application field relates to deep space exploration. The associated risks include physical, mental, emotional and even organizational risks. Cooperation between astronauts on the planet surface and the mission operator and chief scientist on Earth takes the form of a virtual camera (VC). The VC displays the dialog between the human agents, but it is also a database with various useful information on the planet's geography, geology, etc., which can be recorded in its memory beforehand or downloaded online. It plays the role of a CWS. The author relates how its ability to improve astronaut SA as well as collective SA has been tested experimentally.

    – Chapter 15, written by Makoto Itoh, returns to ADASs. The human driver has to place appropriate trust in the ADAS based on an appropriate understanding of this tool. For this purpose, it is necessary for system designers to understand what trust is and what inappropriate trust is (i.e. overtrust and distrust), and how to design an ADAS that is appropriately trusted by human drivers. The ADAS also has to understand the physiological and/or cognitive state of the human driver in order to determine whether or not it is really necessary to provide assistive functions, especially safety control actions. The author presents a theoretical model of trust in ADASs, which is useful for understanding what overtrust and/or distrust is and what is needed to avoid inappropriate trust. This chapter also presents several driver-monitoring techniques, especially for detecting a driver's drowsiness or fatigue and for detecting a driver's lane-changing intent. Finally, he shows several examples of the design of attention arousal systems, warning systems and systems that perform safety control actions in an autonomous manner.

    – Chapter 16, written by Chouki Sentouh and Jean Christophe Popieul, presents the ABV project (French acronym for low-speed automation). It focuses on the interaction between human and machine with continuous sharing of the driving task, considering the acceptability of the assistance and the driver's distraction and drowsiness. The main motivation of this project is the fact that in many situations the driver is required to drive his/her vehicle at a speed lower than 50 km/h (the speed limit in urban areas) or in congested traffic, for example in the surrounding areas of big cities. The authors describe the specification of cooperation principles between the driver and the lane-keeping assistance system developed in the framework of the ABV project.

    – Finally, Chapter 17 is written by Christophe Kolski, Catherine Garbay, Yoann Lebrun, Fabien Badeig, Sophie Lepreux, René Mandiau and Emmanuel Adam. It describes interactive tables (also called tabletops) that can be considered as new interaction platforms and as collaborative, co-located workspaces allowing several users to interact (work, play, etc.) simultaneously. The authors' goal is to share an application between several users, platforms (tabletops, mobile and tablet devices and other interactive supports) and types of interaction, allowing distributed human–computer interaction. Such an approach may lead to new perspectives for risk management; indeed, it may become possible to propose new types of remote and collaborative ways of working in this domain.

    PART 1

    General Approaches for Crisis Management

    1

    Dealing with the Unexpected

    Chapter written by Guy A. BOY.

    1.1. Introduction

    Sectors dealing with life-critical systems (LCSs), such as aerospace, nuclear energy and medicine, have developed safety cultures that attempt to frame operations within acceptable domains of risk. They have improved their systems’ engineering approaches and developed more appropriate regulations, operational procedures and training programs. System reliability has been extensively studied and related methods have been developed to improve safety [NIL 03]. Human reliability is a more difficult endeavor; human factors specialists developed approaches based on human error analysis and management [HOL 98]. Despite this heavy framework, we still have to face unexpected situations that people have to manage in order to minimize consequences.

    During the 20th Century, we developed methods and tools based on a linear¹ approach to human–machine systems (HMS). We developed user interfaces and operational procedures based on experience feedback [IAE 06]. We have accumulated a giant amount of operational knowledge. In other words, we tried to close a world that is still open. As a result, anytime human operators deviate from the (linear) norm, we talk about noise, or even about the unexpected. In fact, this model of the world tends to consider the unexpected as an exception, which could be explained by the fact that engineering was developed with the normal distribution in mind, supported by the Gaussian function, where any event that deviates beyond a (small) given number of standard deviations is ignored. This simplification does not take into account that context may change, and simplification assumptions may turn out to be wrong when context changes. This is what nonlinear dynamic systems are about. Therefore, when a bigger deviation occurs, it is considered as rare and unexpected. It would be fair to say that once a simplification is made we should be aware of the limitations that it introduces.
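
    A minimal illustration (our addition, not the author's) makes this point concrete; the 3σ threshold is an illustrative choice:

    \[
    f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left(-\frac{(x-\mu)^{2}}{2\sigma^{2}}\right),
    \qquad
    P\bigl(|X-\mu| > 3\sigma\bigr) \approx 0.0027 .
    \]

    Under this model, an event more than three standard deviations from the mean has a probability of roughly 0.27% and is typically dismissed as negligible noise; if the real-world distribution is heavier-tailed or context-dependent, that dismissal is exactly the simplification warned about above.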

    Quantitative risk assessments are typically based on the numerical product of event probability and consequence magnitude. This formula does not work when we deal with very small probabilities and huge consequences; it is mathematically undetermined. The misconception that the unexpected is exceptional comes from this probabilistic approach to operations and, more generally, to standardized life. In contrast, LCS human operators deal with the unexpected all the time in their various activities, and with possibilities and necessities instead of probabilities [DUB 01].
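
    Written out in symbols (the notation R, p and C is ours; the chapter states the formula only in words), the risk measure and its limiting behavior for rare, catastrophic events are:

    \[
    R = p \times C,
    \qquad
    \lim_{p \to 0,\; C \to \infty} p\,C \;=\; 0 \cdot \infty \quad \text{(indeterminate)} .
    \]

    Depending on how fast the probability vanishes relative to how fast the consequence grows, the product can converge to zero, to any finite value, or diverge, which is why the formula is called mathematically undetermined in this regime.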

    The 21st Century started with the Fukushima nuclear tragedy [RAM 11], which highlighted the fact that our linear/local approaches to engineering must be revised, and even drastically changed, to address our world in a nonlinear and holistic manner. This is not possible without addressing complexity in depth. Nature is complex. People are part of nature; therefore HMSs are necessarily complex: even if machines could be very simple, people create complexity once they start interacting with these machines. Therefore, when talking about safety, reliability and performance, people are the most central element. Instead of developing user interfaces once systems are fully developed, as is still commonly done today (the linear/local approach to HMS), it is urgent to integrate people and technology from the very beginning of design processes. This is why human–system integration (HSI) is now a better terminology than HMS or human–computer interaction. The term system in HSI denotes both hardware (machine, in mechanical engineering terms) and software (the most salient part of contemporary computers).

    This necessary shift from linear/local to nonlinear/holistic has tremendous repercussions on the way technology is designed. The engineering community rationalized design and manufacturing, and produced very rigid standards, to the point that it is now very difficult to design a new LCS without being constantly constrained and prevented from any purposeful innovation. To a certain extent, standardization is a successful result of the linear/local approach to engineering. Even human factors have been standardized [EAS 04]. However, we tend to forget that people still have fundamental assets that machines or standardization systems do not and will never have: they are creative, adaptable and can solve problems that are not known in advance. These assets should be better used both in design and operations. Standard operational procedures are good sociocognitive support in complex operations, but competence, knowledge and adaptability are always the basis for solving hard problems. For that matter, both nonlinear/holistic and linear/local approaches should be used, and in that order: they should be combined with nonlinear/holistic at the top (the design part) and linear/local at the bottom (the implementation part). In other words, human-centered design should oversee technology-centered engineering [BOY 13a].

    It is time to (re-)learn how to deal with the unexpected using a nonlinear approach, where experience and expertise are key assets. People involved in the design and operations of LCSs require knowledge and competence in complex systems design and management, in the domain at stake (e.g. aerospace, nuclear), in teamwork and in risk taking. Dealing with the unexpected requires accurate and effective situation awareness, a synthetic mind, decision-making capability, self-control, multitasking, stress management and cooperation (team spirit). This chapter presents a synthesis using examples in the aviation domain compiled from a conference organized in 2011 by the Air and Space Academy on the subject [ASA 13].

    1.2. From mechanics to software to computer network

    Our sociotechnical world is changing drastically. When I was at school, I learned to simplify hard problems in order to solve them using methods and techniques derived from linear algebra, for example. This is a very simplified view of what my generation learned, but it represents a good image of the 20th Century's engineering background. We developed very sophisticated applied mathematics and physics approaches to build systems such as cars, aircraft, spacecraft and nuclear power plants. However, everything that engineers learned and used was very much linear by nature. Any variability was typically considered as noise, which needed to be filtered. We managed to build very efficient machines that not only extended our capabilities, but also enabled us to do things that were not naturally possible before.

    The 20th Century was the era of mechanics more than anything else, conceptually and technologically speaking. Then a new era came, supported by the development of modern computers, in which software took a major role. Software introduced a totally different way of thinking because machines were able to perform more elaborate tasks by themselves. We moved from the manipulation of mechanical devices to interaction with software agents. We moved from control to management. The first glass cockpits and fly-by-wire technology drastically changed the way pilots were flying. Information technology and systems have invaded cockpits and intensively support pilots' activities. New kinds of problems emerge when systems fail and manual reversion is necessary. In other words, nowadays pilots not only need to master the art of flying, but also need to know how to manage systems. Even if these systems have become more reliable and robust, they do not remove the art of flying. We always need to remember that flying is not a person's natural capability (i.e. people do not fly like birds do); it is a cognitive ability that needs to be learned and embodied through extensive and long training using specific prostheses such as aircraft.

    This brings to the fore the difficult issue of tools versus prostheses. We have never stopped automating technology. Automation can be seen as a natural extension of human capabilities, that is, a simple transfer of cognitive and physical functions from people to machines: a very mechanical view of automation. Rasmussen's model is an excellent example of a mechanistic model of human behavior that contributed to the development of cognitive engineering [RAS 86]. In reality, building an aircraft, for example, is not a function transfer because people do not naturally fly; we are handicapped compared to birds and, therefore, an aircraft is a prosthesis that enables us to fly. In a sense, the aircraft is a cognitive entity that was built using methods and tools developed by mechanical engineers and, now, information technology specialists.

    During the 1990s, many research efforts were carried out in human factors on ironies of automation [BAI 83], clumsy automation [WIE 89] and automation surprises [SAR 97]. Engineers automated what was easy to automate, leaving the responsibility for complex things, such as abnormal conditions, to human operators. What was called automation surprise is actually related to the topic of this chapter on the unexpected. However, none of these research efforts take into account technology maturity and maturity of practice [BOY 13a]. People take time to become mature and learn. It is the same for technology and its usages. It takes many surprises to learn. Maturity is related to autonomy. Autonomy differs from automation in the sense that the former relates to problem solving and learning, while the latter relates to procedure following, whether for machines or people. Indeed, procedure following is a kind of automation of people's behavior [BOY 13a]. Machines can be automated but are still far from being autonomous in the way people can be.

    Today, things are getting more difficult as we continue to use mechanistic cognitive models while our sociotechnical world is becoming more interconnected. Instead of mechanical devices, we have many pieces of software that are highly interconnected. Instead of complicated devices that we could deconstruct and repair, like the old mechanical clocks, we have layers of software that are difficult, and most of the time impossible, to diagnose humanly when they fail; e.g. modern cars are comprised of electronics and software, and only sophisticated diagnostic systems enable troubleshooting. The complexity of technology and its related usages changed drastically with the introduction of software. It is now even more complex as computer networks are not only local (e.g. within the car), but also more global (e.g. between cars and other cars with collision avoidance systems). How do we deal with the unexpected, and more generally with variability, in such a highly interconnected world?

    1.3. Handling complexity: looking for new models

    Here are four examples of so-called successful accidents: the aborted Apollo 13 mission after an oxygen tank exploded on April 13, 1970; the DHL A300 landing in Baghdad after being hit by a missile on November 22, 2003; the US Airways A320 landing on the Hudson River after losing both engines on January 15, 2009; and the Qantas A380 recovery around Singapore after the explosion of an engine on November 4, 2010 [ASA 13]. These examples are described in more detail in section 1.4 of this book. They show that people can handle very complex and life-critical situations successfully when they have enough time [BOY 13b] and are equipped with the right functions, whether in the form of training and experience or appropriate technology; in addition, these functions should be handled in concert. The consequences are about life and death. We can see that problem solving and cooperative work are major ingredients of such success stories. The main question here is how to maintain a good balance between automation, which provides precision, flawless routine operations and relief in high-pressure situations, and the flexibility required by human problem solving. Obviously, conflicts may occur between automation rigidity and people's flexibility. Let us analyze this dilemma.

    Automation will continue to develop, taking over more of the tasks that pilots have had to perform. It is also clear that, at least for commercial passenger transportation, pilots will be needed to handle unexpected situations for a long time. There will be surprises that will require appropriate reactions involving good situation awareness, time-wise [BOY 13b] and content-wise, decision-making, self-control, stress management and cooperation with the other actors involved. Dealing with the unexpected is not really a new skill that pilots should have; instead of being frightened by the evolving complexity of our sociotechnical world, we should better understand and use this complexity. For example, since airspace capacity will continue to increase, it is better to use its hyper-redundancy to improve safety and the constant management of unexpected situations, i.e. small and big variations of it.

    Automation rigidifies operations. Operational procedures also rigidify operations, since they automate human operators' behavior. Therefore, both automation and procedures need to be used with a critical spirit, competence and knowledge. Human operators dealing with highly automated LCSs need to know and understand deeply the technology they are using, especially when this technology is not fully mature. Automation is good when it is designed and developed by considering its users, and when it has reached an acceptable level of maturity [BOY 13a]. There are even situations where people may switch to automation to improve safety. This requires competence, situation awareness and great decision-making skills.

    Automation shifted the human operator's role from basic control to supervisory control and management [SHE 84]. Instead of directly manipulating handles and levers, human operators push buttons in order to manage software-intensive systems, which are often described as artificial agents [BOY 98]. Therefore, this new work environment involves human agents and artificial agents. We talk about humans and systems as a multi-agent environment, and ultimately HSI. This shift from control to management involves new emergent properties that need to be clearly identified. People in charge of such multi-agent environments need to know and understand these emergent properties. For example, it is now known that automation increases complacency in the long term, especially when it works very well. More generally, the best way to face the unexpected is to move from task training to skill training, such as astronaut training, where they learn humility, time-constrained situations that require simple and effective solutions, and the most appropriate use of technology (considered as a tool and not as a remedy).

    For example, airspace is evolving every day toward more aircraft in the sky, especially in terminal areas. In 2011, the Federal Aviation Administration (FAA) anticipated that U.S. air transportation would double over the next two decades [HUE 11]. Eurocontrol anticipated similar air traffic growth over the same period of time in Europe [GRE 10]. This growth tremendously changes the way air traffic control (ATC) will be performed during the coming decades. In particular, the increasing number of aircraft and their interconnections will cause new complexity issues and the emergence of new unexpected properties that we will need to identify and manage. Air traffic control will progressively evolve toward air traffic management (ATM). Air traffic controllers will become air traffic managers. During the PAUSA project, we identified various changes in authority sharing and a new model that we called the Orchestra model [BOY 13a, BOY 09].

    Until now, ATC has had authority over aircraft. We took the metaphor of the military, where the general has authority over the chain of command down to the soldier. Within the military model, information flows are hierarchical, linear and sequential. In contrast, in the Orchestra model, soldiers have become musicians (i.e. more specialized, cooperative and autonomous). The conductor replaces the general and coordinates the various information flows, which have become more nonlinear and parallelized. In addition, the composer generates scores (prescribed tasks) that musicians follow to perform (effective tasks or activities). The composer coordinates these scores before delivering the symphony. We observed this very interesting change in the shift from ATC to ATM, where scores are contracts [BOY 09]. Today, we need to better define the functions (jobs) of composers, conductors and musicians, as well as the overall organization of the Orchestra.

    Until now, air traffic controllers have had a reasonable number of aircraft to control. They knew where aircraft were located using radar technology. Their job consisted of ensuring a fluid traffic flow with no conflicts leading to collision. A new type of complexity emerges from traffic oversaturation in final areas. In the future, instead of controlling, they will need to manage in the way a conductor manages an orchestra. A conductor's situation awareness has to be perfect from the beginning to the end of a symphony. They need to deal with various personalities. They are managers in the sense of authority, effectiveness and professionalism. They are self-confident and have a good sense of humor. A good conductor knows about the emerging patterns that an orchestra produces. He or she needs to identify these patterns in order to have the required authority.

    The management of LCSs is always based on a model, whether the military or the orchestra model for example, which needs to be further elicited. We have already argued that if we use the traditional linear model, where operational procedures can support most kinds of situations, the unexpected is typically considered as an exception to the rule or procedure. However, if we are in the nonlinear model of life, where problem solving is the major resource, the unexpected is an everyday issue that requires care, concentration and discipline.

    1.4. Risk taking: dealing with nonlinear dynamic systems

    What do successful risk takers do? They prepare everything in detail before starting their activity. They usually identify all possible recovery situations in which they can end up safe when everything goes wrong. They need to know and embody these kinds of things; depending on their feeling for the situation, they may decide not to go. They also need to know their limitations, which need to be compatible with the risk they will take. Preparation and risk assessment are the keys. They also need to accept that it takes a long time to learn these skills.

    Taking a risk involves a logical abduction process [BOY 10]. Abduction is one of the three inference mechanisms, together with deduction and induction. Abduction is about postulating a possible future and demonstrating that we can manage to reach it. John F. Kennedy abducted that Americans would go to the moon and get back safely to Earth; NASA demonstrated this to be true in less than a decade. This is typically what great visionaries do. Abduction requires competence, knowledge and understanding of the world, not necessarily to have a good idea, but to make sure that it is reachable. Abduction deals with goal-driven behavior, characterizing people's intentions and actions. It is generally opposed to event-driven behavior, which characterizes people's reactions to events. In fact, people constantly switch back and forth between goal-driven and event-driven behavior. The resulting cognitive process is typically called opportunistic behavior. In aviation, pilots learn how to think ahead (this is a kind of abduction) and constantly shift between goal-driven and event-driven behaviors.
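
    For readers unfamiliar with the distinction, the three inference mechanisms can be summarized compactly; this schema follows Peirce's classical formulation and is our addition, not the chapter's:

    \[
    \begin{aligned}
    \text{Deduction:} &\quad \text{rule} + \text{case} \;\Rightarrow\; \text{result}\\
    \text{Induction:} &\quad \text{case} + \text{result} \;\Rightarrow\; \text{rule}\\
    \text{Abduction:} &\quad \text{rule} + \text{result} \;\Rightarrow\; \text{case}
    \end{aligned}
    \]

    Read this way, abduction starts from a desired or observed result and works back to a case that would plausibly produce it, which is how postulating a possible future and showing it is reachable can be understood here.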

    Risk taking deals with discipline, i.e. there are safety margins that cannot be overridden; experts know them and are, therefore, very disciplined and respect these safety margins scrupulously. The main difficulty is to handle the complexity of a risky situation. Complexity comes from the large number of factors involved. For example, a typical aviation situation results from a dynamic and nonlinear combination of the aircrew's psychological and physiological state, the way the given airline manages operations, aircraft state,
