Decentralized Control of Complex Systems

About this ebook

Starting with a graph-theoretic framework for structural modeling of complex systems, this text presents results related to robust stabilization via decentralized state feedback. Subsequent chapters explore optimization, output feedback, the manipulative power of graphs, overlapping decompositions and the underlying inclusion principle, and reliability design. An appendix provides efficient graph algorithms. 1991 edition.
Language: English
Release date: Jul 24, 2013
ISBN: 9780486294377

    Book preview

    Decentralized Control of Complex Systems - Dragoslav D. Siljak

    Errata

    Preface

    Complexity is a central problem in modern system theory and practice. Because of our intensive and limitless desire to build and to control ever larger and more sophisticated systems, the orthodox concept of a high performance system driven by a central computer has become obsolete. New emerging notions are subsystems, interconnections, distributed computing, neural networks, parallel processing, and automated factories, to mention a few. It is becoming apparent that a well-organized complexity is the way of the future.

    The notion of complexity is used broadly, in widely different contexts, and often with subjective connotations. A great deal of our research in modeling and control of complex systems indicates that the fundamental characteristics of complexity are dimensionality, uncertainty, and information structure constraints. For us, these three features will be of overriding concern and will serve as major motivating factors for development of decentralized control theory for complex systems.

    We regard a dynamic system as an interconnection of subsystems, which may be known as physical entities, or are a purely mathematical artifice to be identified by a suitable partitioning algorithm. In either case, we can take advantage of special structural features of a decomposed system and come up with a substantial reduction of dimensionality in control problems. Presumably, each subsystem can be considered independently and the individual solutions can be combined in some way to get a solution for the overall problem. While this recipe has been around for quite some time, it is only recently that the decomposition approach is emerging as a powerful design concept. We shall provide for the first time a comprehensive presentation of this approach to control of complex systems, exposing, at the same time, critical conceptual insights and significant numerical simplifications available in this context.

    We customarily assume that neither the internal nor external nature of complex systems can be described precisely in deterministic or stochastic terms. Essential uncertainties of both the structured (parametric) and unstructured variety are present in the subsystems and their interconnections as well as information channels and controller configurations. For this reason, we emphasize, and perhaps overemphasize, the superior robustness of decentralized control laws with respect to plant and controller uncertainties and, in particular, unpredictable structural perturbations whereby the subsystems are disconnected and again connected in various ways during operation. We also include the multiple controller configurations to cope with controller failures and to achieve a satisfactory reliability of decentralized control schemes. Our concern for robustness and reliability is justified by the fact that more often than not the normal operating mode of a complex system is a failure mode!

    Our ability to study and master large complex systems is greatly enhanced by the advent of modern computing machinery. This reliance on computing power is likely to increase with our persistent efforts to build high-performance control systems at lower costs. Therefore, it is absolutely essential that we develop control theory that can accommodate the new and decisive trend in computer technology toward distributed parallel computing with highly reliable low-cost multiprocessor architectures. In complex systems, where databases are developed around the plants with distributed sources of data, a need for fast control action in response to local inputs and perturbations dictates use of distributed (that is, decentralized) information and control structures. We require a theory for synthesizing control laws under decentralized information structure constraints, which is what this book is all about.

    The plan of the book is as follows. Chapter 1 provides a graph-theoretic framework for structural modeling of complex systems. This is a natural environment for decentralized control problems, because they are primarily structural. Furthermore, screening candidate structures for controllability, observability, and fixed modes in this context is computationally attractive, especially when large systems are involved: the outcomes are generic and only binary operations are required. In Chapter 2, we present the results concerning robust stabilization via decentralized state feedback. Optimization is the subject of Chapter 3, where we emphasize robustness in terms of gain and phase margins and gain reduction tolerance in each control channel. Output feedback is considered in the context of interconnected observers (Chapter 4) and directly using controllers of a static or dynamic variety (Chapter 5). Manipulative power of graphs is exploited in Chapters 6 and 7 to discover hidden structures conducive to decentralized control. Both lower block triangular and weak coupling forms are considered, which offer a considerable savings in computational effort involved in building controllers and observers for complex systems. Overlapping decompositions and the underlying Inclusion Principle are presented in Chapter 8, which enlarge to a great extent the scope and applicability of decentralized control. It is in reliability design (Chapter 9) where decentralized control with overlapping information constraints is the basic tool for making the multiple controller concept work. In this last chapter, a unified synthesis of robust decentralized control is achieved with respect to both the plant and controller failures.

    At the end of each chapter we include a Notes and References section where we provide comments and cite relevant literature to describe the evolution of the ideas, as well as to broaden the range of the presented results. References for each individual chapter are listed in the Bibliography section following the Notes and References section. Omitted references are either well known, or are regretfully excluded to keep the list at a reasonable length.

    Finally, we provide graph algorithms in the Appendix. Depth-first search, input and output reachability, structural controllability and observability, and lower block triangular and nested epsilon decompositions are all included and explained. Utility of our theoretical results depends a great deal on how efficient these computer algorithms are, and we shall provide a discussion of their complexity.

    My first thanks go to Masao Ikeda and Erol Sezer for a fruitful, gratifying and, above all, exciting collaboration, which provided a wealth of results for this book. As if it were not enough, they read the final manuscript and offered numerous comments and suggestions.

    My thanks go as well to Don Gavel and Doug LaMont for exploring with me the new avenues as well as the backroads of complex systems.

    My special thanks are due to Fran Karmitz for typing the manuscript in her peerless fashion after deciphering my innumerable and often confusing revisions.

    Finally, I wish to gratefully acknowledge the support of the National Science Foundation and the U.S. Department of Energy. Their continual and generous assistance made our ambitious broad-range research in complex systems come to fruition.

    Chapter 1

    Structured Systems

    In the mathematical modeling of physical systems, one is invariably confronted with a dilemma: to use a more accurate model which is harder to manage, or to work with a simpler model which is easier to manipulate but with less confidence. A hierarchy of models with increasing complexity and fidelity is more often than not the best approach. When the number of variables is large, it is prudent, if not imperative, to start the analysis with simple structural models that offer relatively easy ways to identify unsuitable system configurations, causes for lack of desired properties, and straightforward remedies.

    Structure is a widely used and often misused term in system theory and practice. For us, however, a structure is a graph, which is a precise mathematical object. In this way, a structural analysis of dynamic systems is carried out within the rich and rigorous framework of graph theory. Our objective in this chapter is to set up the graph-theoretic approach and present basic results concerning the existence of suitable structures for decentralized control. The end product is a collection of efficient and reliable algorithms for testing candidate schemes for decentralized control and estimation.

    Before proceeding to technical developments, let us discuss the intuitive ideas underlying the approach. With a linear system we associate a directed graph (digraph) in an obvious way: to each variable we assign a vertex of a digraph, and an edge is present whenever the corresponding coefficient in the system matrix, which relates a pair of variables, is different from zero. Then, the central problem is to determine if there is a path to every state vertex from at least one input vertex, that is, if the system is input reachable. The dual concept is output reachability, which is the property that each state reaches at least one output. These concepts were formulated as the basic structural requirements for controllability and observability. They apply to linear and nonlinear systems alike, they are parameter independent, and there are very efficient algorithms to test them. A crucial advantage offered by the reachability concepts is their manipulative power, which can be used in decompositions of large dynamic systems (see Chapters 6 and 7).

    When the property of generic rank is added to input reachability, one arrives at structural controllability. This notion is a better graph-theoretic approximation of controllability than reachability, but it is achieved at the price of liability to parameter dependencies of the generic rank condition (see Examples 1.33 and 1.34). A way to simplify this problem and, at the same time, decrease the computational effort involved in checking the rank condition, is to manipulate first the system into hierarchically ordered input reachable subsystems using graphs (see Chapter 6). Then, testing for structural controllability of the overall system is reduced to verifying the same property of low-order subsystems.

    Finally, we consider structurally fixed modes, which were defined to study the important existence problem of control laws under arbitrary structure constraints including output and decentralized control schemes. Again, as expected, the presence of structurally fixed modes, which cannot be moved by any choice of system parameters, is determined by a reliable algorithm involving only binary computations. This is of special importance to us, because unstable structurally fixed modes make stabilization impossible no matter how we choose the feedback parameters in a decentralized control law.

    1.1. Graphs and Dynamic Systems

    Let us consider a linear dynamic system described by the equations

    S:  ẋ(t) = Ax(t) + Bu(t),
        y(t) = Cx(t),     (1.1)

    where x(t) ∈ Rn is the state, u(t) ∈ Rm is the input, and y(t) ∈ Rl is the output of S at time t ∈ R, and A = (aij), B = (bij), and C = (cij) are constant matrices of dimensions n × n, n × m, and l × n, respectively.

    A precise way to represent the structure of S is to use the interconnection (adjacency) matrix specified by the following:

    1.1. DEFINITION. The interconnection matrix of S is a binary (n + m + l) × (n + m + l) matrix E = (eij) defined as

    E = [ Ā  B̄  0 ]
        [ 0  0  0 ]
        [ C̄  0  0 ]     (1.2)

    in which the rows and columns correspond, in order, to the state, input, and output variables of S, and where the binary matrices Ā = (āij), B̄ = (b̄ij), and C̄ = (c̄ij) have the elements

    āij = 1 if aij ≠ 0 and āij = 0 if aij = 0,

    with b̄ij and c̄ij defined analogously in terms of bij and cij. The submatrices Ā, B̄, and C̄ of E are Boolean representations of the original system matrices A, B, C. This conversion of matrix elements to binary values makes the interconnection matrix E a useful modeling tool for the study of qualitative properties of the system S, which are independent of specific numerical values taken by system parameters (Šiljak, 1977a). In this way, we can also handle parameter uncertainties caused by modeling errors or operating failures in the system.
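    The conversion in Definition 1.1 is easy to mechanize. The following Python sketch is illustrative only (it is not part of the text) and assumes the (state, input, output) ordering of the rows and columns of E in (1.2):

```python
import numpy as np

def interconnection_matrix(A, B, C):
    """Sketch: Boolean interconnection matrix E built from A, B, C.

    Assumes vertices are ordered (states, inputs, outputs); an entry of E is 1
    exactly when the corresponding entry of A, B, or C is nonzero.
    """
    A, B, C = np.asarray(A), np.asarray(B), np.asarray(C)
    n, m, l = A.shape[0], B.shape[1], C.shape[0]
    E = np.zeros((n + m + l, n + m + l), dtype=int)
    E[:n, :n] = (A != 0).astype(int)          # state -> state edges (A-bar)
    E[:n, n:n + m] = (B != 0).astype(int)     # input -> state edges (B-bar)
    E[n + m:, :n] = (C != 0).astype(int)      # state -> output edges (C-bar)
    return E

# Illustrative values in the spirit of Example 1.3 (alpha = -1, beta = 1).
E = interconnection_matrix([[0, 1], [-1.0, 0]], [[0], [1.0]], [[1, 0]])
print(E)
```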

    While the interconnection matrix E is useful in computations because only binary operations are involved, for qualitative interpretations of structural properties of S the equivalent concept of a directed graph (digraph) is often preferred. We recall (Deo, 1974) that a digraph is the ordered pair D = (V, E), where V is a nonempty finite set of vertices (points, nodes), and E is a relation in V, that is, E is a set of ordered pairs (vj, vi), which are the directed edges (lines, arcs) connecting the vertices of D.

    Using Definition 1.1 of the matrix E = (eij), we formulate the following:

    1.2. DEFINITION. The digraph D = (V, E) of a system S has the vertex set V = U ∪ X ∪ Y, where U = {u1, u2, ..., um}, X = {x1, x2, ..., xn}, and Y = {y1, y2, ..., yl} are nonempty sets of input, state, and output vertices of D, respectively, and E is the edge set such that (vj, vi) ∈ E if and only if eij = 1.

    We notice that digraph D of S contains only the edges (uj, xi), (xj, xi), and (xj, yi), which reflects our basic assumption about the system S: there are no connections among inputs, outputs, between inputs and outputs, etc. (Šiljak, 1977a, b).

    Fig. 1.1. Pendulum.

    1.3. EXAMPLE. To illustrate how graphs are associated with dynamic systems, let us consider the motion of a frictionless pendulum shown in Figure 1.1, which, for small angles θ, is described by the equation

    mlθ̈(t) + mgθ(t) = u(t),

    where θ(t) is the angle from the vertical to the rod at time t, θ̈(t) ≡ d²θ(t)/dt² is the angular acceleration at time t, l is the length of the rod (rigid and without mass), and m is the mass of the bob subject to a force (input) u(t) at time t. Choosing the state vector x = (x1, x2)T, where x1(t) = θ(t) and x2(t) = dθ(t)/dt, we get the system S representing the pendulum as

    ẋ1(t) = x2(t),
    ẋ2(t) = αx1(t) + βu(t),
    y(t) = x1(t),

    where α = –g/l, β = 1/ml, and we regard the angle θ(t) as a measurable state of S, that is, as the output y(t) of S. The interconnection matrix E is given as

    E = [ 0  1  0  0 ]
        [ 1  0  1  0 ]
        [ 0  0  0  0 ]
        [ 1  0  0  0 ]

    with the rows and columns ordered as (x1, x2, u, y).

    Fig. 1.2. System digraph.

    The corresponding digraph D is shown in Figure 1.2.

    It is important to note that if the system parameters m and l are changed to m1 and l1, the interconnection matrix E and, thus, the digraph remain the same. That is, E and D represent the structure of S1 as well, where

    S1 is obtained from S by replacing α and β with α1 = –g/l1 and β1 = 1/m1l1. We say that the systems S and S1 are structurally equivalent.

    In addition to the variation of system parameters, structural equivalence allows for a relabeling of the inputs, states, and outputs by permutation matrices. More precisely, let us consider two systems

    S1: ẋ1(t) = A1x1(t) + B1u1(t),
        y1(t) = C1x1(t),

    and

    S2: ẋ2(t) = A2x2(t) + B2u2(t),
        y2(t) = C2x2(t),

    where the state vectors x1 and x2 have the same dimension n (as do, respectively, the input vectors u1, u2 and the output vectors y1, y2), and formulate:

    1.4. DEFINITION. Two systems S1 and S2 are said to be structurally equivalent if there exist (nonsingular) permutation matrices PA, PB, and PC such that

    1.5. EXAMPLE. On the basis of Definition 1.4, the system

    where α2 = −g/l2 and β2 = 1/m2l2, is structurally equivalent to the system S. The permutation matrices

    satisfy the relations (1.11) for the two systems S1 and S2.

    The invariance of structural equivalence to parameter variations is the underlying feature of graph-theoretic analysis, which is appealing in modeling of physical systems, especially those having large dimensions where uncertainty about system parameter values is always present. The properties of a system S established by its interconnection matrix E or digraph D are at the same time valid for systems structurally equivalent to S. This fact not only provides a way to cope with uncertainty in the modeling process, but also serves as a cornerstone in robust designs of control and estimation schemes for large scale systems. This aspect of structural modeling will be exploited throughout this book.

    Another appealing aspect of structural modeling of complex systems is the binary nature of the interconnection matrices and digraphs. Tests of intrinsic characteristics of such systems, when formulated in structural terms, are well-posed numerical problems that can be executed with speed and reliability by standard computing machinery. This feature of structural modeling is of our immediate interest.

    1.2. Input and Output Reachability

    The existence and actual design of control laws for a dynamic system depend crucially upon the well-known fundamental properties of complete controllability and observability (Kalman, 1963). Complete controllability means that any initial state of a given dynamical system can be transferred to the zero state in a finite length of time by a suitable input, while complete observability implies that any state of the system can be determined in a finite length of time from subsequent inputs and outputs. In most situations, these properties are all one needs to construct controllers for linear constant plants.

    For a system S of (1.1), the standard tests for complete controllability and observability are

    rank [B  AB  ⋯  Aⁿ⁻¹B] = n     (1.14)

    and

    rank [Cᵀ  AᵀCᵀ  ⋯  (Aᵀ)ⁿ⁻¹Cᵀ] = n,     (1.15)

    where n is the dimension of the state space of S (that is, the order of S). In systems of large dimensions, computing the rank conditions (1.14) and (1.15) is an ill-posed numerical problem. Furthermore, when the tests fail, there is no indication how the rank deficiency can be removed even in relatively simple situations. For these reasons, the notions of input and output reachability, which represent the crucial ingredients in controllability and observability, have been defined by Šiljak (1977a, b). The reachability properties are numerically attractive, because they can be established by binary computations alone.
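    For small, well-scaled examples the test (1.14) can of course be evaluated directly; the brief Python sketch below (illustrative only, not from the text) performs exactly the kind of floating-point rank computation that becomes unreliable as n grows:

```python
import numpy as np

def controllability_rank(A, B):
    """Sketch: rank of [B, AB, ..., A^(n-1)B], the test (1.14), for small examples."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])       # append the next block A^k B
    return np.linalg.matrix_rank(np.hstack(blocks))

# Pendulum-style example with alpha = -1, beta = 1: the rank equals n = 2.
print(controllability_rank([[0, 1], [-1, 0]], [[0], [1]]))
```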

    To define the concept of input and output reachability, we need several well-known notions from the theory of digraphs (Harary, 1969). We consider a digraph D = (V, E) of S, and recall that if a collection of distinct vertices v1, v2, ..., vk together with the edges (v1, v2), (v2, v3), ..., (vk−1, vk) are placed in sequence, then the ordered set {(v1, v2), (v2, v3), ..., (vk−1, vk)} is a (directed) path from v1 to vk. We say that vi is reachable from vj if there is a path from vj to vi. A reachable set Vi(vj) of a vertex vj is a set Vi of vertices vi reachable from the vertex vj ∈ V. Carrying this a step further, we define a reachable set Vi(Vj) of a set Vj as a set of vertices vi reachable from at least one vertex vj ∈ Vj. Therefore, Vi(Vj) is the union of the sets Vi(vj) for all vj ∈ Vj. An antecedent set Vj(Vi) of a set Vi is a set of vertices vj from which at least one vertex vi ∈ Vi is reachable.
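    The reachable and antecedent sets can be computed by a straightforward search over the edge list. The following Python sketch (an illustration added here, with hypothetical helper names) keeps the convention that a pair (vj, vi) denotes an edge from vj to vi:

```python
from collections import deque

def reachable_set(edges, sources):
    """Sketch: vertices reachable by a path of nonzero length from at least one source."""
    succ = {}
    for vj, vi in edges:
        succ.setdefault(vj, []).append(vi)
    reached, queue = set(), deque(sources)
    while queue:
        v = queue.popleft()
        for w in succ.get(v, ()):
            if w not in reached:
                reached.add(w)
                queue.append(w)
    return reached

def antecedent_set(edges, targets):
    """Sketch: vertices from which at least one target is reachable (the directional dual)."""
    return reachable_set([(vi, vj) for vj, vi in edges], targets)
```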

    1.6. DEFINITION. A system S is input reachable if X is a reachable set of U.

    The directional dual of input reachability is the following:

    1.7. DEFINITION. A system S is output reachable if X is an antecedent set of Y.

    Definition 1.6 means that if no component uj of the input vector u can reach (influence) a state xi either directly or via other states of S, then there is no way to choose an input function u(t) to drive the state xi to zero, which implies that the system is uncontrollable independently of numerical values of the nonzero system parameters. In a dual manner, Definition 1.7 means that if a state xj does not reach (influence) any component yi of the output vector y either directly or indirectly by connecting the other states of S, then it is impossible to determine the state xj at any given time from the future values of the output y and the knowledge of the input u. In this case, a structural deficiency (not the values of system parameters) prevents observability of S.

    In order to establish input and output reachability of a given system S, we start with the digraph D and recall (Harary, 1969) that the reachability matrix R = (rij) of D = (V, E) is defined as follows:

    rij = 1 if vi is reachable from vj,  rij = 0 otherwise.

    In other words, rij = 1 if and only if there exists a path of nonzero length from vj to vi; the length of a path being the number of individual edges between vj and vi.

    To determine the reachability matrix for a given digraph, we need the following result which was obtained by Festinger et al. (1950):

    1.8. THEOREM. Let E = (eij) be the s × s interconnection matrix corresponding to a digraph D = (V, E), and let P = (pij) be an s × s matrix such that P = Ed where d ∈ {1, 2, ..., s}. Then, pij is the total number of distinct sequences (vj, .), ..., (., vi) of length d in D.

    Proof. We can prove this theorem by induction (Berztiss, 1975). We show that the theorem is true for d = 1, and then show that it is true for d + 1 whenever it is true for d. For d = 1, the theorem is actually the definition of the matrix E. For d + 1, eikpkj = pkj if (vk, vi) is an edge of D, and eikpkj = 0 if it is not. The total number of distinct sequences of length d + 1 having the form (vj, .), ..., (., vk), (vk, vi) is therefore Σk eikpkj, which is the ijth element of Ed+1.

    Q.E.D.

    The reachability matrix can be computed by the following corollary to Theorem 1.8:

    1.9. COROLLARY. If R = (rij) is the s × s reachability matrix of D and Q = (qij) is given by

    Q = E + E2 + ⋯ + Es,

    then rij = 1 if and only if qij ≠ 0.

    In computing Q, we may instead use Boolean arithmetic (in which 1 + 1 = 1) to get the reachability matrix directly as

    R = E ∨ E2 ∨ ⋯ ∨ Es,     (1.18)

    where Ek = Ek−1 Λ E. For two s × s binary matrices A = (aij) and B = (bij), the Boolean operations C = A Λ B and D = A ∨ B are defined by C = (cij) and D = (dij) as

    cij = ∨k (aik Λ bkj),  dij = aij ∨ bij.
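    As a sketch of this Boolean computation (illustrative Python, assuming a 0/1 matrix E), one may simply accumulate the Boolean powers of E:

```python
import numpy as np

def reachability_by_powers(E):
    """Sketch: reachability matrix as the Boolean sum E v E^2 v ... v E^s."""
    E = (np.asarray(E) != 0)
    s = E.shape[0]
    R, Ek = E.copy(), E.copy()
    for _ in range(s - 1):
        Ek = (Ek.astype(int) @ E.astype(int)) > 0   # Boolean matrix product E_k = E_{k-1} ^ E
        R = R | Ek                                  # Boolean matrix sum
    return R.astype(int)
```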

    A more efficient Boolean-type algorithm for computing the matrix R was proposed by Warshall (1962):

    1.10. ALGORITHM.

    The IF statement means that after the iteration with i = l is completed, the element of R corresponding to the pair of vertices (j, k) equals 1 if and only if there exists a path from j to k that contains only the points from the set {j, k, 1, 2, ..., l}. This algorithm has many useful generalizations (Deo, 1974). A still faster algorithm for computing the reachability matrix is provided by the depth-first search on a graph formulated by Hopcroft and Tarjan (1971) and subsequently developed by Tarjan (1972). The reachability algorithms are reviewed in Notes and References (Section 1.7), and an efficient graph-theoretic algorithm, which is based upon the depth-first search, is described in the Appendix.
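    Warshall's computation can be sketched in a few lines of Python (an illustration only, keeping the text's convention that an entry in row i and column j of E or R refers to an edge or path from vertex j to vertex i):

```python
def warshall(E):
    """Sketch in the spirit of Algorithm 1.10 (Warshall, 1962): O(s^3) bit operations.

    E is a square 0/1 list of lists with E[i][j] = 1 meaning an edge from j to i;
    the result R has R[i][j] = 1 exactly when a path of nonzero length leads from j to i.
    """
    s = len(E)
    R = [row[:] for row in E]              # start from the interconnection matrix
    for l in range(s):                     # allow vertex l as an intermediate point
        for i in range(s):
            if R[i][l]:                    # vertex l reaches vertex i ...
                for j in range(s):
                    if R[l][j]:            # ... and vertex j reaches vertex l,
                        R[i][j] = 1        # so vertex j reaches vertex i
    return R
```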

    Let us now turn our attention to input and output reachability. We note that Ed for the adjacency matrix E of the system S, which has the form (1.2), can be calculated as

    Using this expression in (1.18), one gets the reachability matrix as

    where the binary matrices F, G, and H have dimensions obvious from (1.20). Of course, the matrix R could have been obtained by the Algorithm 1.10, but the structure of the matrix would not have been explicit in terms of the system matrices.

    From (1.21), the following result is automatic:

    1.11. THEOREM. A system S is input reachable if and only if the binary matrix G has no zero rows, and it is output reachable if and only if the binary matrix H has no zero columns.

    1.12. REMARK. From R of (1.21), one gets for free the additional information about reachability properties involving only inputs and outputs. In (Šiljak, 1978), a system S was said to be input-output reachable if the binary matrix of (1.21) that relates the inputs to the outputs has neither zero rows nor zero columns.
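    Combining the sketches above, Theorem 1.11 translates into a simple test on the blocks of R; the positions of G and H below assume the (state, input, output) vertex ordering of (1.2), and the whole fragment is illustrative rather than part of the text:

```python
import numpy as np

def input_output_reachability(E, n, m, l):
    """Sketch: test of Theorem 1.11 on the blocks of the reachability matrix."""
    R = reachability_by_powers(E)       # Boolean-powers sketch given after (1.18)
    G = R[:n, n:n + m]                  # paths from input vertices to state vertices
    H = R[n + m:n + m + l, :n]          # paths from state vertices to output vertices
    input_reachable = bool((G.sum(axis=1) > 0).all())    # no zero rows in G
    output_reachable = bool((H.sum(axis=0) > 0).all())   # no zero columns in H
    return input_reachable, output_reachable
```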

    Fig. 1.3. Input and output unreachable digraph.

    1.13. EXAMPLE. To illustrate the application of Theorem 1.11, let us consider a system digraph shown in Figure 1.3. The interconnection matrix E is

    and the corresponding reachability matrix R is computed as

    By comparing (1.23) with (1.21), we see that there are two zero rows in G of (1.23) and the system is not input reachable. From Figure 1.3, it is obvious that the input u does not reach the states x3 and x4. We also see that H has two zero columns and, therefore, the states x1 and x2 do not reach the output y. It is easy to see what should be done to recover input and output reachability of S. A connection should be made from the existing input (or a new added input) to either x3 or x4 (or both). Similarly, the output reachability would be established if either x1 or x2 (or both) were measured to become new outputs. This type of information makes the graph-theoretic methods useful in the design of control and estimation schemes, especially for complex systems with a large number of variables.

    1.3. Partitions and Condensations

    Problems involving input and output reachability can be greatly simplified by partitioning the given directed graph into subgraphs with assigned reachability properties. By condensing each subgraph to a vertex of a new digraph, one can greatly increase the conceptual insight while, at the same time, reduce the computational effort required in the graph-theoretic analysis of system structures. The process of partitioning and condensing large graphs has been effective in diverse fields, especially in communication networks, social sciences, and transportation systems. The use of graph-theoretic decompositions in control theory is of more recent interest (see Chapters 6 and 7).

    Let us recall several useful notions about a directed graph D = (V, E) that are not necessarily related to a dynamic system. When a vertex vi is reachable from a vertex vj, we use the notation vjRvi. We note that R is a relation, and it is transitive because vjRvk and vkRvi imply vjRvi. We say that a pair of vertices (vj, vi) is strongly connected if vjRvi and viRvj, which we denote by vj ~ vi. The relation ~ is an equivalence relation on V, and it partitions V into more than one equivalence class, unless D is strongly connected, that is, unless every vertex in D is reachable from every other vertex of D.

    If D = (V, E) is a digraph and Vk is a subset of V, then the digraph Dk = [Vk, (Vk × Vk) ∩ E] is a subgraph of D. That is, a subgraph Dk = (Vk, Ek) has the vertex set Vk as a subset of V and the edge set Ek which contains all the edges of E joining the vertices of Vk.

    We also recall that a cycle is a path of length greater than one which begins and ends on the same vertex. A special edge (vi, vi) is not considered as a cycle and is termed a self-loop. A digraph D = (V, E) is acyclic if it has no cycles.

    1.14. DEFINITION. A subgraph Dk = [Vk, (Vk × Vk) ∩ E] is a strong component of D if it is strongly connected and there are no two points vi ∈ Vk and vj ∉ Vk which lie on the same cycle in D.

    Equivalence classes and strong components of a digraph are isomorphic concepts. They induce a partition in a given digraph and serve as a basis for subsequent reductions of the digraph by the process of condensation (i.e., clustering, aggregation), which preserves the reachability property of the original digraph.

    Let us consider a digraph D = (V, E) and let {V1, V2, ..., VN} be the equivalence classes of V under the relation ~. Obviously, the sets Vk are pairwise disjoint, that is, Vi ∩ Vj = ∅ for all i, j = 1, 2, ..., N, i ≠ j. To each equivalence class Vk corresponds a strong component Dk of D. Then, we have:

    1.15. DEFINITION. Given a digraph D = (V, E), define

    V* = {V1, V2, ..., VN},
    E* = {(Vj, Vi): i ≠ j and (vj, vi) ∈ E for some vj ∈ Vj and some vi ∈ Vi}.

    The digraph D* = (V*, E*) is the condensation of D.

    In other words, the vertices of the condensation D* of D are the equivalence classes Vk, and an edge (Vj, Vi) is present in the new digraph D* if and only if there is in D at least one edge from a vertex of Vj to a vertex of Vi. In Figure 1.4, a graph D is shown together with its strong components D1 and D2, as well as the condensation D*.
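    A direct, if not the most efficient, way to obtain the strong components and the condensation is to test mutual reachability; the Python sketch below is illustrative only, reuses the warshall sketch given with Algorithm 1.10, and follows Definition 1.15:

```python
def condensation(E):
    """Sketch: strong components (equivalence classes) and the condensation D*."""
    s = len(E)
    R = warshall(E)                            # reachability sketch given earlier
    assigned = [None] * s
    classes = []
    for v in range(s):
        if assigned[v] is None:
            # vertices mutually reachable with v form one equivalence class
            members = [v] + [w for w in range(v + 1, s)
                             if assigned[w] is None and R[v][w] and R[w][v]]
            for w in members:
                assigned[w] = len(classes)
            classes.append(members)
    N = len(classes)
    E_star = [[0] * N for _ in range(N)]
    for i in range(s):
        for j in range(s):
            if E[i][j] and assigned[i] != assigned[j]:
                E_star[assigned[i]][assigned[j]] = 1   # edge from class of j to class of i
    return classes, E_star
```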

    The next step toward a formulation of the desired partition is the following (Harary et al. 1965):

    1.16. THEOREM. Given any digraph D, its condensation D* is acyclic.

    Proof. Suppose, to the contrary, that there is a cycle in D*. By the construction rule of D*, any two equivalence classes Vi and Vj lying on this cycle are mutually reachable in D, so that all of their vertices are strongly connected and Vi and Vj must be one and the same equivalence class. This is a contradiction.

    Q.E.D.

    Fig. 1.4. (a) Digraph; (b) strong components; (c) condensation.

    Now, every acyclic digraph is a partial order because the corresponding relation on its vertices is irreflexive, asymmetric, and transitive. This means that we can assign an integer ni to each vertex Vi of the condensation D* = (V*, E*); the integers ni are called levels of D*. Furthermore, D* can always be made to have an ascending level assignment, as illustrated in Figure 1.4c, where n1 = 1 and n2 = 2. In order to establish this fact rigorously, we recall that the outdegree of a vertex vi, which is denoted by od(vi), is the number of edges from vi. The indegree of vi, written id(vi), is the number of edges to vi. We prove the following (Harary et al. 1965):

    1.17. THEOREM. An acyclic digraph D = (V, E) has at least one vertex vi ∈ V such that od(vi) = 0, and at least one vertex vj ∈ V such that id(vj) = 0.

    Proof. Assume that D has no vertex with zero outdegree. Then every vertex has at least one outgoing edge, and starting from an arbitrary vertex we can build a path of ever increasing length. Since the number of vertices of D is finite, some vertex must eventually be repeated on this path, producing a cycle, which means that D is not acyclic. This is a contradiction. The argument for the indegree is analogous.

    Q.E.D.

    Finally, we establish (Harary et al. 1965):

    1.18. THEOREM. For any digraph D = (V, E), the following statements are equivalent:

    (i) D is acyclic;
    (ii) the vertices of D can be relabeled so that the interconnection matrix E is lower triangular;
    (iii) D has an ascending level assignment.

    Proof. (i) ⇒ (ii). By Theorem 1.17, D has a vertex of indegree zero. One such vertex is labeled v1, and we form a new digraph D − v1 by truncating the vertex v1 and its edges from D. The digraph D − v1 has no cycles because D is acyclic and, therefore, has a vertex v2 with indegree zero. By repeating the truncation process to form D − {v1, v2}, and locating v3 such that id(v3) = 0, we eventually exhaust all the vertices of D. By construction, there is no edge (vj, vi) ∈ E if j > i, which implies that eij = 0 for j > i, and the interconnection matrix E is lower triangular.

    (ii) ⇒ (iii). Suppose E of D is lower triangular. We assign to each vertex vi, which corresponds to the ith row and column of E, the level i. Since eij = 0 for j > i, there is no edge from a vertex at a higher level to a vertex at a lower level, so the level assignment is ascending.

    (iii) ⇒ (i). Suppose D has a cycle containing vj and vi with level assignments j and i such that j < i. Then, there is a path from vj to vi and a path from vi to vj. Because D has an ascending level assignment, there are no edges from a higher level to a lower level of D, which means that there is no path from vi to vj. This contradicts the assumption that D has a cycle.

    Q.E.D.
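    The constructive proof of (i) ⇒ (ii) is itself an algorithm: repeatedly remove a vertex of indegree zero. The following Python sketch (illustrative only) returns a relabeling that makes the interconnection matrix of an acyclic digraph lower triangular:

```python
def lower_triangular_order(E):
    """Sketch: order the vertices of an acyclic digraph as in the proof of Theorem 1.18.

    E[i][j] = 1 means an edge from vertex j to vertex i; the function repeatedly
    removes a vertex of indegree zero and raises an error if D has a cycle.
    """
    s = len(E)
    indeg = [sum(E[i][j] for j in range(s)) for i in range(s)]
    remaining, order = set(range(s)), []
    while remaining:
        v = next((i for i in remaining if indeg[i] == 0), None)
        if v is None:
            raise ValueError("digraph is not acyclic")
        order.append(v)
        remaining.remove(v)
        for i in remaining:
            if E[i][v]:                # edges leaving v no longer count as indegree
                indeg[i] -= 1
    return order                       # order[k] is the original label of the vertex at level k + 1
```

    Applying the returned order to both the rows and the columns of E produces the lower triangular form of Theorem 1.18.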

    Since by Theorem 1.16 the condensation D* of a digraph D is always acyclic, Theorem 1.18 applies to D*. Let us return to the digraph D = (V, E) of the system S, and define the state truncation Dx = (X, A), which is obtained from D by removing all the input vertices U and the output vertices Y as well as the edges in E connected to the vertices of U and Y. The adjacency matrix of Dx is the Boolean n × n matrix Ā = (āij) defined as a submatrix of E in (1.2). We then form the condensation Dx* = (X*, A*) and its interconnection N × N matrix, where N is the number of strong components of Dx; by Theorems 1.16 and 1.18, the interconnection matrix of Dx* can be made lower triangular.

    Partition of the input and output sets U and Y of D is not crucial at present and can be done in any convenient way. We assume that U and Y are partitioned in M and L pairwise disjoint sets U = {U1, U2, ..., UM} and Y = {Y1, Y2, ..., YL}. We already have the partition X = {X1, X2, ..., XN}, where Xi corresponds to the ith strong component of Dx. The vertices of the condensation D* are the subsets Xi, Ui, and Yi. Of course, Ui and Yi are not necessarily equivalence classes in D, but the Xi's are. This fact will be used soon.

    With D* we associate the interconnection matrix

    where an entry in the row of Xi and the column of Uj equals 1 if and only if there is an edge in D from a vertex of Uj to a vertex in the equivalence class Xi, and an entry in the row of Yi and the column of Xj equals 1 if and only if there is an edge from a vertex of Xj to a vertex of Yi. The matrix
