
Bayesian Decision Networks: Fundamentals and Applications
Ebook · 114 pages · 1 hour


About this ebook

What Are Bayesian Decision Networks?


A Bayesian network is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph (DAG). Bayesian networks are well suited to determining the likelihood that any one of several possible known causes contributed to an event that has already taken place, and to making predictions based on that likelihood. For instance, a Bayesian network might represent the probabilistic relationships between diseases and symptoms; given a set of symptoms, the network can compute the probabilities of the presence of various diseases.


How You Will Benefit


(I) Insights and validations concerning the following topics:


Chapter 1: Bayesian network


Chapter 2: Influence diagram


Chapter 3: Graphical model


Chapter 4: Hidden Markov model


Chapter 5: Decision tree


Chapter 6: Gibbs sampling


Chapter 7: Decision analysis


Chapter 8: Value of information


Chapter 9: Probabilistic forecasting


Chapter 10: Causal graph


(II) Answers to the public's top questions about Bayesian decision networks.


(III) Real-world examples of the use of Bayesian decision networks in many fields.


(IV) 17 appendices that briefly explain 266 emerging technologies in each industry, giving a full, 360-degree understanding of the technologies related to Bayesian decision networks.


Who This Book Is For


Professionals, undergraduate and graduate students, enthusiasts, hobbyists, and anyone who wants to go beyond basic knowledge of Bayesian decision networks.

Language: English
Release date: Jul 1, 2023


    Book preview

    Bayesian Decision Networks - Fouad Sabry

    Chapter 1: Bayesian network

    A Bayesian network (also known as a Bayes network, Bayes net, belief network, or decision network) is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph (DAG). Bayesian networks are well suited to determining the probability that any one of several potential known causes was the contributing factor in an event that has already taken place, and to making predictions based on that probability. For instance, a Bayesian network may represent the probabilistic relationships between illnesses and symptoms; given a set of symptoms, the network can be used to calculate the probabilities of the presence of various illnesses.

    Bayesian networks support efficient algorithms for inference and learning. Bayesian networks that model sequences of variables (such as protein sequences or speech signals) are called dynamic Bayesian networks. Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams.

    Formally, a Bayesian network is a directed acyclic graph (DAG) whose nodes represent variables in the Bayesian sense: they may be observable quantities, latent variables, unknown parameters, or hypotheses.

    Edges represent conditional dependencies; nodes that are not connected (that is, no path leads from one node to the other) represent variables that are conditionally independent of each other.

    Each node is associated with a probability function that takes, as input, a particular set of values for the node's parent variables and gives, as output, the probability (or probability distribution, if applicable) of the variable the node represents.

    For example, if m parent nodes represent m Boolean variables, then the probability function could be represented by a table of 2^{m} entries, one entry for each of the 2^{m} possible parent combinations.
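
    As a concrete illustration, here is a minimal Python sketch of such a table for a Boolean node C with two Boolean parents A and B (the node names and probability values are hypothetical, chosen only for illustration):

        # CPT for a Boolean node C with two Boolean parents A and B:
        # one entry per combination of parent values (2**2 = 4 rows),
        # each giving Pr(C = True | A, B).
        cpt_c = {
            (True,  True):  0.95,
            (True,  False): 0.70,
            (False, True):  0.60,
            (False, False): 0.05,
        }

        def p_c(c: bool, a: bool, b: bool) -> float:
            """Look up Pr(C = c | A = a, B = b) from the table."""
            p_true = cpt_c[(a, b)]
            return p_true if c else 1.0 - p_true

        print(p_c(True, True, False))   # 0.7
        print(p_c(False, True, False))  # 0.3 (the two rows sum to 1)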

    Similar ideas may be applied to undirected, and possibly cyclic, graphs such as Markov networks.

    An example helps to make concrete what a Bayesian network is and how it works. Suppose we wish to model the dependencies among three variables: the sprinkler (more precisely, its state: whether it is on or off), the presence or absence of rain, and whether the grass is wet. Note that two events can cause the grass to be wet: an active sprinkler or rain. Rain also has a direct effect on the use of the sprinkler (namely, when it rains, the sprinkler usually is not active). This situation can be modeled with a Bayesian network. Each variable can take one of two values: T (for true) or F (for false).

    By the chain rule of probability, the joint probability function is

    \Pr(G, S, R) = \Pr(G \mid S, R) \Pr(S \mid R) \Pr(R)

    where G stands for Grass wet (true/false), S stands for Sprinkler switched on (true/false), and R stands for Raining (true/false).

    The model can answer questions about the presence of a cause given the presence of an effect (the so-called inverse probability), such as "What is the probability that it is raining, given that the grass is wet?", by using the conditional probability formula and summing over all nuisance variables:

    \Pr(R=T \mid G=T) = \frac{\Pr(G=T, R=T)}{\Pr(G=T)} = \frac{\sum_{x \in \{T,F\}} \Pr(G=T, S=x, R=T)}{\sum_{x,y \in \{T,F\}} \Pr(G=T, S=x, R=y)}

    Using the expansion of the joint probability function \Pr(G, S, R) and the conditional probabilities from the conditional probability tables (CPTs) stated in the diagram, each term in the sums in the numerator and denominator can be evaluated.

    For example,

    \begin{aligned}
    \Pr(G=T, S=T, R=T) &= \Pr(G=T \mid S=T, R=T) \Pr(S=T \mid R=T) \Pr(R=T) \\
                       &= 0.99 \times 0.01 \times 0.2 \\
                       &= 0.00198.
    \end{aligned}

    Then the numerical results, subscripted by the associated variable values, are

    \Pr(R=T \mid G=T) = \frac{0.00198_{TTT} + 0.1584_{TFT}}{0.00198_{TTT} + 0.288_{TTF} + 0.1584_{TFT} + 0.0_{TFF}} = \frac{891}{2491} \approx 35.77\%.
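
    This whole calculation can be reproduced by brute-force enumeration. Below is a minimal Python sketch; the CPT values are the standard ones for this textbook example, chosen to be consistent with the numbers above (e.g. \Pr(R=T) = 0.2 and \Pr(S=T \mid R=T) = 0.01):

        from itertools import product

        # CPTs consistent with the numbers used in this chapter.
        P_R = {True: 0.2, False: 0.8}             # Pr(R)
        P_S_given_R = {True: 0.01, False: 0.4}    # Pr(S=T | R)
        P_G_given_SR = {(True, True): 0.99, (True, False): 0.9,
                        (False, True): 0.8, (False, False): 0.0}  # Pr(G=T | S, R)

        def joint(g, s, r):
            """Pr(G, S, R) = Pr(G | S, R) * Pr(S | R) * Pr(R) (chain rule)."""
            pg = P_G_given_SR[(s, r)] if g else 1.0 - P_G_given_SR[(s, r)]
            ps = P_S_given_R[r] if s else 1.0 - P_S_given_R[r]
            return pg * ps * P_R[r]

        # Numerator: sum out the nuisance variable S with R fixed to T;
        # denominator: sum out both S and R.
        num = sum(joint(True, s, True) for s in (True, False))
        den = sum(joint(True, s, r) for s, r in product((True, False), repeat=2))
        print(num / den)  # 0.3577... = 891/2491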

    An interventional question, such as "What is the probability that it would rain, given that we wet the grass?", is answered by the post-intervention joint distribution function

    \Pr(S, R \mid \text{do}(G=T)) = \Pr(S \mid R) \Pr(R)

    obtained by removing the factor \Pr(G \mid S, R) from the pre-intervention distribution.

    The use of the do operator ensures that the value of G is always true.

    The action does not affect the probability of rain:

    \Pr(R \mid \text{do}(G=T)) = \Pr(R).

    To predict the effect of turning the sprinkler on:

    \Pr(R, G \mid \text{do}(S=T)) = \Pr(R) \Pr(G \mid R, S=T)

    with the factor \Pr(S=T \mid R) removed, showing that the action affects the grass but not the rain.
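
    Continuing the sketch above (same assumed CPT values), both interventional quantities follow directly from these truncated factorizations:

        # Reusing the CPTs from the enumeration sketch above.
        P_R = {True: 0.2, False: 0.8}
        P_G_given_SR = {(True, True): 0.99, (True, False): 0.9,
                        (False, True): 0.8, (False, False): 0.0}

        # Pr(R=T | do(G=T)) = Pr(R=T): with the factor Pr(G | S, R) removed,
        # intervening on G cannot change the probability of rain.
        print(P_R[True])  # 0.2

        # Pr(G=T | do(S=T)) = sum over r of Pr(R=r) * Pr(G=T | S=T, R=r),
        # with the factor Pr(S | R) removed by the intervention.
        p_g_do_s = sum(P_R[r] * P_G_given_SR[(True, r)] for r in (True, False))
        print(p_g_do_s)  # 0.2*0.99 + 0.8*0.9 = 0.918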

    Such predictions may not be feasible when some of the variables are unobserved, as in most policy evaluation problems.

    The effect of the action \text{do}(x) can still be predicted, however, whenever the back-door criterion is satisfied: if a set Z of observed nodes d-separates (or blocks) all back-door paths from X to Y, then

    \Pr(Y, Z \mid \text{do}(x)) = \frac{\Pr(Y, Z, X=x)}{\Pr(X=x \mid Z)}.

    A back-door path is one that ends with an arrow pointing into X.

    Sets that satisfy the back-door criterion are called "sufficient" or "admissible." For example, the set Z = R is admissible for predicting the effect of S = T on G, because R d-separates the (only) back-door path S ← R → G.
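
    As a sanity check, applying the back-door formula with Y = G, Z = R, and X = S (intervening with S = T) reproduces the truncated-factorization result from the sketch above:

        # Back-door adjustment with the admissible set Z = {R}:
        # Pr(G=T, R=r | do(S=T)) = Pr(G=T, R=r, S=T) / Pr(S=T | R=r).
        P_R = {True: 0.2, False: 0.8}
        P_S_given_R = {True: 0.01, False: 0.4}
        P_G_given_SR = {(True, True): 0.99, (True, False): 0.9,
                        (False, True): 0.8, (False, False): 0.0}

        def joint_gts(r):
            """Pr(G=T, S=T, R=r) via the chain rule."""
            return P_G_given_SR[(True, r)] * P_S_given_R[r] * P_R[r]

        p = sum(joint_gts(r) / P_S_given_R[r] for r in (True, False))
        print(p)  # 0.918, matching Pr(G=T | do(S=T)) computed above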

    However, if R is not observed, no other set d-separates this path, and the effect of turning the sprinkler on (S = T) on the grass (G) cannot be predicted from passive observations.

    In that case, P(G | do(S = T)) is not "identified."

    This reflects the fact that, lacking interventional data, the observed dependence between S and G is due either to a causal connection or to a spurious association (an apparent dependence arising from their common cause, R).

    (See Simpson's paradox.)

    The three rules of do-calculus make it possible to determine whether a causal relationship is identifiable from an arbitrary Bayesian network containing unobserved variables.
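
    For reference, the three rules, in Pearl's standard notation (where G_{\overline{X}} is the graph with all arrows into X deleted, and G_{\underline{X}} the graph with all arrows out of X deleted), are usually stated as:

    \begin{aligned}
    &\text{Rule 1: } \Pr(y \mid \text{do}(x), z, w) = \Pr(y \mid \text{do}(x), w)
      &&\text{if } (Y \perp Z \mid X, W) \text{ in } G_{\overline{X}} \\
    &\text{Rule 2: } \Pr(y \mid \text{do}(x), \text{do}(z), w) = \Pr(y \mid \text{do}(x), z, w)
      &&\text{if } (Y \perp Z \mid X, W) \text{ in } G_{\overline{X}\,\underline{Z}} \\
    &\text{Rule 3: } \Pr(y \mid \text{do}(x), \text{do}(z), w) = \Pr(y \mid \text{do}(x), w)
      &&\text{if } (Y \perp Z \mid X, W) \text{ in } G_{\overline{X}\,\overline{Z(W)}}
    \end{aligned}

    where Z(W) is the set of Z-nodes that are not ancestors of any W-node in G_{\overline{X}}.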

    Compared with exhaustive probability tables, a Bayesian network can save a considerable amount of memory if the dependencies in the joint distribution are sparse.

    For example, a naive way of storing the conditional probabilities of 10 two-valued variables as a table requires storage space for 2^{10}=1024 values.

    If no variable's local distribution depends on more than three parent variables, the Bayesian network representation stores at most 10 \cdot 2^{3} = 80 values.
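
    The arithmetic is straightforward to check:

        full_table = 2 ** 10     # joint table over 10 Boolean variables: 1024 entries
        factored = 10 * 2 ** 3   # 10 CPTs, each with at most 3 Boolean parents: 80 entries
        print(full_table, factored)  # 1024 80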

    One of the benefits of Bayesian
