Markov Random Field: Exploring the Power of Markov Random Fields in Computer Vision
Ebook · 99 pages · 1 hour


About this ebook

What is Markov Random Field


In the domain of physics and probability, a Markov random field (MRF), Markov network or undirected graphical model is a set of random variables having a Markov property described by an undirected graph. In other words, a random field is said to be a Markov random field if it satisfies Markov properties. The concept originates from the Sherrington-Kirkpatrick model.


How you will benefit


(I) Insights and validations about the following topics:


Chapter 1: Markov random field


Chapter 2: Multivariate random variable


Chapter 3: Hidden Markov model


Chapter 4: Bayesian network


Chapter 5: Graphical model


Chapter 6: Random field


Chapter 7: Belief propagation


Chapter 8: Factor graph


Chapter 9: Conditional random field


Chapter 10: Hammersley-Clifford theorem


(II) Answers to the public's top questions about Markov random fields.


(III) Real-world examples of the use of Markov random fields in many domains.


Who this book is for


Professionals, undergraduate and graduate students, enthusiasts, hobbyists, and anyone who wants to go beyond a basic knowledge of Markov random fields.

Language: English
Release date: May 12, 2024

    Book preview

    Markov Random Field - Fouad Sabry

    Chapter 1: Markov random field

    A Markov random field (MRF), Markov network, or undirected graphical model is, in the fields of physics and probability, a collection of random variables with a Markov property that can be represented by an undirected graph. In other words, a random field is a Markov random field if it satisfies the Markov properties. The concept originates from the Sherrington-Kirkpatrick model.

    In terms of representing dependencies, a Markov network or Markov random field (MRF) is comparable to a Bayesian network, with the key distinction that Bayesian networks are directed and acyclic, while Markov networks are undirected and potentially cyclic. Consequently, a Markov network can represent dependencies that a Bayesian network cannot (such as cyclic dependencies), while it cannot represent some dependencies that a Bayesian network can (such as induced dependencies). The underlying graph of a Markov random field can be finite or infinite.

    The Hammersley-Clifford theorem states that, when the joint probability density of the random variables is strictly positive, the random field can be represented by a Gibbs measure for a suitable (locally defined) energy function. The Ising model is the paradigmatic example of a Markov random field, and it was in this context that Markov random fields were first introduced.
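    As a concrete illustration of the Ising model as a Gibbs measure, the following sketch enumerates a tiny model by brute force. The 2x2 grid, the coupling strength, and all variable names are hypothetical choices for this example, not taken from the book.

```python
import itertools
import math

# Hypothetical example: an Ising model on a 2x2 grid, viewed as an MRF.
# Spins take values in {-1, +1}; edges couple neighboring spins.
edges = [(0, 1), (2, 3), (0, 2), (1, 3)]  # 2x2 grid adjacency (assumed layout)
beta = 0.5                                # assumed coupling strength

def energy(spins):
    # Negative sum of pairwise couplings along the grid edges.
    return -sum(spins[u] * spins[v] for u, v in edges)

# Gibbs measure: P(x) = exp(-beta * E(x)) / Z, computed by brute force.
configs = list(itertools.product([-1, 1], repeat=4))
weights = [math.exp(-beta * energy(s)) for s in configs]
Z = sum(weights)
probs = {s: w / Z for s, w in zip(configs, weights)}

# Aligned configurations (all spins equal) have the lowest energy,
# so they are the most probable under the Gibbs measure.
print(probs[(1, 1, 1, 1)], probs[(1, -1, 1, -1)])
```

    Brute-force enumeration is only feasible for toy graphs; it is used here purely to make the definition of the Gibbs measure explicit.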

    Given an undirected graph G = (V, E), a set of random variables X = (X_v)_{v \in V} indexed by V forms a Markov random field with respect to G if it satisfies the local Markov properties:

    The pairwise Markov property states that any two non-adjacent variables are conditionally independent given all other variables:

    X_u \perp\!\!\!\perp X_v \mid X_{V \setminus \{u,v\}}

    The local Markov property states that a variable is conditionally independent of all other variables given its neighbors:

    X_v \perp\!\!\!\perp X_{V \setminus \operatorname{N}[v]} \mid X_{\operatorname{N}(v)}

    where \operatorname{N}(v) is the set of neighbors of v, and \operatorname{N}[v] = \{v\} \cup \operatorname{N}(v) is the closed neighbourhood of v.
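    The open and closed neighborhoods are easy to make concrete. The adjacency dictionary below is a hypothetical four-node graph invented for this sketch.

```python
# Hypothetical graph: adjacency sets for nodes 0-3.
adj = {0: {1, 2}, 1: {0}, 2: {0, 3}, 3: {2}}

def N_open(v):
    # N(v): the neighbors of v.
    return adj[v]

def N_closed(v):
    # N[v] = {v} union N(v): the closed neighborhood of v.
    return {v} | adj[v]

print(sorted(N_open(2)), sorted(N_closed(2)))  # [0, 3] [0, 2, 3]
```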

    The global Markov property states that, given a separating subset of variables, any two subsets of variables are conditionally independent:

    X_A \perp\!\!\!\perp X_B \mid X_S

    where every path from a node in A to a node in B passes through S .
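    The separation condition in the global Markov property is a purely graph-theoretic check: S separates A from B exactly when removing the nodes of S disconnects A from B. The helper name `separates` below is an assumption for this sketch, implemented with a breadth-first search.

```python
from collections import deque

# Sketch of a separation test (helper name assumed): S separates A and B in an
# undirected graph iff no path from A to B survives once the nodes in S are removed.
def separates(adj, A, B, S):
    blocked = set(S)
    seen = set(A) - blocked
    queue = deque(seen)
    while queue:
        u = queue.popleft()
        if u in B:
            return False  # found a path from A to B avoiding S
        for v in adj.get(u, ()):
            if v not in blocked and v not in seen:
                seen.add(v)
                queue.append(v)
    return True

# Chain graph 0 - 1 - 2: the middle node {1} separates {0} from {2}.
adj = {0: [1], 1: [0, 2], 2: [1]}
print(separates(adj, {0}, {2}, {1}))   # True: conditioning on X_1 blocks the path
print(separates(adj, {0}, {2}, set())) # False: without S, the path 0-1-2 connects them
```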

    The global Markov property is stronger than the local Markov property, which in turn is stronger than the pairwise Markov property. However, the three properties are equivalent for positive distributions (those that assign nonzero probability to every configuration).

    The relation between the three Markov properties is particularly clear in the following formulation:

    Pairwise: For any i, j \in V that are not equal or adjacent, X_i \perp\!\!\!\perp X_j \mid X_{V \setminus \{i,j\}}.

    Local: For any i \in V and any J \subset V not containing or adjacent to i, X_i \perp\!\!\!\perp X_J \mid X_{V \setminus (\{i\} \cup J)}.

    Global: For any I, J \subset V that are non-intersecting and non-adjacent, X_I \perp\!\!\!\perp X_J \mid X_{V \setminus (I \cup J)}.

    Since the Markov property of an arbitrary probability distribution can be difficult to establish, a commonly used class of Markov random fields are those that can be factorized according to the cliques of the graph.

    Given a set of random variables X = (X_v)_{v \in V}, let P(X = x) be the probability of a particular field configuration x in X.

    That is, P(X=x) is the probability of finding that the random variables X take on the particular value x .

    Because X is a set, the probability of x should be understood with respect to the joint distribution of the X_v.

    If this joint density can be factorized over the cliques of G:

    P(X=x) = \prod_{C \in \operatorname{cl}(G)} \phi_C (x_C)

    then X forms a Markov random field with respect to G .

    Here, \operatorname{cl}(G) is the set of cliques of G.
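    The following sketch makes the clique factorization concrete on the chain graph 0 - 1 - 2, whose maximal cliques are {0, 1} and {1, 2}. The potential values are hypothetical; the sketch also verifies numerically that the resulting distribution satisfies the Markov property of the chain.

```python
import itertools

# Hypothetical clique potentials for the chain 0 - 1 - 2 over binary variables.
phi_01 = {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0}
phi_12 = {(0, 0): 1.0, (0, 1): 4.0, (1, 0): 1.0, (1, 1): 1.0}

def weight(x):
    # Unnormalized weight: the product of the clique potentials.
    return phi_01[(x[0], x[1])] * phi_12[(x[1], x[2])]

configs = list(itertools.product([0, 1], repeat=3))
Z = sum(weight(x) for x in configs)          # partition function
P = {x: weight(x) / Z for x in configs}      # the resulting Gibbs distribution

def cond(x0, x1, x2):
    # P(X_0 = x0 | X_1 = x1, X_2 = x2), computed from the joint.
    num = P[(x0, x1, x2)]
    den = sum(P[(a, x1, x2)] for a in (0, 1))
    return num / den

# Markov property of the chain: P(x0 | x1, x2) does not depend on x2.
print(round(cond(0, 1, 0), 6), round(cond(0, 1, 1), 6))  # equal values
```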

    If only maximal cliques are considered, the concept remains unchanged.

    The functions \phi_C are sometimes referred to as factor potentials or clique potentials.

    Note, however, that conflicting terminology is in use: the word potential is often applied to the logarithm of \phi_C.

    This is because, in statistical mechanics, \log(\phi_C) has a direct interpretation as the potential energy of a configuration x_C.

    Some MRFs do not factorize: a simple example can be constructed on a cycle of four nodes with some infinite energies, i.e. configurations of zero probability, even if one, more appropriately, allows the infinite energies to act on the complete graph on V.

    MRFs factorize if at least one of the following conditions is fulfilled:

    the density is positive (by the Hammersley-Clifford theorem);

    the graph is chordal (by equivalence to a Bayesian network).

    When such a factorization does exist, it is possible to construct a factor graph for the network.

    Any positive Markov random field can be written as an exponential family in canonical form with feature functions f_k, such that the full joint distribution can be written as

    P(X=x) = \frac{1}{Z} \exp \left( \sum_{k} w_k^{\top} f_k (x_{ \{ k \}}) \right)

    where the notation

    w_k^{\top} f_k (x_{ \{ k \}}) = \sum_{i=1}^{N_k} w_{k,i} \cdot f_{k,i}(x_{\{k\}})

    is simply a dot product over field configurations, and Z is the partition function:

    Z = \sum_{x \in \mathcal{X}} \exp \left(\sum_{k} w_k^{\top} f_k(x_{ \{ k \} })\right).

    Here, \mathcal{X} denotes the set of all possible assignments of values to all the network's random variables.
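    The log-linear form and the clique factorization can be checked against each other numerically. In the sketch below the single clique, its potential values, and the weights w = \log(\phi) are all hypothetical; indicator features select exactly one weight per clique configuration.

```python
import itertools
import math

# Hypothetical potential for one clique over (X_0, X_1), both binary.
phi = {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0}
w = {c: math.log(v) for c, v in phi.items()}   # weights are log-potentials

configs = list(itertools.product([0, 1], repeat=2))

def score(x):
    # sum_k w_k . f_k(x): the indicator feature picks out one weight per clique.
    return w[x]

Z = sum(math.exp(score(x)) for x in configs)   # partition function
P_loglin = {x: math.exp(score(x)) / Z for x in configs}
P_factor = {x: phi[x] / sum(phi.values()) for x in configs}

# Both parameterizations define the same Gibbs measure.
print(all(abs(P_loglin[x] - P_factor[x]) < 1e-12 for x in configs))  # True
```

    Note that Z requires a sum over every configuration in \mathcal{X}, which is why computing the partition function exactly is intractable for large networks.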

    Usually, the feature functions f_{k,i} are defined such that they are indicators of the clique's configuration, i.e.

    f_{k,i}(x_{\{k\}}) = 1 if x_{\{k\}} corresponds to the i-th possible configuration of the k-th clique and 0 otherwise.

    This model is equivalent to the clique factorization above if N_k=|\operatorname{dom}(C_k)| is the cardinality of the clique, and the weight of a feature f_{k,i} corresponds to the logarithm of the corresponding clique factor, i.e.

    w_{k,i} = \log \phi(c_{k,i}) , where c_{k,i} is the i-th possible configuration of the k-th clique, i.e.

    the i-th value in the domain of the clique C_{k} .

    The probability P is often called the Gibbs measure.

    A Markov field can be expressed as a logistic model only if all clique factors are non-zero, i.e. if no configuration in \mathcal{X} is assigned a probability of 0.
