
Inductive Logic Programming: Fundamentals and Applications

Ebook · 128 pages · 1 hour


About this ebook

What Is Inductive Logic Programming


A subfield of symbolic artificial intelligence known as inductive logic programming (ILP) uses logic programming as a uniform representation for examples, background knowledge, and hypotheses. Given an encoding of the known background knowledge and a collection of examples represented as a logical database of facts, an ILP system derives a hypothesized logic program that entails all of the positive examples and none of the negative ones. In this model, the hypothesis is derived from the positive examples, the negative examples, and the background knowledge.


How You Will Benefit


(I) Insights and validations about the following topics:


Chapter 1: Inductive Logic Programming


Chapter 2: Stephen Muggleton


Chapter 3: Progol


Chapter 4: Program Synthesis


Chapter 5: Inductive Programming


Chapter 6: First-Order Logic


Chapter 7: List of Rules of Inference


Chapter 8: Disjunctive Normal Form


Chapter 9: Resolution (Logic)


Chapter 10: Answer Set Programming


(II) Answering the public's top questions about inductive logic programming.


(III) Real-world examples of the usage of inductive logic programming in many fields.


(IV) 17 appendices to explain, briefly, 266 emerging technologies in each industry, for a 360-degree understanding of the technologies related to inductive logic programming.


Who This Book Is For


Professionals, undergraduate and graduate students, enthusiasts, hobbyists, and anyone who wants to go beyond basic knowledge or information about inductive logic programming.

Language: English
Release date: Jun 30, 2023


    Book preview

    Inductive Logic Programming - Fouad Sabry

    Chapter 1: Inductive logic programming

    Inductive logic programming (ILP) is a branch of symbolic AI that uses logic programming as a uniform representation for examples, background knowledge, and hypotheses. Given an encoding of the known background knowledge and a set of examples in the form of a logical database of facts, an ILP system generates a hypothesized logic program that entails all the positive and none of the negative examples.

    Schema: positive examples + negative examples + background knowledge ⇒ hypothesis.

    Bioinformatics and natural language processing are two fields that benefit greatly from inductive logic programming. Gordon Plotkin and Ehud Shapiro laid the first theoretical groundwork for inductive machine learning in a logical setting. Muggleton first implemented inverse entailment in the PROGOL system. Here, induction refers to philosophical rather than mathematical induction (the latter being the technique of proving a property for all members of a well-ordered set).

    The background knowledge is given as a logic theory B, commonly in the form of the Horn clauses used in logic programming.

    The positive and negative examples are given as conjunctions E^{+} and E^{-} of unnegated and negated ground literals, respectively.

    A correct hypothesis h is a logical proposition satisfying the following requirements:

    \begin{array}{llll}
    \text{Necessity:}          & B                     & \not\models & E^{+}          \\
    \text{Sufficiency:}        & B \land h             & \models     & E^{+}          \\
    \text{Weak consistency:}   & B \land h             & \not\models & \textit{false} \\
    \text{Strong consistency:} & B \land h \land E^{-} & \not\models & \textit{false}
    \end{array}

    Necessity does not impose a restriction on h; it merely forbids the generation of any hypothesis as long as the positive facts are explainable without one.

    Sufficiency requires any generated hypothesis h to explain all positive examples E^{+}.

    Weak consistency forbids the generation of any hypothesis h that contradicts the background knowledge B.

    Strong consistency also forbids the generation of any hypothesis h that is inconsistent with the negative examples E^{-}, given the background knowledge B; strong consistency implies weak consistency; if no negative examples are given, the two requirements coincide.

    Džeroski requires only Sufficiency (called Completeness there) and Strong consistency.
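
    These conditions can be tested operationally. The following is a minimal sketch in Prolog (the language used for all code examples here), with query success under B together with h standing in for entailment, which is adequate for the definite programs and ground examples considered in this chapter; covers_all/1 corresponds to Sufficiency and rejects_all/1 to Strong consistency. The predicate names are illustrative, not taken from any particular ILP system.

        % Query success under the loaded program (B plus h) stands in for
        % entailment; sound for definite programs and ground examples.
        covers_all(Pos)  :- forall(member(E, Pos), call(E)).     % Sufficiency
        rejects_all(Neg) :- forall(member(E, Neg), \+ call(E)).  % Strong consistency

        correct_hypothesis(Pos, Neg) :-
            covers_all(Pos),
            rejects_all(Neg).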

    A well-known example is learning the daughter relation from family data, using the following abbreviations:

    par: parent, fem: female, dau: daughter, g: George, h: Helen, m: Mary, t: Tom, n: Nancy, and e: Eve.

    The method starts from the background knowledge

    \textit{par}(h,m) \land \textit{par}(h,t) \land \textit{par}(g,m) \land \textit{par}(t,e) \land \textit{par}(n,e) \land \textit{fem}(h) \land \textit{fem}(m) \land \textit{fem}(n) \land \textit{fem}(e),

    the positive examples

    \textit{dau}(m,h) \land \textit{dau}(e,t),

    and the trivial proposition true, to denote the absence of negative examples.
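
    In Prolog syntax, this background knowledge and the positive examples can be written down directly; the constant and predicate names follow the abbreviations above.

        % Background knowledge B.
        par(h, m).  par(h, t).  par(g, m).  par(t, e).  par(n, e).
        fem(h).     fem(m).     fem(n).     fem(e).

        % Positive examples E+; there are no negative examples.
        positive(dau(m, h)).
        positive(dau(e, t)).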

    The inductive logic programming technique of relative least general generalization (rlgg), proposed by Plotkin, will be used to learn a formal definition of the daughter relation dau.

    The method proceeds in the following steps:

    1. Relativize each positive example literal with the complete background knowledge:

    \begin{aligned}
    \textit{dau}(m,h) &\leftarrow \textit{par}(h,m) \land \textit{par}(h,t) \land \textit{par}(g,m) \land \textit{par}(t,e) \land \textit{par}(n,e) \land \textit{fem}(h) \land \textit{fem}(m) \land \textit{fem}(n) \land \textit{fem}(e) \\
    \textit{dau}(e,t) &\leftarrow \textit{par}(h,m) \land \textit{par}(h,t) \land \textit{par}(g,m) \land \textit{par}(t,e) \land \textit{par}(n,e) \land \textit{fem}(h) \land \textit{fem}(m) \land \textit{fem}(n) \land \textit{fem}(e)
    \end{aligned}

    2. Convert into clause normal form:

    \begin{aligned}
    \textit{dau}(m,h) &\lor \lnot\textit{par}(h,m) \lor \lnot\textit{par}(h,t) \lor \lnot\textit{par}(g,m) \lor \lnot\textit{par}(t,e) \lor \lnot\textit{par}(n,e) \lor \lnot\textit{fem}(h) \lor \lnot\textit{fem}(m) \lor \lnot\textit{fem}(n) \lor \lnot\textit{fem}(e) \\
    \textit{dau}(e,t) &\lor \lnot\textit{par}(h,m) \lor \lnot\textit{par}(h,t) \lor \lnot\textit{par}(g,m) \lor \lnot\textit{par}(t,e) \lor \lnot\textit{par}(n,e) \lor \lnot\textit{fem}(h) \lor \lnot\textit{fem}(m) \lor \lnot\textit{fem}(n) \lor \lnot\textit{fem}(e)
    \end{aligned}

    3. Anti-unify each compatible pair of literals (a Prolog sketch of this operation is given after the example):

    \textit{dau}(x_{me},x_{ht}) from \textit{dau}(m,h) and \textit{dau}(e,t); \lnot\textit{par}(x_{ht},x_{me}) from \lnot\textit{par}(h,m) and \lnot\textit{par}(t,e); \lnot\textit{fem}(x_{me}) from \lnot\textit{fem}(m) and \lnot\textit{fem}(e); \lnot\textit{par}(g,m) from \lnot\textit{par}(g,m) and \lnot\textit{par}(g,m); and similarly for all other literals common to both clauses;

    \lnot\textit{par}(x_{gt},x_{me}) from \lnot\textit{par}(g,m) and \lnot\textit{par}(t,e), and many other negated literals built from non-corresponding pairs.

    4. Delete all negated literals containing variables that do not occur in a positive literal:

    after deleting all negated literals containing variables other than x_{me}, x_{ht}, only

    \textit{dau}(x_{me},x_{ht}) \lor \lnot\textit{par}(x_{ht},x_{me}) \lor \lnot\textit{fem}(x_{me})

    remains, together with all ground literals from the background knowledge.

    5. Convert the clause back to Horn form:

    \textit{dau}(x_{me},x_{ht}) \leftarrow \textit{par}(x_{ht},x_{me}) \land \textit{fem}(x_{me}) \land (\text{all background knowledge facts})

    The resulting Horn clause is the hypothesis h obtained by the rlgg approach.

    Ignoring all background knowledge facts, the clause informally reads "x_{me} is called a daughter of x_{ht} if x_{ht} is the parent of x_{me} and x_{me} is female", which is the commonly accepted definition.
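
    The anti-unification of step 3 can be sketched in a few lines of Prolog. This simplified version handles a single pair of atoms over constants; in the full rlgg, the pair store S is threaded across all literal pairs of the clause, so that, for example, the pair (m,e) is always mapped to the same variable x_{me}.

        % Least general generalization (anti-unification) of two atoms with
        % the same predicate symbol, after Plotkin. Differing argument pairs
        % are replaced by variables, reusing one variable per distinct pair.
        lgg(A, B, G) :-
            A =.. [F | As],
            B =.. [F | Bs],
            lgg_args(As, Bs, Gs, [], _),
            G =.. [F | Gs].

        lgg_args([], [], [], S, S).
        lgg_args([A | As], [B | Bs], [G | Gs], S0, S) :-
            lgg_arg(A, B, G, S0, S1),
            lgg_args(As, Bs, Gs, S1, S).

        lgg_arg(T, T, T, S, S) :- !.                          % identical constants stay
        lgg_arg(A, B, V, S, S) :- memberchk((A, B)-V, S), !.  % reuse the pair's variable
        lgg_arg(A, B, V, S, [(A, B)-V | S]).                  % fresh variable otherwise

        % ?- lgg(dau(m, h), dau(e, t), G).   gives   G = dau(_Me, _Ht).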

    Concerning the above requirements, Necessity is satisfied because the predicate dau does not occur in the background knowledge, which hence cannot imply any property containing this predicate, such as the positive examples.

    Sufficiency is satisfied by the computed hypothesis h, since it, together with \textit{par}(h,m) \land \textit{fem}(m) from the background knowledge, implies the first positive example \textit{dau}(m,h); similarly, h together with \textit{par}(t,e) \land \textit{fem}(e) from the background knowledge implies the second positive example \textit{dau}(e,t).

    Weak consistency is satisfied by h, since h holds in the (finite) Herbrand structure given by the background knowledge; similarly for strong consistency.
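
    With the facts given earlier and the learned clause loaded, the harness sketched above checks these requirements mechanically (query success again standing in for entailment):

        % Hypothesis h obtained by rlgg, as a Prolog clause.
        dau(X_me, X_ht) :- par(X_ht, X_me), fem(X_me).

        % ?- correct_hypothesis([dau(m, h), dau(e, t)], []).
        % true.   % Sufficiency holds; consistency holds vacuously (no negatives).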

    The common definition of the grandmother relation, viz.

    \textit{gra}(x,z) \leftarrow \textit{fem}(x) \land \textit{par}(x,y) \land \textit{par}(y,z),

    cannot be learned using the above approach, since the variable y occurs only in the clause body; the corresponding literals would have been deleted in the fourth step.

    To overcome this flaw, that step has to be modified so that it can be parametrized with different literal post-selection heuristics.
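
    Written as a Prolog clause, the problem is easy to see:

        % Y occurs only in the body, linking the two parent steps; plain
        % rlgg would have deleted the literals mentioning it in step 4.
        gra(X, Z) :- fem(X), par(X, Y), par(Y, Z).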

    Historically, the GOLEM system is based on the rlgg approach.

    An inductive logic programming system is a program that takes as input logic theories B, E^{+}, E^{-} and outputs a correct hypothesis H with respect to them. The algorithm of an ILP system consists of two parts: hypothesis search and hypothesis selection.

    First a hypothesis is searched for with an inductive logic programming procedure; then a selection algorithm picks a subset of the found hypotheses (in most systems, a single hypothesis).

    Each hypothesis discovered is given a score by a selection algorithm, and the top-scoring hypotheses are returned.

    For instance, under the minimum description length principle, the hypothesis with the smallest Kolmogorov complexity scores highest.

    An ILP system is complete if and only if, for any input logic theories B, E^{+}, E^{-}, every correct hypothesis H with respect to these theories can be found by its hypothesis search procedure.
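
    As a schematic illustration of this two-part structure, the sketch below separates search from selection. Here hypothesis_candidate/1 (an enumerator of candidate clauses) and score/2 (a compression-style measure) are hypothetical placeholders rather than parts of any real ILP system, and sort/4 is as in SWI-Prolog.

        % Schematic ILP loop: enumerate candidates (hypothesis search), keep
        % the correct ones, and return the best-scoring one (selection).
        best_hypothesis(Pos, Neg, Best) :-
            findall(Score-H,
                    ( hypothesis_candidate(H),   % search: assumed enumerator
                      correct(H, Pos, Neg),
                      score(H, Score)            % assumed scoring function
                    ),
                    Scored),
            sort(0, @>=, Scored, [_-Best | _]).  % selection: highest score wins

        % A candidate is correct if, once added to the program, it covers
        % all positive examples and none of the negative ones.
        correct(H, Pos, Neg) :-
            assertz(H),
            (   forall(member(E, Pos), call(E)),
                forall(member(E, Neg), \+ call(E))
            ->  retract(H)
            ;   retract(H), fail
            ).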

    Modern ILP systems such as Progol exploit the following equivalence, which holds for any theories B, E, H:

    B \land H \models E \iff B \land \lnot E \models \lnot H.

    First they construct an intermediate theory F, called a bridge theory, satisfying the conditions B \land \lnot E \models F and F \models \lnot H.

    Then, since H \models \lnot F, they generalize the negation of the bridge theory F with anti-entailment.
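
    As an illustrative instance, with the details of real systems simplified: take B = \textit{par}(h,m) \land \textit{fem}(m) and E = \textit{dau}(m,h). Then B \land \lnot E yields the bridge theory F = \lnot\textit{dau}(m,h) \land \textit{par}(h,m) \land \textit{fem}(m), and \lnot F is the most specific ("bottom") clause entailing the example; hypotheses are obtained by generalizing it. The predicates bottom/1 and candidate/1 below are hypothetical names used only for this illustration.

        % The most specific clause entailing dau(m, h) under B, i.e. the
        % negation of the bridge theory F:
        bottom(( dau(m, h) :- par(h, m), fem(m) )).

        % One generalization (variabilizing constants), a candidate that
        % anti-entailment would consider:
        candidate(( dau(X, Y) :- par(Y, X), fem(X) )).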

    However, the anti-entailment operation is computationally more expensive due to its high non-determinism.

    Therefore, anti-subsumption, a less non-deterministic operation than anti-entailment, can be used instead to search for hypotheses.
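
    Anti-subsumption inverts θ-subsumption: a clause C subsumes a clause D if some substitution maps every literal of C onto a literal of D. Below is a minimal sketch of the subsumption test, assuming clauses are represented as lists of literals and D is ground; unification with backtracking enumerates the candidate substitutions.

        % C theta-subsumes D if some substitution maps each literal of C to
        % a literal of D. Decidable, though NP-complete in general.
        subsumes_clause([], _).
        subsumes_clause([L | Ls], D) :-
            member(L, D),            % unification proposes a substitution
            subsumes_clause(Ls, D).

        % e.g. subsumes_clause([dau(X, Y), not(par(Y, X))],
        %                      [dau(m, h), not(par(h, m)), not(fem(m))]) succeeds.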

    This raises the question of whether a given ILP system's hypothesis search procedure is exhaustive. Yamamoto's example shows that the hypothesis search procedure of Progol, based on the inverse entailment inference rule, is not complete.

    Well-known ILP systems and implementations include:

    1BC and 1BC2: first-order and second-order naive Bayesian classifiers

    ACE (A Combined Engine)

    Aleph

    Atom

    Claudien

    DL-Learner

    DMax

    FastLAS (Fast Learning from Answer Sets)

    FOIL (First Order Inductive Learner)

    Golem

    ILASP (Inductive Learning of Answer Set Programs)

    Imparo

    Inthelex (INcremental THEory Learner from EXamples)

    Lime

    Metagol

    Mio

    Ehud Shapiro's Model Inference System (MIS)

    PROGOL

    RSD

    Warmr (now included in ACE)

    ProGolem

    {End Chapter 1}

    Chapter 2: Stephen Muggleton

    Stephen H. Muggleton FBCS, FIET, FAAAI attended the University of Edinburgh, where he earned a BSc in Computer Science in 1982 and a PhD in Artificial Intelligence in 1986 under the supervision of Donald Michie.

    In the years after his doctorate, Muggleton worked as a research associate at the Turing Institute in Glasgow (1987–1991) and then as an EPSRC Advanced Research Fellow at the Oxford University Computing Laboratory (OUCL) (1992–1997), where he also established the Machine Learning Group. He subsequently held a post at the University of York from 1997, moving to Imperial College London in 2001.

    Muggleton's work and its relevance, especially in the context of
