Inductive Logic Programming: Fundamentals and Applications
By Fouad Sabry
About this ebook
What Is Inductive Logic Programming
A subfield of symbolic artificial intelligence known as inductive logic programming (ILP) uses logic programming as a uniform representation for examples, background knowledge, and hypotheses. Given an encoding of the known background knowledge and a collection of examples represented as a logical database of facts, an ILP system will derive a hypothesized logic program that entails all of the positive examples and none of the negative ones. In this model, the hypothesis is derived from positive examples, negative examples, and background knowledge.
How You Will Benefit
(I) Insights and validations about the following topics:
Chapter 1: Inductive Logic Programming
Chapter 2: Stephen Muggleton
Chapter 3: Progol
Chapter 4: Program Synthesis
Chapter 5: Inductive Programming
Chapter 6: First-Order Logic
Chapter 7: List of Rules of Inference
Chapter 8: Disjunctive Normal Form
Chapter 9: Resolution (Logic)
Chapter 10: Answer Set Programming
(II) Answers to the top public questions about inductive logic programming.
(III) Real-world examples of the use of inductive logic programming in many fields.
(IV) 17 appendices that briefly explain 266 emerging technologies in each industry, to give a 360-degree understanding of the technologies related to inductive logic programming.
Who This Book Is For
Professionals, undergraduate and graduate students, enthusiasts, hobbyists, and anyone who wants to go beyond basic knowledge of inductive logic programming.
Book preview
Inductive Logic Programming - Fouad Sabry
Chapter 1: Inductive logic programming
Inductive logic programming (ILP) is a branch of symbolic AI that uses logic programming as a uniform representation for examples, background knowledge, and hypotheses. An ILP system, given an encoding of the background knowledge and a set of examples in the form of a logical database of facts, will generate a hypothesized logic program that entails all the positive examples and none of the negative ones.
Schema: positive examples + negative examples + background knowledge ⇒ hypothesis.
Bioinformatics and natural language processing are two fields that benefit greatly from inductive logic programming. Gordon Plotkin and Ehud Shapiro established the first theoretical groundwork for inductive machine learning in a logical setting. Muggleton was the first to implement inverse entailment, in the PROGOL system. Here, induction refers to philosophical rather than mathematical induction (the latter being the technique of proving a property for all members of a well-ordered set).
The background knowledge is given as a logic theory B, commonly in the form of Horn clauses as used in logic programming.
The positive and negative examples are given as conjunctions E^{+} and E^{-} of unnegated and negated ground literals, respectively.
A correct hypothesis h is a logic proposition satisfying the following requirements:
Necessity: B \not\models E^{+}
Sufficiency: B \land h \models E^{+}
Weak consistency: B \land h \not\models {\textit {false}}
Strong consistency: B \land h \land E^{-} \not\models {\textit {false}}
Necessity does not impose a restriction on h, but forbids the generation of any hypothesis as long as the positive facts are explainable without it.
Sufficiency requires any generated hypothesis h to explain all positive examples E^{+} .
Weak consistency forbids the generation of any hypothesis h that contradicts the background knowledge B.
Strong consistency also forbids the generation of any hypothesis h that is inconsistent with the negative examples E^{-} , given the background knowledge B; strong consistency implies weak consistency. If no negative examples are given, both requirements coincide.
Džeroski requires only Sufficiency (called Completeness there) and Strong consistency.
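These four requirements can be checked mechanically in the propositional case by enumerating truth assignments. The following is a minimal sketch under a toy instance of my own (atoms p, q, r; background B = {r}; hypothesis h = r → p; positive example p; negative example q) — none of these names come from the chapter:

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """Check premises |= conclusion by enumerating all truth assignments."""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

atoms = ["p", "q", "r"]
B = [lambda v: v["r"]]                       # background knowledge: r
h = lambda v: (not v["r"]) or v["p"]         # hypothesis: r -> p
e_pos = lambda v: v["p"]                     # positive example: p
e_neg = lambda v: v["q"]                     # negative example: q

necessity = not entails(B, e_pos, atoms)               # B alone must not explain e+
sufficiency = entails(B + [h], e_pos, atoms)           # B and h together must
weak = not entails(B + [h], lambda v: False, atoms)    # B and h are satisfiable
strong = not entails(B + [h, e_neg], lambda v: False, atoms)  # still satisfiable with e-

print(necessity, sufficiency, weak, strong)  # True True True True
```

Note that weak and strong consistency reduce to satisfiability checks: a theory entails false exactly when it has no model.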
The following is a well-known example of learning the daughter relation from family relations, using the abbreviations:
par: parent, fem: female, dau: daughter, g: George, h: Helen, m: Mary, t: Tom, n: Nancy, and e: Eve.
One starts from the background knowledge
{\textit {par}}(h,m)\land {\textit {par}}(h,t)\land {\textit {par}}(g,m)\land {\textit {par}}(t,e)\land {\textit {par}}(n,e)\land {\textit {fem}}(h)\land {\textit {fem}}(m)\land {\textit {fem}}(n)\land {\textit {fem}}(e), the positive examples
{\textit {dau}}(m,h)\land {\textit {dau}}(e,t) , and the trivially true proposition {\textit {true}}
to denote the absence of negative examples.
The relative least general generalization (rlgg) approach
proposed by Plotkin will be used to learn a formal definition of the daughter relation dau.
This method proceeds by the following steps:
Relativize each positive example literal with the complete background knowledge:
{\displaystyle {\begin{aligned}{\textit {dau}}(m,h)\leftarrow {\textit {par}}(h,m)\land {\textit {par}}(h,t)\land {\textit {par}}(g,m)\land {\textit {par}}(t,e)\land {\textit {par}}(n,e)\land {\textit {fem}}(h)\land {\textit {fem}}(m)\land {\textit {fem}}(n)\land {\textit {fem}}(e)\\{\textit {dau}}(e,t)\leftarrow {\textit {par}}(h,m)\land {\textit {par}}(h,t)\land {\textit {par}}(g,m)\land {\textit {par}}(t,e)\land {\textit {par}}(n,e)\land {\textit {fem}}(h)\land {\textit {fem}}(m)\land {\textit {fem}}(n)\land {\textit {fem}}(e)\end{aligned}}}, Normalize the clause form:
{\displaystyle {\begin{aligned}{\textit {dau}}(m,h)\lor \lnot {\textit {par}}(h,m)\lor \lnot {\textit {par}}(h,t)\lor \lnot {\textit {par}}(g,m)\lor \lnot {\textit {par}}(t,e)\lor \lnot {\textit {par}}(n,e)\lor \lnot {\textit {fem}}(h)\lor \lnot {\textit {fem}}(m)\lor \lnot {\textit {fem}}(n)\lor \lnot {\textit {fem}}(e)\\{\textit {dau}}(e,t)\lor \lnot {\textit {par}}(h,m)\lor \lnot {\textit {par}}(h,t)\lor \lnot {\textit {par}}(g,m)\lor \lnot {\textit {par}}(t,e)\lor \lnot {\textit {par}}(n,e)\lor \lnot {\textit {fem}}(h)\lor \lnot {\textit {fem}}(m)\lor \lnot {\textit {fem}}(n)\lor \lnot {\textit {fem}}(e)\end{aligned}}}, Anti-unify each compatible pair of literals:
{\textit {dau}}(x_{{me}},x_{{ht}}) from {\textit {dau}}(m,h) and {\textit {dau}}(e,t) , \lnot {\textit {par}}(x_{{ht}},x_{{me}}) from \lnot {\textit {par}}(h,m) and \lnot {\textit {par}}(t,e) , \lnot {\textit {fem}}(x_{{me}}) from \lnot {\textit {fem}}(m) and \lnot {\textit {fem}}(e) , \lnot {\textit {par}}(g,m) from \lnot {\textit {par}}(g,m) and \lnot {\textit {par}}(g,m) , and similarly for all other background-knowledge literals,
\lnot {\textit {par}}(x_{{gt}},x_{{me}}) from \lnot {\textit {par}}(g,m) and \lnot {\textit {par}}(t,e) , and many other nonsensical literals
Delete all negated literals containing variables that do not occur in a positive literal:
after deleting all negated literals containing variables other than x_{{me}},x_{{ht}} , only
{\textit {dau}}(x_{{me}},x_{{ht}})\lor \lnot {\textit {par}}(x_{{ht}},x_{{me}})\lor \lnot {\textit {fem}}(x_{{me}}) remains, together with all ground literals from the background knowledge
Convert back to Horn clause form:
{\textit {dau}}(x_{{me}},x_{{ht}})\leftarrow {\textit {par}}(x_{{ht}},x_{{me}})\land {\textit {fem}}(x_{{me}})\land ({\text{all background knowledge facts}})
The hypothesis h obtained by the rlgg method is this Horn clause.
Omitting the background knowledge facts, the clause informally reads x_{{me}} is called a daughter of x_{{ht}} if x_{{ht}} is the parent of x_{{me}} and x_{{me}} is female
, which is the commonly accepted definition.
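The anti-unification at the heart of the third step can be sketched in Python. The pairing of literals and the shared variable mapping (so that m/e always generalize to one variable and h/t to another, matching x_{me} and x_{ht} above) follow the chapter's example; the function names and tuple encoding of atoms are my own:

```python
def lgg_terms(s, t, mapping):
    """Least general generalization of two terms: identical constants are
    kept; a differing pair is replaced by a variable, reused consistently."""
    if s == t:
        return s
    if (s, t) not in mapping:
        mapping[(s, t)] = f"X{len(mapping) + 1}"
    return mapping[(s, t)]

def lgg_atoms(a, b, mapping):
    """Anti-unify two atoms with the same predicate and arity."""
    pred_a, args_a = a
    pred_b, args_b = b
    assert pred_a == pred_b and len(args_a) == len(args_b), "incompatible literals"
    return (pred_a, [lgg_terms(s, t, mapping) for s, t in zip(args_a, args_b)])

# One shared mapping keeps variables consistent across literals.
mapping = {}
dau = lgg_atoms(("dau", ["m", "h"]), ("dau", ["e", "t"]), mapping)
par = lgg_atoms(("par", ["h", "m"]), ("par", ["t", "e"]), mapping)
fem = lgg_atoms(("fem", ["m"]), ("fem", ["e"]), mapping)
print(dau, par, fem)
```

Here X1 plays the role of x_{me} and X2 of x_{ht}: the result is dau(X1,X2), par(X2,X1), fem(X1), i.e. the body and head of the learned clause.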
Concerning the above requirements, Necessity was satisfied because the predicate dau does not appear in the background knowledge, which hence cannot imply any property containing this predicate, such as the positive examples.
Sufficiency is satisfied by the computed hypothesis h, since it, together with {\textit {par}}(h,m)\land {\textit {fem}}(m) from the background knowledge, implies the first positive example {\textit {dau}}(m,h) ; similarly, h and {\textit {par}}(t,e)\land {\textit {fem}}(e) from the background knowledge imply the second positive example {\textit {dau}}(e,t) .
Weak consistency is satisfied by h, since h holds in the (finite) Herbrand structure described by the background knowledge; the same holds for Strong consistency.
The common definition of the grandmother relation in a family, viz.
{\textit {gra}}(x,z)\leftarrow {\textit {fem}}(x)\land {\textit {par}}(x,y)\land {\textit {par}}(y,z), cannot be learned with the approach described above, since the variable y occurs only in the clause body; the corresponding literals would have been deleted in the fourth step.
To overcome this flaw, that step must be modified so that it can be parametrized with different literal post-selection heuristics.
Historically, the GOLEM system is based on the rlgg approach.
An inductive logic programming system is a program that takes as input logic theories B,E^{+},E^{-} and outputs a correct hypothesis H with respect to these theories. An algorithm of an ILP system consists of two parts: hypothesis search and hypothesis selection.
First, hypotheses are searched for with an inductive logic programming procedure; then a selection algorithm picks a subset of the hypotheses found (in most systems, a single hypothesis).
Each hypothesis discovered is given a score by a selection algorithm, and the top-scoring hypotheses are returned.
For instance, under a minimal-compression score, the hypothesis with the smallest Kolmogorov complexity receives the highest score.
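Kolmogorov complexity is uncomputable, so practical systems use computable proxies. The following toy sketch scores a hypothesis by counting its literals, a crude minimum-description-length stand-in; the candidate hypotheses (one general clause vs. memorizing four hypothetical ground facts) and all names are illustrative, not from the chapter:

```python
def description_length(clauses):
    """Toy minimum-description-length score: count head + body literals
    across the hypothesis clauses. Lower is better."""
    return sum(1 + len(body) for _head, body in clauses)

# One general rule...
general = [("dau(X,Y)", ["par(Y,X)", "fem(X)"])]
# ...versus memorizing each example as a unit clause (hypothetical facts).
memorized = [("dau(m,h)", []), ("dau(e,t)", []),
             ("dau(a,b)", []), ("dau(c,d)", [])]

best = min([("general", general), ("memorized", memorized)],
           key=lambda pair: description_length(pair[1]))
print(best[0])  # "general": 3 literals beat 4
```

As the number of examples grows, the memorizing hypothesis keeps getting longer while the general rule stays fixed, which is exactly the compression intuition behind such scores.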
An ILP system is complete if and only if, for any input logic theories B,E^{+},E^{-} , every correct hypothesis H with respect to these theories can be found by its hypothesis search procedure.
Modern ILP systems such as Progol find a hypothesis H using the principle of inverse entailment for theories B, E, H:
B\land H\models E\iff B\land \neg E\models \neg H.
First, they construct an intermediate theory F, called a bridge theory, satisfying the conditions B\land \neg E\models F and F\models \neg H .
Then, since H\models \neg F , they generalize the negation of the bridge theory F with anti-entailment.
However, the anti-entailment operation is computationally expensive because it is highly non-deterministic. Therefore, alternative hypothesis searches can be conducted with anti-subsumption, a less non-deterministic operation than anti-entailment.
The question arises whether the hypothesis search procedure of a given ILP system is complete. Using Yamamoto's example, it can be shown that Progol's hypothesis search procedure, which is based on the inverse entailment inference rule, is not complete.
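The inverse entailment equivalence B ∧ H ⊨ E ⟺ B ∧ ¬E ⊨ ¬H itself is a purely logical fact (contraposition of the deduction theorem), and in the propositional case it can be verified by brute force. A minimal sketch over two atoms, with a small illustrative pool of formulas of my own choosing:

```python
from itertools import product

def entails(lhs, rhs):
    """lhs |= rhs over two propositional atoms, by exhaustive enumeration
    of all four truth assignments."""
    return all(rhs(v) for v in product([False, True], repeat=2) if lhs(v))

# Small pool of formulas over atoms a = v[0], b = v[1].
pool = [lambda v: v[0], lambda v: v[1], lambda v: v[0] or v[1],
        lambda v: not v[0], lambda v: v[0] and v[1]]

# Check B ∧ H |= E  <=>  B ∧ ¬E |= ¬H for every triple drawn from the pool.
ok = all(
    entails(lambda v: B(v) and H(v), E)
    == entails(lambda v: B(v) and not E(v), lambda v: not H(v))
    for B in pool for H in pool for E in pool
)
print(ok)  # True
```

This only exercises the equivalence, of course; the hard part of inverse entailment in ILP is inverting it to search for H in first-order logic.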
A number of ILP systems have been implemented, including:
1BC and 1BC2: first-order and second-order naive Bayesian classifiers
ACE (A Combined Engine)
Aleph
Atom
Claudien
DL-Learner
DMax
FastLAS (Fast Learning from Answer Sets)
FOIL (First Order Inductive Learner)
Golem
ILASP (Inductive Learning of Answer Set Programs)
Imparo
Inthelex (INcremental THEory Learner from EXamples)
Lime
Metagol
Mio
Ehud Shapiro's Model Inference System (MIS)
PROGOL
RSD
Warmr (now included in ACE)
ProGolem
{End Chapter 1}
Chapter 2: Stephen Muggleton
Stephen H. Muggleton FBCS, FIET, FAAAI attended the University of Edinburgh, where he earned a BSc in Computer Science in 1982 and a PhD in Artificial Intelligence in 1986 under the supervision of Donald Michie.
After his doctorate, Muggleton worked as a research associate at the Turing Institute in Glasgow (1987–1991) and then as an EPSRC Advanced Research Fellow at the Oxford University Computing Laboratory (OUCL) (1992–1997), where he also established the Machine Learning Group. He then held positions at the University of York from 1997 and at Imperial College London from 2001.
Muggleton's work and its relevance, especially in the context of