Conceptual Dependency Theory: Fundamentals and Applications
Ebook · 93 pages · 1 hour


About this ebook

What Is Conceptual Dependency Theory


Conceptual dependency theory is a model of natural-language comprehension used in artificial intelligence (AI) systems.


How You Will Benefit


(I) Insights and validations about the following topics:


Chapter 1: Conceptual dependency theory


Chapter 2: Knowledge representation and reasoning


Chapter 3: Natural language processing


Chapter 4: Natural-language understanding


Chapter 5: Symbolic artificial intelligence


Chapter 6: Language of thought hypothesis


Chapter 7: Roger Schank


Chapter 8: Conceptual model


Chapter 9: Frame semantics (linguistics)


Chapter 10: Script theory


(II) Answers to the public's top questions about conceptual dependency theory.


(III) Real-world examples of the use of conceptual dependency theory in many fields.


(IV) 17 appendices explaining, briefly, 266 emerging technologies in each industry, for a 360-degree understanding of technologies related to conceptual dependency theory.


Who This Book Is For


Professionals, undergraduate and graduate students, enthusiasts, hobbyists, and anyone who wants to go beyond basic knowledge of conceptual dependency theory.

Language: English
Release date: Jun 29, 2023

    Book preview

    Conceptual Dependency Theory - Fouad Sabry

    Chapter 1: Conceptual dependency theory

    Artificial intelligence systems use conceptual dependency theory as a model for interpreting natural language.

    The model was first presented in 1969, in the early years of artificial intelligence, by Roger Schank at Stanford University. Schank's students at Yale University, including Robert Wilensky, Wendy Lehnert, and Janet Kolodner, made extensive use of the model.

    Schank developed the model to represent the knowledge conveyed by natural-language input to computers. Inspired in part by the work of Sydney Lamb, his aim was to make the meaning independent of the words used in the input: two sentences with the same meaning should have a single representation. The system was also intended to support logical inference.

    The model uses the following basic representational tokens:

    real-world objects, each with some attributes;

    real-world actions, each with attributes;

    times;

    locations.

    A set of conceptual transitions then acts on this representation. For instance, an ATRANS represents a transfer such as give or take, a PTRANS acts on locations such as move or go, and an MTRANS represents mental acts such as tell.

    A sentence such as John gave a book to Mary is thus represented as an ATRANS acting on two real-world objects, John and Mary.
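
    To make this concrete, here is a minimal Python sketch of how such a conceptual dependency structure might be encoded. The PhysObj and Act types and the role names (actor, object, source, recipient) are illustrative assumptions following common textbook presentations, not Schank's original notation:

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class PhysObj:
        # A real-world object with individual attributes.
        name: str
        attributes: dict = field(default_factory=dict)

    @dataclass
    class Act:
        # A primitive conceptual act, e.g. "ATRANS", "PTRANS", "MTRANS".
        primitive: str
        actor: PhysObj
        obj: PhysObj
        source: Optional[PhysObj] = None
        recipient: Optional[PhysObj] = None

    john, mary, book = PhysObj("John"), PhysObj("Mary"), PhysObj("book")

    # "John gave a book to Mary": a transfer of possession, so ATRANS.
    gave = Act("ATRANS", actor=john, obj=book, source=john, recipient=mary)

    # "Mary took the book from John" is the same transfer (the book moves
    # from John to Mary); only the actor differs, illustrating CD's goal
    # of word-independent meaning.
    took = Act("ATRANS", actor=mary, obj=book, source=john, recipient=mary)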

    {End Chapter 1}

    Chapter 2: Knowledge representation and reasoning

    Knowledge representation and reasoning (sometimes abbreviated as KRR, KR&R, or KR²) is the field of artificial intelligence (AI) dedicated to representing information about the world in a form that a computer system can use to solve complex tasks, such as diagnosing a medical condition or holding a dialog in a natural language.

    Knowledge representation incorporates findings from psychology about how humans solve problems and represent knowledge, in order to design formalisms that make complex systems easier to design and build.

    It also incorporates findings from logic to automate various kinds of reasoning, such as the application of rules or the relations between sets and subsets.

    Examples of knowledge representation formalisms include semantic nets, systems architecture, frames, rules, and ontologies. Examples of automated reasoning engines include inference engines, theorem provers, and classifiers.

    The earliest work in computerized knowledge representation focused on general problem solvers such as the General Problem Solver (GPS) system, developed by Allen Newell and Herbert A. Simon in 1959; other early work focused on specific problem domains. These systems included data structures for planning and decomposition: the system would begin with a goal, break that goal down into a series of sub-goals, and then devise strategies that could achieve each sub-goal.
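
    As a rough illustration of this loop, here is a hypothetical Python sketch in the spirit of GPS-style goal decomposition; the domain, function name, and data structures are invented for illustration, not Newell and Simon's actual implementation:

    def achieve(goal, decompositions, primitive_actions, plan):
        # Recursively reduce a goal to sub-goals until each one can be
        # satisfied by a primitive action appended to the plan.
        if goal in primitive_actions:
            plan.append(primitive_actions[goal])
            return True
        subgoals = decompositions.get(goal)
        if subgoals is None:
            return False  # no known way to reduce this goal
        return all(achieve(g, decompositions, primitive_actions, plan)
                   for g in subgoals)

    # Toy domain: making tea.
    decompositions = {"have tea": ["have hot water", "have tea bag in cup"]}
    primitive_actions = {"have hot water": "boil kettle",
                         "have tea bag in cup": "place tea bag"}

    plan = []
    achieve("have tea", decompositions, primitive_actions, plan)
    print(plan)  # ['boil kettle', 'place tea bag']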

    Broad search algorithms such as A* were also developed during this early period of artificial intelligence. However, because of the amorphous nature of the problems to be solved, systems like GPS could only be made to work well in very narrow toy domains (e.g., the blocks world). Artificial intelligence researchers such as Ed Feigenbaum and Frederick Hayes-Roth came to realize that, in order to solve non-toy problems, it was necessary to focus systems on more tightly constrained problems.

    These efforts led to the cognitive revolution in psychology and to the phase of artificial intelligence focused on knowledge representation, which produced expert systems, production systems, frame languages, and other innovations in the 1970s and 1980s. Rather than broad problem solvers, the primary emphasis of AI research and development became expert systems that could match human expertise on a specialized task, such as medical diagnosis.

    In the mid-1980s, a number of researchers independently explored frame-based languages alongside expert systems. A frame is analogous to an object class: an abstract description of a category that describes things in the world, problems, and potential solutions. Frames were first used in systems designed to interact with humans, for example systems that needed to understand natural language or social settings in which various default expectations, such as ordering food in a restaurant, narrowed the search space and allowed the system to choose appropriate responses to dynamic situations.
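
    Here is a minimal Python sketch of the frame idea, using the restaurant scenario above; the Frame class, slot names, and default values are assumptions for illustration rather than features of any historical frame language:

    class Frame:
        # An abstract description of a category, with named slots whose
        # default values can be inherited from a parent frame.
        def __init__(self, name, parent=None, **slots):
            self.name = name
            self.parent = parent
            self.slots = slots

        def get(self, slot):
            if slot in self.slots:
                return self.slots[slot]
            if self.parent is not None:
                return self.parent.get(slot)  # inherit the default
            raise KeyError(slot)

    restaurant_visit = Frame("restaurant-visit",
                             payment="money", server="waiter")
    fast_food_visit = Frame("fast-food-visit", parent=restaurant_visit,
                            server="counter clerk")  # overrides the default

    print(fast_food_visit.get("server"))   # counter clerk (local slot)
    print(fast_food_visit.get("payment"))  # money (inherited default)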

    It did not take long before the rule-based and frame-based research communities discovered the connection between the two approaches. Frames were good for representing the real world, described as classes, subclasses, and slots (data values) with various constraints on possible values. Rules were good for representing and applying complex logic, such as the process of making a medical diagnosis. Integrated systems that combined frames and rules were soon developed. One of the most powerful and well known was IntelliCorp's Knowledge Engineering Environment (KEE), first introduced in 1983. KEE had a complete rule engine with forward and backward chaining, along with a complete frame-based knowledge base with triggers, slots (data values), inheritance, and message passing. Although message passing originated in the object-oriented programming community rather than in AI research, it was quickly embraced by AI researchers as well, in environments such as KEE and in the operating systems for Lisp machines from Symbolics, Xerox, and Texas Instruments.
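
    The rule-engine side of such an integrated system can be illustrated with a toy forward-chaining loop in Python; the rule format and the diagnosis facts below are invented for illustration and do not reflect KEE's actual interface:

    # Working memory: facts represented as (subject, property) pairs.
    facts = {("patient", "fever"), ("patient", "rash")}

    rules = [
        # (rule name, antecedent facts, consequent fact)
        ("r1", {("patient", "fever"), ("patient", "rash")},
               ("patient", "suspect-measles")),
        ("r2", {("patient", "suspect-measles")},
               ("action", "order-serology")),
    ]

    # Forward chaining: fire any rule whose antecedents all hold,
    # repeating until no rule adds a new fact.
    changed = True
    while changed:
        changed = False
        for name, antecedents, consequent in rules:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)
                changed = True

    print(("action", "order-serology") in facts)  # True

    Backward chaining would run the same rules in the opposite direction, starting from a goal fact and searching for antecedents that support it.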

    Any mechanically embodied intelligent process will be composed of structural ingredients that a) we as external observers naturally take to represent a propositional account of the knowledge that the overall process exhibits, and b) regardless of such external semantic attribution, play a formal but causal and essential role in engendering the behavior that manifests that knowledge. This is Brian C. Smith's Knowledge Representation Hypothesis.
