Artificial Intelligence Commonsense Knowledge: Fundamentals and Applications
Ebook, 185 pages, 2 hours

About this ebook

What Is Artificial Intelligence Commonsense Knowledge?


In the field of artificial intelligence research, "commonsense knowledge" refers to facts about the everyday world, such as "Lemons are sour" or "Cows say moo," that every human being is assumed to know. It remains an unsolved challenge in artificial general intelligence. Advice Taker, created by John McCarthy in 1959, was the first artificial intelligence program to address commonsense knowledge.


How You Will Benefit


(I) Insights and validations about the following topics:


Chapter 1: Commonsense knowledge (artificial intelligence)


Chapter 2: Artificial intelligence


Chapter 3: Knowledge representation and reasoning


Chapter 4: Symbolic artificial intelligence


Chapter 5: Commonsense reasoning


Chapter 6: Logic in computer science


Chapter 7: Computational intelligence


Chapter 8: Frame (artificial intelligence)


Chapter 9: Explainable artificial intelligence


Chapter 10: Glossary of artificial intelligence


(II) Answers to the public's top questions about artificial intelligence commonsense knowledge.


(III) Real-world examples of the use of artificial intelligence commonsense knowledge in many fields.


(IV) 17 appendices that briefly explain 266 emerging technologies in each industry, providing a 360-degree understanding of the technologies related to artificial intelligence commonsense knowledge.


Who This Book Is For


Professionals, undergraduate and graduate students, enthusiasts, hobbyists, and anyone who wants to go beyond basic knowledge of artificial intelligence commonsense knowledge.

Language: English
Release date: Jun 29, 2023


    Book preview

    Artificial Intelligence Commonsense Knowledge - Fouad Sabry

    Chapter 1: Commonsense knowledge (artificial intelligence)

    In the field of artificial intelligence, commonsense knowledge refers to information about the everyday world that humans take for granted. It is one of the open problems in artificial general intelligence (AGI). Advice Taker, created by John McCarthy in 1959, was the first artificial intelligence program to deal with commonsense knowledge. In the absence of complete information, common sense is a useful fallback: just as people do when faced with the unknown, AI systems often make default assumptions based on commonly held beliefs about everyday objects. In an AI system or in plain English, this is written as "Normally P holds," "Usually P," or "Typically P, so assume P." If we know that Tweety is a bird, and that birds typically fly, we may reasonably assume that Tweety can fly even if we know nothing else about Tweety. With a truth maintenance process, the system can revise this conclusion as new information is discovered or learned: since penguins cannot fly, learning that Tweety is a penguin retracts the original assumption.
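
    A minimal Python sketch of this kind of default reasoning appears below. The KnowledgeBase class, its method names, and the stored facts are all invented for illustration; real systems use far more elaborate truth maintenance machinery.

```python
# A minimal sketch of default reasoning with simple assumption tracking,
# in the spirit of the Tweety example above. All names here (KnowledgeBase,
# add_fact, add_default, holds) are illustrative, not from any real library.

class KnowledgeBase:
    def __init__(self):
        self.facts = set()        # e.g. ("bird", "tweety")
        self.defaults = []        # (premise, conclusion, exceptions)

    def add_fact(self, predicate, subject):
        self.facts.add((predicate, subject))

    def add_default(self, premise, conclusion, exceptions=()):
        # "Normally, things that are <premise> are <conclusion>,
        #  unless they are also one of <exceptions>."
        self.defaults.append((premise, conclusion, tuple(exceptions)))

    def holds(self, predicate, subject):
        # Return (belief, justification) so the assumption log doubles
        # as a simple explanation facility.
        if (predicate, subject) in self.facts:
            return True, "stated fact"
        for premise, conclusion, exceptions in self.defaults:
            if conclusion == predicate and (premise, subject) in self.facts:
                blocked = [e for e in exceptions if (e, subject) in self.facts]
                if blocked:
                    return False, f"default blocked by exception: {blocked[0]}"
                return True, f"assumed by default: {premise} normally implies {conclusion}"
        return False, "no support"

kb = KnowledgeBase()
kb.add_default("bird", "can_fly", exceptions=["penguin"])
kb.add_fact("bird", "tweety")
print(kb.holds("can_fly", "tweety"))   # (True, 'assumed by default: ...')

kb.add_fact("penguin", "tweety")       # new information arrives
print(kb.holds("can_fly", "tweety"))   # (False, 'default blocked by exception: penguin')
```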

    Commonsense reasoning lets computers make inferences about everyday situations much as humans do, and revise those inferences when new information becomes available. Reasoning about time, about gaps in knowledge, and about the connection between causes and effects all fall within its scope. The ability to articulate cause and effect is an essential feature of explainable AI, and because truth maintenance algorithms keep detailed logs of their assumptions, they provide a built-in explanation facility. On contemporary commonsense reasoning benchmarks such as the Winograd Schema Challenge, all existing programs that attempt human-level AI perform extremely poorly compared to humans.

    Building exhaustive knowledge bases of commonsense assertions (CSKBs) has been a long-standing problem in artificial intelligence. Early expert-driven efforts such as Cyc and WordNet were substantially extended by the crowdsourced Open Mind Common Sense project, which in turn led to the crowdsourced ConceptNet knowledge base. Later approaches have tried to automate CSKB construction, notably through text mining (WebChild, Quasimodo, TransOMCS, Ascent) and by harvesting assertions directly from pre-trained language models (AutoTOMIC). Because they are built automatically, these resources are generally of lower quality than ConceptNet, but they are much larger. The representation of commonsense knowledge is also an open question: the triple store, the common data model for CSKB projects, may not be well suited to expressing more nuanced natural-language assertions. GenericsKB stands out because it does not further normalize sentences and instead keeps them in their entirety.
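
    The contrast between triple-based assertions and GenericsKB-style full sentences can be illustrated with a short Python sketch. The example data, field names, and lookup helper below are invented for illustration and are not drawn from any actual CSKB release.

```python
# Illustrative sketch of the (subject, relation, object) triple model used by
# most CSKBs, contrasted with keeping full sentences as GenericsKB does.
# All data and helper names below are made up for the example.

triples = [
    ("lemon", "HasProperty", "sour"),
    ("cow", "MakesSound", "moo"),
    ("cake", "ReceivesAction", "eat"),
]

# A nuanced, qualified assertion is hard to squeeze into a single triple ...
nuanced = "Most adult birds can fly short distances unless they are injured."

# ... so a GenericsKB-style entry keeps the sentence intact, with light metadata.
generics_entry = {"sentence": nuanced, "subject": "bird", "score": 0.9}

def lookup(subject, relation):
    """Return every object asserted for (subject, relation)."""
    return [o for s, r, o in triples if s == subject and r == relation]

print(lookup("lemon", "HasProperty"))   # ['sour']
print(generics_entry["sentence"])
```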

    BullySpace, an extension of the commonsense knowledge base ConceptNet, was developed by MIT researchers around 2013 to monitor social media for bullying comments. BullySpace included more than 200 stereotype-based semantic assertions that help the system determine, for example, that a comment like "Put on a wig and lipstick and be who you really are" is more likely to be an insult when directed at a boy than at a girl.

    In 2012, for instance, ConceptNet incorporated the following 21 linguistically neutral relations (a short sketch of how such assertions might be represented in code follows the list):

    IsA (An RV is a vehicle)

    UsedFor

    HasA (A rabbit has a tail)

    CapableOf

    Desires

    CreatedBy (cake can be created by baking)

    PartOf

    Causes

    LocatedNear

    AtLocation (a cook can be at a restaurant)

    DefinedAs

    SymbolOf (X represents Y)

    ReceivesAction (cake can be eaten)

    HasPrerequisite (X cannot carry out Y unless step A is taken first)

    MotivatedByGoal (you might bake in order to satisfy your appetite)

    CausesDesire (baking makes you want to follow a recipe)

    MadeOf

    HasFirstSubevent (before proceeding with action X, entity Y must perform action Z)

    HasSubevent (eat has subevent swallow)

    HasLastSubevent
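
    The sketch below shows, in Python, one hypothetical way assertions using these relations could be stored and queried as (subject, relation, object) edges. The example edges and helper function are invented for illustration and are not taken from the real ConceptNet data or API.

```python
# A small sketch of ConceptNet-style edges using a few of the relations listed
# above. The assertions and the query helper are illustrative only.

from collections import defaultdict

edges = [
    ("rv", "IsA", "vehicle"),
    ("rabbit", "HasA", "tail"),
    ("cake", "CreatedBy", "baking"),
    ("cake", "ReceivesAction", "eat"),
    ("cook", "AtLocation", "restaurant"),
    ("eat", "HasSubevent", "swallow"),
]

# Index edges by (subject, relation) for quick lookups.
index = defaultdict(list)
for subject, relation, obj in edges:
    index[(subject, relation)].append(obj)

def related(subject, relation):
    """Return all objects linked to subject via the given relation."""
    return index.get((subject, relation), [])

print(related("cake", "ReceivesAction"))   # ['eat']
print(related("cook", "AtLocation"))       # ['restaurant']
```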

    Sources of commonsense knowledge include:

    Cyc

    Open Mind Common Sense and ConceptNet (datastore and NLP engine)

    Quasimodo

    Webchild

    TupleKB

    True Knowledge

    Graphiq

    Ascent++

    {End Chapter 1}

    Chapter 2: Artificial intelligence

    In contrast to the natural intelligence exhibited by animals, including humans, artificial intelligence (AI) is intelligence demonstrated by machines. AI research has been described as the study of intelligent agents: systems that perceive their environment and take actions that maximize their chances of achieving their goals. The term "AI effect" describes the phenomenon whereby tasks once thought to require intelligence are dropped from the definition of artificial intelligence as technology advances. To tackle its problems, AI research has adapted and integrated a broad range of techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods drawn from statistics, probability, and economics. Computer science, psychology, linguistics, philosophy, and many other academic disciplines all contribute to the field.

    The theory that human intellect can be so accurately characterized that a computer may be constructed to imitate it was the guiding principle behind the establishment of this discipline. This sparked philosophical debates concerning the mind and the ethical implications of imbuing artificial organisms with intellect comparable to that of humans; these are topics that have been investigated by myth, literature, and philosophy ever since antiquity.

    Artificial beings endowed with intelligence appeared as narrative devices in antiquity and are often seen in works of literature, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R.

    The formal design for Turing-complete artificial neurons that McCulloch and Pitts developed in 1943 was the first work now widely recognized as artificial intelligence.

    Attendees of the 1956 Dartmouth workshop, where the field was founded, went on to become the pioneers of AI research. They and their students built programs that the press described as "astonishing": computers were learning checkers strategies, solving word problems in algebra, proving logical theorems, and speaking English.

    By the middle of the 1960s, research in the United States was receiving significant funding from the Department of Defense, and laboratories were being set up around the world. However, in response to criticism and to continuing pressure from the United States Congress to fund more productive projects, both the United States and British governments cut off exploratory research in artificial intelligence. The years that followed would later be called an "AI winter," a period in which it was difficult to obtain funding for artificial intelligence projects.

    In the early 1980s, the field was revived by the commercial success of expert systems, a kind of artificial intelligence software that mimicked the knowledge and analytical skills of human experts. By 1985, the market for artificial intelligence had grown to more than a billion dollars. At the same time, Japan's fifth-generation computer project prompted the United States and the United Kingdom to restore funding for university research. However, when the market for Lisp machines collapsed in 1987, AI once again fell into disrepute, and a second, longer-lasting winter began.

    Geoffrey Hinton, David Rumelhart, and others are credited with reviving interest in neural networks and connectionism in the mid-1980s. During the 1980s, many soft computing tools were developed, including neural networks, fuzzy systems, grey system theory, evolutionary computation, and a number of methods derived from statistical or mathematical optimization.

    Through the late 1990s and into the early 21st century, AI gradually rehabilitated its reputation by developing solutions tailored to specific problems. This narrow focus allowed researchers to produce verifiable results, apply more mathematical methods, and collaborate with experts from other fields such as statistics, economics, and mathematics. In the 1990s the solutions produced by AI researchers were rarely labeled "artificial intelligence," but by 2000 they were being used extensively around the world. According to Jack Clark of Bloomberg, 2015 was a watershed year for artificial intelligence: the number of software projects using AI within Google grew from sporadic use in 2012 to more than 2,700 projects in 2015.

    The overarching challenge of simulating (or creating) intelligence has been broken down into a number of more specific sub-problems: particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention.

    Early researchers devised algorithms that mirrored the step-by-step reasoning people use when they solve problems or make logical inferences. By the late 1980s and early 1990s, AI research had developed methods for dealing with uncertain or incomplete information, drawing on concepts from probability and economics. Even among humans, however, the kind of step-by-step deduction that early AI could replicate is uncommon; people solve most of their problems with fast, intuitive judgments.

    Knowledge engineering and knowledge representation are what enable artificial intelligence systems to answer questions intelligently and draw conclusions about real-world facts.

    An ontology is a formal description of what exists: a collection of objects, relations, concepts, and properties characterized precisely enough for software agents to interpret them. The most general ontologies, called upper ontologies, attempt to provide a foundation for all other knowledge and act as mediators between domain ontologies, which cover specialized knowledge about a particular domain (a field of interest or area of concern). A genuinely intelligent program would also need access to commonsense knowledge, the collection of facts that the typical person knows. The semantics of an ontology are usually expressed in a description logic such as the Web Ontology Language. Among the things such systems need to represent are situations, events, states, and time; causes and effects; knowledge about knowledge (what we know about what other people know); and default reasoning (things that humans assume are true until they are told otherwise and that remain true even as other facts change). Two of the most difficult problems in artificial intelligence are the breadth of commonsense knowledge (the number of atomic facts the typical person knows is immense) and its sub-symbolic nature (much of what people know is not represented as facts or statements they could express verbally). Fields that stand to benefit include image interpretation, clinical decision support, and knowledge discovery (the extraction of interesting and actionable insights from large datasets).
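
    As a rough illustration of the ideas above, the following Python sketch models a toy ontology as a subclass hierarchy plus instance assertions and answers a simple membership query. The class names and relations are invented for the example; a production system would more likely use OWL together with a description-logic reasoner.

```python
# A toy sketch of an ontology as a class hierarchy plus instance assertions,
# illustrating the "collection of objects, relations, and properties" above.
# Class names and relations are invented for the example.

subclass_of = {
    "Dog": "Mammal",
    "Mammal": "Animal",
    "Animal": "Thing",        # "Thing" plays the role of an upper-ontology root
}

instance_of = {"fido": "Dog"}

def is_a(instance, cls):
    """True if the instance belongs to cls directly or via subclass links."""
    current = instance_of.get(instance)
    while current is not None:
        if current == cls:
            return True
        current = subclass_of.get(current)
    return False

print(is_a("fido", "Animal"))   # True, inferred through Dog -> Mammal -> Animal
print(is_a("fido", "Plant"))    # False
```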

    An intelligent agent that is capable of planning builds a representation of the current state of the world, predicts how its actions will change that state, and makes choices that maximize the utility (or value) of the available options. In classical planning problems, the agent may assume that it is the only system acting in the world, which lets it be certain of the consequences of its actions. If the agent is not the sole actor, however, it must reason under uncertainty, continually reassess its environment, and adapt to new circumstances.
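
    The following Python sketch illustrates the utility-maximizing choice described above on an invented two-action example; the actions, outcome probabilities, and utility values are assumptions made purely for illustration.

```python
# A minimal sketch of expected-utility action selection, as described above.
# The actions, outcome probabilities, and utilities are invented.

actions = {
    # action: list of (probability, utility) outcome pairs
    "take_highway": [(0.8, 10.0), (0.2, -5.0)],   # usually fast, sometimes jammed
    "take_backroad": [(1.0, 6.0)],                # reliable but slower
}

def expected_utility(outcomes):
    """Probability-weighted average utility of an action's outcomes."""
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))   # take_highway 7.0
```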

    Machine learning (ML), the study of computer systems that improve automatically through experience, has been an essential part of AI research since the field began. In reinforcement learning, the agent is rewarded for good responses and punished for bad ones; the agent classifies its responses to form a strategy for operating in its problem space.
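
    As a hedged illustration of that reward-and-punishment loop, the short Python sketch below runs tabular Q-learning on an invented five-state corridor task; the environment, hyperparameters, and reward values are assumptions chosen only to keep the example self-contained.

```python
# A minimal tabular Q-learning sketch illustrating the reward/punishment loop
# described above. The toy environment (a short corridor with a goal at the
# right end) and all hyperparameters are invented for illustration.

import random

random.seed(0)
N_STATES, GOAL = 5, 4                 # states 0..4, reward only at state 4
ACTIONS = [-1, +1]                    # move left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(200):
    state = 0
    while state != GOAL:
        # Epsilon-greedy choice: mostly exploit current estimates, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else -0.01   # reward the goal, penalize dawdling
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# The learned policy should prefer moving right (+1) in every non-goal state.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])
```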

    The term natural language processing (NLP) refers to a technique that enables computers to read and comprehend human discourse. A natural language processing system that is sophisticated enough would make it possible to create user interfaces that employ natural language and would also make it possible to acquire information directly from human-written sources, such as
