Narrow Artificial Intelligence: Fundamentals and Applications
Ebook · 239 pages · 3 hours


About this ebook

What Is Narrow Artificial Intelligence


The term "weak artificial intelligence" refers to AI that implements only a limited part of the mind or, alternatively, AI that is focused on a single narrow task. According to John Searle, it "would be useful for testing hypotheses about minds, but would not actually be minds". Weak artificial intelligence attempts to replicate how humans carry out simple tasks such as memorizing information, sensing the surroundings, and solving straightforward problems. Strong artificial intelligence, on the other hand, uses technology to think and learn on its own: by drawing on algorithms and prior knowledge, computers can develop their own ways of thinking in a manner similar to that of humans, and the most advanced systems are now learning to operate without the assistance of the humans who first developed them. Weak artificial intelligence cannot think for itself; all it can do is mimic the behaviors it is able to observe and learn from.


How You Will Benefit


(I) Insights and validations about the following topics:


Chapter 1: Weak artificial intelligence


Chapter 2: Artificial intelligence


Chapter 3: Chatbot


Chapter 4: Machine learning


Chapter 5: Intelligent agent


Chapter 6: History of artificial intelligence


Chapter 7: Applications of artificial intelligence


Chapter 8: Turing test


Chapter 9: Glossary of artificial intelligence


Chapter 10: Explainable artificial intelligence


(II) Answers to the public's top questions about narrow artificial intelligence.


(III) Real-world examples of the use of narrow artificial intelligence in many fields.


(IV) 17 appendices explaining, briefly, 266 emerging technologies in each industry, for a full 360-degree understanding of the technologies related to narrow artificial intelligence.


Who This Book Is For


Professionals, undergraduate and graduate students, enthusiasts, hobbyists, and anyone who wants to go beyond basic knowledge of narrow artificial intelligence.

Language: English
Release date: Jul 3, 2023


    Book preview

    Narrow Artificial Intelligence - Fouad Sabry

    Chapter 1: Weak artificial intelligence

    Artificial intelligence that implements only a limited part of the mind is referred to as narrow AI. It stands in contrast to strong AI, which can be described in a variety of ways, including:

    A machine with the ability to apply intelligence to any problem, rather than just one particular problem, is referred to as having artificial general intelligence (AGI).

    Intelligence comparable to that of a typical human being is defined as having been achieved by a machine on the same level.

    A machine that possesses an intelligence that is significantly higher than that of the typical human being is referred to as superintelligence.

    A machine that possesses awareness, sentience, and cognition is referred to as having artificial consciousness.

    Scholars such as Antonio Lieto have argued that current research on artificial intelligence (AI) and cognitive modeling is perfectly aligned with the weak-AI hypothesis (this distinction should not be confused with the general vs. narrow AI distinction), and that the popular assumption that cognitively inspired AI systems espouse the strong-AI hypothesis is ill-posed and problematic, since "artificial models of brain and mind can be used to understand mental phenomena without pretending that they are the real phenomena that they are modelling" (as, on the other hand, is implied by the strong-AI assumption).

    AI can be classified as being "… limited to a single task with very specific parameters. Most modern AI systems would be classified in this category."

    Weak AI is sometimes called narrow AI; however, the latter is typically understood as a subfield within the former.

    Narrow AI typically does not involve testing hypotheses about minds or components of minds; instead, it adopts certain superficially similar imitative features.

    Many currently existing systems that claim to use artificial intelligence are likely operating as narrow AI focused on a specific problem, and are not intelligent in the conventional sense.

    Siri, Cortana, and Google Assistant are all examples of narrow artificial intelligence, but they are not good examples of weak AI, because they only perform a limited set of predetermined functions.

    They do not implement any components of a mind; instead, they use natural language processing in conjunction with previously established rules.

    In particular, they are not examples of strong artificial intelligence, since they possess neither actual intelligence nor self-awareness.

    In 2010, AI researcher Ben Goertzel described Siri on his personal blog as being "VERY limited and brittle", as shown by the irritating answers it produces when asked questions outside the scope of the application.

    There is currently a lack of comprehensive documentation regarding the distinctions between weak and strong artificial intelligence (AI). As discussed in the Terminology section, weak artificial intelligence is typically linked with very simple technologies such as voice-recognition software like Siri or Alexa. Strong artificial intelligence, on the other hand, has not yet been fully deployed or tested, hence it is only really discussed in movies or other forms of popular culture media.

    Some observers believe that weak artificial intelligence could be dangerous because it could fail in unanticipated ways due to its brittleness. Weak artificial intelligence might result in interruptions to the electric grid, damage to nuclear power plants, problems for the global economy, and the wrong direction being taken by autonomous vehicles.

    Self-driving automobiles, robots employed in medicine, and diagnostic systems are a few examples of weak artificial intelligence. All of these AI systems can fail: self-driving cars are capable of causing fatal accidents in a manner comparable to humans; medicines may be improperly categorized and organized before reaching patients; and incorrect medical diagnoses made by an AI can have severe, even fatal, repercussions. Because of such failure patterns, it may be impossible to devise a single consistent approach that succeeds each and every time.

    It's possible that we haven't yet recognized the presence of even the most basic artificial intelligence systems in our culture, but this is because they are so pervasive. To cite just a few examples, we now have autocorrection for people who type, speech recognition for computers that convert speech to text, and massive growth in the fields of data science. Artificial intelligence (AI) may be an effective instrument that may be used to make our lives better; nevertheless, it may also be a perilous technology with the potential for things to spiral out of control.

    Artificial intelligence and machine learning, or more specifically weak AI, have been used by social media platforms like Facebook and others like them to figure out how to predict how people will respond when they are shown specific images. For example, Facebook can tell you how people will react when they see a certain cat video. Weak artificial intelligence systems have been able to determine, based on the content that users publish and the patterns or trends that emerge, what people will identify with.

    {End Chapter 1}

    Chapter 2: Artificial intelligence

    In contrast to the natural intelligence displayed by animals, including humans, artificial intelligence (AI) refers to the intelligence demonstrated by machines. Research in artificial intelligence has been described as the study of intelligent agents: any system that perceives its environment and takes actions that maximize its chance of achieving its goals. The term "AI effect" refers to the process by which tasks that were formerly thought to require intelligence are removed from the definition of artificial intelligence as technology advances. To tackle these problems, AI researchers have adapted and integrated a broad variety of approaches, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, probability, and economics. Computer science, psychology, linguistics, philosophy, and a great many other academic disciplines all contribute to the development of AI.

    The theory that human intellect can be so accurately characterized that a computer may be constructed to imitate it was the guiding principle behind the establishment of this discipline. This sparked philosophical debates concerning the mind and the ethical implications of imbuing artificial organisms with intellect comparable to that of humans; these are topics that have been investigated by myth, literature, and philosophy ever since antiquity.

    In antiquity, intelligent artificial beings appeared as narrative devices, and they are often seen in works of fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R.

    The first work now widely recognized as artificial intelligence was the formal design for Turing-complete artificial neurons that McCulloch and Pitts developed in 1943.

    The field of AI research was founded at a 1956 workshop at Dartmouth College, and the attendees of that conference went on to become pioneers of AI research.

    They, together with their students, built programs that the press described as astonishing: machines that learned checkers strategies, solved word problems in algebra, proved logical theorems, and displayed a good command of the English language.

    By the middle of the 1960s, research in the United States was being heavily funded by the Department of Defense, and laboratories had been established around the world.

    Following criticism of the field's progress and continuing pressure from the United States Congress to fund more productive endeavors, the American and British governments cut off exploratory research in artificial intelligence. The years that followed would later be called an "AI winter": a period in which it was difficult to obtain funding for artificial intelligence projects.

    In the early 1980s, AI research was revived by the commercial success of expert systems, a kind of artificial intelligence software that mimicked the knowledge and analytical prowess of human professionals.

    By 1985, the market for artificial intelligence had grown to over a billion dollars.

    Meanwhile, Japan's fifth-generation computer program prompted the United States and the United Kingdom to reestablish support for university research.

    However, when the market for Lisp machines collapsed in 1987, AI once again fell into disfavor, and a second, longer-lasting winter began.

    Geoffrey Hinton, David Rumelhart, and others are credited with reviving interest in neural networks and the concept of connectionism around the middle of the 1980s. During the 1980s, many soft computing tools were developed, including neural networks, fuzzy systems, grey system theory, evolutionary computation, and a number of methods derived from statistical or mathematical optimization.

    Through the late 1990s and into the early 21st century, AI gradually rehabilitated its image by developing solutions tailored to particular problems. This narrow focus allowed researchers to produce verifiable results, employ a greater number of mathematical approaches, and collaborate with experts from other fields (such as statistics, economics, and mathematics). In the 1990s, the solutions produced by AI researchers were never referred to as artificial intelligence, but by the year 2000 they were being employed extensively all around the world. According to Jack Clark of Bloomberg, 2015 was a watershed year for artificial intelligence: the number of software projects using AI inside Google grew from sporadic use in 2012 to more than 2,700 projects in 2015.

    The overarching challenge of emulating (or fabricating) intelligence has been segmented into a variety of more specific challenges. These are certain characteristics or skills that researchers anticipate an intelligent system to possess. The greatest emphasis has been paid to the characteristics that are detailed below.

    Researchers in the early days of the field devised algorithms that mirrored the step-by-step reasoning people use when they solve problems or make logical inferences. By the late 1980s and early 1990s, artificial intelligence research had established strategies for coping with uncertain or partial information, using notions from probability and economics. Even among humans, however, the kind of step-by-step deduction that early AI research could replicate is uncommon; people address the majority of their problems by making fast judgments based on intuition.
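    The step-by-step logical inference described above can be sketched as forward chaining: repeatedly applying if-then rules to known facts until nothing new can be derived. This is a minimal illustration, not an algorithm from the book; the facts and rules are invented.

```python
def forward_chain(facts, rules):
    """Apply rules (premises -> conclusion) until no new facts emerge."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # A rule fires when all its premises are known facts.
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    ({"rains"}, "ground_wet"),
    ({"ground_wet", "freezing"}, "icy"),
]
derived = forward_chain({"rains", "freezing"}, rules)
# "icy" is derived in two steps: rains -> ground_wet, then ground_wet + freezing -> icy
```

    Note how the conclusion of one rule becomes a premise of the next, which is exactly the chained deduction early symbolic systems performed.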

    Information engineering and the representation of that knowledge are what enable artificial intelligence systems to intelligently respond to inquiries and draw conclusions about real-world events.

    An ontology is a description of what exists: a collection of objects, relations, concepts, and properties formally characterized so that software agents can interpret them. Domain ontologies cover specialized knowledge about a particular knowledge domain (a field of interest or area of concern), while the most general ontologies, called upper ontologies, attempt to provide a foundation for all other knowledge and act as mediators between domain ontologies. The semantics of an ontology are typically expressed in a description logic, such as the Web Ontology Language. A genuinely intelligent program would also require access to commonsense knowledge, the collection of facts that the typical human is aware of. Among the domains knowledge representation covers are objects, situations, events, states, and times; causes and effects; knowledge about knowledge (what we know about what other people know); and default reasoning (things that humans assume are true until they are told differently, and that will remain true even as other facts change). Two of the most challenging problems in artificial intelligence are the breadth of commonsense knowledge (the number of atomic facts that the typical human is aware of is immense) and the sub-symbolic nature of most commonsense knowledge (much of what people know is not represented as facts or statements that they could express verbally). Fields that might benefit from knowledge representation include image interpretation, clinical decision support, and knowledge discovery (the extraction of interesting and actionable insights from big datasets).
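    The core idea of an ontology can be sketched with subject-predicate-object triples, the representation underlying languages such as the Web Ontology Language. This is a toy illustration with invented entities, not a real OWL ontology.

```python
# Knowledge as triples: (subject, predicate, object).
triples = [
    ("Cat", "subclass_of", "Mammal"),
    ("Mammal", "subclass_of", "Animal"),
    ("Felix", "instance_of", "Cat"),
]

def is_a(entity, cls, triples):
    """Decide class membership by following subclass_of links upward."""
    parents = {o for s, p, o in triples
               if s == entity and p in ("instance_of", "subclass_of")}
    return cls in parents or any(is_a(p, cls, triples) for p in parents)

# Felix is a Cat, hence a Mammal, hence an Animal.
felix_is_animal = is_a("Felix", "Animal", triples)
```

    A description-logic reasoner performs this kind of inference (and far richer ones) over formally defined classes and relations.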

    An intelligent agent that is capable of planning creates a representation of the current state of the world, predicts how its actions will affect the environment, and makes decisions that maximize the utility (or value) of the available options. In classical planning problems, the agent may assume that it is the only system acting in the world, which allows it to be certain of the consequences of its actions. If the agent is not the sole actor, however, it must reason under uncertainty, continually reevaluate its surroundings, and adapt to new circumstances.
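    Under the classical assumption above (the agent is the only actor, so every action has a deterministic effect), planning reduces to searching for a path from the initial state to the goal. A minimal sketch using breadth-first search, with invented states and actions:

```python
from collections import deque

def plan(start, goal, actions):
    """Breadth-first search over states.
    actions: dict mapping state -> list of (action_name, next_state)."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path                     # shortest action sequence
        for name, nxt in actions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [name]))
    return None                             # goal unreachable

actions = {
    "at_home": [("walk_to_station", "at_station")],
    "at_station": [("take_train", "at_work"), ("walk_back", "at_home")],
}
route = plan("at_home", "at_work", actions)
# route == ["walk_to_station", "take_train"]
```

    When the world is not deterministic, this simple search no longer suffices and the agent must replan as observations arrive.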

    The study of computer programs that improve themselves automatically through the accumulation of experience is referred to as machine learning (ML), and it has been an essential part of AI research since the start of the field. In reinforcement learning, the agent is rewarded for appropriate responses and punished for inappropriate ones. The agent organizes its responses into categories in order to formulate a strategy for navigating its problem space.
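    The reward-and-punishment loop described above can be sketched with tabular Q-learning, one standard reinforcement-learning algorithm (not one named in the book). The environment is an invented five-cell corridor with a reward at the far end; the agent gradually learns that moving right is the better policy.

```python
import random

random.seed(0)
n_states = 5                       # corridor cells 0..4; reward on reaching cell 4
q = {(s, a): 0.0 for s in range(n_states) for a in (-1, +1)}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for _ in range(500):               # training episodes
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            a = random.choice((-1, +1))
        else:
            a = max((-1, +1), key=lambda a: q[(s, a)])
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Q-update: nudge the estimate toward reward + discounted future value.
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in (-1, +1)) - q[(s, a)])
        s = s2

# The learned policy: in every non-terminal cell, moving right (+1) should win.
policy = {s: max((-1, +1), key=lambda a: q[(s, a)]) for s in range(n_states - 1)}
```

    The agent never sees the rule "go right"; it is inferred purely from which actions led to reward, which is the essence of the learning method described above.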

    Natural language processing (NLP) enables computers to read and comprehend human language. A sufficiently powerful natural language processing system would make it possible to create natural-language user interfaces and to acquire knowledge directly from human-written sources, such as newswire texts. Straightforward applications of NLP include information retrieval, question answering, and machine translation.
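    Information retrieval, the first of the applications just listed, can be sketched in its simplest form: rank documents by how many words they share with a query. Real systems use far more sophisticated weighting; the documents here are invented.

```python
def tokenize(text):
    """Lowercase and split on whitespace -- the crudest possible tokenizer."""
    return text.lower().split()

def best_match(query, docs):
    """Return the document sharing the most words with the query."""
    q = set(tokenize(query))
    return max(docs, key=lambda d: len(q & set(tokenize(d))))

docs = [
    "the turing test measures machine intelligence",
    "chatbots answer questions in natural language",
    "machine learning improves with experience",
]
hit = best_match("how do chatbots use natural language", docs)
# hit == "chatbots answer questions in natural language"
```

    Even this crude word-overlap measure captures the core of retrieval: score every document against the query and return the best one.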

    Formal syntax was used by symbolic AI in order to convert the underlying structure of phrases into logical form. Due to the intractable nature of logic, this did not result in the production of usable applications.

    Machine perception refers to the capacity to draw inferences about characteristics of the external environment based on data collected by sensors (including cameras, microphones, wireless signals, and active lidar, sonar, radar, and tactile sensors). Applications include speech recognition and computer vision, the capability to analyze visual information.

    Robotics makes extensive use of AI nowadays.

    The act of reducing a movement job to its basic components, such as individual joint motions, is referred to as motion planning. Compliant motion is a kind of movement that includes moving while retaining physical touch with an object. This type of movement happens rather often. Robots have the ability to gain knowledge via experience and figure out how to operate effectively in spite of friction and gear slipping.
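    The reduction of a movement task to individual joint motions can be illustrated with the simplest possible joint-space plan: linearly interpolating each joint from a start to a goal configuration. The two-joint arm and its angles are invented for illustration; real motion planners must also respect obstacles, joint limits, and dynamics.

```python
def joint_path(start, goal, steps):
    """Return evenly spaced joint configurations from start to goal.

    start, goal: tuples of joint angles in degrees, one entry per joint.
    """
    return [
        tuple(s + (g - s) * t / steps for s, g in zip(start, goal))
        for t in range(steps + 1)
    ]

# A two-joint arm: shoulder sweeps 0 -> 45 degrees while elbow folds 90 -> 0.
path = joint_path(start=(0.0, 90.0), goal=(45.0, 0.0), steps=3)
# path[0] == (0.0, 90.0); path[-1] == (45.0, 0.0); intermediate steps evenly spaced
```

    Each tuple in the returned list is one commanded pose, so the whole-arm movement has been decomposed into per-joint increments, which is what motion planning at its most basic level does.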

    Affective computing is an umbrella term for a number of fields of study involving systems that identify, interpret, replicate, or attempt to comprehend human feeling, emotion, and mood.

    For example, some virtual assistants have been trained to talk in a conversational manner, and some can even banter in a comic manner. This gives the impression that they are attuned to the psychological underpinnings of human interaction, or it otherwise makes interaction between humans and computers easier. However, it tends to give naïve users an unrealistic conception of how intelligent existing computer agents actually are.

    Somewhat successful applications of affective computing include textual sentiment analysis and, more recently, multimodal sentiment analysis, in which artificial intelligence classifies the affects expressed by a filmed subject.
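    Textual sentiment analysis in its most basic form can be sketched as lexicon matching: count positive and negative words and compare. The word lists here are invented and far smaller than any real sentiment lexicon; modern systems instead learn such judgments from data.

```python
POSITIVE = {"good", "great", "love", "excellent"}
NEGATIVE = {"bad", "awful", "hate", "terrible"}

def sentiment(text):
    """Classify text by the balance of positive vs. negative words."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

label = sentiment("I love this great book")
# label == "positive"
```

    The approach fails on negation ("not good") and sarcasm, which is one reason sentiment analysis counts only as "somewhat successful".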

    A computer program with general intelligence is capable of solving a broad range of problems with a breadth and adaptability comparable to human intellect. There are several schools of thought on how to create artificial general intelligence. Hans Moravec and Marvin Minsky argue that work in different individual domains can be integrated into an advanced multi-agent system or cognitive architecture with general intelligence. Others are of the opinion that anthropomorphic aspects, such as a computer-generated brain or
