
Artificial Intelligence: Ultimate Handbook
Ebook, 302 pages, 3 hours


About this ebook

Artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals. Leading AI textbooks define the field as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. Colloquially, the term "artificial intelligence" is often used to describe machines (or computers) that mimic "cognitive" functions that humans associate with the human mind, such as "learning" and "problem solving".

As machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect. A quip in Tesler's Theorem says "AI is whatever hasn't been done yet." For instance, optical character recognition is frequently excluded from things considered to be AI, having become a routine technology. Modern machine capabilities generally classified as AI include successfully understanding human speech, competing at the highest level in strategic game systems (such as chess and Go), autonomously operating cars, intelligent routing in content delivery networks, and military simulations.

Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success and renewed funding. For most of its history, AI research has been divided into sub-fields that often fail to communicate with each other. These sub-fields are based on technical considerations, such as particular goals (e.g. "robotics" or "machine learning"), the use of particular tools ("logic" or artificial neural networks), or deep philosophical differences. Sub-fields have also been based on social factors (particular institutions or the work of particular researchers).
Language: English
Publisher: Lulu.com
Release date: Aug 26, 2020
ISBN: 9781716633010
Author

Li Wei

Dr. Li received his Ph.D. degree from the School of Information Technologies at The University of Sydney in 2012. He is currently a research fellow in the Centre for Distributed and High Performance Computing and the School of Information Technologies at The University of Sydney. His research is supported by the Early Career Researcher (ECR) funding scheme and the Clean Energy and Intelligent Networks cluster funding scheme at The University of Sydney. His research interests include the Internet of Things, wireless sensor networks, task scheduling, resource management, optimization and nature-inspired algorithms. He is a member of the IEEE and the ACM.


    Book preview

    Artificial Intelligence - Li Wei

    Artificial Intelligence

    Ultimate Handbook

    Author: Wikipedians

    Date: April 12, 2020

    Version: 1.0

    Editor: Wei Li, liwei@ict.ac.cn

    Two heads are better than one. -- Chinese proverb

    Copyright

    This work is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/3.0/ or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.


    Preface

    In computer science, artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and animals. Leading AI textbooks define the field as the study of intelligent agents: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. Colloquially, the term artificial intelligence is often used to describe machines (or computers) that mimic cognitive functions that humans associate with the human mind, such as learning and problem solving.
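
    The textbook definition above lends itself to a small illustration. The sketch below is a toy example with invented names (GridEnvironment, simple_reflex_agent) rather than anything from a particular textbook: an agent repeatedly perceives its one-dimensional world and picks whichever action moves it closer to its goal.

        # A minimal sketch of the "intelligent agent" idea: perceive the
        # environment, then act so as to move closer to the goal.
        # The environment, goal and agent below are illustrative assumptions only.

        class GridEnvironment:
            """A toy 1-D world: the agent stands at a position and wants to reach a goal."""

            def __init__(self, start=0, goal=5):
                self.position = start
                self.goal = goal

            def percept(self):
                # What the agent can observe about its environment.
                return self.position, self.goal

            def apply(self, action):
                # Actions are steps of -1 or +1 along the line.
                self.position += action

        def simple_reflex_agent(percept):
            """Choose the action that reduces the distance to the goal."""
            position, goal = percept
            return 1 if goal > position else -1

        env = GridEnvironment()
        for _ in range(10):
            if env.position == env.goal:
                break
            env.apply(simple_reflex_agent(env.percept()))
        print("reached goal:", env.position == env.goal)

    Real agents face partially observable and uncertain environments, but the perceive-then-act loop above is the same basic skeleton.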

    As machines become increasingly capable, tasks considered to require intelligence are often removed from the definition of AI, a phenomenon known as the AI effect. A quip in Tesler's Theorem says AI is whatever hasn't been done yet. For instance, optical character recognition is frequently excluded from things considered to be AI, having become a routine technology. Modern machine capabilities generally classified as AI include successfully understanding human speech, competing at the highest level in strategic game systems (such as chess and Go), autonomously operating cars, intelligent routing in content delivery networks, and military simulations.

    Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an AI winter), followed by new approaches, success and renewed funding. For most of its history, AI research has been divided into sub-fields that often fail to communicate with each other. These sub-fields are based on technical considerations, such as particular goals (e.g. robotics or machine learning), the use of particular tools (logic or artificial neural networks), or deep philosophical differences. Sub-fields have also been based on social factors (particular institutions or the work of particular researchers).

    The traditional problems (or goals) of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception and the ability to move and manipulate objects. General intelligence is among the field's long-term goals. Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, artificial neural networks, and methods based on statistics, probability and economics. The AI field draws upon computer science, information engineering, mathematics, psychology, linguistics, philosophy, and many other fields.
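
    As a rough illustration of one of the tools listed above, the following sketch runs plain breadth-first search over a small, made-up state graph. The graph, state names and goal are invented for this example; practical AI systems typically search far larger spaces, often with heuristic methods such as A* rather than uninformed search.

        # Uninformed (breadth-first) search over a toy state graph.
        # The graph below is an illustrative assumption, not real data.
        from collections import deque

        def breadth_first_search(graph, start, goal):
            """Return the path with the fewest steps from start to goal, or None."""
            frontier = deque([[start]])   # queue of partial paths
            visited = {start}
            while frontier:
                path = frontier.popleft()
                state = path[-1]
                if state == goal:
                    return path
                for neighbor in graph.get(state, []):
                    if neighbor not in visited:
                        visited.add(neighbor)
                        frontier.append(path + [neighbor])
            return None

        toy_graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "E": ["G"]}
        print(breadth_first_search(toy_graph, "A", "G"))  # ['A', 'C', 'E', 'G']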

    The field was founded on the assumption that human intelligence can be so precisely described that a machine can be made to simulate it. This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence. These issues have been explored by myth, fiction and philosophy since antiquity. Some people also consider AI to be a danger to humanity if it progresses unabated. Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment.

    In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding; and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science, software engineering and operations research.

    Part I History

    Figure 1.1: Silver didrachma from Crete depicting Talos, an ancient mythical automaton with artificial intelligence

    Thought-capable artificial beings appeared as storytelling devices in antiquity, and have been common in fiction, as in Mary Shelley’s Frankenstein or Karel apek’s R.U.R. (Rossum’s Universal Robots). These characters and their fates raised many of the same issues now discussed in the ethics of artificial intelligence. The study of mechanical or formal reasoning began with philosophers and mathematicians in antiquity. The study of mathematical logic led directly to Alan Turing’s theory of computation, which suggested that a machine, by shuffling symbols as simple as 0 and 1, could simulate any conceivable act of mathematical deduction. This insight, that digital computers can simulate any process of formal reasoning, is known as the Church-Turing thesis. Along with concurrent discoveries in neurobiology, information theory and cybernetics, this led researchers to consider the possibility of building an electronic brain. Turing proposed changing the question from whether a machine was intelligent, to whether or not it is possible for machinery to show intelligent behaviour. The first work that is now generally recognized as AI was McCullouch and Pitts’ 1943 formal design for Turing-complete artificial neurons.
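
    The McCulloch-Pitts design mentioned at the end of the paragraph can be made concrete with a short sketch: a unit that sums weighted binary inputs and fires when the sum reaches a threshold. The particular weights and thresholds below are illustrative choices showing how such units realize elementary logic gates; since AND, OR and NOT are functionally complete, networks of such units can compute arbitrary Boolean functions.

        # A McCulloch-Pitts style threshold unit. The weights and thresholds
        # here are illustrative choices, not taken from the 1943 paper.

        def threshold_unit(inputs, weights, threshold):
            """Fire (return 1) if the weighted sum of binary inputs meets the threshold."""
            total = sum(x * w for x, w in zip(inputs, weights))
            return 1 if total >= threshold else 0

        def AND(a, b):
            return threshold_unit([a, b], weights=[1, 1], threshold=2)

        def OR(a, b):
            return threshold_unit([a, b], weights=[1, 1], threshold=1)

        def NOT(a):
            return threshold_unit([a], weights=[-1], threshold=0)

        for a in (0, 1):
            for b in (0, 1):
                print(a, b, "AND:", AND(a, b), "OR:", OR(a, b), "NOT a:", NOT(a))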

    The field of AI research was born at a workshop at Dartmouth College in 1956, where the term "Artificial Intelligence" was coined by John McCarthy to distinguish the field from cybernetics and escape the influence of the cyberneticist Norbert Wiener. Attendees Allen Newell (CMU), Herbert Simon (CMU), John McCarthy (MIT), Marvin Minsky (MIT) and Arthur Samuel (IBM) became the founders and leaders of AI research. They and their students produced programs that the press described as "astonishing": computers were learning checkers strategies (c. 1954) (and by 1959 were reportedly playing better than the average human), solving word problems in algebra, proving logical theorems (Logic Theorist, first run c. 1956) and speaking English. By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense and laboratories had been established around the world. AI's founders were optimistic about the future: Herbert Simon predicted, "machines will be capable, within twenty years, of doing any work a man can do." Marvin Minsky agreed, writing, "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved."

    They failed to recognize the difficulty of some of the remaining tasks. Progress slowed and in 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off exploratory research in AI. The next few years would later be called an AI winter, a period when obtaining funding for AI projects was difficult.

    In the early 1980s, AI research was revived by the commercial success of expert systems, a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research. However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting hiatus began.

    The development of metal-oxide-semiconductor (MOS) very-large-scale integration (VLSI), in the form of complementary MOS (CMOS) transistor technology, enabled the development of practical artificial neural network (ANN) technology in the 1980s. A landmark publication in the field was the 1989 book Analog VLSI Implementation of Neural Systems by Carver A. Mead and Mohammed Ismail.

    In the late 1990s and early 21st century, AI began to be used for logistics, data mining, medical diagnosis and other areas. The success was due to increasing computational power (See Moore’s law and transistor count), greater emphasis on solving specific problems, new ties between AI and other fields (such as statistics, economics and mathematics), and a commitment by researchers to mathematical methods and scientific standards. Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997.

    In 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. Faster computers, algorithmic improvements, and access to large amounts of data enabled advances in machine learning and perception; data-hungry deep learning methods started to dominate accuracy benchmarks around 2012. The Kinect, which provides a 3D body-motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research, as do intelligent personal assistants in smartphones. In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps. In the 2017 Future of Go Summit, AlphaGo won a three-game match with Ke Jie, who at the time had held the world No. 1 ranking for two years. This marked the completion of a significant milestone in the development of artificial intelligence, as Go is considerably more complex than chess.

    According to Bloomberg's Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from sporadic usage in 2012 to more than 2,700 projects. Clark also presents data indicating that AI has improved since 2012, supported by lower error rates in image processing tasks. He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets. Other cited examples include Microsoft's development of a Skype system that can automatically translate from one language to another and Facebook's system that can describe images to blind people. In a 2017 survey, one in five companies reported they had incorporated AI in some offerings or processes. Around 2016, China greatly accelerated its government funding; given its large supply of data and its rapidly increasing research output, some observers believe it may be on track to becoming an AI superpower. However, it has been acknowledged that reports regarding artificial intelligence have tended to be exaggerated.

    The history of Artificial Intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen. The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.

    The field of AI research was founded at a workshop held on the campus of Dartmouth College during the summer of 1956. Those who attended would become the leaders of AI research for decades. Many of them predicted that a machine as intelligent as a human being would exist in no more than a generation and they were given millions of dollars to make this vision come true.

    Eventually, it became obvious that they had grossly underestimated the difficulty of the project. In 1973, in response to the criticism from James Lighthill and ongoing pressure from the US Congress, the U.S. and British governments stopped funding undirected research into artificial intelligence, and the difficult years that followed would later be known as an AI winter. Seven years later, a visionary initiative by the Japanese Government inspired governments and industry to provide AI with billions of dollars, but by the late 80s the investors became disillusioned and withdrew funding again.

    Investment and interest in AI boomed in the first decades of the 21st century, when machine learning was successfully applied to many problems in academia and industry due to new methods, the application of powerful computer hardware, and the collection of immense data sets.

    Myth and legend

    In Greek mythology, Talos was a giant constructed of bronze who acted as guardian for the island of Crete. He would throw boulders at the ships of invaders and would complete three circuits around the island's perimeter daily. According to pseudo-Apollodorus' Bibliotheke, Hephaestus forged Talos with the aid of a cyclops and presented the automaton as a gift to Minos. In the Argonautica, Jason and the Argonauts defeated him by way of a single plug near his foot which, once removed, allowed the vital ichor to flow out from his body and left him inanimate.

    Pygmalion was a legendary king and sculptor of Greek mythology, famously represented in Ovid's Metamorphoses. In the 10th book of Ovid's narrative poem, Pygmalion becomes disgusted with women when he witnesses the way in which the Propoetides prostitute themselves. Despite this, he makes offerings at the temple of Venus, asking the goddess to bring him a woman just like a statue he carved and fell in love with. Indeed, the statue, Galatea, came to life, and by some accounts she and Pygmalion conceived a child.

    The Golem is an artificial being of Jewish folklore, created from clay and, depending on the source, often given some sort of objective. The earliest written account of golem-making is found in the writings of Eleazar ben Judah of Worms, circa the 12th to 13th centuries. During the Middle Ages, it was believed that the animation of a Golem could be achieved by inserting a piece of paper bearing any of God's names into the mouth of the clay figure. Unlike legendary automata such as brazen heads, a Golem was unable to speak.

    Alchemical means of artificial intelligence

    Figure 2.1: Depiction of a homunculus from Goethe’s Faust

    In Of the Nature of Things, the Swiss-born alchemist Paracelsus describes a procedure that he claims can fabricate an artificial man: by placing the sperm of a man in horse dung and feeding it the Arcanum of Man's blood after 40 days, the concoction will become a living infant. Predating Paracelsus was Jābir ibn Hayyān's take on the homunculus, the Takwin. In Faust, The Second Part of the Tragedy by Johann Wolfgang von Goethe, an alchemically fabricated Homunculus, destined to live forever in the flask in which he was made, endeavors to be born into a full human body. Upon the initiation of this transformation, however, the flask shatters and the Homunculus dies.

    Early-modern legendary automata

    These legendary automata of the early modern period were said to possess the magical ability to answer questions put to them. The late medieval alchemist and scholar Roger Bacon, having acquired a legendary reputation as a wizard, was purported to have fabricated a brazen head. These legends were similar to the Norse myth of the Head of Mímir. According to legend, Mímir was known for his intellect and wisdom, and was beheaded in the Æsir-Vanir War. Odin is said to have embalmed the head with herbs and spoken incantations over it such that Mímir's head remained able to speak wisdom to Odin. Odin then kept the head near him for counsel.

    Modern fiction

    By the 19th century, ideas about artificial men and thinking machines were developed in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots), and speculation, such as Samuel Butler's Darwin among the Machines. AI has become a regular topic of science fiction through the present.

    Realistic humanoid automata were built by craftsmen from every civilization, including Yan Shi, Hero of Alexandria, Al-Jazari, Pierre Jaquet-Droz, and Wolfgang von Kempelen. The oldest known automata were the sacred statues of ancient Egypt and Greece. The faithful believed that craftsmen had imbued these figures with very real minds, capable of wisdom and emotion; Hermes Trismegistus wrote that by discovering the true nature of the gods, man has been able to reproduce it.

    Figure 2.2: Al-Jazari’s programmable automata (1206 CE)

    Figure 2.3: Gottfried Leibniz, who speculated that human reason could be reduced to mechanical calculation

    Artificial intelligence is based on the assumption that the process of human thought can be mechanized. The study of mechanical, or formal, reasoning has a long history. Chinese, Indian and Greek philosophers all developed structured methods of formal deduction in the first millennium BCE. Their ideas were developed over the centuries by philosophers such as Aristotle (who gave a formal analysis of the syllogism), Euclid (whose Elements was a model of formal reasoning), al-Khwārizmī (who developed algebra and gave his name to the word "algorithm") and European scholastic philosophers such as William of Ockham and Duns Scotus.

    Spanish philosopher Ramon Llull (1232-1315) developed several logical machines devoted to the production of knowledge by logical means; Llull described his machines as mechanical entities that could combine basic and undeniable truths by simple logical operations, produced by the machine by mechanical means, in such ways as to produce all possible knowledge. Llull's work had a great influence on Gottfried Leibniz, who redeveloped his ideas.

    In the 17th century, Leibniz, Thomas Hobbes and René Descartes explored the possibility that all rational thought could be made as systematic as algebra or geometry. Hobbes famously wrote in Leviathan that "reason is nothing but reckoning". Leibniz envisioned a universal language of reasoning (his characteristica universalis) which would reduce argumentation to calculation, so that "there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in hand, to sit down to their slates, and to say to each other (with a friend as witness, if they liked): Let us calculate." These philosophers had begun to articulate the physical symbol system hypothesis that would become the guiding faith of AI research.
