

Artificial Intelligence - Stephen K. Marchant

Introduction

Thousands of years ago, Western, Indian, Chinese, and Greek philosophers imagined machines that could think. These philosophers attempted to characterize human thought as a symbolic system.

Their efforts saw some light when, in the 1940s, that line of thought led to the invention of a machine built on the principles of mathematical reasoning: the programmable digital computer. The invention quickly inspired scientists all over the world, motivating them to work even harder to make intelligent machines a reality.

One of these scientists was John McCarthy, who coined the term artificial intelligence in 1956 at a conference he hosted at Dartmouth College. McCarthy believed that if scientists banded together and combined their efforts, an intelligent machine could be created. He summoned a small group of scientists to discuss the possibility of developing an electronic brain, or intelligent machine.

Following the conference, participants concluded that it was possible to create a machine that could compete with human intelligence. The government liked the concept and agreed to fund the dream.

Because of hardware limitations, the scientists realized at the end of the 1960s that the project was more ambitious than they had anticipated. By that point, the government had had enough and withdrew financial support, signaling years of difficulty for artificial intelligence (AI) research.

In the 1980s, the Japanese government launched a visionary initiative to fund expert systems, a further line of AI research. This prompted the governments of the United States and the United Kingdom to reintroduce funding for AI research. The research received billions of dollars, but governments ran out of patience while waiting for the researchers to meet their goals, and they withdrew their funding once more.

Today, artificial intelligence is increasingly becoming a part of our daily lives. AI is used extensively in retail stores, automobiles, and healthcare facilities.

All of the greatness and success of artificial intelligence did not happen by chance; it was the result of advances in a field known as machine learning.

The Application We Know

The most familiar application of artificial intelligence is the behavior of enemies in video games. The issue is that this is not true intelligence. Why not? Science defines intelligence as the capacity to acquire new information and combine it with previous information to inform present behavior. As a result, the primary challenge in creating true intelligence is figuring out how to teach machines to learn.

The mechanical nature of enemies in video games is why they are still referred to as artificial intelligence. The average video game player regards a first-person shooter enemy as intelligent in the sense that the enemy will follow them: if the player starts to hide, the enemy gets closer, and if the player is shooting in the open, the enemy flees for cover after a few bullets. This gives the impression that the enemy is thinking.
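The pseudocode the author refers to below is not reproduced in this preview. The sketch that follows is a minimal, assumed reconstruction of the line-trace logic the passage describes, written as Python-style pseudocode; line_trace, enemy, player, and their methods are illustrative names, not the book's actual code.

    import random

    def enemy_tick(enemy, player):
        # Trace a line from where the enemy is aiming toward the player.
        hit = line_trace(enemy.aim_origin, player.position)

        if hit is not player.collision_box:
            # The trace hit cover or some other object: the player is hiding,
            # so the enemy moves forward to close the distance.
            enemy.move_toward(player.position)
        else:
            # Clear line of sight: pick randomly between shooting and taking
            # cover, which is what makes the enemy look as if it is thinking.
            if random.random() < 0.5:
                enemy.fire_at(player)
            else:
                enemy.take_cover()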

As shown in the pseudocode above, if the line trace from where the enemy is aiming hits not the player's collision box but some other object's collision, the enemy will move forward. This draws the enemy closer and gives the impression that the enemy knows the player is hiding. If, however, the player is targeting the enemy and the line trace collides with the player's collision box, the enemy uses a randomized value to decide whether to fire at the player or hide behind cover. The line trace represents the enemy's line of sight toward the player.

The randomization gives the impression that the enemy chooses whether to duck behind cover or fire at the player. However, because no prior information is involved, this is not real intelligence; only the current situation informs the current decision. Furthermore, the machine has no ability to learn about the player it is confronting. This means that, while the machine appears to be intelligent, no intelligence is at work. Instead, you have an instruction set, pre-programmed by a thinking individual, that tells the machine what to do in a specific case or circumstance.

You may now believe that devices such as Amazon Echo or Google Home are hubs of true mechanical intelligence. The issue is that these machines function in the same way as the enemies we just discussed.

Human speech is governed by rules; think of them as the instruction manual for speaking a language. The important thing to remember about rules is that if they exist, a machine can most likely follow them. I won't go into detail about how recurrent neural networks work, but you can basically assume that the machine is computing probability statistics about which words are likely to fit together, based on those rules. This is a very accurate way of predicting which words a person will say, and it is what powers features such as voice typing or the automatic captions generated for a video.
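As a rough illustration of the word-prediction idea (not the method any particular assistant actually uses), here is a tiny sketch that counts word pairs in a made-up sample and predicts the most likely next word; the corpus and all names are invented for the example.

    from collections import Counter, defaultdict

    # Count which word follows which in a tiny, made-up sample.
    corpus = "turn on the lights turn off the lights turn up the volume".split()
    pair_counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        pair_counts[prev][nxt] += 1

    def most_likely_next(word):
        # Predict the word that most often followed `word` in the sample.
        following = pair_counts[word]
        return following.most_common(1)[0][0] if following else None

    print(most_likely_next("the"))   # "lights" (it follows "the" twice, "volume" once)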

Until now, we haven't had many machines that can learn and use that knowledge to improve themselves. However, we have only recently entered the digital age and are beginning to make use of a technique known as backpropagation.

Backpropagation is at the heart of almost all machine learning algorithms currently in use. A neural network is designed to take data as input, perform operations on that information within its neurons, and produce an output. If the output is incorrect, we change what is happening inside the neurons in order to arrive at a more optimized answer. The older method was to take the variables within the neural network and try sets of randomized values in order to find the best combination. This is a very slow process that usually takes a long time to complete. The newer method is known as backpropagation, whereas the trial-and-error approach just described, which only ever runs the network forward and checks the result, is associated with forward propagation.

Instead of trying randomized values and never knowing whether you have reached the best combination, we use calculus and the network's outputs. When the network produces a wrong answer, that information is fed back into the network, which takes the incorrect result together with the numbers used to reach it and computes a more optimized combination. Backpropagation is the first genuinely intelligent form of machine learning. However, in most cases the vast majority of neural network applications, such as Siri, are not back-propagating networks. This may change over time, but backpropagation requires a lot of computing power that most devices simply do not have.
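The following is a minimal sketch, not taken from the book, of the gradient idea behind backpropagation, reduced to a single weight so the calculus step is visible; the numbers are arbitrary.

    # Fit y = w * x to one training example by repeatedly correcting the weight.
    # A full network back-propagates this kind of gradient through many layers.
    x, target = 2.0, 6.0        # we want w * 2.0 to equal 6.0
    w = 0.5                     # initial weight: a poor guess
    learning_rate = 0.1

    for step in range(25):
        output = w * x                    # forward pass
        error = output - target           # how wrong the output is
        gradient = error * x              # derivative of 0.5 * error**2 with respect to w
        w -= learning_rate * gradient     # nudge the weight against the gradient

    print(round(w, 3))  # approximately 3.0, since 3.0 * 2.0 == 6.0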

Chapter 1: The History Of Artificial Intelligence (A.I.)

The theme is so compelling that Hollywood has never stopped talking about it. Since Metropolis, a silent film from 1927, we have imagined robots, computers, and programs that work for our own good or in pursuit of destruction. Off the top of your head you can cite 'Blade Runner'; 'A.I. Artificial Intelligence'; 'Her,' with a personal assistant voiced by Scarlett Johansson; the Matrix and Terminator franchises; 'I, Robot,' based on the influential work of Isaac Asimov; and '2001: A Space Odyssey,' with the menacing HAL 9000.

Ideas related to artificial intelligence long predate the technology that made it possible. Human beings have always wanted a machine that could behave and think like us. Research from different fields began to move explicitly in that direction during the Second World War. In 1943, Warren McCulloch and Walter Pitts published a paper that spoke for the first time about neural networks: artificial reasoning systems in the form of a mathematical model that mimics our nervous system.
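To give a sense of what that 1943 model amounts to, here is a minimal sketch of a McCulloch-Pitts-style threshold unit; it is a simplified illustration of the idea (the original formalism also allows inhibitory inputs), not code from the paper.

    def mp_neuron(inputs, threshold):
        # A McCulloch-Pitts unit fires (returns 1) when enough of its binary
        # inputs are active to reach the threshold, and stays silent otherwise.
        return 1 if sum(inputs) >= threshold else 0

    # With two inputs and threshold 2 the unit acts like logical AND;
    # with threshold 1 it acts like logical OR.
    print(mp_neuron([1, 1], threshold=2))  # 1
    print(mp_neuron([1, 0], threshold=2))  # 0
    print(mp_neuron([1, 0], threshold=1))  # 1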

Another critical article of the period is Claude Shannon's 1950 work on how to program a computer to play chess using simple yet effective position evaluations.

Putting This Into Effect

In that same period, in 1950, the legendary Alan Turing devised a way to determine whether a computer could pass as a person in a written conversation. This is the Turing test, originally known as the imitation game, which later gave its name to the film that portrayed the researcher's life, with Benedict Cumberbatch in the leading role.

SNARC was born in 1951: a calculator of mathematical operations that simulated synapses, the connections between neurons. Marvin Minsky, familiar with that first paper on neural networks, was the person in charge. And in 1952, Arthur Samuel developed a checkers program for the IBM 701 that kept improving on its own and became a challenge for amateur players.

The Base

Everything we have said so far is accurate, but it came before what is usually treated as the field's kickoff. Ground zero was the so-called Dartmouth Conference of 1956. This meeting brought together Nathaniel Rochester from IBM, Shannon of the chess paper, Minsky of SNARC, John McCarthy, and many others. It was McCarthy who named the research area artificial intelligence. Even the field's guiding maxim was defined there: any aspect of learning, or any other feature of intelligence, can be described precisely enough for a computer to be made to simulate it.

From then on, those who had taken part in the conference, or who embraced its proposals, came together to get AI off the ground. The potential was so exciting that private and government agencies invested heavily in the field, including ARPA, the Advanced Research Projects Agency, the same place where the internet was born.

Take a look at the sequence of advances in that period: in 1957, Frank Rosenblatt introduced the perceptron, an algorithm whose name sounds like a Transformers character. It is a single-layer neural network that classifies its inputs, and it debuted in a machine called the Mark 1. As early as 1958, the Lisp programming language appeared; it became a staple of artificial intelligence systems and today inspires a whole family of languages.

In 1959, the term machine learning appears for the first time, describing an approach that gives computers the ability to learn a specific function without being explicitly programmed for it. Essentially, this means feeding data to an algorithm so that the computer learns to perform a task automatically.
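As a concrete illustration of both ideas, Rosenblatt's single-layer classifier and learning a function from data rather than explicit programming, here is a minimal perceptron sketch; the training data (the logical AND function) and all parameters are chosen only for the example.

    # Minimal perceptron: learn the logical AND function from labeled examples.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    weights, bias, rate = [0.0, 0.0], 0.0, 0.1

    for _ in range(20):                          # a few passes over the data
        for (x1, x2), label in data:
            output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
            error = label - output               # perceptron learning rule
            weights[0] += rate * error * x1
            weights[1] += rate * error * x2
            bias += rate * error

    for (x1, x2), label in data:
        prediction = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        print((x1, x2), prediction == label)     # all True once training converges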

In 1964, the world had its first chatbot, ELIZA, which conversed automatically, imitating a psychoanalyst by using responses based on keywords and syntactic structure. And in 1969, Shakey was presented as the first robot to combine mobility, vision, and a certain autonomy of action. It was sluggish and flawed, but it worked.
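ELIZA's keyword-based approach is easy to convey in code. The sketch below is a toy responder in the same spirit, not ELIZA's actual script; the keywords and replies are invented.

    # Toy ELIZA-style responder: match a keyword and return a canned reply.
    rules = {
        "mother": "Tell me more about your family.",
        "sad": "Why do you feel sad?",
        "always": "Can you think of a specific example?",
    }

    def respond(sentence):
        lowered = sentence.lower()
        for keyword, reply in rules.items():
            if keyword in lowered:
                return reply
        return "Please go on."                   # fallback when nothing matches

    print(respond("My mother called me today"))  # "Tell me more about your family."
    print(respond("The weather was fine"))       # "Please go on."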

The High And The Low

The field of natural language processing has been one of the most exciting. It is the area of AI devoted to understanding human language. It has many applications, such as translation, text generation, speech recognition, voice processing, and more.

But while hopes were high and academic studies plentiful, in reality things were not as concrete or as quick as predicted. Robots were not walking around running super-powerful software. That is why, from the mid-1970s to the beginning of the 1980s, the field lived through a dark time known as the winter of artificial intelligence: an age of little coverage, spending cuts, and low exposure.

The field needed to be reinvented. One of the areas that
