Turing Test: Fundamentals and Applications
Ebook · 153 pages · 2 hours


About this ebook

What Is the Turing Test


The Turing test, which Alan Turing introduced in 1950 and originally called the imitation game, is a test of whether a machine can exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Turing envisioned a human evaluator judging natural-language conversations between a human and a machine designed to generate human-like responses. The evaluator would know that one of the two conversation partners was a machine, and all participants would be physically separated from one another. Because the conversation would take place solely through a text-only channel, such as a computer keyboard and screen, the outcome would not depend on the machine's ability to render words as speech. The machine is considered to have passed the test if the evaluator cannot reliably tell it apart from the human participant. The result does not depend on whether the machine gives correct answers to questions, but only on how closely its answers resemble those a human would give.


How You Will Benefit


(I) Insights and validations about the following topics:


Chapter 1: Turing test


Chapter 2: Artificial intelligence


Chapter 3: Computing Machinery and Intelligence


Chapter 4: Chinese room


Chapter 5: Loebner Prize


Chapter 6: Artificial general intelligence


Chapter 7: History of artificial intelligence


Chapter 8: Philosophy of artificial intelligence


Chapter 9: Eugene Goostman


Chapter 10: Winograd schema challenge


(II) Answers to the public's top questions about the Turing test.


(III) Real-world examples of the use of the Turing test in many fields.


(IV) 17 appendices that briefly explain 266 emerging technologies in each industry, giving a 360-degree understanding of the technologies surrounding the Turing test.


Who This Book Is For


Professionals, undergraduate and graduate students, enthusiasts, hobbyists, and anyone who wants to go beyond basic knowledge of the Turing test.

Language: English
Release date: Jul 3, 2023

    Book preview

    Turing Test - Fouad Sabry

    Chapter 1: Turing test

    The Turing test was developed by Alan Turing in 1950 and was first known as the imitation game. Under the Turing test, a machine is considered to have passed if an evaluator cannot reliably distinguish it from a human participant. The result does not depend on whether the machine gives correct answers to questions, but only on how closely its answers resemble those a human would give.

    Turing first proposed the test in his 1950 article Computing Machinery and Intelligence, written while he was employed at the University of Manchester. Some of its critiques, such as John Searle's Chinese room, are themselves contentious.

    People have long pondered whether it is even conceivable for machines to think, a question firmly rooted in the contrast between dualist and materialist perspectives on the mind.

    René Descartes prefigures aspects of the Turing test in his 1637 Discourse on the Method when he writes:

    How many unique automata or moving devices might be manufactured by the industry of man? Because of this, it is not difficult for us to comprehend that a machine could be constructed in such a way that it could utter words and even emit some responses to action on it of a corporeal kind, which causes a change in its organs; for example, if it is touched in a particular part, it may ask what we wish to say to it; if it is touched in another part, it may exclaim that it is being hurt, and so on and so forth. But it never occurs that it will organize its speech in a variety of ways in order to provide an acceptable response to anything that may be said in its presence, as even the lowest kind of man is able to accomplish.

    Descartes observes in this passage that automata are capable of responding to human interaction. However, he argues that such automata cannot respond appropriately to things said in their presence in the way that any person can. Descartes therefore anticipates the Turing test by identifying the lack of an appropriate verbal response as what separates the human from the automaton. What he fails to consider is the possibility that future automata might overcome this insufficiency; so although he prefigures its conceptual framework and criterion, Descartes does not propose the Turing test as such.

    In his 1746 Pensées philosophiques, Denis Diderot formulates a Turing-test-like criterion, though with the significant implicit limiting assumption that the candidates are natural living beings rather than manufactured artifacts:

    If they discover a parrot that is capable of answering any and all questions, I will certainly assert that it is an intelligent entity.

    This does not mean that Diderot agreed with the criterion; rather, it shows that it was already a common argument among materialists of the period.

    Dualists maintain that the mind is an immaterial phenomenon, or at the very least one with non-physical properties. The philosopher Alfred Ayer later proposed an empirical test for distinguishing a conscious being from an unconscious machine (a proposal somewhat comparable to the Turing test, although it is not known for certain that Ayer's well-known philosophical classic was familiar to Turing). To put it another way, if something cannot pass the consciousness test, we cannot consider it conscious.

    By the 1940s it was a well-established trope in science fiction for a human to judge whether a computer or an extraterrestrial was intelligent, and it is probable that Turing was aware of these fictional tests.

    Prior to the founding of the field of artificial intelligence (AI) research in 1956, scientists in the United Kingdom had been investigating machine intelligence for up to ten years. As part of that inquiry, Turing put forward what can be seen as the precursor of his later tests:

    It is not difficult to devise a paper machine which will play a not very bad game of chess.

    Turing's 1950 paper Computing Machinery and Intelligence was the first published work to focus squarely on the concept of machine intelligence. It opens with the words: I propose to consider the question, 'Can machines think?' Turing then describes the new form of the game as follows:

    The question that we now ask is: What will happen when a computer takes the place of A in this game? Will the interrogator be just as likely to decide wrongly when the game is played in this manner as when it is played between a man and a woman? These questions replace our original question, Can machines think?

    Later in the article, Turing proposes an equivalent alternative version in which a judge converses only with a man and a machine.

    Since the 1990s, it has been widely realized that the Turing test represents a certain notion of intelligence that is performative, similar to gender and sexuality.

    In 1966, Joseph Weizenbaum created a computer program that appeared to be able to pass the Turing test.

    The program, known as ELIZA, worked by examining the user's typed comments for keywords. If a keyword is found, a rule that transforms the user's comment is applied, and the resulting sentence is returned. If no keyword is found, ELIZA responds either with a generic riposte or by repeating one of the earlier comments.

    Some have claimed that ELIZA was thereby able to fool users into believing they were conversing with a human, even though this view is highly contentious (see Naïveté of interrogators below).
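
    As a rough illustration only, and not a reproduction of Weizenbaum's program, the keyword-and-rule behaviour described above can be sketched in a few lines of Python. The keyword patterns, response templates, and fallback replies below are invented placeholders, not ELIZA's actual script.

import random
import re

# Keyword rules: (pattern that captures part of the user's comment,
# template that transforms the captured text into a reply).
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]

# Generic ripostes used when no keyword matches.
FALLBACKS = ["Please go on.", "I see.", "Can you elaborate on that?"]


def reply(comment, earlier_comments):
    """Answer one typed comment by keyword matching, falling back to a
    generic riposte or to repeating one of the earlier comments."""
    for pattern, template in RULES:
        match = pattern.search(comment)
        if match:
            # A keyword was found: apply the rule, i.e. turn the user's
            # own words into a question and return the whole sentence.
            return template.format(match.group(1).rstrip(".!?"))
    # No keyword found: generic riposte, or echo an earlier comment.
    if earlier_comments and random.random() < 0.5:
        return "Earlier you said: " + random.choice(earlier_comments)
    return random.choice(FALLBACKS)


if __name__ == "__main__":
    history = []
    print(reply("I feel tired today", history))   # keyword rule fires
    history.append("I feel tired today")
    print(reply("The weather is nice", history))  # fallback path

    This keyword lookup is exactly the kind of behaviour Searle's Chinese room argument targets below: the rules manipulate the symbols in the comment without any model of what they mean.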

    In 1972, Kenneth Colby created the program known as PARRY, which has been described as ELIZA with attitude.

    In his work titled Minds, Brains, and Programs, which was published in 1980, John Searle introduced the Chinese room thought experiment and stated that the Turing test could not be utilized to assess whether or not a computer could think. It was pointed out by Searle that computer programs (like ELIZA) might easily pass the Turing test by just manipulating symbols of which they had no comprehension. Because they lacked comprehension, it would be inaccurate to refer to them as thinking in the same way that humans did. Because of this, Searle came to the conclusion that the Turing test was unable to demonstrate that computers were capable of thinking.

    The Loebner Prize, first held in November 1991, is an annual competition that provides a forum for practical Turing testing.

    No entrant has ever won the silver (text only) or the gold (audio and visual) award. Nevertheless, in every competition since its inception, the bronze medal has been awarded to the computer program that, in the judges' view, exhibits the most human conversational behavior among that year's entries. The Artificial Linguistic Internet Computer Entity (A.L.I.C.E.) has won the bronze award on three occasions in recent years (2000, 2001, 2004), and the learning AI Jabberwacky won in 2005 and 2006.

    The Loebner Prize tests conversational intelligence; winners are typically chatterbot programs, also known as Artificial Conversational Entities (ACE). The early Loebner Prize rules restricted conversations: each conversation between an entry and a hidden human was limited to a single topic, so interrogators were confined to one line of questioning per interaction. The 1995 Loebner Prize removed this topic restriction. The length of interaction between judge and entity has also varied across Loebner Prizes. In Loebner 2003, held at the University of Surrey, each interrogator was allowed five minutes to converse with an entity, whether machine or hidden human. Between 2004 and 2007, participants were allowed more than twenty minutes of interaction.

    Saul Traiger contends that there are at least three primary interpretations of the Turing test: two presented in Computing Machinery and Intelligence, and a third that he refers to as the Standard Interpretation.

    Turing's original article describes a simple game played by three people. Player A is a man, Player B is a woman, and Player C, who may be of either sex, plays the role of the interrogator. Player C cannot see either Player A or Player B during the imitation game and can communicate with them only through written notes. By putting questions to Players A and B, Player C tries to determine which of the two is the man and which is the woman. Player A's task is to trick the interrogator into making the wrong decision, while Player B's task is to help the interrogator make the right one.

    Turing then asks:

    What will happen in this game if a computer plays the part of A instead of a human? Will the interrogator be just as likely to decide wrongly when the game is played in this manner as when it is played between a man and a woman? These questions replace our original question, Can machines think?

    The second version appears later in Turing's 1950 paper. As in the original imitation game, a computer takes the role of Player A; Player B, however, is now a man rather than a woman.

    Let us fix our attention on one particular digital computer C. Is it true that C can be made to play the part of A in the imitation game satisfactorily, by modifying this computer to have adequate storage, suitably increasing its speed of action, and providing it with a suitable program, while a man plays the part of B in the game?

    In this version of the game, both Player A (the computer) and Player B are trying to trick the interrogator into making an incorrect decision.
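
    As a concrete, if simplified, sketch of this text-only setup, the short Python program below hides a machine and a human behind neutral labels and asks a judge function to name the machine from the written transcript alone. The respondent functions and the judge are hypothetical placeholders, not any particular chatbot or evaluation procedure.

import random


def hidden_machine(question):
    # Placeholder for the conversational program taking the part of A.
    return "That is an interesting question. What do you think?"


def hidden_human(question):
    # Placeholder for the person taking the part of B, answering in text.
    return "Honestly, I'd have to think about '" + question + "' for a while."


def run_session(questions, judge):
    """Run one session of the text-only game. Returns True if the judge
    fails to identify the machine, i.e. the machine passes this round."""
    # Hide the two respondents behind neutral labels so the judge sees
    # nothing but their written answers.
    respondents = {"X": hidden_machine, "Y": hidden_human}
    if random.random() < 0.5:
        respondents = {"X": hidden_human, "Y": hidden_machine}

    transcript = [(q, respondents["X"](q), respondents["Y"](q))
                  for q in questions]
    verdict = judge(transcript)  # the label the judge believes is the machine
    machine_label = "X" if respondents["X"] is hidden_machine else "Y"
    return verdict != machine_label


def naive_judge(transcript):
    # A judge with no real strategy: always accuse label "X".
    return "X"


if __name__ == "__main__":
    escaped = sum(run_session(["What is a sonnet?"], naive_judge)
                  for _ in range(1000))
    print("Machine escaped detection in", escaped, "of 1000 sessions",
          "(about half, as expected when the judge is effectively guessing).")

    Turing's criterion is captured in the return value of run_session: the machine is judged not on whether its answers are correct, but on whether the judge can tell, from text alone, which respondent it is.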

    Although the usual interpretation was not included in the
