Artificial Intelligence The Impact on Society
Ebook · 112 pages · 1 hour
About this ebook

This book provides an in-depth look at the revolutionary advances taking place in the field of artificial intelligence and a thoughtful exploration of their vast implications. It traces the origins and early history of AI, from the pioneering work of figures like Turing and von Neumann to the recent explosion driven by deeper neural networks and bigger data. 

Language: English
Publisher: Ary S. Jr
Release date: Feb 19, 2024
ISBN: 9798224875467
Author

Ary S. Jr.

Ary S. Jr. is a Brazilian author who writes about various topics, such as psychology, spirituality, self-help, and technology. He has published several e-books, some of which are available on platforms like Everand, Scribd, and Goodreads. He is passionate about sharing his knowledge and insights with his readers, and aims to inspire them to live a more fulfilling and meaningful life.



    Book preview

    Artificial Intelligence The Impact on Society - Ary S. Jr.

    Chapter 1

    The Dawn of AI

    For centuries, humans have dreamt of creating intelligent machines that could think as we do. Ancient myths told of Hephaestus' mechanical servants in Greece and of Yan Shi's lifelike automaton presented to King Mu of Zhou in China, reflecting our deep fascination with artificial minds. However, it was not until the 1950s that serious research into building intelligent machines began.

    In 1956, a seminal meeting was held at Dartmouth College where scientists and researchers first coined the term artificial intelligence and proposed that learning and intelligence could be simulated mechanically. This landmark conference marked the birth of AI as a formal academic discipline. The following decades saw steady yet incremental progress as early systems were developed.

    Pioneering programs of the late 1950s like the Logic Theorist demonstrated rudimentary symbolic reasoning by proving mathematical theorems, while the General Problem Solver used means-ends analysis for problem-solving. Later landmarks included SHRDLU, with its primitive natural language skills, and chess programs that steadily improved. However, these early AI systems exhibited only narrow, specialized forms of intelligence, constrained by their programming.

    A key evolution began in the 1980s with the rise of machine learning and neural networks. Inspired by biological brains, these new techniques allowed systems to learn on their own by recognizing patterns in data rather than relying solely on explicit programming. Progress accelerated in the 2010s as deep learning using multi-layer neural networks achieved major breakthroughs, matching and surpassing human-level performance at tasks like image recognition.

    AI now stands at the threshold of a revolutionary new era. Advances in self-supervised learning are bringing machines closer to flexible, general-purpose capabilities. If these formidable technologies continue to mature, they could greatly augment humankind; but to guarantee their safe and beneficial development, we must carefully address the significant societal and ethical issues that come with this new era of intelligent machines. The story of artificial intelligence reflects both our collective evolution as a species and our unwavering quest to unravel the secrets of cognition.

    The first stirrings of artificial intelligence began in the 1950s with the creation of simple logic-based programs. These early systems were a far cry from the thinking robots of science fiction. Rather than mechanical beings, the earliest AIs were software programs running on new computing machines.

    One of the pioneering projects was the Logic Theorist, created by Allen Newell, Cliff Shaw, and Herbert Simon at RAND Corporation in 1956. This program demonstrated a basic capability for symbolic reasoning by automatically proving mathematical theorems. Though its abilities were narrow, the Logic Theorist showcased how computation could emulate logical deduction in a digital form.

    Over the next decade, AI research produced progressively more capable systems. General Problem Solver employed means-ends analysis to break problems down into subgoals and take steps towards solutions. SHRDLU achieved rudimentary natural language understanding by manipulating virtual blocks in a simulated world. Dendral demonstrated early machine learning techniques for proposing organic chemical structures. Chess programs also evolved from restricted endgames to championship-level play.

    While impressive, these early AI systems exhibited only narrow, specialized forms of intelligence. They lacked the broad, open-ended learning abilities of human cognition. A key limitation was their reliance on explicit programming to define rules and strategies for domains.

    A major evolution began in the 1980s with the advent of machine learning techniques. Neural networks, genetic algorithms, and other data-driven methods empowered systems to induce rules automatically by recognizing patterns in large datasets. This marked a shift from programmed systems to those that could learn on their own from experience.

    Advances continued into the 2010s as deeper neural networks revolutionized pattern recognition. Now with many layered connections modeled after the brain, deep learning networks achieved human-level performance on tasks like image classification that had long confounded rule-based approaches. The development of massive computing infrastructures enabled training on vast troves of internet data.

    AI is now approaching open-ended learning. Through self-supervision, systems trained on continuously growing datasets exhibit an expanding range of autonomous capabilities. Turing's vision of self-learning computers is edging closer to reality: even while human judgment still sets the objectives, machines increasingly improve themselves through experience. The progression that began with the Logic Theorist has brought AI to its most transformative stage to date.

    One of the earliest and most influential proposals for evaluating machine intelligence was Alan Turing's landmark idea known as the Turing Test. In his 1950 paper Computing Machinery and Intelligence, Turing argued that if a machine could converse indistinguishably from a human, it could be said to demonstrate human-level thought.

    The Turing Test proposed placing a human judge in typed conversation with both a person and a machine without revealing which was which. If the judge could not reliably tell the machine's responses apart from the human's through natural language questioning, the machine was said to have passed. Though a simplified formulation, Turing believed this would serve as a practical test for artificial intelligence without requiring an examination of internal processing.

    Since its conception, the Turing Test has sparked enduring philosophical debate about what truly constitutes thinking and intelligent behavior. It has also challenged generations of AI researchers to build ever more convincing natural language agents capable of human-like conversation. However, fully passing Turing's exacting standard remained out of reach for decades, as machines struggled with open-domain understanding.

    A widely publicized milestone came in 2014, when a chatbot named Eugene Goostman, which posed as a 13-year-old boy, convinced 33% of judges in a modified Turing Test that they were chatting with a human after just five minutes of conversation. Though a limited success, achieved partly by adopting a persona whose mistakes could be excused, it marked the most human-like dialogue machines had demonstrated to date.

    The Turing Test remains a major influence on the development of dialog systems and natural language processing today. As machines develop new capacities for fluency, understanding, and language production, they edge closer to Turing's proposed criterion of indistinguishability from humans. Even though the Turing Test has rightly been criticized as an incomplete measure of intelligence, its simplicity remains enticing, and it will be a landmark accomplishment when a machine can finally persuade a human that it, too, is a mind.
