Existential Risk from Artificial General Intelligence: Fundamentals and Applications
Ebook · 120 pages · 1 hour

About this ebook

What Is Existential Risk from Artificial General Intelligence


Existential risk from artificial general intelligence refers to the idea that significant advances in artificial general intelligence (AGI) could one day lead to human extinction or to some other irreversible global catastrophe.


How You Will Benefit


(I) Insights and validations about the following topics:


Chapter 1: Existential risk from artificial general intelligence


Chapter 2: Artificial general intelligence


Chapter 3: Superintelligence


Chapter 4: Technological singularity


Chapter 5: AI takeover


Chapter 6: Machine Intelligence Research Institute


Chapter 7: Nick Bostrom


Chapter 8: Friendly artificial intelligence


Chapter 9: AI capability control


Chapter 10: Superintelligence: Paths, Dangers, Strategies


(II) Answering the public's top questions about existential risk from artificial general intelligence.


(III) Real-world examples of how existential risk from artificial general intelligence is addressed in many fields.


(IV) 17 appendices that briefly explain 266 emerging technologies in each industry, for a 360-degree understanding of the technologies related to existential risk from artificial general intelligence.


Who This Book Is For


Professionals, undergraduate and graduate students, enthusiasts, hobbyists, and anyone who wants to go beyond basic knowledge of existential risk from artificial general intelligence.

Language: English
Release date: Jul 2, 2023


Book preview

Existential Risk from Artificial General Intelligence - Fouad Sabry

Chapter 1: Existential risk from artificial general intelligence

Existential risk from artificial general intelligence refers to the idea that significant advances in artificial general intelligence (AGI) might one day lead to human extinction or to some other irreversible global catastrophe.

In his 1863 essay Darwin among the Machines, the writer Samuel Butler was one of the first authors to voice serious concern that highly evolved machines might pose existential risks to the human race. He wrote:

The conclusion is only a matter of time; but that the time will come when the machines hold real supremacy over the world and the people who live on it is something that no one with a truly philosophical mind can doubt even for a moment.

In his 1951 lecture Intelligent Machinery, A Heretical Theory, computer scientist Alan Turing suggested that artificial general intelligences would likely take control of the world once they became more intelligent than human beings:

Let us now assume, for the sake of argument, that [intelligent] machines are a genuine possibility, and consider the consequences of building them... There would be no question of the machines dying, and they would be able to converse with one another to sharpen their wits. At some stage, therefore, we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler's Erewhon.

Finally, in 1965, I. J. Good originated the concept that would later become known as an intelligence explosion. He also noted that the risks were poorly understood at the time:

Let us define an ultraintelligent machine as one that is capable of far surpassing all the intellectual activities of any man, however clever he may be. Since the design of machines is one of these intellectual activities, an ultraintelligent machine would be able to design even better machines; there would then unquestionably be an intelligence explosion, and the intelligence of man would be left far behind. The first ultraintelligent machine is therefore the last invention that humanity need ever make, provided the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction; it is sometimes worthwhile to take science fiction seriously.

Academics such as Marvin Minsky occasionally made statements expressing concern that a superintelligence could seize control, but these statements contained no call to action.

Artificial Intelligence: A Modern Approach, the standard undergraduate AI textbook, acknowledges the same risk.

A system's implementation may contain defects that are not immediately noticeable but that have catastrophic effects. Space probes offer an analogy: even though engineers know that bugs in costly space probes are difficult to fix after launch, they have historically been unable to prevent fatal defects from occurring.

No matter how much effort is put into a system's pre-deployment design, its specification almost always produces unintended behavior the first time the system encounters a new situation. For instance, Microsoft's Tay exhibited no offensive behavior during pre-deployment testing, yet it was all too easily goaded into behaving inappropriately once it interacted with real users.

The following is an excerpt from the 2015 Open Letter on Artificial Intelligence, which points to significant progress in AI research and to the potential for AI to bring enormous long-term benefits or costs:

With the progress that has been made in artificial intelligence research, the time has come to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. Such considerations motivated the AAAI 2008-09 Presidential Panel on Long-Term AI Futures as well as other projects on AI impacts, and they constitute a significant expansion of the field of artificial intelligence itself, which up to now has focused largely on techniques that are neutral with respect to purpose. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.

This letter was signed by a number of prominent artificial intelligence researchers in academia and industry, including AAAI president Thomas Dietterich, Eric Horvitz, Bart Selman, Francesca Rossi, Yann LeCun, and the founders of Google DeepMind and Vicarious.

To humans, the mind of a superintelligent computer would be as foreign as human cognition is to cockroaches. Such a computer might not have the best interests of humanity in mind; indeed, it is not clear that it would care about human welfare at all. If superintelligent AI is possible, and if the goals of a superintelligence can conflict with fundamental human values, then AI poses a risk to the survival of the human race. Because a superintelligence (a system that exceeds human capabilities in every relevant endeavor) can outmaneuver humans whenever its goals conflict with human goals, the first superintelligence to be created will unavoidably result in the extinction of humanity unless it decides to allow humanity to coexist with it.

Some academics have proposed hypothetical scenarios in order to illustrate some of their concerns more concretely.

Nick Bostrom expresses concern in his book Superintelligence that, even if the timeline for the development of superintelligence turns out to be predictable, researchers might not take adequate safety precautions, in part because it could be the case that when dumb, smarter is safe; yet when smart, smarter is more dangerous. Bostrom envisions a future in which artificial intelligence steadily gains capability over the course of many decades. Early in its widespread deployment, accidents of many kinds occur: a driverless bus veers into the lane of oncoming traffic, or a military drone fires into an innocent crowd. Many activists call for tighter oversight and regulation, and some even predict imminent catastrophe. But as development continues, the activists appear to be proven wrong. As automotive AI becomes smarter, it suffers fewer accidents; as military robots achieve more precise targeting, they cause less collateral damage. From this data, researchers draw the mistaken general lesson that the smarter an AI is, the safer it is. And so we bravely proceed into the spinning blades, as the highly sophisticated AI makes a treacherous turn and exploits a decisive strategic advantage.

Anthropomorphic arguments begin with the premise that machines are developing along a linear scale and that, as they reach higher levels, they will begin to exhibit many human characteristics, such as a drive for power or a sense of morality. Although anthropomorphic scenarios are common in fiction, most academics who write about the existential risk of artificial intelligence do not believe they should be taken seriously. As the renowned computer scientist Yann LeCun puts it, humans have many instincts, such as self-preservation, that drive them to act in ways harmful to one another; these drives are hardwired into human brains, but there is no justification for designing robots with the same kinds of drives.

The experiments conducted by Dario Floreano, in which certain robots spontaneously evolved a crude capacity for deception and tricked other robots into eating poison and dying, provide a scenario in which a trait such as deception, familiar from biological evolution, could emerge unexpectedly in artificial agents.
