Existential Risk from Artificial General Intelligence: Fundamentals and Applications
By Fouad Sabry
About this ebook
What Is Existential Risk from Artificial General Intelligence
Existential risk from artificial general intelligence refers to the idea that significant advancements in artificial general intelligence (AGI) could one day lead to human extinction or to some other irreversible global catastrophe.
How You Will Benefit
(I) Insights and validations about the following topics:
Chapter 1: Existential risk from artificial general intelligence
Chapter 2: Artificial general intelligence
Chapter 3: Superintelligence
Chapter 4: Technological singularity
Chapter 5: AI takeover
Chapter 6: Machine Intelligence Research Institute
Chapter 7: Nick Bostrom
Chapter 8: Friendly artificial intelligence
Chapter 9: AI capability control
Chapter 10: Superintelligence: Paths, Dangers, Strategies
(II) Answers to the public's top questions about existential risk from artificial general intelligence.
(III) Real-world examples of how existential risk from artificial general intelligence is discussed in many fields.
(IV) 17 appendices explaining, briefly, 266 emerging technologies in each industry, to give a 360-degree understanding of the technologies related to existential risk from artificial general intelligence.
Who This Book Is For
Professionals, undergraduate and graduate students, enthusiasts, hobbyists, and anyone who wants to go beyond basic knowledge of existential risk from artificial general intelligence.
Book preview
Existential Risk from Artificial General Intelligence - Fouad Sabry
Chapter 1: Existential risk from artificial general intelligence
Existential risk from artificial general intelligence refers to the idea that significant advancements in artificial general intelligence (AGI) might one day lead to human extinction or to some other irreversible global catastrophe.
In his 1863 article "Darwin among the Machines," the writer Samuel Butler became one of the first authors to voice serious concern that highly evolved machines might pose existential hazards to the human race. He stated:
The conclusion is only a question of time, but that there will come a time when the machines hold real supremacy over the world and its inhabitants is something no one of a truly philosophical mind can question for a moment.
In his 1951 paper "Intelligent Machinery, A Heretical Theory," computer scientist Alan Turing suggested that artificial general intelligences would likely take control of the world once they became more intelligent than human beings:
Let us now assume, for the sake of argument, that [intelligent] machines are a genuine possibility, and look at the consequences of constructing them... There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler's Erewhon.
Finally, in 1965, I. J. Good originated the concept that would later become known as an "intelligence explosion." He also noted that the hazards were poorly understood at the time:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man, however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously.
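Good's argument can be made concrete with a toy growth model. This sketch is purely illustrative and not from the text: the growth rule, the numbers, and the function name are all assumptions chosen only to show the shape of the argument, namely that if a machine's skill at designing its successor scales with its current capability, each generation improves faster than the last.

```python
# Toy model of Good's "intelligence explosion" argument (illustrative only;
# the growth rule and all constants are arbitrary assumptions).

def capability_trajectory(initial=1.0, human_level=1.0, steps=10):
    """Return machine capability over successive design generations."""
    caps = [initial]
    for _ in range(steps):
        current = caps[-1]
        # Assumption: design skill scales with capability, so the
        # per-generation improvement factor grows as the machine
        # gets smarter.
        improvement = 1.0 + 0.1 * (current / human_level)
        caps.append(current * improvement)
    return caps

traj = capability_trajectory()
# The growth factor itself increases each generation, so capability
# outpaces any fixed exponential -- the "explosion" in the argument.
ratios = [b / a for a, b in zip(traj, traj[1:])]
assert all(r2 > r1 for r1, r2 in zip(ratios, ratios[1:]))
```

Under these assumptions the trajectory accelerates without bound; a different assumption (for example, diminishing returns on design effort) would instead level off, which is one way the argument is contested.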
Scholars such as Marvin Minsky have occasionally made similar statements.
The standard AI textbook for undergraduate students is Artificial Intelligence: A Modern Approach.
A system's implementation may contain flaws that are not immediately noticeable but have catastrophic effects. Consider space probes: even though engineers know that defects in costly space probes are difficult to fix after launch, they have historically been unable to prevent fatal problems from occurring.
No matter how much effort is spent on pre-deployment design, a system's specification almost always produces unintended behavior the first time the system encounters a new situation. For instance, Microsoft's Tay behaved inoffensively during pre-deployment testing but was far too easy to provoke into offensive behavior once it began interacting with real users.
The following is an excerpt from the 2015 Open Letter on Artificial Intelligence, which refers to significant progress in the field of AI and to AI's potential for enormous long-term benefits or costs:
The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. Such considerations motivated the AAAI 2008-09 Presidential Panel on Long-Term AI Futures and other projects on AI impacts, and constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.
This letter was signed by a number of prominent AI researchers in academia and industry, including AAAI president Thomas Dietterich, Eric Horvitz, Bart Selman, Francesca Rossi, Yann LeCun, and the founders of Google DeepMind and Vicarious.
To humans, a superintelligent machine would be as alien as human thought processes are to cockroaches. Such a machine might not have the best interests of humanity at heart; indeed, it is not clear that it would care about human welfare at all. If superintelligent AI is possible, and if it is possible for a superintelligence's goals to conflict with fundamental human values, then AI poses a risk of human extinction. Because a superintelligence (a system that exceeds the capabilities of humans in every relevant endeavor) can outmaneuver humans whenever its goals conflict with human goals, the first superintelligence to be created will result in human extinction unless it decides to allow humanity to coexist with it.
Some academics have proposed hypothetical scenarios to illustrate their concerns more concretely.
In his book Superintelligence, Nick Bostrom expresses concern that even if the timeline for the development of superintelligence turns out to be predictable, researchers might not take adequate safety precautions, in part because "[it] could be the case that when dumb, smarter is safer; yet when smart, smarter is more dangerous."
Bostrom envisions a scenario in which artificial intelligence steadily gains capability over the course of several decades. Early in the period of widespread deployment, accidents of various kinds occur: a driverless bus swerves into oncoming traffic, or a military drone fires into an innocent crowd. Many activists call for tighter oversight and regulation, and some even predict an imminent catastrophe. But as development continues, the activists are proven wrong: as automotive AI grows smarter it suffers fewer accidents, and as military robots achieve more precise targeting they cause less collateral damage. From this data, researchers draw a mistaken general lesson: the smarter the AI, the safer it is. "And so we boldly go, into the whirling knives," as the sophisticated AI makes a treacherous turn and exploits a decisive strategic advantage.
Anthropomorphic arguments assume that machines are evolving along a linear scale and that, as they reach higher levels, they will begin to display many human characteristics, such as a drive for power or a sense of morality. Although anthropomorphic scenarios are common in fiction, most scholars writing on the existential risk of artificial intelligence do not believe they should be taken seriously. As the renowned computer scientist Yann LeCun puts it, humans have many instincts, such as self-preservation, that drive them to act in ways harmful to one another; these urges are hardwired into human brains, but there is no reason to design robots with the same kinds of drives.
The experiments conducted by Dario Floreano, in which certain robots spontaneously evolved a crude capacity for "deception" and tricked other robots into eating "poison" and dying, provide a scenario in which a trait, "deception,"