Friendly Artificial Intelligence: Fundamentals and Applications
Ebook · 108 pages · 1 hour


About this ebook

What Is Friendly Artificial Intelligence


The term "friendly artificial intelligence" refers to a hypothetical form of artificial general intelligence (AGI) that would have a beneficial (benevolent) impact on humankind, or at the very least would be aligned with human interests or contribute to the development of humankind as a species. It is part of the ethics of artificial intelligence and is closely connected to machine ethics. Whereas machine ethics is concerned with how an artificially intelligent agent ought to behave, friendly artificial intelligence research focuses on how to practically bring about this behavior and ensure it is suitably controlled.


How You Will Benefit


(I) Insights and validations about the following topics:


Chapter 1: Friendly artificial intelligence


Chapter 2: Technological singularity


Chapter 3: Artificial general intelligence


Chapter 4: Superintelligence


Chapter 5: Ethics of artificial intelligence


Chapter 6: AI capability control


Chapter 7: Machine ethics


Chapter 8: Existential risk from artificial general intelligence


Chapter 9: AI aftermath scenarios


Chapter 10: Human Compatible


(II) Answers to the public's top questions about friendly artificial intelligence.


(III) Real-world examples of the use of friendly artificial intelligence in many fields.


(IV) 17 appendices that briefly explain 266 emerging technologies in each industry, for a 360-degree understanding of friendly artificial intelligence technologies.


Who This Book Is For


Professionals, undergraduate and graduate students, enthusiasts, hobbyists, and anyone who wants to go beyond basic knowledge of friendly artificial intelligence.

Language: English
Release date: Jul 2, 2023


    Book preview

    Friendly Artificial Intelligence - Fouad Sabry

    Chapter 1: Friendly artificial intelligence

    The term friendly artificial intelligence (also friendly AI or FAI) refers to a hypothetical form of artificial general intelligence (AGI) that would have a beneficial (benevolent) impact on humanity, or at the very least be aligned with human interests or contribute to the flourishing of the human species. It is part of the ethics of artificial intelligence and is closely connected to machine ethics. Friendly artificial intelligence research focuses on how to practically bring about this behavior and ensure it is suitably constrained, in contrast to machine ethics, which is concerned with how an artificially intelligent agent ought to behave.

    Eliezer Yudkowsky is credited with coining the term, and Yudkowsky (2008) discusses in more detail how to create a friendly artificial intelligence. He maintains that friendliness, a desire not to harm human beings, should be designed in from the start, but that designers should recognize both that their own designs may be flawed and that the machine will learn and evolve over the course of its lifetime. The challenge is therefore one of mechanism design: to define a mechanism for the evolution of AI systems under a system of checks and balances, and to give the systems utility functions that will remain friendly in the face of such change.

    In this context, friendly is a technical term that picks out agents that are safe and useful, not necessarily agents that are friendly in the colloquial sense. The concept is primarily invoked in discussions of recursively self-improving artificial agents that rapidly explode in intelligence, on the grounds that this hypothetical technology would have a large, rapid, and difficult-to-control impact on human society.

    Concerns about artificial intelligence are very old. Kevin LaGrandeur has shown that the hazards specific to AI can already be seen in ancient literature about artificial humanoid servants such as the golem of Hebrew tradition or the proto-robots of Gerbert of Aurillac and Roger Bacon. In those narratives, the extreme intelligence and power of these humanoid constructs clash with their status as slaves (which, by definition, are considered below human), resulting in cataclysmic conflict.

    Writing in modern times, as the prospect of superintelligent AI looms nearer, philosopher Nick Bostrom has stated that superintelligent AI systems whose goals are not aligned with human ethics are inherently dangerous unless extreme measures are taken to ensure the safety of humanity. He put it this way:

    In general, we ought to work under the assumption that a 'superintelligence' would be capable of accomplishing whatever objectives it sets for itself. As a result, it is of the utmost significance that the objectives we bestow upon it, and the entirety of its motivational structure, be human friendly.

    In 2008, Eliezer Yudkowsky advocated the development of friendly AI as a means of reducing the existential risk posed by advanced forms of artificial intelligence. He explains: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."

    Yudkowsky advances the Coherent Extrapolated Volition (CEV) model. According to him, our coherent extrapolated volition is "our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted." The appeal to an objective through contingent human nature (perhaps expressed, for mathematical purposes, in the form of a utility function or other decision-theoretic formalism) is offered as the ultimate criterion of Friendliness, in answer to the meta-ethical problem of defining an objective morality; extrapolated volition is intended to be what humanity objectively would want, all things considered, but it can only be defined relative to the psychological and cognitive contexts in which it occurs.

    An approach to artificial intelligence safety known as scaffolding has been proposed by Steve Omohundro, in which one provably safe AI generation helps build the next provably safe generation.

    In his book Human Compatible, AI researcher Stuart Russell lists three principles to guide the development of beneficial machines:

    1. The machine's one and only goal is to maximize the extent to which human desires can be satisfied.

    2. At first, the machine is unsure about what exactly those preferences are.

    3. The behavior of humans is the most reliable source for knowledge regarding human preferences.

    The "preferences" Russell refers to are all-encompassing; "they cover everything you might care about, arbitrarily far into the future."
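    Russell's three principles can be illustrated with a minimal toy sketch: a machine that is initially uncertain which of several candidate preference models describes the human, updates its belief from observed human choices, and then acts to maximize expected preference satisfaction. This is only an illustration under simplifying assumptions (a noisily rational human, two actions, two hypotheses); all names and numbers are invented for the example, not taken from Human Compatible.

```python
import math

# Toy model of Russell's principles: the machine's only objective is to
# maximize the satisfaction of human preferences (principle 1), it starts
# uncertain about what those preferences are (principle 2), and human
# behavior is its source of information about them (principle 3).
# Hypotheses, actions, and constants below are illustrative assumptions.

ACTIONS = ["make_coffee", "make_tea"]

# Candidate preference hypotheses: the human's utility for each action.
HYPOTHESES = {
    "likes_coffee": {"make_coffee": 1.0, "make_tea": 0.0},
    "likes_tea": {"make_coffee": 0.0, "make_tea": 1.0},
}

def boltzmann_likelihood(action, utilities, beta=2.0):
    """Probability a noisily rational human picks `action` under `utilities`."""
    weights = {a: math.exp(beta * utilities[a]) for a in ACTIONS}
    return weights[action] / sum(weights.values())

def update_belief(belief, observed_action):
    """Bayesian update of the belief over preference hypotheses."""
    posterior = {
        h: belief[h] * boltzmann_likelihood(observed_action, HYPOTHESES[h])
        for h in belief
    }
    z = sum(posterior.values())
    return {h: p / z for h, p in posterior.items()}

def best_action(belief):
    """Choose the action maximizing expected human utility under the belief."""
    expected = {
        a: sum(belief[h] * HYPOTHESES[h][a] for h in belief) for a in ACTIONS
    }
    return max(expected, key=expected.get)

belief = {"likes_coffee": 0.5, "likes_tea": 0.5}  # initial uncertainty
for obs in ["make_tea", "make_tea", "make_tea"]:  # human keeps choosing tea
    belief = update_belief(belief, obs)

print(best_action(belief))  # the machine defers to the revealed preference
```

    The design point the sketch tries to capture is that the machine never fixes a final objective of its own; its behavior stays coupled to evidence about what the human actually wants.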

    James Barrat, author of Our Final Invention, suggested that a public-private partnership be created to bring AI makers together to share ideas about security, similar to the International Atomic Energy Agency but in partnership with corporations. He urges AI researchers to convene a meeting akin to the Asilomar Conference on Recombinant DNA, which addressed the dangers of biotechnology.

    The possibility of friendly artificial intelligence is called into question by those who hold that both human-level AI and superintelligence are highly improbable. Writing in The Guardian, Alan Winfield compares developing human-level artificial intelligence with developing faster-than-light travel in terms of difficulty, and argues that while we need to be cautious and prepared given the stakes involved, we do not need to obsess about the risks of superintelligence.

    {End Chapter 1}

    Chapter 2: Technological singularity

    The technological singularity, often referred to simply as the singularity, is a hypothetical point at which technological growth becomes uncontrollable and irreversible. According to the most popular version of the singularity hypothesis, called the intelligence explosion, an upgradable intelligent agent will eventually enter a runaway reaction of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an explosion in intelligence and ultimately producing a powerful superintelligence that qualitatively far surpasses all human intelligence.
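    The runaway dynamic described above can be made concrete with a small numerical sketch, assuming (purely for illustration) that each self-improvement cycle yields a gain proportional to the agent's current intelligence; the recurrence and constants below are invented for the example, not drawn from the singularity literature.

```python
# Toy illustration of the intelligence-explosion recurrence: if each
# generation's ability to improve itself grows with its current level,
# growth is faster than exponential and crosses any fixed threshold in
# a small number of cycles. All numbers here are arbitrary assumptions.

def generations(initial=1.0, gain=0.1, threshold=1e6):
    """Count self-improvement cycles until intelligence exceeds threshold."""
    level, steps = initial, 0
    while level < threshold:
        # each cycle's improvement is proportional to current intelligence
        level = level * (1.0 + gain * level)
        steps += 1
    return steps

# A modest head start in initial intelligence substantially reduces the
# number of generations needed, reflecting the accelerating character of
# the hypothesized process.
print(generations(initial=1.0), generations(initial=2.0))
```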

    John von Neumann was the first person to use the term singularity in a technological context. The effects of the singularity, and its possible benefits or harms for the human species, have been the subject of much debate.

    Four surveys of AI researchers, carried out in 2012 and 2013 by Nick Bostrom and Vincent C. Müller, estimated a median fifty-percent probability that artificial general intelligence (AGI) would be developed by 2040–2050.

    According to
