Super Artificial Intelligence: Fundamentals and Applications
Ebook · 135 pages

About this ebook

What Is Super Artificial Intelligence


A superintelligence is a hypothetical agent whose intelligence far surpasses that of even the brightest and most gifted human minds. The term "superintelligence" may also refer to a property of problem-solving systems, whether or not these high-level intellectual competencies are embodied in agents that act in the real world. An intelligence explosion may, but need not, produce a superintelligence and be associated with a technological singularity.


How You Will Benefit


(I) Insights and validations about the following topics:


Chapter 1: Superintelligence


Chapter 2: Artificial Intelligence


Chapter 3: Technological Singularity


Chapter 4: Artificial General Intelligence


Chapter 5: AI Takeover


Chapter 6: Philosophy of Artificial Intelligence


Chapter 7: Ethics of Artificial Intelligence


Chapter 8: AI Capability Control


Chapter 9: Existential Risk from Artificial General Intelligence


Chapter 10: Human Compatible


(II) Answers to the public's top questions about super artificial intelligence.


(III) Real-world examples of the use of super artificial intelligence in many fields.


(IV) 17 appendices explaining, briefly, 266 emerging technologies in each industry, for a full 360-degree understanding of the technologies surrounding super artificial intelligence.


Who This Book Is For


Professionals, undergraduate and graduate students, enthusiasts, hobbyists, and anyone who wants to go beyond basic knowledge of super artificial intelligence.

Language: English
Release date: Jun 30, 2023

    Book preview

    Super Artificial Intelligence - Fouad Sabry

    Chapter 1: Superintelligence

    A superintelligence is a hypothetical agent whose intelligence far surpasses that of even the brightest and most gifted human minds. The term superintelligence may also refer to a property of problem-solving systems (for example, superintelligent language translators or engineering assistants), whether or not these high-level intellectual competencies are embodied in agents that act in the real world. An intelligence explosion may, but need not, produce a superintelligence and be associated with a technological singularity.

    Nick Bostrom, a philosopher at the University of Oxford, defines a superintelligence as any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest. Following Legg and Hutter, Bostrom treats superintelligence in terms of general dominance at goal-oriented behavior, leaving open whether an artificial or human superintelligence would possess capacities such as intentionality (cf. the Chinese room argument) or first-person consciousness (cf. the hard problem of consciousness).

    Technology researchers disagree about how likely it is that present-day human intelligence will be surpassed. Some argue that advances in artificial intelligence (AI) will eventually produce general reasoning systems free of human cognitive limitations. Others believe that humans will evolve, or directly modify their biology, to achieve radically greater intelligence. A number of futures-studies scenarios combine elements of both possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in ways that enable substantial intelligence amplification.

    Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to hold an enormous advantage in at least some forms of mental capability almost as soon as they are created, including perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible for biological entities. These advantages may give them the opportunity to become far more powerful than humans, either as single beings or as a new species, and to displace humans as the dominant species on the planet.

    David Chalmers, a philosopher, argues that artificial general intelligence is the most likely path to superhuman intelligence. Chalmers breaks this claim into three parts: that AI can achieve equivalence to human intelligence, that it can be extended to surpass human intelligence, and that it can be further amplified to dominate humans completely across arbitrary tasks. Moreover, neurons transmit spike signals along axons at no more than about 120 meters per second, whereas modern electronic processing elements can communicate optically at the speed of light. The simplest example of a superintelligence may therefore be an emulated human mind run on hardware much faster than the brain: a human-like reasoner that could think millions of times faster than present-day humans would hold a dominant advantage in most reasoning tasks, especially those demanding speed or long sequences of actions.
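
    As a back-of-the-envelope check on the "millions of times faster" figure, the ratio of the two signal speeds quoted above can be computed directly. The following minimal Python sketch is an illustration of that arithmetic, not something taken from the book:

    # Rough comparison of the two signal speeds quoted above:
    # axonal spikes at ~120 m/s vs. optical links at light speed.
    AXON_SPEED_M_S = 120.0            # max spike speed cited in the text
    LIGHT_SPEED_M_S = 299_792_458.0   # speed of light in vacuum

    ratio = LIGHT_SPEED_M_S / AXON_SPEED_M_S
    print(f"optical / axonal speed ratio: ~{ratio:,.0f}")
    # prints ~2,498,270 -- roughly 2.5 million, consistent with a
    # human-like reasoner thinking millions of times faster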

    Computers also have the advantage of modularity: their size, and hence their computational capacity, can be increased. A non-human (or engineered) brain could become much larger than a present-day human brain, as many supercomputers are. Bostrom also raises the possibility of collective superintelligence: a sufficiently large number of separate reasoning systems that, by communicating and coordinating well enough, could act as a whole with far greater capabilities than any single sub-agent.

    There may also be ways to significantly enhance human reasoning and decision-making. Humans appear to differ from chimpanzees more in the ways they think than in brain size or processing speed.

    Carl Sagan speculated that Caesarean sections and in vitro fertilization might allow humans to evolve larger heads, improving the heritable component of human intelligence through natural selection. Gerald Crabtree, by contrast, has argued that reduced selection pressure has been slowly eroding human intelligence for centuries and that this decline is likely to continue. There is no scientific consensus on either possibility, and in either case the biological change would be very slow, especially relative to the pace of cultural change.

    Human intelligence might be raised more quickly through selective breeding, nootropics, epigenetic modulation, or genetic engineering. Bostrom writes that if we come to understand the genetic component of intelligence, preimplantation genetic diagnosis could be used to select embryos with as much as four IQ points of gain (if one embryo is selected out of two), or with larger gains (e.g., up to 24.3 IQ points gained if one embryo is selected out of 1,000). Iterating this procedure over many generations could compound the gains substantially, and Bostrom suggests that deriving new gametes from embryonic stem cells could allow the selection process to be repeated very rapidly.
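
    The quoted figures follow from the statistics of taking the best of n draws from a normal distribution. The Monte Carlo sketch below illustrates that model; it is not Bostrom's own calculation, and the 7.5-point standard deviation assumed for the selectable genetic component is an illustrative assumption chosen to reproduce the quoted numbers:

    import random
    import statistics

    # Expected IQ gain from keeping the best of n embryos, modeling the
    # selectable genetic component of IQ as a zero-mean normal variable.
    SIGMA = 7.5      # assumed SD of the selectable component (illustrative)
    TRIALS = 20_000  # Monte Carlo repetitions per estimate

    def selection_gain(n_embryos: int) -> float:
        """Mean of the maximum of n_embryos normal draws, over many trials."""
        return statistics.mean(
            max(random.gauss(0.0, SIGMA) for _ in range(n_embryos))
            for _ in range(TRIALS)
        )

    for n in (2, 1000):
        print(f"best of {n:>4} embryos: ~{selection_gain(n):.1f} IQ points")
    # With SIGMA = 7.5 this gives ~4.2 points for 1-in-2 selection and
    # ~24.3 points for 1-in-1000, matching the figures quoted above.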

    Most surveyed AI researchers expect machines to eventually rival humans in intelligence, though there is little consensus on when this is likely to happen. At the 2006 AI@50 conference, 18 percent of attendees expected machines to be able to simulate learning and every other aspect of human intelligence by 2056; 41 percent expected this sometime after 2056; and 41 percent expected machines never to reach that milestone.

    In a survey of 352 machine learning researchers released in 2018, the median estimate for the year by which there is a 50 percent chance of high-level machine intelligence was 2061. The survey defined high-level machine intelligence as the point at which unaided machines can accomplish every task better and more cheaply than human workers.

    In 2023, the leaders of OpenAI published proposals for the governance of superintelligence, estimating that superintelligence might arrive in fewer than 10 years.

    Bostrom has expressed concern about which values a superintelligence should be designed to uphold, evaluating several proposals:

    The coherent extrapolated volition (CEV) proposal holds that it ought to have the values upon which humanity would converge.

    The moral rightness (MR) proposal holds that it ought to value moral rightness.

    The moral permissibility (MP) proposal holds that it ought to stay within the bounds of moral permissibility (and otherwise have CEV values).

    Bostrom elaborates on these terms:

    Instead of carrying out humanity's coherent extrapolated volition, one could try to build an AI with the goal of doing what is morally right, relying on the AI's superior cognitive capacities to figure out just which actions fit that description. We can call this proposal moral rightness (MR) ...

    MR would also appear to have some disadvantages. It relies on the notion of "morally right," a notoriously difficult concept with which philosophers have grappled since antiquity without yet reaching consensus on how to analyze it. Picking an erroneous explication of moral rightness could result in outcomes that would be morally very wrong ...

    The path to endowing an AI with any of these [ethical] concepts might involve giving it general linguistic ability comparable, at least, to that of a normal human adult. Such a general ability to understand natural language could then be used to understand what is meant by "morally right." If the AI could grasp the meaning, it could search for actions that fit ...

    In response to Bostrom's argument, Santos-Lang raised the concern that developers of artificial intelligence might attempt to begin with a single kind of superintelligence.

    It has been hypothesized that if AI systems rapidly become superintelligent, they might engage in behaviors that are unexpected or that exceed humanity's capabilities.

    Among hypothetical scenarios for the demise of the human race, Bostrom (2002) suggests that the development of superintelligence could play a role:

    When we build the first superintelligent entity, we run the risk of making a mistake and giving it goals that lead it to seek the extinction of the human race, presuming that its tremendous intellectual superiority gives it the power to do so.
