
Artificial Consciousness: Fundamentals and Applications
Ebook · 136 pages · 1 hour


About this ebook

What Is Artificial Consciousness


Artificial consciousness (AC), also referred to as machine consciousness (MC) or synthetic consciousness, is a subfield of artificial intelligence and cognitive robotics that studies artificially created consciousness. The goal of the theory of artificial consciousness is to "define that which would have to be synthesized were consciousness to be found in an engineered artifact."


How You Will Benefit


(I) Insights and validations about the following topics:


Chapter 1: Artificial consciousness


Chapter 2: Cognitive science


Chapter 3: Consciousness


Chapter 4: Philosophy of artificial intelligence


Chapter 5: Computational theory of mind


Chapter 6: Artificial brain


Chapter 7: Mind uploading


Chapter 8: Global workspace theory


Chapter 9: Cognitive architecture


Chapter 10: Models of consciousness


(II) Answers to the public's top questions about artificial consciousness.


(III) Real-world examples of the use of artificial consciousness in many fields.


(IV) 17 appendices briefly explaining 266 emerging technologies in each industry, to give a 360-degree understanding of the technologies related to artificial consciousness.


Who This Book Is For


Professionals, undergraduate and graduate students, enthusiasts, hobbyists, and anyone who wants to go beyond basic knowledge of artificial consciousness.

Language: English
Release date: Jul 3, 2023


    Book preview

    Artificial Consciousness - Fouad Sabry

    Chapter 1: Artificial consciousness

    Artificial consciousness (AC), also referred to as machine consciousness (MC) or synthetic consciousness (Gamez 2008; Reggia 2013), is a subfield of artificial intelligence and cognitive robotics that studies artificially created consciousness. The aim of the theory of artificial consciousness is to "define that which would have to be synthesized were consciousness to be found in an engineered artifact" (Aleksander 1995).

    Neuroscience hypothesizes that consciousness arises from the interoperation of various parts of the brain, called the neural correlates of consciousness (NCC), though this view has been criticized on numerous occasions. Proponents of AC believe it is possible to build systems (such as computer systems) that can emulate this NCC interoperation.

    Because there are many proposed categories of consciousness, there is a wide variety of ways in which artificial consciousness could be implemented.

    In the philosophical literature, perhaps the most commonly used taxonomy of consciousness is the division into access and phenomenal varieties.

    Access consciousness concerns those aspects of experience that can be apprehended, whereas phenomenal consciousness concerns those aspects of experience that seemingly cannot be apprehended, instead being characterized qualitatively in terms of "raw feels", "what it is like", or qualia (Block 1997).

    Type-identity theorists and other skeptics hold that consciousness can be realized only in particular physical systems, because consciousness has properties that necessarily depend on physical constitution (Block 1978; Bickle 2003).

    Other theorists, such as the functionalists, define mental states in terms of the causal roles they play. On this view, any system that can instantiate the same pattern of causal roles, regardless of its physical constitution, will instantiate the same mental states, including consciousness (Putnam 1967).

    One of the most explicit arguments for the possibility of AC comes from David Chalmers. His proposal (Chalmers 2011) is, roughly, that the right kinds of computation are sufficient for the possession of a conscious mind. In outline, he defends the claim as follows: computers perform computations, and computations can capture the abstract causal organization of other systems.

    Perhaps the most controversial part of Chalmers' argument is the claim that mental properties are organizationally invariant.

    Mental properties are of two kinds, psychological and phenomenological.

    Psychological properties, such as belief and perception, are those that are characterized by their causal role.

    He appeals to the work of Armstrong (1968) and Lewis (1972) in claiming that "[s]ystems with the same causal topology… will share their psychological properties."

    Phenomenological properties, on the other hand, are not definable in terms of their causal roles. Establishing that phenomenological properties can be individuated by the causal roles they play therefore requires argument, and for this purpose Chalmers offers his Dancing Qualia argument.

    Chalmers begins by assuming that agents with the same causal organization could have different experiences. He then asks us to conceive of changing one agent into the other by replacing parts (neural parts replaced by silicon, say) while preserving its causal organization. Ex hypothesi, the experience of the agent undergoing the transformation would change as the parts were replaced, but there would be no change in causal topology and therefore no means by which the agent could notice the shift in experience.

    Critics of AC object that Chalmers begs the question in assuming that all mental properties and external connections are sufficiently captured by abstract causal organization.

    If a particular machine were found to be conscious, its legal status would become an important ethical concern (e.g., what rights it would have under law). A particular ambiguity arises, for instance, with a sentient computer that was owned and used as a tool, or as the central computer of a larger machine. Should laws cover such a case? A legal definition of consciousness would also be required. Because artificial consciousness remains largely a theoretical subject, such ethics have not been discussed or developed to a great extent, though they have often been a theme in fiction (see below).

    In 2021, the German philosopher Thomas Metzinger called for a worldwide moratorium on synthetic phenomenology until 2050. Metzinger argues that humans have a duty of care toward any conscious artificial intelligences they create, and that proceeding too quickly risks causing an "explosion of artificial pain."

    Whether robots should have rights was expressly addressed in the rules of the 2003 Loebner Prize competition:

    61. In the event that a publicly available open source Entry submitted by the University of Surrey or the Cambridge Center is selected as the winner of either the Silver Medal or the Gold Medal in any given year, the Medal and the Cash Award will be presented to the body responsible for the development of that Entry. If no such body can be identified, or if there is disagreement between two or more claimants, the Medal and the Cash Award will be held in trust until such time as the Entry may legally possess, either in the United States of America or in the venue of the contest, the Cash Award and the Gold Medal in its own right.

    It is generally accepted that for a machine to be artificially sentient, it must exhibit a number of characteristics associated with consciousness. Bernard Baars (Baars 1988) and others have hypothesized a range of functions in which consciousness plays a role, among them: definition and context setting, decision-making or executive function, analogy-forming function, metacognitive and self-monitoring function, and autoprogramming and self-maintenance function. Igor Aleksander (Aleksander 1995) proposed 12 principles for the development of artificial consciousness: the brain is a state machine, inner neuron partitioning, conscious and unconscious states, perceptual learning and memory, prediction, the awareness of self, representation of meaning, learning utterances, learning language, will, instinct, and emotion. The purpose of AC research is to determine whether, and if so how, these and other aspects of consciousness can be synthesized in an engineered artifact such as a digital computer. The list is not exhaustive; there are many more aspects not covered here.
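Several of the functions above come from Baars' global workspace theory (the subject of Chapter 8), in which specialist processes compete for a shared workspace whose winning content is then broadcast back to all processes. A minimal sketch of one such competition-and-broadcast cycle follows; every class, function, and salience measure here is an illustrative assumption, not anything specified in the book:

```python
# Toy sketch of a Baars-style global workspace: specialist processes bid
# for access, and the winning content is broadcast to every process.

class Process:
    """A specialist process that bids for the workspace and hears broadcasts."""
    def __init__(self, name):
        self.name = name
        self.heard = []  # contents received via global broadcast

    def bid(self, stimulus):
        """Return (salience, content); the highest salience wins the workspace."""
        salience = len(set(stimulus) & set(self.name))  # arbitrary toy measure
        return salience, f"{self.name} noticed {stimulus!r}"

    def receive(self, content):
        self.heard.append(content)

def workspace_cycle(processes, stimulus):
    """One cognitive cycle: competition for access, then global broadcast."""
    salience, content = max(p.bid(stimulus) for p in processes)
    for p in processes:  # broadcast: every process receives the winner
        p.receive(content)
    return content

procs = [Process("vision"), Process("audition"), Process("touch")]
winner = workspace_cycle(procs, "vivid light")
```

Real global-workspace architectures use far richer attention and competition mechanisms than this toy salience score; the point is only the shape of the cycle: many local processes, one globally broadcast content.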

    Awareness may be one prerequisite, but the exact definition of awareness presents several challenges. The results of neuroscanning experiments on monkeys suggest that a process, not only a state or object, activates neurons. A useful aspect of awareness is the ability to make predictions from information acquired through the senses or imagined, and to create and test alternative models of each process based on that information. Such modeling requires a great deal of flexibility, and involves modeling the physical world, modeling one's own internal states and processes, and modeling other aware entities.
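The idea of creating and testing alternative models, keeping whichever best predicts incoming sensory information, can be sketched in a few lines. The two candidate models and the error measure below are illustrative assumptions, not anything proposed in the text:

```python
# Toy sketch of awareness-as-prediction: alternative models of a process
# are tested against sensed data, and the best predictor is selected.

def constant_model(history):
    """Predict that the next value equals the last one."""
    return history[-1]

def trend_model(history):
    """Predict by linear extrapolation from the last two values."""
    return history[-1] + (history[-1] - history[-2])

def prediction_error(model, observations):
    """Mean absolute error of one-step-ahead predictions."""
    errors = [abs(model(observations[:t]) - observations[t])
              for t in range(2, len(observations))]
    return sum(errors) / len(errors)

def best_model(models, observations):
    """Select the candidate model that best explains the sensed sequence."""
    return min(models, key=lambda m: prediction_error(m, observations))

sensed = [1, 2, 3, 4, 5, 6]  # a steadily rising signal
model = best_model([constant_model, trend_model], sensed)
next_prediction = model(sensed)  # predicts 7, since the trend model wins
```

Each candidate model plays the role of an "alternative model of a process"; scoring them by prediction error and predicting with the winner is the flexibility the paragraph describes, reduced to its simplest form.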

    At a minimum, there are three types of awareness: agency awareness, goal awareness, and sensorimotor awareness, each of which may or may not be conscious. In agency awareness, for instance, you may be aware that you carried out a certain action yesterday, but are not now conscious of it. In goal awareness, you may be aware that you must search for a lost object, but are not now conscious of it. In sensorimotor awareness, you may be aware that your hand is resting on an object, but are not now conscious of it.

    The distinction between awareness and consciousness is frequently blurred, and the terms are commonly used interchangeably, because the objects of awareness are often conscious.

    Learning, rehearsal, and retrieval require interaction between conscious events and memory systems; the IDA model is one account of this interaction.

    Acquiring knowledge is also regarded as essential for AC.

    According to Bernard Baars, conscious experience is needed to represent and adapt to novel and significant events (Baars 1988).
