Rational Agent: Fundamentals and Applications
Ebook · 167 pages · 2 hours


About this ebook

What Is a Rational Agent


A person or thing is said to be rational, or reasonable, if it always strives to execute the best possible actions based on the premises and knowledge that it is provided with. A rational agent is any entity that can make decisions, most commonly a human but also a company, machine, or piece of software.


How You Will Benefit


(I) Insights and validations about the following topics:


Chapter 1: Rational agent


Chapter 2: Artificial intelligence


Chapter 3: Game theory


Chapter 4: Rational choice theory


Chapter 5: Bounded rationality


Chapter 6: Satisficing


Chapter 7: Software agent


Chapter 8: Intelligent agent


Chapter 9: Outline of artificial intelligence


Chapter 10: Herbert A. Simon


(II) Answers to the public's top questions about rational agents.


(III) Real-world examples of how rational agents are used in many fields.


(IV) 17 appendices that briefly explain 266 emerging technologies in each industry, giving a 360-degree understanding of technologies related to rational agents.


Who This Book Is For


Professionals, undergraduate and graduate students, enthusiasts, hobbyists, and anyone who wants to go beyond basic knowledge of rational agents of any kind.

Language: English
Release date: Jul 3, 2023

    Book preview

    Rational Agent - Fouad Sabry

    Chapter 1: Rational agent

    A rational agent or rational being is a person or thing that always strives to perform the best possible actions given the premises and knowledge available to it. A rational agent is any entity that can make choices, most often a human but also a company, computer, or piece of software.

    A number of academic fields, including artificial intelligence, cognitive science, decision theory, economics, ethics, game theory, and the study of practical reason, make use of the idea of rational agents.

    In the context of economics, the term rational agent refers to hypothetical consumers and the manner in which they make choices in a competitive market. The idea is one of the assumptions made in neoclassical economic theory, which has a long-standing tradition of marginal analysis, where the concept of economic rationality first emerged. The philosopher Jeremy Bentham's notion of the hedonistic calculus, also known as the felicific calculus, illustrates the significance of the rational agent concept to the utilitarian school of thought in philosophy.

    The course of action that a rational agent adopts depends on the following (a minimal sketch follows this list):

    the preferences of the agent;

    the knowledge the agent has about its surroundings, which may have been gained through previous experience;

    the actions, duties, and obligations available to the agent;

    the expected or actual benefits of the actions, and the likelihood of their success.
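
    The interplay of these factors can be made concrete with a short, purely illustrative sketch of expected-utility action selection; the function, the umbrella scenario, and all numbers below are hypothetical and are not taken from the book.

```python
# A minimal sketch of expected-utility action selection (illustrative only).
# Preferences are expressed as a utility function over outcomes, and the
# agent's knowledge of its surroundings as outcome probabilities per action.

def choose_action(actions, outcome_probs, utility):
    """Return the available action with the highest expected utility."""
    def expected_utility(action):
        return sum(p * utility[outcome]
                   for outcome, p in outcome_probs[action].items())
    return max(actions, key=expected_utility)

# Hypothetical example: carry an umbrella or not, given a 30% chance of rain.
actions = ["umbrella", "no_umbrella"]
outcome_probs = {
    "umbrella":    {"stay_dry": 1.0},
    "no_umbrella": {"stay_dry": 0.7, "get_wet": 0.3},
}
utility = {"stay_dry": 1.0, "get_wet": -2.0}

print(choose_action(actions, outcome_probs, utility))  # -> "umbrella"
```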

    It is a common presumption in both game theory and classical economics that the individuals, companies, and businesses involved are rational. However, the degree to which people and businesses actually behave rationally remains a contentious topic. When trying to model and predict the behavior of people and businesses, economists often rely on rational choice theory and models of bounded rationality. The traveler's dilemma is an example of a situation in which rational actors take actions that run counter to the intuitions of many observers.

    The concepts of utilitarianism and rational agency are rejected by a number of economic theories, particularly those that may be classified as heterodox.

    For instance, Thorstein Veblen, who is regarded as the father of institutional economics, rejected the concepts of hedonistic calculus and pure rationality. He is quoted as saying, "The hedonistic conception of man is that of a lightning calculator of pleasures and pains who oscillates like a homogeneous globule of desire of happiness under the impulse of stimuli that shift him about the area, but leave him intact."

    Instead, Veblen considers human economic decisions to be the product of a number of interrelated and cumulatively complex factors: "It is not simply the capacity of man to endure pleasures and pains as a result of the action of appropriate forces; rather, it is the nature of man to engage in productive activity. He is... a well-organized system of tendencies and routines that looks for opportunities to actualize and express itself in the course of an unfolding action. They are the results of his inherited characteristics and his previous experience, cumulatively hammered out under a certain body of traditions, conventionalities, and material conditions; and they provide the point of departure for the subsequent phase in the process. The individual's economic life history is a process of adaptation of means to goals that cumulatively alter as the process goes on, with both the agent and his environment being the consequence of the most recent process at any given moment in time." Evolutionary economics also offers critiques of the rational agent, one of which is referred to as the parental bent: the idea that biological impulses can and do frequently override rational, utility-based decision making. Arguments against rational agency have likewise cited the enormous influence of marketing as evidence that humans can be persuaded to make economic decisions that are irrational in nature.

    Neuroeconomics seeks a deeper understanding of decision making by combining neuroscience, social psychology, and other fields of study. In contrast to rational agent theory, it does not aim to forecast broad patterns of human behavior; rather, it focuses on how individuals make decisions in specific contexts.

    The field of artificial intelligence has borrowed the term rational agent from economics to describe self-operating programs that are able to behave in a goal-directed way. Research in artificial intelligence, game theory, and decision theory now shares a significant amount of common ground. In artificial intelligence, rational agents are closely related to intelligent agents, self-operating software programs that exhibit intelligence.
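
    As a purely illustrative sketch (the class names and the toy environment below are hypothetical, not drawn from any AI library), such a goal-directed program can be pictured as a loop that repeatedly senses its environment and selects the action that moves it toward its objective:

```python
# A minimal, hypothetical sketch of a goal-directed software agent loop.
# Environment and GoalBasedAgent are invented names for illustration only.

class Environment:
    """A toy world: the agent must move its position toward a goal value."""
    def __init__(self, position=0, goal=5):
        self.position, self.goal = position, goal

    def percept(self):
        return {"position": self.position, "goal": self.goal}

    def apply(self, action):
        self.position += {"left": -1, "right": +1, "stay": 0}[action]


class GoalBasedAgent:
    """Chooses whichever action brings the perceived state closer to the goal."""
    def act(self, percept):
        if percept["position"] < percept["goal"]:
            return "right"
        if percept["position"] > percept["goal"]:
            return "left"
        return "stay"


env, agent = Environment(), GoalBasedAgent()
for _ in range(10):                      # repeated sense-act cycle
    env.apply(agent.act(env.percept()))
print(env.position)                      # -> 5, the goal has been reached
```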

    {End Chapter 1}

    Chapter 2: Artificial intelligence

    In contrast to the natural intelligence exhibited by animals, including humans, artificial intelligence (AI) refers to the intelligence demonstrated by machines. AI research has been described as the study of intelligent agents: any system that perceives its environment and takes actions that maximize its chances of achieving its goals. The AI effect refers to the process by which activities that were formerly thought to require intelligence are dropped from the definition of artificial intelligence as technology advances. To tackle its problems, AI research has adapted and integrated a broad variety of techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, probability, and economics. Computer science, psychology, linguistics, philosophy, and many other academic disciplines all contribute to the development of AI.

    The theory that human intellect can be so accurately characterized that a computer may be constructed to imitate it was the guiding principle behind the establishment of this discipline. This sparked philosophical debates concerning the mind and the ethical implications of imbuing artificial organisms with intellect comparable to that of humans; these are topics that have been investigated by myth, literature, and philosophy ever since antiquity.

    Artificial beings endowed with intelligence appeared as storytelling devices in antiquity, and they are often seen in works of literature, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. The formal design for Turing-complete artificial neurons that McCulloch and Pitts developed in 1943 was the first piece of work that is now widely recognized as artificial intelligence. The field of AI research was founded at a workshop held in 1956, and its attendees went on to become the pioneers of AI research. They, together with their students, built programs that the press described as astonishing: machines that learned checkers strategies, solved word problems in algebra, proved logical theorems, and spoke good English.

    By the middle of the 1960s, research in the United States was receiving a significant amount of funding from the Department of Defense, and laboratories were being set up around the globe. Under continued pressure from the United States Congress to invest in more fruitful endeavors, the United States and British governments stopped funding exploratory research in artificial intelligence. The years that followed would later be referred to as an AI winter, a time when it was difficult to acquire financing for artificial intelligence projects.

    In the early 1980s, the field was revived by the commercial success of expert systems, a kind of artificial intelligence software that mimicked the knowledge and analytical prowess of human professionals. By 1985, over a billion dollars was being transacted in the artificial intelligence business. Meanwhile, Japan's fifth generation computer programme prompted the United States and the United Kingdom to reestablish support for university research. However, when the market for Lisp machines crashed in 1987, a downward spiral began: AI once again fell into disfavor, and another, longer-lasting winter started.

    Around the middle of the 1980s, Geoffrey Hinton, David Rumelhart, and others revived interest in neural networks and the concept of connectionism. During the 1980s, many soft computing tools were created, including neural networks, fuzzy systems, grey system theory, evolutionary computing, and a number of methods derived from statistical or mathematical optimization.

    Through the late 1990s and into the early 21st century, AI progressively rehabilitated its image by developing solutions tailored to particular problems. This narrow focus allowed researchers to produce verifiable results, apply a greater number of mathematical approaches, and collaborate with experts from other fields (such as statistics, economics, and mathematics). In the 1990s, the solutions produced by AI researchers were seldom referred to as artificial intelligence, but by the year 2000 they were being employed extensively all around the world. According to Jack Clark of Bloomberg, 2015 was a watershed year for artificial intelligence: the number of software projects that use AI inside Google went from sporadic use in 2012 to more than 2,700 projects in 2015.

    The overarching challenge of emulating (or fabricating) intelligence has been segmented into a variety of more specific sub-problems. These are particular characteristics or skills that researchers expect an intelligent system to possess. The characteristics detailed below have received the greatest attention.

    Researchers in the early days of computer science devised algorithms that mirrored the step-by-step reasoning people use when they solve problems or make logical inferences. By the late 1980s and early 1990s, research in artificial intelligence had established strategies for coping with uncertain or incomplete information, drawing on notions from probability and economics. Even among humans, however, the kind of step-by-step deduction that early AI research could replicate is uncommon; people solve most of their problems by making fast, intuitive judgments.
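
    The probabilistic style of reasoning mentioned above can be illustrated with a minimal sketch of a Bayesian update; the sensor scenario and all numbers below are invented for illustration.

```python
# A minimal sketch of reasoning under uncertainty using Bayes' rule.
# All probabilities are hypothetical and chosen only for illustration.

def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    evidence = numerator + p_evidence_if_false * (1 - prior)
    return numerator / evidence

# A robot's sensor reports an obstacle. The sensor fires 90% of the time when
# an obstacle is present and 20% of the time when none is (a false positive).
prior_obstacle = 0.1
belief = posterior(prior_obstacle, 0.9, 0.2)
print(round(belief, 3))  # 0.333 -- the belief rises sharply but stays uncertain
```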

    Knowledge engineering and knowledge representation are what enable artificial intelligence systems to answer questions intelligently and draw conclusions about real-world events.

    An ontology is a collection of objects, relations, concepts, and properties that are formally described so that software agents can interpret them; an ontology is a description of what exists. The most general ontologies are called upper ontologies, which attempt to provide a foundation for all other knowledge and act as mediators between domain ontologies, which cover specialized knowledge about a particular knowledge domain (field of interest or area of concern). A program that is genuinely intelligent would also need access to commonsense knowledge, the collection of facts that the typical human knows. The semantics of an ontology are usually expressed in a description logic, such as the Web Ontology Language. Other represented domains include situations, events, states, and time; causes and effects; knowledge about knowledge (what we know about what other people know); and default reasoning (things that humans assume are true until they are told differently and that will continue to hold even as other facts change). The breadth of commonsense knowledge (the number of atomic facts that the typical human knows is immense) and the sub-symbolic nature of much commonsense knowledge are two of the most challenging problems in artificial intelligence (much of what people know is not represented as facts or statements that they could express verbally).
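
    As a purely illustrative sketch (the entities, relations, and exception rule below are invented and are not drawn from the Web Ontology Language or any published upper ontology), a tiny ontology with category inheritance and an overridable default might look like this:

```python
# A minimal sketch of an ontology as explicit (subject, relation, object)
# triples, with a crude form of default reasoning. All names are hypothetical.

facts = {
    ("Bird", "is_a", "Animal"),
    ("Penguin", "is_a", "Bird"),
    ("Bird", "can", "fly"),                # a default that subcategories inherit
}
exceptions = {("Penguin", "can", "fly")}   # ...unless explicitly overridden

def is_a(entity, category):
    """Follow 'is_a' links transitively through the ontology."""
    if entity == category:
        return True
    return any(is_a(parent, category)
               for (child, rel, parent) in facts
               if rel == "is_a" and child == entity)

def can(entity, ability):
    """An entity inherits abilities from its categories unless an exception applies."""
    if (entity, "can", ability) in exceptions:
        return False
    return any(is_a(entity, subject)
               for (subject, rel, obj) in facts
               if rel == "can" and obj == ability)

print(is_a("Penguin", "Animal"))  # True  -- inferred through two 'is_a' links
print(can("Bird", "fly"))         # True  -- the default applies
print(can("Penguin", "fly"))      # False -- the default is overridden
```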
