General Game Playing: Fundamentals and Applications
Ebook · 121 pages · 1 hour


About this ebook

What Is General Game Playing


General game playing, sometimes known as GGP, refers to the development of artificial intelligence programs that can play more than one game competently. Computers are typically programmed to play a single game, such as chess, using an algorithm built specifically for that game and unusable in any other setting: a program designed to play chess cannot also play checkers. General game playing is seen as a necessary milestone on the road to artificial general intelligence.


How You Will Benefit


(I) Insights and validations about the following topics:


Chapter 1: General game playing


Chapter 2: Artificial intelligence


Chapter 3: Machine learning


Chapter 4: Game Description Language


Chapter 5: List of programming languages for artificial intelligence


Chapter 6: Monte Carlo tree search


Chapter 7: Deep reinforcement learning


Chapter 8: Artificial intelligence in video games


Chapter 9: Machine learning in video games


Chapter 10: Google DeepMind


(II) Answers to the public's top questions about general game playing.


(III) Real-world examples of the use of general game playing in many fields.


(IV) 17 appendices explaining, briefly, 266 emerging technologies in each industry, for a 360-degree understanding of general game playing's technologies.


Who This Book Is For


Professionals, undergraduate and graduate students, enthusiasts, hobbyists, and anyone who wants to go beyond basic knowledge of general game playing.

Language: English
Release date: Jul 4, 2023

    Book preview

    General Game Playing - Fouad Sabry

    Chapter 1: General game playing

    The goal of general game playing (GGP) is to create AI systems that are capable of playing and winning at a variety of games.

    Barney Pell coined the term Meta-Game Playing and created the MetaGame framework in 1992. This was one of the earliest examples of a program using automated game generation, and the first to do so specifically for chess-like games. Pell then built the Metagamer system. In 2007, Axiom, an additional metagame engine with a complete Forth-based programming language, was added to the package.

    The z-Tree software tool was created by Urs Fischbacher in 1998.

    The Stanford Logic Group at Stanford University in California runs a project called General Game Playing, whose goal is to develop a platform for playing a wide variety of games. It is by far the best-known effort at GGP standardization and is generally accepted as the standard for the field. Games are defined as sets of rules written in the Game Description Language. A game hosting server checks players' moves for legality and keeps the players informed of changes to the game state so that play can continue.
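
    To make that hosting protocol concrete, the following Python sketch shows a host-side match loop that validates moves and advances the shared state. The Game and player interfaces here are assumptions introduced for illustration, standing in for rules parsed from a Game Description Language file; they are not the API of any actual GGP server.

```python
# Illustrative sketch only: a GGP-style match loop in which the host
# validates each player's move and broadcasts the updated game state.
# The Game interface below is a placeholder for rules parsed from a
# GDL description; none of these names come from a real GGP server.
import random


class Game:
    """Abstract game, as a set of GDL rules would define it."""
    def initial_state(self): ...
    def roles(self): ...
    def legal_moves(self, state, role): ...
    def next_state(self, state, joint_move): ...
    def is_terminal(self, state): ...
    def goal(self, state, role): ...


class RandomPlayer:
    """Baseline player: any legal move will do (a real player would search)."""
    def select_move(self, state, legal_moves):
        return random.choice(legal_moves)


def run_match(game, players):
    """Run one match; players maps each role to a player object."""
    state = game.initial_state()
    while not game.is_terminal(state):
        joint_move = {}
        for role, player in players.items():
            legal = game.legal_moves(state, role)
            move = player.select_move(state, legal)
            if move not in legal:          # the host rejects illegal moves
                move = random.choice(legal)
            joint_move[role] = move
        state = game.next_state(state, joint_move)  # new state sent to all players
    return {role: game.goal(state, role) for role in players}
```

    A real GGP entrant would replace RandomPlayer with a search procedure such as the Monte Carlo methods covered later in the book.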

    The AAAI Conference has featured annual General Game Playing competitions since 2005. Competing AIs have their gaming skills evaluated based on their records in a number of different games. Competitors in the first round are scored on their ability to make legal moves, gain the upper hand, and finish games as quickly as possible. The AIs then compete against one another in a runoff round featuring increasingly difficult games. Up until 2013 the competition was won by the AI with the most game victories; its creator was awarded $10,000.

    Other general game systems exist, each with its own language for defining the rules of play. Examples of general-purpose game playing software include:

    General video game playing (GVGP) can be used to test game environments, including those created automatically through procedural content generation, and to find loopholes in the gameplay that a human player could exploit; it has also already been used to create real video game AI automatically.

    The GVGAI competition is an annual competition for artificial intelligence in video games. In place of the board games used in GGP, it uses two-dimensional video games reminiscent of (and sometimes based on) arcade and console games from the 1980s. The competition has allowed researchers and practitioners to test their best general video game playing algorithms and see how they stack up against one another. The games in the competition's associated software framework are written in the Video Game Description Language (VGDL), not to be confused with the Game Description Language (GDL); VGDL is a coding language with simple semantics and commands that can be parsed easily. PyVGDL, created in 2013, is one such VGDL implementation.

    Because GGP AI must be able to play a variety of games, it cannot rely on algorithms built for any specific game; it must be designed around game-agnostic algorithms instead. The AI must also be a dynamic system that adapts to the current game state rather than relying solely on the results of past play. Because of this, open loop methods frequently prove to be the most efficient.
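
    As a sketch of what a game-agnostic player can look like, the following Python function chooses a move by flat Monte Carlo rollouts, a simpler stand-in for the open loop search methods mentioned above. It touches the game only through a generic forward model (roles, legal moves, next state, terminal test, goals); the interface names repeat the assumptions of the earlier sketch and are illustrative rather than a standard GGP API.

```python
# Illustrative, game-agnostic move selection by flat Monte Carlo rollouts.
# Only the generic forward model is used, never game-specific heuristics,
# which is the constraint that general game playing places on its players.
import random


def monte_carlo_move(game, state, role, rollouts_per_move=50, max_depth=200):
    """Pick the legal move with the best average random-rollout score."""

    def rollout(s):
        # Play uniformly random legal moves for every role until the game ends.
        for _ in range(max_depth):
            if game.is_terminal(s):
                break
            joint = {r: random.choice(game.legal_moves(s, r)) for r in game.roles()}
            s = game.next_state(s, joint)
        return game.goal(s, role)

    best_move, best_value = None, float("-inf")
    for move in game.legal_moves(state, role):
        total = 0.0
        for _ in range(rollouts_per_move):
            # Fix our own move, randomize the other roles, then roll out to the end.
            joint = {r: move if r == role else random.choice(game.legal_moves(state, r))
                     for r in game.roles()}
            total += rollout(game.next_state(state, joint))
        value = total / rollouts_per_move
        if value > best_value:
            best_move, best_value = move, value
    return best_move
```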

    Algorithms that play arbitrary games must work on the assumption that all games share common characteristics. In his book Half-Real, Jesper Juul describes games as a bridge between the real and the fictional: the player is invested in the outcome, the player's actions influence that outcome, the player has some control over the game's consequences, and the game is based on a set of rules.

    {End Chapter 1}

    Chapter 2: Artificial intelligence

    In contrast to the natural intelligence displayed by animals, including humans, artificial intelligence (AI) refers to the intelligence demonstrated by machines. AI research has been described as the study of intelligent agents, meaning any system that perceives its environment and takes actions that maximize its chance of achieving its goals. The term AI effect describes the process by which tasks once thought to require intelligence are dropped from the definition of artificial intelligence as technology advances. To tackle such problems, AI researchers have adapted and integrated a broad variety of techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, probability, and economics. Computer science, psychology, linguistics, philosophy, and many other academic disciplines all contribute to the development of AI.

    The field was founded on the assumption that human intelligence can be described so precisely that a machine can be built to simulate it. This sparked philosophical debates about the mind and about the ethics of creating artificial beings endowed with human-like intelligence, questions that have been explored in myth, literature, and philosophy since antiquity.

    Since antiquity, intelligent artificial beings have appeared as storytelling devices, and they are common in works of fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R.

    The formal design for Turing-complete artificial neurons that McCulloch and Pitts developed in 1943 was the first work now generally recognized as artificial intelligence.

    Attendees of the 1956 Dartmouth workshop, where the field was founded, went on to become the pioneers of AI research.

    They and their students built programs that the press described as astonishing: machines that learned checkers strategies, solved word problems in algebra, proved logical theorems, and spoke English.

    By the middle of the 1960s, research in the United States was heavily funded by the Department of Defense, and laboratories had been set up around the world.

    Later, in the face of continuing pressure from the United States Congress to invest in more productive projects, the United States and British governments stopped funding exploratory research in artificial intelligence. The following few years would later be called an AI winter, a period in which it was difficult to obtain funding for artificial intelligence projects.

    In the early 1980s, AI research was revived by the commercial success of expert systems, a kind of artificial intelligence software that mimicked the knowledge and analytical skills of human experts.

    By 1985, the artificial intelligence market had grown to over a billion dollars.

    At the same time, Japan's fifth generation computer project prompted the United States and the United Kingdom to restore funding for academic research.

    However, the collapse of the Lisp machine market in 1987 marked the beginning of a downward spiral. AI once again fell into disfavor, and another, longer-lasting winter began.

    Geoffrey Hinton, David Rumelhart, and others are credited with reviving interest in neural networks and connectionism around the middle of the 1980s. During the 1980s, many soft computing tools were developed, including neural networks, fuzzy systems, grey system theory, and evolutionary computation, as well as a number of methods derived from statistical or mathematical optimization.

    Through the late 1990s and into the early 21st century, AI gradually restored its reputation by developing solutions tailored to particular problems. The narrow focus allowed researchers to produce verifiable results, draw on a greater number of mathematical approaches, and collaborate with experts from other fields (such as statistics, economics, and mathematics). In the 1990s, the solutions produced by AI researchers were never referred to as "artificial intelligence".
