
Neat versus Scruffy: Fundamentals and Applications
Ebook · 111 pages · 1 hour


About this ebook

What Is Neat versus Scruffy


Artificial intelligence (AI) research can take either a clean, organized ("neat") approach or a messier, ad hoc ("scruffy") one. The distinction was first drawn in the 1970s and remained a subject of debate until the mid-1980s. Throughout the 1990s and into the 21st century, AI research has relied almost entirely on "neat" methods, which have proven to be the most successful.


How You Will Benefit


(I) Insights and validations about the following topics:


Chapter 1: Neats and scruffies


Chapter 2: Artificial intelligence


Chapter 3: Symbolic artificial intelligence


Chapter 4: Artificial general intelligence


Chapter 5: History of artificial intelligence


Chapter 6: Philosophy of artificial intelligence


Chapter 7: Hubert Dreyfus's views on artificial intelligence


Chapter 8: Physical symbol system


Chapter 9: GOFAI


Chapter 10: Blocks world


(II) Answers to the public's top questions about neat versus scruffy.


(III) Real-world examples of the use of neat versus scruffy approaches in many fields.


(IV) 17 appendices that briefly explain 266 emerging technologies in each industry, giving a 360-degree understanding of technologies related to neat versus scruffy.


Who This Book Is For


Professionals, undergraduate and graduate students, enthusiasts, hobbyists, and anyone who wants to go beyond basic knowledge of the neat versus scruffy debate.

Language: English
Release date: Jul 3, 2023


    Book preview

    Neat versus Scruffy - Fouad Sabry

    Chapter 1: Neats and scruffies

    Artificial intelligence (AI) research may take either a clean, organized approach or a messy, ad hoc one. The distinction was first drawn in the 1970s and remained a topic of debate until the mid-1980s. Throughout the 1990s and into the 21st century, research in artificial intelligence has relied almost entirely on neat methods, and these methods have proven to be the most effective.

    Neat approaches use programs built on formal paradigms such as logic, mathematical optimization, or neural networks. Researchers and analysts working in the neat tradition have expressed the hope that a single formal paradigm can be extended and refined to achieve both general intelligence and superintelligence.

    Scruffy approaches obtain intelligent behavior from a wide variety of distinct algorithms and techniques, and scruffy programs may require a significant amount of manual coding or knowledge engineering. Scruffies have argued that general intelligence can only be implemented by solving a large number of essentially unrelated problems, and that there is no silver bullet that will allow programs to develop general intelligence on their own.

    The neat approach is analogous to physics, in that it relies on simple mathematical models as its foundation. The scruffy approach is more like biology, where much of the work consists of analyzing and classifying diverse phenomena.

    Roger Schank is credited with drawing the distinction between neat and scruffy in the mid-1970s. Schank used the terms to characterize the difference between his own work on natural language processing, which represented commonsense knowledge in the form of large amorphous semantic networks, and the work of John McCarthy, Allen Newell, Herbert A. Simon, Robert Kowalski, and others, which was based on logic and formal extensions of logic. SHRDLU was successful, but because it lacked a systematic architecture it could not be scaled up into a natural language processing system of any practical value; a larger version of the program proved difficult to maintain, meaning it was too scruffy to be extended.

    Other artificial intelligence labs, the largest of which were at Stanford University, Carnegie Mellon University, and the University of Edinburgh, concentrated their efforts on logic and formal problem solving as a foundation for AI. These organizations gave financial support to neats such as John McCarthy, Herbert Simon, Allen Newell, Donald Michie, and Robert Kowalski so that they could continue their research.

    The difference between MIT's methodology and that of other labs was also described as a procedural/declarative distinction. Programs such as SHRDLU were designed as agents that carried out activities; they executed procedures. Other programs were designed as inference engines that manipulated formal statements (or declarations) about the world and translated those manipulations into actions; these programs were described as declaration manipulators.
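
    To make the contrast concrete, here is a minimal, hypothetical sketch in Python; the function and fact names are invented for illustration and are not drawn from SHRDLU, Planner, or any actual system. The procedural style encodes how to act directly as executable steps, while the declarative style stores assertions about the world and leaves it to a generic inference routine to decide what follows from them.

        # Procedural style: the "how" is written directly as executable steps.
        def stack_procedurally(world, block, target):
            """Carry out the activity as a fixed sequence of operations."""
            world["holding"] = block      # pick up the block
            world["on"][block] = target   # place it on the target
            world["holding"] = None
            return world

        # Declarative style: knowledge is a set of assertions; a generic
        # inference step decides what new statements follow from them.
        facts = {("clear", "B"), ("clear", "C")}
        rules = [
            # IF block B and destination C are both clear THEN B may move onto C.
            (lambda f: ("clear", "B") in f and ("clear", "C") in f,
             ("can_move", "B", "C")),
        ]

        def infer(facts, rules):
            """One pass of forward chaining over the declarative assertions."""
            derived = set(facts)
            for condition, conclusion in rules:
                if condition(derived):
                    derived.add(conclusion)
            return derived

        print(stack_procedurally({"holding": None, "on": {}}, "B", "C"))
        print(infer(facts, rules))

    In the procedural version the knowledge lives in the code itself; in the declarative version the same knowledge is data that an inference engine can inspect, combine, and reuse.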

    Nils Nilsson addressed the subject in his 1983 presidential address to the Association for the Advancement of Artificial Intelligence, arguing that the field needed both. He wrote that much of the knowledge we want our programs to possess can be stated declaratively, and should be stated in some kind of logic-like formalism; ad hoc structures have their place, but most of them come from the domain itself. Alex P. Pentland and Martin Fischler of SRI International agreed about the anticipated role of deduction and logic-like formalisms in future AI research, but not to the extent that Nilsson had described.

    In the mid-1980s, Rodney Brooks applied the scruffy approach to robotics. He advocated building robots that were, in his words, Fast, Cheap, and Out of Control, the title of a 1989 paper he co-authored with Anita Flynn. Unlike earlier robots such as Shakey or the Stanford Cart, his robots did not build up representations of the world by analyzing visual information with algorithms drawn from mathematical machine learning techniques, nor did they plan their actions using formalizations based on logic, such as the Planner language. They simply reacted to their sensor readings in whatever way was most likely to help them survive and keep moving. Douglas Lenat's Cyc database is a scruffy project of a different kind: knowledge engineers hand-enter each of the millions of facts it stores about the world's many complexities, and every one of these entries is an ad hoc addition to the system's intelligence. There may be a neat solution to the problem of commonsense knowledge (such as machine learning algorithms combined with natural language processing that could analyze the text available on the internet), but no such project has yet succeeded.
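
    The reactive style Brooks advocated can be illustrated with a toy controller; the sensor names and thresholds below are invented for the example and do not come from any of Brooks's robots. There is no world model and no planner: each sensor reading is mapped directly to a motor command, and any interesting behavior emerges from the interaction of these simple rules with the environment.

        def reactive_step(left_distance, right_distance):
            """Map raw range-sensor readings straight to a motor command."""
            if left_distance < 0.3 and right_distance < 0.3:
                return "reverse"       # boxed in: back away
            if left_distance < 0.3:
                return "turn_right"    # obstacle on the left
            if right_distance < 0.3:
                return "turn_left"     # obstacle on the right
            return "forward"           # nothing nearby: keep moving

        # The control loop is simply sense -> react, repeated forever.
        for left, right in [(1.0, 1.0), (0.2, 0.9), (0.25, 0.1)]:
            print(reactive_step(left, right))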

    In 1986, Marvin Minsky published The Society of Mind, which advocated a view of intelligence and the mind as an interacting community of modules or agents, each handling a different aspect of cognition. Some modules were specialized for very specific tasks (for example, edge detection in the visual cortex), while others were specialized to manage communication and prioritization (for example, planning and attention in the frontal lobes). Minsky proposed this paradigm both as a model of biological human intelligence and as a blueprint for future work in artificial intelligence.

    This paradigm is deliberately scruffy: it does not expect a single algorithm to handle every task involved in intelligent behavior, but rather assumes many algorithms, each suited to a specific job (a toy sketch of this modular picture follows the quotation below). Minsky wrote:

    What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle.
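
    Minsky's modular picture can likewise be sketched as a handful of small, special-purpose agents whose suggestions are combined by a simple prioritizer. The agent names and the priority scheme below are invented for illustration and are not taken from The Society of Mind.

        def edge_detector(scene):
            return ("look_closer", 1) if "unfamiliar shape" in scene else None

        def hunger_monitor(state):
            return ("find_food", 3) if state.get("energy", 1.0) < 0.2 else None

        def obstacle_avoider(scene):
            return ("step_back", 5) if "obstacle ahead" in scene else None

        def society_decide(scene, state):
            """Collect every agent's suggestion and act on the highest priority."""
            suggestions = [s for s in (edge_detector(scene),
                                       hunger_monitor(state),
                                       obstacle_avoider(scene)) if s]
            if not suggestions:
                return "idle"
            action, _priority = max(suggestions, key=lambda s: s[1])
            return action

        print(society_decide("unfamiliar shape, obstacle ahead", {"energy": 0.1}))
        # -> "step_back": in this toy scheme the avoidance agent outranks the others.

    No single agent is intelligent on its own; whatever competence the system shows comes from the variety of agents and the way their outputs are combined.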

    As late as 1991, Minsky was still publishing papers assessing the relative merits of the neat and scruffy approaches, e.g.

    Logical Versus Analogical or Symbolic Versus Connectionist or Neat Versus Scruffy.

    In the 1990s, new statistical and mathematical approaches to artificial intelligence emerged, built on well-developed formalisms such as mathematical optimization and neural networks. Peter Norvig and Stuart Russell have described this general movement toward more formal methods as the triumph of the neats. However, the majority of these solutions have been applied to particular problems that required particular solutions, and
