APPLICATIONS OF ARTIFICIAL INTELLIGENCE

    MATTHEW N. O. SADIKU

    Regents Professor Emeritus & IEEE Life Fellow

    Department of Electrical & Computer Engineering

    Email: sadiku@ieee.org or matthew_sadiku@yahoo.com

    Web: www.matthew-sadiku.com

    SARHAN M. MUSA

    Professor

    Department of Electrical & Computer Engineering

Prairie View A&M University

    Prairie View, TX 77446

    SUDARSHAN R. NELATURY

    Associate Professor

    Department of Electrical & Computer Engineering

Penn State Erie

    Gotham Books

    30 N Gould St.

    Ste. 20820, Sheridan, WY 82801

    https://gothambooksinc.com/

    Phone: 1 (307) 464-7800

    © 2022 Matthew Sadiku. All rights reserved.

    No part of this book may be reproduced, stored in a retrieval system, or transmitted by any means without the written permission of the author.

    Published by Gotham Books (June 1, 2022)

    ISBN: 978-1-956349-78-8 (H)

    ISBN: 978-1-956349-76-4 (P)

    ISBN: 978-1-956349-77-1 (E)

    Any people depicted in stock imagery provided by iStock are models, and such images are being used for illustrative purposes only.

    Because of the dynamic nature of the Internet, any web addresses or links contained in this book may have changed since publication and may no longer be valid. The views expressed in this work are solely those of the author and do not necessarily reflect the views of the publisher, and the publisher hereby disclaims any responsibility for them.

    DEDICATION

    TO OUR CHILDREN

    Motunrayo, Ann, and Joyce

    Mahmoud, Ibrahim, and Khalid

    Charles Finney and Charles Wesley

    PREFACE

Recently, we have witnessed a wave of emerging technologies, from the Internet of things and blockchain to artificial intelligence, demonstrating significant potential to transform and disrupt multiple sectors. Artificial intelligence (AI) refers to intelligence demonstrated by machines, while natural intelligence is the intelligence displayed by humans and animals. AI is an umbrella term that John McCarthy, a computer scientist, coined in 1955 and defined as "the science and engineering of making intelligent machines." AI is the development of computer systems that are able to perform tasks that would otherwise require human intelligence. From ancient times, humans have dreamed of creating artificial intelligence. AI represents the hopes and fears of an industry seeking more intelligent solutions. It has the potential to change everything.

    Typically, AI systems demonstrate at least some of the following human behaviors: planning, learning, reasoning, problem-solving, knowledge representation, perception, speech recognition, decision-making, language translation, motion, manipulation, intelligence, and creativity. AI is an interdisciplinary and comprehensive field covering numerous areas such as computer science, psychology, linguistics, philosophy, neurosciences, cognitive science, thinking science, information science, system science, and biological science. Today, AI is integrated into our daily lives in several forms, such as personal assistants, automated mass transportation, aviation, computer gaming, facial recognition at passport control, voice recognition on virtual assistants, driverless cars, companion robots, etc.

    Although AI is a branch of computer science, there is hardly any field that is unaffected by this technology. Common areas of applications include agriculture, business, law enforcement, oil and gas, banking and finance, education, transportation, healthcare, engineering, automobiles, entertainment, manufacturing, speech and text recognition, facial analysis, telecommunications, and military.

The book is divided into nineteen chapters. Chapter 1 provides a comprehensive introduction to artificial intelligence and its components: expert systems, fuzzy logic, artificial neural networks, machine learning, deep learning, natural language processing, and robots.

Chapter 2 examines the role of AI in the different sectors of smart cities. Today, the focus is on smart cities, which aspire to connect all aspects of urban life. While several technologies are needed to realize the concept of smart cities, the two key enablers of the smart city are the Internet of things and artificial intelligence.

    Chapter 3 investigates the role of AI in the different sectors of the smart grid. The smart grid has emerged as a replacement for ill-suited traditional power grids designed as one-directional systems. It is an electrical power grid that is integrated with an AI-enabled, two-way communication network providing energy and information.

    Chapter 4 provides an overview of a broad range of applications of AI in healthcare. Healthcare is regarded as the next domain to be revolutionized by AI. In healthcare, AI can help manage and analyze data. AI can have a significant impact in making healthcare more accessible.

    Chapter 5 explores the advances of artificial intelligence applied in engineering. AI engineering is essentially the use of AI technologies in the development of AI applications. The field aims to equip practitioners to ensure human needs are translated into understandable, ethical, and trustworthy AI.

    Chapter 6 deals with various applications of AI in education. Applications of AI in education are on the rise and are receiving a lot of attention at all levels of education. The promise of AI applications lies partly in their efficiency and efficacy.

    Chapter 7 addresses various applications of AI in business. Businesses of all types and sizes are considering artificial intelligence to solve their problems. AI can increase productivity, gain competitive advantage, complement human intelligence, and reduce the cost of operations.

In Chapter 8, we cover various applications of artificial intelligence in industry. AI has impacted our lives significantly by fostering advances in many industries such as healthcare, eCommerce, the pharmaceutical industry, the energy industry, the agriculture industry, the petroleum industry, the telecommunications industry, the construction industry, online advertising, consumer electronics, education, and entertainment.

    Chapter 9 addresses the various uses of artificial intelligence in the manufacturing industry. AI in manufacturing is a game-changer. AI is already transforming manufacturing in several ways and also changing the way we design products. Manufacturing has a strong association with AI, especially the use of automation. Today industrial leaders such as Google, Microsoft, Procter & Gamble, and IBM have invested heavily in AI.

    Chapter 10 gives various applications of AI tools in agriculture. AI is transforming agriculture in many ways. Farmers are relying on AI technology in their crop production. Some companies are leveraging computer vision and deep learning algorithms to process the data captured by drones. Food producers are using AI to sort products and reduce labor.

    Chapter 11 presents applications of AI in the food and beverage industry. AI is poised to revolutionize the food industry. AI technology can be implemented in all stages of the food supply chain, resulting in an overall improvement and a significant increase in efficiency. The application of AI in the food and beverage industry is already transforming the way we think about food production, food manufacturing, food processing, food quality, food delivery, food consumption, and food storage.

    Chapter 12 deals with various uses of AI technology in autonomous vehicles. AI performs several tasks in a self-driving automobile. AI helps vehicles to navigate through the traffic and handle complex situations. AI can respond quickly to data points generated from hundreds of different sensors. Car manufacturers are using AI tools in every aspect of the car-making process.

Chapter 13 provides an introduction to artificial intelligence-based chatbots. At the heart of a chatbot lies natural language processing, a branch of AI. An AI chatbot is usually trained to operate more or less on its own. Its key advantage is being able to learn a lot about its users.

    Chapter 14 explores the impact of various artificial intelligence tools on social media companies. AI is a key component of the popular social networks we use every day. In the digital age, AI is constantly transforming social media.

Chapter 15 covers how AI can be used in games in various ways. AI tools are used in different aspects of a game to mimic, imitate, learn, forget, teach, and collaborate. AI techniques can help generate intelligent, responsive behavior that adapts to your reactions as a player. AI makes the game more interactive by enhancing the player’s experience.

In Chapter 16, various uses of humanized AI are covered. Humanized AI understands human emotions such as happiness, stress, urgency, and anger, and can detect signals such as laughter, arousal, and pain. It responds to natural language very much like a human friend. Humanizing principles can be applied to every machine that involves human-AI collaboration.

    Chapter 17 focuses on the various applications of wearable AI. AI in wearables is aimed at improving the functionalities and user experience to provide users with real-time insights. Wearable AI gadgets such as smartwatches and fitness bands can be used to monitor health-oriented vitals such as heart rate and blood pressure. These gadgets are a boon for firefighters in critical rescue operations.

Chapter 18 introduces the use of artificial intelligence in cybersecurity. AI tools have been increasingly applied to cybercrime detection and prevention. They can enable security experts to understand the cyber environment and detect abnormalities. They can also be used to help broaden the horizons of existing cyber security solutions.

Chapter 19, the last chapter, examines various applications of AI in the military and defense. The promise of AI (automation, informed decision-making, self-control, self-regulation, self-actuation of combat systems, etc.) is driving militaries around the world to accelerate research and development. The modern uses of AI in the military are not limited to the battlefield. AI can help reduce the risk of loss of life in wars. AI can also be used for training systems.

The second section of each chapter (with the exception of chapter one) provides an overview of AI. This may seem repetitious, but it is intended to make each chapter self-contained. If the repetition makes reading tedious, one might remember Zig Ziglar’s quote:

"Repetition is the mother of learning, the father of action, which makes it the architect of accomplishment."

This book is designed to help learners decode the mystery of artificial intelligence and its various applications. It provides researchers, students, and professionals with a comprehensive introduction to the benefits and challenges of each application area of AI technologies. The authors were motivated to write this book partly by the lack of a single source of reference on the various applications of AI tools. The book will therefore help a beginner gain an introductory knowledge of AI and its applications. The main objective of the authors is to provide a concise treatment that is easily digestible for each application. It is hoped that the book will be useful to practicing engineers, computer scientists, and information business managers.

    ABOUT THE AUTHORS

Matthew N. O. Sadiku received his B.Sc. degree in 1978 from Ahmadu Bello University, Zaria, Nigeria, and his M.Sc. and Ph.D. degrees from Tennessee Technological University, Cookeville, TN, in 1982 and 1984, respectively. From 1984 to 1988, he was an assistant professor at Florida Atlantic University, Boca Raton, FL, where he did graduate work in computer science. From 1988 to 2000, he was at Temple University, Philadelphia, PA, where he became a full professor. From 2000 to 2002, he was with Lucent/Avaya, Holmdel, NJ, as a system engineer and with Boeing Satellite Systems, Los Angeles, CA, as a senior scientist. He is presently a professor emeritus of electrical and computer engineering at Prairie View A&M University, Prairie View, TX.

He is the author of over 970 professional papers and over 95 books, including Elements of Electromagnetics (Oxford University Press, 7th ed., 2018), Fundamentals of Electric Circuits (McGraw-Hill, 7th ed., 2021, with C. Alexander), Computational Electromagnetics with MATLAB (CRC Press, 4th ed., 2019), Principles of Modern Communication Systems (Cambridge University Press, 2017, with S. O. Agbo), and Emerging Internet-based Technologies (CRC Press, 2019). In addition to the engineering books, he has written Christian books, including Secrets of Successful Marriages, How to Discover God’s Will for Your Life, and commentaries on all the books of the New Testament. Some of his books have been translated into French, Korean, Chinese, Italian, Portuguese, and Spanish.

He was the recipient of the 2000 McGraw-Hill/Jacob Millman Award for outstanding contributions in the field of electrical engineering. He was also the recipient of the Regents Professor award for 2012-2013 from the Texas A&M University System. He is a registered professional engineer and a fellow of the Institute of Electrical and Electronics Engineers (IEEE) for contributions to computational electromagnetics and engineering education. He was the IEEE Region 2 Student Activities Committee Chairman.

    He was an associate editor for IEEE Transactions on Education. He is also a member of the Association for Computing Machinery (ACM) and the American Society of Engineering Education (ASEE).

His current research interests are in the areas of computational electromagnetics, computer networks, and engineering education. His works can be found in his autobiography, My Life and Work (Trafford Publishing, 2017), or on his website: www.matthew-sadiku.com. He currently resides in Hockley, Texas. He can be reached via email at sadiku@ieee.org.

Sarhan M. Musa is a professor in the Electrical and Computer Engineering Department at Prairie View A&M University. He holds a Ph.D. in electrical engineering from the City University of New York. He is the founder and director of the Prairie View Networking Academy (PVNA), Texas. He is an LTD Sprint and Boeing Welliver Fellow. Professor Musa is internationally known through his research, scholarly work, and published books. He has given a number of invited talks at international conferences.

He has received a number of prestigious national and university awards and research grants. He is a senior member of the IEEE and has served on the technical program and steering committees of a number of major journals and conferences. Professor Musa has written more than a dozen books on various areas of electrical and computer engineering. His current research interests cover many topics in artificial intelligence and machine learning, data analytics, the Internet of things, wireless networks, data center protocols, and computational methods.

Sudarshan R. Nelatury is an associate professor of electrical and computer engineering at Penn State University, USA. He received his M.E. and Ph.D. degrees in electrical engineering from Osmania University (OU) in 1983 and 1996, respectively. He was with the Department of ECE at the University College of Engineering, OU, from 1983 to 1999. He was invited by Villanova University in 1999 and worked as visiting faculty until May 2003. He then moved to Penn State University in June 2003 and has been working in the electrical and computer engineering department to date. Nelatury has authored or co-authored over 150 technical articles, including 2 textbooks and 6 solutions manuals. In 2012, he was invited by the Moore School of Electrical Engineering, University of Pennsylvania (UPenn), where he spent his first sabbatical. He was the recipient of the Outstanding Research Award and the Council of Fellows Research Award in 2008 from Penn State Erie, The Behrend College, for his prolific publication record. He is a Senior Member of IEEE and a Life Member of IETE and ISTE of India.

    Chapter 1 - Introduction

    CHAPTER 1

    INTRODUCTION

"If the government regulates against use of drones or stem cells or artificial intelligence, all that means is that the work and the research leave the borders of that country and go someplace else." – Peter Diamandis

    1.1 INTRODUCTION

    Artificial intelligence refers to intelligence demonstrated by machines, while natural intelligence is the intelligence displayed by humans and animals. It is the development of computer systems that are able to perform tasks that would require human intelligence. The field of artificial intelligence (AI) aims at creating and studying software or hardware systems with a general intelligence similar to, or greater than, that of human beings. AI is now one of the most important global issues of the 21st century.

Artificial intelligence is the branch of computer science that deals with designing intelligent computer systems that mimic human intelligence. Typically, AI systems demonstrate at least some of the following human behaviors: planning, learning, reasoning, problem solving, knowledge representation, perception, speech recognition, decision-making, language translation, motion, manipulation, intelligence, and creativity. The ability of machines to process natural language, to learn, and to plan makes it possible for new tasks to be performed by intelligent systems. The main purpose of AI is to mimic the cognitive function of human beings and perform activities that would typically be performed by a human being. Without being taught by humans, machines use their own experience to solve a problem [1].

    AI is a stand-alone independent electronic entity that functions much like a human expert. Today, AI is integrated into our daily lives in several forms, such as personal assistants, automated mass transportation, aviation, computer gaming, facial recognition at passport control, voice recognition on virtual assistants, driverless cars, companion robots, etc.

This chapter provides a comprehensive introduction to artificial intelligence. It begins by providing some historical background on AI. It then presents various components of AI. It briefly covers some common applications of AI. It discusses artificial general intelligence (AGI), weak AI, and strong AI. It also addresses the benefits and challenges of AI and AGI. The last section concludes with comments.

    1.2 HISTORICAL BACKGROUND

From ancient times, humans have been dreaming of creating artificial intelligence. Artificial intelligence somewhat scares and intrigues us. Early advocates of AI envisioned machines that had a wide variety of human capabilities. Modern AI research started in the mid-1950s, when AI researchers were convinced that "machines will be capable, within twenty years, of doing any work a man can do." In the 1970s, it was obvious that researchers had grossly underestimated the difficulty of the project; within twenty years, AI researchers had been shown to be fundamentally mistaken. By the 1990s, AI researchers expected that today’s artificial intelligence would eventually evolve into artificial general intelligence (AGI) [1].

    In the modern age, important events and milestones in the evolution of AI include the following [2]:

1950: Alan Turing publishes Computing Machinery and Intelligence. In the paper, Turing—famous for breaking the Nazis’ ENIGMA code during WWII—proposes to answer the question "Can machines think?"

    1956: John McCarthy coins the term artificial intelligence at the first-ever AI conference at Dartmouth College. This is when the field of AI officially started.

1967: Frank Rosenblatt builds the Mark 1 Perceptron, the first computer based on a neural network that learned through trial and error. Just a year later, Marvin Minsky and Seymour Papert publish a book titled Perceptrons, which becomes the landmark work on both neural networks and AI.

1980s: Neural networks featuring backpropagation—algorithms for training the network—become widely used in AI applications.

1997: IBM’s Deep Blue beats then-world chess champion Garry Kasparov in a chess match.

2011: IBM’s Watson captivates the public when it defeats two former champions, Ken Jennings and Brad Rutter, on the game show Jeopardy!

    2015: Baidu’s Minwa supercomputer uses a special kind of deep neural network called a convolutional neural network to identify and categorize images with a higher rate of accuracy than the average human.

2016: DeepMind’s AlphaGo program, powered by a deep neural network, beats Lee Sedol, the world champion Go player, in a five-game match. The victory is significant given the huge number of possible moves as the game progresses.

    The latest focus on AI has given birth to natural language processing, computer vision, robotics, machine learning, deep learning, and more.

    1.3 COMPONENTS OF ARTIFICIAL INTELLIGENCE

    Today, AI is already everywhere, and it drives many aspects of our lives. AI is not a single technology but a range of computational models and algorithms. Artificial intelligence has the following main components [3,4]:

1.3.1 Expert Systems:

The expert system (ES) was the first successful implementation of artificial intelligence and may be regarded as a branch of AI mainly concerned with specialized, knowledge-intensive domains such as medicine. An expert system is computer software that simulates the judgment and behavior of a human expert. It is also known as an intelligent system or knowledge-based system. It encapsulates specialist knowledge of a particular domain of expertise and can make intelligent decisions. It has a knowledge base and a set of rules that infer new facts from the knowledge. Expert systems solve problems with an inference engine that draws from a knowledge base equipped with information about a specialized domain, mainly in the form of if-then rules. An ES is built on expert knowledge in order to emulate human expertise in a specific field. The basic concept behind an ES is that expertise (such as that of a highly skilled medical doctor or lawyer) is transferred from a human expert to a computer system. Non-expert users seeking advice in the field question the system to get expert knowledge [5].
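To make the if-then mechanism concrete, the sketch below implements a minimal forward-chaining inference engine in Python. The rules, facts, and medical flavor of the example are hypothetical illustrations, not material from this book or any particular expert system shell.

```python
# Minimal sketch of a rule-based expert system with forward chaining.
# The rules and facts are hypothetical examples for illustration only.

# Each rule: if all conditions are already known facts, conclude the consequent.
rules = [
    ({"fever", "cough"}, "possible_flu"),
    ({"possible_flu", "high_risk_patient"}, "recommend_doctor_visit"),
    ({"rash", "fever"}, "possible_measles"),
]

def infer(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # the inference engine derives a new fact
                changed = True
    return facts

# A non-expert user supplies observed facts and queries the system.
result = infer({"fever", "cough", "high_risk_patient"}, rules)
print(result)   # includes 'possible_flu' and 'recommend_doctor_visit'
```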

Expert systems are widely used in industry. From 1982 to 1990, the Japanese government heavily funded expert systems and other AI-related endeavors as part of its Fifth Generation Computer Project (FGCP). Expert systems are finding a wide range of applications due to their capability to provide solutions to a variety of real-life problems. They are widely used in healthcare, business, and manufacturing.

1.3.2 Fuzzy Logic:

Fuzzy logic makes it possible to create rules for how machines respond to inputs that account for a continuum of possible conditions rather than a straightforward binary. In binary logic, where each variable is either true or false (yes or no), the system needs absolute answers; however, these are not always available. Fuzzy logic allows variables to have a truth value anywhere between 0 and 1. It applies approximate human reasoning in knowledge-based systems. It was introduced in the 1960s by Lotfi Zadeh of the University of California, Berkeley, who is known as the father of fuzzy set theory. Fuzzy logic is useful in manufacturing processes because it can handle situations that cannot be adequately handled by traditional true/false logic [6].
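As a small illustration of truth values between 0 and 1, the sketch below defines triangular membership functions and evaluates a single fuzzy rule in Python. The temperature ranges and the cooling rule are hypothetical, chosen only to show the idea.

```python
# Minimal sketch of fuzzy membership and rule evaluation.
# The "warm"/"hot" ranges and the cooling rule are hypothetical examples.

def triangular(x, a, b, c):
    """Triangular membership function: rises from a to b, falls from b to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def warm(temp_c):
    return triangular(temp_c, 15, 25, 35)

def hot(temp_c):
    return triangular(temp_c, 25, 40, 55)

temp = 30
# Fuzzy rule: IF temperature is hot THEN fan speed is high.
# The degree of truth of the antecedent (0..1) scales the conclusion.
fan_speed = hot(temp) * 100   # percent of maximum speed
print(f"hot({temp}) = {hot(temp):.2f}, warm({temp}) = {warm(temp):.2f}, fan = {fan_speed:.0f}%")
```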

1.3.3 Neural Networks:

These are specific types of machine learning systems that consist of artificial neurons and synapses designed to imitate the structure and function of the brain. An artificial neural network (ANN) is an information processing device that is inspired by the way the brain processes information. ANNs were originally developed to mimic the learning process of the human brain. The idea of ANNs was inspired by the structure of the human brain and what the brain can do. They may be regarded as a sort of parallel processor designed to imitate the way the brain accomplishes tasks. They are made up of artificial neurons, each of which takes in multiple inputs and produces a single output. The network observes and learns as the synapses transmit data to one another, processing information as it passes through multiple layers [7]. As shown in Figure 1.1, artificial neural networks are multi-layer, fully connected neural nets.

Figure 1.1 Artificial neural networks

Different types of artificial neural networks are available, including (1) the support vector machine (SVM), (2) the self-organizing map (SOM), and (3) the multilayer perceptron (MLP). Typically, neurons are organized in layers. Because ANNs can reproduce and model nonlinear processes, they have found applications in a wide range of disciplines, including system identification and control, quantum chemistry, pattern recognition, medical diagnosis, finance, data mining, machine translation, neurology, and psychology [8].
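The sketch below shows a forward pass through a small multi-layer, fully connected network of the kind depicted in Figure 1.1, using NumPy. The layer sizes and random weights are arbitrary assumptions for illustration; a real network would learn its weights from data.

```python
# Minimal sketch of a forward pass through a fully connected neural network.
# Layer sizes and weights are arbitrary; training (e.g., backpropagation) is omitted.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# A 3-input, one-hidden-layer, 2-output network.
W1 = rng.normal(size=(3, 4))   # weights from input layer to hidden layer
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 2))   # weights from hidden layer to output layer
b2 = np.zeros(2)

def forward(x):
    """Each neuron sums its weighted inputs and applies an activation function."""
    hidden = relu(x @ W1 + b1)
    return hidden @ W2 + b2

x = np.array([0.5, -1.2, 3.0])   # one example with three input features
print(forward(x))
```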

    1.3.4 Machine Learning:

Machine learning (ML) is essentially the study of computer algorithms that improve automatically through experience. It is the field that focuses on how computers learn from data. It includes a broad range of algorithms and statistical models that make it possible for systems to find patterns, draw inferences, and learn to perform tasks without specific instructions. Machine learning is a process that involves the application of AI to perform a specific task automatically without explicitly programming it. Learning algorithms work on the assumption that strategies, algorithms, and inferences that worked well in the past are likely to work well in the future. ML techniques may yield data insights that increase production efficiency. Using ML can save time for practitioners and provide unbiased, repeatable results. Today, artificial intelligence is narrow and mainly based on machine learning.

There are two types of learning: supervised learning and unsupervised learning. Supervised learning focuses on classification and prediction. It involves building a statistical model for predicting or estimating an outcome based on one or more inputs, and it is often used to estimate risk. In supervised ML, algorithms are given training data. Learning from data is used when there is no theoretical or prior-knowledge solution, but data is available from which to construct an empirical solution. Supervised ML is increasingly being used in medicine, such as in cardiac electrophysiology. In unsupervised learning, we are interested in finding naturally occurring patterns within the data. Unlike supervised learning, there is no predicted outcome; unsupervised learning looks for internal structure in the data. Unsupervised learning algorithms are common in neural network models. Machine learning techniques are currently applied in the analysis of data in various fields, including medicine, finance, business, education, advertising, cyber security, and energy [9].
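A brief sketch contrasting the two learning styles, assuming the scikit-learn library is available; the toy data and model choices (logistic regression for supervised classification, k-means for unsupervised clustering) are illustrative assumptions, not methods prescribed by this book.

```python
# Minimal sketch contrasting supervised and unsupervised learning with scikit-learn.
# The toy dataset and model choices are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [6.0, 9.0], [1.2, 0.9], [5.5, 8.5]])
y = np.array([0, 0, 1, 1, 0, 1])   # labels are available: supervised learning

# Supervised: learn a mapping from inputs to known outcomes (classification).
clf = LogisticRegression().fit(X, y)
print("predicted class:", clf.predict([[5.8, 8.2]]))

# Unsupervised: no labels; look for naturally occurring structure in the data.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster assignments:", km.labels_)
```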

    1.3.5 Deep Learning:

This is a form of machine learning based on artificial neural networks. Deep learning (DL) architectures are able to process hierarchies of increasingly abstract features, making them especially useful for purposes such as speech and image recognition and natural language processing. Deep learning networks can deal with complex nonlinear problems. DL extracts complex features from high-dimensional data and uses them to develop a model that relates inputs to outputs. The most common form of deep learning architecture is the multi-layer neural network. Deep learning has many advantages over shallow learning; for this reason, deep learning networks have received much attention, as they can deal with more complex nonlinear problems. Recently, companies such as IBM, Microsoft, Google, Apple, and Baidu have invested in and developed deep learning, taking advantage of their massive data and large computational power to deploy it on a large scale. Although DL has achieved considerable success and found applications in various fields, it is still in its infancy [10].
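As a sketch of the multi-layer architecture described above, the snippet below defines a small deep network in PyTorch, assuming that library is installed; the layer sizes are arbitrary and training is omitted for brevity.

```python
# Minimal sketch of a deep (multi-layer) neural network in PyTorch.
# Layer sizes are arbitrary; a real model would be trained on data with an optimizer.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 64),   # input layer -> first hidden layer
    nn.ReLU(),
    nn.Linear(64, 64),   # deeper hidden layer extracts more abstract features
    nn.ReLU(),
    nn.Linear(64, 3),    # output layer, e.g., scores for 3 classes
)

x = torch.randn(8, 16)   # a batch of 8 examples with 16 features each
logits = model(x)
print(logits.shape)      # torch.Size([8, 3])
```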

1.3.6 Natural Language Processing:

For AI to be useful to us humans, it needs to be able to communicate with us in our language. Computer programs can translate or interpret language as it is spoken by ordinary people. Language is crucial around the world in communication, entertainment, media, culture, drama, movies, and the economy. Natural language processing (NLP) refers to the field of study that focuses on the interactions between human language and computers. It is a computational approach to text analysis. It involves the study of mathematical and computational modeling of various aspects of language. It is an interdisciplinary field involving computer science, linguistics, logic, and psychology.
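To give a flavor of the text analysis described here, the sketch below tokenizes a few sentences and builds a simple bag-of-words count in plain Python; the sample sentences are made up, and a production system would use a dedicated NLP library.

```python
# Minimal sketch of basic text processing: tokenization and a bag-of-words count.
# The sample sentences are hypothetical; real NLP systems use richer pipelines.
import re
from collections import Counter

documents = [
    "Natural language processing helps computers understand human language.",
    "Machine translation and question answering are common NLP applications.",
]

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

bag_of_words = Counter()
for doc in documents:
    bag_of_words.update(tokenize(doc))

print(bag_of_words.most_common(5))
```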

NLP is important because of the major role a language such as English plays in human intelligence and because of the wealth of potential applications. NLP is commonly used for text mining, machine translation, and automated question-answering. Applications of NLP include interfaces to expert systems and database query systems, machine translation, text generation,
