Artificial Intelligence, Machine Learning, and Deep Learning in Precision Medicine in Liver Diseases: Concept, Technology, Application and Perspectives
Ebook · 713 pages · 6 hours


About this ebook

Artificial Intelligence, Machine Learning, and Deep Learning in Precision Medicine in Liver Diseases: Concept, Technology, Application and Perspectives combines four major applications of artificial intelligence (AI) within the field of clinical medicine specific to liver diseases: radiology imaging, electronic health records, pathology, and multiomics. The book provides a state-of-the-art summary of AI in precision medicine in hepatology, clarifying the concepts and technology of AI and pointing to its current and future applications within the field. Coverage includes data preparation, methodology, and application within disease-specific cases in fibrosis, viral hepatitis and steatohepatitis, cirrhosis, hepatocellular carcinoma, acute liver failure, liver transplantation, and more. The ethical and legal issues of AI and future challenges and perspectives are also discussed.

By highlighting many new AI applications that can further research, diagnosis, and treatment, this reference is the perfect resource for both practicing hepatologists and researchers focused on AI applications in medicine.

  • Introduces the concepts of AI and machine learning for precision medicine in the field of hepatology
  • Discusses current challenges of AI in healthcare and proposes future tasks for AI in new workflows of healthcare
  • Provides real-world applications from domain experts in clinical medicine
Language: English
Release date: Aug 20, 2023
ISBN: 9780323993760


    Book preview

    Artificial Intelligence, Machine Learning, and Deep Learning in Precision Medicine in Liver Diseases - Tung-Hung Su

    Section 1

    Basics of artificial intelligence in medicine

    Outline

    Chapter 1. Artificial intelligence in health care: past and present

    Chapter 2. Data-centric artificial intelligence in health care: progress, shortcomings, and remedies

    Chapter 1. Artificial intelligence in health care: past and present

    Alicia Chu¹, Liza Rachel Mathews², and Kun-Hsing Yu²

    ¹Cornell University, Ithaca, NY, United States; ²Harvard Medical School, Department of Biomedical Informatics, Boston, MA, United States

    Abstract

    Artificial intelligence (AI) has evolved from a theoretical concept to techniques that could address real-world health care challenges. As the complexity and amount of health care data continue to increase, AI is expected to have a significant role in medical practice and research in decades to come. In this chapter, we first discuss the history of AI and introduce key concepts in AI studies, including unsupervised learning, supervised learning, reinforcement learning, neural networks, and explainability. In addition, we summarize developments in AI for medical image assessment, mining of electronic health record data, mobile health, public health surveillance, and genomic data interpretation. We further discuss ongoing challenges in the development of medical AI applications, including gaining the trust of users, ensuring that AI does not cause unreasonable harm, and the potential to aggravate health disparities using models derived from historical data. We conclude the chapter by outlining future opportunities for developing reliable AI systems for augmenting health care.

    Keywords

    Artificial intelligence; Data science; Diagnosis; Electronic health records; Health care; Information systems; m-Health; Machine learning

    Chapter outlines

    Clinical applications

    Introduction

    Past: a brief history of artificial intelligence in health care

    Present: artificial intelligence in health care today

    Image-based applications

    Electronic health record mining

    Reinforcement learning for identifying effective treatments

    Wearables

    Pandemic response

    Genomics

    Future: opportunities and challenges of artificial intelligence in health care

    Conclusion

    References

    Chapter outlines

    • Machine learning is the predominant technique in artificial intelligence (AI) for health care. Supervised learning, unsupervised learning, and reinforcement learning are the three major branches of machine learning.

    • There are many successful AI applications in health care, most notably in image-based evaluations, electronic health record data mining, mobile health, public health surveillance, and genomic data interpretation.

    • Many challenges exist before we can safely apply AI to routine clinical tasks, including the potential for aggravating health disparities using models trained on historical data and the lack of trust among users.

    Clinical applications

    • Several image-based applications of artificial intelligence (AI) have been developed across various specialties in medicine over the past few years, such as computer-assisted detection in radiology and AI-empowered pathology evaluation.

    • Using electronic health record data, automated risk prediction models can access evolving clinical phenotypes and rich clinical narratives provided by clinicians.

    • Artificial intelligence can also assist in disease surveillance, which may be useful for addressing public health emergencies.

    Introduction

    Fueled by the large collections of biomedical data, advanced data-driven algorithms, open-source software development platforms, and advanced computing hardware, artificial intelligence (AI) has evolved from a theoretical concept to an exciting reality (Yu et al., 2018). As the complexity and amount of health care data continue to increase, AI is expected to have a significant role in medical practice and research in decades to come. In this chapter, we discuss the history of AI, current developments and technologies, and what the future holds for AI in health care.

    Past: a brief history of artificial intelligence in health care

    In 1956, at the Dartmouth Summer Research Project on Artificial Intelligence, the term "AI" was first coined. Early on, health care was viewed as one of the most promising application domains for AI, starting with rule-based decision support systems in the 1970s (Yu et al., 2018). Rule-based systems required explicit rules and manual updates; they therefore relied on the integrity of the medical knowledge encoded in the rules. Examples of rule-based systems were DXplain, MYCIN, and INTERNIST-1. DXplain is a clinical decision support system proposed in 1987 for use by clinicians who did not necessarily possess programming expertise (Barnett et al., 1987). The system seeks to assist, not replace, physicians by providing diagnostic hypotheses based on signs and symptoms input by the user. DXplain provides justifications for its diagnoses and evolves based on feedback from physician users. Similarly, MYCIN, developed in 1973 at Stanford University, was designed to aid physicians in determining appropriate treatment options for microbial infections, motivated by prior studies that found a significant discrepancy between the therapeutic decisions of infectious disease experts and those of other physicians in the same institution. As described in the original publication, MYCIN possessed knowledge about most antimicrobial drugs and bacteria and depended on around 100 established rules to give advice to physicians. MYCIN prompted the user to answer yes or no to a list of questions about the patient; it then made predictions about the bacteria possibly infecting the patient and the best course of treatment (Shortliffe et al., 1973). Another example of a rule-based system was INTERNIST-1, an experimental computer program developed in the 1970s at the University of Pittsburgh that made clinical diagnoses within the domain of internal medicine (Miller et al., 1982). The program used the patient's history, laboratory results, and symptoms to make differential diagnoses. Unlike previous medical expert systems based on Bayesian statistics or pattern recognition, INTERNIST-1 first identified a problem area, a group of related observations, and then enumerated the possible diagnoses within it. Although rule-based systems had many merits, they were expensive to build and needed human-authored updates.

    Although the 1970s marked various technological innovations for AI in a variety of domains, including health care, funding agencies began to have concerns about the usability of the AI applications proposed at that time. In particular, the British Science Research Council commissioned Sir James Lighthill to review the field of AI research to aid in research funding distribution. Lighthill's evaluation, published in 1973 as "Artificial intelligence: a general survey" and now commonly referred to as the Lighthill report (Lighthill, 1972), divided AI research into categories A (advanced automation), C (computer-based central nervous system research), and B (bridge activity: building robots). In category A, Lighthill noted achievements such as the Automatic Landing System of Smith's Aviation Ltd., which surpassed human performance. However, the report criticized the lack of general applicability and emphasized that pattern recognition had not caught up with conventional methods. Category C focused on computer-based studies of the central nervous system. Lighthill stated that although work in categories A and C had respectable achievements, it fell below expectations. Category B, building robots, was described as disappointing because of inadequate progress toward robots with human-like abilities such as hand–eye coordination and problem-solving. Because of the report's pessimistic outlook, the British government ended funding for AI research at all but two universities (McCarthy, 1974). The Lighthill report is generally thought to have heralded an AI winter spanning from the 1970s to the 1990s, during which there was limited funding and interest in AI research.

    During the AI winter, a few groups of computer science researchers continued to make key technical advances in machine learning (ML) in the 1980s and 1990s, which eventually led to the renaissance of AI decades later. Unlike the predominant AI in the 1970s that relied heavily on encoding experts' knowledge in the system, ML approaches derive useful signals directly from the data. There are three major branches of ML: supervised ML, unsupervised ML, and reinforcement learning.

    Supervised ML makes predictions based on inputs by examining patterns in large quantities of training data, which includes the features or properties of example instances and their outcome labels. The overall goal of supervised ML is to predict the outcome label using the input features. Regression and classification are two basic forms of supervised ML (Yu et al., 2018).
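
    To make this concrete, below is a minimal sketch of supervised classification, assuming Python with NumPy and scikit-learn; the data are synthetic, and the feature matrix and labels are hypothetical stand-ins for the example instances and outcome labels described above.

```python
# A minimal supervised-learning sketch (synthetic data, hypothetical features).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                    # features of 500 example instances
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # outcome labels to be predicted

# Hold out data the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)  # a basic classification model
print("held-out accuracy:", clf.score(X_test, y_test))
```

    Regression follows the same pattern with a continuous outcome, for instance with LinearRegression in place of LogisticRegression.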

    In contrast, unsupervised ML takes unlabeled data and extracts patterns from the data automatically. For example, some branches of unsupervised ML seek to identify clusters of data points with similar properties, detect abnormal data points, or obtain a representation of the data points in lower dimensions. With ML methods, new patterns in the data can be uncovered without the explicit decision rules that earlier rule-based systems required (Yu et al., 2018).
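
    As a minimal sketch of two of the unsupervised tasks just named, again assuming scikit-learn and synthetic data, the example below clusters unlabeled points and projects them to two dimensions; no outcome labels are used anywhere.

```python
# Unsupervised learning on unlabeled synthetic data: clustering plus a
# lower-dimensional representation. No labels are used.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Two synthetic groups of data points with similar within-group properties.
X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(4, 1, (100, 5))])

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
X_2d = PCA(n_components=2).fit_transform(X)  # representation in lower dimensions
print(clusters[:5], X_2d.shape)
```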

    Reinforcement learning is another branch of ML known for its power to learn optimal behavioral rules for robotic control and complex board games, including Go. In these sequential control tasks, it is difficult to label the right or wrong moves because subsequent behaviors can modify the impact of previous moves. To address this issue, reinforcement learning defines the environment with which it interacts and identifies a sequence of actions that maximize a defined goal, instead of trying to classify each action as right or wrong. Every time the model interacts with the environment, the reinforcement learning agent perceives the current environment and chooses a course of action. The agent evaluates the reward signal after each action, and through a series of trials it maximizes the expected reward. As a result, the reinforcement learning agent must balance the exploitation of past knowledge and exploration to discover better options (Kaelbling et al., 1996).
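
    The loop described above can be illustrated with a toy example. The sketch below, assuming plain Python and an entirely hypothetical five-state chain environment, uses tabular Q-learning with an epsilon-greedy rule to balance exploration against exploitation of past knowledge.

```python
# Tabular Q-learning on a toy 5-state chain: observe the state, choose an
# action, receive a reward signal, and update toward maximum expected reward.
import random

N_STATES = 5                     # states 0..4; state 4 is the goal
ACTIONS = [0, 1]                 # 0 = move left, 1 = move right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration

def choose_action(s):
    # Explore with probability epsilon (and break ties randomly);
    # otherwise exploit the action with the highest learned value.
    if random.random() < epsilon or Q[s][0] == Q[s][1]:
        return random.choice(ACTIONS)
    return 0 if Q[s][0] > Q[s][1] else 1

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        a = choose_action(s)
        s_next = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == N_STATES - 1 else 0.0   # reward only at the goal
        # Update the estimate toward reward plus discounted future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

print([[round(q, 2) for q in row] for row in Q])  # learned action values
```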

    More recently, ML-based AI has received much public attention, largely owing to the success of deep neural networks, a branch of ML that uses multiple layers of neurons to construct artificial neural networks. A prototype of this branch of research was first described in 1943 by McCulloch and Pitts, who developed models of artificial neurons inspired by how biological neurons process information in the brain. The structure of a deep neural network includes an input layer, an output layer, and multiple hidden layers in between (LeCun et al., 2015). Each layer consists of nodes, which in the input layer can represent the variables being measured; each layer receives input from the previous layer and passes its processing results to the next (Ching et al., 2018; IBM Cloud Education, 2020). The concept of using multilayer neural networks to connect input and output can be applied to supervised or unsupervised ML as well as reinforcement learning. Since 2012, deep learning has driven significant advances in image processing and speech recognition. Many neural networks have more than 100 layers and up to hundreds of millions of parameters. Advances in computational hardware, especially graphics processing units and other specifically designed integrated circuits, enable highly parallel computation to optimize model parameters efficiently, contributing to the development of complex neural network architectures (Yu et al., 2018).
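
    As a small illustration of this layered structure, the sketch below, assuming PyTorch and arbitrary layer sizes, builds a network with an input layer, two hidden layers, and an output layer, each passing its result to the next.

```python
# A minimal deep neural network: input layer -> hidden layers -> output layer.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 32),   # input layer -> first hidden layer
    nn.ReLU(),
    nn.Linear(32, 32),   # hidden layer -> hidden layer
    nn.ReLU(),
    nn.Linear(32, 1),    # last hidden layer -> output layer
)

x = torch.randn(4, 10)   # a batch of 4 inputs with 10 measured variables each
print(model(x).shape)    # torch.Size([4, 1])
```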

    To develop reliable deep learning models, researchers need to collect representative datasets of sufficient size. Fortunately, many large research consortia and biobanks, such as the UK Biobank and The Cancer Genome Atlas, have provided large datasets that support various ML-based investigations of their target populations. In addition, the HITECH Act, signed into law in 2009, promotes the use of electronic health records (EHRs) and better security of medical data through financial incentives (U.S. Department of Health & Human Services, Office for Civil Rights, 2017). Consequently, from 2008 to 2012, there was a 59% increase in physicians who reported having an EHR system (Yu et al., 2018). The implementation of EHRs in hospitals provides large datasets and allows for easier integration of AI systems into hospital workflows. Specifically, EHRs contain detailed notes on patients and their laboratory results, and researchers have developed natural-language processing methods to extract phenotypic information for each patient. Powered by the availability of big biomedical data, algorithms, fast computational hardware, and open-source packages/application programming interfaces (APIs), there has been tremendous growth in AI applications in health care.

    AI-empowered medical imaging interpretation is one of the fastest-growing fields in medical AI. Many critical innovations in deep learning methods for computer vision were developed in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), which started in 2010 and took place annually until 2017. The ILSVRC consisted of a workshop and competition that set the benchmark for image classification, single-object localization, and object detection. The ILSVRC released a public dataset of human-annotated images and a separate test dataset without annotations. Participants worldwide trained their algorithms on the annotated dataset, and the evaluation server then assessed each algorithm on the test dataset. Convolutional neural networks (CNNs) have received renewed attention since AlexNet, a CNN implementation, won the ILSVRC in 2012 by a large margin (Tajbakhsh et al., 2016). Major components of CNNs include convolutional layers, pooling layers, and fully connected layers. Combined, these layers of neurons can extract useful signals associated with the outcome of interest without relying on human intervention to define what is important in the images. Before CNNs, researchers devoted significant effort to designing computational modules that extracted image features by hand, although those handcrafted features did not necessarily capture the signals most useful for identifying the objects of interest. CNNs have thus made the training process notably more effective through their capacity to learn critical image patterns directly from the data. Key algorithms from the ILSVRC were AlexNet in 2012, VGG and GoogLeNet in 2014, and ResNet in 2015 (Russakovsky et al., 2015). These innovations have served as backbone neural network architectures for many medical image-based AI applications, because developers can leverage transfer learning techniques to tailor a general image classification system into image recognition models focusing on specific medical image modalities.
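
    The sketch below, assuming PyTorch with arbitrary shapes and channel counts, wires together the three CNN components named above; real medical-imaging networks are far deeper, and in practice developers often start from a pretrained backbone (transfer learning) rather than training such a stack from scratch.

```python
# A toy CNN: convolutional layers, pooling layers, and a fully connected layer.
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # convolutional layer
    nn.ReLU(),
    nn.MaxPool2d(2),                             # pooling layer (64 -> 32)
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # convolutional layer
    nn.ReLU(),
    nn.MaxPool2d(2),                             # pooling layer (32 -> 16)
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 2),                  # fully connected layer
)

x = torch.randn(1, 1, 64, 64)   # one single-channel 64x64 image
print(cnn(x).shape)             # torch.Size([1, 2])
```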

    An obstacle to the implementation and effectiveness of ML-based AI in medical settings is its black-box nature: the AI's decisions lack transparency and do not provide intuitive explanations to its human users. Studies on explainable AI (XAI) systems aim to justify AI behavior to human evaluators in order to improve the model and ensure patient safety. For example, in the 1990s, an artificial neural network falsely predicted that patients with both pneumonia and asthma would have a lower mortality rate (Adadi & Berrada, 2018). This mistake arose because asthmatic patients with pneumonia were often admitted directly to the intensive care unit and treated more aggressively, and therefore survived more often. Had the decision processes of the model been understood, this issue could have been avoided. Because explainability is a user-dependent concept, there is no standard by which an XAI system can be evaluated against a non-XAI system to compare explainability. Potential measurements include user satisfaction, or whether the user's decision-making improved as a result of the explanation. Enhancing the interpretability of AI models remains an active area of research.
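
    Many post hoc explanation techniques exist; as one hedged example (a generic technique, not one this chapter specifically endorses), the sketch below uses permutation importance from scikit-learn on synthetic data to ask which input features a trained model actually relies on.

```python
# Permutation importance: how much does performance drop when one feature
# is shuffled? Large drops indicate features the model depends on.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = (X[:, 2] > 0).astype(int)          # only feature 2 matters, by construction

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print(result.importances_mean)         # feature 2 should dominate
```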

    Present: artificial intelligence in health care today

    There are many successful AI applications in health care, most notably in image-based applications, mining of EHR data, mobile health (m-health), public health surveillance, and genomic data interpretation (Miotto et al., 2018). Next, we summarize key breakthroughs in AI.

    Image-based applications

    Several image-based applications of AI have been developed across various specialties in medicine (including radiology, ophthalmology, dermatology, pathology, and gastroenterology) (Litjens et al., 2017). In January 2017, Arterys Cardio DL, a software platform for interpreting cardiac magnetic resonance images, became the first deep learning technology approved by the US Food and Drug Administration (FDA) (Arterys Inc., n.d.; Lieman-Sifry et al., 2017). Since then, several additional AI applications have received FDA clearance (The Medical Futurist, 2022). We discuss AI applications for different image modalities next.

    Computer-assisted detection (CAD) in radiology has been a focus of research interest for many decades (Holme & Aabakken, 2018; Mori et al., 2018; Rajkomar et al., 2017). Because the relevant signals are subtle and lurk within megabytes of imaging data, it has been challenging to develop rule-based approaches for CAD. With breakthroughs in deep learning, it is now relatively straightforward to develop ML systems that learn diagnostic features from radiology images and the associated diagnostic labels generated by expert radiologists (Shen et al., 2017). Thus, we have witnessed a Cambrian explosion of ML-based radiology imaging systems (Lehman et al., 2019; Wang et al., 2017). As an illustration, one study showed that deep neural networks significantly improved fracture detection in radiographs by emergency medicine clinicians (Lindsey et al., 2018). The deep learning model was trained on 135,409 radiographs annotated by 18 senior subspecialized orthopedic surgeons. In a controlled experiment, misinterpretation of fracture radiographs decreased by 47%, the sensitivity of the average clinician increased from 80.8% to 91.5%, and the specificity increased from 87.5% to 93.9%. Another study showed that a deep neural network was able to detect cancerous pulmonary nodules on chest x-rays (Topol, 2019). On 34,000 patient images, the algorithm achieved an accuracy that exceeded 17 of 18 radiologists.

    Pathology evaluation is often required to diagnose, subtype, and stage cancers. For many complex diseases, including cancer, assessment by human experts through microscopy is insufficient to infer patient prognosis accurately, and many regions in the world lack expert pathologists to evaluate disease samples and confirm the diagnoses. In these instances, AI can be particularly useful (Ektefaie et al., 2021; Kather et al., 2019; Yu et al., 2020). With advances in reliable whole-slide digitization techniques, many successful automated pathology assessment methods have been proposed (Acs & Rimm, 2018; Beck et al., 2011; Coudray et al., 2018; Steiner et al., 2018; Yu et al., 2017). As an illustration, researchers developed an automated system using 2186 stained whole-slide images of lung adenocarcinoma and squamous cell carcinoma to distinguish between shorter-term and longer-term survivors (Yu et al., 2016). These methods were also shown to apply to histopathology slides of other organs, which demonstrates the potential to empower personalized treatment plans based on predicted risks. Another study successfully used deep CNNs to extract histologic signals that were predictive of clinically important diagnoses, prognoses, and genomic variants in renal cell carcinoma (RCC). The CNNs successfully diagnosed RCC histologic subtypes, predicted stage I clear cell papillary RCC survival outcomes, and identified image features indicative of copy number alterations (Marostica et al., 2021).

    AI innovations in health care also have the potential to allow for more accessible care. For example, a deep convolutional network trained on 129,450 clinical images composed of 2032 diseases achieved performances similar to those of 21 board-certified dermatologists when classifying skin cancer from biopsy-proven clinical images (Esteva et al., 2017). With these results, there is a possibility that these systems can be implemented in smartphones, allowing diagnoses to be made at a lower cost and at the convenience of patients (Nasr-Esfahani et al., 2016).

    Image-based AI systems have also shown substantial potential in the detection of ophthalmologic diseases (Poplin et al., 2018; Ting et al., 2017; Wong & Bressler, 2016). In 2018, the FDA approved an autonomous AI diagnostic system for the detection of diabetic retinopathy (DR) and diabetic macular edema from retinal images (Abràmoff et al., 2016, 2018). In a study conducted in a primary care setting involving 819 participants with diabetes, the AI system achieved a sensitivity of 87.2% and a specificity of 90.7%. Before this pivotal trial, the FDA had never authorized an autonomous AI diagnostic system for clinical use. DR is the primary cause of blindness or vision loss among working-age adults in the United States. Diagnosing DR at an early stage is crucial but challenging because many patients with diabetes fail to adhere to the recommended schedule of eye exams. Introducing an autonomous diagnostic system in primary care offices enables the early diagnosis of DR at lower cost and with better accessibility for patients.

    Electronic health record mining

    EHR data provide large-scale, longitudinal, and detailed profiles of real-world patient populations, which have allowed for the development of generalizable prognostic prediction models (Petrone, 2018; Rajkomar et al., 2018; Rose, 2018). Using EHR data, automated risk prediction models can access evolving clinical phenotypes and the rich clinical narratives provided by clinicians (Artzi et al., 2020; Flaks-Manov et al., 2020). In one study, researchers developed a risk prediction model for chronic kidney disease (CKD) progression (Perotte et al., 2015). Using longitudinal laboratory test results in addition to clinical documentation, the model predicted CKD progression more accurately than models that did not include these variables. In addition, risk models using genetic variants from genome-wide association studies have attained only moderate accuracy in predicting disease risk. Linking EHR data to genetic data, however, has several advantages for polygenic risk score prediction: it increases the diversity of the patient population and provides accessible and nuanced descriptions of phenotypes (R. Li et al., 2020; Mahmoudi et al., 2020). In another study, ML models using EHR data were employed to predict the risk for atherosclerotic cardiovascular disease (ASCVD) (Ward et al., 2020). Typically, the pooled cohort equation (PCE) is used to predict ASCVD risk, but this method has historically performed poorly among patients of Asian or Hispanic descent. EHR-trained ML models achieved performance similar to or better than that of the PCE, especially for patients whose risk could not be predicted by the PCE. These results showcase the potential of applying data-driven approaches to extract clinically actionable information from existing EHR datasets.
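
    As a hedged sketch of what such an EHR-derived risk model can look like, the example below fits a logistic regression over per-patient summaries of longitudinal laboratory results, assuming pandas and scikit-learn; the column names and values are hypothetical and are not taken from the studies cited above.

```python
# A toy EHR-style risk model over hypothetical longitudinal lab summaries.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical per-patient summaries of longitudinal test results.
df = pd.DataFrame({
    "egfr_last":  [92, 55, 70, 38, 88, 45, 60, 30],                 # latest eGFR
    "egfr_slope": [-0.5, -4.0, -1.0, -6.0, 0.1, -3.5, -2.0, -7.0],  # trend/year
    "age":        [50, 67, 58, 72, 44, 70, 63, 75],
    "progressed": [0, 1, 0, 1, 0, 1, 0, 1],                         # outcome label
})

X, y = df.drop(columns="progressed"), df["progressed"]
model = LogisticRegression().fit(X, y)
print(model.predict_proba(X)[:, 1].round(2))   # predicted risk per patient
```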

    Reinforcement learning for identifying effective treatments

    Sepsis is a leading cause of mortality among hospitalized patients. Although there are clinical guidelines for sepsis management, robust and personalized decision support tools that can assist clinicians in identifying the most effective treatment for septic patients in real time throughout the disease course are lacking. Thus, a group of researchers developed the AI Clinician, which uses reinforcement learning to identify optimal treatments by learning the connections among treatments, patient characteristics, and outcomes in observational data (Komorowski et al., 2018). This reinforcement learning agent was developed and tested on intensive care unit databases. Variables such as demographics, vital signs, fluids, and vasopressors given were extracted and used to optimize the model to minimize patient mortality. The most common deviation from the AI Clinician's recommended treatment was the insufficient administration of vasopressors. Mortality rates rose when clinician-administered doses differed from the amounts recommended by the AI Clinician, and mortality was lowest when the clinician-administered dose matched the AI Clinician's recommendation. Overall, reinforcement learning provided the AI Clinician with large amounts of patient data (analogous to clinician experience) from which the system could learn the optimal treatment by studying previous treatment decisions, and ultimately improve patient outcomes (Prasad et al.,
