Medical Image Analysis
Ebook · 1,356 pages · 11 hours

About this ebook

Medical Image Analysis presents practical knowledge on medical image computing and analysis as written by top educators and experts. This text is a modern, practical, self-contained reference that conveys a mix of fundamental methodological concepts within different medical domains. Sections cover core representations and properties of digital images and image enhancement techniques, advanced image computing methods (including segmentation, registration, motion and shape analysis), machine learning, how medical image computing (MIC) is used in clinical and medical research, and how to identify alternative strategies and employ software tools to solve typical problems in MIC.
  • An authoritative presentation of key concepts and methods from experts in the field
  • Sections clearly explaining key methodological principles within relevant medical applications
  • Self-contained chapters enable the text to be used on courses with differing structures
  • A representative selection of modern topics and techniques in medical image computing
  • Focus on medical image computing as an enabling technology to tackle unmet clinical needs
  • Presentation of traditional and machine learning approaches to medical image computing
Language: English
Release date: Sep 20, 2023
ISBN: 9780128136584

    Book preview

    Medical Image Analysis - Alejandro Frangi

    Part I: Introductory topics

    Outline

    Chapter 1. Medical imaging modalities

    Chapter 2. Mathematical preliminaries

    Chapter 3. Regression and classification

    Chapter 4. Estimation and inference

    Chapter 1: Medical imaging modalities

    Mathias Unberath (a); Andreas Maier (b)

    (a) Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States

    (b) Pattern Recognition Lab, Friedrich-Alexander-University Erlangen-Nuremberg, Erlangen, Germany

    Abstract

    This chapter introduces the most common medical imaging modalities, including X-ray imaging, computed tomography, magnetic resonance imaging, functional imaging, ultrasound, and photoacoustic imaging. In addition to describing the energy–tissue interactions that determine image formation, we present computational methods to quantify image quality. Finally, we discuss clinical use cases and explore how different imaging modalities contribute to patient care in both diagnosis and treatment.

    Keywords

    Imaging; Image quality; Medical imaging modalities; Vision; CT; MRI; US

    Learning points

    •  Common medical imaging modalities

    •  Physics of imaging and imaging contrast mechanisms

    •  Exemplar clinical workflows and associated imaging modalities

    •  Modalities of importance in diagnosis and/or treatment

    1.1 Introduction

    In medical imaging, many different physical mechanisms are used to generate views of the body. Each of these so-called modalities is created for a specific diagnostic or therapeutic purpose. As imaging is driven by its utility, the physical effect that is best suited to achieve each diagnostic or therapeutic task is selected. As a result, diagnostic and interventional imaging is continuously evolving to optimally cover all aspects relevant to clinical diagnosis and treatment workflows.

    From a high-level perspective, imaging can be summarized as follows: A pulse of energy is introduced into the system to be imaged and the response of the system is measured in a spatially and temporally resolved manner. This view of image formation reveals several key properties of medical images that determine their spatial, temporal, and functional resolution: First, the type of energy injected into the system (e.g., electromagnetic radiation) will determine the physical effects governing energy–tissue interaction, and thus the contrast mechanism. Consequently, the choice of energy source will alter the functional window into the human body and thus depend on the clinical task. Second, the temporal behavior of energy injection and interaction is highly diverse, ranging from quasi-instantaneous as for high-brilliance X-ray sources to several minutes as in radioactive decay. These properties will determine the temporal resolution of an imaging modality. Finally, the geometric arrangement of energy source and detector array will determine the imaging geometry (e.g., pinhole camera model), setting theoretical upper bounds on the ideally achievable spatial resolution.

    Contingent on the above properties, medical images of a particular modality will exhibit specific characteristics implying two conclusions: First, modality-specific image processing approaches become necessary to fully leverage a modality's potential; and second, multiple modalities are usually necessary in the clinical diagnosis and treatment workflow to optimally gain access to the required information.

    In the remainder of this chapter, we will first discuss metrics to quantify and compare image quality to determine their clinical utility. Next, we briefly introduce common medical imaging modalities and reason about their core properties. Finally, we discuss how this panel of modalities is combined to derive clinically useful diagnostic and interventional workflows that optimally benefit the patient.

    1.2 Image quality

    This section briefly introduces concepts for objective image quality assessment, which are important to quantify the performance of any imaging system and investigate how image quality changes as components of the system are modified. For a more detailed discussion of related topics we refer the reader to [1] and [2].

    1.2.1 Resolution and noise

    Several aspects during image formation affect the spatial resolution of an imaging system. The fundamental lower bound on the spatial resolution that can be achieved with a particular configuration is determined by the Nyquist–Shannon sampling theorem, stating that only signals with frequencies f ≤ f_s/2 can be expressed without loss of information, where f_s is the sampling rate, i.e., the inverse pixel size in the case of images. In real systems, the resolution is typically limited by imperfections and signal corruption during the image formation process, such as scattering or noise, rather than by the pixel size. Fig. 1.1 shows this effect using a bar pattern. As a consequence, system response functions are used to describe the resolving power of an imaging system. The most common variants are the point spread function (PSF) and the modulation transfer function (MTF). As depicted in Fig. 1.2, the PSF describes the response of an imaging modality to an infinitely small point source. Due to corruption processes during image formation, the point source will be blurred, and the degree of blurring quantifies the quality of the imaging system. While the PSF evaluates the system's response to an input that is singular in the spatial domain, the MTF assesses the response to signals in the frequency domain, i.e., sinusoidal patterns. It is computed as the ratio of output to input amplitude at each sinusoidal frequency, and hence, in an idealistic scenario, is normalized to 100%. Consequently, the ideal MTF is unity for a perfect signal transfer and will decay to zero at the Nyquist–Shannon limit. In real systems, the resolution gradually decays towards the frequency limit, as shown in Fig. 1.2. Experimentally, both the PSF and the MTF require prospective imaging of a dedicated test object, a so-called phantom, to quantify image quality. However, this approach may not always be feasible, suggesting the need for methods that rely solely on the acquired image. One such metric is the signal-to-noise ratio (SNR), which is defined as the ratio of the average signal value to its standard deviation computed within a homogeneous region of the image. Yet, the SNR does not fully characterize noise since it does not account for its texture, i.e., the spatial frequency distribution of the noise. The noise power spectrum (NPS) describes both the magnitude and the spatial frequency characteristics of image noise, and thus should be preferred when comparing and evaluating new imaging technologies. Unfortunately, its computation is not necessarily straightforward, since noiseless images usually cannot be obtained.
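    To make these definitions concrete, the following is a minimal sketch (assuming images stored as NumPy arrays) that estimates the SNR from a homogeneous region of interest and derives a 1D MTF as the normalized Fourier magnitude of a profile through the PSF; the pixel size, the Gaussian blur, and all function names are illustrative assumptions rather than part of the original text.

```python
import numpy as np

def snr(image, roi):
    """Signal-to-noise ratio: mean divided by standard deviation
    inside a homogeneous region of interest (a NumPy slice)."""
    region = image[roi]
    return region.mean() / region.std()

def mtf_from_psf(psf_1d, pixel_size):
    """MTF as the normalized magnitude of the Fourier transform of a
    1D profile through the point spread function. Returns spatial
    frequencies (cycles per unit length) and the MTF (1 at f = 0)."""
    spectrum = np.abs(np.fft.rfft(psf_1d))
    freqs = np.fft.rfftfreq(len(psf_1d), d=pixel_size)
    return freqs, spectrum / spectrum[0]

# Usage sketch: a Gaussian blur stands in for the system PSF; the
# resulting MTF decays towards the Nyquist limit 1 / (2 * pixel_size).
pixel_size = 0.1                                  # mm, assumed pixel pitch
x = np.arange(-32, 32) * pixel_size
psf = np.exp(-x**2 / (2 * 0.2**2))                # hypothetical system blur
freqs, mtf = mtf_from_psf(psf, pixel_size)
```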

    Figure 1.1 A bar pattern visualizes the loss of resolution introduced by an imaging system. The system blur reduces the contrast, which results in a loss of high spatial frequencies towards the right side of the bar pattern. Image reprinted from [3] with permission.

    Figure 1.2 An ideal point object is used to measure the point spread function (PSF) of an imaging system. Its normalized Fourier transform is called the modulation transfer function (MTF). It measures the magnitude of each individual frequency. Image reprinted from [3] with permission.

    It is worth mentioning that (1) the list of aforementioned metrics to quantify image quality and resolving power is not exhaustive and (2) all these metrics are based on linear system theory, i.e., they assume linearity and shift-invariance. In fact, for most imaging systems, these quantities are not constant across the image; a popular example is the location-dependence of the PSF in wide field-of-view microscopy due to lens aberrations far from the optical axis.

    1.2.2 Comparing image appearance

    The quantitative image quality metrics described above, such as the MTF or NPS, enable the comparison of image quality; however, they are derived from the image under evaluation alone. If a template image with ideal or desirable characteristics is available, then relational metrics can be used that compare image appearance, e.g., before and after the application of a novel image processing algorithm. One can distinguish two classes of such measures, namely mathematically defined metrics and methods that consider characteristics of the human visual system to incorporate perceptual quality measures (see Section 1.2.3). Mathematically defined metrics are attractive since they exhibit low computational complexity and are independent of viewing conditions.

    The most famous representative of mathematically defined measures is the root mean square error (RMSE), which, as the name suggests, computes the square root of the squared difference in image intensities averaged over the whole image. Since the RMSE follows a quadratic scoring rule, large errors are heavily penalized, suggesting that the RMSE is useful if large errors are particularly undesirable. If this is not the case, the mean absolute error (MAE), which describes the average absolute deviation, is more appropriate, since all individual differences receive equal weight. It is worth mentioning that both the RMSE and the MAE exhibit two fundamental limitations: First, they rely on intensity correspondence, i.e., pixel intensities across images must have the same magnitude in the ideal case, so that cross-modality comparisons are difficult. Second, the magnitude of the metric itself depends on the value range and scaling of image intensities, so that RMSE and MAE values are not easily interpretable. As an alternative, metrics that consider intensity correlations rather than deviations can be used. The normalized cross-correlation (NCC) measures the degree of linear correlation between the intensity values of two images with a dynamic range bound by [-1, 1], and is therefore invariant to intensity scaling. Yet, even if two signals are linearly related, there may be further distortions to luminance and contrast that can be assessed using the structural similarity index measure (SSIM), a perception-based technique. Image regions that are spatially close are assumed to have strong interdependencies that, if lost, will be perceived as a degradation of image quality. The SSIM is composed of three distinct terms that measure luminance, contrast, and structure. The universal quality index (UQI), SSIM's well-known predecessor, is a special case of SSIM that gives rise to unstable results if the intensity means or standard deviations are close to zero.
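    As a sketch of the mathematically defined measures (assuming two co-registered images of identical shape stored as NumPy arrays), the following implements RMSE, MAE, and NCC directly from their definitions; for the SSIM, a ready-made implementation such as structural_similarity in scikit-image can be used instead of re-deriving the three terms.

```python
import numpy as np

def rmse(a, b):
    """Root mean square error: penalizes large deviations quadratically."""
    return np.sqrt(np.mean((a - b) ** 2))

def mae(a, b):
    """Mean absolute error: all deviations receive equal weight."""
    return np.mean(np.abs(a - b))

def ncc(a, b):
    """Normalized cross-correlation in [-1, 1]:
    invariant to linear intensity scaling."""
    a0 = a - a.mean()
    b0 = b - b.mean()
    return np.sum(a0 * b0) / np.sqrt(np.sum(a0**2) * np.sum(b0**2))
```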

    1.2.3 Task-based assessment

    Evaluating and comparing image quality using the above metrics yields quantitative results that are meaningful overall; however, there are cases where straightforward computation-based analysis does not accurately reflect the clinical utility. A prominent example that is fiercely discussed is regularization, as it is frequently used in image reconstruction problems in the presence of insufficient measurements. In such cases, it is possible to obtain strikingly high SNR values with close to perfect SSIM, correlation coefficient, and RMSE to an image computed from sufficient samples. Unfortunately, these metrics fail to reflect that in most diagnostic settings overall image quality is secondary to detectability of pathology. Visual cues hinting at pathology, however, are usually small relative to the image content and as a consequence do not affect the aforementioned image quality metrics, setting the stage for task-based assessment of image quality, an active area of research on its own.

    The traditional approach to task-based assessment considers reader studies. To this end, medical experts are recruited and presented with large numbers of images that have been processed with the method to be evaluated. In forced choice experiments, these experts must then rate images that may contain a pathologic cue. Every reader performing this task will exhibit a distinct true positive rate (TPR) and false positive rate (FPR), which largely depend on the reader's estimate of disease prevalence from their training and their aggressiveness in decision making. The TPR/FPR pair of a reader corresponds to a single operating point in receiver operating characteristic (ROC) space, suggesting that many readers must be recruited to populate the ROC diagram for medical imaging system evaluation and finally to draw representative conclusions on the system's performance. It is evident that this endeavor is time consuming and costly, and consequently, model observers (MOs) appear as a promising surrogate for human readers in medical image quality assessment [4]. MOs are designed to detect a signal against a noisy background, rendering the problem as a two-exclusive-hypotheses test (signal present/absent). Many formulations for MOs exist, ranging from tasks where the signal is known exactly to tasks where the signal is only known statistically. Since the decision of MOs for either hypothesis depends on an adjustable threshold, evaluating the TPR and FPR for an MO at different values of this threshold populates the ROC space, and thus mimics human readers with different decision making characteristics.
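    The following sketch illustrates how sweeping the decision threshold of a simple signal-known-exactly model observer (a template/matched-filter test statistic on synthetic 1D data) populates the ROC space; the signal shape, noise model, and all names are illustrative assumptions, not the formulation used in [4].

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_pix = 2000, 64
signal = np.zeros(n_pix)
signal[28:36] = 1.0                    # hypothetical low-contrast signal

# Synthetic signal-absent and signal-present realizations with white noise.
absent = rng.normal(0.0, 2.0, size=(n_trials, n_pix))
present = absent + signal

# Signal-known-exactly template observer: test statistic = template . image.
t_absent = absent @ signal
t_present = present @ signal

# Sweeping the decision threshold traces out (FPR, TPR) operating points.
thresholds = np.linspace(t_absent.min(), t_present.max(), 100)
fpr = [(t_absent >= t).mean() for t in thresholds]
tpr = [(t_present >= t).mean() for t in thresholds]
```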

    1.3 Modalities and contrast mechanisms

    This section provides an overview of typical imaging modalities and their respective contrast mechanisms. Understanding the contrast mechanism of any imaging modality offers insights into what anatomical structures can be imaged with high quality, which determines the clinical use cases for the modality. For each category, we present one table that captures the cornerstones of each modality. For a more extensive description, we refer the reader to [3].

    1.3.1 X-ray transmission imaging

    After the discovery of X-rays, transmission imaging was immediately found to be of diagnostic value. Today's most relevant X-ray-based modalities are summarized in Table 1.1. All of them use high-energy X-rays that are able to penetrate soft and hard tissues in the body. Emitted from an X-ray source, the photons pass through the body and are collected on the other side by a detector. Contrast is generated from the energy loss between source and detector: the more energy is lost, the higher the contrast. A typical application of digital radiography is chest X-ray imaging, as shown in Table 1.1. Here, we can clearly see the contrast difference between the lungs and the other soft tissues in the chest. Today, digital radiography is a standard modality used to diagnose a wide range of conditions, from lung diseases to orthopedic problems.
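    The attenuation underlying this contrast can be summarized by the Beer–Lambert law: a (monochromatic) beam of incident intensity I0 traversing material with linear attenuation coefficient mu over thickness d arrives at the detector with intensity I = I0 · exp(-mu · d). A minimal sketch with illustrative, assumed attenuation values:

```python
import numpy as np

def transmitted_intensity(i0, mu, thickness):
    """Beer-Lambert attenuation of a monochromatic beam: I = I0 * exp(-mu * d)."""
    return i0 * np.exp(-mu * thickness)

# Illustrative, order-of-magnitude attenuation coefficients (assumed values):
mu_soft_tissue = 0.02   # 1/mm
mu_bone = 0.06          # 1/mm
i0 = 1.0

# Bone attenuates more strongly than soft tissue over the same path length,
# which is what produces the contrast seen in a chest radiograph.
i_soft = transmitted_intensity(i0, mu_soft_tissue, 20.0)   # 2 cm path
i_bone = transmitted_intensity(i0, mu_bone, 20.0)
```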

    Table 1.1

    Images from [3] and by courtesy of Siemens Healthcare AG.

    Mammography is a modality that is specialized to the female breast. It is used in particular for breast cancer screening and diagnosis, as it is cost-effective and widely available. The most common technique is projection imaging, which allows the detection of masses and dense tissues, which have been shown to be risk factors for the development of breast cancer. Like the other X-ray-based modalities, mammographic imaging also allows the reconstruction of volumetric data from a series of projections. As the rotational range is limited, only incomplete data are sampled and used to reconstruct slice images, which is commonly referred to as tomosynthesis. Given the same X-ray dose, tomosynthesis was shown to be superior for the diagnosis of precursors of cancer. Yet, breast projection imaging is still predominantly used, as the image data can be read much faster by experts than the volumetric data. Note that this is a significant cost factor in this high-throughput screening task.

    C-arm systems also employ X-rays, but they are built with the aim of providing real-time guidance during minimally invasive treatments. In such a treatment, a thin, flexible tube referred to as a catheter is inserted into the body through a blood vessel that is connected to the area of interest. Using real-time X-ray guidance, the catheter is navigated to the target, where the actual treatment is performed. Applications of C-arm angiography systems predominantly aim at treatment of the heart and the brain. To improve the visibility of vessels, an iodine-based contrast agent is used so that the path towards the structure of interest can be visualized. C-arm systems are typically also equipped with a 3D option that allows slice images to be computed from a rotational scan. In contrast to tomosynthesis, almost complete data can be acquired, which allows the reconstruction of high-resolution, high-contrast images.

    Computed tomography (CT) was a game changer when it was introduced as a medical product in 1974, as it allowed slice images of patients to be computed virtually. Skulls could thus be virtually opened and their inner workings explored without actually having to perform surgery on the patient. As such, the technology was quickly adopted in brain imaging to detect malformations and tumors at early stages. Since then, CT has been continuously developed further, and it is a routine diagnostic tool in today's clinical practice. With fast gantry rotation speeds of up to 4 Hz, the beating heart can be imaged with a temporal resolution close to 75 ms. Full-body scans can be performed in about 20 seconds. As such, CT is routinely used for emergency diagnostics.

    The last X-ray-based modality that we present here is X-ray phase contrast imaging. It is still at an early stage of development and is not routinely used in clinical practice. However, being able to measure not just X-ray absorption, but also X-ray phase and micro-angle scattering in the so-called darkfield signal adds further diagnostic value to any X-ray-based imaging modality. As demonstrated in Table 1.1, complicated measurement setups have to be designed in order to use this technology with clinically available X-ray sources and detectors. With these additional efforts, one is able to obtain this complementary information for each projection image. As the darkfield signal depends on the orientation of microscopic fibers, 3D darkfield imaging is able to reconstruct their orientation for each element of a volume.

    1.3.2 Molecular imaging

    Molecular imaging makes use of radioactive tracers to explore the patient's metabolism. The tracers typically employ molecules that are known to be relevant for specific functional processes in the body. A well-known and commonly employed tracer is fluorodeoxyglucose (FDG), a sugar labeled with an ¹⁸F fluorine atom. As ¹⁸F is a radioactive isotope, it undergoes radioactive decay inside the patient's body. The goal of molecular imaging, or – as it is also called – emission tomography, is to reconstruct the distribution of the tracer inside the patient's body, as it indicates where the respective functional response occurs. For FDG, as in our example, the tracer is indicative of sugar consumption, which is often related to inflammation and immune response. Hence, an FDG image can be interpreted as a map of bodily activity.

    In emission tomography, we distinguish two fundamental concepts. Positron emission tomography (PET) uses tracers that predominantly emit positrons during decay. Once such a positron hits an electron, both annihilate and produce two γ quanta that are emitted in exactly opposite directions. A PET detector uses coincidence electronics to identify events that emerge from the same decay. Doing so, a set of rays can be determined and the final image is generated using an appropriate reconstruction algorithm.

    In contrast to PET, single-photon emission computed tomography (SPECT) is based on γ decay. A single photon is directly produced by the radioactive isotope. In order to detect the direction of origin at the detector, large metal collimators are used to focus the acceptance angle of each pixel towards a certain orientation. Thus, the shape of the collimator lamellae determines the projection geometry and resolution. As demonstrated in Table 1.2, iterative, regularized reconstruction, which can incorporate prior knowledge about the scan, is very effective for emission tomography and can increase the maximal resolution of the system by almost a factor of 2.
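    To illustrate iterative reconstruction for emission tomography, the following is a minimal sketch of the classical (here unregularized) MLEM update on a toy, randomly generated system matrix; the geometry and data are purely synthetic assumptions, and practical systems add regularization and detailed physical modeling on top of this update rule.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward model: A maps tracer activity (n_voxels) to expected counts (n_bins).
n_bins, n_voxels = 120, 40
A = rng.uniform(0.0, 1.0, size=(n_bins, n_voxels))
x_true = rng.uniform(0.5, 2.0, size=n_voxels)     # synthetic activity distribution
y = rng.poisson(A @ x_true)                        # noisy measured counts

# MLEM update: x <- x / (A^T 1) * A^T (y / (A x))
sensitivity = A.T @ np.ones(n_bins)
x = np.ones(n_voxels)                              # nonnegative initialization
for _ in range(50):
    expected = A @ x
    x *= (A.T @ (y / np.maximum(expected, 1e-12))) / sensitivity
```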

    Table 1.2

    Images from [3].

    For molecular imaging, typically very low amounts of radioactivity are introduced into the patient's body. As a result, activity maps are extremely sensitive to very low concentrations of tracers. However, there is also a price to pay for this high sensitivity: Typically emission tomography images are of much lower resolution than, e.g., X-ray-based images. As also noted in Table 1.2, most emission tomography systems that are sold today actually combine functional imaging with an additional structural modality such as CT or magnetic resonance imaging (MRI).

    1.3.3 Optical imaging

    All optical modalities that we discuss in this section operate either within the visible spectrum or in an infrared range close to the visible spectrum. Table 1.3 presents an overview of these modalities.

    Table 1.3

    Images from [3] and by courtesy of Talking Eyes GmbH.

    One of the earliest approaches to explore the inner workings of human beings is endoscopy. An optical system and a light source are used to transfer an image from within the body to the operator of the endoscope. Today, the optical systems are typically digital and can be operated in real-time. Applications are broad and range from diagnosis to interventions. The endoscope optic is either rigid, as shown in Table 1.3, or flexible. Rigid endoscopes are often used in interventions in which the optic is inserted into the patient through a minimal incision that allows just enough space for the optics plus additional tools used during the intervention. Due to these restrictions, this type of intervention is often also referred to as keyhole surgery. Flexible endoscopes are able to follow the human anatomy, allow insertion through natural orifices, and are often used in gastro- or colonoscopy.

    Microscopes allow the optical magnification of small objects using visible light. They use a system of lenses to create high magnification factors so that microscopic structures are visible at the eyepiece of the microscope. As such, they are often used for biological analysis of cells, e.g., in Petri dishes. Clinically, the analysis of live cells is not a standard procedure. However, microscopes play an important role in the analysis of tissues that are resected during surgery. After resection, the tissue is stained using particular dyes that indicate diagnostically relevant cellular structures in a certain color. Next, the specimen is fixed in a substrate that allows the material to be cut into slices a few micrometers in thickness. These slices are then collected and transferred onto slides that are investigated using the microscope. In Table 1.3, we see a tissue sample stained with hematoxylin and eosin. This supports analysis of the image, e.g., as cell nuclei are shown in blue. This histological analysis gives important cues on the state of the tissue. For example, it allows the aggressiveness of a cancer to be assessed by counting the number of mitotic cells in the slide image, as the number of dividing cells at a fixed time is related to the actual tumor growth.

    Fundus imaging is a technique that explores the background of the eye – the retina. In principle, a normal camera plus an appropriate lens and light source is sufficient to acquire such images. Retinal imaging allows direct visualization of vessels inside the human body without any incision. Obvious applications are the analysis of eye diseases such as glaucoma. Furthermore, fundus imaging also allows assessment of vessel density and quality. For example, patients with a long history of high blood pressure typically exhibit higher curvature and tortuosity of their retinal vessels.

    Optical coherence tomography (OCT) is a method to perform 3D imaging of the retina. In contrast to fundus imaging, not only a superficial image is scanned, but a full depth profile is acquired at each position of the retina. As such, it allows additional diagnostic analysis; in particular, the thickness of different retinal layers was found to be an important factor that is often related to characteristic diseases. If multiple scans are performed at each location, the analysis of the noise structure allows the visualization of vessels inside the retina without the use of contrast agent. This OCT angiography (OCTA) is an emerging technology that is speculated to have a high diagnostic value for several eye diseases. Table 1.3 shows a typical OCT scanner and one slice through the retina in which the different retinal layers are visible.
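    A minimal sketch of the idea behind OCTA is given below: repeated, co-registered B-scans of the same location are compared, and positions containing flowing blood show higher temporal variation of the speckle pattern than static tissue. The simple inter-scan variance used here is only illustrative; actual OCTA pipelines use more elaborate decorrelation or amplitude/phase-variance measures.

```python
import numpy as np

def speckle_variance_angiogram(repeated_bscans):
    """repeated_bscans: array of shape (n_repeats, depth, width) holding
    co-registered B-scans of the same retinal location.
    Static tissue is nearly identical across repeats, whereas flow
    decorrelates the speckle, so the temporal variance highlights vessels."""
    return repeated_bscans.var(axis=0)
```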

    1.3.4 Large wavelengths and transversal waves

    In this section, we present modalities that employ long wavelengths and transversal waves. One of the most important medical modalities is MRI. In MRI, a strong external magnetic field is used to align dipoles within the human body. One molecule of diagnostic importance is H2O, i.e., water, whose hydrogen nuclei can be used for this so-called magnetic resonance effect. After alignment in the direction of the strong external magnetic field, the dipoles can be excited using radio frequencies between 12 and 300 MHz, depending on the strength of the external magnetic field. After excitation, these dipoles return to their equilibrium state and, while doing so, emit a radio frequency pulse matching the one at which they were excited. This resonance effect allows the density of the affected dipoles in the volume under consideration to be determined. Using additional encoding steps, one is even able to resolve this density spatially, which forms images such as the one depicted in Table 1.4. The magnetic resonance effect depends on a multitude of tissue characteristics. As a result, many different contrasts can be obtained to encode various effects, from temperature to blood oxygenation and even Brownian motion and blood flow.
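    The quoted excitation frequencies follow from the Larmor relation f = (γ/2π) · B0; for hydrogen, γ/2π ≈ 42.58 MHz/T, so field strengths from roughly 0.3 T to 7 T correspond to about 12 to 300 MHz. A small sketch of this relation (the field strengths listed are illustrative examples):

```python
GAMMA_BAR_H = 42.58  # MHz/T, gyromagnetic ratio of hydrogen divided by 2*pi

def larmor_frequency_mhz(b0_tesla):
    """Resonance (excitation) frequency of hydrogen at field strength B0."""
    return GAMMA_BAR_H * b0_tesla

for b0 in (0.3, 1.5, 3.0, 7.0):
    print(f"B0 = {b0:.1f} T -> f = {larmor_frequency_mhz(b0):.1f} MHz")
```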

    Table 1.4

    Images from [3,5]. Muyinatu Bell supplied the photograph of the photoacoustic imaging system used in her lab at Johns Hopkins University (https://pulselab.jhu.edu/).

    Ultrasound imaging uses a transceiver to emit pressure waves into bodily tissues and to measure their reflections. At tissue boundaries these pressure waves are partially reflected. Measuring these reflections allows the reconstruction of a depth profile along the direction of pressure wave propagation. If several such directions are scanned in a linear manner, a slice image can be formed. This type of slice imaging is probably one of the most well-known imaging modalities and is used in many pregnancy screening examinations. While 2D slice images are difficult to decipher for non-expert users, 3D scan probes have also emerged, as shown in Table 1.4. These 3D scans are popular with soon-to-be parents, who get a first glimpse of their unborn child. Today, ultrasound is extremely accessible, as there are also hand-held devices available with a total weight of less than 10 kg. Such devices can be used at sites of emergency and even inside ambulance cars.
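    The depth profile follows from the echo time-of-flight: assuming an average speed of sound of about 1540 m/s in soft tissue, a reflector at depth d returns an echo after t = 2d/c. A minimal sketch (the example echo time is an assumed value):

```python
SPEED_OF_SOUND = 1540.0  # m/s, assumed average speed of sound in soft tissue

def echo_depth_mm(time_of_flight_us):
    """Depth of a reflector from the round-trip echo time (in microseconds)."""
    return SPEED_OF_SOUND * (time_of_flight_us * 1e-6) / 2.0 * 1e3

# An echo arriving 65 microseconds after the pulse corresponds to ~5 cm depth.
depth = echo_depth_mm(65.0)
```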

    Photoacoustic imaging relies on the same transceiver technology as ultrasound imaging for measuring pressure waves; however, the signal is not generated by emitting ultrasonic waves but by stimulating tissue with a high-energy-density pulsed light source, i.e., a laser. Non-ionizing laser pulses are delivered to the region of interest, where some of the energy is absorbed and converted to heat, leading to thermoelastic expansion and the emission of ultrasound. The magnitude of the photoacoustic signal depends on the optical absorption in the target tissue. As a consequence, photoacoustic imaging is primarily used for the visualization of blood vessels, since the absorption of hemoglobin is several orders of magnitude higher than that of the surrounding tissue. However, exogenous contrast agents with well-known absorption spectra can be engineered to target specific anatomy. Photoacoustic imaging is usually integrated with ultrasound imaging platforms since both modalities share the detection mechanism, giving rise to hybrid imaging systems.

    1.3.5 A historical perspective on medical imaging

    The modalities that we described in this section have been developed within a time frame of millennia. Most relevant developments, however, happened in the time frame of the past 200 years. We can see an accelerating trend in terms of development over the last 50 years with relevant modalities still emerging after the year 2000.

    Probably the oldest imaging modality presented here is endoscopy. Instruments that appear to have been used for this purpose were already found in the ruins of Pompeii. Still, the first use of the term endoscope can only be dated to 1853, when Desormeaux developed a device for the inspection of the urinary tract.

    Magnifying lenses have been in use since the 14th century. The term microscope and the first device were introduced by Galilei in 1609. Yet, it was still a rather simple design that was improved numerous times to form the microscopes we know today.

    Shortly after the invention of photography in 1839, Helmholtz was the first to introduce this technology for the inspection of the eye in 1851, coining the term ophthalmoscope. As early as 1861, Maxwell introduced a color version of this technology.

    The development of X-ray transmission-based modalities began with Röntgen's discovery on November 8, 1895. Within a time frame of only roughly six weeks, Röntgen submitted a first paper on his observations on December 28, 1895 and shared his discovery with the rest of the world. As the new method was not protected by patents, companies quickly began to develop new products after the publication. In particular, three companies, located in Hamburg, Erlangen, and Chicago, were among the early adopters. Today, after several mergers and acquisitions, these companies still exist as parts of Philips Healthcare, Siemens Healthineers, and General Electric Healthcare.

    X-ray movies, or fluoroscopy, were conceived right after the discovery of X-rays. Unfortunately, the idea took much longer to mature, as the rapid succession of X-ray images implied a high dose burden for the patient. It therefore took until the invention of the image intensifier in the 1940s/1950s to develop a useful medical-grade system. The first C-arm system that actually bore this name was created by Hugo Rost and colleagues in 1954 and forms the basis of today's C-arm angiography systems.
