Microscope Image Processing
Ebook, 1,028 pages, 10 hours

About this ebook

Microscope Image Processing, Second Edition, introduces the fundamentals of image formation in microscopy, including the importance of image digitization and display, which are key to quality visualization. Image processing and analysis are discussed in detail to give readers the tools they need to improve the visual quality of images and to extract quantitative information. Basic techniques, including image enhancement, filtering, segmentation, object measurement, and pattern recognition, cover concepts integral to image processing. In addition, chapters on specific modern microscopy techniques, such as fluorescence imaging, multispectral imaging, three-dimensional imaging, and time-lapse imaging, introduce these key areas with emphasis on the differences among the various techniques.

The new edition discusses recent developments in microscopy such as light sheet microscopy, digital microscopy, whole slide imaging, and the use of deep learning techniques for image segmentation and analysis, along with big data image informatics and management.

Microscope Image Processing, Second Edition, is suitable for engineers, scientists, clinicians, postgraduate fellows, and graduate students in bioengineering, biomedical engineering, biology, medicine, chemistry, pharmacology, and related fields who use microscopes in their work and want to understand the methodologies and capabilities of the latest digital image processing techniques, or who wish to develop their own image processing algorithms and software for specific applications.
  • Presents a unique practical perspective of state-of-the-art microscope image processing and the development of specialized algorithms
  • Each chapter includes in-depth analysis of methods coupled with the results of specific real-world experiments
  • Co-edited by Kenneth R. Castleman, world-renowned pioneer in digital image processing and author of two seminal textbooks on the subject
Language: English
Release date: Aug 26, 2022
ISBN: 9780128210505

    Microscope Image Processing - Fatima Merchant

    Chapter One: Introduction

    Kenneth R. Castleman; Fatima A. Merchant

    Abstract

    Over the past 400 years, the optical microscope has seen increasing use in biomedical research, clinical medicine, and many other fields. Digital image processing, now an integral part of microscopy, can be used to extract quantitative information about a specimen, and it can transform an image to make it much more informative than it would otherwise be.

    This book describes methods, techniques, and algorithms that have proven useful in the processing and analysis of digital microscope images, and it illustrates their application with specific examples. It will serve as a reference for users of digital microscopy, including scientists, engineers, clinicians, and graduate students in biology, medicine, chemistry, pharmacology, and related disciplines. It will be particularly useful for those who use microscopes and commercial image processing software in their work. In addition to conventional bright-field microscopy, this book also discusses processing techniques that are applicable to confocal, fluorescence, structured illumination, and three-dimensional microscopy.

The microscope forms an optical image that represents the specimen. This optical image can be digitized to produce a digital image, which can be displayed or interpolated to form a continuous image. This book discusses image processing algorithms in terms of the four image types: the optical image, the digital image, the continuous image, and the displayed image, each of which represents the specimen.

    Keywords:

    Microscope; Digital image; Digital microscopy; Image processing; Optics; Confocal microscope; Fluorescence microscopy; Resolution

    1.1: The Microscope and Image Processing

Invented over 400 years ago, the optical microscope has seen steady improvement and increasing use in biomedical research and clinical medicine, as well as in many other fields [1,2]. Today many variations of the basic instrument are used with great success, allowing us to peer into spaces far too small to be seen with the unaided eye. More often than not, the images produced by a microscope are now converted into digital form for storage, analysis, or processing prior to display and interpretation [3–5]. Digital image processing greatly enhances the extraction of information about the specimen from a microscope image. For that reason, digital imaging is steadily becoming an integral part of microscopy. Digital processing can be used to extract quantitative information about the specimen from a microscope image, and it can transform an image so that a displayed version is much more informative than it would otherwise be [6,7].

    1.2: The Scope of This Book

    This book discusses the methods, techniques, and algorithms that have proven useful in the processing and analysis of digital microscope images. We do not attempt to describe the workings of the microscope, except to outline its limitations and the reasons for certain processes. Neither do we discuss the proper use of the instrument. These topics are beyond our scope and are well covered in other works. Instead we focus on techniques for processing microscope images.

    At this time microscope imaging and image processing are of vital interest to the scientific and engineering communities. Recent developments in cellular-, molecular-, and nanometer-level imaging technologies have led to rapid discoveries and have greatly advanced knowledge in biology, medicine, chemistry, pharmacology, and other fields.

    Microscopes have long been used to capture, observe, measure, and analyze images of various living organisms and structures at scales far below the limits of human visual perception. With the advent of affordable, high-performance computer and image sensor technologies, digital imaging has essentially replaced film-based photomicrography for microscope image acquisition and storage. Digital image processing has become essential to the success of subsequent data analysis and interpretation of the new generation of microscope images. There are microscope imaging modalities that require digital image processing just to produce an image suitable for viewing. Digital processing of microscope images has opened up new realms of medical research and brought about the possibility of advanced clinical diagnostic procedures.

    The approach used in this book is to describe image processing algorithms that have proved useful in microscope image processing and to illustrate their application with specific examples. Useful mathematical results are presented without derivation or proof, but with references to the earlier work. We have relied on a collection of chapter contributions from leading experts in the field to present detailed descriptions of state-of-the-art methods and algorithms developed to solve specific problems in microscope imaging. Each chapter provides first a summary, then an in-depth analysis of the methods, and finally specific examples to illustrate application. The insight gained from these examples of successful application should guide the reader in developing their own applications.

    Although a number of monographs and edited volumes have been written on the topic of computer-assisted microscopy, most of these books focus on the basic concepts and technicalities of microscope illumination, optics, hardware design, and digital camera setups. They do not discuss in detail the practical issues that arise in microscope image processing or the development of specialized algorithms for digital microscopy.

    This book is intended to complement existing works by focusing on the computational and algorithmic aspects of microscope image processing. It should serve the users of digital microscopy as a reference for the basic algorithmic techniques that routinely prove useful in microscope image processing.

    The intended audience for this book includes scientists, engineers, clinicians, and graduate students working in the fields of biology, medicine, chemistry, pharmacology, and other related disciplines. It is intended for those who use microscopes and commercial or free image processing software in their work and would like to understand the methodologies and capabilities of the latest digital image processing techniques. This book is also intended for those who develop their own image processing algorithms and software for specific applications that are not covered by existing software products.

    In summary, this book presents a discussion of algorithms and processing methods that complements the existing selection of books on microscopy and digital image processing.

    1.3: Our Approach

A few basic considerations govern our approach to discussing microscope image processing algorithms. These are based on years of experience using and teaching digital image processing in microscope applications. They are intended to prevent many of the common misunderstandings that crop up to impair communication and confuse those seeking to use this technology productively. We have found that a detailed grasp of a few fundamental concepts does much to facilitate learning this topic, to prevent misunderstandings, and to foster successful application. We cannot claim that our approach is standard or commonly used. We claim only that it makes the job easier for both the reader and the authors.

    1.3.1: The Four Types of Images

To the question "Is the image analog or digital?" the answer is "Both." In fact, at any one time we may be dealing with four separate images, each of which is a representation of the specimen that lies beneath the microscope objective lens. This is a central issue because, whether we are looking at the pages of this book, at a computer display, or through the eyepieces of a microscope, we are viewing only images and not the original object. It is only with a clear appreciation of these four images, and the relationships among them, that we can move smoothly through the design and effective use of microscope image processing algorithms. We have endeavored to use this formalism consistently throughout this book to solidify the foundation of the reader's understanding.

    1.3.1.1: The Optical Image

    The optical components of the microscope act to create an optical image of the specimen on the image sensor, which is most commonly a charge-coupled device (CCD) array. The optical image is actually a continuous distribution of light intensity across a two-dimensional surface. It contains some information about the specimen, but it is not a complete representation of the specimen. It is, in the common case, a two-dimensional projection of a three-dimensional object, and it is limited in resolution and subject to distortion and noise introduced by the imaging process. Though an imperfect representation, it is what we have to work with if we seek to view, analyze, interpret, and understand the specimen.

    1.3.1.2: The Continuous Image

We can assume that the optical image corresponds to, and is represented by, a continuous function of two spatial variables. That is, the coordinate positions (x, y) are real numbers, and the light intensity at a given spatial position is a nonnegative real number. This mathematical representation we call the continuous image. More specifically, it is a real-valued analytic function of two real variables. This affords us considerable opportunity to use well-developed mathematical theory in the design and analysis of algorithms. We are fortunate that the imaging process allows us to assume analyticity, since analytic functions are much better behaved than those that are merely continuous (see Section 1.3.2.1).
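In symbols, the definition above can be restated compactly (the notation here is ours, added for clarity):

```latex
% The continuous image: a real-valued analytic function of two
% real spatial variables, with nonnegative intensity values.
\[
f : \mathbb{R}^{2} \to \mathbb{R}_{\ge 0},
\qquad (x, y) \mapsto f(x, y)
\]
```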

    1.3.1.3: The Digital Image

    The digital image is produced by the process of digitization. The continuous optical image is sampled, commonly on a rectangular grid, and those sample values are quantized to produce a rectangular array of integers. That is, the coordinate positions (n, m) are integers, and the light intensity at a given integer spatial position is represented by a nonnegative integer. Further, random noise is introduced into the resulting data. Such treatment of the optical image is brutal in the extreme. Improperly done, the digitization process can severely damage an image or even render it useless for analytical or interpretation purposes. More formally, the digital image may not be a faithful representation of the optical image and, therefore, of the specimen. Vital information can be lost in the digitization process, and more than one project has failed for this reason alone. Properly done, image digitization yields a numerical representation of the specimen that is faithful to the original spatial distribution of light that emanated from the specimen.
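To make sampling and quantization concrete, here is a minimal Python sketch. It is not taken from the book; the grid size, bit depth, and the synthetic intensity function are assumptions made for illustration only.

```python
import numpy as np

def digitize(continuous_image, width, height, bits=8):
    """Sample a continuous image f(x, y) on a rectangular grid and
    quantize the samples to nonnegative integers."""
    # Sampling: evaluate f at grid positions scaled to the unit square.
    y, x = np.mgrid[0:height, 0:width]
    samples = continuous_image(x / width, y / height)
    # Quantization: map intensities in [0, 1] to {0, ..., 2^bits - 1}.
    levels = 2 ** bits
    digital = np.clip(np.round(samples * (levels - 1)), 0, levels - 1)
    return digital.astype(np.uint16)

# A synthetic "optical image": a smooth, band-limited intensity pattern.
f = lambda x, y: 0.5 + 0.5 * np.cos(2 * np.pi * 3 * x) * np.cos(2 * np.pi * 3 * y)
img = digitize(f, 256, 256, bits=8)
```

Sampling this pattern on too coarse a grid (say, width = 4) would alias it, which is precisely the kind of irreversible information loss described above.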

    What we actually process or analyze in the computer, of course, is the digital image. This array of sample values (pixels) taken from the optical image, however, is only a relative of the specimen, and a rather distant one at that. It is the responsibility of the user to ensure that the relevant information about the specimen that is conveyed by the optical image is preserved in the digital image as well. This does not mean that all such information must be preserved. This is an impractical (actually impossible) task. It means that the information required to solve the problem at hand must not be lost in either the imaging process or the process of digitization.

    We have mentioned that digitization (sampling and quantization) is what generates a corresponding digital image from an existing optical image. To go the other way, from discrete to continuous, we use the process of interpolation. By interpolating a digital image, we can generate an approximation to the continuous image (analytic function) that corresponds to the original optical image. If all goes well, the continuous function that results from interpolation will be a faithful representation of the optical image.
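The sketch below illustrates one simple interpolation scheme, bilinear interpolation, chosen for brevity. It is only a crude approximation to the ideal reconstruction discussed in Chapter 3, and the function name and interface are ours.

```python
import numpy as np

def bilinear(digital, x, y):
    """Estimate the underlying continuous image at real-valued
    coordinates (x, y) from an integer-indexed digital image."""
    # Indices of the four samples surrounding (x, y), clamped to the array.
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, digital.shape[1] - 1)
    y1 = min(y0 + 1, digital.shape[0] - 1)
    fx, fy = x - x0, y - y0
    # Weighted average of the four nearest samples.
    top = (1 - fx) * digital[y0, x0] + fx * digital[y0, x1]
    bottom = (1 - fx) * digital[y1, x0] + fx * digital[y1, x1]
    return (1 - fy) * top + fy * bottom

# Example: evaluate the interpolated image between sample positions.
# value = bilinear(img, 10.5, 20.25)
```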

    1.3.1.4: The Displayed Image

    Finally, before we can visualize our specimen again, we must display the digital image. Human eyes cannot view or interpret an image that exists only in digital form. A digital image must be converted back into optical form before it can be seen. The process of displaying an image on a screen is also an action of interpolation, this time implemented in hardware. The display spot, as it is controlled by the digital image, acts as the interpolation function that creates a continuous visible image on a screen or on paper. The display hardware must be able to interpolate the digital image in such a way as to preserve the information of interest.

    1.3.2: The Result

    We see that each image we work with is actually a set of four images. Each optical image corresponds to both the continuous image that describes it and the digital image that would be obtained by digitizing it (assuming some particular set of digitizing parameters). Further, each digital image corresponds to the continuous function that would be generated by interpolating it (assuming a particular interpolation method). Moreover, the digital image also corresponds to the displayed image that would appear on a particular display screen. Finally, we assume that the continuous image is a faithful representation of the specimen and that it contains all of the relevant information required to solve the problem at hand. In this book we refer to these as the optical image, the continuous image, the digital image, and the displayed image. Their relationship is shown in Fig. 1.1.

    Fig. 1.1 The four images of digital microscopy. The microscope forms an optical image of the specimen. This is digitized to produce the digital image, which can be displayed and interpolated to form the continuous image. The displayed image allows the original image to be visualized.

    This leaves us with an option as we go through the process of designing or analyzing an image processing algorithm. We can treat the image as an array of numbers (which it is), or we can analyze the corresponding continuous image. Both of these represent the optical image, which, in turn, represents the specimen. In some cases we have a choice and can make life easy for ourselves. Since we are actually working with an array of integers, it is tempting to couch our analysis strictly in the realm of discrete mathematics. In many cases this can be a useful approach. But we cannot ignore the underlying analytic function to which that array of numbers corresponds. To be safe, an algorithm must be true to both the digital image and the continuous image. Thus we must pay close attention to both the continuous and the discrete aspects of the image.

    To focus on one and ignore the other can lead a project to disaster. In the best of all worlds, we could go about our business, merrily flipping back and forth between corresponding continuous and digital images as needed. The implementations of digitization and interpolation, however, do introduce distortion, and caution must be exercised at every turn. Throughout this book we strive to point out the resulting pitfalls.

    1.3.2.1: Analytic Functions

    The continuous image that corresponds to a particular optical image is more than merely continuous. It is a real-valued analytic function of two real variables. An analytic function is a continuous function that is severely restricted in how wiggly it can be. Specifically, it possesses all of its derivatives at every point [5]. This restriction is so severe, in fact, that if you know the value of an analytic function and all of its (infinitely many) derivatives at a single point, then that function is unique, and you know it everywhere (Section 12.4.1). In other words, only one analytic function can pass through that point with those particular values for its derivatives. To be dealing with functions so tightly restricted relieves us of many of the concerns that keep pure mathematicians entertained.

    As an example, assume that an analytic function of one variable passes through the origin where its first derivative is equal to 2, and all other derivatives are zero. The analytic function y = 2x uniquely satisfies this condition and thus is that function. Of all the functions that pass through the origin, only this one meets the stated requirements.
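In equation form, the example is simply a Taylor series about the origin:

```latex
% Taylor expansion of an analytic function f about x = 0,
% with f(0) = 0, f'(0) = 2, and all higher derivatives zero:
\[
f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!}\, x^{n}
     = 0 + 2x + 0 + 0 + \cdots = 2x
\]
```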

Thus when we work with a monochrome image, we can think of it as an analytic function of two variables. A multispectral image can be viewed as a collection of such functions, one for each spectral band. The restrictions implied by the analyticity property make life much easier for us than it would otherwise be. Working with such a restricted class of functions allows us considerable latitude in the mathematical analysis that surrounds image processing algorithm design. We can make the types of assumptions that are common to engineering disciplines and actually get away with them.

The continuous and digital images are even more restricted than previously stated. The continuous image is an analytic function that is band-limited as well. The digital image is a band-limited, sampled function. The effects created by all of these sometimes conflicting restrictions are discussed in later chapters. For present purposes it suffices to say only that, by following a relatively simple set of rules, we can analyze the digital image as if it were the specimen itself. It is also true that violating any of these rules can lead to disaster.

    1.3.3: The Sampling Theorem

The theoretical results that provide us with the most guidance as to what we can get away with when digitizing and interpolating images are the Nyquist sampling theorem (1928) and the Shannon sampling theorem (1949). They specify the conditions under which an analytic function can be reconstructed, without error, from its samples (Section 3.2). Although this ideal situation is never quite attainable in practice, the sampling theorems nevertheless provide us with the means to keep the damage to a minimum and to understand the causes and consequences of failure when it occurs. We cannot digitize and interpolate without introducing noise and distortion. We can, however, preserve sufficient fidelity to the specimen to solve the problem at hand. The sampling theorem is our map through that dangerous territory. This topic is covered in detail in Chapter 3. By following a relatively simple set of rules, we can produce usable results with digital microscopy.
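Stated compactly in one dimension (this is the standard textbook form; Chapter 3 develops the two-dimensional case used for images):

```latex
% A band-limited function f(t) with no frequency content above B
% can be reconstructed exactly from samples taken at rate f_s > 2B,
% where sinc(x) = sin(pi x) / (pi x):
\[
f(t) = \sum_{n=-\infty}^{\infty}
       f\!\left(\frac{n}{f_s}\right)
       \operatorname{sinc}\left(f_s t - n\right),
\qquad f_s > 2B
\]
```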

    1.4: The Challenge

    And so we are left with the following situation. The object of interest is the specimen that is placed under the microscope. The instrument forms an optical image that represents that specimen. We assume that the optical image is well represented by a continuous image (which is an analytic function), and we strive, through the choices available in microscopy, to ensure that this is the case. Further, the optical image is sampled and quantized in such a way that the information relevant to the problem at hand has been retained in the digital image. We can interpolate the digital image to produce an approximation to the continuous image or to make it visible for interpretation. We must now process the digital image, either to extract quantitative data from it or to prepare it for display and interpretation by a human observer. In subsequent chapters the model we use is that the continuous image is an analytic function that represents the specimen and that the digital image is a quantized array of discrete samples taken from the continuous image. Although we actually process only the digital image, interpolation gives us access to the continuous image whenever it is needed.

Our approach, then, is to keep in mind that we are always dealing with two images that are representations of the optical image produced by the microscope, which in turn represents a projection of the specimen. When analyzing an algorithm we can employ either continuous or discrete mathematics, as long as the relationship between these images is understood and preserved. In particular, any processing step performed upon the digital image must be legitimate in terms of what it does to the underlying continuous image.

    1.5: Modern Microscopy

The past 400 years, and particularly the past few decades, have seen tremendous development in microscopy. Modern techniques, described later in this book, have drastically increased the utility of the microscope as a tool in research, medicine, and industry. The resolution limit derived by Abbe in 1873 (Section 2.6) has been surpassed by a wide margin. Molecules can now be located with nanometer precision (see Fig. 13.1). Specimens previously showing up only as blurred features can now be imaged in detail.

    In addition to conventional bright-field microscopy, this book also discusses processing techniques that are applicable to confocal, fluorescence, structured illumination, and three-dimensional microscopy. The combination of precision optical equipment and advanced image processing techniques can produce extremely useful imagery.

    1.6: Nomenclature

    The nomenclature of recently developed techniques has not yet become standardized. Indeed, one of the challenges to understanding this complex field of endeavor is to recognize the differences, similarities, and identities among techniques that have different names. In this book we adopt a set of definitions that are common and useful, but by no means universal.

    Digital microscopy consists of theory and techniques collected from several fields of endeavor. As a result, the descriptive terms used therein bear a collection of specialized definitions. Often, ordinary words are pressed into service and given specific meanings. We have included a glossary to help the reader navigate through the jargon, and we encourage its use. If a concept becomes confusing or difficult to understand, it may well be the result of one of these specialized words. As soon as that is cleared up, the pathway to understanding opens again.

    1.7: Summary of Important Points

1. A microscope forms an optical image that represents the specimen.

2. The continuous image is a real-valued analytic function of two real variables that represents the optical image.

3. An analytic function is not only continuous but possesses all of its derivatives at every point.

4. The process of digitization generates a digital image from the optical image.

5. The digital image is an array of integers obtained by sampling and quantizing the optical image.

6. The process of interpolation generates an approximation of the continuous image from the digital image.

7. Image display is an interpolation process that is implemented in hardware. It makes the digital image visible.

8. The optical image, the continuous image, the digital image, and the displayed image each represent the specimen.

9. The design or analysis of an image processing algorithm must take into account both the continuous image and the digital image.

10. In practice, digitization and interpolation cannot be done without loss of information and the introduction of noise and distortion.

11. Digitization and interpolation must both be done in a way that preserves the image content that is required to solve the problem at hand.

12. Digitization and interpolation must be done in a way that does not introduce noise or distortion that would obscure the image content needed to solve the problem at hand.

    References

    [1] Spector D.L., Goldman R.D., eds. Basic Methods in Microscopy. Cold Spring Harbor Laboratory Press; 2005.

[2] Török P., Kao F.-J., eds. Optical Imaging and Microscopy: Techniques and Advanced Systems. vol. 87. Springer; 2007.

    [3] Sluder G., Wolf D.E. Digital Microscopy. second ed. Academic Press; 2003.

    [4] Inoue S., Spring K.R. Video Microscopy. second ed. Springer; 1997.

    [5] Murphy D.B. Fundamentals of Light Microscopy and Electronic Imaging. Wiley-Liss; 2001.

    [6] Castleman K.R. Digital Image Processing. Prentice-Hall; 1996.

    [7] Diaspro A., ed. Confocal and Two-Photon Microscopy. Wiley-Liss; 2001.

    Chapter Two: Fundamentals of Microscopy

    Kenneth R. Castleman; Ian T. Young

    Abstract

This chapter describes the optical imaging process, including image formation by a lens, focal length, the point spread function (psf), magnification, and resolution. The diffraction limit of resolution is imposed by the wave nature of light. Resolution can be specified by the psf or by the optical transfer function (OTF). The OTF is the Fourier transform of the psf. An ideally shaped lens will transform a diverging spherical wave into a converging spherical wave. Any departure from that ideal shape will introduce aberration and reduce resolution. An optical system that has been calibrated both spatially and photometrically can make accurate measurements of specimens in a microscope.
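The psf/OTF relationship stated above can be written as a two-dimensional Fourier transform (standard notation, added here for reference):

```latex
% The optical transfer function is the two-dimensional
% Fourier transform of the point spread function:
\[
\mathrm{OTF}(u, v) = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty}
\mathrm{psf}(x, y)\, e^{-j 2\pi (u x + v y)}\, dx\, dy
\]
```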

    Keywords

    Optical imaging; Focal length; Point spread function; Magnification; Resolution; Diffraction; Aberration; Calibration

    2.1: The Origins of the Microscope

During the 1st century AD, the Romans were experimenting with different shapes of clear glass. They discovered that, by holding a piece that was thicker in the middle than at the edges over an object, they could make that object appear larger. They also used lenses to focus the rays of the sun and start fires. By the end of the 13th century, spectacle makers were producing lenses to be worn as eyeglasses to correct deficiencies in vision. The word lens derives from the Latin word for lentil, because these magnifying chunks of glass were similar in shape to a lentil bean.

    In 1590, two Dutch spectacle makers, Zacharias Janssen and his father Hans, started experimenting with lenses. They mounted several lenses in a tube, producing considerably more magnification than was possible with a single lens. This work led to the invention of both the compound microscope and the telescope [1].

In 1665, Robert Hooke, the English physicist who is sometimes called the father of English microscopy, became the first person to see cells. He made his discovery while examining a sliver of cork.

In 1674 Anton van Leeuwenhoek (1632–1723), while working in a dry goods store in Holland, became so interested in magnifying lenses that he learned how to make his own. By carefully grinding and polishing, he was able to make small lenses with high curvature, producing magnifications of up to 270 times. He used his simple microscope to examine blood, semen, yeast, insects, and the tiny animals swimming in a drop of water. Leeuwenhoek became deeply involved in science and was the first person to describe living cells and bacteria. Because he neglected his dry goods business in favor of science, and because many of his pronouncements ran counter to the beliefs of the day, he was ridiculed by the local townspeople. On the strength of the great many discoveries documented in his research papers, Leeuwenhoek has come to be known as the father of microscopy. He constructed a total of 400 microscopes during his lifetime.

In 1759 John Dollond built an improved microscope using lenses made of flint glass, greatly improving the correction of chromatic aberration.
