Human Visual System Model: Understanding Perception and Processing
About this ebook

What is the Human Visual System Model?


Experts in image processing, video processing, and computer vision use a human visual system model to deal with biological and psychological processes that are not yet completely understood. Such a model is used to simplify the behavior of an extremely complex system, and it is updated whenever our understanding of the actual visual system improves.


How you will benefit


(I) Insights and validations about the following topics:


Chapter 1: Human visual system model


Chapter 2: Data compression


Chapter 3: Image compression


Chapter 4: Transform coding


Chapter 5: Optical illusion


Chapter 6: Chroma subsampling


Chapter 7: Compression artifact


Chapter 8: Grayscale


Chapter 9: Tone mapping


Chapter 10: Color appearance model


(II) Answers to the public's top questions about the human visual system model.


(III) Real-world examples of the use of the human visual system model in many fields.


Who this book is for


Professionals, undergraduate and graduate students, enthusiasts, hobbyists, and those who want to go beyond basic knowledge of the human visual system model.

Language: English
Release date: May 6, 2024

    Book preview

    Human Visual System Model - Fouad Sabry

    Chapter 1: Human visual system model

    Experts in the fields of image processing, video processing, and computer vision employ a model of the human visual system (HVS model) to account for biological and psychological processes that are still poorly understood. A model of this kind is employed to reduce the complexity of the system's behavior. The model is revised as our understanding of the real visual system grows.

    The analysis of visual perception is known as psychovisual research.

    A model of the human visual system makes it possible to work with, and take advantage of, human perception and vision. Color television, lossy compression, and cathode ray tube (CRT) television are all applications of the HVS model.

    It was once believed that the bandwidth requirements of color television were too great for the existing technology. The color resolution of the HVS was found to be far lower than its brightness resolution, which meant that chroma subsampling could be used to fit the color information into the available signal.
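
    As a rough illustration of chroma subsampling, the sketch below assumes a NumPy array holding an image already converted to YCbCr, with even height and width, and stores the two chroma planes at half resolution in each dimension, in the spirit of 4:2:0 subsampling; the function name and block-averaging scheme are illustrative choices, not a broadcast standard.

    import numpy as np

    def subsample_chroma_420(ycbcr):
        """Keep the luma (Y) plane at full resolution but store each chroma
        plane (Cb, Cr) at half resolution in both dimensions."""
        h, w, _ = ycbcr.shape
        y = ycbcr[:, :, 0]
        # Average each 2x2 block of chroma samples into a single value.
        cb = ycbcr[:, :, 1].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        cr = ycbcr[:, :, 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        return y, cb, cr

    # The result holds h*w luma samples plus 2*(h/2)*(w/2) chroma samples,
    # i.e. half the samples of the full-resolution YCbCr image.

    Because the eye resolves brightness much more finely than color, the discarded chroma samples are rarely missed.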

    Lossy picture compression formats such as JPEG are yet another illustration. According to our HVS model, we are unable to make out high-frequency detail; as a result, JPEG allows us to quantize these parts without a noticeable drop in quality. Bandstop filtering is used in audio compression to remove frequencies that humans can't hear.
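
    A minimal sketch of that quantization idea, assuming SciPy's DCT routines; the frequency-dependent step size is an illustrative choice, not the JPEG standard quantization table.

    import numpy as np
    from scipy.fft import dctn, idctn

    def quantize_block(block, q=8.0):
        """Transform an 8x8 pixel block with a 2-D DCT, quantize the coefficients
        (more coarsely at higher frequencies), and reconstruct the block."""
        coeffs = dctn(block, norm="ortho")
        u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
        # Step size grows with spatial frequency: fine detail, which the HVS
        # resolves poorly, is quantized more harshly (illustrative step sizes).
        step = 1.0 + q * (u + v)
        quantized = np.round(coeffs / step) * step
        return idctn(quantized, norm="ortho")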

    During our evolutionary past, when humans had to protect themselves and search for food, several aspects of the HVS became adapted to those tasks. HVS properties are frequently revealed by optical illusions.

    Due to a shortage of rods, the human eye functions like a low-pass filter (see Mach bands).

    Poor color resolution (the human eye has fewer cones than rods)

    Motion sensitivity, with greater sensitivity in the surroundings: seeing a camouflaged animal move has a far more profound effect than merely sensing its texture.

    In 3D perception, more emphasis can be placed on texture than on disparity.

    Built-in facial recognition (babies smile at faces)

    Inverted face depth appears normal (facial features override depth information).

    Even with the mouth and eyes flipped upside down, an inverted face appears normal.

    Film and television use flickering at high frequencies to trick the viewer into perceiving a continuous image by exploiting the persistence of vision.

    In order to create the illusion of a higher flicker frequency, interlaced televisions paint half-images.

    Color broadcasting (chrominance at half the resolution of luminance, corresponding to the proportions of rods and cones in the eye)

    Image compression (higher frequencies, which are more difficult to see, are quantised more harshly)

    Motion estimation (use luminance and ignore colour); a sketch of this appears after this list.

    Watermarking and Steganography
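
    As referenced in the motion estimation item above, the following is a minimal sketch of luminance-only motion estimation by exhaustive block matching. It assumes the frames are floating-point luma arrays; the function name, search range, and cost measure are illustrative choices rather than any particular codec's method.

    import numpy as np

    def best_match(prev_luma, cur_block, top, left, search=8):
        """Exhaustive block matching on the luminance plane only: return the
        (dy, dx) displacement within +/- `search` pixels that minimises the sum
        of absolute differences against the previous frame. Colour is ignored."""
        h, w = cur_block.shape
        best, best_cost = (0, 0), np.inf
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = top + dy, left + dx
                if y < 0 or x < 0 or y + h > prev_luma.shape[0] or x + w > prev_luma.shape[1]:
                    continue  # candidate block falls outside the previous frame
                cost = np.abs(prev_luma[y:y + h, x:x + w] - cur_block).sum()
                if cost < best_cost:
                    best, best_cost = (dy, dx), cost
        return best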

    {End Chapter 1}

    Chapter 2: Data compression

    In information theory, data compression, source coding, or bit-rate reduction is the process of encoding information using fewer bits than the original representation. In common parlance, a device that performs data compression is known as an encoder, whereas a device that performs the inverse process, decompression, is known as a decoder.

    Data compression is commonly understood as the process of reducing the size of a data file. Source coding is encoding carried out at the original data source, before the data is stored or transmitted; the term is used in the context of data transmission. It is important not to confuse source coding with channel coding, which is used for error detection and correction, or with line coding, which is a method for mapping data onto a signal.

    Data compression is beneficial because it reduces the space and bandwidth needed to store and transfer information. Compression and decompression both require significant computational resources, so the space-time complexity trade-off must be considered when compressing data. For example, a video compression scheme might require expensive hardware for the video to be decompressed quickly enough to be watched as it is being decompressed, and fully decompressing the video before watching it might be inconvenient or require additional storage space. Designers of data compression schemes must therefore trade off the level of compression achieved, the amount of distortion introduced (when using lossy data compression), and the computational resources needed to compress and decompress the data.

    In order to represent data without losing any information, lossless data compression methods exploit statistical redundancy, which ensures that the process can be reversed. Lossless compression is feasible because the vast majority of real-world data has statistical redundancy. For instance, a picture may contain patches of color that do not change over many pixels; in that case, the data may be recorded as 279 red pixels rather than the traditional notation of red pixel, red pixel, ... This is a basic illustration of run-length encoding; there are many more methods for reducing the size of a file by removing redundant information.
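
    A minimal sketch of that run-length idea, using the 279-red-pixel example above; the helper names are hypothetical.

    def rle_encode(pixels):
        """Collapse runs of identical values, e.g. ['red'] * 279 -> [(279, 'red')]."""
        runs = []
        for value in pixels:
            if runs and runs[-1][1] == value:
                runs[-1] = (runs[-1][0] + 1, value)
            else:
                runs.append((1, value))
        return runs

    def rle_decode(runs):
        """Expand the runs back; the round trip loses no information."""
        return [value for count, value in runs for _ in range(count)]

    assert rle_decode(rle_encode(['red'] * 279 + ['blue'])) == ['red'] * 279 + ['blue']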

    Compression techniques such as Lempel–Ziv (LZ) are now among the most widely used algorithms for lossless data storage. LZ compression is a table-based compression model in which repeated strings of data are replaced with table entries. For most LZ algorithms, this table is built dynamically from earlier parts of the input, and the table itself is often Huffman encoded. Grammar-based codes can compress highly repetitive input very effectively, such as a biological data collection of the same or closely related species, a massive versioned document collection, internet archives, and so on. The fundamental task of grammar-based coding systems is constructing a context-free grammar that derives a single string. Sequitur and Re-Pair are two grammar compression techniques with practical applications.
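
    As one concrete member of the LZ family, the following is a bare-bones sketch of LZW encoding, in which repeated byte strings are replaced by codes from a table built dynamically from earlier input. A real implementation would also pack the codes into bits and provide a decoder, both omitted here.

    def lzw_encode(data):
        """Bare-bones LZW: repeated byte strings are replaced by codes drawn from
        a table that is built dynamically from earlier parts of the input."""
        table = {bytes([i]): i for i in range(256)}
        current, codes = b"", []
        for byte in data:
            candidate = current + bytes([byte])
            if candidate in table:
                current = candidate
            else:
                codes.append(table[current])
                table[candidate] = len(table)  # extend the table on the fly
                current = bytes([byte])
        if current:
            codes.append(table[current])
        return codes

    print(lzw_encode(b"abababababab"))  # [97, 98, 256, 258, 257, 260]: repeats collapse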

    The most powerful lossless compressors developed in recent years use probabilistic models, such as prediction by partial matching. The Burrows–Wheeler transform can also be viewed as a form of indirect statistical modeling.
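
    A naive sketch of the Burrows–Wheeler transform itself: every rotation of the input is sorted and the last column is kept. Production compressors build the transform far more efficiently, and the sentinel character here is an illustrative choice.

    def bwt(text, end="\0"):
        """Naive Burrows-Wheeler transform: sort every rotation of the input and
        keep the last column, which groups characters with similar contexts so a
        later entropy coder sees long runs of the same symbol."""
        s = text + end  # a unique end marker keeps the transform invertible
        rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
        return "".join(rotation[-1] for rotation in rotations)

    print(repr(bwt("banana")))  # 'annb\x00aa': like characters cluster together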

    Around the same time as digital photos were becoming more widespread in the late 1980s, the first standards for lossless image compression were developed. At the beginning of the 1990s, lossy compression techniques started to become more commonplace. These perceptual distinctions are used by a
