Intelligent Speech Signal Processing
About this ebook

Intelligent Speech Signal Processing investigates the utilization of speech analytics across several systems and real-world activities, including sharing data analytics, creating collaboration networks between several participants, and implementing video-conferencing in different application areas. Chapters focus on the latest applications of speech data analysis and management tools across different recording systems. The book emphasizes the multidisciplinary nature of the field, presenting different applications and challenges with extensive studies on the design, development and management of intelligent systems, neural networks and related machine learning techniques for speech signal processing.

  • Highlights different data analytics techniques in speech signal processing, including machine learning and data mining
  • Illustrates different applications and challenges across the design, implementation and management of intelligent systems and neural networks techniques for speech signal processing
  • Includes coverage of bimodal speech recognition, voice activity detection, spoken language and speech disorder identification, automatic speech-to-speech summarization, and convolutional neural networks
Language: English
Release date: Mar 27, 2019
ISBN: 9780128181317

    Book preview

    Intelligent Speech Signal Processing - Nilanjan Dey


    Preface

    Intelligent speech signal processing methods have increasingly replaced conventional analog signal processing methods in several applications, including speech analysis and processing, telecommunications, and tracking. These intelligent approaches support a variety of everyday problems, multimedia communications, industrial automation, and biometrics. Different signal processing approaches, such as signal analysis based on an analytic signal description, can be combined for efficient speech detection. In intelligent systems, pattern recognition and machine learning methods are vital tools for reasoning under uncertainty. They help extract significant information from massive data in an automated fashion using statistical and computational methods. This domain draws on probability, statistics, optimization methods, and control theory, and it focuses on providing solutions for tasks in which intelligence is essential. Application domains include computer vision, speech processing, natural language processing, man–machine interfaces, expert systems, and robotics. Typically, an intelligent signal processing system should exhibit nonlinearity, adaptivity, and robustness. A speech signal processing device that operates in a nonstationary environment can be considered intelligent once it is able to explore the information content of its input efficiently and at all times.
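
    The analytic-signal description mentioned above can be made concrete with a short sketch. The snippet below, using NumPy and SciPy, computes the analytic signal of a speech waveform via the Hilbert transform and uses its magnitude envelope as a rough speech-presence cue; the file name and threshold are illustrative assumptions, not material from the book.

```python
# Minimal sketch of the analytic-signal idea: the Hilbert transform yields a
# complex-valued "analytic" version of a real speech signal whose magnitude is
# the amplitude envelope often used as a cue for speech detection.
import numpy as np
from scipy.io import wavfile
from scipy.signal import hilbert

fs, x = wavfile.read("utterance.wav")        # hypothetical mono 16-bit PCM file
x = x.astype(np.float64) / 32768.0           # normalize to [-1, 1]

analytic = hilbert(x)                        # x + j * Hilbert(x)
envelope = np.abs(analytic)                  # instantaneous amplitude
inst_phase = np.unwrap(np.angle(analytic))   # instantaneous phase
inst_freq = np.diff(inst_phase) * fs / (2 * np.pi)  # instantaneous frequency (Hz)

# A crude speech-presence cue: samples whose envelope exceeds a threshold
# (threshold chosen arbitrarily for illustration).
speech_like = envelope > 3 * np.median(envelope)
```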

    This book brings together researchers from the fields of machine learning, data analysis, data management, and speech processing. The authors survey trends and techniques in intelligent speech signal processing and data analysis to spotlight scientific breakthroughs in applied settings. The book includes 10 chapters. In Chapter 1, Santosh focuses on speech recognition, processing, and synthesis in healthcare. He provides detailed information about how speech synthesis impacts healthcare and its business model. In Chapter 2, Passricha and Aggarwal discuss end-to-end acoustic modeling using the convolutional neural network (CNN) to establish the relationship between the raw speech signal and phones in a data-driven manner; this approach outperforms traditional cepstral feature-based systems, but it requires a large number of parameters. In Chapter 3, Singh et al. propose a real-time DSP-based system for voice activity detection and background noise reduction. In Chapter 4, Sad et al. introduce a novel system to disambiguate conflicting classification results in audio-visual speech recognition (AVSR) applications. The performance of the proposed recognition system is evaluated on three publicly available audio-visual datasets, using the generative hidden Markov model and three discriminative techniques, viz. random forests, support vector machines, and adaptive boosting. In Chapter 5, Das and Roy provide in-depth coverage of various deep learning techniques for spoken language identification, including their advantages and limitations. In Chapter 6, Jat et al. suggest a conceptual system design that enables people to automate processes in the home by using voice commands. In Chapter 7, NithyaKalyani and Jothilakshmi discuss several approaches for extractive and abstractive speech summarization and investigate speech summarization in an Indian language; the chapter also analyzes various speech recognition techniques and their performance in recognizing Tamil speech data. In Chapter 8, Sarkar and Dey examine the dynamics of emotional speech signals using recurrence analysis. In Chapter 9, Karan et al. introduce nonconventional techniques for speech processing that overcome the problems of short-time processing of the speech signal. In Chapter 10, Saha et al. discuss the design of an artificially intelligent, customized voice response system using the Speech Synthesis Markup Language; the chapter introduces a low-cost voice response system driven by Amazon Web Services on an IoT cloud platform and a Raspberry Pi.
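
    The raw-waveform acoustic modeling idea summarized for Chapter 2 can be sketched compactly. The following is a minimal, hypothetical PyTorch example of a CNN that maps raw 16 kHz speech directly to frame-level phone scores; it only illustrates the general approach, not the architecture proposed by Passricha and Aggarwal, and all layer sizes and the number of phone classes are assumptions.

```python
# Hypothetical sketch: a small CNN acoustic model operating on raw audio samples.
import torch
import torch.nn as nn

class RawSpeechCNN(nn.Module):
    def __init__(self, n_phones: int = 40):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=400, stride=160),  # ~25 ms windows, 10 ms hop at 16 kHz
            nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(128, n_phones, kernel_size=1),         # per-frame phone logits
        )

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, 1, samples), raw audio in [-1, 1]
        return self.net(waveform)                            # (batch, n_phones, frames)

model = RawSpeechCNN()
logits = model(torch.randn(2, 1, 16000))                     # two 1-second dummy utterances
```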

    This book supports and enhances the utilization of speech analytics in several systems and real-world activities. It provides a solid forum for discussing the characteristics of intelligent speech signal processing systems in different domains. The book is intended for professionals, scientists, and engineers who work with new techniques and systems for intelligent speech signal processing.

    Chapter 1

    Speech Processing in Healthcare: Can We Integrate?

    K.C. Santosh    Department of Computer Science, The University of South Dakota, Vermillion, SD, United States

    Abstract

    This chapter focuses on the ways speech recognition, processing, and synthesis help in healthcare. The chapter begins with the basic idea of speech recognition in the domain, and it particularly focuses on a complete healthcare project so as to give a clear understanding of the value of speech processing. The chapter also provides detailed information about how speech synthesis affects healthcare and its business model.

    Keywords

    Speech recognition; Text-to-speech; Healthcare; Signal/pattern analysis; Machine learning

    Speech recognition—also known as voice recognition—refers to the translation of speech into words in a machine-readable format [1–3].

    Speech processing has been considered for various purposes in the domain, for example, signal processing, pattern recognition, and machine learning [3]. From improving customer service and hospital care to combating crime, among other uses, speech recognition has grown its global market from $104.4 billion in 2016 to an estimated $184.9 billion in 2021 (source: https://www.news-medical.net/whitepaper/20170821/Speech-Recognition-in-Healthcare-a-Significant-Improvement-or-Severe-Headache.aspx). This is not a new trend; speech-to-text conversion, for instance, has been widely used in cases where different languages are needed, and the opposite holds true as well [4, 5]. For example, can we process or reuse speech data recorded during a telephone conversation a few years earlier, in which a client claimed that fraud had occurred on his or her credit card? Yes, this is possible. Besides other sources of data, speech can be taken as an authentic component to describe an event or scene, and the emotions in it can be analyzed [6–8].
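
    As a simple illustration of reusing archived speech, the following sketch uses the open-source Python speech_recognition package (an assumption of this example, not a tool discussed in the chapter) to transcribe a stored call recording; the file name is hypothetical and the free Web Speech backend requires network access.

```python
# Hypothetical sketch: transcribing an archived call recording so that the
# resulting text (and, later, its emotional content) can be analyzed.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("archived_call_2016.wav") as source:   # hypothetical recording
    audio = recognizer.record(source)                     # read the whole file

try:
    transcript = recognizer.recognize_google(audio)       # free Web Speech API backend
    print(transcript)
except sr.UnknownValueError:
    print("Speech was unintelligible")
except sr.RequestError as err:
    print(f"Recognition service unavailable: {err}")
```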

    Examples exist showing how speech analysis can be integrated into healthcare. In Fig. 1.1, a complete healthcare automated scenario has been created, which can be summarized as follows:

    A patient visits a clinical center (hospital), where he/she is X-rayed, provides sensor-based data (external and internal), and receives (handwritten and machine-printed) prescription(s) and report(s) from the specialist. In these events, the patient and the staff (including the specialists) go through different levels of conversation, and, if these conversations are recorded, they can be integrated with signal processing, pattern recognition, image processing, and machine learning.

    Fig. 1.1 Smart healthcare and the use of speech processing: can we get more information?

    In the aforementioned healthcare project, for instance, it would be convenient to combine speech and signal processing tools and techniques with image analysis-based tools and techniques [9–12]. More specifically, it is important to note that doctors can predict or guess at the presence of tuberculosis, for instance, based on verbal communication (answers to questions such as "Do you sleep well?" and "How are your eating habits?") before they start the X-ray screening procedure. If this is the case, speech processing can help in the complete project outlined earlier. This means that we would be able to assemble the complete information so that further processing can be performed.

    It is important to note, for instance, that speech and voice cues (before and after the doctor’s visit) can help one understand the patient’s willingness to continue with treatment. Speech and voice can convey emotions over time, and pain can often be read from the speech/voice level. We can also automatically track how doctors and other staff members behave toward patients. Can speech be a component in helping to find the consistency that is evident in other sources of data, as shown in Fig. 1.1? The use of a (proposed) convolutional neural network, as in Fig. 1.1, helps explain the fact that a machine, unlike a human expert, can make a decision without the bias possible in human choices. Visualization is also possible, showing how different sources of data are connected. We consider artificial intelligence (AI) and machine learning (ML) tools for automation since data have to be collected over time. Automated analysis of such big data is essential because manual analysis is infeasible, more error-prone, and costlier.
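
    The claim that emotion and pain can be read from voice level over time can be illustrated with a small feature-tracking sketch. The snippet below, using the librosa library, tracks short-time energy and fundamental frequency, two cues commonly used as a starting point for emotion or stress analysis; the file name, frame settings, and summaries are assumptions for illustration, not the chapter's method.

```python
# Hypothetical sketch: track loudness (RMS energy) and pitch (F0) over time
# as simple cues for emotion/stress in a recorded patient conversation.
import librosa
import numpy as np

y, sr = librosa.load("patient_visit.wav", sr=16000)          # hypothetical recording

rms = librosa.feature.rms(y=y, frame_length=400, hop_length=160)[0]  # ~25 ms / 10 ms
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)

# Crude summaries a downstream model might use.
mean_f0 = np.nanmean(f0)                                     # average pitch over voiced frames
energy_trend = np.polyfit(np.arange(len(rms)), rms, 1)[0]    # rising or falling loudness
print(f"mean F0: {mean_f0:.1f} Hz, energy slope: {energy_trend:.2e}")
```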

    As mentioned earlier, local languages other than English can be considered in healthcare. In one work [13], the authors reported the use of the Tamil language, in addition to English, to estimate heartbeat from speech/voice. Further works can be cited showing how local and regional languages, such as German [14], Malay [15], and Slovenian [16], have helped speech technology progress. In another context [17], emergency medical care often depends on how quickly and accurately field medical personnel can access a patient’s background information and document their assessment and treatment of the patient. What if we could automate speech/voice recognition tools in the field? It is clear that research scientists must deliver precise tools that people can trust, and analyzing speech/voice in the presence of background music or other noise is an important part of that. Other works can be consulted for more detailed information [18–21].

    As data change and grow, machine learning can help automate data retrieval and recording. The use of extreme learning machine-based voice activity detection is prominent in the field [22]. Going further, for example with real-time speech/voice recognition and classification, active learning should be considered, since scientists have found that learning over time is vital [23].
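
    Voice activity detection itself can be prototyped very simply before any learning-based refinement is applied. The sketch below is a plain short-time-energy detector, offered only as a baseline illustration of the task (it is not the extreme learning machine approach of [22]); frame sizes and the threshold are assumptions.

```python
# Baseline sketch: energy-based voice activity detection over ~25 ms frames.
import numpy as np

def energy_vad(x: np.ndarray, frame_len: int = 400, hop: int = 160,
               threshold_db: float = -35.0) -> np.ndarray:
    """Return a boolean speech/non-speech decision per frame."""
    n_frames = 1 + max(0, (len(x) - frame_len) // hop)
    decisions = np.zeros(n_frames, dtype=bool)
    for i in range(n_frames):
        frame = x[i * hop : i * hop + frame_len]
        energy_db = 10 * np.log10(np.mean(frame ** 2) + 1e-12)  # avoid log(0)
        decisions[i] = energy_db > threshold_db
    return decisions

# Example on synthetic input: near-silence followed by a louder "speech-like" burst.
x = np.concatenate([0.001 * np.random.randn(8000), 0.3 * np.random.randn(8000)])
print(energy_vad(x).astype(int))
```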

    In general, Fig. 1.1 shows how important speech is as a component alongside other sources of data, such as sensor-based data and image data (X-rays and/or reports that are handwritten or machine printed).

    References

    [1] Flanagan J., Rabiner L., eds. Speech Synthesis. Pennsylvania: Dowden, Hutchinson & Ross, Inc.; 1973.

    [2] Flanagan J. Speech Analysis, Synthesis, and Perception. Berlin-Heidelberg-New York: Springer-Verlag; 1972.

    [3] Xian-Yi C., Yan P. Review of modern speech synthesis. In: Hu W., ed. Electronics and Signal Processing. Lecture Notes in Electrical Engineering, vol. 97. Berlin, Heidelberg: Springer; 2011.

    [4] Iida A., Campbell N. Speech database design for a concatenative text-to-speech synthesis system for individuals with communication disorders. Int. J. Speech Technol. 2003;6(4):379–392.

    [5] Bossemeyer R., Hardzinski M. Talking call waiting: an application of text-to-speech. Int. J. Speech Technol. 2001;4(1):7–17.

    [6] Sailunaz K., Dhaliwal M., Rokne J., Alhajj R. Emotion detection from text and speech: a survey. Soc. Netw. Anal. Min. 2018;8(28).

    [7] Revathi A., Jeyalakshmi C. Emotions recognition: different sets of features and models. Int. J. Speech Technol. 2018.

    [8] Swain M., Routray A., Kabisatpathy P. Databases, features and classifiers for speech emotion recognition: a review. Int. J. Speech Technol. 2018;21(1):93–120.

    [9] Santosh K.C., Antani S. Automated chest X-ray screening: can lung region symmetry help detect pulmonary abnormalities?. IEEE Trans. Med. Imaging. 2018;37(5):1168–1177.

    [10] Vajda S., Karagyris A., Jaeger S., Santosh K.C., Candemir S., Xue Z., Antani S., Thoma G. Feature selection for automatic tuberculosis screening in frontal chest radiographs. J. Med. Syst. 2018;42(8):146.

    [11] Karargyris A., Siegelman J., Tzortzis D., Jaeger S., Candemir S., Xue Z., Santosh K.C., Vajda S., Antani S.K., Folio L.R., Thoma G.R. Combination of texture and shape features to detect tuberculosis in digital chest X-rays. Int. J. Comput. Assist. Radiol. Surg. 2016;11(1):99–106.

    [12] Santosh K.C., Vajda S., Antani S.K., Thoma G.R. Edge map analysis in chest X-rays for automatic abnormality screening. Int. J. Comput. Assist. Radiol. Surg.
