Hybrid Censoring Know-How: Designs and Implementations

About this ebook

Hybrid Censoring Know-How: Models, Methods and Applications focuses on hybrid censoring, an important topic in censoring methodology with numerous applications. The readers will find information on the significance of censored data in theoretical and applied contexts, and descriptions of extensive data sets from life-testing experiments where these forms of data naturally occur. The existing literature on censoring methodology, life-testing procedures, and lifetime data analysis covers hybrid censoring schemes only in scattered pieces, with little consolidated information on hybrid censoring methodologies, ideas, and statistical inferential methods. This book fills that gap, featuring statistical tools applicable to data from medicine, biology, public health, epidemiology, engineering, economics, and demography.

  • Presents many numerical examples to adequately illustrate all inferential methods discussed
  • Mentions some open problems and possible directions for future work
  • Reviews developments on Type-II and Type-I HCS, including the most recent research and trends
  • Explains why hybrid censored sampling is important in practice
  • Provides details about the use of HCS under different settings and on various designs of HCS
  • Describes the use of hybrid censoring in other reliability applications such as reliability sampling plans, step-stress testing, and quality control
Language: English
Release date: Jan 6, 2023
ISBN: 9780123983909
Author

Narayanaswamy Balakrishnan

Narayanaswamy Balakrishnan is a distinguished university professor in the Department of Mathematics and Statistics at McMaster University, Hamilton, Ontario, Canada. He is an internationally recognized expert on statistical distribution theory and a prolific author, with over 24 authored books, four authored handbooks, and 30 edited books to his name. He is currently the Editor-in-Chief of Communications in Statistics, published by Taylor & Francis. He was also the Editor-in-Chief for the revised version of the Encyclopedia of Statistical Sciences, published by John Wiley & Sons. He is a Fellow of the American Statistical Association and a Fellow of the Institute of Mathematical Statistics. In 2016, he was awarded an Honorary Doctorate from the National and Kapodistrian University of Athens, Athens, Greece. In 2021, he was elected as a Fellow of the Royal Society of Canada.

    Book preview

    Hybrid Censoring Know-How - Narayanaswamy Balakrishnan

    Preface

    N. Balakrishnan (McMaster University, Hamilton, ON, Canada); Erhard Cramer (RWTH, Aachen, Germany); Debasis Kundu (IIT, Kanpur, India)

    Though the notion of hybrid censoring was introduced in the 1950s, very little work had been done on it until 2000. During the last two decades, however, the literature on hybrid censoring has exploded, with many new censoring plans having been introduced and several inferential methods having been developed for numerous lifetime distributions. A concise synthesis of all these developments was provided in the Discussion paper by N. Balakrishnan and D. Kundu in 2013 (Computational Statistics & Data Analysis, Vol. 57, pages 166–209). Since the publication of this Discussion paper in 2013, more than 300 papers have been published on this topic. The current book is an expanded version of the Discussion paper, covering all the different aspects of hybrid censoring and especially focusing on recent results and developments.

    As research in this area is still intensive, with many papers being published every year (see Chapter 1 for pertinent details), we have tried our best to make the bibliography as complete and up-to-date as possible. We have also indicated a number of unresolved issues and problems that remain open, and these should be of interest to any researcher who wishes to engage in this interesting and active area of research.

    Our sincere thanks go to all our families, who provided constant support and great engagement throughout the course of this project. Thanks are also due to Ms. Susan Ikeda (Senior Editorial Project Manager, Elsevier) for her keen interest in the book project and also in its progress, Ms. Debbie Iscoe (McMaster University, Canada) for her help with typesetting some parts of the manuscript, and Dr. Faisal Khamis for his diligent work on the literature survey in the beginning stages of the project. Without all their help and cooperation, this book would not have been completed!

    We enjoyed working on this project and in bringing together all the results and developments on hybrid censoring methodology. It is our sincere hope that the readers of this book will find it to be a useful resource and guide while doing their research!

    N. Balakrishnan (McMaster University, Hamilton, ON, Canada)

    Erhard Cramer (RWTH, Aachen, Germany)

    Debasis Kundu (IIT, Kanpur, India)

    Chapter 1: Introduction

    Abstract

    In this chapter, some historical comments are made about the developments on conventional censoring schemes, viz., Type-I and Type-II censoring schemes. Then, formal definitions and some basic properties are provided for data obtained from both these censoring schemes, as well as a comparison of their expected termination times. A detailed review of the historical literature on hybrid censoring schemes is then provided, explaining various inferential results developed under these censoring schemes. Finally, the extensive literature that has developed on hybrid censoring, especially in the last two decades, and the need for a study of hybrid censoring schemes are both described.

    Keywords

    Type-I censoring; Type-II censoring; Order statistics; Expected termination time; Hybrid censoring schemes

    Chapter Outline

    1.1  Historical perspectives

    1.2  Type-I and Type-II censoring

    1.3  Need for hybrid censoring

    1.4  Antecedents

    1.5  Burgeoning literature

    1.6  Scope of the book

    1.7  Notation

    1.1 Historical perspectives

    Reliability of many manufactured items has increased substantially over time due to the ever-growing demands of the customers, heavy competition from numerous producers across the world, and strict requirements on quality assurance, as stipulated by ISO9000, for example. As a result, providing information on the reliability of items (such as mean lifetime, median lifetime, percentile of the lifetime distribution, and so on), within a reasonable period of time, becomes very difficult under a traditional life-test. So, because of cost and time considerations, life-tests in these situations necessarily become censored life-testing experiments (often, with heavy censoring). The development of accurate inferential methods based on data obtained from such censored life-tests becomes a challenging task to say the least. This issue, therefore, has attracted the attention of numerous researchers over the last several decades!

    Early scenarios requiring the consideration of censoring seem to have originated in the context of survival data. For example, Boag (1949) discussed the estimation of the proportion of cancer patients in the UK who were cured following a treatment and, while doing so, did consider censoring corresponding to all those cancer patients who were alive at the end of the clinical trial; one may also refer to Harris et al. (1950), Berkson and Gage (1952), and Littel (1952) for further detailed discussions in this regard. Note that the form of censoring considered in this context is time-censoring, meaning the trial ended at a certain pre-fixed time; consequently, individuals who were alive at the end of the trial ended up getting censored at this specific termination time. However, according to David (1995), the word censoring explicitly appeared for the first time in the works of Hald (1949) and Gupta (1952). In fact, Hald (1949) made a clear distinction between truncation and censoring, depending on whether the population from which the sample is drawn is truncated or the sample itself is truncated, respectively. Yet, in many of the early works, the word truncation got used in place of censoring; see, for example, Cohen (1949, 1950). It was Gupta (1952) who pointed out that censoring could arise in two different ways, one when the observation gets terminated at a pre-fixed time (with all observations up till that time point being observed and all those after being censored, as in the study of Boag (1949) mentioned above) and another when the observation gets terminated when a certain number of smaller observations is achieved (with the remaining larger observations being censored). Then, for the purpose of distinguishing between these two different cases, he referred to them as Type-I censoring and Type-II censoring, respectively.

    What is clear from the descriptions and explanations of Hald (1949) and Gupta (1952) is that truncation is a feature of the population distribution while censoring is inherently a feature of the sample observed. In spite of this clear distinction, still today, some authors mistakenly refer to censored distributions and truncated samples.

    Following the work of Gupta (1952), Epstein and Sobel (1953, 1954) published pioneering results on censored data analysis from a life-testing and reliability viewpoint, basing their results on exponential lifetime distribution. They utilized some interesting distributional properties of order statistics (more specifically, on spacings) from the exponential distribution to develop exact inferential results for the case of Type-II censored samples. Though their distributional results on spacings from the exponential distribution were already known from the earlier works of Sukhatme (1937) and Rényi (1953), the approach taken by Epstein and Sobel (1954) in developing exact inferential methods based on censored data from a life-testing viewpoint attracted the attention of many researchers subsequently, resulting in numerous publications in the following years; the works of Deemer and Votaw (1955), Cohen (1955), and Bartholomew (1957) are some of the early noteworthy ones in this direction.

    During the last six decades or so, since the publication of these early works, the literature on censored data analysis has expanded phenomenally, by considering varying forms of censored data, dealing with a wide range of lifetime distributions, and developing many different methods of inference. For a detailed overview of various developments on inferential methods for truncated distributions and based on censored samples, interested readers may refer to Nelson (1982), Schneider (1986), Cohen (1991), and Balakrishnan and Cohen (1991).

    1.2 Type-I and Type-II censoring

    From the descriptions in the preceding section of the different forms of censoring that have been considered, it becomes evident that there are two basic forms of censoring:

    (i)  Type-I censoring and

    (ii)  Type-II censoring.

    As the historical details in the last section clearly reveal, both these forms of censoring have been discussed extensively in the literature, and this continues on to date. Though the form of censoring schemes in these two cases may look somewhat similar, with slight variation in the resulting likelihood functions, there is a significant difference when it comes to development of inferential methods based on these two forms of censoring, as will be seen in the subsequent chapters.

    Type-I censoring would naturally arise in life-testing experiments when there is a constraint on time allocated for the reliability experiment to be conducted. This may be due to many practical considerations such as limitations on the availability of test facility, cost of conducting the experiment, need to make reliability assessment in a timely manner, and so on. It is evident that the duration of the life-test is fixed in this censoring scheme, but the number of complete failure times to be observed will be random. These result in advantages as well as disadvantages, the fixed duration being a distinct advantage in the sense that the experimenter would know a priori how long the test is going to last, while the random number of failures to be observed being a clear disadvantage. For example, if the duration was fixed to be too small compared to the average (or median) lifetime of the product, then a rather small number of complete failures would be realized with a high probability, and this would in turn have a negative impact on the precision or accuracy of the inferential methods subsequently developed.

    In a Type-II censoring scheme, on the other hand, the number of complete failures to be observed is fixed a priori; consequently, the experimenter can have a control on the amount of information (in the form of complete failures) to be collected from the life-testing experiment, thus having a positive impact on the precision of subsequent inferential methods. However, it has a clear disadvantage that the duration of the life-test is random, which would pose difficulty in the planning/conducting of the reliability experiment, and also has the potential to result in unduly long life-test (especially when the product under test is highly reliable).

    For the purpose of illustration, let us consider a lifetime distribution with cumulative distribution function F.¹ Let us further suppose that we have n units available for the life-test, and that we choose T as the pre-fixed termination time under the Type-I censoring scheme, and m as the pre-fixed number of complete failures to be observed under the Type-II censoring scheme. In this situation, the following facts are evident:

    (a)  The random number of failures that would occur until the pre-fixed time T, say D, will have a $\mathrm{Bin}(n, F(T))$-distribution;

    (b)  Consequently, the number of failures that would be expected to occur by time T will simply be $E(D) = nF(T)$, under the Type-I censoring scheme;

    (c)  Under the Type-II censoring scheme, as stated above, the termination time will be random, say Y,² and will in fact equal the m-th order statistic $X_{m:n}$ from a sample of size n from the F-distribution, and hence is known to have the distribution of an order statistic, with cdf as given below in (1.1).

    To get a clear idea about the difference between the two censoring schemes, we have presented, in Table 1.1, the exact values of the following quantities:

    (i)  Termination time T, expected number of failures $E(D)$, and $P(D \geq m)$, corresponding to Type-I censoring;

    (ii)  Expected termination time $E(Y)$, number of failures m, and $P(Y > T)$, corresponding to Type-II censoring.

    From binomial probabilities, it is then easy to verify that the cumulative distribution function (cdf) of Y is, for $y \geq 0$,

    (1.1)  $F_Y(y) = P(Y \leq y) = \sum_{j=m}^{n} \binom{n}{j} \{F(y)\}^{j} \{1 - F(y)\}^{n-j} = I_{F(y)}(m, n-m+1),$

    where $I_x(a,b) = \frac{1}{B(a,b)} \int_0^x t^{a-1} (1-t)^{b-1}\,dt$ denotes the incomplete beta ratio and $B(a,b)$ (for $a, b > 0$) denotes the complete beta function. It is also easy to see that the expected termination time $E(Y)$ under Type-II censoring can be computed directly from (1.1). Furthermore, we may also observe in this situation that

    (1.2)  $P(D \geq m) = P(Y \leq T) = F_Y(T) = 1 - P(Y > T),$

    a fact that is readily seen in the last column of Table 1.1.

    Table 1.1
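    To make these quantities concrete, the short Python sketch below computes $E(D)$, $P(D \geq m)$, $E(Y)$, and $P(Y > T)$ of the kind tabulated in Table 1.1. The lifetime distribution (a standard exponential) and the values n = 10, m = 7, and T = 0.5 are purely illustrative choices, not the settings underlying Table 1.1, and SciPy is assumed to be available.

```python
import numpy as np
from scipy.stats import binom, expon
from scipy.integrate import quad
from scipy.special import betainc

# Hypothetical illustration only: standard exponential lifetimes,
# n = 10 units on test, m = 7 required failures, termination time T = 0.5.
n, m, T = 10, 7, 0.5
F = expon.cdf                      # lifetime cdf F
p = F(T)

# Type-I censoring: D ~ Bin(n, F(T))
E_D = n * p                        # expected number of failures by time T
P_D_ge_m = binom.sf(m - 1, n, p)   # P(D >= m)

# Type-II censoring: Y = X_{m:n}, with cdf F_Y(y) = P(Bin(n, F(y)) >= m)
def cdf_Y(y):
    return binom.sf(m - 1, n, F(y))

# check the incomplete-beta form of (1.1): F_Y(T) = I_{F(T)}(m, n - m + 1)
assert np.isclose(cdf_Y(T), betainc(m, n - m + 1, p))

E_Y = quad(lambda y: 1.0 - cdf_Y(y), 0, np.inf)[0]   # E(Y) via survival function
P_Y_gt_T = 1.0 - cdf_Y(T)                            # P(Y > T)

print(f"E(D) = {E_D:.3f},  P(D >= m) = {P_D_ge_m:.3f}")
print(f"E(Y) = {E_Y:.3f},  P(Y > T) = {P_Y_gt_T:.3f}")
```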

    1.3 Need for hybrid censoring

    From the results presented in Table 1.1, the following essential points need to be emphasized, which should justify the coverage/exposition of this book being focused on the importance and need for hybrid censoring methodology:

    (1)  The first issue relating to Type-I censoring concerns the occurrence of no failures. This would especially be the case if the termination time T had been chosen to be unduly small (in other words, if the length of the test is too small compared to the mean lifetime of the product under test). For example, for the sample size considered in Table 1.1, had T been fixed at 0.2, then there is almost an 11% chance that the life-test would not result in any complete failure. However, if the termination time T is increased to 0.3, then the chance of zero failures being observed decreases to about 3%. Moreover, if the number of units tested becomes larger, then even with such a small termination time, the chance of zero failures being observed would be as small as 1%;

    (2)  Another issue with the case of Type-I censoring is that if the termination time T is pre-fixed to be large, though one has a certain required number of complete failures m in mind (at the planning stage of the experiment), then the corresponding values of $P(D \geq m)$ suggest that, with high probability, the actual number of failures observed would be at least m. For example, had the experimenter chosen a large termination time for the experiment and in fact had $m = 6$ complete failures in mind, then there is almost an 85% chance that the number of failures observed would be at least 6. Similarly, if the termination time had been chosen to be large and the experimenter had $m = 7$ preliminarily in mind, then there is almost an 88% chance that the number of failures observed would be at least 7. This suggests that if the experimenter has an idea of the number of failures to be observed and chooses the termination time T to be large, then a Type-I censoring scheme would end up resulting in an unnecessarily long life-test with high probability;

    (3)  Furthermore, in the case of Type-I censoring, it can also be observed that, with the preliminary value of m one has in mind, if the termination time T had been chosen to be small, then with high probability the actual number of complete failures that will be observed will end up being less than m. For example, if the experimenter had a particular m in mind but had chosen the termination time to be small, then there is almost a 62% chance that the number of failures observed would be at most 7, less than the number that was in the mind of the experimenter prior to conducting the experiment. Thus, it is more likely that the life-test would be concluded by time T in this case;

    (4)  The final point worth mentioning is that, in the case of Type-II censoring, the test duration would be long if one were to choose m to be large in comparison to n, but, with high probability, it still may not exceed a large value of T that the experimenter may preliminarily have in mind before conducting the experiment. For example, suppose n units are under test and the number of failures to be observed has been fixed at m. If the experimenter had an idea of having the duration of the test be at least 0.8, for example, the chance that the Type-II censoring scheme would result in a test exceeding 0.8 would be only about 32%.

    Let us consider the first two points above and discuss their ramifications in statistical as well as pragmatic terms. With regard to Point (1) above, it becomes evident that, at least when the number of test units n is small, there will be a non-negligible probability that one may not observe any complete failure at all from the life-test in the case of Type-I censoring. In such a situation, it is clear that a meaningful inferential method (whether point/interval estimation or a test of hypothesis) cannot be developed unconditionally and, therefore, all pertinent inferential methods in this case need to be developed only conditionally, conditioned on the event that at least one complete failure is observed.³ In fact, this is the basis for the comment that "there is a significant difference when it comes to development of inferential methods based on these two forms of censoring" made at the beginning of the last section!

    With regard to Point (2), that if the termination time T is pre-fixed to be large under the Type-I censoring scheme, then with a high probability the actual number of failures observed would be larger than the preliminary number of failures (say, m) the experimenter would have had in mind: this is what formed the basis for the original proposal of hybrid censoring by Epstein (1954). It is for this reason that he defined the hybrid termination time $\min(Y, T)$, in order to terminate the life-test as soon as the preliminary number of failures the experimenter had in mind is achieved, and otherwise to terminate at the pre-fixed time T. We refer to this censoring scheme here as the Type-I hybrid censoring scheme, adding the phrase Type-I to emphasize that this scheme is based on a time-based guarantee (viz., not to exceed a pre-fixed time T). It is then evident that

    $\min(Y, T) = \begin{cases} Y, & \text{if } Y \leq T, \\ T, & \text{if } Y > T, \end{cases}$

    where Y has its cumulative distribution function as given in (1.1), which readily yields the mixture representation for the hybrid termination time as

    (1.3)  $\min(Y, T) \stackrel{d}{=} \begin{cases} (Y \mid Y \leq T), & \text{with probability } F_Y(T), \\ T, & \text{with probability } 1 - F_Y(T), \end{cases}$

    with the mixture probability as in (1.2). Strictly speaking, in the above mixture form, T may be viewed as a degenerate random variable at time T. With the time $\min(Y, T)$, it is of interest to associate a count random variable, say $D_1$ (analogous to D), corresponding to the number of complete failures observed in the life-test. It is then clear that

    (1.4)  $P(D_1 = d) = \binom{n}{d} \{F(T)\}^{d} \{1 - F(T)\}^{n-d}$ for $d = 0, 1, \ldots, m-1$, and $P(D_1 = m) = \sum_{j=m}^{n} \binom{n}{j} \{F(T)\}^{j} \{1 - F(T)\}^{n-j} = F_Y(T),$

    which is in fact a clumped binomial random variable, with all the binomial probabilities for m to n being clumped at the value m; see, for example, Johnson et al. (2005) for details on this clumped binomial distribution. From (1.3) and (1.4), the values of $E\{\min(Y, T)\}$ and $E(D_1)$ can be readily computed, and these are presented in Table 1.2 for the purpose of comparing the Type-I censoring and Type-I hybrid censoring schemes in terms of termination time and expected number of failures.

    Table 1.2
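    The corresponding quantities for the Type-I hybrid censoring scheme, namely $E\{\min(Y, T)\}$ from (1.3) and $E(D_1)$ from (1.4), can be computed along the same lines; the sketch below continues the earlier illustration, again under the purely hypothetical exponential setting and parameter choices used there.

```python
# Continuing the sketch above (same hypothetical n, m, T and exponential F):
# Type-I hybrid censoring terminates the life-test at min(Y, T).
E_minYT = quad(lambda y: 1.0 - cdf_Y(y), 0, T)[0]    # E[min(Y, T)]

# E(D_1) for the clumped binomial in (1.4): the Bin(n, F(T)) probabilities
# for d = m, ..., n are all lumped at the value m.
d = np.arange(0, m)
E_D1 = np.sum(d * binom.pmf(d, n, p)) + m * binom.sf(m - 1, n, p)

print(f"E[min(Y, T)] = {E_minYT:.3f},  E(D_1) = {E_D1:.3f}")
```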

    From Table 1.2, we observe the following points:

    (1)  The intended purpose of the Type-I hybrid censoring scheme, as introduced by Epstein (1954), is clearly achieved in certain circumstances. For example, if the experimenter was planning to conduct the life-test up to a given termination time T and had an interest in observing 7 complete failures out of the total number of units under test, then under the Type-I hybrid censoring scheme, the test on average would have lasted for a period of 0.61, and would have resulted in 6.44 complete failures on average (instead of the 7 the experimenter had in mind). Instead, had the experimenter planned to conduct the life-test for a longer period in the same setting, then the test would have lasted on average for a period of 0.63, and would have resulted in 6.84 complete failures on average;

    (2)  However, if the time T had been chosen to be too small relative to the number of complete failures one is interested in observing, then the life-test, with high probability, would end by time T. For example, if the experimenter had chosen to conduct the life-test for a short period but had an interest in observing possibly 7 complete failures from the life-test, then the test on average would have lasted for a period of 0.49, and would have resulted in 4.93 complete failures on average (instead of the 7 the experimenter had in mind). This follows intuitively from the fact that, in this case, there is about an 83% chance that the 7th failure would occur after time T (see the value reported in the last column of Table 1.2);

    (3)  A final point worth noting is that, as in the Type-I censoring scheme, the case of no failures is a possibility in the case of the Type-I hybrid censoring scheme as well! This then means that inferential procedures can be developed only conditionally, conditioned on the event that at least one complete failure is observed, just as in the case of Type-I censoring!

    While Point (1) above highlights the practical utility of the Type-I hybrid censoring scheme, Point (2) indicates a potential shortcoming of the Type-I hybrid censoring scheme, exactly as Point (3) indicated earlier in the case of Type-I censoring, viz., that if the test time T is chosen to be too small in comparison to the number of complete failures the experimenter wishes to observe from the life-test, then, with high probability, the test would terminate by time T, in which case few failures will be observed from the life-test, leading to possibly imprecise inferential results.

    It is precisely this point that led Childs et al. (2003) to propose another form of hybrid censoring, based on the hybrid termination time $\max(Y, T)$, called the Type-II hybrid censoring scheme. It is of interest to mention that the phrase Type-II is incorporated here in order to emphasize the fact that this censoring scheme provides a guarantee on the number of failures to be observed (viz., that the observed number of complete failures would be at least m). It is then clear in this case that

    (1.5)  $\max(Y, T) = \begin{cases} Y, & \text{if } Y \geq T, \\ T, & \text{if } Y < T, \end{cases}$

    where Y has its cdf as given in (1.1), which readily yields the mixture representation for the hybrid termination time as

    (1.6)  $\max(Y, T) \stackrel{d}{=} \begin{cases} (Y \mid Y > T), & \text{with probability } 1 - F_Y(T), \\ T, & \text{with probability } F_Y(T), \end{cases}$

    with the mixture probability as in (1.2). Here again, T may be viewed as a degenerate random variable at time T. With the time $\max(Y, T)$, it will be useful to associate a count random variable, say $D_2$ (analogous to D), corresponding to the number of complete failures observed in the life-test, with support $\{m, m+1, \ldots, n\}$ and probability mass function as

    (1.7)  $P(D_2 = m) = \sum_{j=0}^{m} \binom{n}{j} \{F(T)\}^{j} \{1 - F(T)\}^{n-j}$, and $P(D_2 = d) = \binom{n}{d} \{F(T)\}^{d} \{1 - F(T)\}^{n-d}$ for $d = m+1, \ldots, n,$

    which is in fact a clumped binomial random variable, with all the binomial probabilities for 0 to m being clumped at the value m. Observe the difference in the two clumped binomial distributions that arise in the cases of the Type-I hybrid censoring scheme and the Type-II hybrid censoring scheme here; in the former, it is clumped on the right at the value m, and in the latter, it is clumped on the left at the value m. It is instructive to note here that the event $\{D_2 = m\}$ would occur in both cases listed in (1.5): in the first case, when $Y \geq T$, the termination of the life-test would occur at Y, resulting in exactly m complete failures, and in the second case, when $Y < T$, the termination would occur at T, but if no failure occurs in the interval $(Y, T]$, then also exactly m complete failures would be realized. Now, from (1.6) and (1.7), we can readily compute the values of $E\{\max(Y, T)\}$ and $E(D_2)$. These are presented in Table 1.3 for the purpose of comparing the Type-II censoring and Type-II hybrid censoring schemes in terms of termination time and expected number of failures.

    Table 1.3
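    Similarly, $E\{\max(Y, T)\}$ and $E(D_2)$, as given by (1.6) and (1.7), can be computed as in the continuation below, once more under the same hypothetical settings as in the earlier sketches.

```python
# Type-II hybrid censoring terminates the life-test at max(Y, T)
# (same hypothetical settings as in the earlier sketches).
E_maxYT = T + quad(lambda y: 1.0 - cdf_Y(y), T, np.inf)[0]   # E[max(Y, T)]

# E(D_2) for the clumped binomial in (1.7): the Bin(n, F(T)) probabilities
# for d = 0, ..., m are all lumped at the value m.
d = np.arange(m + 1, n + 1)
E_D2 = m * binom.cdf(m, n, p) + np.sum(d * binom.pmf(d, n, p))

print(f"E[max(Y, T)] = {E_maxYT:.3f},  E(D_2) = {E_D2:.3f}")
```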

    One thing to note with the Type-II hybrid censoring scheme is that all pertinent inferential methods based on it will be unconditional, just as in the case of Type-II censoring, due to the fact that at least m failures are guaranteed to be observed. Additional advantages and the intended purpose of the Type-II hybrid censoring scheme, as stated originally by Childs et al. (2003), become clear from Table 1.3. If the experimenter had chosen the test time T to be small compared to the number of complete failures to be observed from the life-test, then the life-test in all likelihood would proceed until that many complete failures are observed. For example, for the choice of $m = 8$, if the experimenter had chosen the time T to be 0.4, then there is about a 99% chance that the 8th failure would occur after $T = 0.4$, and so the expected duration of the test becomes 0.728 and, consequently, the expected number of complete failures observed becomes 8.002, as seen in Table 1.3. On the other hand, had the experimenter chosen the test time T to be large compared to the number of complete failures to be observed from the life-test, then the life-test in all likelihood would proceed until time T. For example, for the choice of $m = 8$, if the experimenter had chosen the time T to be 0.8, then there is about a 3% chance that the 8th failure would occur before $T = 0.8$, in which case we see that the expected duration of the test is 0.801 and the expected number of failures observed becomes 8.04. Thus, in either case, the Type-II hybrid censoring scheme provides a guarantee of observing enough complete failures from the test to facilitate the development of precise inferential results, whether point/interval estimation or hypothesis tests. Of course, this advantage naturally comes at the price of a longer life-test than under any of Type-I censoring, Type-I hybrid censoring, and Type-II censoring.

    1.4 Antecedents

    As mentioned earlier, Epstein (1954) was the first one to introduce a hybrid censoring scheme to facilitate early termination of a life-test as soon as a certain number of failures the experimenter had in mind is achieved, instead of carrying on with the test until the pre-fixed time T. He then considered the case of exponential lifetimes and derived expressions for the mean termination time as well as the expected number of failures under the Type-I hybrid censoring scheme that he introduced. In addition, he also considered a replacement case in which failed units are replaced at once by new units drawn from the same exponential population, and derived explicit expressions for the same quantities so that a comparison could be made between the two cases. Later, Epstein (1960b) developed hypothesis tests concerning the exponential mean parameter, while Epstein (1960a,c) discussed the construction of confidence intervals (one-sided and two-sided) for the mean lifetime of an exponential distribution based on Type-I hybrid censored data, using the chi-square distribution for the pivotal quantity and an approximate chi-square percentage point even in the case of no failures. These procedures were subsequently adopted as reliability qualification tests and reliability acceptance tests based on exponential lifetimes as standard test plans in MIL-STD-781-C (1977), wherein the performance requirement is specified through mean-time-between-failure (MTBF). Harter (1978) evaluated the performance of these confidence bounds for the MTBF through Monte Carlo simulations. A formal rule for obtaining a two-sided confidence interval for the MTBF, in the exponential case, was given by Fairbanks et al. (1982), who demonstrated that their rule is very close to the approximation provided earlier by Epstein (1960c) and also provided a validation for their rule.

    In the paper by Bartholomew (1963), the exact conditional distribution of the maximum likelihood estimator of the mean of an exponential distribution under a time-censored life-test (i.e., under Type-I censoring) was derived through the conditional moment generating function (conditional MGF) approach, conditioned on the event that at least one complete failure is observed. This method was adopted by Chen and Bhattacharyya (1988) to develop the exact distribution theory for the maximum likelihood estimator of the exponential mean lifetime under the Type-I hybrid censoring scheme, and the conditional moment generating function approach has since become a standard tool for developing exact distribution theory for maximum likelihood estimators of parameters under various forms of hybrid censored data, as will be seen in the ensuing chapters.

    The prediction of times of future failures, based on a Type-I hybrid censoring scheme, for the case of the exponential distribution, was discussed by Ebrahimi (1992) for both cases, when the failed units are not replaced and when they are replaced by new units. All the works mentioned so far dealt with a scaled exponential distribution, involving only the mean lifetime parameter. A two-parameter exponential distribution, consisting of a threshold parameter (interpreted as a guarantee period in the reliability literature) and a scale parameter (relating to the residual mean lifetime), was considered by Ebrahimi (1986), who then developed point and interval estimation methods as well as hypothesis tests for both cases, when the hybrid life-tests are conducted without and with replacement of failed units.

    All these early developments concerned various inferential aspects based on data observed from the Type-I hybrid censoring scheme, as introduced by Epstein (1954). But, as mentioned in the preceding section, the Type-II hybrid censoring scheme, guaranteeing at least a pre-specified number of complete failures to be observed in the life-test, was introduced by Childs et al. (2003) to overcome some of the shortcomings of the Type-I hybrid censoring scheme; this advantage, however, comes at the price of a longer life-test, as mentioned earlier. Since then, the literature on hybrid censoring has exploded with varying forms of hybrid schemes, for many different lifetime distributions, and the development of a wide range of inferential methods. The following section gives an account of the recent growth in this area of research!

    1.5 Burgeoning literature

    As mentioned above, the literature on hybrid censoring has grown significantly in recent years. For example, a quick search on zbMATH, using "hybrid censoring", "hybrid censored", "truncated life test", "truncated life tests", and "truncated life testing" as keywords, produced the frequency table (Table 1.4) and the histogram of the publication record⁴ (see Fig. 1.1).

    Table 1.4

    Figure 1.1 Histogram of the publication record data on hybrid censoring for the period 1954–2022, for the time intervals given in Table 1.4.

    1.6 Scope of the book

    The primary objectives in preparing this book have been to produce an up-to-date volume, with emphasis on both theory and applications, that will serve as a reference guide for practitioners involved in the design of a life-testing experiment as well as in the analysis of lifetime data observed from such experiments. The models and methods described in the book provide the reader with know-how regarding the designs and implementations of various hybrid censoring schemes and their merits and demerits. A central aspect of our presentation and analysis of the different hybrid censoring schemes is to identify the key shared features as well as the structural elements. In particular, this enables a structured and efficient approach to complex hybrid censoring models and is intended to support both the design and analysis of new, possibly even more complex models and the development of associated statistical procedures. A complete treatment of basic theory, including the derivation of all associated characteristics, properties, and inferential results, is presented for all the different hybrid censoring schemes considered in the literature. To facilitate a better understanding and appreciation of these developments, a comprehensive review of all pertinent results on the conventional censoring schemes, viz., Type-II, Type-I, and progressive censoring, is also provided.
