Handbook of Evidence-Based Practice in Clinical Psychology, Adult Disorders
Ebook · 2,006 pages · 23 hours


About this ebook

Handbook of Evidence-Based Practice in Clinical Psychology, Volume 2 covers the evidence-based practices now identified for treating adults with a wide range of DSM disorders. Topics include fundamental issues, adult cognitive disorders, substance-related disorders, psychotic, mood, and anxiety disorders, and sexual disorders. Each chapter provides a comprehensive review of the evidence-based practice literature for each disorder and then covers several different treatment types for clinical implementation. Edited by the renowned Peter Sturmey and Michel Hersen and featuring contributions from experts in the field, this reference is ideal for academics, researchers, and libraries.
Language: English
Publisher: Wiley
Release date: Aug 2, 2012
ISBN: 9781118144756
Author

Michel Hersen

Michel Hersen (Ph.D., State University of New York at Buffalo, 1966) is Professor and Dean, School of Professional Psychology, Pacific University, Forest Grove, Oregon. He is Past President of the Association for Advancement of Behavior Therapy. He has written 4 books and co-authored and co-edited 126 books, including the Handbook of Prescriptive Treatments for Adults and Single Case Experimental Designs. He has also published more than 220 scientific journal articles and is co-editor of several psychological journals, including Behavior Modification, Clinical Psychology Review, Journal of Anxiety Disorders, Journal of Family Violence, Journal of Developmental and Physical Disabilities, Journal of Clinical Geropsychology, and Aggression and Violent Behavior: A Review Journal. With Alan S. Bellack, he is co-editor of the recently published 11-volume work entitled Comprehensive Clinical Psychology. Dr. Hersen has been the recipient of numerous grants from the National Institute of Mental Health, the Department of Education, the National Institute of Disabilities and Rehabilitation Research, and the March of Dimes Birth Defects Foundation. He is a Diplomate of the American Board of Professional Psychology, a Distinguished Practitioner and Member of the National Academy of Practice in Psychology, and recipient of the 1996 Distinguished Career Achievement Award from the American Board of Medical Psychotherapists and Psychodiagnosticians. Dr. Hersen has written and edited numerous articles, chapters, and books on clinical assessment.


    Book preview

    Handbook of Evidence-Based Practice in Clinical Psychology, Adult Disorders - Michel Hersen

    PART I

    Overview and Foundational Issues

    Chapter 1

    Evidence-Based Practice in Adult Mental Health*

    B. CHRISTOPHER FRUEH, JULIAN D. FORD, JON D. ELHAI, AND ANOUK L. GRUBAUGH

    INTRODUCTION

    There is widespread and growing awareness that behavioral and mental health care, like other sectors of health care, requires rigorous practice standards and professional accountability (Institute of Medicine, 2001; Kazdin, 2008; President’s New Freedom Commission on Mental Health, 2003). Evidence-based practice (EBP) and empirically supported treatments are a critical element of these standards for both child and adult populations (APA, 2006; Barlow, 2000; Spring, 2007; Spring et al., 2008; Torrey et al., 2001; Weisz, Hawley, Pilkonis, Woody, & Follette, 2000). Unfortunately, interventions used in clinical, behavioral, and mental health practice settings are often not carefully based on empirical evidence, resulting in a discrepancy between research and practice (Cook, Schnurr, & Foa, 2004; P. W. Corrigan, Steiner, McCracken, Blaser, & Barr, 2001; Ferrell, 2009; Frueh, Cusack, Grubaugh, Sauvageot, & Wells, 2006; Gray, Elhai, & Schmidt, 2007; Henggeler, Sheidow, Cunningham, Donohue, & Ford, 2008; Kazdin, 2008; Schoenwald & Hoagwood, 2001; Stewart & Chambless, 2007). In this chapter, we provide an overview of EBP in adult mental health, including definitions, purpose, processes, and challenges.

    DEFINING EVIDENCE-BASED PRACTICE

    Evidence-based practice is an empirically based approach to identifying and appraising the best available scientific data in order to guide the implementation of assessment and intervention practices. This entails making decisions about how to integrate scientific evidence with clinical practice, taking account of relevant practice setting, population, provider, and other contextual characteristics. Various definitions of exactly what constitutes an EBP have been proposed. Some have suggested that designation of an intervention as an EBP requires favorable empirical support from at least two randomized controlled trials (RCTs) conducted by independent researchers/labs (Chambless & Hollon, 1998), or seven to nine smaller experimental design studies, each with at least three subjects, conducted by at least two independent researchers (Chambless & Hollon, 1998; Lonigan, Elbert, & Johnson, 1998). These requirements were proposed in order to define specific treatment models as empirically supported treatments (ESTs). ESTs are a subcategory of EBP that focuses on specific (usually manualized) treatment models for which substantial scientific evidence of efficacy or effectiveness has been accrued.

    Others have proposed the value of expert consensus panels, meta-analyses, and/or Cochrane database reviews to overcome the potential biases of individual or critical reviews (Spring et al., 2008). Further, governments and health insurance companies have developed detailed EBP guidelines for specific psychiatric disorders, such as the UK’s National Institute for Health and Clinical Excellence (NICE, 2005) and the United States’ Institute of Medicine and National Research Council (IOM, 2007) guidelines for treating posttraumatic stress disorder (PTSD), in order to guide (or mandate) efficacious mental health-care practices.

    EBP does not necessarily imply the designation of certain treatment models as evidence based. An alternative way to conceptualize EBP is to place less emphasis on specific intervention protocols (e.g., manualized treatment models) and focus instead on empirically supported general content-domain practice elements (Chorpita, Daleiden, & Weisz, 2005; Rosen & Davison, 2003). For example, practice elements might include the development of a therapeutic working alliance and enhancing client motivation, teaching of skills for coping with symptoms, or facilitation of therapeutic processing of distressing emotions.

    In addition, research evidence is not necessarily the only basis for determining what constitutes EBP. The American Psychological Association Presidential Task Force on Evidence-Based Practice (2006) explicitly proposed requiring evidence from clinicians’ real-world observations and from client values and preferences in addition to research evidence as a basis for establishing EBP. These added requirements reflect an attempt to ensure that EBP is not only likely to produce quantifiable outcomes (based on the results of scientific research), but will also have utility for clinicians (First et al., 2003) and will be acceptable to and respectful of the recipients of the services. Regardless of the specific evidentiary requirements that are defined as necessary to establish a mental or behavioral health practice as evidence-based, EBP must be defined in terms of behaviorally specific practices that can be readily and reliably taught to and followed by clinicians. Both treatment models and transtheoretical practice elements involve competencies that must be operationalized and replicable. Practitioner competencies for EBP fall into four broad areas: (1) assessment skills, (2) process skills (i.e., enhancing client motivation and the clinician-client working alliance), (3) communication skills for collaborative decision making, and (4) intervention skills (Spring et al., 2008).

    Two other important concepts related to EBP require definition. Dissemination is the targeted distribution of synthesized scientific evidence and materials related to an intervention, practice, or clinical population to relevant key stakeholders (e.g., health-care administrators, clinicians, patients). Implementation is the use of specific strategies to ensure the successful adoption of disseminated EBPs and integration into practice patterns within clinical settings.

    PURPOSE

    EBP in mental health care is important for several reasons. It provides a shared vocabulary and conceptual framework that facilitate transdisciplinary research and high-quality practice in mental health care, along with a process to ensure accountability and reduce the research-practice gap for the sake of the public health (IOM, 2001; Kazdin, 2008). Additionally, this shared framework improves communication among professionals and disciplines, thus facilitating the dissemination and implementation of the very best available clinical practices with sufficient fidelity to ensure high-quality services.

    THE PROCESS OF EVIDENCE-BASED PRACTICE

    Because EBP is multifaceted and constantly evolving as empirical knowledge accumulates, it requires an ongoing process. This process, a central tenet of EBP, involves several steps (as outlined by Spring, 2007, and Spring et al., 2008).

    1. Ask patient-centered questions relevant at the individual, community, or population level. For example, questions that have informed the development of EBP include: (a) Who are the patients who do not respond favorably to the best available treatments (e.g., those with Axis II personality disorders or more chronic symptoms), and how can adaptations of these treatments or alternative new treatments effectively address the barriers or problems that have limited these patients’ ability to benefit? (b) What are the core symptoms or features of each disorder that must be addressed therapeutically in order to produce clinically significant change, and how can treatment be structured to directly address those symptoms or features? (c) What modifications in treatment models or practices can increase the pace at which change occurs, in order to relieve patients’ suffering and increase their functioning in the most timely and least costly manner?

    2. Identify and acquire the best available empirical evidence to address relevant questions. As noted earlier, the evidence should include the results of scientifically rigorous research, observations of how clinicians actually deliver services, and preferences expressed by patients that are relevant to effectively engaging and motivating them in treatment.

    3. Appraise the evidence critically (see next section) in order to make appropriate implementation decisions.

    4. Apply the evidence in practice, taking into account relevant factors such as limitations in the evidence base, clinical context, patient values and preferences, and available resources.

    5. Assess outcomes, adjust in an iterative (and ongoing) manner, and disseminate when appropriate.

    EVIDENCE APPRAISAL IN EVIDENCE-BASED PRACTICE

    In order to make the most effective practice decisions, the best available empirical evidence must not only be identified and acquired, but also critically appraised and integrated. Relevant data can take many forms, including single-case, time-series, and open trials; randomized clinical trials; meta-analyses; and consensus panels or agency guidelines.

    Single-case, time-series, and open trials: Smaller, nonrandomized treatment studies are typically an important early step in the development and evaluation of new interventions or applications of established interventions to new populations or via novel service delivery modes. Such studies can provide important information about intervention feasibility, acceptance of the intervention by patients and providers, and potential for efficacy. Alone, however, these trials rarely provide sufficient evidence to support an intervention as an EBP.

    Randomized clinical trials (RCTs): Larger, randomized trials that are designed to carefully control for alternative factors that may account for what appear to be the outcomes of a treatment are usually the gold standard required for acknowledging an intervention as an EBP. There are a number of key elements to consider when evaluating the quality and applicability of an RCT (Borkovec & Castonguay, 1998; Chambless & Hollon, 1998). These include: (1) study design, (2) methods and measures, (3) sample characteristics and size, (4) clinician characteristics, (5) dependent variable considerations, (6) data analyses, (7) results and effect sizes (statistical and clinical significance), and (8) potential side effects and adverse event outcomes. See also the Consolidated Standards of Reporting Trials (CONSORT) statement, which was developed to improve the quality of reports of RCTs (Begg et al., 1996; CONSORT, 2009).
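
    The chapter lists effect sizes among the appraisal elements but does not define them. As a minimal illustration (a standard formulation, not one given in this chapter), the effect size most often reported for a two-arm trial is Cohen's d, the standardized difference between treatment and control means:

    \[ d = \frac{\bar{X}_{\text{treatment}} - \bar{X}_{\text{control}}}{s_{\text{pooled}}}, \qquad s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}} \]

    By Cohen's widely used benchmarks, values near 0.2, 0.5, and 0.8 are read as small, medium, and large effects, which helps clarify the distinction drawn in element (7) above: a statistically significant result can still correspond to a small d of limited clinical significance.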

    Clinical trials are often classified according to their phase (I to IV; NIH Guidelines, 2009) based on a system originally developed for medical outcome studies. A Phase I clinical trial involves testing a treatment model or practice with a relatively small number of recipients (in pharmacotherapy research this tends to range between 20 and 80) who are assessed before and after (and often during) the treatment in order to establish whether the treatment is safe and associated with sufficient benefits to warrant further testing. Phase I clinical trials may also test different variations of the treatment, such as fewer or more sessions (comparable to the dose of a medicine), and the mechanisms by which the treatment achieves outcomes (comparable to testing how a medicine is metabolized and affects the body). Phase II clinical trials test the efficacy of a treatment by rigorously comparing its outcomes versus those of usual clinical care or relatively innocuous alternative conditions that control for alternative possible sources of improvement (comparable to a placebo in medical research). Phase III clinical trials test the effectiveness of a treatment by administering it to much larger numbers of recipients (several hundred to thousands) in real-world circumstances that may include a comparison with the best available alternative treatment(s), careful monitoring of side effects, and follow-up assessments to determine if the benefits are sustained over time. Finally, Phase IV trials typically constitute postmarketing studies that are geared toward gathering more specific information about the risks, benefits, and optimal use of the intervention.

    Critical reviews, meta-analyses, consensus panels, and agency guidelines: Reviews of the empirical knowledge base can take a variety of forms, including objective efforts to quantifiably summarize and synthesize a large number of RCTs (e.g., meta-analyses). Literature reviews can also help summarize what types of studies have been conducted and organize evidence to address a range of potentially important questions that extend beyond those addressed by a single RCT. These include questions regarding short- and long-term efficacy, efficacy for specific subgroups, effectiveness in practice settings, comparisons across multiple interventions, limitations, and future directions for research and development.
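
    To make concrete what it means to quantifiably summarize and synthesize trials (again a standard textbook formulation, not drawn from this chapter), a fixed-effect meta-analysis pools the effect estimates \( \theta_i \) from k trials by weighting each by the inverse of its variance:

    \[ \hat{\theta} = \frac{\sum_{i=1}^{k} w_i \theta_i}{\sum_{i=1}^{k} w_i}, \qquad w_i = \frac{1}{\operatorname{Var}(\theta_i)} \]

    Larger, more precise trials therefore contribute more to the pooled estimate, which is one reason meta-analytic conclusions can be more stable than those of any single RCT.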

    Efficacy and effectiveness: These two forms of research represent two broad methods for evaluating outcomes, with efficacy study designs emphasizing internal validity (i.e., whether the intervention works in a controlled research setting) and effectiveness studies emphasizing external validity (whether the intervention works in real-world practice settings; Frueh, Monnier, Elhai, Grubaugh, & Knapp, 2004; Seligman, 1996). An RCT is a type of efficacy study that includes the use of manualized protocols with a fixed number of sessions and random assignment to different conditions. Although important for drawing inferences about causality, an inherent limitation of most RCTs is that they tend to emphasize laboratory rigor over real-world implementation. That is, RCTs generally include lengthy assessments that may or may not be practical in other settings, or they rely on interventions that may not easily translate to other settings due to varying provider and patient characteristics. Most RCTs to date have excluded patients with the most severe forms of the disorder being targeted, those with comorbid diagnoses, and those generally considered fragile or vulnerable. Additionally, most RCTs do not adequately represent ethnoracial minorities. These issues have raised questions among clinicians regarding the effectiveness of these interventions for the patients seen in their practice settings, many of whom have these characteristics.

    Keeping up with the literature: Because scientific knowledge is constantly accumulating, EBP requires a continuous quality improvement perspective (IOM, 2001). New treatments or practices are under development constantly in the mental and behavioral health field, with research supporting their efficacy and effectiveness often emerging quite rapidly (despite the fact that clinical trials usually require several years to complete each phase). For example, only two medication treatments (sertraline, paroxetine) are considered sufficiently safe, efficacious, and effective for adults with PTSD to warrant approval by the U.S. Food and Drug Administration (which establishes federal guidelines for EBP for all pharmaceutical treatments), despite over 30 years of rigorous clinical trials since that diagnosis was formally recognized by the American Psychiatric Association in 1980 in the Diagnostic and Statistical Manual of Mental Disorders (third edition)—and no medication has been FDA approved for the treatment of PTSD in children. However, between 2000 and 2002 a series of Phase I clinical trials was reported suggesting that an antihypertensive medication (prazosin) was associated with reduced nightmares in PTSD, and from 2003 to 2008 several large Phase II clinical trials confirmed the efficacy of prazosin for PTSD nightmares and for some of the core daytime symptoms of PTSD as well (Raskind et al., 2007; F. Taylor et al., 2006; H. Taylor et al., 2008).

    CULTURAL COMPETENCE IN EBP

    Evidence-based practice by its very definition requires respect for diversity and knowledge about the limitations of EBPs as they pertain to various groups (Spring et al., 2008; Whaley & Davis, 2007). Because ethnoracial minorities are often not well represented in RCTs, concerns have been raised about the validity of EBPs for ethnoracial minorities and whether EBP standards are even relevant for many underserved/understudied groups. Certainly, more research needs to be conducted with such groups in a variety of practice settings, with a focus on effectiveness research. However, it is not realistic to conduct efficacy or effectiveness trials for every possible configuration of intervention, comorbid condition, practice setting, and ethnoracial or socioeconomic status group. This alone is not reason enough to dismiss using theoretically sound and empirically supported interventions. Rather, it is important to follow the EBP process outlined earlier, reviewing, synthesizing, and adapting the best available empirical data to make contextualized practice decisions that take into account limitations of the existing knowledge base. In fact, the perspectives of cultural competence and EBP are complementary to each other in that they each emphasize the importance of thoughtfully adapting interventions from RCTs for use with specific populations and clinical contexts (Whaley & Davis, 2007). In this regard, extant empirical data can be used to tailor and refine interventions as needed to ensure that they are sensitive to and appropriate for specific clinical populations.

    CHALLENGES TO DISSEMINATION AND IMPLEMENTATION OF EBP IN PRACTICE SETTINGS

    Empirical evidence limitations: A major barrier to dissemination and implementation of EBP for many adult psychiatric disorders is that the empirical literature base remains undeveloped, especially with regard to co-occurring disorders and among underserved/understudied populations. We know very little about the efficacy of established interventions for patients with multiple psychiatric diagnoses, or about the optimal timing of treating one disorder versus another among those with dual diagnoses. For example, a clinician may rightly be hesitant to use, with a depressed and anxious patient, a specific EBP intervention that clinical trials have shown to be efficacious for depressed patients, since it may be unclear how well the EBP’s treatment effects generalize to patients with comorbid anxiety. Also, a clinician working in independent practice may be leery of adopting an EBP that proved efficacious in an academic medical center’s RCT, since RCTs often have strict eligibility and exclusion criteria and tend to provide treatments in a time-limited format that often is not sufficient to fully address complicated clinical problems. However, evidence actually suggests that private practice and community setting patients show gains comparable to those published in academic medical centers’ RCTs despite RCTs’ strict inclusion criteria. Evidence also suggests that diagnostic complexity does not substantially alter the effectiveness of EBPs tested on only a single disorder. Collaboration between researchers and clinicians has resulted in innovative adaptations of ESTs designed to enhance their applicability to clinicians and patients in real-world settings (e.g., Cook et al., 2004; Fava et al., 2006; Kazdin, 2001; Stroup et al., 2006).

    Barriers to dissemination and implementation of EBP in practice settings: There is little evidence that EBPs are yet effectively disseminated or implemented in the vast majority of real-world practice settings, or that EBPs are implemented in ways that are likely to support wider dissemination efforts (Drake et al., 2001; Gold, Glynn, & Mueser, 2006; Mueser, Torrey, Lynde, Singer, & Drake, 2003; Shumway & Sentell, 2004). The literature on effective dissemination practices emphasizes the need to provide clinicians the training, tools, and ongoing supervision to deliver empirically validated treatments (P. W. Corrigan, Steiner, McCracken, Blaser, & Barr, 2001; Friedberg, Gorman, & Beidel, 2009; Henggeler et al., 2008; Torrey et al., 2001). Although necessary, these strategies are recognized as insufficient to overcome clinical and administrative barriers to the implementation and maintenance of EBPs in most practice settings, public and private. These barriers generally include lack of motivation and resistance to change among providers, lack of skills and inadequate training among providers, limited resources and deficient incentives for providers and administrators, cost concerns regarding implementation and maintenance, lack of ongoing quality assurance or fidelity monitoring, limited involvement and commitment from key stakeholders, diffuse leadership, and insufficient accountability at multiple organizational levels (Addis & Waltz, 2002; P. W. Corrigan et al., 2001; P. Corrigan, McCracken, & Blaser, 2003; Drake et al., 2001; Frueh et al., 2009; Ganju, 2003; Mueser et al., 2003; Schoenwald & Henggeler, 2003; Schoenwald & Hoagwood, 2001; Torrey et al., 2001).

    Practitioner beliefs and resistance: Practitioner beliefs about and resistance to EBP are a major concern. Clinicians often have concerns regarding the effectiveness of EBPs: that the therapeutic relationship may be compromised by potentially sterile treatment manuals, that individual patient needs will not be met, that treatment credibility is undermined by a formulaic lockstep approach, that EBPs are contraindicated in the most typical patients (e.g., those with comorbid conditions, ethnoracial minorities; see the aforementioned), that clinical innovation is hampered, and that service innovations may reflect the interests and needs of administrators or payers of services rather than patients (Addis, 2002; Barlow, Levitt, & Bufka, 1999; Frueh, Cusack, Grubaugh, Sauvageot, & Wells, 2006; Gold et al., 2006; Hoagwood, Burns, Kiser, Ringeisen, & Schoenwald, 2001). Additionally, even where attitudes toward EBP are positive, logistical challenges frequently hamper implementation efforts. These include difficulty in learning new skills; lack of infrastructure to provide clinicians with training, ongoing supervision, and feedback (i.e., to maintain fidelity of implementation); and lack of researcher-clinician partnerships (Cook et al., 2004; P. W. Corrigan et al., 2001; Schoenwald et al., 2003; Sullivan et al., 2005; Torrey et al., 2001). In fact, a survey of practicing psychologists found that fewer than half had a clear idea of what treatment manuals are, with most mistakenly believing that they are cookbook approaches imposed by insurers (Addis & Krasnow, 2000).

    Practitioner implementation: Even when practitioner beliefs and expectations regarding EBP are positive, this does not necessarily translate into actual or effective implementation. Studies evaluating attitudes toward and use of EBP among mental health professionals show that although only a minority of participants report negative views of EBPs, favorable EBP attitudes are not strongly correlated with reported clinical behaviors (e.g., Gray, Elhai, & Schmidt, 2007). Even respondents who tend to rely on treatments that are not empirically based report positive opinions about EBPs, suggesting that practitioners may hold widely varying evidentiary standards (Frueh et al., 2006).

    Limited practice accountability at provider, facility, and system levels: Unfortunately, practice accountability, treatment quality, and outcomes are not systematically linked at all levels in most mental health clinics and systems (Gold et al., 2006). Performance standards usually focus on counting the number of patients treated (e.g., billable hours), general procedure categories administered, and medical record-keeping tasks (e.g., progress notes completed). While these administrative criteria are important in terms of documenting that work is being accomplished, they do not ensure (and may actually shift the clinician’s attention away from) the delivery of quality care and the attainment of meaningful benefits by patients. Some health-care programs and systems (such as the U.S. Veterans Affairs health-care system) also include measurement of patient attendance and dropout, scheduling of follow-up appointments, and referrals, all of which are meaningful variables. However, mental health systems rarely monitor actual clinical outcomes at the individual patient or aggregate level, improvements in social and occupational functioning, or specific details of clinical practice behaviors (e.g., treatment fidelity monitoring) to ascertain whether EBP is used at all or used appropriately. Thus, frontline clinicians, facility directors, and system administrators are rarely held accountable for using EBP. Without such accountability, large-scale adoption of EBP is unlikely to develop fully.

    STRATEGIES FOR DISSEMINATION AND IMPLEMENTATION OF EBP IN PRACTICE SETTINGS

    To effectively disseminate EBP in real-world practice settings, a multipronged effort targeted at every level of the organization (provider, facility, and system) is needed. Proposed strategies (Frueh et al., 2009) for overcoming these challenges are: set clear goals; nurture broad-based organizational commitment and key stakeholder involvement; implement specialty training efforts to provide information and change attitudes; provide ongoing technical assistance and clinical supervision; conduct fidelity and competence monitoring; and ensure accountability to the extent possible.

    1. Set clear goals: It is important to set goals that are clear, specific, and realistic—and to then communicate these goals effectively to relevant change agents and key stakeholders. Goals should be objectively measurable so that progress in relevant domains can be empirically tracked, and goals can be revised accordingly. For example, a dissemination and implementation goal might be to train a cohort of clinicians to deliver an EST with 90% fidelity (or greater) to the model based on independent quality assurance review of videotaped treatment sessions, and to sustain this level of fidelity through weekly consultation meetings in which tapes are reviewed by the clinicians and expert supervisors. An outcomes goal might be to enable patients to reduce the severity of the symptoms of their primary disorder(s) to within the subclinical or nonclinical range when a sufficient dose (e.g., length of treatment or number of sessions) of the EST has been received.

    2. Nurture broad-based organizational commitment and key stakeholder involvement: Successful dissemination and implementation of any new program within most large mental health agencies and practice settings requires broad-based organizational commitment, with representation and buy-in from the full range of key stakeholders, including patients (whose voices can be powerful and persuasive) (Cusack, Wells, Grubaugh, Hiers, & Frueh, 2007; Frueh et al., 2009; Gold et al., 2006). While administrators, clinicians, and other stakeholders are committed to improving patient outcomes, they must be convinced that proposed assessment procedures and interventions can do so cost effectively. After obtaining stakeholder buy-in, empirical data should guide decisions regarding how services will be implemented. These decisions should be flexible to ensure practices are appropriate and sensitive to the needs of both patients and providers within a particular facility. Consistent with a bottom-up approach, provider feedback and concerns regarding the intervention should be solicited and appropriately addressed. Importantly, it has been proposed that making treatment manuals more user friendly, obtaining clinician input, and refining treatment procedures to address high-priority challenges (e.g., preventing and managing psychiatric crises; engaging and achieving progress with refractory patients) are key. Such feedback could alter aspects of the intervention but will ultimately increase the likelihood that the intervention will be successful by promoting a sense of ownership and collaboration among providers (Sullivan et al., 2005).

    3. Incorporate specialized training efforts to provide information and change attitudes: Specialized training is necessary to help clinicians make the leap from learning new practices or treatments to actually incorporating the skills and knowledge into their clinical repertoire. Refresher training also is important to enable clinicians to keep up with refinements in the EBP as well as broader advances in their field, and to identify and address knowledge deficits. Available empirical data can be used to persuade stakeholders to accept the need for developing or changing clinical practices, and to provide direction on how those practices might be structured. For example, introducing specific measures or treatment manuals that are relatively user friendly and providing resources and instruction on their use can be extremely helpful to well-intentioned clinicians who want to change their practice but are unsure of where to begin. It is also important to include relevant instructional approaches that are tailored to specific audiences. Further, the use of multichannel approaches (e.g., video, Web-based, behavioral role plays) to clarify and reinforce key points is important to facilitate and consolidate learning. In combination, these training efforts can help to increase knowledge, expand skill sets, and dispel inaccurate practitioner beliefs.

    4. Provide ongoing supervision and peer review: It is not sufficient to merely provide education and training to clinicians (Henggeler et al., 2008). The dissemination and implementation literatures suggest that single or compressed training sessions in practice settings seldom result in meaningful or lasting changes. Ongoing clinical supervision is an essential component of effective dissemination and implementation efforts, providing clinicians with role modeling and guidance by experts and assistance in competently adapting the EBP or EST to their setting and patients while maintaining fidelity to the model or practices (P. W. Corrigan, Steiner, McCracken, Blaser, & Barr, 2001; Torrey et al., 2001). Obstacles in training and supervision of EBPs have hampered real-world implementation in practice settings, requiring special attention and innovative models of dissemination (Friedberg, Gorman, & Beidel, 2009; Sudak, 2009).

    5. Conduct fidelity monitoring: It is well recognized that fidelity monitoring is essential to ensure effective long-term implementation of EBP (Backer, Liberman, & Kuehnel, 1986; McGrew, Bond, Dietzen, & Salyers, 1994). Without such monitoring, protocol deviations inevitably occur, including the introduction of theoretically incompatible, unsupported, or inert treatment strategies. For example, we found that clinicians being trained in the use of cognitive-behavioral treatment for PTSD were often tempted to fall back on old habits (e.g., avoiding discussion of trauma exposure details or PTSD symptoms) or to use elements of other incompatible interventions (Frueh et al., 2009). Fidelity efforts will be most effective if they are relatively quick, user friendly, and cost efficient (Schoenwald & Henggeler, 2003). Strategies include checklists to ensure session content areas are covered (a simple checklist-based fidelity metric is sketched after this list), peer review, and formal supervision in group or individual format. Optimally, actual observation of delivery of treatment by the supervisor (e.g., behind a one-way window) and the clinician (e.g., reviewing videotapes of treatment sessions) provides real-time samples of the patient-clinician interaction that can be discussed in terms of the clinician’s successes in adhering to EBP as well as deviations that are opportunities for reexamining the purpose and benefits of the EBP and recommitting to it.

    6. Ensure accountability to the extent possible: The final, and perhaps most difficult to incorporate, component of dissemination and implementation is accountability at all levels of the organization. Once an organization has made the decision to change, implemented new procedures and services, and trained and prepared clinicians to deliver EBP, accountability is necessary. This requires: (a) ongoing assessment of protocol adherence (e.g., fidelity monitoring) and clinical outcomes, which can be used to help ensure accountability; and (b) ongoing training and supervision to maintain currency with EBP. Strategies must also be developed to incorporate goal-attainment accountability at provider, facility, and system levels. This should include both incentives for success and feedback, guidance, and ultimately consequences if goals are not met. This may be the most difficult challenge to surmount in most practice settings, and requires committed and creative leadership.
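
    Neither the chapter nor the sources it cites specify how the fidelity percentages mentioned above (e.g., the 90% goal in Strategy 1) are computed; a minimal sketch, assuming a session-level adherence checklist, is:

    \[ \text{fidelity (\%)} = \frac{\text{protocol elements delivered as specified}}{\text{applicable protocol elements}} \times 100 \]

    Under this convention, the 90% goal would mean that at least 90% of the checklist elements are rated as adequately delivered in each independently reviewed session.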

    CONCLUSIONS

    Widespread dissemination and implementation of EBP, including the conceptual framework and processes this entails, is critical to improving mental and behavioral health patient outcomes, yet is widely lacking in practice settings (Kazdin, 2008). Many clinicians still do not know that their jobs can be made easier and the benefits their patients accrue can be increased by incorporating EBP into their practice. Although EBP requires fidelity to a model of assessment and treatment and the tools for implementation (e.g., validated interview or questionnaire measures, replicable treatment strategies), the reality of clinical practice is that clinicians are already using their own idiosyncratic implicit practice models. Utilizing EBP simply makes these models explicit and open to thoughtful innovation, as opposed to the less desirable alternatives of sporadic changes in practice or unquestioning adherence to practice as usual.

    Thus, it is time that practitioners and mental health-care systems align their clinical services with the best available empirical data. Ultimately, each clinician will choose her or his own practice model, but with an explicit EBP or set of ESTs the array of choices is enlarged and each choice about assessment or treatment strategy can be made on a more fully informed basis. It is essential that EBPs are defined, researched, taught, and supervised in a manner that facilitates rather than artificially constricts the clinician’s ability to adapt practices to best suit the setting and each unique patient or cohort of patients. Schoenwald and Hoagwood (2001) noted that “the process of moving efficacious treatments to usual-care settings is complex and may require adaptations of treatments, settings, and service systems” (p. 1196). Accordingly, we have outlined a number of challenges to implementing EBP in clinical settings and suggested strategies for overcoming these challenges. These strategies include efforts at every level of the organization: clinician, facility, and system.

    A great deal of change is still needed with regard to research and practice efforts. There is need to (a) enhance the empirical database on EBP for virtually all psychiatric disorders, especially with regard to comorbid conditions and understudied populations (e.g., ethnoracial minorities); (b) conduct and systematically evaluate related dissemination and implementation strategies and outcomes; (c) expand the empirical database on EBPs, as well as dissemination and implementation efforts for interventions in highly vulnerable populations (e.g., prisoners, veterans, low-income females, children, persons with substance abuse disorders); (d) examine and initiate widespread change efforts related to mental health practices in public-sector agencies; and (e) incorporate changes to graduate and medical training curricula to reflect the advances in EBP for adult psychopathology.

    While this chapter has focused on the challenges and opportunities presented by the translation of scientific and clinical innovation into clinical practice, the true starting point for the widespread adoption of an EBP perspective to mental and behavioral health services is teaching EBP as a fundamental clinical practice framework in pregraduate and graduate education programs. When EBP is the rule and not the exception in clinical training across the mental and behavioral health professions, the dissemination and implementation of EBP in practice settings will be much closer to being the accepted and welcomed norm rather than a controversial externally imposed requirement. Precisely what EBP means in each setting and for each clinician will continue to be adaptable based on the best judgment of the clinician and the best interests of the patient, and as a result of utilizing an EBP framework those judgments and interests will be explicit and open to empirically guided enhancement. Considered in that light, EBPs provide a win-win opportunity for clinical innovation, if taught and applied in the spirit of humanistic empiricism that is their true foundation.

    REFERENCES

    Addis, M. E. (2002). Methods for disseminating research products and increasing evidence-based practice: Promises, obstacles, and future directions. Clinical Psychology: Science and Practice, 9, 367–378.

    Addis, M. E., & Krasnow, A. D. (2000). A national survey of practicing psychologists’ attitudes toward psychotherapy treatment manuals. Journal of Consulting and Clinical Psychology, 68, 331–339.

    Addis, M. E., & Waltz, J. (2002). Implicit and untested assumptions about the role of psychotherapy treatment manuals in evidence-based mental health practice. Clinical Psychology: Science and Practice, 9, 421–424.

    American Psychological Association Presidential Task Force on Evidence-Based Practice. (2006). Evidence-based practice in psychology. American Psychologist, 61, 271–285.

    Backer, T. E., Liberman, R. P., & Kuehnel, T. G. (1986). Dissemination and adoption of innovative psychosocial interventions. Journal of Consulting and Clinical Psychology, 54, 111–118.

    Barlow, D. H. (2000). Evidence-based practice: A world view. Clinical Psychological: Science and Practice, 7, 241–242.

    Barlow, D. H., Levitt, J. T., & Bufka, L. F. (1999). The dissemination of empirically supported treatments: A view to the future. Behaviour Research and Therapy, 37, S147–S162.

    Begg, C. B., Cho, M. K., Eastwood, S., Horton, R., Moher, D., Olkin, I., . . . Stroup, D. F. (1996). Improving the quality of reporting of randomized controlled trials: The CONSORT statement. Journal of the American Medical Association, 276, 637–639.

    Borkovec, T. D., & Castonguay, L. G. (1998). What is the scientific meaning of empirically supported therapy? Journal of Consulting and Clinical Psychology, 66, 136–142.

    Chambless, D. L., & Hollon, S. D. (1998). Defining empirically supported treatments. Journal of Consulting and Clinical Psychology, 66, 7–18.

    Chorpita, B. F., Daleiden, E. L., & Weisz, J. R. (2005). Identifying and selecting the common elements of evidence-based interventions: A distillation and matching model. Mental Health Services Research, 7, 5–20.

    Consolidated Standards of Reporting Trials. (2009). Retrieved from http://www.consort-statement.org/

    Cook, J. M., Schnurr, P. P., & Foa, E. B. (2004). Bridging the gap between posttraumatic stress disorder research and clinical practice: The example of exposure therapy. Psychotherapy: Theory, Research, Practice, Training, 41, 374–387.

    Corrigan, P., McCracken, S., & Blaser, B. (2003). Disseminating evidence-based mental health practices. Evidence-Based Mental Health, 6, 4–5.

    Corrigan, P. W., Steiner, L., McCracken, S. G., Blaser, B., & Barr, M. (2001). Strategies for disseminating evidence-based practices to staff who treat people with serious mental illness. Psychiatric Services, 52, 1598–1606.

    Cusack, K. J., Wells, C. B., Grubaugh, A. L., Hiers, T. G., & Frueh, B. C. (2007). An update on the South Carolina Trauma Initiative. Psychiatric Services, 58, 708–710.

    Drake, R. E., Goldman, H. H., Leff, H. S., Lehman, A. F., Dixon, L., Mueser, K. T., & Torrey, W. C. (2001). Implementing evidence-based practices in routine mental health service settings. Psychiatric Services, 52, 179–182.

    Fava, M., Rush, A. J., Wisniewski, S. R., Nierenberg, A. A., Alpert, J. E., McGrath, P. J., . . . STAR*D Study Team. (2006). A comparison of mirtazapine and nortriptyline following two consecutive failed medication treatments for depressed outpatients: A STAR*D Report. American Journal of Psychiatry, 163, 1161–1172.

    Ferrell, C. B. (2009). Reengineering clinical research science: A focus on translational research. Behavior Modification, 33, 7–23.

    First, M., Pincus, H., Levine, J., Williams, J., Ustun, B., & Peele, R. (2003). Clinical utility as a criterion for revising psychiatric diagnoses. American Journal of Psychiatry, 161, 946–954.

    Friedberg, R. D., Gorman, A. A., & Beidel, D. C. (2009). Training psychologists for cognitive-behavioral therapy in the raw world: A rubric for supervisors. Behavior Modification, 33, 104–123.

    Frueh, B. C., Cusack, K. J., Grubaugh, A. L., Sauvageot, J. A., & Wells, C. (2006). Clinician perspectives on cognitive-behavioral treatment for PTSD among public-sector consumers with severe mental illness. Psychiatric Services, 57, 1027–1031.

    Frueh, B. C., Grubaugh, A. L., Cusack, K. J., & Elhai, J. D. (2009). Disseminating evidence-based practices for adults with PTSD and severe mental illness in public-sector mental health agencies. Behavior Modification, 33, 66–81.

    Frueh, B. C., Monnier, J., Elhai, J. D., Grubaugh, A. L., & Knapp, R. G. (2004). Telepsychiatry treatment outcome research methodology: Efficacy versus effectiveness. Telemedicine Journal and E-Health, 10, 455–458.

    Ganju, V. (2003). Implementation of evidence-based practices in state mental health systems: Implications for research and effectiveness studies. Schizophrenia Bulletin, 29, 125–131.

    Gold, P. B., Glynn, S. M., & Mueser, K. T. (2006). Challenges to implementing and sustaining comprehensive mental health service programs. Evaluation & The Health Professions, 29, 195–218.

    Gray, M. J., Elhai, J. D., & Schmidt, L. O. (2007). Trauma professionals’ attitudes towards and utilization of evidence-based practices. Behavior Modification, 31, 732–748.

    Henggeler, S. W., Sheidow, A. J., Cunningham, P. B., Donohue, B. C., & Ford, J. D. (2008). Promoting the implementation of an evidence-based intervention for adolescent marijuana abuse in community settings: Testing the use of intensive quality assurance. Journal of Clinical Child and Adolescent Psychology, 37, 682–689.

    Hoagwood, K., Burns, B. J., Kiser, L., Ringeisen, H., & Schoenwald, S. K. (2001). Evidence-based practice in child and adolescent mental health services. Psychiatric Services, 52, 1179–1189.

    Institute of Medicine and National Research Council. (2001). Crossing the quality chasm: A new health system for the 21st century. Washington, DC: The National Academies Press.

    Institute of Medicine and National Research Council. (2007). Treatment of posttraumatic stress disorder: An assessment of the evidence. Washington, DC: The National Academies Press.

    Kazdin, A. (2001). Progression of therapy research and clinical application of treatment require better understanding of the change process. Clinical Psychology: Science and Practice, 8, 143–151.

    Kazdin, A. (2008). Evidence-based treatment and practice: New opportunities to bridge clinical research and practice, enhance the knowledge base, and improve patient care. American Psychologist, 63, 146–159.

    Lonigan, C. J., Elbert, J. C., & Johnson, S. B. (1998). Empirically supported psychosocial interventions for children: An overview. Journal of Clinical Child Psychology, 27, 138–145.

    McGrew, J. H., Bond, G. R., Dietzen, L., & Salyers, M. (1994). Measuring the fidelity of implementation of a mental health program model. Journal of Consulting and Clinical Psychology, 62, 670–678.

    Mueser, K. T., Torrey, W. C., Lynde, D., Singer, P., & Drake, R. E. (2003). Implementing evidence-based practices for people with severe mental illnesses. Behavior Modification, 27, 387–411.

    National Institute for Health and Clinical Excellence. (2005). The management of posttraumatic stress disorder in primary and secondary care. London: NICE.

    National Institutes of Health. (2009). ClinicalTrials.gov. Retrieved from http://clinicaltrials.gov/ct2/info/understand

    President’s New Freedom Commission on Mental Health. (2003). Achieving the promise: Transforming mental health care in America. Final report. (Department of Health and Human Services Report No. SMA-03-3832). Rockville, MD: Department of Health and Human Services.

    Raskind, M., Peskind, E., Hoff, D., Hart, K., Holmes, H., Warren, D., . . . McFall, M. E. (2007). A parallel group placebo controlled study of Prazosin for trauma nightmares and sleep disturbance in combat veterans with Posttraumatic Stress Disorder. Biological Psychiatry, 61, 928–934.

    Rosen, G. M., & Davison, G. C. (2003). Psychology should list empirically supported principles of change (ESPs) and not credential trademarked therapies or other treatment packages. Behavior Modification, 27, 300–312.

    Schoenwald, S. K., & Henggeler, S. W. (2003). Current strategies for moving evidence-based interventions into clinical practice: Introductory comments. Cognitive & Behavioral Practice, 10, 275–277.

    Schoenwald, S. K., & Hoagwood, K. (2001). Effectiveness, transportability, and dissemination of interventions: What matters when? Psychiatric Services, 52, 1190–1197.

    Seligman, M. E. (1996). Science as an ally of practice. American Psychologist, 51, 1072–1079.

    Shumway, M., & Sentell, T. L. (2004). An examination of leading mental health journals for evidence to inform evidence-based practices. Psychiatric Services, 55, 649–653.

    Spring, B. (2007). Evidence-based practice in clinical psychology: What it is, why it matters; and what you need to know. Journal of Clinical Psychology, 63, 611–631.

    Spring, B., Walker, B., Brownson, R., Mullen, E., Newhouse, R., Satterfield, J., . . . Hitchcock, K. (2008). Definition and competencies for evidence-based behavioral practice (EBBP). Evanston, IL: Council for Training in Evidence-Based Behavioral Practice.

    Stewart, R. E., & Chambless, D. L. (2007). Does psychotherapy research inform treatment decisions in private practice? Journal of Clinical Psychology, 63, 267–281.

    Stroup, T. S., Lieberman, J. A., McEvoy, J. P., Swartz, M. S., Davis, S. M., Rosenheck, R. A., . . . CATIE Investigators. (2006). Effectiveness of olanzapine, quetiapine, risperidone, and ziprasidone in patients with chronic schizophrenia following discontinuation of a previous atypical antipsychotic. American Journal of Psychiatry, 163, 611–622.

    Sudak, D. M. (2009). Training in cognitive behavioral therapy in psychiatry residency: An overview for educators. Behavior Modification, 33, 124–137.

    Sullivan, G., Duan, N., Mukherjee, S., Kirchner, J., Perry, D., & Henderson, K. (2005). The role of health services researchers in facilitating intervention research. Psychiatric Services, 56, 537–542.

    Taylor, F., Lowe, K., Thompson, C., McFall, M., Peskind, E., Kanter, E., . . . Raskind, M. A. (2006). Daytime prazosin reduces psychological distress to trauma-specific cues in civilian trauma posttraumatic stress disorder. Biological Psychiatry, 59, 577–581.

    Taylor, H., Freeman, M., & Cates, M. (2008). Prazosin for treatment of nightmares related to posttraumatic stress disorder. American Journal of Health-System Pharmacy, 65, 716–722.

    Torrey, W. C., Drake, R. E., Dixon, L., Burns, B. J., Flynn, L., Rush, A. J., . . . Klatzker, D. (2001). Implementing evidence-based practices for persons with severe mental illness. Psychiatric Services, 52, 45–50.

    Weisz, J. R., Hawley, K. M., Pilkonis, P. A., Woody, S. R., & Follette, W. C. (2000). Stressing the (other) three Rs in the search for empirically supported treatments: Review procedures, research quality, relevance to practice and the public interest. Clinical Psychology: Science and Practice, 7, 243–258.

    Whaley, A. L., & Davis, K. E. (2007). Cultural competence and evidence-based practice in mental health services: A complementary perspective. American Psychologist, 62, 563–574.

    *This work was partially supported by grants MH074468 from the National Institute of Mental Health, CD-207015 from Veterans Affairs HSR&D, OJJDP-CT-52525-JS from the Department of Justice, and awards from the McNair Foundation and Menninger Foundation.

    Chapter 2

    Developing Clinical Guidelines for Adults

    Experience From the National Institute for Health and Clinical Excellence

    STEPHEN PILLING

    INTRODUCTION

    The National Institute for Health and Clinical Excellence (NICE) was established in 1999 to develop guidance for the National Health Service in England and Wales. It operates as an independent body within the state-funded health-care system, known as the National Health Service (NHS), but its funding and the remit for its work come from the government. The central role of NICE is to provide recommendations on the best practices in health care and thereby set standards by which health care and health outcomes can be improved. It produces guidance in four areas: (1) Clinical Practice, that is, clinical guidelines focused primarily on a disorder or condition, such as depression or type 1 diabetes; (2) Technology Appraisals, which are cost-effectiveness reviews of health technologies, usually, but not limited to, drugs. For example, in mental health, technology appraisals have covered novel hypnotics, electroconvulsive therapy (ECT), drugs for the treatment of attention-deficit/hyperactivity disorder (ADHD), and parent training in conduct disorder; (3) Interventional Procedures, which are concerned primarily with the efficacy and safety of surgical procedures but do, on occasion, move outside this field to consider nonsurgical interventions—for example, transcranial magnetic stimulation for the treatment of depression; and (4) Public Health Guidance, which is concerned with public health interventions, and whose brief extends beyond health care to involve social care and the wider environment. These four elements constitute the largest single program for the development of clinical guidance in the world. This chapter will provide an introduction to the work of NICE in mental health, concentrating primarily on the clinical guidelines program for adults and its impact on the development of psychological treatments in the UK, but will also refer to other elements of the NICE program. Some of the guidelines cover disorders of both adults and children; for example, eating disorders. (See Volume 1, Chapter 4, by Pilling and Fonagy for an account of NICE’s work on child and adolescent mental health.)

    Although NICE is probably the largest guideline development organization, it is not alone. The past 20 years have seen a major expansion in guidance development around the world. Some measure of the rate of expansion can be obtained from the following figures. Parry, Cape, and Pilling (2003) reported that the total number of guidelines concerned with the treatment of depression on the National Guidelines Clearinghouse, an international register of completed guidelines (www.guideline.gov), was 170. As of December 31, 2008, this figure had reached 487. Despite this proliferation of guidelines, relatively few have explicitly focused on psychological therapies, although there are some exceptions; see, for example, the guideline on Treatment Choice in Psychological Therapies and Counselling (Department of Health, 2001). Despite this lack of explicit focus, psychological interventions have assumed an increasingly important role in a number of mental health guidelines, in particular the NICE mental health guidelines, where psychological interventions have been identified as key recommendations for implementation in 13 out of the 16 mental health guidelines so far produced (www.nccmh.org.uk). Indeed, the location of psychological interventions within broadly based diagnostic or condition-based guidelines has been important in establishing psychological therapies as mainstream treatment options.

    DEVELOPING CLINICAL GUIDELINES

    The focus of this chapter is on the development of clinical guidelines, particularly as they apply to psychological therapies for adults, and to mental health more generally. It will first consider the rationale for their development; briefly review the methods by which they are produced, noting some of the particular problems in their development; consider the evidence for their effective implementation; describe some major implementation initiatives in the NHS; and finally consider future developments in clinical guidelines in mental health and psychological therapies. The chapter draws on the author’s experience as the joint director of the National Collaborating Centre for Mental Health, a British Psychological Society and Royal College of Psychiatrists joint initiative that develops clinical practice guidelines in mental health for NICE.

    Rationale for Clinical Guidelines

    Clinical guidelines are defined as “systematically developed statements to assist practitioner and patient decisions about appropriate healthcare for specific clinical circumstances” (Field & Lohr, 1990, p. 8). This definition, developed in the 1990 Institute of Medicine report, is the one adopted by the UK health-care system (Department of Health, 1996). Clinical guidelines differ from standard literature reviews and textbooks in the explicit methods used in their construction, which usually involve a representative guideline development group of professionals who use a systematic approach to identify and evaluate the evidence. Increasingly, patients and caregivers also participate in the development of guidelines. Evidence from secondary research, usually in the form of systematic reviews, is used in combination with the expertise and knowledge of the guideline development group to arrive at a set of recommendations for clinical practice. Guideline development groups increasingly follow standard methods for developing recommendations, including, where appropriate, formal and informal consensus methods (NICE, 2009a). An important characteristic of high-quality clinical guidelines is that the method is transparent and well described, so that the evidence supporting each recommendation is clearly identifiable.

    A primary aim of clinical guidelines is to promote clinically cost-effective care; to achieve this, they need to be based on the best available evidence. In seeking to achieve this aim, guidelines set standards for interventions by health-care professionals that should guide professional behavior; however, they are not a substitute for the clinical judgment exercised by a health-care professional when determining the most effective care for an individual (NICE, 2009a). Because clinical guidelines are based on the needs of the hypothetical typical patient, and because patient needs inevitably vary from the average, recommendations may require adaptation to suit the needs of the individual. For example, this might mean varying the duration of the recommended treatment for patients with comorbid disorders, such as in the treatment of PTSD with comorbid personality disorder (NICE, 2005b). Typically, it might be expected that the recommendations contained in a guideline would apply to about 80% of the individuals with the condition or disorder covered by that particular guideline (Eddy, 1990). Clinical guidelines should also be distinguished from protocols and from options. Protocols specify precisely what a health-care professional should do in a certain set of circumstances and should be based on good-quality evidence with a high degree of certainty about the benefits and/or risks of the intervention; for example, the means of administration of certain cytotoxic drugs. Options, in contrast, present a synthesis of the evidence that sets out a range of possible interventions and the evidence for their effectiveness without necessarily specifying in detail the circumstances in which they might be used, such as the empirically supported treatment approach adopted by the American Psychological Association (Chambless, 1993). Clinical guidelines can be particularly important when considerable uncertainty exists about the correct intervention for a particular disorder or problem, as is often the case in mental health.

    The focus in clinical guidelines on setting standards for and improving patient outcomes has a number of direct and indirect consequences. In addition to improved quality of care, these include the allocation of resources toward more effective treatments and away from less effective ones, and improved access to effective care, particularly if the guidelines also make recommendations about the nature of service delivery systems. Clinical guidelines are also increasingly used to better inform patients about the type of care they may expect. This requires that the guidance be presented in forms accessible to patients. NICE produces publications for all its guidance that specifically aim to inform patients and their caregivers about the guidance it has issued (NICE, 2009a). This means that guidelines can not only help patients make informed decisions about their care, but may also improve communication between patients and professionals. Guidelines may also be used by health-care commissioners and managers to guide the purchasing of services: (a) to develop the service structures needed to deliver effective health care, (b) to develop the systems for effective monitoring, and (c) to evaluate services. Finally, guidelines can have a role in the education and training of health-care professionals and may assume increasing prominence in the curricula of undergraduate and postgraduate training. The aim of clinical guidelines is thus to reduce uncertainty about which interventions are most likely to benefit patients and to generate greater cost-effectiveness in the health-care system, thereby reducing unacceptable variation in both clinical practice and the distribution of resources.

    As can be seen from the previous discussion, NICE and other clinical guidelines are increasingly drawn into discussions of the allocation of resources. This represents an engagement with issues beyond the simple identification of effective interventions and is a challenge facing all health-care systems in which the demand for health care outstrips the capacity and resources to deliver it, including those in the United States (Peterson, 2008; Steinbrook, 2008). While it is not the role of clinical guidelines or their developers to decide what resources a community should allocate to the health care of its citizens, guidelines can play a role in determining the means by which those agreed resources are allocated. It is therefore important that the methods used are seen to be fair and transparent, particularly if guidelines are to retain the support of the professional and patient communities whose practice and care are directly affected by them. Inevitably, this requires that the values on which the guidelines are based are made explicit, as are the methods by which the guidelines are produced.

    While guideline development methods have become increasingly transparent (Grilli, Magrini, Penna, Mura, & Liberati, 2000), few guideline development organizations make explicit the values that underpin their work. NICE is one of the few that has an explicit statement of its social value judgements (NICE, 2005c). These values include: (a) recommending interventions where good evidence is available and not doing so where it is not; (b) considering cost-effectiveness in the evaluation of interventions; (c) considering age, race, or gender only where it is an indicator of the likely effectiveness of an intervention that cannot be accounted for by any other means; and (d) not denying interventions because the condition they aim to ameliorate may, in part, be self-inflicted (NICE, 2005c). This approach recognizes that it is extraordinarily difficult, if not impossible, to develop an agreed set of rules for allocating health-care resources on which all citizens could agree (Daniels & Sabin, 2002), but that it may be possible to obtain agreement on the process by which such decisions are reached. This approach, known as procedural justice (Daniels & Sabin, 2002), underpins the NICE program and requires a transparent and fair process in which all relevant stakeholders are actively involved. Guideline developers are not expected to satisfy everyone, since this may well not be achievable, but they can be held to account for the reasonableness of their decision-making process (Rawlins & Dillon, 2005).

    Methods for Developing Clinical Guidelines

    There are a number of methods for synthesizing evidence to support clinical decision making and the development of clinical guidelines. These include systematic reviews (Egger, Davey Smith, & Altman, 2001); meta-analysis and other methods of aggregating multiple data sets on the effects of an intervention (Egger et al., 2001); evidence briefings (see www.nta.nhs.uk for examples from the field of substance misuse); reviews of the cost or cost-effectiveness of health interventions (Whitten et al., 2002); and formal and informal consensus methods (Black et al., 1999). As will be seen from the following discussion, these approaches are not mutually exclusive. Systematic reviews are the building blocks of almost all high-quality evidence syntheses, and the methods for conducting them, while not always followed, are the best developed; see Moher et al. (1999) for a statement of the methods to be adopted in high-quality systematic reviews.

    In most cases clinical guidelines, including those from NICE, are condition- or problem-based. Thus, they may address diagnoses, such as depression or diabetes, or problems, such as violence in psychiatric settings. They focus on what should be done and less on how treatments might be delivered (Parry et al., 2003). The consequences of this approach for clinical guidelines in mental health and psychological therapies will be returned to later in this chapter; however, in most health-care systems the clinical guideline remains the most complete manifestation of evidence-based medicine, and one that may also include advice on the care pathways and service structures to support effective delivery of care.

    Initially, most clinical guidelines were developed by specialist uniprofessional groups; for example, by groups of specialist physicians such as cardiologists or neurologists. In a systematic review of specialist guidelines developed by uniprofessional groups, Grilli et al. (2000) focused on three areas of guideline development: (1) professional and stakeholder involvement, (2) identification of primary evidence, and (3) appropriate grading of recommendations. They highlighted some of the potential problems with this specialist uniprofessional approach. For example, of the 431 specialty guidelines they reviewed, only 5% were rated as adequate in terms of the search strategies used, the structure of the guideline development groups, and the grading of recommendations. Grilli and colleagues argued that this demonstrated the need for a multidisciplinary approach with explicit and transparent methods based on international standards of good practice. Recent trends in guideline development have supported this view, and the significant international expansion of evidence-based medicine has often been based in multiprofessional development programs such as NICE in England and Wales, the Scottish Intercollegiate Guidelines Network (SIGN), and the Agency for Healthcare Research and Quality (AHRQ) in the United States (Parry et al., 2003). In addition, there have been important recent advances in methods for evaluating the quality of clinical guidelines. Perhaps the best-developed, most reliable, and most widely used is the AGREE instrument (www.agreecollaboration.org), produced by an international group of guideline developers and methodologists (AGREE, 2003) to assess the quality of guideline development. It includes ratings of the scope and purpose of the guideline, the extent of stakeholder involvement, the rigor of development, the clarity and presentation, the applicability of the guideline, and the degree of editorial independence of the developers. For an example of its use in an international review of the quality of clinical guidelines in schizophrenia, see Gaebel, Weinmann, Sartorius, Rutz, and McIntyre (2005).
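    As a purely illustrative sketch of the kind of scoring such appraisal instruments involve (the item ratings, the number of items, and the 4-point scale below are assumptions for illustration, not the official AGREE scoring rules), a domain score can be standardized against its possible range as follows:

```python
# Hypothetical sketch of a standardized appraisal-domain score: the summed
# item ratings are rescaled to the range between the minimum and maximum
# totals that the rating scale allows. All numbers are invented.
def standardized_domain_score(ratings, min_rating=1, max_rating=4):
    """Return the domain score as a fraction of its possible range."""
    obtained = sum(ratings)
    min_total = min_rating * len(ratings)
    max_total = max_rating * len(ratings)
    return (obtained - min_total) / (max_total - min_total)

# Two appraisers each rating a three-item "rigor of development" domain:
ratings = [3, 4, 2, 3, 3, 4]
print(f"Rigor of development: {standardized_domain_score(ratings):.0%}")
```

    Standardizing in this way lets domains with different numbers of items be compared on a common footing across guidelines.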

    Clinical guidelines typically rely on two main methods for the identification and aggregation of data from primary research: systematic review and meta-analysis. The quality of these activities is therefore central to the development of high-quality clinical guidelines. Systematic reviews summarize large bodies of evidence by synthesizing the results of multiple primary investigations, using strategies designed to reduce bias and random error (Egger et al., 2001). In well-conducted systematic reviews, these methods are predefined and presented in a reliable, transparent, and reproducible manner (Egger et al., 2001). They clearly specify the means by which studies will be identified, selected for inclusion, appraised, and aggregated, and they include steps to minimize bias at each of these stages. In most systematic reviews of the efficacy of a clinical intervention, the randomized controlled trial (RCT)—regardless of its results—is the preferred building block (Starr & Chalmers, 2003). A systematic review usually contains a quantitative synthesis of the results—a meta-analysis—but this is not always possible: for example, when the designs of the included studies are too different from each other for an average of their results to be meaningful. This may occur (a) when combining data from individual and cluster randomized trials without the data needed to adjust for the effects of clustering, (b) when the outcomes are not adequately reported, or (c) when the outcome measures differ too greatly to allow a direct comparison. A meta-analysis can also be performed without a systematic review, simply by combining the results from more than one study, but considerable caution is then needed in interpreting the results.
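    To make the arithmetic of such a quantitative synthesis concrete, here is a minimal sketch of a fixed-effect, inverse-variance meta-analysis; the effect sizes and standard errors are invented for illustration and do not correspond to any trial discussed in this chapter:

```python
# Minimal sketch of a fixed-effect, inverse-variance meta-analysis.
# The (effect size, standard error) pairs below are hypothetical.
import math

studies = [(0.40, 0.15), (0.25, 0.10), (0.55, 0.20)]

weights = [1 / se**2 for _, se in studies]          # inverse-variance weights
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))             # SE of the pooled estimate

ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```

    A random-effects model would add a between-study variance term to each weight; either way, the pooled estimate is only as trustworthy as the set of studies fed into it, which is why the search and selection issues discussed next matter so much.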

    One of the major difficulties in the interpretation and use of systematic reviews to support clinical decision making derives from the methods used to identify, select, and critically appraise the relevant studies. Most well-conducted systematic reviews require a well-designed, electronically based search strategy, one that includes clearly specified search terms relevant to the subject under review and that searches relevant databases such as Medline, EMBASE, and PsycINFO. The development of these strategies is well described in a number of publications (Egger et al., 2001), and any well-conducted review should report the number of relevant studies identified at each stage of the search and appraisal process (Moher et al., 1999). However, even the best-designed search strategies have their limitations. When searching for efficacy studies, these include the inability of search strategies to fully compensate for publication bias: the omission of unpublished studies from the review and the selective reporting of outcomes, limitations of the Medical Subject Headings terms used to describe some studies, and the delay in entering recently published studies onto the relevant databases. The latter two problems can be addressed to an extent by hand searching the references of identified studies and regularly updating the searches during the course of a review, but publication bias presents a much greater challenge.
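    As an illustration of the bookkeeping that such stage-by-stage reporting implies (the database names are real, but the record identifiers and screening decisions below are invented), a reviewer might track identified and screened records like this:

```python
# Hypothetical bookkeeping for a systematic search: deduplicate records
# retrieved from several databases, then count survivors of screening.
# Record IDs and screening decisions are invented for illustration.
raw_hits = {
    "Medline": {"rec101", "rec102", "rec103"},
    "EMBASE": {"rec102", "rec104"},
    "PsycINFO": {"rec101", "rec105"},
}

identified = set().union(*raw_hits.values())  # duplicates collapse here

# In practice these exclusions come from human reviewers applying
# prespecified criteria; here they are simply asserted.
excluded_at_title_stage = {"rec103", "rec105"}
screened = identified - excluded_at_title_stage

print(f"Records identified (deduplicated): {len(identified)}")
print(f"Records after title/abstract screening: {len(screened)}")
```

    Reporting these counts at every stage, as Moher et al. (1999) recommend, is what allows readers to judge how much of the relevant literature a review may have missed.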

    The extent of the problem presented by unpublished studies is illustrated in the systematic review by Whittington et al. (2004), which compared the clinical recommendations that could be made about the use of selective serotonin reuptake inhibitors (SSRIs) for children and adolescents with depression when based on published clinical trials alone and when based on published and unpublished trials together. The analysis was performed in support of the NICE clinical guideline on Depression in Children (NICE, 2005a). The authors demonstrated that if published studies alone had been used, a systematic review would have supported the widespread use of these drugs, with few concerns raised about a potential increased incidence of suicidal ideation in this very vulnerable group. The addition of unpublished studies led to a very different outcome, with all of the SSRIs except fluoxetine being identified as having an unacceptable harm/benefit ratio. This problem of selective reporting of trial outcomes has been confirmed in a number of studies demonstrating that the inclusion of previously unpublished data may significantly alter the outcomes of a systematic review. For example, Melander, Ahlqvist-Rastad, Meijer, and Beermann (2003), in a review of trials of SSRIs submitted to the Swedish medicines regulatory authority, demonstrated that studies with significant results were more likely to be published than those with nonsignificant results.
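    To make the mechanics of this bias concrete, the short simulation below (illustrative only: the true effect, sample sizes, and publication rule are assumptions, not parameters from the Whittington or Melander analyses) generates many small trials of a modest true effect and compares the average effect among "published," statistically significant trials with the average across all trials:

```python
# Illustrative simulation of publication bias: "publishing" only the
# statistically significant trials inflates the apparent effect size.
# The true effect, sample sizes, and publication rule are all invented.
import random
import statistics

random.seed(1)
TRUE_EFFECT, N_PER_ARM, N_TRIALS = 0.20, 30, 2000

all_effects, published = [], []
for _ in range(N_TRIALS):
    treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_ARM)]
    control = [random.gauss(0.0, 1.0) for _ in range(N_PER_ARM)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (2.0 / N_PER_ARM) ** 0.5          # SD is known to be 1.0 here
    all_effects.append(diff)
    if abs(diff / se) > 1.96:              # crude two-sided significance test
        published.append(diff)

print(f"True effect:              {TRUE_EFFECT:.2f}")
print(f"Mean across all trials:   {statistics.mean(all_effects):.2f}")
print(f"Mean of 'published' only: {statistics.mean(published):.2f}")
```

    Because small trials reach significance only when sampling error happens to exaggerate the effect, the "published" mean sits well above the true value; this is precisely the distortion that the inclusion of unpublished trial data, as in Whittington et al. (2004), serves to correct.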

    Another source of bias may arise from investigator allegiance. This can be seen in a number of studies of pharmaceutical industry sponsorship, including Perlis et al. (2005) and Lexchin, Bero, Djulbegovic, and Clark (2003). Perlis et al. reported that company-sponsored trials
