
Evidence-Based Training Methods, 3rd Edition: A Guide for Training Professionals

Ebook, 515 pages


About this ebook

Let evidence guide your training.

Your training is much more effective when your methods are based on evidence. In this third edition of Evidence-Based Training Methods, Ruth Colvin Clark offers concrete training guidance as she connects research to practice. This book is rich with examples of how research enhances training, and with it as your guide, you can incorporate evidence and learning psychology into your program design, development, and delivery decisions.

New in this edition, Clark examines research on feedback with application tips for training as well as on using animations in critical thinking exercises. Games have also inspired a great deal of recent research, leading to updated information about which ones are effective and why. Ultimately, Clark advises focusing on instructional methods that are relatively inexpensive to implement and offers simple, effective changes.
Language: English
Release date: Dec 30, 2019
ISBN: 9781949036589


Reviews for Evidence-Based Training Methods, 3rd Edition

Rating: 4.5 out of 5 stars (5 ratings, 1 review)

  • Rating: 5 out of 5 stars
    Dr. Ruth Colvin Clark has been researching what really works in the training industry for decades. In this book, she gives anyone who teaches a concise review of the myths that exist in our profession regarding training, and she helps us expand our knowledge about what really works. Her writing style is engaging throughout the book, even prompting you to assess your own learning as you progress. This should be on the bookshelf of any college professor, trainer, or instructional design professional.

Book preview

Evidence-Based Training Methods, 3rd Edition - Ruth Colvin Clark

Preface: Why Evidence-Based Training Methods, Third Edition?

I wrote the first edition of Evidence-Based Training Methods because the academic literature held a wealth of reports from research scientists. I believed then, and still believe today, that much of this evidence remains unknown to practitioners. Academic research professionals and workforce learning practitioners constitute two quite separate communities of practice; there is little overlap in their publications and conferences. Most practitioners lack the time to search, read, and synthesize the many research reports available. And although many research papers do offer some guidance for practitioners, guidance is not their main goal. I believe practitioners need not only guidelines but also examples that illustrate how to implement those guidelines.

Naturally, research continues to evolve. Fortunately, the science of instruction and learning does not move as quickly as, for example, medical research. However, many of the guidelines in the second edition needed updating. In the past few years, the research community has broadened its inquiry to evaluate not only immediate learning but also delayed learning and motivation. Sometimes an instructional method that offers no big learning advantage but is highly motivational is worth implementing.

Two new chapters in this edition focus on feedback and animations. Some topics, such as games, have inspired a great deal of recent research, leading to updated chapters. I am also encouraged by a continued interest in evidence-based guidelines among practitioners—especially those in the allied health professions stimulated by the focus on evidence-based medicine. Finally, what author does not look back on their previous writing and want to improve it? A third edition has offered me the opportunity to pursue all these goals.

What’s in This Book?

This book is organized from smaller to larger instructional elements. Following the introductory chapters, I focus in part 2 on evidence on use of the basic modes for communication, including graphics (still and animated), text, and audio. Part 3 looks at evidence on a less granular level by reviewing three important instructional methods: examples, practice, and feedback. Finally, in part 4 I take a more macro view of lesson design with guidelines for explanations, teaching procedures, and building critical thinking skills. The book ends with an updated chapter on games and a recap of many evidence-based guidelines as they apply to your instructional design and development processes.

Each chapter includes introductory questions about the instructional method, some evidence on those questions, guidelines based on the evidence, and a short application checklist at the end. For a quick overview, go to the appendix to see a high-level checklist of guidelines, then go back to the specific chapters that review the evidence regarding any guidelines of particular interest to you.

Limits of the Book

There are many topics of interest in our field and you might wonder why certain topics are not addressed. My selection of topics is guided by the evidence available and by my ability to create a coherent set of guidelines around that evidence.

No one person can claim to be cognizant of all relevant evidence. My apologies for omissions. Nor can I claim a flawless interpretation of the evidence I do review. In this edition I have cited the evidence sources. These citations provide you the opportunity to review the evidence firsthand and draw your own conclusions.

—Ruth Colvin Clark, 2020

Ruth@Clarktraining.com

Part 1

Evidence-Based Practice and Learning

Chapter 1

Training Fads and Fables

Blood, Phlegm, Black Bile, and Yellow Bile

Training Mythology and Investments in Learning

Training Myth #1: Learning Styles

Training Myth #2: Media Panaceas

Training Myth #3: The More They Like It, the More They Learn

Training Myth #4: Learners Are Good Judges of Their Training Needs

Training Myth #5: Active Engagement Is Essential to Learning

Training Myth #6: Games, Stories, and Simulations Promote Learning

Applying Evidence-Based Practice to Your Training

Do you talk on your cell phone (handheld or hands free) while driving? If yes, you are not alone. At any given moment, about 7 percent of all drivers are using their phones (Zebra 2019). According to the National Safety Council, about a quarter of all crashes involve cell phone conversations. Evidence shows that even hands-free cell phones are potentially lethal distractions, putting you at four times greater risk of a crash. As of early 2009, when the first edition of this book was written, five states had banned handheld phones while driving. As I write this updated edition 10 years later, 20 states have similar bans (National Conference of State Legislatures). In the first half of 2019, Arizona enacted a cell phone ban. The legislation was prompted not primarily by evidence but by the well-publicized death of a local patrol officer, killed by a driver who was texting at the time.

The journey from evidence to application of evidence is often slow, and workforce learning is no exception. This chapter will show how applying evidence to your instructional programs and products can save your organization time and money wasted on training fads that don’t work.

Blood, Phlegm, Black Bile, and Yellow Bile

Our story starts in the early 1600s—the birth years of evidence-based practice. Prior to 1628, people believed that blood was produced by the heart and the liver and was continuously used up by the body. In other words, there was no accurate conception of blood circulation. William Harvey introduced the revolutionary idea that blood was not consumed by the body. Based on measures of blood volume and anatomical observations, he proposed that blood was pumped from the heart and circulated throughout the body, returning again to the heart. Harvey, along with Galileo, Descartes, and others, turned the 17th-century world upside down by advocating evidence and reason—rather than traditional wisdom and faith—as the basis for knowledge and decisions.

We’ve come a long way from the days when medical diagnosis and treatments were based on a balance of the four body humors of blood, phlegm, black bile, and yellow bile. If you were lucky, your treatment prescribed an amulet, which at least did no harm. If you were not so lucky, you were subjected to bloodletting. Although great strides were made in medical science, more than 400 years passed before health science professionals formally adopted evidence-based practice. Old habits die hard. Even though we’ve had evidence about the dangers of cell phone use while driving for more than 20 years, that data is still being translated into policy changes. To see the latest updates on use of technology while driving, search the websites of the National Safety Council and the Insurance Institute for Highway Safety.

What Do You Think?

See how your current knowledge matches up with evidence. Mark each statement you think is true.

A. To accommodate different learning styles, it’s best to explain a visual with words presented in text and in audio.

B. Instructor-led classroom training results in better learning than computer-delivered instruction.

C. Courses that get higher student ratings generally produce better learning outcomes.

D. Learners make accurate decisions about their instructional needs.

E. Active engagement is essential to learning.

F. Games are effective instructional methods.

Training Mythology and Investments in Learning

How much do you think is invested in workforce learning? In 2009, when the first edition of this book was published, average training spending was a little more than $1,000 per employee. In 2016, spending rose to $1,273 (ATD 2017). This figure is low because it does not take into account one of the biggest expenses of training—worker production time lost during training events. No doubt, the organizations you work with make large investments in training. What kind of return does your organization get on its training investment? Think of the last class that you developed or facilitated. To what extent did the content sequencing, training methods, and facilitation techniques of that class promote learning and consequent improvements in quality, efficiency, safety, and other bottom-line metrics? I’m not surprised if you do not know the return your organization receives; few organizations capture this data.

Many common training practices are based more on fads and fables than on evidence of what works. This chapter will review several popular training myths and the facts that debunk them.

Training Myth #1: Learning Styles

Are you a visual or auditory learner? Has your organization invested resources in learning styles? Like the four body humors of blood, phlegm, yellow bile, and black bile, learning styles represent, I think, one of the most wasteful, misleading, and pervasive learning myths of the past 50 years. From auditory learners to visual learners or from sensors to intuitives, learning styles come in many flavors. And learning styles have been a profitable movement: Counting books, assessments, and classes, a great deal of money has been devoted to learning styles. For some reason, the idea of a learning style has an intuitive appeal that is very compelling. Ask almost anyone whether they are a visual learner or a verbal learner and you will get an immediate commitment to a specific learning style!

The learning style myth leads to some very unproductive training approaches that run counter to modern evidence of what works. For example, many trainers believe that a visual should be explained with on-screen text for visual learners and with audio narration for auditory learners. To accommodate both, a visual on a slide is explained with text and with audio narration of that text. As you will see in chapter 7, evidence has shown that this practice depresses learning.

The time and energy spent perpetuating the various learning style myths can be more wisely invested in supporting individual differences that are proven to affect learning—namely, prior knowledge of the learner. If you make one change as a result of reading this book, give up the learning style myth!

Evidence About Learning Styles

Do we have any evidence about learning styles? Kratzig and Arbuthnott (2006) calculated the relationship among three learning style indicators. They asked a group of university students to do three things. First, each participant rated their own learning style as visual, auditory, or kinesthetic. Second, each individual took a learning style test that classified them as a visual, auditory, or kinesthetic learner. Finally, each person was given three tests to measure visual memory, auditory memory, and kinesthetic memory.

If the learning style concept had substance, we would expect to find some positive relationships among these measures. For example, someone who considered themselves a visual learner would score higher on the visual index of a learning styles test and have better memory for visual information. However, when all of the measures were compared, there were absolutely no relationships! A person who rated themselves an auditory learner was just as likely to score higher on the kinesthetic scale of the learning style test and show best memory for visual data. The research team concluded that in contrast to learning style theory, it appears that people are able to learn effectively using all three sensory modalities.
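If you like to see this logic in concrete form, here is a minimal sketch of the kind of correlation analysis the study ran. Only the structure of the comparison comes from Kratzig and Arbuthnott; all of the numbers below are invented for illustration.

```python
# Minimal sketch of the Kratzig and Arbuthnott (2006) analysis logic.
# All data here are invented for illustration; they are not the study's data.
import numpy as np

rng = np.random.default_rng(0)
n = 60  # hypothetical number of students

# Three indicators per student, all on the "visual" dimension: a self-rated
# visual learning style, a "visual" score from a learning-style test, and
# measured visual-memory performance.
self_rating = rng.integers(1, 6, size=n).astype(float)
style_test_score = rng.uniform(0, 100, size=n)
visual_memory = rng.uniform(0, 100, size=n)

# Pairwise Pearson correlations among the three indicators.
r = np.corrcoef(np.vstack([self_rating, style_test_score, visual_memory]))

labels = ["self-rating", "style test", "visual memory"]
for i in range(3):
    for j in range(i + 1, 3):
        print(f"r({labels[i]}, {labels[j]}) = {r[i, j]:+.2f}")

# If the learning style concept had substance, these correlations would be
# clearly positive; values hovering near zero are what "absolutely no
# relationships" looks like.
```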

A comprehensive review by Pashler and others (2008) concluded that while people do differ regarding aptitudes, "at present there is no adequate evidence base to justify incorporating learning-styles assessments into general educational practice. Thus, limited education resources would better be devoted to adopting other educational practices that have a strong evidence base, of which there are an increasing number." In a recent review, Kirschner (2017) concurred: "The premise that there are learners with different learning styles and that they should receive instruction using different instructional methods that match those styles is not a 'proven' fact, but rather a belief which is backed up by precious little, if any, scientific evidence."

In spite of the lack of evidence for learning styles, this myth is still prevalent in many educational and training environments—another example of how slowly evidence transfers to practice.

The lack of evidence about learning styles is the basis for my first recommendation.

Fads and Fables Guideline 1

Do not waste your training resources on any form of learning style products, including instructor training, measurement of learning styles, or books.

Training Myth #2: Media Panaceas

Only a few years ago, computer-delivered instruction incited a revolution in training. Of course, computers were not the first technology to cause a stir. Decades before computers, radio, film, and television were hailed as having high potential to revolutionize education. The first widespread computer-based training (CBT) was delivered primarily on mainframe computers. Soon, however, advances in digital memory, display hardware, programming software, and Internet distribution catalyzed a rapid evolution of CBT into more recent technological panaceas, including social media, digital games, simulations, and immersive virtual reality, to name a few. With each new technology wave, enthusiasts ride the crest with claims that finally there are tools to really revolutionize education and training. And yet, if you have been around for a few of these waves, those claims begin to sound a bit hollow. In just a few years, the latest media hype of today will fade, yielding to the inexorable evolution of technology and a fresh spate of technological hyperbole.

What’s wrong with a technology-centric view of instruction? Instructional scientists have learned a lot about how humans learn. Like Harvey, who gave birth to the modern mental model of blood circulation, instructional psychology has revealed the strengths and limits of the human brain, the product of thousands of years of evolution. When we plan instruction solely to leverage the latest technology gizmo, we ignore the psychology of human memory, which, as we have learned again with cell phones and driving, has severe limits. In fact, technology today can deliver far more information faster than the human brain can absorb it.

Evidence Against the Technology Panacea

For more than 70 years, instructional scientists have attempted to demonstrate the superiority of each new technology over traditional classroom instruction. One of the first published media comparison studies appeared in the 1940s. The U.S. Army believed it could improve instructional quality and reliability by replacing many instructors with films. To its credit, before setting policy based on this idea, the army tested it. It compared learning a simple procedure from a lesson delivered by film, by instructor, and by print. Each version used similar words and visuals. What do you think the army found?

A. Instructor-led training led to the best learning.

B. Print, the least expensive medium, led to the best learning.

C. Films could replace instructors since they led to the best learning.

D. Learning was the same with instructor, print, and film.

The army discovered that participants from all three lesson versions learned the procedure equally well. In technical terms, there were no significant differences in learning among the three groups. Since that early experiment, hundreds of studies have compared learning from classroom instruction with learning from the latest technology—the most recent being various forms of digital distance learning. In fact, so many media comparisons have been published that a synthesis of all of the results, called a meta-analysis, found the same basic conclusion that the army reported so many years ago: No major differences in learning from classroom lessons compared to electronic distance learning lessons (Bernard et al. 2004; U.S. Department of Education 2010).
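For readers curious about what a meta-analysis actually computes, here is a minimal fixed-effect pooling sketch. The effect sizes and standard errors below are invented for illustration; they are not the Bernard et al. data. The point is that a pooled effect whose confidence interval straddles zero is the statistical form of "no major differences."

```python
# Minimal fixed-effect meta-analysis sketch. The effect sizes and standard
# errors are invented for illustration; they are not Bernard et al. (2004) data.
import math

# (effect size d, standard error) from hypothetical media-comparison studies;
# d > 0 would favor distance learning, d < 0 the classroom.
studies = [(0.10, 0.15), (-0.05, 0.12), (0.02, 0.20), (-0.08, 0.10), (0.04, 0.18)]

# Weight each study by inverse variance, then pool the effects.
weights = [1 / se**2 for _, se in studies]
pooled_d = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# A 95% confidence interval that straddles zero means the synthesis finds
# no reliable learning difference between the media.
lo, hi = pooled_d - 1.96 * pooled_se, pooled_d + 1.96 * pooled_se
print(f"pooled effect d = {pooled_d:+.3f}, 95% CI [{lo:+.3f}, {hi:+.3f}]")
```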

But wait! There is an important caveat to this conclusion. The basic instructional methods must be the same in all versions. In other words, if the classroom version includes graphics and practice exercises, the computer version must include similar graphics and practice opportunities. That’s because learning is caused by the psychological active ingredients of your lessons, regardless of the media you use. Rather than asking which technology is best for learning, you will find more fertile ground by using a blend of media that allows you to space out learning events, provide post-training performance support, and foster synchronous and asynchronous forms of collaboration. In fact, the U.S. Department of Education found a significant learning advantage for courses using media blends compared with pure classroom-based or pure online learning (2010).

The more than 70 years of media comparison research is the basis for my second recommendation.

Fads and Fables Guideline 2

Ignore panaceas disguised as technology solutions; instead, apply proven practices on best use of instructional methods to all media you use to deliver training. Select a mix of media that supports core human psychological learning processes.

As a postscript to this media discussion, what were once considered distinct and separate delivery technologies are increasingly converging. For example, there is now online access to multiple instructional resources. Hand-held mobile devices merge functionalities of computers, newspapers, telephones, cameras, radios, clocks, and context-sensitive performance support, to name a few. Perhaps the media selection discussion will evolve into a discussion of instructional methods, most of which can be delivered by a mix of digital media and in-person instructional environments.

Training Myth #3: The More They Like It, the More They Learn

Do you collect student ratings at the end of your courses? More than 90 percent of all organizations use end-of-training surveys to gather participant evaluations of the quality of the course, the effectiveness of the instructor, how much was learned, and so on. These rating sheets are commonly called smile sheets or Level 1 evaluations. If you are an instructor or a course designer, chances are you have reviewed rating sheets from your classes. You might also have a sense of how much learning occurred in that class. Based on your own experience, what do you think is the relationship between participant ratings of a class and the actual learning that occurred?

A. Classes that are rated higher also yield greater learning.

B. Classes that are rated higher actually yield poorer learning.

C. There is no relationship between class ratings and learning from that class.

To answer this question, researchers have collected student satisfaction ratings as well as lesson test scores that measure actual learning. They then evaluated the relationships between the two. For example, they considered whether higher ratings correlated with more learning or less learning.

Evidence on Liking and Learning

A meta-analysis synthesized more than 1,400 student course ratings with correlated student test data. Sitzmann and others (2008) found a positive relationship between ratings and learning. But the correlation was very small! In fact, it was too small to have any practical value. Specifically, the research team concluded that reactions have a predictive relationship with cognitive learning outcomes, but the relationship is not strong enough to suggest reactions should be used as an indicator of learning.
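A quick calculation shows why a very small correlation has no practical value: The square of a correlation is the share of variance in one measure that is explained by the other. The r values in this sketch are assumptions chosen for illustration, not the figures Sitzmann and others report.

```python
# Why a small ratings-learning correlation has little practical value.
# These r values are illustrative assumptions, not the figures reported
# by Sitzmann and others (2008).
for r in (0.1, 0.3, 0.7):
    print(f"r = {r:.1f} -> ratings explain {r**2:.0%} of the variance in test scores")

# r = 0.1 -> 1%, r = 0.3 -> 9%, r = 0.7 -> 49%. At the small end, knowing a
# course's ratings tells you almost nothing about how much students learned.
```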

Do you think that learners rate lessons with graphics higher than lessons without graphics? Do you think that lessons with graphics support better learning than lessons without graphics? Sung and Mayer (2012a) compared student ratings and learning from lessons that included relevant graphics, distracting graphics, decorative graphics, and no graphics. They found that all of the lessons with graphics got better ratings than lessons lacking visuals even though only the relevant graphics led to better learning. In other words, there was no relationship between liking and learning. The next chapter will look at evidence on graphics and learning in more detail.

Besides graphics, what other factors are associated with higher ratings? The two most important influencers of ratings are instructor style and human interaction. Instructors who are psychologically open and available—in other words, personable instructors—are associated with higher course ratings. In addition, the opportunity to socially interact during the learning event with the instructor as well as with other participants leads to higher ratings (Sitzmann and others 2008).

A 2018 experiment compared a science lesson delivered via immersive virtual reality (IVR) with the same lesson content delivered via a PowerPoint presentation. Which version got better ratings? Which led to more learning? Parong and Mayer (2018) report that the IVR lesson got better ratings, but the slide presentation led to better test outcomes.

Evidence from many studies that review the correlation between student ratings and student learning is the basis for my third recommendation.

Fads and Fables Guideline 3

Don’t rely on student ratings as indicators of learning effectiveness. Instead, use valid tests to assess the pedagogical effectiveness of any learning environment. Focus on instructional methods that lead to both liking and learning.

Training Myth #4: Learners Are Good Judges of Their Training Needs

One of the potential benefits of e-learning is the opportunity to offer environments that move beyond the one-size-fits-all instruction typical of instructor-led training. Most e-learning courses offer choices, such as which lessons to take in a course, whether to study an example or complete a practice exercise, or how much time to spend on a given topic. E-courses with high levels of such options are considered high in learner control. How effective are courses with high learner control? Do your learners make good decisions regarding how much to study, what to study, and what instructional methods to select?

Evidence on Learner Decisions

More than 20 years of research comparing learner-controlled courses with courses that offer fewer choices concludes that quite often learners do not make good instructional decisions. Some learners are overly confident in their knowledge and therefore skip elements that they in fact need. A case in point: Hegarty and her associates (2012) asked subjects to compare wind, pressure, or temperature on either a simple or a more complex weather map. The more complex map included geographical detail as well as multiple weather variables not needed to complete the assignment. Task accuracy and efficiency were better on the simpler maps. However, about a third of the time the subjects chose the more complex maps to complete the task.

Dunlosky and Rawson (2012) provided technical term definitions and asked 158 students to judge their level of confidence in recalling the definition correctly. When students judged their response as correct, it was actually correct only 57 percent of the time. In other words, they were overconfident in their knowledge. Participants who were most overconfident retained fewer than 30 percent of the definitions, whereas those who showed little overconfidence during study retained nearly all of the definitions they had practiced. The authors concluded that judgment accuracy matters a great deal for effective learning and durable retention; overconfidence leads to the premature termination of study and to lower levels of retention. When left to their own devices, many students use ineffective methods to monitor their learning, which can produce overconfidence and underachievement.
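This finding is really a statement about calibration: accuracy conditional on confidence. The sketch below computes that conditional accuracy from a toy set of judgment records; the records are invented to mirror the pattern the study reports.

```python
# Calibration sketch for the Dunlosky and Rawson (2012) finding. Each record
# pairs (student judged own recall correct?, recall actually correct?).
# The records are invented to mirror the reported pattern.
judgments = [
    (True, True), (True, False), (True, True), (True, False),
    (True, True), (True, False), (True, True), (False, False),
    (False, True), (False, False), (False, False), (False, False),
]

# Accuracy conditional on the student judging themselves correct.
confident_outcomes = [actual for judged, actual in judgments if judged]
accuracy = sum(confident_outcomes) / len(confident_outcomes)

print(f"accuracy when student judged 'correct': {accuracy:.0%}")  # 57%
# Barely better than a coin flip: students who stop studying once they
# "feel correct" terminate practice far too early.
```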

The overall picture is that many learners do not make accurate assessments of their learning and thus do not make accurate or efficient choices regarding what and how to study.

With two exceptions, learners are often poor judges of their skill needs and will need support in courses that offer higher learner control. One exception is learners with higher prior knowledge of the content. As a result of greater background knowledge, these learners usually make better judgments about their learning needs. A second exception is control over pacing. All learners should have the opportunity to manage their rate of progress in e-learning using back and forward progress buttons.

One way to improve outcomes in e-learning is to make important topics and instructional methods, such as examples and practice, a default rather than an option to be selected (Schnackenberg and Sullivan 2000). In a default lesson, the continue button automatically leads to the important instructional methods, and the learner has to consciously choose to bypass them; a minimal sketch of this logic follows. There are a number of ways to provide guidance that helps learners more accurately assess their own needs. The bottom line is that many learners new to the content will not make accurate self-assessments of their own knowledge and skills, and overconfidence will lead to underachievement.
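The sketch below is hypothetical: the Lesson class and its method names are mine for illustration, not from any authoring tool. Practice sits on the continue button's default path, and bypassing it requires a separate, deliberate action.

```python
# Hypothetical sketch of "practice by default" lesson flow. The class and
# method names are illustrative, not from any real authoring tool.
from dataclasses import dataclass, field

@dataclass
class Lesson:
    steps: list = field(
        default_factory=lambda: ["concept", "example", "practice", "summary"]
    )
    position: int = 0

    def continue_(self) -> str:
        """The continue button walks every step, practice included."""
        self.position = min(self.position + 1, len(self.steps) - 1)
        return self.steps[self.position]

    def skip_practice(self) -> str:
        """Bypassing practice requires an explicit, separate choice."""
        nxt = self.position + 1
        if nxt < len(self.steps) and self.steps[nxt] == "practice":
            self.position = nxt + 1  # jump over the practice step
        return self.steps[self.position]

default_path = Lesson()
print([default_path.continue_() for _ in range(3)])
# ['example', 'practice', 'summary']: practice arrives on the default path.

opt_out = Lesson()
opt_out.continue_()             # -> "example"
print(opt_out.skip_practice())  # -> "summary": skipping is a deliberate act.
```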

Fads and Fables Guideline 4

Don’t count on your learners to always make good decisions about their instructional needs. If your course builds in options, accompany those options with guidance.

Training Myth #5: Active Engagement Is Essential to Learning

Active learning is one of the most cherished laws of workforce development. As a response to the pervasive use of noninteractive lectures, the training community has pushed active learning as an essential ingredient of effective instruction. By active learning they refer to overt behavioral activities on the part of learners. These include activities such as making content outlines, collaborating on problems, or labeling graphics. However, evidence points to a more nuanced definition of active learning. Engagement is essential, but it is psychological engagement rather than physical engagement that counts. And physical engagement can sometimes interfere with psychological engagement.

The Evidence on Active Engagement

Imagine two groups of learners studying a biology chapter. Group A is provided with a concept map developed by the chapter author as a support guide. Group B is provided with a blank concept map, which the learners are asked to fill in as they read. Clearly, Group B is more actively engaged. However, Group A learned more than Group B (Stull and Mayer 2007). Perhaps individuals in Group B did not complete the map correctly. Alternatively, perhaps the mental activity needed to complete the concept map absorbed cognitive resources needed for learning. A similar experiment by Leopold and others (2013) compared learning of a science text between learners who developed their own summaries and learners who studied pre-prepared summaries. Best learning occurred among those who studied the predefined summaries. The authors suggest that learners who engaged in behavioral processing may not have engaged in productive psychological processing. In contrast, those studying a predefined summary had more resources to invest in deeper psychological processing. Chapter 4 will look more closely at evidence on engagement in learning. For now, I offer the following guideline.

Fads and Fables Guideline 5

Behavioral activity during instruction does not necessarily lead to learning. It is psychological engagement that is most important.

Training Myth #6: Games, Stories, and Simulations Promote Learning

Attend any training conference, look at the latest training books, or check out your favorite social media site. Chances are you will find real estate devoted to mobile learning, games, immersive virtual reality, simulations, social media, or whatever is the technology or instructional method du jour. Training lore is full of claims and recommendations about the latest training methods like these. What’s wrong with these kinds of recommendations?

First, such broad terms make statements about these techniques nearly meaningless. Take games, for instance. Do you mean puzzle games, quiz show games, strategy games, or simulation games? Do you mean individual paper-and-pencil games, video games, or group participation games? As a category, games include so much diversity that it is just about impossible to make any generalizations about their instructional effectiveness. I’ll have more to say about games in chapter 16. If you are especially interested in games, feel free to jump there now.

No Yellow Brick Road

Second, even if we narrow down to a fairly specific set of criteria for any given instructional method, its effectiveness will depend upon the intended learning outcome and the learners. Is your goal to build awareness, to help learners memorize content, to teach procedural skills, to motivate, or to promote critical thinking?

And what about your learners? Regarding learner differences, prior knowledge (not learning styles!) is the most important factor that moderates the effects of instructional methods. Techniques that help novice learners are not necessarily going to apply to a learner with more expertise.

The lack of universal effectiveness of most instructional techniques is the basis for what I call the No Yellow Brick Road principle. By that I mean that there are few best practices that will work for all learners and for all learning goals. The chapters to follow will show that much contemporary research focuses on
