
Measurement Demystified: Creating Your L&D Measurement, Analytics, and Reporting Strategy
Ebook, 702 pages


About this ebook

Your Groundbreaking Framework for Measurement and Reporting

Most people find measurement, analytics, and reporting daunting—and L&D professionals are no different. As these practices have become critically important for organizations’ efforts to improve performance, talent development professionals have often been slow to embrace them for many reasons, including the seeming complexity and challenge of the practices. Few organizations have a well-thought-out measurement and reporting strategy, and there are often scant resources, limited time, and imperfect data to work with when organizations do attempt to create one.

Measurement Demystified: Creating Your L&D Measurement, Analytics, and Reporting Strategy is a much-needed and welcomed resource that breaks new ground with a framework to simplify the discussion of measurement, analytics, and reporting as it relates to L&D and talent development practitioners. This book helps practitioners select and use the right measures for the right reasons; select, create, and use the right types of reports; and create a comprehensive measurement and reporting strategy.

Recognizing the angst and reluctance people often show in these areas, authors and experts David Vance and Peggy Parskey break down the practices and processes by providing a common language and an easy-to-use structure. They describe five types of reports, four broad reasons to measure, and three categories of measures. Their method works for large and small organizations, even if yours is an L&D staff of one or two. The guidance remains the same: Start small and grow.

Measurement Demystified is a great first book for talent development professionals with no prior knowledge of or experience with measurement and a valuable resource for measurement experts. Those adept at lower levels of training evaluation will grow their knowledge base and capabilities, while measurement experts will discover shortcuts and nuggets of information to enhance their practices. A more comprehensive treatment of these important topics will not be found elsewhere.
Language: English
Release date: Nov 17, 2020
ISBN: 9781950496860


    Measurement Demystified - David Vance

    Preface

    Measurement Demystified is intended to meet an important, unfilled need in the learning and development (L&D) profession: a comprehensive and holistic introduction to the topics of measurement, analytics, and reporting. In fact, we hope this book becomes the first book you read on these topics. It is written to provide the foundation, framework, and language for you to gain a basic understanding of these areas and then apply that knowledge to create your own measurement and reporting strategy. Also, armed with this information, you will be prepared to appreciate the more advanced and in-depth coverage on topics such as the evaluation of learning programs.

    The book is also written as a reference and performance support tool for the experienced learning practitioner, containing 120 learning measures, including efficiency measures, which are not the focus of other books on evaluation. You will find definitions and formulas for each measure, including recommendations for their use.

    While comprehensive, the book is meant to be accessible to all, and no previous knowledge of L&D measurement is required. Step-by-step guidance is provided for selecting measures, creating reports, and crafting your own measurement and reporting strategy. In addition, more than 180 figures and tables are provided to enhance understanding and illustrate real-world application. Plus, a sample measurement and reporting strategy is included in the appendix along with a comprehensive glossary.

    Measurement Demystified breaks new ground with the Talent Development Reporting principles (TDRp) framework for both measurement and reporting. This framework helps simplify the discussion of measurement, analytics, and reporting by providing a common language and an easy-to-use structure, consisting of four broad reasons to measure, three categories of measures, and five types of reports. The framework is employed throughout the book to provide an integrated, holistic approach to this important topic.

    Notably, this is the first book to include the Association for Talent Development’s benchmarks as well as the eight learning metrics recommended by the International Organization for Standardization (ISO), and it tackles uncharted territory with its comprehensive discussion of reporting. To our knowledge, no other book focuses on L&D reporting or provides a framework to help practitioners identify the type of report best suited to a particular need.

    This book is about much more than evaluation. It is about all the reasons for measuring and how the measures should be used once they are obtained. In this book, we guide you on how to select, calculate, and use the measures, including how to use the reports to run learning like a business and deliver greater value. In sum, a more comprehensive introduction to these important topics will not be found elsewhere.

    Introduction

    The L&D profession has made steady progress over the last 60 years defining processes and introducing new concepts, systems, and tools. Many practitioners, though, still don’t know where to start or how best to proceed with measurement, particularly with data analytics and reporting. For example, many are asked to create a measurement strategy or show the value of their investment in learning, but don’t know how. And most L&D professionals have limited resources, which makes the task all the more challenging, especially considering the more than 180 measures we have identified that are available just for L&D. (We cover 120 in this book!)

    We propose to simplify measurement, analytics, and reporting by applying a framework called the Talent Development Reporting principles (TDRp). Like any good framework, TDRp provides a common language for measures and the reasons for measuring. It recommends grouping measures into three categories, which will facilitate both the discussion and selection of measures. It also recommends a framework and common language for reporting based on the reasons for measuring. Moreover, TDRp breaks new ground by recommending the adoption of three standard reports for the management of individual programs and the entire department. TDRp also provides practical guidance on how to use the measures and reports to meet the needs of the various stakeholders in your organization.

    Let’s first offer a little history of how we got here before exploring TDRp in detail.

    History

    In 2010, a group of L&D leaders began discussing the need to create standards for measurement, reporting, and management within the industry. The discussions started casually among like-minded colleagues at conferences and centered on two key questions: Why is measurement capability in most organizations underdeveloped, and why does every organization spin its wheels creating its measurement approaches from scratch? The answer to both questions was the same: a lack of standards for measurement and reporting. Without standards, every talent function essentially had to start from scratch to first identify the measures that made sense in their organization and then design the appropriate reports that would provide insights for decisions and corrective action.

    As the discussions widened to include a broader group of industry thought leaders, the realization hit: The Generally Accepted Accounting Principles (GAAP) employed by accountants in the United States (and the International Financial Reporting Standards employed by accountants in the rest of the world) could be the inspiration for the group’s mission. The rationale was: If the GAAP framework for measurement and reporting works so well for accountants, who also have hundreds of measures and numerous reports, why don’t we have something like it for the learning profession?

    Moreover, they argued, accounting is not the only profession that has adopted frameworks and standards to help them organize vast amounts of data and provide a common language. Chemistry uses the periodic table; biology classifies organisms into kingdoms, phyla, families, and species; medicine classifies humans (and animals) by systems and organs. The founders of TDRp thought the time had come to develop a framework for learning.

    The working group consisted of about 30 thought leaders and prominent practitioners, with Kent Barnett and Tamar Elkeles leading the effort (see sidebar). They recruited the experts shown in appendix A, including your authors, who helped conduct the research of current practices and wrote the whitepaper. After numerous revisions to the whitepaper, the group agreed on the key assumptions and principles, the three types of measures, and the recommended management reports (one of the five types of reports).

    The Origins of TDRp

    By Kent Barnett

    In the fall of 2010, Tamar Elkeles, at that time the CLO of Qualcomm, and I, then CEO of KnowledgeAdvisors, were at lunch celebrating the retirement of Frank Anderson. Frank, the outgoing president of Defense Acquisition University, was a visionary and highly respected learning leader, so it was the perfect place to launch a strategic industry initiative. During our conversation, we agreed it was time to create standards to help us measure the impact and performance of L&D. We realized that the financial world had standardized reporting. By looking at the income statement, balance sheet, and cash flow statement, one could analyze the financial performance of any organization. Shouldn’t we be able to do the same thing in learning?

    Tamar and I agreed to co-chair a new council with the goal of creating standardized reporting for talent development. More than 30 thought leaders and leading organizations joined our effort, and out of that grew the Talent Development Reporting principles (TDRp). Most importantly, early on in the process Dave Vance accepted our offer to join us. As our work progressed, Dave took the lead and spearheaded the efforts to create the Center for Talent Reporting.

    Ten years later, the TDRp framework is being adopted around the world, and Dave Vance has turned the Center for Talent Reporting into an integral part of our industry’s advancement.

    The working group focused initially on L&D but quickly extended the principles to all core talent processes, defined as those processes that directly contribute to achieving high-level organizational outcomes. By mid-2012, we expanded TDRp to include talent acquisition, performance management, leadership development, capability development, and total rewards. (See appendix A for more detail.) In this book, we concentrate only on L&D. You can find the measures and sample reports for the other HR processes at CenterforTalentReporting.org.

    Now that we had developed TDRp, it needed a home. The Center for Talent Reporting (CTR) was created in 2012 to be such a home and to advocate for TDRp’s adoption. CTR would also provide resources to help the profession implement TDRp, including webinars, written guidance, workshops, and an annual conference.

    With this background, let’s turn to Measurement Demystified, which we wrote to help you, the L&D practitioner, better measure, analyze, report, and manage learning at both a program and department level, with the ultimate aim of delivering greater value to your organization.

    About This Book

    Our approach outlined in the following chapters will work for both small and large organizations, even if yours is an L&D staff of only one or two. Typically, smaller organizations will have fewer programs, so the number of measures and reports will also be smaller. Larger organizations will have greater complexity and require greater effort, so they will have to set some priorities. Even so, the guidance remains the same: Start small and grow. The approach also works for all types of organizations—for profit, nonprofit, government, education, and the military.

    Our outlook on each topic is very practical. We all have limited resources, including limited time and imperfect data. We all operate in an environment of continual change and uncertainty. As practitioners, our goal is to do the best we can with what we have to help our organizations succeed. Consequently, we use imperfect and often incomplete data because that is usually better than the alternative, which is to do nothing. We plan, estimate, and forecast knowing that we will be wrong but, if we do it smartly, the effort will be worthwhile and contribute to our organization’s success.

    With that approach in mind, each chapter builds on the preceding chapters. You can jump directly to a chapter that interests you, but if you are not already familiar with all the reasons for measuring and the TDRp framework, we advise you to read chapter 1 first. Likewise, since we present a new framework for reporting, it will be helpful to read chapter 8 before the other chapters on reporting. After you are familiar with the framework and measures, you can use the book for performance support and go to the relevant section for definitions of measures or guidance on reports. Here is a description of the chapters:

    In chapter 1, we start by discussing the many reasons to measure and then share the TDRp framework, classifying the reasons to measure into four categories to simplify communication and understanding. We provide a maturity model for measurement, employing the four broad reasons to measure, and classify measures into three types and reports into five types. The chapter ends with a discussion of the recently released International Organization for Standardization’s (ISO) Human Capital Reporting Standards and their integration with TDRp.

    Chapter 2 completes our foundational discussion of measurement by explaining the importance of including all three types of measures in a measurement strategy. In chapter 3 we begin our detailed discussion of measures by introducing efficiency (or activity) measures, which are by far the most numerous in the profession. We provide definitions and recommendations on 107 of these foundational measures, including those benchmarked by the Association for Talent Development in its annual State of the Industry report and those recommended by the ISO.

    Chapters 4 and 5 explore effectiveness and outcome measures, the subjects of many books on evaluation. We provide a comprehensive introduction to these important measures and a discussion of the key differences between the Kirkpatrick and Phillips approaches. We define each measure and detail the options for calculation. We also include a list of measures that are commonly benchmarked.

    In chapter 6, we provide guidance on how to create a robust measurement strategy, including all the key elements. Then in chapter 7, we incorporate what we’ve learned so far to guide the reader in selecting the right measures based on their purpose for measuring. We provide examples of recommended measures for common programs and improvement initiatives.

    Chapter 8 revisits the TDRp framework to explore the five different types of reports, employing the measures we’ve described so far. We suggest how to select the proper report to meet a specific need. Chapter 9 focuses on one type of report, the management report, and details the three specific management reports recommended for use in managing learning programs and department initiatives.

    Chapters 10, 11, and 12 complete the exploration of reporting, first by providing guidance on creating a reporting strategy, and second by providing instruction on how to create values for the selected measures, including planning and forecasting. Some readers will find chapters 11 and 12 challenging, not because the concepts or measurements are difficult, but because there are so many options.

    We end by sharing implementation guidance in chapter 13 and pulling all the elements of the book together in chapter 14. In addition to a history of the TDRp adoption, the appendix includes an example document of roles and responsibilities for L&D and goal owners, a sample measurement and reporting strategy, a sample project implementation plan, and a glossary.

    To see how the concepts fit together, review the chapter layout here (Figure I-1).

    Definitions of Terms We Use in the Book

    The glossary provides definitions for more than 190 terms; here we share some of the most basic and important terms we use in the book.

    We use the term measure as a noun to be synonymous with metric and KPI (key performance indicator). At one time, KPI might have been reserved for only the few key or important measures, but today it is commonly used for any measure.

    Figure I-1. Chapter Layout

    While many in the profession consider any operation involving numbers to be analytics, we reserve the term to mean higher-level analysis, often involving statistical tools. For example, we will not refer to determining the number of participants or courses as analytics. The same is true for reporting the average participant reaction for a program. In both cases, the value of the measure is simply the total or average of the measured values—no analysis required. In contrast, a detailed examination of the values of these measures, perhaps using their frequency distribution, the use of regression to forecast the value of a measure, or the use of correlation to discover relationships among measures, will be referred to as data analytics or analysis.

    Think of it this way: measurement provides the quantification of the measure, which is typically an input for analysis (the old term for analytics). There are exceptions, however, and sometimes analysis is required to determine the value of a measure (isolated impact, for example). In summary, simply measuring and reporting the value of a measure does not generally rise to the level of analytics; more than arithmetic is required to be considered analytics. (Figure I-2 describes the connections among measurement, analytics, methodologies, and reporting.)
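
    To make the distinction concrete, here is a minimal sketch in Python using hypothetical data: computing an average is measurement, while using correlation to explore a relationship between two measures is analytics. The scores and variable names are illustrative only, not drawn from the book.

    ```python
    # A minimal sketch, with hypothetical data, of measurement vs. analytics.
    from statistics import mean, correlation  # correlation requires Python 3.10+

    # Hypothetical Level 1 favorable scores (%) and prework hours for six courses
    reaction_scores = [82, 85, 79, 88, 84, 90]
    prework_hours = [2, 3, 1, 4, 3, 5]

    # Measurement: the value of the measure is simple arithmetic (an average)
    print(f"Average participant reaction: {mean(reaction_scores):.1f}%")

    # Analytics: correlation used to discover a relationship among measures
    r = correlation(prework_hours, reaction_scores)
    print(f"Correlation of prework hours with reaction: {r:.2f}")
    ```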

    With this context, we suggest the following definitions for these important terms:

    •  Measure (synonymous with metric and KPI). As a noun, it is the name associated with a particular indicator. For example, the number of participants is a measure. As a verb, it is the act of finding the value of the indicator.

    •  Measurement. The process of measuring or finding values for indicators.

    •  Analytics (synonymous with analysis). An in-depth exploration of the data, which may include advanced statistical techniques, such as regression, to extract insights from the data or discover relationships among measures.

    The Institute for Operations Research and the Management Sciences defines analytics as the scientific process of transforming data into insights for making better decisions, but we believe this definition is overly restrictive. We agree that the intent of analytics is to provide insights, but the effort is not always directed toward decision making. Sometimes, the goal is simply a better understanding of the data or of relationships among multiple measures. Furthermore, an analytics effort may not always provide insights or lead to better decisions, just as an experiment may not always produce the hypothesized result.

    While terms such as program, initiative, course, and class are often used interchangeably, we will define each specifically, borrowing from academic terminology:

    •  Program. A course or series of courses with similar learning objectives designed to accomplish a business or HR goal or meet an organizational need. For example, a program to improve leadership may comprise four related courses over a six-month period. At the university level, a program leading to a degree in economics may require 12 courses over a four-year period.

    •  Course. A class or series of classes, an online module or series of online modules, prework, post-work, performance support, discussion boards, and other types of learning to convey related and integrated content. For example, a course on leadership may consist of four hours of prereading, two online modules, four instructor-led classes, an online discussion board, and performance support. In a corporate environment, each course will have a specific designation in the learning management system (LMS). At the university level, students will enroll in specific courses each term such as Economics 101.

    •  Class. Each physical or virtual meeting of students where content is conveyed by an instructor. A course may consist of just one class if the content can be conveyed in one sitting or it may require multiple classes to convey all the content. At the university level, a semester-long course like Econ 101 might meet for two classes per week for 10 weeks. It is also possible that the number of students enrolled in a course exceeds the optimum class size, which necessitates multiple classes even if the content can be conveyed in a single sitting. So, a one-hour instructor-led course for 150 employees will require six classes of 25 each. The analogy at the university level is 300 students taking Econ 101 where enrollment is limited to 100 per class. In this case there will be three sections with 100 in each.

    •  Online or e-learning module. A single session of computer, tablet, or mobile-based instruction that may last from five or 10 minutes to an hour or more. Each online module will typically require the user to log in, with completion recorded in the LMS.

    •  Initiative. May be used in place of program but may also designate a coordinated series of actions to improve the effectiveness or efficiency of the L&D department. For example, there may be an initiative to reduce complaints about the LMS, lower department costs, or improve the application rate of learning in general across all courses. In this book we will use the term program when the effort addresses business or HR goals, or organizational needs like onboarding or basic skills training. We will use the term initiative when the effort is not directly aligned to business or HR goals or organizational needs, but instead focuses more on improving the efficiency or effectiveness of L&D department processes and systems or all programs.

    Here are several other key terms and their definitions, which we will use frequently:

    •  Learning and development. The name of the professional field and many training departments dedicated to increasing the knowledge, skills, and capabilities of the workforce. Other names for L&D departments include training, organization development, and talent development, although the last two may include additional responsibilities such as succession planning.

    •  Formal learning. Learning that is structured and organized or directed by someone other than the learner. This includes instructor-led training (ILT) where the instructor is physically located with the participants, virtual ILT (vILT) where the instructor is at a different location than the participants, e-learning, structured coaching, and structured mobile learning.

    •  Informal learning. Learning that is not structured, organized, or directed by someone else. The participant learns on their own through self-discovery. This includes social learning, knowledge sharing, on-the-job learning, unstructured coaching, and personal learning through Internet or library exploration.

    •  CLO (chief learning officer). The person ultimately responsible for learning in an organization. This position may also be named vice president of training or director of training. If the person also has responsibility for other aspects of talent, the position may be called chief talent officer (CTO) or chief human resources officer (CHRO).

    •  Employees or headcount. The unique count of all employees at a point in time. Part-time employees are counted as well as full-time employees. Note: If an organization uses many contingent workers (temporary employees and contract workers), consideration should be given to using the term workforce (employees plus contingent workers) in addition to, or as replacement for, number of employees.

    •  FTE (full-time equivalent). This is a way of measuring full-time effort (40 hours per week x 52 weeks per year) when some employees are part-time and do not work 40 hours per week or 52 weeks per year. For example, if two part-time employees each work half-time, the full-time equivalent of their effort is 1.0 FTE (see the sketch following this list).
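
    The following minimal sketch in Python illustrates the FTE arithmetic just described; the 2,080-hour basis follows from the 40-hour week and 52-week year, and the function name and figures are hypothetical.

    ```python
    # A minimal sketch of the FTE arithmetic described above,
    # assuming a 40-hour week and a 52-week year as the full-time basis.
    FULL_TIME_HOURS_PER_YEAR = 40 * 52  # 2,080 hours

    def total_fte(annual_hours_worked: list[float]) -> float:
        """Sum everyone's annual hours and divide by the full-time basis."""
        return sum(annual_hours_worked) / FULL_TIME_HOURS_PER_YEAR

    # Two half-time employees (1,040 hours each) together equal 1.0 FTE
    print(total_fte([1040, 1040]))  # -> 1.0
    ```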

    Finally, throughout this book, we discuss the connections among the four foundational elements of TDRp:

    •  Reporting, which we define as an approach to structure measures and analysis to share results with stakeholders.

    •  Measurement methodologies, which we define as a process and suite of standards and tools that guide how practitioners execute a specific approach. The learning measurement profession uses several well-known methodologies such as:

      Kirkpatrick Four Levels of Evaluation

      Phillips ROI Methodology

      Brinkerhoff Success Case Method.

    •  Analytics and measurement, the third and fourth components we defined previously.

    In the interplay of these relationships, measurement is at the base, supplying the inputs to our methodologies (for example, Kirkpatrick or Phillips), as well as the analytics we employ (Figure I-2). In some cases, however, the methodologies will dictate what measures we must use. Or, as we mentioned earlier, analytics may be required to compute a specific measure, such as the isolated impact of learning.

    Figure I-2. The Interplay of Reporting, Analytics, Measurement, and Methodologies

    Reporting provides a way to display our data, ascertain trends, and provide insights into progress against targets or goals. Reports will often trigger a request for a deeper dive. Depending on how we have structured the reports, we may be able to drill down and get answers to our questions. In other cases, the reports may require additional analysis to understand the root causes behind observed results.

    Conversely, a learning leader might formulate a hypothesis such as, learners with low levels of management support are less likely to apply the learning. Through an impact study or ad hoc analysis, we can confirm or refute this hypothesis about manager support. The insights from the analysis may suggest ongoing reporting of new measures (for example, manager support). Moreover, the reports enable us to monitor results and determine if the hypothesis holds over time.

    Understanding the interplay among the four elements of reporting, methodologies, analytics, and measures will help you see how you can navigate the implementation of TDRp within your own organization.

    With all this in mind, let’s get started.

    PART I

    Foundations of Measurement

    CHAPTER 1

    The Talent Development Reporting Principles (TDRp) Framework

    The Talent Development Reporting principles (TDRp) framework simplifies the measurement, reporting, and management of learning. We use this framework throughout the book to provide a common language to help you understand the reasons to measure, select the right measures, and know how to share them in the right reports.

    At its core, the TDRp framework answers three questions:

    1. What are the reasons to measure?

    2. What measures should I use?

    3. What should I do with them once I have them?

    We begin by addressing each of these questions, starting with the reasons to measure. We then discuss the complete TDRp framework and its relationship to other standard-setting efforts.

    Reasons for Measuring

    The starting point for all measurement and reporting strategies should be an answer to this question: Why do we want to measure? The answer will directly influence your choice of measures, how you use them, how you present them, and how frequently you report them.

    There are many reasons for measuring, as shown in Figure 1-1. In some cases, we simply want to communicate results in a report, scorecard, or dashboard. In others, we want to answer questions that leaders have about the effectiveness of a specific initiative. Or we want to demonstrate value and the benefit provided by a curriculum or portfolio of programs, which will also help to build the case for new or additional investment.

    Figure 1-1. Why Do We Want to Measure?

    All of these are valid and may also depend on the specific needs of the organization. Whatever the reason, it is important to know the user and understand their needs. So, the recommended measures will always depend on the user and the context.

    A Framework for Measurement

    Given the many reasons to measure, it will be helpful to have a framework for measurement that highlights the most common reasons for measuring and segments the measures into fewer categories. Based on our experience, the reasons for measuring fall into four primary categories, which are an important part of the TDRp framework: inform, monitor, evaluate and analyze, and manage. Within each category, there are several specific reasons for measuring, each of which will guide your decisions on what data to collect, when to gather it, how to report it, and how to drive use of the data. Let’s explore each of these major reasons and delve into the practices that organizations are most commonly using today.

    Inform

    The most common reason for measurement is to inform. Practitioners generate dashboards, scorecards, and reports to share activity, identify trends, answer questions, and surface opportunities for improvement. L&D teams also use the data to create their own internal benchmarks or submit their results to industry surveys for benchmarking.

    When the purpose of measurement is to inform, L&D practitioners typically generate static reports shared via scorecards, dashboards, Excel spreadsheets, or even email updates.

    Monitor

    While informing is an important reason to measure, many leaders want to take a more active role in the process. When the purpose of measurement is to monitor, the recipients of the information want to know both what happened and how the results compare to a preestablished threshold, benchmark, or goal. Monitoring implies that a manager or leader wants to assess if the measure is meeting the threshold.

    For example, an ongoing course’s Level 1 participant reaction may be running in the 80–90 percent favorable range. L&D leaders have determined that they want to maintain it at 80 percent or higher. The measurement analyst would then generate a scorecard, which would highlight any Level 1 score below 80 percent. Alternatively, system administrators could program the system to generate a warning if the measure falls below 80 percent.

    Another example could be monitoring completions for a course that all participants are expected to finish. A monthly scorecard would show progress toward the 100 percent threshold, and detailed reports could be generated listing those who have not yet completed the course. (A short sketch of both checks follows.)
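
    As a minimal sketch of both monitoring examples, the Python snippet below flags any course whose Level 1 score falls under the 80 percent floor and reports progress toward the 100 percent completion target; the course names and figures are hypothetical.

    ```python
    # A minimal sketch, with hypothetical data, of the monitoring checks above.
    LEVEL1_THRESHOLD = 80  # minimum acceptable percent favorable

    courses = {
        "Leading Teams": {"level1_favorable": 78, "completed": 45, "enrolled": 60},
        "Safety Basics": {"level1_favorable": 91, "completed": 60, "enrolled": 60},
    }

    for name, data in courses.items():
        # Alert when the Level 1 measure falls below the threshold
        if data["level1_favorable"] < LEVEL1_THRESHOLD:
            print(f"ALERT: {name} Level 1 at {data['level1_favorable']}% "
                  f"(threshold {LEVEL1_THRESHOLD}%)")
        # Progress toward the 100 percent completion target
        pct = 100 * data["completed"] / data["enrolled"]
        print(f"{name}: {pct:.0f}% of enrolled participants have completed")
    ```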

    Evaluate and Analyze

    Another common reason to measure is to evaluate and analyze. Efforts to determine the effectiveness of a program and demonstrate its value represent the primary activities for many measurement analysts.

    Program evaluation is a major discipline within the measurement and evaluation field not only for profit-making organizations but also for nonprofit and government institutions. Because program evaluation is such an important driver of measurement, hundreds of books, whitepapers, and websites devote space to building competency and providing best practices for program evaluation.

    While this has traditionally focused on evaluating the results of a specific program or initiative, measurement analysts are increasingly leveraging engagement, hiring, promotion, succession, and retention data to explore broader workforce trends. For example, large organizations with HR and talent data warehouses often examine the impact of engagement on employee productivity, or the relationship between manager effectiveness and attrition of high performers. Learning organizations are exploring the relationship between specific types of learning (for example, learning experience portals) and employee engagement and growth. Analysis that incorporates broader HR data provides important input for the learning strategy and processes to ensure access to the right learning at the right time by the right employees. As organizations accumulate greater volumes of data, improved data quality and accessibility of disparate data elements will enable increasingly more sophisticated analysis efforts.

    Manage

    The last reason for measuring is to manage programs, processes, people, and ultimately the L&D function. Management implies that leaders have established a plan, target, or goal for each measure and that L&D leaders have committed to actively managing the program or initiative throughout the year to deliver the planned results. While monitoring involves a threshold, which is near the historical value of the measure, leaders need to manage when the plan or goal represents a significant change from the status quo. In other words, you manage when you want to move the needle on a measure with an understanding that you may have to devote significant effort to effect the change.

    The management process requires the creation of monthly reports showing at a minimum the plan, year-to-date (YTD) results, and a comparison of YTD results to plan. Ideally, the report will also show the forecast or value for the plan at year end if no special actions are taken. Since most L&D managers are not managing this way today, this reason for measuring represents the greatest opportunity for the use of measures.
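
    As a minimal sketch of that comparison, the Python snippet below shows, for each measure, the plan, YTD results, and a naive straight-line forecast that assumes the year-to-date pace simply continues; the measures and figures are hypothetical, and a real report would use richer forecasting.

    ```python
    # A minimal sketch, with hypothetical figures, of a plan-versus-YTD
    # comparison with a naive year-end forecast (straight-line run rate).
    MONTHS_ELAPSED = 6  # reporting through June

    measures = [
        # (measure name, annual plan, YTD result)
        ("Participants trained", 1200, 550),
        ("Courses delivered", 48, 26),
    ]

    for name, plan, ytd in measures:
        forecast = ytd / MONTHS_ELAPSED * 12  # assume the YTD pace continues
        print(f"{name}: plan={plan}, YTD={ytd}, forecast={forecast:.0f} "
              f"({100 * forecast / plan:.0f}% of plan)")
    ```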

    In summary, each reason for measuring has an important role in enabling leaders, stakeholders, and initiative owners to keep abreast of progress, understand what’s working and what’s not, and make data-informed decisions about the actions needed to sustain the program or get it back on track (Figure 1-2).

    Figure 1-2. The Four Reasons for Measuring

    Measurement Maturity

    Although all four categories represent important reasons to measure, their descriptions suggest a hierarchy of measurement purposes (shown in Figure 1-3), which will help organizations assess their measurement maturity and create their measurement strategy.

    Figure 1-3. Hierarchy of Reasons to Measure

    Informing

    Informing represents the beginning of the measurement journey for organizations. At this level, leaders develop and report basic, foundational measures such as the number of courses and participants or the percentage of employees receiving training.

    Monitoring

    Monitoring represents the second logical step in the use of measures. When moving from informing to monitoring, the role and capabilities of leaders step up a notch. At this level, leaders have identified not only the key measures, but also the acceptable levels for each of them. When the measure’s value falls beneath the threshold, program owners are alerted that they need to take action.

    Evaluating and Analyzing

    Evaluating and analyzing is the third step in the hierarchy. At this stage, the organization uses measures to determine if a program is effective and what relationships (for example, correlations) exist among important measures. Evaluation and analysis is a big step up, requiring considerably more effort and skill than monitoring.

    At this step, leaders will often want a more robust set of measures. In the case of learning, they will go beyond reaction and learning data to measure application, impact, and return on investment (ROI). In addition to identifying relationships among L&D measures, analysts often examine the relationship between learning measures and HR or business measures. For example, leaders may ask, Did the new product introduction training result in shorter sales cycle times or contribute to increased order levels in the first six months? Or, leaders may ask, Did learning contribute to higher employee engagement and retention? When senior business leaders begin asking questions that they can’t answer by simply monitoring basic data, L&D needs to build internal capability and the commitment to robust evaluation and analysis processes.

    Managing

    Managing is the highest-level reason for measuring programs, initiatives, processes, resources, and the overall L&D function. Managing requires that leaders first establish plans or targets for critical measures and then use monthly reported measures to determine if the program is on target to achieve the planned results. If not, managers need to take corrective action to get the program back on track.

    Management is the most intensive use of measurement and requires the greatest skill and effort, but it also delivers the biggest payoff. As in any maturity model, management requires the foundational capability of the three other levels, particularly evaluation and analysis, to be effective.

    While management seems like the obvious and logical level of measurement that leaders should aspire to achieve, many organizations have not fully embraced it. In fact, most measurement maturity models in the profession now list analytics and prediction (Level 3 in our model) as the highest level of maturity. The practice of setting plans and comparing results monthly to plan requires a level of business acumen, analysis, and discipline that many simply don’t have or desire. Management also requires a tolerance for uncertainty and a willingness to be accountable. Chapter 9 on management reporting explores this topic of running learning like a business in greater detail.

    The hierarchy of the four reasons to measure, along with descriptions of primary use, level of analysis, measurement frequency, and key elements, is shown in Table 1-1.

    Table 1-1. Measurement Requirements at Each Maturity Level

    The same measure may appear in each category but serve a different purpose. For example, the application of learning on the job could very well appear in each category. A leader may have a question about the level of job application across the organization (Inform). Leaders may set a minimum threshold for it and see that it is met each month (Monitor). For many programs, a goal owner may use application of learning as an indicator of program effectiveness (Evaluate) and want to understand if it is predictive of impact and perhaps employee retention (Analyze). For strategic programs, managers should actively manage the application rate monthly to ensure that the organization is achieving planned levels of application as part of an effort to achieve a business or HR goal (Manage).

    At this point, you may be asking, where do reporting and analytics fit into the measurement hierarchy? The simple answer is that both reporting and analytics are methods that help achieve the purposes we have just described.

    Reporting is a powerful means to highlight data anomalies or surface patterns in the data that program owners should address. John Tukey, an eminent 20th century statistician, said, One great virtue of good graphical representation is that it can serve to display clearly and effectively a message carried by quantities whose calculation or observation is far from simple (Tukey and Wilk 1965). We believe that all well-designed reporting can achieve this outcome, not simply those with graphical depictions.

    Reporting is useful for all levels in the hierarchy, whether to inform through a standard scorecard or to help manage a program through alerts when a measure is below its predetermined threshold. Reporting is critical to leaders and practitioners at all levels of the organization to meet their commitments and deliver quality programs.

    Analytics is important at higher levels of the maturity curve, namely for evaluation, analysis, and management. Analytics is also required to answer multi-tier and complex questions. Reporting tends to answer questions such as What just happened? and Where is the problem? Analytics answers questions such as Why is this happening? or What will happen if we don’t make changes to our current processes or practices? Analytics requires a deeper level of statistical analysis, often involving regression analysis or predictive modeling. Analytical methods are essential to verify the link between inputs, activities, output, impacts, and ultimate outcomes. And in turn, these causal links are most important for evaluation, analysis, and management. There are a variety of approaches that practitioners use for reporting and analytics (Table 1-2).

    Table 1-2. The Role of Reporting and Analytics in the Measurement Hierarchy

    In conclusion, the natural starting point in our journey to measure, report, and manage is to ask the fundamental question, Why do we want to measure? While the answer may not always lead to a unique set of measures, it will determine how we use the measures, how we present them, and how frequently we share them. In chapters 6 and 7, we employ the four broad reasons to measure to create a measurement strategy and select appropriate measures.

    TDRp Measures

    The second question answered by the TDRp framework provides a classification scheme for the measures themselves. For simplicity, we recommend three types of measures for L&D and HR: efficiency measures, effectiveness measures, and outcome measures (Table 1-3). Note this book focuses on measures and reports for L&D only. The Center for Talent Reporting (CTR) website provides measures and sample reports for other HR disciplines.

    Table 1-3. The Three Types of TDRp Measures

    As we will discuss in chapter 2, these three types of measures, when used together, not only enable L&D functions to focus their energies but also mitigate unintended consequences when organizations focus only on one category of measure and ignore the rest.

    Origins of the Three
