Manager's Guide to Program Evaluation: 2nd Edition: Planning, Contracting, & Managing for Useful Results
Ebook · 172 pages · 1 hour


About this ebook

Your Guide to Getting a Useful Evaluation, now updated and revised in this second edition.

Evaluation is vital and beneficial to any nonprofit organization. An effective evaluation can help identify an organization's successes, share information with key audiences, and improve services. It can confirm that an organization is truly making a difference, or identify the changes an organization needs to make in order to improve. This book describes what types of information to collect and what questions that information can answer, details the four phases of evaluation and the steps involved in each phase, and provides guidance on the various types of research consultants, with advice on selecting one. If you are an organization manager, decision maker, policymaker, funder, researcher, or student of applied social service research, this guide is an essential resource for effective organizational management.

Language: English
Release date: January 25, 2022
ISBN: 9781684427901
Author

Paul W Mattessich

PAUL W. MATTESSICH, Ph.D., is executive director of Wilder Research, which dedicates itself to improving the lives of individuals, families, and communities through applied research. Mattessich has assisted local, national, and international organizations with strategic planning, organizational improvement, and evaluation. He travels regularly to Northern Ireland and the United Kingdom, where he learns from, and consults with, organizations addressing youth development, community development, and the promotion of peace and acceptance of diversity among groups from divided communities. Mattessich has been involved in applied social research since 1973 and is the author or coauthor of more than three hundred publications and reports, including the recently released third edition of Collaboration: What Makes It Work. He has also served on a variety of task forces in the government and nonprofit sectors. He received his Ph.D. in sociology from the University of Minnesota, where he currently serves as an adjunct faculty member in the School of Social Work.



    Book preview

    Manager's Guide to Program Evaluation - Paul W Mattessich

    Preface to the Second Edition

    The Wilder Foundation has sought to fulfill its mission by delivering human services and also by undertaking activities that build its own capacity and the capacity of other organizations to work more effectively. To that end, the Manager’s Guide to Program Evaluation shares what managers in nonprofit and government organizations ought to know about program evaluation. The book does not provide a text for researchers. Rather, it offers program managers, who might have a little research training or a lot, the knowledge they need for planning, contracting, and managing a helpful evaluation.

    Since publication of the first edition, the book has provided a valuable reference for thousands of managers. It has also appeared on the syllabus for courses that introduce practitioners to evaluation. We have received feedback from readers, and we used that feedback to develop the second edition.

    We have strengthened the book in part based on our reflections on the many social and cultural changes that have occurred worldwide during the past twenty years. For example, technology has changed greatly, and access to new sources of information has increased. Many organizations that in the twentieth century had cumbersome information systems now have automated program and service records that they can use to make decisions based on data. Online technology makes it easy to obtain many types of information and to collect data through surveys and other means.

    Expectations have grown for research to have validity with varied cultures and populations. It has become more important to gather data from multiple sources and with a variety of tools to ensure reliable results.

    Requirements to demonstrate outcomes—whether formal expectations from funders or informal expectations from donors, voters, and the public at large—have steadily increased.

    This second edition can help managers stay on top of those changes when they engage in program evaluation.

    What’s new in the Second Edition?

    Similar to the first edition, this book provides important definitions and explains significant concepts related to program evaluation. It moves through the cycle of design, data collection, analysis, and reporting—from the perspective of a program manager. It describes the role of a manager at the different stages of that cycle.

    This edition builds on the previous edition in several ways:

    First and most obvious, we have added examples in some chapters to promote understanding of concepts and methods. These examples come from real experiences in evaluation research.

    We have clarified our descriptions of concepts and methods throughout the book in response to feedback from users of the first edition. We have also clarified some of the options and choices that managers have in implementing an evaluation.

    We suggest whenever appropriate how equity and cultural relevance should fit into evaluation design, data collection, analysis, and reporting.

    We offer perspective on the strengths and limitations of the many sources of big data that have proliferated during the past two decades. We have added new references and web links that take readers to many useful sources of information and tools for designing and implementing a program evaluation.

    We hope that you find the new and enhanced edition useful for your work.

    1

    What Is Program Evaluation?

    All of us gather evaluation information and make evaluative choices every day. The common sense we bring to everyday decisions strongly resembles the scientific principles that underlie program evaluation.

    Take for example a common situation: You live in a new location and want to find the shortest route to work. You might check a map or use an online trip mapping site to identify possible routes. Let’s suppose three possible routes exist: A, B, and C. How do you determine which is the best to use?

    If concerned mostly about the time it takes, you could try each route—probably several times—and record the length of time your trips take. After trying each route on several different days and recording the amount of time each trip took, you could look at the information you compiled and reach a good conclusion about the quickest route. If you had additional concerns about conditions and services along the route, or about safety, you might make notes about these aspects of your experience.
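    The route comparison above can be sketched as a small data exercise: record several trip times per route, average them, and pick the route with the lowest average. The times below are made up purely for illustration.

```python
# Illustrative sketch of the route comparison described above.
# The recorded minutes are hypothetical, not from the book.
from statistics import mean

# Minutes recorded over several days for each candidate route.
trip_times = {
    "A": [24, 27, 25, 26],
    "B": [22, 30, 21, 23],
    "C": [28, 29, 27, 30],
}

# Average each route's times, then pick the quickest on average.
averages = {route: mean(times) for route, times in trip_times.items()}
quickest = min(averages, key=averages.get)
print(f"Quickest route on average: {quickest} ({averages[quickest]:.1f} min)")
```

    Repeating the measurement on several days, rather than trusting a single trip, is exactly the kind of safeguard against chance variation that formal evaluation builds in.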

    How does this demonstrate that you already do program evaluation? Table 1 portrays how several of the features of your decision-making about the route to work resemble features of good program evaluation.

    As you can see, program evaluation strongly resembles thinking processes that we all use. It formalizes those processes and makes our thinking live up to certain standards. Program evaluation also has its own jargon. In this book, we’ll discuss how evaluation builds upon decision-making skills that you use every day and how it offers you tools that enhance your capacity to make decisions about programs. Along the way, you’ll pick up some of the jargon you’re likely to hear from an evaluator.

    We wrote this book for program managers, but we discovered that program evaluation researchers found it helpful, too, because the book can support conversations between managers and evaluators.

    Evaluation Defined

    What is program evaluation? A definition that we have used at Wilder Research borrows from the work of many evaluators:

    Evaluation is a systematic process for an organization to obtain information on its activities, its impacts, and the effectiveness of its work, so that it can improve its activities and describe its accomplishments.

    Let’s look at the key words in this definition.

    Systematic. Evaluation must be designed carefully, in a way that makes it reliable, credible, and useful. This implies attention to definitions of important concepts (for example, which services are provided and which people are served) as well as the use of methods that meet scientific standards.

    Process. Evaluation is ongoing. It involves work within many or all parts of an organization over time—with the intention to document what the organization is doing and to provide the organization with ways of measuring and understanding its activities and outcomes over a period of time.

    Information. Evaluation involves data. It provides information. It does not make decisions.

    Activities, impacts, effectiveness. Evaluation identifies what an organization does (activities, including who is served and what they receive); it identifies what results this produces (intended and unintended impacts); and it identifies the extent to which an organization achieves the specific outcomes it intended for the people whom the organization seeks to benefit (effectiveness).

    So that. The ultimate goal of evaluation is the use of information—either to better serve people or to represent the organization to others. In recent years, you may have heard about data-informed decision-making or data-based decision-making. Essentially, these two terms refer to using the data that organizations collect to inform strategic decisions, such as how to allocate resources or which opportunities to pursue. As a program manager, you likely have a good sense of how to budget your program’s resources—but the goal of data-informed decision-making is to intentionally include your program’s data and findings from evaluation in your budgeting
