Assessment Essentials: Planning, Implementing, and Improving Assessment in Higher Education
Ebook · 634 pages · 10 hours


About this ebook

A comprehensive expansion to the essential higher education assessment text

This second edition of Assessment Essentials updates the bestselling first edition, the go-to resource on outcomes assessment in higher education. In this thoroughly revised edition, you will find, in a familiar framework, nearly all new material, examples from more than 100 campuses, and indispensable descriptions of direct and indirect assessment methods that have helped to educate faculty, staff, and students about assessment.

Outcomes assessment is of increasing importance in higher education, especially as new technologies and policy proposals spotlight performance-based success measures. Leading authorities Trudy Banta and Catherine Palomba draw on research, standards, and best practices to address the timeless and timeliest issues in higher education accountability. New topics include:

  • Using electronic portfolios in assessment
  • Rubrics and course-embedded assessment
  • Assessment in student affairs
  • Assessing institutional effectiveness

As always, the step-by-step approach of Assessment Essentials will guide you through the process of developing an assessment program, from the research and planning phase to implementation and beyond, with more than 100 examples along the way. Assessment data are increasingly being used to guide everything from funding to hiring to curriculum decisions, and all faculty and staff will need to know how to use them effectively. Perfect for anyone new to the assessment process, as well as for the growing number of assessment professionals, this expanded edition of Assessment Essentials will be an essential resource on every college campus.

Language: English

Publisher: Wiley

Release date: September 9, 2014

ISBN: 9781118903650

    Assessment Essentials - Trudy W. Banta

    Contents

    List of Exhibits and Figures

    Foreword

    Preface

    About the Authors

    Chapter 1: Defining Assessment

    Some Definitions

    Pioneering in Assessment

    Quality Assurance: An International Perspective

    Assessment Purposes

    Values and Guiding Principles

    Chapter 2: The Essentials of Assessment

    Planning Effective Assessment

    Implementing Effective Assessment

    Improving and Sustaining Assessment

    Additional Thoughts

    Chapter 3: Engaging Faculty and Students in Assessment

    Involving Faculty in Assessment

    Involving Students in Assessment

    Acting with Integrity

    Chapter 4: Setting Expectations and Preparing to Select Measures

    Intentions for Learning: Goals, Objectives, Outcomes

    Developing Statements of Expectations

    Selecting Methods and Approaches

    Designing Instruments

    Determining Approaches for Implementation

    Putting Everything Together

    Chapter 5: Using Direct Measures

    Using Classroom Assignments for Outcomes Assessment

    Performance Assessment

    Rubrics

    Aggregating Assessment Results in and across Courses

    Using Objective Tests for Outcomes Assessment

    Electronic Portfolios

    Chapter 6: Using Indirect Assessment Methods

    Using Surveys in Assessment

    Using Focus Groups in Assessment

    Additional Indirect Methods

    Qualitative versus Quantitative Approaches

    Classroom Assessment Techniques

    Chapter 7: Assessing Learning in the Major

    Capstone Experiences and Courses

    Portfolios

    Experiential Education

    Group Work and Team-Building Skills

    Employer Involvement

    Intentional Learning

    Chapter 8: Assessing Learning in General Education

    The Nature of General Education

    Assessment Choices and Issues

    Using Commercial Instruments and the Voluntary System of Accountability

    Assessing Specific Aspects of General Education

    The Degree Qualifications Profile

    Assessing General Education Outcomes within the Major

    Chapter 9: Assessing Student Learning and Program Effectiveness in Student Affairs

    Foundations for Assessment in Student Affairs

    Mission, Goals, and Objectives

    Leadership and Preparation for Assessment in Student Affairs

    Assessment Frameworks, Models, and Diagrams

    Assessment Plans and Methods

    Reporting and Sharing Results

    Ethical Behavior

    Improving Assessment

    Rewards for Assessment

    Chapter 10: Analyzing, Reporting, and Using Assessment Results

    Helping Faculty and Staff Use Their Assessment Results

    Assessment Reporting by Departments and Programs

    Summarizing Reports

    Managing Data

    Assessing Unit Reports

    Making the Process Transparent

    Institutional Assessment Reporting

    Analyzing Assessment Information

    Displaying Results

    Other Considerations

    Chapter 11: Assessing Institutional Effectiveness

    Linking Assessment and Institutional Planning: An Example

    Organizing to Assess Institutional Effectiveness

    Planning and Institutional Improvement at IUPUI

    Administering an Assessment Plan

    Considering Costs

    Linking Assessment to Other Valued Processes

    Chapter 12: Summing Up

    A Time of Transition

    Current Practice

    Continuing Challenges

    References

    Name Index

    Subject Index

    Advertisement

    End User License Agreement

    List of Illustrations

    Figure 11.1 Planning, Evaluation, and Improvement at IUPUI

    List of Exhibits

    Exhibit 4.1 Content-by-Process Matrix

    Exhibit 4.2 Curriculum Map

    Exhibit 4.3 Planning for Learning and Assessment

    Exhibit 4.4 Selection Criteria Matrix

    Exhibit 4.5 Objective-by-Measures Matrix

    Exhibit 5.1 Planning Sheet for e-Portfolios

    Exhibit 11.1 Assessment Plan Outline

    Praise for Assessment Essentials

    "Banta and Palomba’s new edition of Assessment Essentials covers a wide range of current topics that are crucial for faculty, staff, and administrators to follow as they strive to sustain effective assessments. I highly recommend this comprehensive resource for academic leaders and others in higher education who want to maintain the quality of their assessment efforts.

    "I regularly teach graduate courses and used the original Banta and Palomba book as required reading that helped students learn critical knowledge and skills in implementing and sustaining assessments. I plan to use the comprehensive new edition of this book for my upcoming graduate courses and highly recommend this valuable resource to other faculty members."

    —Elizabeth A. Jones, Ph.D., professor of education and director, Doctoral Program, Holy Family University

    "It’s hard to improve on a classic, but Banta and Palomba have done so with this updated edition of Assessment Essentials, which is replete with examples of what effective assessment work looks like in different types of institutions."

    —George D. Kuh, adjunct professor and director, National Institute for Learning Outcomes Assessment, University of Illinois and Indiana University

    "A sweeping update that brilliantly reflects the evolution of the field. Institutions should use it to build the assessment infrastructure they will need in the brave new world of prior learning assessment and competency-based education."

    —Barbara D. Wright, vice president, WASC Senior College and University Commission, Alameda, California

    Assessment Essentials

    Planning, Implementing, and Improving Assessment in Higher Education

    Second Edition

    Trudy W. Banta

    Catherine A. Palomba

    Foreword by Jillian Kinzie

    Wiley Logo

    Cover Design: Lauryn Tom

    Cover Image: © iStockphoto/VikaSuh

    Copyright © 2015 by John Wiley & Sons, Inc. All rights reserved.

    Published by Jossey-Bass

    A Wiley Brand

    One Montgomery Street, Suite 1200, San Francisco, CA 94104-4594

    www.josseybass.com/highereducation

    No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400, fax 978-646-8600, or on the Web at www.copyright.com. Requests to the publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, 201-748-6011, fax 201-748-6008, or online at www.wiley.com/go/permissions.

    Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. Readers should be aware that Internet Web sites offered as citations and/or sources for further information may have changed or disappeared between the time this was written and when it is read.

    Jossey-Bass books and products are available through most bookstores. To contact Jossey-Bass directly call our Customer Care Department within the U.S. at 800-956-7739, outside the U.S. at 317-572-3986, or fax 317-572-4002.

    Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material included with standard print versions of this book may not be included in e-books or in print-on-demand. If this book refers to media such as a CD or DVD that is not included in the version you purchased, you may download this material at http://booksupport.wiley.com. For more information about Wiley products, visit www.wiley.com.

    Library of Congress Cataloging-in-Publication Data has been applied for and is on file with the Library of Congress.

    ISBN 9781118903322 (cloth); ISBN 9781118903766 (ebk.); ISBN 9781118903650 (ebk.)

    THE JOSSEY-BASS HIGHER AND ADULT EDUCATION SERIES

    LIST OF EXHIBITS AND FIGURES

    Exhibit 4.1 Content-by-Process Matrix

    Exhibit 4.2 Curriculum Map

    Exhibit 4.3 Planning for Learning and Assessment

    Exhibit 4.4 Selection Criteria Matrix

    Exhibit 4.5 Objective-by-Measures Matrix

    Exhibit 5.1 Planning Sheet for e-Portfolios

    Exhibit 11.1 Assessment Plan Outline

    Figure 11.1 Planning, Evaluation, and Improvement at IUPUI

    FOREWORD

    The pressures for colleges and universities to demonstrate educational effectiveness and use empirical evidence to make improvements have only intensified since 1999, when Assessment Essentials debuted as a comprehensive resource for those responsible for assessment in higher education. Now, with the prospect of the tenth reauthorization of the Higher Education Act, lawmakers are pushing to enact new ways to hold colleges and universities accountable for their costs and outcomes. How are colleges and universities navigating this period of heightened demands for accountability?

    Results from the 2009 and 2013 National Institute for Learning Outcomes Assessment (NILOA) survey of provosts suggest that most institutions are engaged in considerable assessment activity and are using multiple approaches to gather evidence in response to a variety of drivers, most notably accreditation. In 2009, the vast majority (92 percent) of all colleges and universities participating in the survey used at least one assessment approach at the institution level, and the average was three. In 2013, that average increased to five. Institutional use of assessment evidence for every category—including accreditation, external accountability, strategic planning, and institutional improvement—was also higher in 2013 than in 2009.

    Indeed, much has changed in the world of assessment since the first edition of Assessment Essentials was published, and results demonstrating the amount of assessment activity going on at most institutions suggest significant progress on the assessment agenda. Yet strong interest in learning more about how to effectively implement, organize, support, and sustain assessment activities continues to grow. In this second edition, Trudy Banta and Catherine Palomba make a substantial and timely contribution to enriching the knowledge base regarding how assessment is carried out and the best ways to organize and structure effective assessment work.

    This updated edition of Assessment Essentials serves as a good checkup of the robustness of assessment practice. The practical examples it presents are a source of reassurance that assessment in colleges and universities is healthy and that institutions are not ignoring important assessment behaviors—while also serving as an alarm system, raising questions about current conditions of assessment before they become chronic. The Contents list alone could function as an annual checklist for healthy assessment practice in colleges and universities. These respected general practitioners of assessment have done a thorough examination of the overall health of assessment.

    Banta and Palomba are exactly right to place considerable emphasis throughout this book on increasing the involvement of faculty in assessment. They make the case that effective assessment of student learning cannot occur without the participation of faculty in every step of the assessment process. Quite simply, faculty involvement is key to meaningful assessment, and the meaning of assessment is only as good as the scope and quality of faculty involvement. Notably, the authors also acknowledge the importance of assessing student learning and program effectiveness in student affairs and have added a new chapter to address this expansion in assessment work. In particular, the chapters dedicated to engaging faculty and students in assessment and evidence of focused efforts to assess learning in student affairs make a significant contribution to documenting effective work and discussing current needs in the field.

    It is heartening to see that ensuring the use of assessment results is a consistent theme across all chapters in Assessment Essentials. For almost a dozen years, I have had the good fortune to work with the National Survey of Student Engagement (NSSE) to further the assessment of student engagement in educationally effective practice and provide colleges and universities with diagnostic, actionable information to inform efforts to improve the quality of undergraduate education. While it is rewarding to see campuses adopt NSSE as an important component of their assessment programs, one that provides them with concise summaries of the strengths and shortcomings of their students’ experience, it is disappointing when campuses fail to share results with faculty, draw clear lessons from their assessment data, or formulate concrete improvement priorities. Motivated by the understanding that campuses are much better at gathering data than at using assessment to improve, Banta and Palomba dedicate much-needed attention to outlining approaches that campuses can use to encourage faculty and staff to take action on assessment information, and in several chapters they highlight examples of closing the assessment loop, or taking action and then determining the results of these actions. Quite simply, assessment has little to no value if results are not shared and used in meaningful ways. This book aims to make assessment results meaningful and used.

    Assessment has taken on greater importance across all areas and units of colleges and universities. The widespread use of national surveys, rubrics, and portfolios to assess learning; the emergence of assessment technologies, including an array of data management systems; and the increase in assessment specialists are all signals of serious investment in assessment. While assessment cynics still exist, the shallow compliance approach, undertaken simply to satisfy the expectations of accreditation, no longer holds sway; though previously common, it has been supplanted by an approach favoring assessment for learning and improvement. Now assessment is, more often than not, viewed as vital to improving educational quality and effectiveness. The first edition of Assessment Essentials helped guide this shift, and this new edition updates and extends the guidance. More important, Banta and Palomba demonstrate how faculty and staff have made meaning of assessment results to demonstrate educational effectiveness and inform institutional improvement.

    Improving quality in undergraduate education to foster learning and success for all students is imperative for US higher education. The challenge that this presents to institutional leaders, faculty, and staff demands meaningful assessment and concerted action to enhance educational effectiveness. The stock-taking activities undertaken by NILOA indicate that assessment has become a permanent fixture in the structure of colleges and universities, yet there is clearly more to do for assessment at these institutions to advance and mature. Moreover, assessment in higher education has grown increasingly more complicated as demands for evidence and expectations for ensuring student learning and institutional improvement have intensified. Faculty and staff with assessment responsibilities and students of assessment need practical resources and achievable assessment examples to advance their assessment skills and repertoire. Assessment Essentials is precisely the resource to guide the field in these demanding times.

    Jillian Kinzie

    associate director, Indiana University Center for Postsecondary Research, and NILOA senior scholar

    Bloomington, Indiana

    PREFACE

    When we were considering the possibility of undertaking the task of revising the 1999 version of Assessment Essentials, our Jossey-Bass editor, David Brightman, offered to ask three experienced assessment professionals to review the original and give us advice about approaching a revision. We are indebted to this anonymous trio because they gave us a perspective that opened our eyes to the distinctive character of the first edition and to the possibilities for continuing to make a unique contribution to the literature with a revision.

    Our reviewers told us that we had offered a practical guide to assessment practice, with principles, examples, and advice about decisions that must be made in the course of implementing an outcomes assessment initiative. In addition to the guidance for practice, we had provided history and context with perspective and vision. The reviewers asked us for a similar approach in our revision, and we have tried to fulfill that request.

    When we wrote the 1999 edition, there were few such resources to guide faculty and staff who needed a basic introduction to outcomes assessment, with connections to current references that would help even experienced assessment leaders acquire fresh examples and extended understanding. Now there are many books, several journals, numerous conferences, new organizations, and a panoply of examples of good practice available on institutions’ websites. This has made it extremely difficult to produce, as we were instructed by our publisher, a work of as few pages as the first edition!

    In addition to the principles, examples, and perspectives offered in the first edition, our three reviewers asked us to add information about these topics:

    The use of technology in assessment, including electronic student portfolios

    Capstone courses as assessment vehicles

    Assessment in student affairs

    The link between outcomes assessment and such valued institutional processes as strategic planning, curriculum revision, and comprehensive program review

    So much has changed since 1999 that we have replaced virtually all of our original examples, and of course this required new surrounding text. So while we have retained much of our original organization, most of the words are new. We have expanded our original focus on assessment of student learning to include institutional effectiveness. Accordingly, there are new chapters on assessment in student affairs and assessing institutional effectiveness.

    Audience

    The three reviewers of our first edition identified our audience as learners: faculty and staff engaged in leading outcomes assessment on their campuses, faculty and staff new to assessment and seeking a comprehensive overview, and faculty and staff teaching master’s and doctoral students in higher education and student affairs. We have attempted to provide some history, context, perspective, and vision for these populations.

    The Contents

    In Chapter 1 we introduce our broad definition of outcomes assessment, encompassing all institutional programs and services. We also include some history and perspective on assessment’s progress. Chapter 2 presents our assessment essentials envisioned in three phases: planning, implementing, and improving/sustaining assessment. The essential step of engaging faculty and students in outcomes assessment is the subject of Chapter 3.

    Chapters 4, 5, and 6 prepare readers to develop outcomes statements and make decisions about appropriate measures. Curriculum mapping, instrument validity and reliability, rubrics, and examples of direct and indirect measures are introduced.

    Chapters 7, 8, and 9 contain illustrations of the ways measures can be applied in assessing student learning in the major, general education, and student affairs, respectively. Chapter 10 addresses the essential processes of analyzing, reporting, and using assessment findings.

    Chapter 11 offers examples of comprehensive institution-wide assessment programs. In Chapter 12 we characterize the current assessment scene as one fraught with uncertainties, making it difficult to predict a clear way forward. Yet much has been achieved in the four-decade history of outcomes assessment in higher education, and we conclude with some of those achievements, as well as some continuing challenges.

    Acknowledgments

    We are indebted to the thousands of assessment professionals who have moved this field forward over the past four decades and have unselfishly shared their successes, failures, and continuing challenges in books, journals, and conference presentations, as well as on websites. They have provided the hundreds of examples we cite in illustrating our Assessment Essentials.

    Cindy Ahonen Cogswell, a doctoral candidate in Indiana University’s higher education program, has contributed her considerable research skills and editing expertise on our behalf. We could not have produced this book without the amazing clerical assistance of Shirley Yorger. And finally we appreciate the review of the final manuscript by Kenneth Gilliam, a master’s-level student in the student affairs program at Indiana University.

    Trudy W. Banta

    Indianapolis, Indiana

    Catherine A. Palomba

    Durham, North Carolina

    To Our Essentials:

    Ruth and Ky

    Logan, Holly, and T.J.

    Mady and Max

    Neil, Mary Frances, and Nick

    ABOUT THE AUTHORS

    Trudy W. Banta is professor of higher education and senior advisor to the chancellor for academic planning and evaluation at Indiana University-Purdue University Indianapolis (IUPUI). Considered a pioneer in outcomes assessment in higher education, Banta has received ten national awards for her work in this field. Before moving to IUPUI in 1992, she was the founding director of the Center for Assessment Research and Development at the University of Tennessee, Knoxville.

    Banta has developed and coordinated twenty-seven national conferences and fifteen international conferences on the topic of assessing quality in higher education. She has written or edited eighteen books on assessment, contributed thirty chapters to other published works, and written more than three hundred articles and reports. She is the founding editor of Assessment Update, a bimonthly periodical published since 1989 by Jossey-Bass.

    Catherine A. Palomba is director emeritus of assessment and institutional research at Ball State University. Prior to leading a nationally recognized assessment program on that campus, she taught economics at West Virginia University and Iowa State University. She also was a research analyst at the Center for Naval Analyses in Virginia. She earned her PhD in economics from Iowa State University and her master’s degree in economics from the University of Minnesota. Her assessment work includes two books, as well as several articles and presentations.

    CHAPTER 1

    DEFINING ASSESSMENT

    The concept of assessment resides in the eye of the beholder. It has many definitions, so it is essential that anyone who writes or speaks about assessment define the term at the outset.

    Some Definitions

    In common parlance, assessment as applied in education describes the measurement of what an individual knows and can do. Over the past three decades, the term outcomes assessment in higher education has come to imply aggregating individual measures for the purpose of discovering group strengths and weaknesses that can guide improvement actions.

    Some higher education scholars have focused their attention on the assessment of student learning. Linda Suskie, for instance, in the second edition of her book Assessing Student Learning: A Common Sense Guide (2009), tells us that for her, the term assessment refers to the assessment of student learning. In the first edition of this book, we also adopted the focus on student learning:

    Assessment is the systematic collection, review, and use of information about educational programs undertaken for the purpose of improving student learning and development. (Palomba and Banta, 1999, p. 4)

    The term assessment in higher education has also come to encompass the entire process of evaluating institutional effectiveness. Reflecting a career spent applying her background in educational psychology to program evaluation, the first author of this book uses this definition:

    Assessment is the process of providing credible evidence of

    resources

    implementation actions, and

    outcomes

    undertaken for the purpose of improving the effectiveness of

    instruction,

    programs, and

    services

    in higher education.

    In this book, the term assessment will certainly apply to student learning. But we also use it to describe the evaluation of academic programs, student support services such as advising, and even administrative services as we look at overall institutional effectiveness.

    We will describe the assessment of student learning as well as of instructional and curricular effectiveness in general education and major fields of study. We will consider methods for assessing student learning and program effectiveness in student services areas. We also will present approaches to assessing student learning and program and process effectiveness at the institutional level. In fact, the most meaningful assessment is related to institutional mission.

    Disciplinary accreditation is a form of assessing program effectiveness in a major field. Regional accreditation is a form of assessing institutional effectiveness. Both are powerful influences in motivating and guiding campus approaches to assessment. Federal, state, and trustee mandates for measures that demonstrate accountability may determine levels of performance funding and also shape campus assessment responses. We will discuss the many external factors that impel college faculty and administrators to undertake assessment activities.

    Our guiding principle in this book, however, will be to present approaches to assessment that are designed to help faculty and staff improve instruction, programs, and services, and thus student learning, continuously. Assessment for improvement can also be used to demonstrate accountability. Unfortunately, assessment undertaken primarily to comply with accountability mandates often does not result in campus improvements.

    Pioneering in Assessment

    In his book The Self-Regarding Institution (1984), Peter Ewell portrays the pioneering outcomes assessment work of three institutions. In the early 1970s, Sister Joel Reed, president of Alverno College, and Charles McClain, president of Northeast Missouri State University, determined that the assessment of student learning outcomes could be a powerful force in improving the effectiveness of their respective institutions. Alverno faculty surveyed their alumnae to find out what their graduates valued most in terms of their learning at Alverno (Loacker and Mentkowski, 1993). Survey findings shaped faculty development of eight abilities, including communication, analysis, and aesthetic responsiveness, that would become the foundation for curriculum and instruction at Alverno. In addition to work in their own discipline, Alverno faculty were asked to join cross-disciplinary groups of faculty specializing in one of the eight core abilities. Alverno’s (2011) “assessment as learning” approach has transformed that college, increasing its reputation among students and parents, its enrollment, and its visibility in the United States and abroad as a leader in conducting conscientious and mission-centric assessment.

    At Northeast Missouri State University, President McClain and his chief academic officer, Darrell Krueger, became early advocates of value-added assessment, giving tests of generic skills to their freshmen and seniors and tracking the gain scores. In addition, department faculty were strongly encouraged to give their seniors an appropriate nationally normed test in their major field if one existed. McClain famously asked his department chairs one persistent question: “Are we making a difference?” meaning, “How are our students doing on those tests we’re giving?” (Krueger, 1993). The early emphasis on test scores had the effect of raising the ability profile of Northeast Missouri’s entering students. Subsequently, the faculty and administration decided to pursue and gain approval from the state as Missouri’s public liberal arts institution, with the new name of Truman State University.

    The third pioneering institution profiled in Ewell’s book was the University of Tennessee, Knoxville (UTK). Whereas Alverno’s and Northeast Missouri’s assessment initiatives were internal in their origins and aimed at improving institutional effectiveness in accordance with institutional mission, UTK was confronted with the need to address an external mandate—a performance funding program instituted in 1979 by the Tennessee Higher Education Commission and the Tennessee state legislature. Initially UTK’s chancellor, Jack Reese, called the requirements to test freshmen and seniors in general education and seniors in their major field, conduct annual surveys of graduates, and accredit all accreditable programs an abridgement of academic freedom. His administrative intern at the time, Trudy Banta, thought the performance funding components looked like elements of her chosen field, program evaluation. She took advantage of a timely opportunity to write a proposal for a grant that the Kellogg Foundation would subsequently fund: Increasing the Use of Student Information in Decision-Making. For the first three years of addressing the external accountability mandate, faculty and administrators charted their own course on the performance funding measures on the basis of their Kellogg Project. While the amount of the Kellogg funding was tiny—just ten thousand dollars—for research-oriented faculty, the Kellogg grant gave them the opportunity to begin testing of students and questioning of graduates in their own way. Within five years, UTK was recognized by the National Council for Measurement in Education for outstanding practice in using measurement technology (Banta, 1984).

    By 1985 three additional states joined Tennessee in establishing performance funding programs for their public colleges and universities. Colorado, New Jersey, and Virginia issued far less prescriptive guidelines than Tennessee, however. The state higher education organizations and legislatures in the three new entries provided examples, but left it to their public institutions to select or design tests and other measures to demonstrate their accountability.

    In his 2009 paper for the newly formed National Institute for Learning Outcomes Assessment (NILOA), Ewell notes that two decades ago, the principal actors external to colleges and universities requiring attention to assessment were state governments. However, by the 1990s, mandates in several states were no longer being enforced because of budget constraints, and so attention turned to other goals, such as higher degree completion rates. Tennessee remained an exception in continuing to employ several learning outcomes measures in its long-established performance funding program.

    In 1988, Secretary of Education William Bennett issued an executive order requiring all federally approved accreditation organizations to include in their criteria for accreditation evidence of institutional outcomes (US Department of Education, 1988). During the next several years, the primary external stimulus for assessment moved from states to regional associations as they began to issue specific outcomes assessment directives for institutional accreditation, and discipline-specific bodies created such guidelines for program accreditation. The 1992 Amendments to the federal Higher Education Act (HEA) codified assessment obligations for accrediting agencies, and subsequent renewals of the HEA have continued to require accreditors to include standards specifying that student achievement and program outcomes be assessed. It has taken some accreditors longer than others to comply, however. Accreditors of health professions were in the vanguard, followed by social science professions like education, social work, and business. Engineering accreditors initiated ABET 2000 standards in 1997 (ABET, 2013). The first trial balloon for standards related to student learning outcomes in law was launched in 2013, for approval within three years (American Bar Association, 2013).

    By the time NILOA's first survey of chief academic officers was undertaken in 2009, accreditation—either disciplinary or regional, or both—was being cited as the most important reason for undertaking assessment. According to Ewell (2009), the shift in stimulus from state governments to regional accreditors had the important effect of increasing the emphasis on assessment to guide improvement in addition to demonstrating accountability. Advocating congruence of assessment and campus mission is another hallmark of the influence of accrediting agencies on outcomes assessment. A July 19, 2013, statement of Principles of Effective Assessment of Student Achievement endorsed by leaders of the six regional accrediting commissions and six national higher education associations begins, "[This] statement is intended to emphasize the need to assess effectively student achievement, and the importance of conducting such assessments in ways that are congruent with the institution's mission" (American Association of Community Colleges et al., 2013).

    The pendulum is swinging once again with respect to state interest in assessment. In spring 2010, the National Center for Higher Education Management Systems surveyed state higher education executive offices concerning policies, mandates, and requirements regarding student outcomes assessment (Zis, Boeke, and Ewell, 2010). According to study results, eight states, including Minnesota, Georgia, Tennessee, and West Virginia, were unusually active in assessment, some requiring common testing. Some states have systemwide requirements rather than state requirements. For many years, students at the campuses of the City University of New York were required to obtain a minimum score on a locally developed standardized examination in order to earn their degrees.

    More recently, declining global rankings, rising tuition and student debt, and poor prospects for employment of college graduates have alarmed state and federal decision makers (Miller, 2013). This has prompted an emphasis on productivity and efficiency in higher education, which is now seen as an engine of the economy and of the nation's competitiveness (Hazelkorn, 2013). Many state reporting systems are focusing more on graduation rates, job placement, and debt-to-earnings ratios than on measures of student learning. The Voluntary Framework of Accountability for community colleges contains measures not only of how many students obtain degrees, but of how many pass remedial courses, earn academic credit, transfer to another institution, and get a job (American Association of Community Colleges, 2013). President Barack Obama's administration has proposed a College Scorecard (White House, 2013). The emphasis on producing numbers of degrees and job-ready employees has alarmed educators. They fear that educational quality will suffer if too much weight in funding regimes is placed on simply graduating more students or turning out majors who are prepared for today's jobs rather than equipped with the ability to adapt to ever-changing workplace demands. As Margaret Miller puts it, "The completion goal is downright pernicious if it entails the minting of an increasingly worthless currency" (2013, p. 4). In addition, emphasizing college completion in a shorter time frame could encourage institutions to raise their entrance requirements to be sure they enroll students who are best prepared for college work, which could make a college education unattainable for those who need it most.

    As a result of all these external influences, as well as internal interests in obtaining guidance for continuous improvement of student learning and institutional effectiveness, increasing numbers of faculty have been called on to participate in assessment. Some assume leadership roles, serving on campuswide committees charged with planning the institution’s overall approach to assessment or designing a program to assess general education. A greater number are involved at the department level, helping to design and carry out assessment of programs or courses for majors. Attendance at national, regional, state, and discipline-specific assessment conferences attests to continued interest in sharing assessment information. In fact, for more than a decade, the number of participants at the annual Assessment Institute in Indianapolis, the oldest and largest assessment conference in the United States, has approached or exceeded one thousand. This book is designed to fill some of the continuing need for information about assessment.

    Quality Assurance: An International Perspective

    Interest in obtaining evidence of accountability from postsecondary institutions emerged as a worldwide phenomenon in the mid-1980s. In Europe, China, Australia, South Africa, and other countries, as in the United States, stakeholders in higher education have become increasingly concerned about the value received for resources invested, accommodating increasing numbers and diversity of students, covering cost increases with resources spread over an ever-growing array of services, developing a workforce with skills competitive in a global marketplace, and producing graduates with credentials that are transferable across cultural and national boundaries. Since postsecondary education is managed by the central government in most other countries, initial accountability-related actions were national in scope and focused on a process commonly referred to as quality assurance (QA). Self-study and external peer review, including site visits, were encouraged. Given the commanding role of central education ministries, voluntary associations of institutions like the regional accrediting agencies in the United States were virtually unknown in Europe and elsewhere prior to 1985. In that year, thirteen universities in the Netherlands formed the Association of Cooperating Universities and began a six-year cycle designed to conduct peer reviews in the same year of all universities offering degrees in a particular discipline (Vroeijenstijn, 1994). Few other countries have followed the Netherlands in developing associations of institutions to carry out QA procedures.

    Prior to 1990, higher education in most other countries had been a privilege for the economically and intellectually elite: only 10 to 15 percent of the college-going age population was enrolled. In the next three decades, this percentage increased to 50 percent and higher in Europe and beyond. When only the privileged and gifted, who were generally motivated to succeed and able to navigate postsecondary education with minimal guidance, constituted the population of university students and their tuition was paid by governments, student services beyond an admissions office and a student housing staff were not needed. But with massification, students with diverse needs necessitated the creation of a full range of student services and the institutional research function to support this infrastructure (Banta, 2013).

    Although an emphasis on student success has come only recently to Europe and elsewhere, European students are much more influential in steering QA initiatives than US students are in shaping outcomes assessment. At the University of Freiburg in Germany, undergraduates in psychology made their own list of learning outcomes and presented it to the psychology faculty (Banta, 2009). In the United Kingdom, the National Student Survey is administered at all universities. The results, reported publicly, have become an important factor in judging the quality of institutions. There is now a European Students' Union (ESU, 2014) that on occasion voices its concerns about quality assessment. The ESU is an umbrella organization for forty-seven national unions of students and has headquarters in Brussels, from which its officers can lobby the European Union.

    Policies of the European Union have created transparent borders and a common currency that combine to encourage workers to migrate from one country to another. By 1999, it was clear that some standardization was needed in order to make sense of university degrees produced by programs differing in content, length, and curricular structure. In that year education ministers representing twenty-nine European countries met in Italy and drew up the Bologna Declaration, which called for comparable degrees based on a common framework of degree levels, a credit-based system that would facilitate international transfer, and a cooperative system of quality assurance (Ewell, 2004). In 2000 the European Network for Quality Assurance in Higher Education (ENQA) was established to promote European cooperation in ensuring quality. Australia, New Zealand, and Hong Kong are among the many other countries that also have national QA organizations that conduct institutional audits and disseminate information about good practice.

    In 2000 ENQA initiated a process designed to develop field-specific reference points, including learning outcomes, that could guide students to a credential in a field of study. The process is called "tuning" to reflect its attempt "to steer a course between identical cross-national standards and institutional autonomy with respect to degree standards" (Ewell, 2004, p. 12). Tuning in Europe has inspired similar efforts in Latin America and Russia, as well as the Lumina Foundation's Tuning USA project involving several disciplines in Indiana, Kentucky, Minnesota, Texas, and Utah (Adelman et al., 2014, p. 38).

    Faculty development has been a hallmark of QA initiatives, particularly in Britain. But that has not kept British academics from voicing their resentment
