
Measuring and Enhancing the Student Experience
Ebook · 358 pages · 3 hours


About this ebook

Measuring and Enhancing the Student Experience provides insights on how student experience measures can be used to inform improvements at the institutional, course, unit-of-study and teacher levels. The book is based on a decade of research and practitioner views on ways to enhance the design, conduct, analysis and reporting of student feedback data, and on closing the loop on that feedback. While the book is largely based on Australian case studies, it offers lessons for other countries where student experience measures are used in national and institutional quality assurance. Consisting of 13 chapters, the book covers a wide range of topics, including the role and purpose of student feedback, the use of student feedback in staff performance reviews, staff and student engagement, a student feedback and experience framework, the first-year experience, the use of qualitative data, engaging transnational students in feedback, closing the loop on feedback, student engagement in national quality assurance, the use of learning analytics and the future of the student experience.

Mahsood Shah is an Associate Professor and Deputy Dean (Learning and Teaching) with the School of Business and Law at CQUniversity, Australia. In this role Mahsood is responsible for enhancing the academic quality and standard of courses. Mahsood is also responsible for learning and teaching strategy, governance, effective implementation of policies, and enhancement of learning and teaching outcomes across all campuses. In providing leadership for learning and teaching, Mahsood works with key academic leaders across all campuses to improve the learning and teaching outcomes of courses delivered in various modes, including face-to-face and online. At CQUniversity, he provides leadership in national and international accreditation of academic courses.

Mahsood is also an active researcher. His areas of research include quality in higher education, measurement and enhancement of student experience, student retention and attrition, student engagement in quality assurance, international higher education, widening participation and private higher education.

Chenicheri Sid Nair is the incoming Executive Director, Tertiary Education Commission (TEC), Mauritius. Prior to joining TEC, he was Professor, Higher Education Development at the University of Western Australia (UWA), Perth, where his work encompassed the improvement of the institution's teaching and learning. Before this appointment to UWA, he was Quality Adviser (Research and Evaluation) in the Centre for Higher Education Quality (CHEQ) at Monash University, Australia. He has extensive expertise in the area of quality development and evaluation, and he also has considerable editorial experience. Currently, he is Associate Editor of the International Journal of Quality Assurance in Engineering and Technology Education (IJQAETE). He was also a Managing Editor of the Electronic Journal of Science Education (EJSE). Professor Nair is also an international consultant in a number of countries in quality, student voice and evaluations.

  • Provides both practical experience and research findings
  • Presents a diverse range of topics, including broader student experience issues, analysis of Australian government policies on student experience, the changing context of student evaluations, nonresponse to surveys, staff and student engagement, ideal frameworks for student feedback, and more
  • Contains data taken from the unique Australian experience with changing government policies and reforms relevant to the Asia-Pacific region
Language: English
Release date: Oct 24, 2016
ISBN: 9780081010044


    Book preview


    Measuring and Enhancing the Student Experience

    Mahsood Shah

    Chenicheri Sid Nair

    John T.E. Richardson

    Table of Contents

    Cover image

    Title page

    Copyright

    About the Authors

    Preface

    Chapter 1. Measuring the Student Experience: For Whom and For What Purpose?

    1.1. Introduction

    1.2. Student Feedback: For Whom and for What Purpose?

    1.3. Conclusion

    Chapter 2. Student Feedback: The Loophole in Government Policy

    2.1. Introduction

    2.2. Current Problem

    2.3. Strategic Solution

    2.4. Conclusion

    Chapter 3. Student Feedback: Shifting Focus From Evaluations to Staff Performance Reviews

    3.1. Introduction

    3.2. Drivers of Change

    3.3. Performance-Based Funding and New Accountability

    3.4. Political Imperatives or Institutional Improvement?

    3.5. Case Study of an Australian University

    3.6. Limitations of the Strategy Deployed

    3.7. Conclusion and Future Implications

    Chapter 4. Why Should I Complete a Survey? Non-responses With Student Surveys

    4.1. Introduction

    4.2. Does Student Participation Matter?

    4.3. Methodology

    4.4. Findings

    4.5. Some Notable Changes

    4.6. Conclusion

    Chapter 5. Engaging Students and Staff in Feedback and Optimising Response Rates

    5.1. Introduction

    5.2. Response Rates

    5.3. Incentives

    5.4. Survey Fatigue

    5.5. Communication

    5.6. Acknowledgement

    5.7. Ownership

    5.8. Acceptable Response Rates

    5.9. A Useful Recipe

    5.10. Conclusion

    Chapter 6. A Student Feedback and Experience Framework

    6.1. Introduction

    6.2. Need for a Framework

    6.3. Current Shortcomings

    6.4. A Possible Framework for Surveys and Improvements

    6.5. Conclusion

    Chapter 7. Measuring the Expectations and Experience of First-Year Students

    7.1. Introduction

    7.2. Rationale for Surveying First-Year Students

    7.3. Methodology

    7.4. Overall Findings

    7.5. Subgroup Analysis

    7.6. Conclusion and Future Implications

    Chapter 8. Accessing Student Voice: Using Qualitative Student Feedback

    8.1. Introduction

    8.2. Methods

    8.3. Findings

    8.4. Discussion

    8.5. Conclusion

    Chapter 9. Engaging Transnational Students in Quality Assurance and Enhancement

    9.1. Australian Transnational Education

    9.2. Monitoring the Transnational Student Experience: Past Practices

    9.3. Current Policy Directions

    9.4. Transnational Student Experience

    9.5. Conclusion

    Chapter 10. Closing the Loop: An Essential Part of Student Evaluations

    10.1. Introduction

    10.2. Closing the Loop

    10.3. Are There Negative Implications for Not Closing the Loop?

    10.4. Strategies to Implement Closing the Loop

    10.5. Conclusion

    Chapter 11. Student Engagement in National Quality Assurance

    11.1. Introduction

    11.2. Student Engagement

    11.3. From External Quality Agency to National Regulator

    11.4. Prominence of the Student Voice

    11.5. Conclusion

    Chapter 12. Using Learning Analytics to Assess Student Engagement and Experience

    12.1. Introduction

    12.2. Learning Analytics

    12.3. Learning Analytics and Its Stakeholders

    12.4. Learning Analytics Elements, Process, Tools and Resources

    12.5. Use of Learning Analytics

    12.6. Indicators Used as Part of Learning Analytics

    12.7. Discussion

    Chapter 13. Measurement and Enhancement of Student Experience: What Next?

    13.1. Introduction

    13.2. Monitoring the Student Experience to Date

    13.3. Where Are We Heading in Monitoring the Student Experience?

    13.4. Administration and Type of Surveys

    13.5. Use of Data

    13.6. Performance Reporting

    13.7. Professional Development

    13.8. Research

    13.9. Concluding Remarks

    Index

    Copyright

    Chandos Publishing is an imprint of Elsevier

    50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States

    The Boulevard, Langford Lane, Kidlington, OX5 1GB, United Kingdom

    Copyright © 2017 Elsevier Ltd. All rights reserved.

    No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher’s permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions.

    This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

    Notices

    Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary.

    Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.

    To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.

    Library of Congress Cataloging-in-Publication Data

    A catalog record for this book is available from the Library of Congress

    British Library Cataloguing-in-Publication Data

    A catalogue record for this book is available from the British Library

    ISBN: 978-0-08-100920-8 (print)

    ISBN: 978-0-08-101004-4 (online)

    For information on all Chandos Publishing publications visit our website at https://www.elsevier.com/

    Publisher: Glyn Jones

    Acquisition Editor: George Knott

    Editorial Project Manager: Harriet Clayton

    Production Project Manager: Debasish Ghosh

    Designer: Matthew Clayton

    Typeset by TNQ Books and Journals

    About the Authors

    Mahsood Shah is an associate professor and deputy dean (Learning and Teaching) at the School of Business and Law at CQUniversity, Australia. In this role Mahsood is responsible for enhancing the academic quality and standard of courses delivered by the business school. Mahsood is also responsible for strategy, governance, effective implementation of policies and enhancement of learning and teaching outcomes across all campuses. In providing leadership for learning and teaching, Mahsood works with key academic leaders across all campuses to monitor the quality of courses delivered in various modes, including face-to-face, online and partnership. At CQUniversity, he provides leadership in national and international accreditation of academic courses. Mahsood is also an active researcher. His areas of research include quality in higher education, measurement and enhancement of student experience, student retention and attrition, student engagement in quality assurance, international higher education, widening participation and private higher education. Before joining CQUniversity, Mahsood led research at the school level at the University of Newcastle, Australia. Mahsood has also led strategic planning and quality assurance in three other Australian universities.

    In addition to working in universities, Mahsood has worked closely with more than 15 private, for-profit higher-education providers on projects related to quality assurance, compliance, accreditation and enhancement of learning and teaching. Mahsood has significant experience in external quality assurance. He is a Tertiary Education Quality and Standards Agency expert and also an auditor with various international external quality agencies.

    Mahsood is the founding editor of the journal International Studies in Widening Participation.

    Professor Chenicheri Sid Nair is the incoming Executive Director, Tertiary Education Commission (TEC), Mauritius. Prior to joining TEC he was a professor of Higher Education Development at the University of Western Australia. His primary areas of work are in the quality of teaching and learning. Before this, he was the interim director and quality advisor (Evaluations and Research) at the Centre for Higher Education Quality at Monash University, Australia. In the role of quality advisor he headed the evaluation unit at Monash University, where he restructured the university’s evaluation framework. The approach to evaluations at Monash was noted in the first round of the Australian Universities Quality Agency audits and is part of its good practice database.

    John T.E. Richardson trained as an experimental psychologist and taught psychology at Brunel University for 26 years. After taking one of the first teaching qualifications in higher education in the United Kingdom (1979–1982), however, his research interests turned to student learning in higher education. His work during the latter part of the 1980s and the first half of the 1990s focussed on variations in student learning and attainment related to age, gender and culture. Since then he has focussed on factors affecting learning and attainment among students with and without disabilities.

    In 2001 he was appointed to a new chair in student learning and assessment at the UK Open University, and this enabled him to establish a programme of research on the relationship between students’ perceptions of the quality of their courses and the approaches to studying that they adopt during those courses. In 2002–2003, John contributed to a report to the Higher Education Funding Council for England (HEFCE) on Collecting and using student feedback on quality and standards of learning and teaching in higher education. This led directly to his membership of a team based at the Open University that carried out pilot studies for the HEFCE in 2003–2004 towards the development of the National Student Survey.

    Since then, John has been investigating differences in degree attainment (nationally and at the Open University) related to gender and ethnicity, and in 2007 he was asked to write a review of the research literature on this topic for the UK Higher Education Academy (HEA). In 2007 he was also a member of a team that provided a report for the HEA on conceptions of excellence in teaching, research and scholarship, and in 2008 he was a member of a team that provided a report for the HEFCE on university league tables (rankings) and their impact on higher-education institutions.

    John is a fellow of the British Psychological Society, the Society for Research into Higher Education and the UK Academy of Social Sciences. He is the associate editor of the journal Studies in Higher Education.

    Preface

    The measurement and enhancement of student experience are key elements of quality assurance frameworks in many countries. Higher-education institutions worldwide use student feedback to assess the quality of teaching, learning and various academic and non-academic support services. Student feedback was for many years part of institutional quality assurance, which enabled the assessment of courses, teaching and various support services and facilities. Recently, governments have increasingly shown a vested interest in monitoring institutional quality. The quality of student experience is now part of both internal and external quality reviews.

    Higher-education institutions in some countries are using student feedback and other institutional performance data to monitor trends and the academic outcomes of students. Some institutions make significant investments in information technology–enabled tools such as business intelligence software to manage large data sets and reporting at the institutional, faculty, course and individual unit/subject levels. Such tools enable benchmarking of trend data with other institutions and, within the institution, across faculties, campuses and modes of education delivery. The use of such tools has also facilitated the centralisation of institutional data such as enrolments, academic outcomes, student experience, graduate outcomes, finances, research outcomes, staffing and other performance indicators. The increased use of technology in teaching is enabling institutions to make use of learning analytics, gain insights about student engagement and predict students’ academic success.

    Governments, on the other hand, have also introduced policies to use student feedback to assess quality outcomes. In some countries policies have been introduced to use standard national survey instruments to measure student experience. Student experience data are now used in rankings and league tables, which are publicly available for students to make informed choices on where to study. In countries such as the United Kingdom and Australia, governments have established websites to publish institutional performance data, including student experience results, for the general public. External quality agencies are now established by the governments of many countries to assess institutional quality assurance and monitor standards. Such agencies also examine institutional approaches in relation to the collection of student feedback, analysis, reporting and accountability for improvements. Some countries place increased emphasis on partnership between institutions and various student unions. Similarly, professional accrediting bodies use student experience and other academic outcome measures as part of accreditation and re-accreditation.

    Student feedback is also having an effect on individual staff in higher-education institutions. A range of factors are contributing to this, including government policies to monitor student experience, using student feedback results in rankings and league tables, linking student feedback results in performance funding and using student feedback results in assessing and rewarding academic staff. Many institutions set targets as part of the planning and budgeting process at the institutional and faculty levels. Such targets are monitored on an annual basis, and data are reported to faculties, schools and administrative units for action. Academic champions such as associate deans (academic) or similar roles are held accountable to respond in areas needing improvement. Individuals are asked to respond to poorly performing courses, courses with low response rates and courses with a consistent downward trend in performance related to student experience and other academic outcome measures.

    Measuring and Enhancing the Student Experience brings together the contemporary issues around measuring and evaluating the student experience. It is based mainly on the Australian experience and is relevant to new academics and researchers who are involved in assessing the quality of teaching using student feedback. The book is also relevant to individuals who manage or coordinate student feedback in different kinds of education institutions. All three authors have significant experience in both research and practice in the measurement and enhancement of student experience. Though many of the cases presented in the book are based on the Australian experience, the findings are relevant elsewhere and in particular to emerging nations that are in the process of establishing quality assurance frameworks.

    Mahsood Shah

    Chenicheri Sid Nair

    John T.E. Richardson

    Chapter 1

    Measuring the Student Experience

    For Whom and For What Purpose?

    Abstract

    The measurement of student experience was traditionally a research-based activity in institutions. It then became a core activity as part of institutional quality assurance for learning and teaching. Student feedback was collected and reported to various academic committees for discussion, and in some cases actions were taken for improvement. Two decades ago, student experience measures were not part of institutional strategy, and neither performance measures nor targets with accountability for results were set at the university or faculty level. Other stakeholders such as governments, students and external quality agencies have recently shown a vested interest in monitoring the quality of the student experience. This chapter outlines how student experience measurement has shifted from an internal institutional assessment tool to one that is increasingly used and assessed by governments and external quality agencies. It also discusses how student feedback is changing focus from an internal assessment of quality to one of interest to a wider set of stakeholders.

    Keywords

    Performance measures; Student experience; Student feedback; Student surveys; Teacher evaluations; Unit/subject evaluations

    1.1. Introduction

    Universities have a long history of measuring the students’ experiences with the quality of teaching, learning and various kinds of support services (Centra, 1979; Goldschmid, 1978; McKeachie & Lin, 1975; Rich, 1976). End-of-semester student evaluations are used in many institutions, and many academics know when it is time for evaluations to be collected. Institutions have for many years used various kinds of student survey data to improve teaching quality and other support. Some institutions use student feedback as part of standard practice; however, the extent to which the data is used by individual teachers to revise curricula, assessments, teaching methods and other supports is somewhat patchy – and in some cases questionable. Measuring student experience using student surveys does not necessarily enhance courses, assessments and pedagogy. Other factors come into play to ensure the effective use of data to inform improvements: the reliability and validity of the survey tool, the response sample, the way data are analysed and reported, the triangulation of student survey data with other academic outcome measures, the timing when reported data feed into annual faculty planning cycles, accountability for improvement, the extent to which students are engaged and informed about improvements, the processes in place to encourage individuals to use student feedback data in improving practice, how excellence and improvement are rewarded and, finally, how the progress of actionable improvement is tracked to ensure a positive impact. Some other factors also include the use of qualitative data that may be collected by staff-student committees, the accountability of senior managers in improvement initiatives and the partnership between universities and student unions in implementing improvements.

    Significant changes relating to the measurement and enhancement of the student experience have occurred in the past 20 years. One of the key changes is the shift from internal control of survey data and reporting to the use of standard instruments developed by or on behalf of a national government, with results monitored by the government and its agencies. Student survey data from the United Kingdom and Australia are now available on websites for the general public to access and compare across individual institutions and disciplines of study. This shift from internal control to government scrutiny has resulted in the use of student survey data in rankings and league tables. A number of factors played a key role in this shift: the global growth of higher education in terms of the numbers of students and institutions; the emergence of new kinds of institutions; the internationalisation of higher education, including student mobility; changes in the public funding of universities; governments wanting to improve the reputation of higher education; an increased focus on quality assurance and outcomes; and the emergence of new models of education delivery, such as online and international collaboration between institutions.

    The increased number of both local and international students in many countries has prompted governments to revisit policies and frameworks related to quality assurance. In many countries student experience indicators are one of the many mechanisms used to ensure quality. Many critics have argued that student feedback or ‘happiness indicators’ cannot be used to assess educational quality (eg, Furedi, 2012); rather, they provide only a ‘health check’ on student views of courses, teaching and other services. High satisfaction is not necessarily an indicator of student achievement or high academic outcomes, nor does it predict students’ academic success (Furedi, 2012; Marsh, 2007; Shah, Lewis, & Fitzgerald, 2011). For example, a rating of 4.5 out of 5 on an evaluation of teaching quality is not a predictor of students’ academic success. Governments have also recently established various positions, such as an ombudsman or independent adjudicator, to protect the welfare of students and handle complaints. In some countries governments have also introduced fees paid by every student to support the services provided by student unions to protect students’ rights and welfare. The increased numbers of students and their complaints have prompted universities to establish their own complaint or ombudsman offices to manage internal complaints before they are referred to external agencies.

    Historically, institutional survey results have been communicated internally with limited or extensive
