Analysing Student Feedback in Higher Education

Using Text-Mining to Interpret the Student Voice

  1. 222 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS & Android

About This Book

Analysing Student Feedback in Higher Education provides an in-depth analysis of 'mining' student feedback that goes beyond numerical measures of student satisfaction or engagement. By drawing on authentic student voices to understand the student experience, this book will inform strategies for quality improvement in higher education globally.

With contributions representing an international community of academics, educational developers, institutional data analysts and student-researchers, this book reflects on the role of computer-aided text analysis in gaining insight into student views. The chapters explore applications of text-mining in different forms, across varied institutional contexts, using a range of instruments and pursuing different institutional aims and objectives. Contributors share insights enabled by computer-aided analysis in distilling the student voice and turning large volumes of data into useful information and knowledge to inform action. Practical tips and core principles are explored to assist academic institutions embarking on the analysis of qualitative student feedback.

Written for a wide audience, Analysing Student Feedback in Higher Education provides those deciding how to approach analyses of large volumes of student narratives with the benefit of learning from the experiences of those who have already trodden this path. It enables academic developers, institutional researchers, academics and administrators to see how bringing text-mining to their institutions can help them better understand and use the student voice to improve practice.

Frequently asked questions

Simply head over to the account section in settings and click on “Cancel Subscription” - it’s as simple as that. After you cancel, your membership will stay active for the remainder of the time you’ve paid for. Learn more here.
At the moment all of our mobile-responsive ePub books are available to download via the app. Most of our PDFs are also available to download and we're working on making the final remaining ones downloadable now. Learn more here.
Both plans give you full access to the library and all of Perlego’s features. The only differences are the price and subscription period: With the annual plan you’ll save around 30% compared to 12 months on the monthly plan.
We are an online textbook subscription service, where you can get access to an entire online library for less than the price of a single book per month. With over 1 million books across 1000+ topics, we’ve got you covered! Learn more here.
Look out for the read-aloud symbol on your next book to see if you can listen to it. The read-aloud tool reads text aloud for you, highlighting the text as it is being read. You can pause it, speed it up and slow it down. Learn more here.
Analysing Student Feedback in Higher Education by Elena Zaitseva, Beatrice Tucker and Elizabeth Santhanam is available in PDF and ePUB formats.

Information

Publisher: Routledge
Year: 2021
ISBN: 9781000526998
Edition: 1

1 Discovering student experience: Beyond numbers through words

Elena Zaitseva, Elizabeth Santhanam, and Beatrice Tucker
DOI: 10.4324/9781003138785-1

Introduction

How do students experience higher education?
Are they satisfied with teaching, learning environment, resources and academic support?
Do they feel they belong to the university community or is it a lonely experience?
How prepared do they feel for the world of work?
As Klemenčič and Chirikov (2015) rightly observed, questions like these are of central importance for prospective and current students and their families, for senior administrators, academics and staff in student services, and for the government and quasi-governmental quality assurance bodies. Surveys are among the most extensively used methods of gathering students’ perceptions of teaching quality and their broader experience. They are wide-reaching, cost-effective, generally standardised and remain a debated but unfailing source of information about the quality and standards of teaching and learning. As Harvey pointed out, while students may have a certain bias which influences their responses, their “…perspective is advantageous for being much more immediate … and ensures richness and authenticity of the information provided” (Harvey, 2011, p. 23).
In the current culture of accountability, the ‘quantitative’ student voice is increasingly used as a measure of educational quality. While numerical trends and relationships remain attractive for many stakeholders, survey scales tend to exercise ‘agenda control’ by regulating what topics students are invited to contribute to when providing responses (Nelson, 2015). The ‘qualitative’ voice is more authentic: although comments might still be shaped around core survey questions, they also give students the opportunity to express their views on themes outside of a preconceived framework.
This chapter looks at the evolution of the survey-based student voice over the past several decades, focusing on how surveys, and the positioning of students in relation to the learning process, have changed over time. By highlighting the dangers of the prevailing culture and practice in which ‘numbers speak louder than words’, the chapter provides a strong argument for the need for systematic and timely analysis of student comments and institutional data triangulation. Challenges in analysing large qualitative datasets are outlined, followed by a discussion of recent developments in applications of text analytics in various national and institutional contexts. The chapter sets the scene for the subsequent contributions focusing on current practice.
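As a purely illustrative aside (not drawn from the book), the kind of computer-aided distillation discussed in later chapters can begin with something as simple as counting recurring terms in free-text comments. The Python sketch below assumes comments are already available as a plain list of strings; the example comments, the stop-word list and the use of simple term counts are invented for illustration, not a description of any contributor's method.

from collections import Counter
import re

# Invented example comments - not real survey data.
comments = [
    "Feedback on assignments was slow, but the lectures were engaging.",
    "More timely feedback on assignments would really help.",
    "Library resources were excellent; feedback could be faster.",
]

# A deliberately tiny stop-word list, for illustration only.
STOP_WORDS = {"the", "on", "was", "but", "were", "would", "could", "be", "more", "really"}

def term_frequencies(texts):
    """Count how often each word occurs across all comments, ignoring stop words."""
    counts = Counter()
    for text in texts:
        for word in re.findall(r"[a-z']+", text.lower()):
            if word not in STOP_WORDS:
                counts[word] += 1
    return counts

# The most frequent terms hint at recurring themes, e.g. 'feedback' and 'assignments'.
print(term_frequencies(comments).most_common(5))

Real institutional analyses would of course go much further (lemmatisation, topic modelling, sentiment analysis), but even a frequency count of this kind shows how large volumes of comments can be turned into a first, actionable summary.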

Student evaluation surveys: a century-long journey

The first formal student evaluations of teaching (SETs) emerged almost a century ago in the USA. Spencer and Flyr (1992, in Wachtel, 1998) reported that the first teacher rating scale was published as early as 1915. In the 1920s, the educational psychologist Herman H. Remmers at Purdue University and the learning psychologist Edwin R. Guthrie at the University of Washington developed comprehensive rating scales to provide university teachers with information about how their teaching was perceived by students and to help them make improvements where necessary (Stroebe, 2020). Despite Remmers and Guthrie’s efforts in promoting student evaluations, and Remmers’ profound research on the reliability of student ratings, relatively few universities were actively using student evaluations until the 1960s (Centra, 1993).
In most colleges and universities in North America, SETs began in the late 1960s and early 1970s (Murray, 2005). In the 1960s, the use of student evaluations was voluntary (Wachtel, 1998). A variety of survey instruments started to emerge: in 1966, for example, level-specific surveys for newly arriving students, first-year and final-year students originated as part of the USA Cooperative Institutional Research Programme (Krause, 2011). Student activism of the 1960s brought the university experience into focus, with many students complaining about irrelevant curricula and demanding a voice in governance. In some large universities in the USA, students developed their own evaluation ratings and “haphazardly published the results” (Centra, 1993, p. 50).
While we could not find a reliable reference to when open questioning (the opportunity to provide free-text answers to questions) was first introduced in universities, Centra mentioned that in 1964 he developed a student evaluation form at Michigan State University that contained an open-ended comment section – this part of the questionnaire was retained by the teaching staff (Centra, 1993). Therefore, we could speculate that the 1960s was probably the first decade when student comments were incorporated into SETs. Given a later reference to the lack of text analysis in a study attempting to categorise students’ written comments, based on “2,685 courses evaluated during the fall 1978 semester” (Braskamp, Ory, and Pieper, 1981, p. 66), we could conclude that by the 1970s free-text comments were widely included in university survey design.
The 1970s were called a ‘golden age’ of research into student evaluations, with growth in both the quality and thematic spread of studies (Centra, 1979). During this decade, many higher education institutions (HEIs) started to use the results of individual teacher evaluations for tenure and promotion decisions. Commercially viable instruments were developed in universities and research centres, giving HEIs access to banks of benchmarking and comparative data. Even in these early years, when individual academics in the USA and many other countries resisted student evaluations, dubbing them an intrusive process that undermined their academic freedom (Haskel, 1997; Wines and Lau, 2006), there was an expectation that academics would periodically review their teaching and course content using student feedback as part of reflective practice.
The 1980s witnessed an increased emphasis on the evaluation of teaching in universities and a proliferation of survey instruments (Marsh, 1982). This reflected system-level interest in the efficiency and effectiveness of higher education (HE), with surveys used to evidence accountability by the institutions themselves, government quality assurance agencies and discipline accreditation bodies (Knapper, 2001; Darwin, 2010). The growing diversity of the student population, and the diversification of HEIs and their educational offerings, brought attention to improving social mobility, better addressing students’ needs and enabling higher levels of attainment. This took place in parallel with the growth of neoliberalism and consumerist perspectives in HE. In addition to SET instruments providing diagnostic feedback to individual teachers about the effectiveness of their teaching, unit and course evaluation surveys started to emerge.
The quality assurance agenda of the early 1990s brought the accountability focus to a new level, with institutions in some countries being financially rewarded for evidence of good educational processes and practices (Neumann, 2000). Many HEIs moved to online administration and reporting of survey data. The focus of surveys began to shift from the evaluation of teaching to learning design and, later, to student learning outcomes and wider aspects of the university and graduate experience (Abrami, d’Apollonia, and Rosenfield, 2007; Tucker, 2015). The establishment of national systems of institutional accreditation, a focus on evidence-based decision-making, and requirements to provide information on performance indicators further expanded the use of student and graduate surveys in the 2000s (Chalmers, 2007; Shah and Nair, 2012). Validated instruments, such as the Australian Course Experience Questionnaire (CEQ), were adopted by many institutions worldwide, highlighting the need for reputable surveys allowing international benchmarking. An increasingly competitive global HE market expedited the development of the International Student Barometer to inform the choices of prospective international students. Harvey argued that the growing consumerism of HE resulted in increasingly sophisticated data collection processes (Harvey, 2011). The range of student data gathered expanded significantly during these years, with the sector becoming largely reliant on online survey tools.
In the last two decades, governments all over the world started to regard HE as an economic commodity, with an increased interest in linking the quality of education to employment outcomes. Many countries saw the emergence of graduate outcomes, pathways and destinations surveys (Schomburg and Teichler, 2006). The UK National Student Survey (NSS), launched in 2005, was driven by the desire of UK funding agencies to collect standardised data that could provide the public and the HE sector with comprehensive, comparable views of students about the quality of their education and inform the choices of future students (HEFCE, 2004). Third parties were employed to collect, analyse and report on national surveys, and their results became increasingly present in the public domain. National instruments covering postgraduate provision emerged in some countries: in the UK, for example, results of the Postgraduate Research Experience Survey (PRES) and Postgraduate Taught Experience Survey (PTES) remain confidential to each institution, while allowing participating institutions to benchmark their results against the aggregate results of all participants.
Emphasis on student-centred learning in HE, where students are seen as active participants in the learning process, facilitated the search for alternative metrics to measure quality. 2006 saw the launch of the National Survey of Student Engagement (NSSE) in the USA. The survey assesses effective teaching practices and student engagement in educationally purposeful activities – thus bringing institutional attention to the resources that need to be invested to facilitate productive academic experiences or effective instructional practices (Kuh, 2003). The NSSE was later replicated in many other countries, including Australia, Canada, China, Japan, Mexico, New Zealand, South Africa, Northern Ireland, and the UK (Tremblay, Lalancette, and Roseveare, 2012).
Today, the views of HEI applicants, current students and graduates inform quality assurance and improvement processes at multiple levels, and a multitude of instruments collect data on satisfaction, engagement, experience, inclusion, career readiness, and employment outcomes. The surveys are administered by HEIs, government agencies and professional bodies. Many institutions have rationalised their internal instruments, aligning them with national surveys and seeking improved measures of institutional quality via benchmarking. Anecdotal evidence suggests that organisations have responded to the status of high-profile surveys by changing their activity to reflect the focus of survey items or by changing the language they use in their communication with students. To address rapid changes taking place in HE, such as the development of new educational technologies, changes in student characteristics and expectations, new access and participation targets, and global forces (e.g., the recent COVID-19 pandemic), evaluation measures are being regularly updated with questions repur...

Table of contents

  1. Cover
  2. Half Title
  3. Title Page
  4. Copyright Page
  5. Contents
  6. Preface
  7. Contributor bios
  8. 1 Discovering student experience: Beyond numbers through words
  9. Part I Exploring collective student voice: approaches, tools and institutional insights
  10. Part II Listening to diversity of student voices
  11. Part III Looking across the student journey
  12. Part IV Informing actionable insights and ethical approaches to decision-making
  13. Index