Customer Development of Effective Performance Indicators in Local and State Level Public Administration

Rebekah Schulz, Andrew Sense, Matthew Pepper

About This Book

As communities demand more transparency and involvement in community affairs, local public administrators and government authorities are seeking new ways to meet those needs. A fundamental way to bridge community needs and an authority's actions is the customer-oriented development of Performance Indicators (PIs). However, this is often not a core focus and, as a result, performance indicators are frequently output rather than impact focused, and can therefore lack relevance and utility.
Addressing this gap in academic and practical knowledge, Customer Development of Effective Performance Indicators in Local and State Level Public Administration presents a structured process that enables public organisations and their communities to jointly develop performance indicators for a public organisation's operations, enabling communities to determine key performance indicators that are both highly relevant and contextually useful.
Grounded in quality management principles, the book encourages community members to participate in practical co-production, promotes mutual learning and joint ownership, fosters relationship building between diverse customer groups, and inspires open conversations regarding local government operations.
This book provides groundbreaking insights for public administrators at all levels, as well as community leaders and scholars of business, public administration, and social responsibility.

Information

Year: 2021
ISBN: 9781839821509
Pages: 168
Language: English

Chapter 1

Introduction

1.1. Chapter Introduction

This chapter provides key foundational perspectives on the research underpinning the contents of this practitioner guidebook. It outlines our quality-oriented approach to performance indicator development and articulates why having customers involved in such a process is so important. It also provides some academic research commentary on performance indicators in public administration (PA) and explains what makes our new approach particularly useful in this PA context. Furthermore, it articulates in broad conceptual and practical terms our full approach to this issue, bearing in mind that the following chapters provide detailed treatment of each aspect. Towards the end of the chapter, it offers some limited commentary on the methodology and context of the research which underpins this book, which should help readers orientate themselves and more readily appreciate the numerous real-life practical examples provided in the following chapters. Finally, it briefly describes how each of the following chapters is organised. Thus, the information contained in this chapter serves as the keystone on which the other chapters build.

1.2. Why Take a Quality-oriented Perspective to Performance Indicator Development in the Public Administration (PA) Context?

Whilst Chapters 2 and 3 explore our quality-oriented approach to performance indicator development in much more detail, it is nonetheless important at this point to briefly articulate why we considered a quality approach appropriate for supporting performance indicator development in the PA context.
First, the PA context often involves a bewildering array of diverse customers and community stakeholders and delivers a diverse range of services to communities. This contrasts with private firms, which typically offer a limited range of products or services, often to specific market segments. In that frame, the successful management of public organisations is arguably more challenging and complex; such organisations are more exposed to public scrutiny, and their decision-making can be imbued with political considerations that may not necessarily align with operational concerns. Thus, having appropriate performance indicators in this context that effectively measure performance in accordance with the interests of various stakeholders seems particularly important, and such indicators may sensibly serve as the basis for establishing a 'common ground' for further improvement in those services.
Many readers will be aware of, or indeed have been involved in, Total Quality Management initiatives in their organisations over the past decades. The general principles of quality management, namely a focus on customers and stakeholders, employee engagement and teamwork, and continuous improvement and learning, suggest that a quality perspective may also have merit in addressing performance indicator development and its current challenges in the PA context. Moreover, the quality management literature provides some formative frameworks that may assist in performance indicator development and that, consistent with those principles, necessarily involve customers in those processes and in the associated learning. Thereby, customers can have varying degrees of influence in providing inputs and in co-constructing outputs. Compared to the simpler, more prescriptive or deterministic approaches to performance indicator development seen in many PA organisations, utilising a quality approach appears to be a more radical, more complex (due to its constructivist foundation), and more context-adaptive alternative. Nonetheless, if one regards effective, relevant (to the operation and to the context), supported, and understood performance indicators that underpin continuous improvement initiatives as important, then a quality approach may be just what is needed to better deal with this issue.
The quality framework of particular interest to us was Quality Function Deployment (QFD) and its House of Quality (HoQ). QFD is a component of Total Quality Management and essentially seeks to account for both product quality and process efficiency. QFD is most often used for design and manufacturing purposes and links customer requirements to the relevant technical requirements (TRs) used to manufacture or deliver a product or service (Evans, 2008). The resulting HoQ framework aligns the stakeholder requirements (the voice of the customer (VoC)) with TRs (the 'how', or characteristics, of the product or service being provided) (Evans & Lindsay, 2011). The terms external stakeholder and customer are quite often used interchangeably in the quality literature, reflecting that the more typical use of the HoQ tends to consider customers as primarily external to the organisation. However, in the study which underpins the contents of this book, both internal and external stakeholders/customers were engaged, and thus the VoC here reflects both groups' perspectives. Both groups participated in the data capture process and in seeking to align the VoCs with relevant indicators to measure performance. We considered this highly appropriate given that the customer perspective is an integral component of [good] performance measurement (Goetsch & Davis, 2013; Tucker & Pitt, 2009).
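To make the HoQ mechanics concrete, the sketch below shows the standard QFD calculation in which each TR's priority is obtained by multiplying the customer-assigned importance of each requirement by the strength of its relationship to that TR and summing the results. It is a minimal illustration only: the requirements, importance weights, and relationship scores are hypothetical and are not drawn from the study described in this book.

```python
# Minimal, illustrative House of Quality (HoQ) priority calculation.
# All requirements and weights below are hypothetical examples, not data
# from the study described in this book.

# Voice of the Customer (VoC): customer requirements with importance weights (1-5).
voc = {
    "Easy to reach by public transport": 5,
    "Affordable entry fees": 4,
    "Clean and safe facilities": 3,
}

# Technical requirements (TRs): the 'how' of the service being provided.
trs = ["Shuttle bus frequency", "Fee subsidy budget", "Cleaning roster coverage"]

# Relationship matrix: strength of each VoC-TR link on the common 9/3/1/0 scale
# (9 = strong, 3 = moderate, 1 = weak, 0 = none). Columns follow the order of `trs`.
relationships = {
    "Easy to reach by public transport": [9, 0, 1],
    "Affordable entry fees":             [1, 9, 0],
    "Clean and safe facilities":         [0, 1, 9],
}

# TR priority = sum over customer requirements of (importance x relationship strength).
priorities = {}
for j, tr in enumerate(trs):
    priorities[tr] = sum(
        importance * relationships[req][j] for req, importance in voc.items()
    )

# Rank TRs so the most customer-relevant technical requirements surface first.
for tr, score in sorted(priorities.items(), key=lambda item: item[1], reverse=True):
    print(f"{tr}: {score}")
```

In a full HoQ this weighted-sum core is accompanied by competitive benchmarking and a correlation 'roof', but it is this matrix that links the VoC to the TRs.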
The VoC is core to the effective implementation of any HoQ framework. However, it is perhaps surprising to find that very few documented processes exist for the productive capture of the multifarious VoCs (Griffin & Hauser, 1993). For the customer requirements within the QFD approach to be meaningful and reflective of the community they represent, the processes used to source the data must be well designed and effective, a particular challenge that was actively pursued in this research and is detailed and illustrated in the following chapters. Further, 'collective involvement' or stakeholder engagement in the quality process has been found to raise performance (Pimentel & Major, 2016, p. 1007) and to improve the likelihood of performance indicators being appropriately used by the community. Such stakeholder engagement processes also support learning and organisational improvement (Marr, 2009). The engagement process and the sampling of community and practitioner representation are, therefore, critical to the success of the HoQ.
In sum, in our pursuit of systematic and structured approaches towards performance indicator development that would best 'fit' and reflect the complex service context of PA, and ultimately incorporate diverse customer needs and community involvement, we concluded that an enhanced version of a HoQ framework, specifically derived to focus on performance indicator development, would indeed facilitate those outcomes.

1.3. Why Have Customers Collaborate in the Development of Performance Indicators?

At the risk of repeating some comments made previously, it is simply not sufficient or sensible for remote, higher-level government authorities to mandate performance indicators for PA organisations, or to impose performance indicators on communities in ways that ignore their particular needs and aspirations. Moreover, any selection of performance indicators based simply on best practice observations or other detached, formulaic approaches does not necessarily reflect local community expectations, nor does it encourage ownership of those performance indicators by the agency involved or the communities it serves. Many practitioner readers will readily appreciate these assertions and, in turn, conclude that performance indicators which reflect customers' needs and encourage customers to engage with them are best constructed with customers' active input.
A further compelling reason to involve customers in performance indicator development is that communities are demanding more transparency and more involvement in managing their community affairs, and public administrators are seeking ways to meet those desires. This issue of participatory democracy is difficult and challenging, particularly for local government authorities, who, due to their purpose and scope, are largely at the forefront of community engagement endeavours. These difficulties and challenges essentially revolve around what to focus on, how technically complex the issues are, how to initiate and enact processes that genuinely and productively involve community representation, and the public exposition and examination of what may be viewed as proprietary information or knowledge held by the authority. The approach presented and expounded in this book is one way in which public authorities can actively engage in highly productive partnerships with community groups and representatives. Therein, not only do community customers participate in very practical ways, but the process also promotes mutual learning and joint ownership through the co-production of outcomes, fosters relationship building between diverse customer groups, and inspires improvement conversations about the operation being assessed. The confidence generated in the measures developed and deployed also ultimately underpins the strategic continuous improvement and fiscal performance of the target operations.
In sum, having customers collaborate on the development of performance indicators enables a PA organisation to effectively address and incorporate their needs, encourages understanding of and direct ownership of the performance indicators, and helps address any goals of facilitating participatory democracy within local communities.

1.4. Performance Indicators and Measuring Performance in PA

A quick comment or two is warranted on the terms 'performance indicators' and 'performance measures', since some clarification about them is necessary for this book. In the literature concerning performance management, the terms 'measures' and 'indicators' are often used interchangeably with little apparent reference to any relationship between them. Indeed, a review of the performance measurement systems literature by Choong (2014) showed that there is no consensus as to their meanings. At first glance they may be interpreted as the same entity, that is, a measure is simply an indicator, but that does not represent the complete picture. It is our position in this book that an 'indicator' is the broader, generic assessment criterion (or gauge) used to evaluate an operation's performance (and can include both quantitative and qualitative forms), while a 'measure' (the degree) is a subset or element of that indicator. That being the case, an indicator can have one or multiple measures attached to it, and individual contexts therefore have a choice of how many and which measures to deploy for each indicator. For example, a performance indicator of 'access to transport options' for a local theatre district may be assigned the performance measures considered most appropriate in that context, for example, 'number of kilometres from the train station' and/or 'number of free parking spaces within a 1 km radius of the district'. In that frame, indicators can be considered the motherships which spawn their performance measures. Our focus in this book is on the development of performance indicators.
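As a simple illustration of the indicator/measure relationship just described, the sketch below models one indicator carrying multiple context-chosen measures, using the theatre district example from the text. The class names and data structures are hypothetical, introduced only for illustration and not part of the book's framework.

```python
# Illustrative sketch of the indicator/measure relationship described above.
# Class names and sample values are hypothetical, introduced only for illustration.
from dataclasses import dataclass, field
from typing import List


@dataclass
class PerformanceMeasure:
    """A specific, quantifiable element attached to an indicator (the 'degree')."""
    description: str
    unit: str


@dataclass
class PerformanceIndicator:
    """The broader assessment criterion (the 'gauge'); it can carry one or many measures."""
    name: str
    measures: List[PerformanceMeasure] = field(default_factory=list)


# The theatre district example from the text: one indicator, two context-chosen measures.
transport_access = PerformanceIndicator(
    name="Access to transport options",
    measures=[
        PerformanceMeasure("Distance from the train station", "km"),
        PerformanceMeasure("Free parking spaces within a 1 km radius", "count"),
    ],
)

print(f"{transport_access.name}: {len(transport_access.measures)} measures attached")
```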
The following discussion of the current literature in this field tends to highlight the term 'measures' rather than 'indicators', but it nonetheless helps indicate the state of the field of measuring performance in public organisations. Globally, government agencies have adopted performance measures to review performance, with varying degrees of uptake and success (Hall & Handley, 2011; Marr, 2008b). This increase in the use of performance information and data in PA has been well documented (e.g. see Cepiku, Hinna, Scarozza, & Savignon, 2017; Hood, 2012; James & Moseley, 2014). A number of factors influence the effective use of performance measures in these contexts, including: inadequate staff training; the inability of existing information systems to cost-effectively provide timely, reliable, and valid data; difficulties in selecting and interpreting performance measures; a lack of organisational commitment to achieving results; and limited decision-making authority assigned to those who can act on those measures (Cavalluzzo & Ittner, 2004). Consequently, public organisations confront a complex assemblage of sociotechnical challenges to the successful implementation of performance measurement systems.
Performance indicators and measures must ideally link to strategic direction within a culture that promotes learning and change from performance data (Marr, 2008a). However, measuring performance in government is often seen as an administrative burden that rarely produces insight to support the business or lead to change (Marr, 2008b). Essentially, the bureaucratic culture of government can stand in the way of effective performance measurement. For example, in 2000, the Institute of Public Administration of Canada (IPAC) found that performance measurement in government bodies in North America was ineffective and rarely led to positive change (Plant, 2006). Performance measurement is, in part, made more difficult in the government context by the competing needs to satisfy growing customer expectations (effectiveness) whilst remaining financially sustainable (efficiency) (Tomaževič, Tekavčič, & Peljhan, 2017).
Furthermore, reviewing and improving performance measurement in public entities can be costly, resource intensive, and time-consuming (Hatry, Gerhart, & Marshall, 1994). That said, effective performance measurement is nonetheless important in gauging an organisation's continuous improvement and ongoing success (Caiden, 1998) and can be used to entrench cultural changes or innovation (Bartlett & Dibben, 2002). Hence, it is important to pay appropriate attention to it despite the challenges. It has also been noted that developing a performance measurement system and its indicators from the ground up, thereby ensuring its usefulness to managers and its comprehensibility to stakeholders, was more successful upon implementation than the alternative and has been utilised to inform continuous improvement (Hildebrand & McDavid, 2011). In contrast, an international study by Brusca and Montesinos (2016) examined the utilisation of performance reporting, found that the majority of countries did not engage stakeholders in developing performance reporting, and concluded that such engagement could strengthen that reporting. These findings clearly suggest a strong opportunity and role for stakeholder participation in the development of performance indicators, which would in turn improve the implementation of those indicators and support a focus on continuous improvement.
Enacting effective community/stakeholder engagement (Pansari & Kumar, 2017) in developing performance indicators requires suitable instruments and processes. Holistic and systematic processes focussed on achieving such outcomes are not currently described or prescribed in the literature, and the lack of such processes has been noted by researchers in the field (see Kumar & Pansari, 2016; Moxham, 2009; Taylor & Taylor, 2014; Yuen, Park, Seifer, & Payne-Sturges, 2015). There is a clear need to develop such participatory processes, since communities expect to play a role in government decision-making despite arguably lacking the knowledge or specialisation to understand complex issues and make informed decisions (Brydon & Vining, 2016). For example, community engagement practices have increased in Australia because: more citizens express a desire to participate in decision-making; there is a belief within government that understanding community needs results in more effective policy development; engagement programs help legitimise government; and engagement via the online environment has become easier (Grant & Drew, 2017). At the local government level, Head (2007) argues that the expansion of engagement processes was motivated by a desire to broaden responsibility for decisions and their ultimate success or failure. He also found that community participation in decision-making may assist in restoring trust in government (Head, 2007). Furthermore, the inclusion of community stakeholders in performance management practices could address asymmetric information provision, improve the value of performance data (Epstein, Wray, & Harding, 2006), and encourage collaboration and relationship building between the internal and external stakeholders of a public organisation. Such outcomes thereby positively impact external stakeholders' perceptions of the performance of the public entity (Quinlivan, Nowak, & Klass, 2014).
In sum, in the local government arena, for example, there are strong indications that the development and use of performance assessment is often variable, ill-conceived, and ineffective, and consequently in need of greater understanding and change. The issue of performance indicators in PA is thus highly important for the effective performance measurement and management of public entities, yet it remains difficult to approach and very underdeveloped. Consequently, there is a lack of published material on this specific topic, and this book directly addresses the significant challenges of developing performance indicators for PA organisations, thereby contributing knowledge to the broader field of performance measurement and management in PA.

1.5. Our Customer Approach to Performance Indicator Development (PIDA)

This section provides an outline of our complete performance indicator development approach and serves to both anchor and signpost what is explored in the following chapters of this book. To that end, we articulate here only the broader structure of our approach and highlight elements that have some alignment or synergy with one another.
As mentioned earlier, we have taken a quality perspective in this approach to performance indicator development. The core and significant challenge was determining an approach through which to define community needs and align those needs with relevant performance indicators. We have therefore adapted and enhanced a standard HoQ as the decision-making framework for developing performance indicators. By way of a brief recap, a HoQ seeks to translate customer requirements (the VoC) into measurable TRs and metrics for products (Evans & Lindsay, 2008). TRs are those attributes of a physical product or service that specifically deliver upon stated customer requirements as articulated in the captured VoC. Any HoQ endeavours to balance effective and efficient service delivery with quality service provision (Tomaževič, Tekavčič, & Peljhan, 2017), where quality service provision relates to the product or service meeting customer requirements. The customer voices/needs and their integration with identified TRs are thus seminal in a HoQ framework. Moreover, the use of the term 'house' in this technique simply reflects that the framework is pictorially presented as a house, containing elements and matrices which constitute a roof, walls, and foundations. This pictorial form also helps it serve as a powerful communication and learning tool, wherein all the critical elements are contained within the one figure.
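For orientation, the sketch below labels the regions of a generic, textbook-style HoQ diagram. It is a simplified depiction under common QFD conventions, not the enhanced PIHoQ framework detailed in the following chapters.

```python
# A generic, textbook-style depiction of the House of Quality layout, for orientation only.
# This is not the enhanced PIHoQ framework detailed in later chapters.
hoq_regions = {
    "left wall":  "Customer requirements (Voice of the Customer) and their importance weights",
    "ceiling":    "Technical requirements (the 'how' of the product or service)",
    "interior":   "Relationship matrix linking each customer requirement to each TR",
    "roof":       "Correlation matrix showing interactions among the TRs",
    "right wall": "Customer-perceived competitive assessment / planning matrix",
    "foundation": "TR priorities, targets, and technical benchmarks",
}

for region, contents in hoq_regions.items():
    print(f"{region:>10}: {contents}")
```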
We have termed our enhanced or derived HoQ the performance indicator House of Quality (PIHoQ).

Table of contents

  1. Cover
  2. Title
  3. Chapter 1. Introduction
  4. Chapter 2. What Is a Quality Perspective Towards Developing Performance Indicators?
  5. Chapter 3. The Key Elements of the PIHoQ Framework
  6. Chapter 4. A Guide to the Development of the PIHoQ
  7. Chapter 5. Benefits, Challenges, and the Road Ahead
  8. References
  9. Index