7SSMM 609 Accounting, Organisations and Society
What’s Measured is What Matters
Professor: Rita Samiolo
Submission Date: 27 March 2020
Candidate Number: A23496
Auditing has become far more widespread in our society today.
Industries which used to be self-regulated by peers are now subject to audits by independent
companies. Society used to rely on professionals for their technical knowledge and trust them to
put the public’s best interest first (McGivern & Ferlie, 2007). However, “growth in a ‘culture of
suspicion’ of professionals in the public sector has given rise to the ‘Audit Society’ we live in
today” (O’Neill, 2002). The trust we used to place in professionals has been replaced by trust in
accounting and auditing (Power, 1997). This shift has been especially pronounced under new
public management (NPM). Beginning in the 1980s, the public sector was pushed to become
more business-like, amid criticism of its perceived inefficiency and ineffectiveness. This led to
decentralisation, privatisation and the development of target-driven performance indicators. In
the early 1980s, the National Audit Office (NAO) and the Audit Commission (AC) were
established. These organisations
“consolidated audit resources of both local and central government and provided a focus for
addressing these inefficiencies and ineffectiveness as well as the economy of publicly funded
activities” (Power, 1999).
“Value for money” (VFM) audit arose as a way to introduce transparency into the public
sector. In performing these audits, rather than focusing simply on financial aspects, auditors
evaluate whether the auditee has systems in place to ensure economy, efficiency and
effectiveness. This type of audit has fostered competition in the public sector and placed a focus
on results-oriented behaviour. Another development under NPM
has been the shift from focusing on the practice itself, towards a more indirect approach in which
there is a focus on internal control systems (Munro, 2004). These changes mainly result from
time and economic constraints: it is simply not feasible to audit the full activities of complex
organisations such as the NHS directly. It is therefore more time-efficient and economical to
audit the internal control systems already in place at an organisation rather than its actual
performance. This focus on internal systems rather than on an organisation's activities
themselves has major implications when examining how public sector systems, such as social
work and public health systems, can be made "auditable".
Inspections that were once based on peer review and professional judgement have been
replaced by standardised, auditable measures of performance. Performance indicators, rankings
and indices are used to measure the performance of an organisation against a target. Complex
organisations require performance indicators because they allow users to focus their attention on
a much smaller dataset (meant to represent the entire organisation) which can be tracked and
managed over time and can help organisations set goals. Indicators also improve comparability
and help organisations address public expectations.
However, the problem with governing by targets is that it demands targets which relate
to only a part of the entire performance, and that part is then given priority over the areas which
are not measured. This over-simplification of complex activities may fail to measure parts of the
performance which matter, because they are often much harder to measure. The ultimate goal of
governance by targets is to develop targets that can be measured by indicators in order to assess
the performance of the domain. This idea is demonstrated in Figure 1 (see appendix 1) (Bevan
and Hood, 2006).
However, there is a problem that Carter et al. recognised in 1995: most indicators are
"tin openers rather than dials: by opening up a can of worms they do not give answers but
prompt investigation and inquiry, and by themselves provide an incomplete and inaccurate
picture" (Carter et al., 1995). If performance indicators are dials, the standards of performance
have one set meaning which can be standardised and provide an accurate measure. These are the
small set of good measures (M[g] in appendix 1). If, however, performance indicators are tin
openers, they are open to more than one interpretation and do not always provide accurate
measures of performance. These are the imperfect measures, susceptible to false positives or
false negatives, denoted in appendix 1 as M[i]. Further, there is another subgroup, for which we
have no information at all, denoted as n, and which therefore cannot be measured. Opening this
"can of worms" means that, rather than giving answers, performance indicators raise further
questions, which in turn prompts further investigation and inquiry.
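This taxonomy can be summarised compactly. Writing the full performance domain as D (a sketch using the essay's symbols rather than Bevan and Hood's own notation):

D = M[g] ∪ M[i] ∪ n

where M[g] is the small set of good measures, M[i] the set of imperfect measures prone to false positives and false negatives, and n the remainder for which no data exist. Any target regime can observe at most M[g] ∪ M[i]; whatever falls within n remains invisible to it.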
Governance by targets rests on a number of assumptions which must be addressed.
First, it assumes that any omission of performance outside the domain (the organisation, for
example, that is being measured), or in parts of the domain for which we have no data, does not
matter; and that either the good measures are a reliable reflection of total performance, or the
good measures in combination with the imperfect measures can be relied on as an adequate basis
for assessment (Bevan and Hood, 2006). Governance by targets also assumes that, while
individual behaviour will change, the amount of 'gaming' will be relatively low. Here, gaming
refers to individuals reducing performance in areas that are not measured, or 'hitting the target
and missing the point' (Bevan and Hood, 2006). Unfortunately, gaming has proved very
prevalent in organisations that embrace governance by targets.
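The mechanics of gaming can be illustrated with a small simulation. The sketch below is purely hypothetical: it assumes a unit with a fixed effort budget, an indicator that counts only measured work, and an invented weighting under which unmeasured work contributes twice as much real value, standing in for the hard-to-measure activities discussed above; none of the numbers come from Bevan and Hood.

# Hypothetical sketch of gaming under governance by targets: a unit
# reallocates a fixed effort budget towards measured activities, so the
# indicator rises while the true value of the work falls.

def scores(effort_measured: float, effort_unmeasured: float):
    """Return (indicator, true value) under toy, assumed weightings."""
    indicator = effort_measured                           # only measured work counts
    true_value = effort_measured + 2 * effort_unmeasured  # assumed weighting
    return indicator, true_value

# Before the target: 100 units of effort are split evenly.
ind_before, val_before = scores(50, 50)   # indicator 50, true value 150

# After the target: effort chases the indicator ('hitting the target
# and missing the point').
ind_after, val_after = scores(85, 15)     # indicator 85, true value 115

print(f"indicator:  {ind_before} -> {ind_after}")   # 50 -> 85 (looks better)
print(f"true value: {val_before} -> {val_after}")   # 150 -> 115 (is worse)

The indicator improves while the unit's real output deteriorates, which is exactly the pattern the target regime cannot see.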
One example of gaming in the public sector is the establishment of consultant appraisal
(CA) in the British National Health Service (NHS) in April 2001. According to the Department
of Health (2000), CA was supposed to "provide a formally structured opportunity for professionals
to engage in dialogue and reflect on how their effectiveness might be improved”. Consultants
were given appraisal forms in which they provided evidence of ‘good medical practice’ (GMC,
2000) which they discussed with their appraiser in order to get their medical licenses revalidated.
Rather unsurprisingly, NHS hospitals were also measured on how many of their consultants
completed the appraisals. Consultants were encouraged by management to complete the
appraisals, without regard for how well they were done (McGivern & Ferlie, 2007). Another
consequence of this system was consultants' tendency to 'tick boxes'. The attempt of CA to
make the NHS more visible, explicit and accountable backfired. McGivern & Ferlie
(2007) found that “many medical professionals are not changing their actual practice but rather
they are re-presenting what they do in order to avoid empty boxes in the CA form”. This is a case
of decoupling. The audit process was managed as a separate compliance exercise with no
integration into the overall system. The audit provided an impression of legitimacy, which came
to matter more than professional development, the stated goal of the exercise.
Another example of gaming in the NHS is the introduction of ‘star ratings’ by the
Department of Health in England in 2001. These ratings for public health organisations "gave
each unit a single summary score from about 50 kinds of targets" (Bevan and Hood, 2006). This
performance assessment was linked with sanctions: managers with poor performance on
measured indicators risked being fired or publicly shamed. One of the many indicators measured
under the star ratings system was the proportion of Category A calls responded to within 8
minutes for 1999–2000 and 2002–2003 (Bevan and Hood, 2006). Category A calls were defined
as immediately life-threatening emergencies. A target had been in place since 1996 to reach 75
percent of these calls within the allotted timeframe of 8 minutes. In 1999–2000, before the
introduction of the star ratings, some trusts achieved only 40% (appendix 2). However, when the
75% standard was incorporated as a star-ratings target for 2002–2003, reported performance
increased significantly and, by the end of 2003, even the worst-performing trusts achieved nearly
70% (Bevan and Hood, 2006) (appendix 2).
At first glance, these results seem very good. It looks as though the measures were good
ones (M[g]) and the targets worked to improve performance. However, the Public
Administration Select Committee found in 2003 that there were inconsistencies in defining what
counted as a 'life-threatening emergency'. The percentage of emergency calls listed as Category
A ranged from 10% to over 50% across ambulance trusts (Public Administration Select
Committee, 2003). Further, there was no clear standard with regard to when the clock started
(Bird et al., 2005). Both of these issues led to questions about whether Category A response time
was truly a
good measure of performance or not. Clearly, this was a case where the performance measures
were not necessarily good indicators of the true performance.
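The effect of such classification differences on the headline figure can be demonstrated with invented numbers. In the hypothetical sketch below, a trust improves its reported Category A percentage simply by recording fewer of its slow, borderline calls as Category A; no ambulance arrives any faster, and the figures are illustrative assumptions rather than data from the Select Committee report.

# Hypothetical sketch: reclassifying borderline calls inflates the
# '% of Category A calls reached within 8 minutes' indicator without
# any change in actual response times.

def within_8_minutes_rate(reached: int, category_a_total: int) -> float:
    """Share of recorded Category A calls reached within 8 minutes."""
    return reached / category_a_total

# Assumed baseline: 1,000 calls recorded as Category A, 600 reached in time.
before = within_8_minutes_rate(600, 1000)   # 0.60 -- misses the 75% target

# Same calls, same response times, but 250 of the 400 slow calls are now
# recorded as Category B, shrinking the Category A denominator.
after = within_8_minutes_rate(600, 750)     # 0.80 -- 'hits' the target

print(f"before reclassification: {before:.0%}")   # 60%
print(f"after reclassification:  {after:.0%}")    # 80%

On paper the trust moves from failing to comfortably exceeding the 75 percent target, even though actual response times are unchanged.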
A final example illustrating that there is far more to what matters than simply what is
measured is social work. In order to accurately assess the state of a child and their family, a
social worker must spend time with the family. Human contact is necessary in order to
effectively support families. However, the government's growing demand for data comes at the
expense of the kinds of data and interaction social workers need in order to accurately gauge
child welfare. The audit and inspection system has failed in its goal to make social work
transparent. There is a "mistaken emphasis on the easily measured aspects of practice such as
forms filled in or meetings held" (Munro, 2008). The part that is harder to measure and evaluate,
but arguably much more valuable, is whether the information contained in these forms is
accurate and useful for reasonable decision making, or what the nature of the discussion at the
meeting was. As Munro emphasises, "the current system is so skewed towards the simply
measured that it sends out the wrong messages to workers about what is important"
(Munro, 2008). Social workers’ behaviour adjusts to the fact that the unmeasured parts of their
work are undervalued and not assessed. This diverts their attention away from the interactions
with families, a crucial part of their job, towards paperwork and meetings. This is an example of
what is important not being measured. The safety and comfort of the children in their care
should always be a social worker's top priority. However, there is no reliable way to measure
this. Therefore, less important factors, such as whether a meeting was held, regardless of what
was discussed at it, are given higher priority.
In conclusion, performance measurements and performance indicators foster selective
transparency. They make visible the aspects of organisations that are easily measured, at the
expense of other aspects that may be more central to the organisation. In some cases, targets
lead to a deteriorating quality of the services already being provided, as we saw with consultant
appraisals and social work. Targets can also transform the organisation or system they are trying
to describe. In the case of social work, targets were set as an attempt to increase transparency.
However, as a result of the increased paperwork and liability, social workers began to spend less
time with the families and children they were supposed to be protecting and more time inputting
data into a system. This has a significant impact on the quality of public service being provided.
It is clear that taking a part of an organisation as a representation of the whole, as performance
indicators do, can and very often does lead to very negative, unintended consequences. Even in
cases where there appear to be significant improvements in performance, such as the response to
Category A calls under the star ratings system, it is impossible to know whether these were
genuine improvements and to what extent gaming reduced performance in areas that were not
captured by the targets. Clearly, there needs to be a shift in the way these targets are defined,
and perhaps in the monitoring systems in place, with the aim of reducing the problems of gaming
and synecdoche, the taking of a part to stand for the whole.
Appendix:
Appendix 1: Figure 1 from Bevan and Hood (2006): the division of a performance domain into good measures (M[g]), imperfect measures (M[i]) and unmeasured parts (n). [Figure not reproduced in this preview.]
Appendix 2: Percentage of Category A calls responded to within 8 minutes, 1999–2000 and 2002–2003 (Bevan and Hood, 2006). [Figure not reproduced in this preview.]
References:
Bevan, G. and Hood, C. (2006). 'What's Measured Is What Matters: Targets and Gaming in the
English Public Health Care System'. Public Administration, 84, pp. 517–538.
Bird, S.M., Cox, D., Farewell, V.T., et al. (2005). 'Performance Indicators: Good, Bad, and
Ugly'. Journal of the Royal Statistical Society: Series A, 168(1), pp. 1–27.
Carter, N., Klein, R. and Day, P. (1995). How Organisations Measure Success: The Use of
Performance Indicators in Government. London: Routledge.
Department of Health (2000). Advance Letter (MD) 6/00: Consultants' Contract: Annual
Appraisal for Consultants. London: Department of Health.
GMC (2000). Revalidating Doctors: Ensuring Standards, Securing the Future. London: General
Medical Council.
McGivern, G. and Ferlie, E. (2007). 'Playing tick-box games: Interrelating defences in
professional appraisal'. Human Relations, 60(9), pp. 1361–1385.
Munro, E. (2004). The Impact of Audit on Social Work Practice [online]. London: LSE Research
Online.
Munro, E. (2008). Effective Child Protection. 2nd edn. London: Sage.
O'Neill, O. (2002). A Question of Trust. Cambridge: Cambridge University Press.
Power, M. (1997). The Audit Society: Rituals of Verification. Oxford: Oxford University Press
(paperback edn, 1999).
Public Administration Select Committee (2003). On Target? Government by Measurement. Fifth
Report, HC 62-I. London: The Stationery Office.