Measuring Productivity in the Office Workplace

Centre for Building Performance Research
Measuring Productivity in the
Office Workplace
Final Report
James Sullivan
George Baird
Michael Donn
Research and publication by the
Centre for Building Performance Research.
Victoria University of Wellington
Prepared for: the New Zealand Government Property
Management Centre of Expertise, Wellington
July 2013
Edition 2: Final Report, 26 July 2013
ISSN
ISBN
Authors:
Sullivan, J., Baird, G., Donn, M.
Report title:
Measuring Productivity in the Office Workplace
Centre for Building Performance Research,
Victoria University of Wellington,
P.O. Box 600, Wellington, New Zealand.
Phone + 64 4 463 6200 Facsimile + 64 4 463 6204
The Document Register is provided at the rear.
TABLE OF CONTENTS
FIGURES .......................................................................................................................................................................... 5
TABLES ........................................................................................................................................................................... 5
EXECUTIVE SUMMARY .............................................................................................................................................. 7
ACKNOWLEDGEMENTS ............................................................................................................................................. 9
1 INTRODUCTION ..................................................................................................................................... 13
   1.1 Aim .................................................................................................................................................... 13
   1.2 The effects of the office environment on people ................................................................. 13
   1.3 The value of productivity .......................................................................................................... 14
   1.4 Defining productivity: the challenge ..................................................................................... 14
2 METHODS FOR ASSESSING PRODUCTIVITY ................................................................................ 17
   2.1 Perceived productivity ratings ................................................................................................ 17
      2.1.1 How is it measured? ............................................................................................................ 18
         2.1.1.1 The question may be asked in several ways .......................................................... 18
         2.1.1.2 It may be assessed over a particular period of time ........................................... 18
         2.1.1.3 The scale may be numerical or ordinal ................................................................... 19
         2.1.1.4 Assessments may be done by the self, peers, or supervisors .......................... 19
      2.1.2 The accuracy of subjective evaluations of productivity ........................................... 20
         2.1.2.1 Measures of perceived productivity are widely used and may have some relation to actual productivity ... 20
         2.1.2.2 There is some evidence that the CBE survey at least may provide a reasonable estimation of simple performance effects ... 21
         2.1.2.3 People are generally poor at assessing their performance ............................... 22
         2.1.2.4 Subjective ratings may be misleading ..................................................................... 23
         2.1.2.5 Correlations between subjective and objective productivity measures are low ... 24
      2.1.3 Comparisons of the different measures ......................................................................... 24
         2.1.3.1 The type of question ..................................................................................................... 24
         2.1.3.2 The influence of the recall period ............................................................................. 27
         2.1.3.3 The source of the ratings ............................................................................................. 27
         2.1.3.4 Comparison of numerical and ordinal scales ........................................................ 28
      2.1.4 Summary ................................................................................................................................ 29
   2.2 Objective performance tests ..................................................................................................... 31
      2.2.1 Cognitive performance test batteries ............................................................................. 31
      2.2.2 Monitoring computer activity .......................................................................................... 32
   2.3 Health based measures .............................................................................................................. 32
      2.3.1 Absenteeism .......................................................................................................................... 33
      2.3.2 Presenteeism ......................................................................................................................... 35
      2.3.3 Frequency of health problems ......................................................................................... 36
   2.4 Time lost to issues affecting productivity ............................................................................ 37
   2.5 Psychometric measures ............................................................................................................. 39
      2.5.1 Mood ........................................................................................................................................ 39
      2.5.2 Sleepiness/fatigue/alertness ............................................................................................ 40
      2.5.3 Job satisfaction ..................................................................................................................... 40
      2.5.4 Job engagement .................................................................................................................... 41
      2.5.5 Intention to quit/turnover ................................................................................................ 42
3 ASSESSMENT OF INTERNAL ENVIRONMENTAL CONDITIONS ............................................. 44
   3.1 The key environmental factors ............................................................................................... 44
   3.2 Environmental assessment methods ..................................................................................... 46
      3.2.1 Objective vs. subjective assessment ............................................................................... 46
      3.2.2 Comparison of different subjective assessment tools ............................................... 47
4 CONCLUSIONS ....................................................................................................................................... 52
   4.1 What are the factors that the literature suggests can be used to measure productivity? ... 52
   4.2 What are the key environmental factors affecting productivity? .................................. 56
   4.3 Are occupant surveys the best method for measuring and comparing productivity? ... 56
5 REFERENCES .......................................................................................................................................... 60
FIGURES
Figure 1: Average comfort vs. perceived effect of the environmental conditions on productivity in 15
NZ office buildings using BUS survey data. ...................................................................................................................... 26
TABLES
Table 1: Summary of the pros and cons of ordinal and numerical scales............................................................ 28
Table 2: Examples of health issues asked about in the different surveys............................................................ 36
Table 3: Summary of key environmental factors ........................................................................................................... 46
Table 4: Comparative summary of four occupant surveys ........................................................................................ 50
Table 5: Summary of advantages and disadvantages of various productivity measures ............................. 56
EXECUTIVE SUMMARY
Scientific research has firmly established that the office environment can influence people’s health,
well-being, and productivity. While specific effects may vary significantly, studies have suggested
that increases in productivity of up to 15% may be gained from environmental improvements.
Productivity effects have great value — it has been estimated that over the life of a building, the
costs of the workers and their salaries can be as much as 10 to 40 times the maintenance and
operational costs of the building, and 80 to 200 times the initial construction costs.
This research paper looks at the measurement of the effects of the office environment on the
occupants' productivity via a review of over 300 references. Four key questions were asked:
1) What are the factors that the literature suggests can be used to measure productivity?
2) What are the key behavioural and attitudinal factors that affect productivity?
3) What are the key environmental factors affecting productivity?
4) Are occupant surveys the best method for measuring and comparing productivity?
What are the factors that the literature suggests can be used to measure productivity?
+ What are the key behavioural and attitudinal factors that affect productivity?
There is no clear definition or standard measure of productivity in the office environment. There is
great variation amongst different jobs and tasks, making it difficult to compare or aggregate them.
While productivity is at its roots an objective and quantifiable measure, relating inputs to outputs,
objective measures are often highly limited and inappropriate for many office jobs. Factors such as
quality and interpersonal relations are not readily countable, but may be very important. Thus,
overall productivity in the office cannot really be measured.
Because productivity cannot be measured simply, it is often defined more vaguely, in terms of
various elements — generally behaviours that may be related to productivity and which may
provide indications of improved organisational outcomes.
Researchers have used a number of such elements to assess the effects of the office environment on
occupants. These include:
1) Ratings of perceived productivity
2) Cognitive performance tests (e.g. working memory, processing speed, concentration)
3) Monitoring computer activity (e.g. keystrokes, mouse clicks)
4) Absenteeism
5) Presenteeism
6) Reported frequency of health issues
7) Time lost to issues affecting productivity
8) Mood
9) Sleepiness
10) Job satisfaction
11) Job engagement
12) Intention to quit
13) Turnover
Most of these elements are measured subjectively. This is because they are either a) inherently
subjective (e.g. mood, job satisfaction) or b) possibly impractical to measure objectively (e.g.
reported frequency of issues). It should be noted that the objective measures are not inherently
better than the subjective ones. They too are limited to only measuring aspects of the overall
productivity. Absenteeism, for instance, only measures the amount of productivity lost because
someone is not at work, and says nothing about how productive they are when they are present.
Ultimately, all the measures available are limited and only provide an indication of the effects on
overall productivity. This may, however, still be enough to say if a building is likely to be providing
significant improvements to its occupants' productivity. The pros and cons of the different measures
are summarised in Section 4.1 of this report (pg.52).
What are the key environmental factors affecting productivity?
Key environmental factors affecting productivity include thermal conditions, indoor air quality,
acoustics, and lighting. They may also include elements such as workstation design and
ergonomics, and the amount of control people have over their environment.
Ultimately, these general factors touch on almost every aspect of office design. For example, indoor
air quality is affected by ventilation, location, occupant density, maintenance, cleaning, material
selection, where pollution sources are placed, and the general plan of the building.
A summary of these factors is in Section 3.1 (pg.44).
Are occupant surveys the best method for measuring and comparing productivity?
With regards to the assessment of environmental conditions, and their effects on people, the answer
is yes. Occupant surveys are the best way to get a broad picture of how the occupants are
responding to the building, and how well they think it is serving their needs. This is vital because
many productivity indicators may be influenced by more than just the environment. An occupant
survey can confirm whether it is likely to be the building that is causing any identified effects
(rather than some other factor), as well as identifying problem areas.
Objective measurements and observations are used to define the environmental conditions and
design elements to which the occupants are responding. They allow people to learn lessons about
the effects and success of design decisions, and define the specifics of problems that have been
identified, allowing them to be fixed. Thus, they complement occupant surveys.
For the measurement of productivity effects, it depends on what exactly one is trying to measure.
For most of the measures, the survey is indeed the best method, being a relatively simple and cheap
way of getting subjective reports from a large number of people. Some measures, however, may be
measured differently: absenteeism and turnover data may be acquired from organisational records;
computer activity is passively monitored by programs; and cognitive performance tests are tests
rather than surveys.
An occupant survey, such as those of the BUS or CBE, which measures both environmental
satisfaction and the perceived effects on productivity, is, however, an effective and practical method
for getting an indication of the productivity effects of a building. The method is very commonly
used, and studies over the years have consistently shown a high correlation between satisfaction
with environmental conditions and the perceived effects on occupant productivity and health — a
relationship which is corroborated by controlled laboratory studies. An occupant survey assessing
the environmental conditions is necessary to have any real confidence in the presence of possible
effects. Therefore, it may be reasonably suggested that if any single method were to be used, such
an occupant survey would be the best method for assessing productivity effects.
It should be emphasised, however, that the occupant surveys just provide an indication. They can
indicate if the building is probably having an effect on productivity, and if it is likely to be “small”
or “large”. They cannot, however, confidently say that there is, for example, a 10% improvement in
productivity. The specific measures of productivity have not been validated due to the lack of any
clear definition or standard measure of office productivity. Moreover, the validity of a numerical
estimate of productivity is questionable when many aspects are not readily countable. It should also
be noted that there may be considerable variation in reported effects. While surveys may report that
on average productivity is improved by a building, closer examination may reveal that, say, a third
of the occupants were actually reporting negative effects. This would indicate a need to improve
parts of the building as much as anything else, as well as suggesting that the mean productivity
effect of the building may not be true for everyone. Ideally, an occupant survey is not just a means
of “scoring” a building, but is also a tool to enable one to maximise the utility of the building for as
many of the occupants as possible.
While an occupant survey may be adequate to provide indications of productivity effects on its own,
other measures may still be valuable. Measures such as job satisfaction and absenteeism provide
indications of likely effects on productivity and areas that may be considered important to
organisational outcomes. Moreover, if positive effects were found on multiple factors, such as
absenteeism and an occupant survey, a stronger argument can be made that a building is providing
valuable benefits. However, people’s time is limited, and they may not be willing to do a lot of tests
and surveys. An occupant survey such as the BUS may take up most of the time people are willing
to spend, leaving little room for additional tests. Cognitive performance tests, and mood surveys,
might not be practical, as they may require too much work from people. However, there are some
factors that can be measured simply and quickly with a few questions, such as job satisfaction and
intention-to-quit. There are also a number of factors that are already measured by the government,
such as job engagement and absenteeism, and it could be useful, and expedient, to bring that data
together to enable possible environmental effects to be identified.
In order to define a robust productivity evaluation technique, it is necessary to rigorously test the
evaluation tool. If such an evaluation exercise is to be undertaken, then merely finding a correlation
between two independent measures of productivity, such as occupant surveys and absenteeism, is
not considered sufficient demonstration of corroborative evidence. Often in these circumstances a
third independent measure is used to triangulate the result, confirming that the correlation between
the first two measures is not a coincidence. Such independent measures exist for workplace
productivity: cognitive performance tests, or health surveys, could be used. However, using too
many of these measures could consume too much time and risk low participation as a result of
survey fatigue. It is not necessary that an operational tool incorporate this triangulation. It would be
sufficient to use this triangulation approach during the development of an operational tool based
upon, say, occupant surveys and absenteeism. It may be argued that the literature already provides
such evidence. However, it would still be important to confirm the correlations as part of the
process of making any tools operational.
ACKNOWLEDGEMENTS
We were fortunate to have two of the world’s leading experts in this field involved in this exercise:
- Adrian Leaman of Building Use Studies
- Dr. Jennifer Veitch of the National Research Council of Canada
We would like to thank both of them for taking on the role of project advisors, for giving us the
benefit of their considerable experience, and for their perceptive comments and advice at key stages
in the process.
1 INTRODUCTION
1.1 Aim
The overall subject of this research report is the measurement of the effects of the office
environment on the productivity of the occupants. The briefing document (Meehan, 2013) posed the
following four questions:
1) What are the factors that the literature suggests can be used to measure productivity?
2) What are the key behavioural and attitudinal factors that affect productivity?
3) What are the key environmental factors affecting productivity?
4) Are occupant surveys the best method for measuring and comparing productivity?
The questions were examined through a review of the literature on the various subjects.
Measurement of productivity and behavioural factors (Q. 1&2) is covered in Section 2, which
discusses the various behavioural and attitudinal measures that may be used to measure productivity
effects. The advantages and disadvantages of the different methods are examined, looking at how
reliable and useful they are as a measure of productivity, as well as potential logistical concerns.
The key environmental factors (Q. 3) are covered in Section 3, which asks about the key factors that
need to be measured, and the advantages and disadvantages of the different methods.
Whether or not occupant surveys are the best method is a question whose answer depends on what,
precisely, is being measured. Thus, the question (Q. 4) weaves a thread throughout the whole paper,
and is answered in the final discussion, when the analysis of the various methods is drawn together.
1.2 The effects of the office environment on people
This report is based on the premise that the office environment can affect people’s productivity.
This premise — that the office environment can influence people in ways that may reduce or
improve their productivity — is well established (Clements-Croome, 2006a; Kamarulzaman et
al., 2011; Loftness et al., 2003; Newsham et al., 2009; Oseland, 1999; Roelofsen, 2002;
Sensharma et al., 1998; Veitch, 2008). Numerous studies have shown that indoor air quality
(Seppänen et al., 2006a; Sundell et al., 2011; Wargocki et al., 1999; Wyon, 2004), thermal
conditions (Frontczak, 2011; Hancock et al., 2007; Pilcher et al., 2002; Seppänen et al., 2006),
lighting (Boyce et al., 2003; Boyce, 2003; Chaudhury et al., 2009; Veitch et al., 2011), noise
(Rashid & Zimring, 2008; Salonen et al., 2013; Szalma & Hancock, 2011), office design (Brand
& Smith, 2005; Charles et al., 2004; Haynes, 2008a; Smith et al., 2011), and ergonomics
(Attaran & Wargo, 1999; Rowan & Wright, 1995; Springer, 1997) can influence people’s
cognitive abilities, their health, their attitudes, and their productivity.
While the specific effects of any particular factor may be uncertain — as they are dependent on
many factors, such as the tasks in question — studies have suggested that increases in productivity
of up to 15% may be gained from environmental improvements (Oseland, 1999).
1.3 The value of productivity
Increases in productivity have considerable value. As discussed by Clements-Croome (2006b), the
majority of an organisation’s costs pertain to the workers and their salaries — while the costs of
constructing and operating buildings are relatively small when looked at over the building’s life.
The specific ratios may vary significantly between buildings: Evans et al. (1998) found a ratio of
1:5:200 between initial construction costs, maintenance and operation costs, and business operating
costs such as salaries; Wu & Clements-Croome (2005, cited in Clements-Croome, 2006b) estimated
a ratio of 1:8:80; US research found that staff costs are as much as 100 to 200 times energy costs,
and 20 to 44 times HVAC running costs (Clements-Croome, 2006b). Despite the variance,
however, it is clear that the value of the workers is far greater than that of buildings.
This disparity means that even very small productivity effects may have great value and be highly
cost-effective (Clements-Croome, 2006b). Wyon (1996, cited in Clements-Croome, 2006b)
estimated that even a productivity effect of 0.5% could pay back the costs of upgrading
unhealthy office buildings in the US in as little as 1.6 years. As a corollary to this, the costs of
poorly designed buildings may also be highly significant.
The high value of productivity effects also makes clear the importance of ensuring that buildings
enable their occupants to be productive. To that end, it is necessary to measure their productivity.
1.4 Defining productivity: the challenge
Productivity is conventionally defined as the ratio of outputs to inputs (Djellal & Gallouj, 2013;
Haynes, 2008b; Koopmans et al., 2011; Oseland, 1999). It is based on the situation of the factory
production line, which traditionally deals with standardised products (Djellal & Gallouj, 2013), and
in which it is possible to measure, for example, the number of widgets produced per hour.
Conceptually, productivity is an objective and quantifiable measure (Alby, 1994).
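When output is standardised, the conventional ratio is trivial to compute. A minimal sketch follows; the widget figures are invented for illustration:

```python
def productivity(output_units: float, input_hours: float) -> float:
    """Conventional productivity: units of output per unit of input."""
    if input_hours <= 0:
        raise ValueError("input_hours must be positive")
    return output_units / input_hours

# A production line turning out 240 widgets over an 8-hour shift:
rate = productivity(240, 8)
print(rate)  # 30.0 widgets per hour
```

As the rest of this section argues, the difficulty in office work is not this arithmetic but supplying defensible values for the numerator.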
However, defining and measuring productivity in the context of the office is highly problematic, as
has been discussed by many researchers (Djellal & Gallouj, 2013; Haynes, 2008b; Jääskeläinen &
Laihonen, 2013; Koopmans et al., 2011; Leaman & Bordass, 1999; Oseland, 1999; Zelenski et al.,
2008). While measuring inputs such as time and resources may not be a problem — timesheets, for
example, are commonly used — measuring the outputs is much more complicated (Djellal &
Gallouj, 2013; Leaman & Bordass, 1999).
As discussed by Leaman & Bordass (1999), productivity may be improved by increasing the
quantity of what one produces (while keeping the time and resources required the same), or by
improving the quality of what is produced. However, defining and measuring these for office work
can be very difficult, and potentially impossible to do objectively.
For example, how should the productivity of someone who is writing reports be assessed? By the
number of words? The problems caused by the variability of office work are apparent here — all
reports are not the same. Some reports are complicated and difficult, others simple and routine —
and this does not necessarily correspond to length. Moreover, the careless use of such a measure
could conceivably have negative effects if people start to try and maximise word count at the
expense of quality. Quality is difficult to define and is arguably subjective.
The lack of standardisation prevents comparisons and inhibits attempts to produce an overall
measure of productivity. For example, different jobs are clearly non-comparable (Leaman &
Bordass, 1999). A manager cannot be directly compared to a receptionist or a policy analyst or an
accountant — the requirements and outputs of their jobs are completely different. The situation
repeats itself within jobs as well — office workers may have to carry out a wide variety of tasks,
each of which cannot be directly compared to the others. Indeed, many tasks are unique, and cannot
even be compared to other tasks of the same type. As previously discussed, different reports cannot
be directly compared as they may have different requirements, and different levels of difficulty.
This makes estimating overall productivity problematic, as individual task productivity cannot
simply be aggregated. If someone is “50%” productive1 in a meeting they have for half a day, and
then “70%” productive writing a report for the rest of the day, does this mean their overall
productivity is “60%”? Should different tasks be weighted? This would be difficult, especially as
constant weightings cannot be used — every meeting is not of the same importance. This
variability also means that measuring productivity objectively would require individual measures to
be developed for every job — which for an organisation with such a broad range as the government
may be impractical.
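The aggregation problem in the example above can be made concrete. The sketch below uses the hypothetical hours and percentages from the text; the importance weights it omits are precisely what no constant scheme can supply:

```python
def time_weighted_productivity(tasks):
    """Time-weighted mean of per-task productivity ratings.

    tasks: iterable of (hours, rating) pairs, where each rating is a
    hypothetical per-task productivity percentage.
    """
    total_hours = sum(hours for hours, _ in tasks)
    if total_hours == 0:
        raise ValueError("tasks must cover some time")
    return sum(hours * rating for hours, rating in tasks) / total_hours

# Half a day "50%" productive in a meeting, half a day "70%" writing a report:
day = [(4.0, 50.0), (4.0, 70.0)]
print(time_weighted_productivity(day))  # 60.0
```

The arithmetic dutifully returns 60%, but weighting every hour of every task equally is exactly the assumption the text shows cannot be justified.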
Despite the difficulties caused by the complexities of office work, people have attempted to provide
generic measures that can be used to measure the productivity of different jobs. One possibility is
the use of economic measures (e.g. ratio of revenue to expenditure) (Strassmann, 2004). The
advantage of this is that dollars (or another currency) provide a standardised measure that can be
used across different jobs and situations, and that it sidesteps the problem of measuring quality
and quantity by letting the marketplace resolve it. However, such measures
are inappropriate here, for several reasons:
1) It may be argued that it is actually measuring profitability rather than productivity
(Kemppila & Lonnqvist, 2003).
2) Such measures are heavily affected by external factors, such as the marketplace. As
confounding variables, such factors could make it near impossible to identify effects of the
built environment on people’s productivity.
3) As a public service, much of what the government undertakes does not make money (Djellal
& Gallouj, 2013).
Another complication is that many factors considered to be important to productivity, such as
quality and interpersonal relations, are not readily countable. For such reasons, as Murphy (2008a)
and Bommer et al. (1995) discuss, objective measures are often highly limited and inappropriate for
many jobs.
The “productivity” of different behaviours can be hard to define. For example, talking with co-workers may seem less productive, since it means that one is not actively on task —
but it could also spark valuable ideas as well as more subtly benefiting the organisation by
enhancing interpersonal relations and the sharing of information. Often people think of the solutions
to problems when they take a coffee break. Excessive focus on task performance may ignore the
value of behaviours that may benefit the broader organisation, such as helping others and showing
1 Ignoring the question of how productivity is measured in the first place.
responsibility and initiative. Such behaviours are categorised as contextual performance, and are
considered to be important factors in work performance (Koopmans et al., 2011).
So, as has been discussed, productivity is very difficult to clearly define in the modern office and
there is no standard measure (Haynes, 2008b). The problem may be summed up in two points:
1) Productivity is nominally an objective and quantifiable measure.
2) We cannot define it in objective and quantifiable terms.
A solution may be revealed by looking at the related concept of work performance, defined in terms
of various behaviours such as absenteeism and job engagement (Koopmans et al., 2011). Work
performance may be defined as “scalable actions, behaviours, and outcomes that employees engage
in or bring about that are linked with and contribute to organizational goals” (Viswesvaran & Ones,
2000). As Koopmans et al. (2011) discuss, performance on these terms is “an abstract, latent
construct that cannot be pointed to or measured directly… made up of multiple components or
dimensions… made up of indicators that can be measured directly” (Koopmans et al., 2011). As
with productivity, defining and measuring overall performance is problematic. However, despite an
inability to clearly define and measure performance, it is possible to demonstrate the existence of
performance effects by finding effects on various factors that influence work performance.
Similarly, showing effects on factors that contribute to productivity could be used to provide
evidence for effects on overall productivity.
The advantages and disadvantages of the measures of such factors are discussed in the next section.
2 METHODS FOR ASSESSING PRODUCTIVITY
A wide range of methods has been used to assess people’s productivity and work
performance. Due to the lack of any simple measure of overall productivity, the different methods
instead measure various factors that are expected to affect people’s productivity.
The basic aim is for the measures to be able to tell us, with a reasonable degree of confidence, if a
building is providing any productivity effects. As discussed in section 1.3, the economics of the
situation are so skewed towards the value of productivity that almost any productivity effect may
have great value.
Ideally, the measures would also be able to provide numerical estimates of the productivity effects,
allowing one to say that the building was, for instance, improving productivity by 10%. However,
given that many aspects of office productivity are not readily quantifiable, such measures are highly
limited.
The measures discussed here are:
1) Perceived productivity ratings
2) Cognitive performance tests
3) Monitoring computer activity
4) Absenteeism
5) Presenteeism
6) Reported frequency of health issues
7) Time lost to issues affecting productivity
8) Mood
9) Sleepiness
10) Job satisfaction
11) Job engagement
12) Intention to quit
13) Turnover
These factors were selected because research suggests not only that they affect productivity, but
also that they may be influenced by environmental conditions, and so may be appropriate options
for examining the effects of buildings on people’s productivity. A strong focus was placed on
perceived productivity ratings because they are the productivity measure of choice in occupant
surveys looking at the effects of the indoor environment on productivity.
2.1 Perceived productivity ratings
Subjective ratings of productivity are used in many studies (Clausen & Wyon, 2008; Haynes,
2008c; Humphreys et al. 2007; Larsen et al. 1998; Leaman & Bordass, 1999; Lee & Guerin, 2010;
Mak & Lui, 2012; Meijer et al. 2009; Meyer et al. 1989; Smith & Orfield, 2007; Viswesvaran et al.
1996).
One of the main reasons for using them is convenience. As Leaman & Bordass (1999) discuss,
simple surveys measuring subjective productivity have many advantages: they are relatively cheap,
quick and easy to carry out; questions about general productivity can be given to people in different
buildings and jobs without having to be tailored to the specific situation; large samples can be
analysed across many buildings; and the development of databases containing the results from
many buildings measured with the same general questions allows results to be compared to
benchmarks.
All of these make subjective productivity measures an attractive option — especially considering
the inability of researchers to define any generally useful objective measure of office worker
productivity. Indeed, one of the key arguments put forth for using subjective measures is that: “a
self-assessed measure of productivity is better than no measure of productivity” (Haynes, 2008b).
2.1.1 How is it measured?
The basic form involves asking people to rate their productivity. There are a variety of variants on
the basic question, with productivity being assessed over different time periods, using different
scales, and with ratings being solicited from different sources. This broad range and variety is
described in the sections below.
2.1.1.1 The question may be asked in several ways
Questions about people’s productivity may be asked in several ways.
There is the simple, direct approach, wherein people are simply asked to rate their productivity. An
example of this is that of Clements-Croome & Kaluarachchi (2000), who asked workers to:
“rate their level of productivity on a seven-point scale, from extremely dissatisfied to
extremely satisfied” (cited in Kemppila and Lonnqvist, 2003)
People may also be asked to compare themselves to others — such as the “average worker”. For
example, the Health and Work Performance Questionnaire asked (World Health Organisation,
2001):
“How often was your performance higher than most workers on your job?”
Finally, it is common, especially in studies looking at the effects of the environment on
productivity, to ask the question more indirectly, and to instead ask people what effect the
environment has on their productivity. This is the approach used in POEs such as the BUS2 and
CBE3 surveys.
“Please estimate how you think your productivity at work is increased or decreased by the
environmental conditions in the building?” (Leaman & Bordass, 1999)
2.1.1.2 It may be assessed over a particular period of time
It is also common to specify a time period over which one’s productivity should be assessed. Report
periods may cover a day, such as the survey used by Kildesø et al. (1999):
“Today I have been able to work:
0%--------------------------------100%”
2 Building Use Studies (BUS) occupant survey (Building Use Studies, 2011a)
3 Center for the Built Environment (CBE) occupant survey, University of California, Berkeley (Center for the Built Environment, 2013)
A week:
“How would you describe the overall amount of work you did this week?”
(Halpern et al. 2000)
Multiple weeks:
“On a scale from 0 to 10 where 0 is the worst job performance anyone could have at your
job and 10 is the performance of a top worker…. how would you rate your overall job
performance on the days you worked during the past 4 weeks (28 days)?” (World Health
Organisation, 2001)
Or even a year or more:
“Using the same 0-to-10 scale, how would you rate your usual job performance over the
past year or two?” (World Health Organisation, 2001)
2.1.1.3 The scale may be numerical or ordinal
The scales used also vary. The BUS and CBE surveys use numerical scales, and explicitly ask
people to estimate the percentage effect the environmental conditions have on their productivity.
The BUS uses a 9-point scale, from -40% to +40% (Leaman & Bordass, 1999):
“Please estimate how you think your productivity at work is increased or decreased by the
environmental conditions in the building?”
-40%  -30%  -20%  -10%  0%  +10%  +20%  +30%  +40%
While the CBE survey uses a smaller, 7-point scale (Center for the Built Environment, 2013):
-20%  -10%  -5%  0%  +5%  +10%  +20%
Ordinal scales are also commonly used, with scale points described rather than enumerated. For
example, Humphreys and Nicol (2007) asked:
“Do you feel that at present your productivity is being affected by the quality of your work
environment and if so to what extent?”
Using a five point scale where the points were described as follows:
1. Much higher than normal
2. Slightly higher than normal
3. Normal
4. Slightly lower than normal
5. Much lower than normal
2.1.1.4 Assessments may be done by the self, peers, or supervisors
Subjective performance and productivity do not have to be self-assessed — though as the above
examples demonstrate, it is common. Evaluations may also be provided by peers, or supervisors
(Goffin & Gellatly, 2001; Murphy, 2008b; Viswesvaran, 2002). This may reduce self-serving bias,
or provide alternative perspectives (Goffin & Gellatly, 2001; Murphy, 2008a).
2.1.2 The accuracy of subjective evaluations of productivity
There are, as has been described, a variety of surveys using measures of perceived productivity, or
the perceived effect of the environment on productivity. However, it must be noted that they are not
measures of “actual” productivity — and the assumption that subjective estimates can provide a
reliable estimate of actual productivity is questionable (Murphy, 2008a).
2.1.2.1 Measures of perceived productivity are widely used and may have some relation to actual productivity
There are a number of arguments in favour of the use of subjective productivity measures. As
Humphreys & Nicol (2007) discuss, it is plausible that people have at least some idea of how
productive they are, and that they will be aware of the effect of environmental conditions on their
ability to work. As they argue, people do regularly adjust their environment to improve comfort,
reduce distraction and irritation, and enhance their ability to perform their work. On such grounds, it
might be argued that people can roughly estimate such productivity effects.
There is also some evidence that subjective assessments have some relation to objective
measurements. Some studies have, for instance, found statistically significant (though low)
correlations between various subjective measures of performance and objective measures (Bommer
et al., 1995; Humphreys & Nicol, 2007; Oseland, 1999). Some laboratory studies that measured
both perceived performance and actual performance found significant effects of environmental
factors on both the objective performance measures and the subjective ones, suggesting that
subjective measures may be viable proxies (e.g. Kaczmarczyk et al. 2004; Wyon, 2004).
Importantly, the relationships of various factors to perceived productivity are consistent with those
identified using objective measures (Humphreys & Nicol, 2007). Surveys of building occupants
have consistently identified a strong relationship between satisfaction with environmental
conditions and perceptions of the effect of the environment on their productivity (Frontczak, 2011;
Humphreys & Nicol, 2007; Leaman & Bordass, 2006; Smith & Orfield, 2007). When people are
dissatisfied with environmental conditions, they tend to report negative effects on their productivity.
Such a relationship is supported by the many studies that have found links between environmental
conditions and objective performance measures (e.g. Heschong Mahone Group, 2003; Satish et al.,
2012; Seppänen et al., 2006a; Seppänen et al., 2006b; Szalma & Hancock, 2011; Wargocki et al.,
2000). Thus, the suggestion that the relationship between perceived productivity and environmental
satisfaction is a reflection of a real link between environmental conditions and productivity is
sound.
It may also be argued that the perception of productivity may in itself influence actual productivity
(Raw et al., 1990) — an argument supported by studies suggesting that optimism and positive self-efficacy can benefit performance (Alba & Hutchinson, 2000). Thus, people may not just perceive
themselves to be more productive because they are more productive, but also be more productive
because they perceive themselves to be more productive.
Overall, the evidence suggests that perceived productivity is, in some way, reflective of “actual”
productivity, and that it may be a reasonable indicator.
2.1.2.2 There is some evidence that the CBE survey at least may provide a reasonable estimation of simple performance effects
There are a limited number of studies comparing subjective and objective performance measures.
However, Frontczak (2011), in an analysis of CBE survey results, argued that the predictions made
using questions about the perceived effect of the environment on productivity are in line with those
made using objective measures of task performance. Specifically, they argued that:
1) A 15 percentage point (1 point on a 7 point scale) change in satisfaction with air quality was
associated with a ~0.8 percentage point change in the perceived effect of the environment on
productivity. Studies into the effects of indoor air quality on simple office tasks have found
that a 10 percentage point reduction in the proportion of people dissatisfied with the air quality
was associated with ~1% better performance (Wargocki et al., 1999, 2000).
2) A 15 percentage point change in satisfaction with temperature was associated with a 1
percentage point change in the perceived effect of the environment on productivity. Lan et al.
(2011) found that a similar change in thermal sensation corresponded to roughly a 0.8%
reduction in performance.
3) A 15 percentage point change in overall workplace satisfaction was associated with a ~3.7
percentage point change in the perceived effect of the environment on productivity. Clausen
& Wyon (2008) improved environmental conditions in an experiment, reducing the
proportion of people dissatisfied with the environment by about 40 percentage points, and
improving performance by about 7% (though it should be noted that there is significant
variability in the results).
4) Frontczak (2011) also reported that a Japanese study found that a 10 percentage
point increase in environmental satisfaction was associated with a 3 percentage point
improvement in performance on a multiplication task.
This argument suggests that questions about the perceived effect of environmental conditions on
people’s productivity can in fact provide reasonably good estimates — at least as long as sample
sizes are large enough to average out error.
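The scale arithmetic behind comparisons like 1) above can be sketched in a few lines. This is illustrative only: the slopes are taken from the figures quoted above, and the variable names are our own.

```python
# Sketch of the scale conversion used in the Frontczak (2011) argument.
# On a 7-point satisfaction scale, one scale point spans 1/6 of the
# range, i.e. roughly 15 percentage points of "satisfaction".

# Survey side: ~0.8 percentage points of perceived productivity per
# 15 percentage points of air-quality satisfaction.
survey_slope = 0.8 / 15

# Laboratory side (Wargocki et al., 1999, 2000): ~1% better task
# performance per 10 percentage points fewer people dissatisfied.
lab_slope = 1.0 / 10

# For, say, a 30-point improvement in satisfaction, the two lines of
# evidence predict effects of a broadly similar order:
survey_estimate = survey_slope * 30  # 1.6 percentage points
lab_estimate = lab_slope * 30        # 3.0 percentage points
print(f"survey-based: {survey_estimate:.1f}%, lab-based: {lab_estimate:.1f}%")
```

Even in this idealised form the two estimates differ by roughly a factor of two, consistent with the caution below about treating such numbers as precise.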
It should be noted, however, that this argument assumes that the effects on the simple tasks
performed in the various experiments are roughly equivalent to the effects on the more complicated
work carried out in the real world. This is highly questionable: for
example, if someone is 5% slower at typing, does it mean that they are going to be 5% less
productive in general (which is what the productivity question in the occupant survey is nominally
about)? How much would their ability to write a report, or give advice, be affected? It is worth
noting that studies that do use multiple tests (e.g. Wargocki et al., 1999) tend to find significant
variation in effects on the different tests. Moreover, in real world situations environmental
conditions may also affect people’s behaviour in ways that may not be apparent in short laboratory
situations (Boyce et al., 2006a).
It is also, perhaps, worth noting that if one were to use the BUS survey, which uses essentially the
same productivity question but with a larger scale, to make similar predictions then one would get
results that suggest significantly larger effects — instead suggesting a productivity increase of about
5.9 percentage points for every point in overall comfort.4 This kind of distortion, wherein the size of
the scale changes the results, is a known issue in questionnaire design (Schwarz et al., 1985).
Overall, while this evidence does support the validity of the subjective ratings as an indication of
productivity effects, the questions around it suggest that any numerical estimates of effects should
be treated with great caution.
2.1.2.3 People are generally poor at assessing their performance
The evidence discussed so far may be considered cautiously optimistic about the validity of
perceived productivity ratings. However, researchers have also raised a number of questions about
people’s assessment abilities.
The usage of subjective ratings of productivity is based upon an assumption — specifically, that
people can judge their performance reasonably well. It is occasionally claimed that, for instance,
“individuals have a good sense of their own productivity” (Zelenski et al., 2008) or that “people are
good judges of their own abilities and are quite capable of describing their own productivity”
(Kemppila & Lonnqvist, 2003).
Such claims are, however, highly questionable. Studies of students, for example, have found that
“people are generally inaccurate in predicting their performance” (Hacker et al., 2000). The worse
performing students tend to significantly overestimate their performance and the best performing
students tend to underestimate (Kennedy et al., 2002; Kruger & Dunning, 1999; Ryvkin et al., 2012;
Sharma, 2002).
Indeed, it is well established that comparisons to others are prone to significant bias. The majority
of people consider themselves to be above average in intelligence, ethics, logical ability, appearance
and more (Kennedy et al., 2002). This “Lake Wobegon effect” has been observed repeatedly all
around the world (Kennedy et al., 2002).
“When rating themselves vis-a-vis their peers, 70% rated themselves as above average
in leadership ability whereas only 2% judged themselves as below average. When
considering athletic ability, 60% considered themselves above the median and only 6%
below. When asked to judge their ability to get along with others, all students rated
themselves as at least average, 60% placed themselves in the top 10%, and 25% placed
themselves in the top first percentile.” (Dunning, Meyerowitz, & Holzberg, 1989)
It should also be noted that below-average effects exist for tasks that people generally aren’t very
good at (Kruger & Dunning, 1999). Indeed, overall, it seems that people are just not very good at
comparing themselves to others — possibly because they do not know how well others are
performing. Studies have found that comparisons people make are very strongly related to people’s
ratings of their own performance, but that they have almost no relation to perceptions of other
people’s performance (Kruger & Dunning, 1999). Hence, when people are bad at something, they
tend to assume that they are below average, while when they are good at something, they tend to
assume that they are above average — and they ignore the fact that the task may, for instance, be
fairly easy and that most other people are also good at it (Burson et al., 2006).
4 Based on analysis of occupant responses in 15 New Zealand buildings.
Estimates of absolute performance are not necessarily better, however. They too tend to be
inaccurate (Burson et al., 2006), and studies have found that people tend to overestimate their
performance on hard tasks, and underestimate it on easy tasks (Burson et al., 2006; Moore & Cain,
2007). That being said, there is at least some indication that people can sometimes judge their
performance reasonably accurately. Hacker et al. (2000), for example, found that students could
give reasonably accurate judgements of their performance after they had taken an exam. Overall,
however, the literature suggests that accuracy cannot be relied upon.
Indeed, it may be worth noting that studies have indicated that bias may be greater when the subject
of evaluation is more ambiguous, as people can then select criteria that work best for them
(Dunning et al., 1989). Arguably, given the difficulty in defining “productivity” in the office, it is
fairly ambiguous.
The possibility of self-serving biases is shown by the fact that self-evaluations of performance tend
to be significantly higher than supervisory or peer evaluations (Kline & Sulsky, 2009). Similarly,
self-assessments of behaviours such as absenteeism (Johns, 1994a) and counterproductive
behaviour (Mann et al., 2012) also tend to be more positive than peer ratings. This does not
necessarily mean that peer ratings are more accurate. Indeed, peer ratings are also known to be
unreliable, as different people have different perspectives (Kline & Sulsky, 2009). A meta-analysis
of supervisory ratings by Viswesvaran et al. (1996) found a mean inter-rater reliability of 0.52,
suggesting that they cannot be considered an accurate measurement of performance — although
they may still be loosely indicative.
2.1.2.4 Subjective ratings may be misleading
Research has also suggested that people’s judgements may be distorted in ways that may be highly
misleading. A study of group brainstorming strategies found that when people in the group were
anonymous (communicating via computer), and were given critical comments, they produced
significantly more solutions and comments than in groups where people could identify each other
and got supportive comments (Connolly et al., 1990). However, the ratings of perceived
performance and satisfaction indicated the opposite — people in anonymous groups getting critical
comments thought they did worse (Connolly et al., 1990). Another study found that more frequent
interactions between group members clearly improved the performance of the group, but that people
found it harder to concentrate, and thought they did a less thorough job (Jessup & Connolly, 1993).
Similarly, it has also been found that, when brainstorming, people generate more ideas if they work
individually, but that people believe they generate more ideas when they work in groups (Paulus et
al., 1993). Such findings suggest that ratings of perceived productivity or performance can
sometimes be highly misleading. While this may not necessarily be a major issue for the assessment
of environmental effects, as there is general agreement between objective and subjective measures
about the effects of the environment on people’s performance and productivity, it does still raise
concerns about the accuracy of people’s perceptions, and the accuracy of any attempts to provide
percentage estimates of effects.
Clausen & Wyon (2008) examined the combined effects of various environmental factors on
people, measuring performance on simple proofreading and addition tasks and also asking the
subjects to evaluate their own performance. Changes made to improve environmental conditions
reduced dissatisfaction, and significantly improved subjective performance, with improvements of
20 to 25%. The effects on objectively measured performance however were much lower, ranging
from -1 to +7%. This suggests that subjective estimations may greatly overestimate performance
benefits. Indeed, it is particularly worrying that in one group, the mean objective change was -1%,
while the subjective estimates indicated large gains — again suggesting they could be misleading.
That being said, however, it is possible that people’s feelings may be more important over the long
term. Laboratory studies into the effects of lighting on people’s performance have for years had
great difficulty in identifying any real effects (Boyce et al., 2006a). It has been suggested that it
may be because people are quite capable of working effectively for short periods of time in adverse
conditions — but that in real life their dissatisfaction would have negative behavioural effects that
could reduce productivity more (Boyce et al., 2006a). Using this argument, it is possible that
people’s subjective assessments may actually be more reflective of the real impacts than the results
on simple tests would suggest. However, this has not been proven, and is largely hypothetical.
2.1.2.5 Correlations between subjective and objective productivity measures are low
There are a limited number of studies comparing subjective and objective performance measures.
Those that do exist generally suggest weak relationships between them. Bommer et al. (1995)
carried out a meta-analysis of performance studies, and found a corrected correlation of 0.389
between subjective supervisory ratings and objective performance measures. An earlier meta-analysis by Heneman (1986) had similar results, reporting a mean corrected correlation of 0.27.
This suggests that subjective ratings should not be treated as the same as objective measures —
possibly because they are often not measuring exactly the same items — but that they may still be
viable indicators (Bommer et al., 1995).
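The “corrected” correlations quoted from these meta-analyses are typically produced with Spearman’s correction for attenuation, which divides the observed correlation by the square root of the product of the two measures’ reliabilities. In the minimal sketch below, the observed correlation of 0.28 and the objective-measure reliability of 0.9 are invented for illustration; only the 0.52 inter-rater reliability comes from the text.

```python
import math

def correct_for_attenuation(r_observed: float, rel_x: float, rel_y: float) -> float:
    """Spearman's correction: estimate the correlation between two
    underlying constructs after removing the attenuation caused by
    unreliable measurement of each."""
    return r_observed / math.sqrt(rel_x * rel_y)

# Illustrative values: an observed correlation of 0.28 between supervisory
# ratings (inter-rater reliability 0.52, per Viswesvaran et al., 1996)
# and an objective measure with an assumed reliability of 0.9.
r_corrected = correct_for_attenuation(0.28, 0.52, 0.9)
print(round(r_corrected, 2))  # ~0.41
```

The correction illustrates why corrected correlations run higher than raw ones: part of the observed disagreement is attributed to measurement noise rather than to a genuine difference between the constructs.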
2.1.3 Comparisons of the different measures
2.1.3.1 The type of question
There are, as previously described, several different forms that questions about productivity can
take. They could involve directly asking someone to rate their productivity, they could involve a
comparison to other workers, or they could be about the effect of other factors, such as the
environment, on productivity.
Firstly, the literature suggests that asking people to compare themselves to others has many
problems, with significant biases causing both under- and overestimates
(Burson et al., 2006). Research suggests that when making comparisons people do not really
account for the abilities of others, and so they do not actually make any real comparison to other
people — they just estimate their own performance and then assume that if they are good, then they
are above average, and if they are bad then they are below average (Kruger, 1999). Thus, there
would seem to be little point in asking for comparisons.
A comparison of the direct approach, rating one’s own productivity, with the approach of rating the
effect of the environment on productivity comes out very much in favour of the environmental effect
approach. Much of the research discussed above, looking at how accurately people can assess their
performance, used the direct approach. Thus, the various studies that paint a negative picture of
subjective performance ratings, such as Clausen & Wyon (2008), make the direct approach look
fairly questionable. The environmental effect approach used in the BUS and CBE surveys, however,
has virtually no studies that we are aware of looking at its accuracy. The only examination of the
issue is in Frontczak (2011), who argues that the predictions about the effects of environmental
satisfaction on productivity using the CBE survey are comparable to those found in studies of
simple task performance — and as previously discussed, this argument is limited by the fact that
they are not actually measuring the same factors.
The idea that asking people about the effect of the environment on their productivity may be better
than asking them to rate their productivity directly is also loosely supported by theory. One possible
source of bias is ego — people like to have a positive self-image, and it is common for them to
externalise poor performance, blaming it on something or someone else (Hacker et al., 2000;
Ryvkin et al., 2012). A question about the effects of the environment could allow people to
externalise any problems, and so could conceivably reduce bias in responses.
Another possible explanation for people’s poor judgement is poor metacognition — essentially, the
skills needed to critically examine one’s performance are the same as those needed to perform well
(Kruger & Dunning, 1999). A potential advantage of the environmental effect approach is that it
avoids the need to critically evaluate one’s work. People just need to be able to tell if the
environment is affecting them — if it is making them feel good or bad, if it is distracting them, or
making it hard to concentrate.
One potential issue with the question is the uncertainty in reference points (Leaman & Bordass,
1999). If people are asked what effect the environment has on their productivity, what do they
compare the building to? Indeed, it may be argued that most occupants have no way of readily
assessing the effect of the environment on their performance since they should have been in their
current building for at least a year, and it may be difficult for them to compare the building they are
currently in to others they were in years ago. It is difficult enough to estimate productivity in the
present time without also having to try and remember what it was years ago.
The issue of poorly defined reference points is not, however, only a problem for the environmental
effect approach. That ambiguously defined scales may be interpreted differently by different people
is a known issue in questionnaire design (Marincic, 2011). Different people may have very different
ideas about what a “satisfactory” level of productivity is, what the performance of a “top worker”
should be, and what the standards that they are comparing themselves to should be. Indeed, research
has suggested that when the criteria for evaluation are ambiguous, people tend to select criteria that
make them look good (Dunning et al., 1989; Story & Dunning, 1998).
In fact, for any question asking people to assess “productivity” we cannot be sure exactly what it is
that they are assessing — something which is also a feature, as it is that vagueness that enables the
same general question about productivity to be given to people in different jobs.
Individuals’ productivity may be highly variable, and may be significantly influenced by other
factors such as individual capabilities, and management. In theory, asking about the effects of the
environmental conditions on people’s productivity could help to control for these factors, focussing
down on the effects of the environment, and more reliably identifying differences between
buildings. Indeed, strong correlations have been found between environmental satisfaction and the
perceived effects of the environment on people’s productivity reported by surveys (Leaman &
Bordass, 2006), indicating that they may be able to identify effects reasonably reliably. That being
said, it should be noted that there is still significant variability in the reported effects of buildings,
and results should be interpreted carefully. As the graph below shows (Figure 1), while the overall
trend is that more comfortable buildings are perceived as being better for people’s productivity,
there are a number of examples wherein buildings which are more comfortable are reporting worse
effects on productivity than other buildings that are less comfortable. This raises questions around
the accuracy of the results, as well as how reliably the ratings can identify small effects. Indeed, the
precision of the measurements is ultimately limited by the scale used. On a 7 or 9 point scale, the
smallest difference that can be reported by someone is 1 point. Thus, if the actual difference is
significantly smaller than this then it would be difficult to detect.
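The quantisation limit described here can be illustrated with a small simulation. This is a hypothetical sketch, not an analysis of real survey data: the noise level and the true effect are invented, and only the response scale matches the BUS instrument.

```python
import random

random.seed(1)  # deterministic for illustration

# BUS-style 9-point response scale, -40% to +40% in 10-point steps.
SCALE = [-40, -30, -20, -10, 0, 10, 20, 30, 40]

def rate(true_effect: float, noise_sd: float = 8.0) -> int:
    """A respondent feels the true effect plus individual noise, then
    snaps their answer to the nearest available scale point."""
    felt = random.gauss(true_effect, noise_sd)
    return min(SCALE, key=lambda p: abs(p - felt))

# A true effect of +3% is smaller than the 10-point step, so the single
# most common individual response is 0%; only the mean over a large
# sample can recover something near the true effect.
responses = [rate(3.0) for _ in range(5000)]
mean_response = sum(responses) / len(responses)
print(round(mean_response, 1))
```

With a small sample, by contrast, the rounding noise can easily swamp an effect smaller than one scale step, which is the practical detection limit the text describes.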
Finally, when analysing the results it is also important to account for the considerable variation
between individuals’ responses. While a survey may report that on average productivity is improved
by a building, closer examination may reveal that, say, a third of the occupants were actually
reporting negative effects. This would indicate a need to improve parts of the building, as well as
reminding us that the mean productivity effect of the building may not be true for everyone.
[Figure 1 is a scatter plot of average comfort score (1 = unsatisfactory to 7 = satisfactory) against
the perceived effect on productivity (−40% to +40%), with a fitted trend (R² = 0.75). An annotation
on the chart notes that a comparison within the sample shows the opposite trend: the more
comfortable buildings reported less positive effects on productivity.]
Figure 1: Average comfort vs. perceived effect of the environmental conditions on productivity in 15 NZ office
buildings using BUS survey data.
Overall, the environmental effect approach to subjective productivity ratings would seem to be
preferable, with the literature being less negative about it than the other approaches — and indeed,
there is even some suggestion that it can provide reasonable estimates, though the case here is
questionable. It should be noted though that this apparent superiority may simply be because of a
lack of research directly assessing the accuracy of people’s judgements of the effect of the
environment on their performance.
Indeed, the lack of validation of measures of the perceived effect of environmental conditions on
productivity is a matter of some concern. Some would argue that the question does not, in fact,
measure productivity — but rather that it is just another aspect of environmental satisfaction. In this
argument, people do not really evaluate the effect on their productivity (which would be difficult),
and instead just assume that if they like the environment then it positively affects them, and if they
don’t like it that it is bad for them. This may not necessarily be wrong — the literature does
generally suggest that people work better in more comfortable conditions, and it is logical that if
conditions are making people uncomfortable and stopping them from concentrating then they would
be less productive. However, it does raise some issues: firstly, that there is no research that we are
aware of that has determined what, precisely, people are assessing when they are asked to estimate
the effects of the environment on their productivity; secondly, that there is a need to examine the
validity and accuracy of people’s estimates of the effect of the environment on their productivity or
performance — one possibility could be to have people do simple tests in different conditions and
be asked what they thought the effect of the environment was on their test performance.
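Such a validation exercise could be scored along these lines (a purely hypothetical sketch: the participants, scores, and classification thresholds are all invented for illustration), comparing the measured change in test performance between conditions against each person's own judgement of the environment's effect:

```python
def measured_change(score_a, score_b):
    """Percentage change in test score from condition A to condition B."""
    return 100.0 * (score_b - score_a) / score_a

def classify(change_pct):
    """Map a measured change onto a coarse ordinal category.

    The 2% threshold is arbitrary, standing in for whatever margin
    separates real change from test-retest noise.
    """
    if change_pct > 2.0:
        return "better"
    if change_pct < -2.0:
        return "worse"
    return "no change"

# Hypothetical participants: (score in condition A, score in condition B,
# their own judgement of the environment's effect on their performance).
participants = [
    (50, 55, "better"),     # accurate judgement: scored 10% higher
    (60, 59, "no change"),  # accurate: change within noise
    (40, 38, "better"),     # misjudged: scored worse but felt better
]

agreement = sum(
    1 for a, b, judged in participants
    if classify(measured_change(a, b)) == judged
)
print(agreement, "of", len(participants), "judgements matched")
```

The agreement rate (here 2 of 3) would be the quantity of interest: how often people's estimates of the environment's effect line up with its measured effect.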
2.1.3.2 The influence of the recall period
People may be asked to recall their performance over different periods — for instance, they may be
asked to recall their performance over the past week, or the past year.
The recall period used may affect the accuracy of people’s estimates. Shorter recall periods
encourage enumeration strategies, making it easier to count up specific instances of the behaviour in
question (Belli et al., 2000). For example, Stewart et al. (2004) compared self-reported absenteeism
and productivity losses over 1 and 4 week periods, and suggested that 2 weeks may be the optimal
recall period and that longer periods may lead to underestimation.
The problem with short recall periods, however, is that they lack generalizability. People’s
productivity varies significantly from day to day, and from week to week (Fisher, 2008). Studies
have suggested that up to 77% of the variation in worker performance may be within-person due to
the effects of variables such as fatigue, motivation, and task complexity (Fisher, 2008). Related
factors such as absenteeism are known to vary significantly according to season (Léonard et al.,
1990). For these reasons, measurements of productivity over a short period may not be
representative of the long term. This would make it difficult to make reliable comparisons between
old and new buildings, as any differences could be due to, or masked by, natural variation.
Thus, a question that looks at the long term or general performance would seem to be the best
option, even if it will be a very rough estimate (unless one is prepared to repeatedly survey people
week after week).
2.1.3.3 The source of the ratings
A worker’s productivity may be self-assessed, or it may be rated by others, such as peers or
supervisors.
The first thing to note is that ratings from different sources tend to differ due to different
perspectives (Murphy, 2008a). Self-ratings of performance, and factors such as absenteeism,
demonstrate significant bias, with self-ratings being more favourable (Johns, 1994b; Murphy,
2008a).
Supervisory and peer ratings, however, are not necessarily more reliable. As previously noted,
raters do not necessarily agree, resulting in ratings having limited reliability (Viswesvaran et al.,
1996). Differences may be because different raters are involved with ratees in different roles, or
they may be because different raters have different standards. Ultimately though, it makes it
difficult to be confident in the accuracy of the ratings. Indeed, supervisory and peer ratings may also
be prone to bias. A study of absenteeism found that subjects rated their work-group peers as
being less absent than the occupational norm — though the bias here was much smaller than in their
self-estimates (Johns, 1994a). Managers’ estimates of average absenteeism were also much
lower than the actual absenteeism, and they too have been found to hold self-serving biases about
their workgroups (Johns & Xie, 1998).
One possible way of addressing the problems of varying perspectives is to use multiple sources —
which can provide more reliable results (Kline & Sulsky, 2009). This may, however, pose other
problems. One issue is that having everyone rate themselves and each other could take up
significant amounts of time (Kline & Sulsky, 2009). There can also be issues around trust.
Performance ratings are sensitive topics, and people may be worried about negative effects on
interpersonal relations, or they may collude in quid-pro-quo arrangements, which could distort
results (Brutus & Derayeh, 2002). It may be important to emphasise that the survey’s focus is on the
effects of the building, and not on judging individuals’ abilities.
It should also be noted that ratings from others cannot be used when the productivity question is one
about the effects of the environment on one’s productivity — a form which may in itself help to
reduce the bias present in self-ratings. After all, how can someone else know how the environment
affects you?
2.1.3.4 Comparison of numerical and ordinal scales
As noted, the productivity rating scales may be numerical, with points defined by percentage
values, or they may be ordinal scales with more vaguely described points.
        Ordinal             Numerical
Pro     Honestly vague.     Gives apparently precise values that are greatly desired.
Con     Vague, unclear.     Precision unwarranted, illusory, and misleading.

Table 1: Summary of the pros and cons of ordinal and numerical scales
The difference between the ordinal and numerical scales is one of vagueness against precision, with
the complication that the accuracy of the measures is unknown.
Because scales of perceived productivity have not been validated against objective measurements of
productivity (due to lack of objective measurements) it cannot honestly be said that, for example, a
10% change in perceived productivity is actually a 10% change in productivity. Furthermore, it may
be argued that all that can really be said is that there is an indication of some “small” (assuming
10% is low on the scale) change in productivity. From this point of view, an ordinal scale that says
as much would honestly describe the state of our knowledge, while a numerical scale saying 10%
would give the misleading impression that we can accurately quantify the effects on productivity.
However, the ordinal scale also has problems. Aside from the fact that it doesn’t tell us what we
really want to know (the quantified effect on productivity), its vagueness is a flaw as well as a
virtue. The problem is that the scale points are very ill-defined and may be interpreted differently by
different people. What is a “small” effect on productivity? Different people can be expected to have
different ideas (Marincic, 2011). It may also mean that the scale points are not necessarily equally
spaced — is the difference between “not at all” and “a bit” the same as that between “a bit” and “a
lot”? This could change the interpretation and analysis of the data (Marincic, 2011). From here
some people may argue that the numerical scale is superior — even if it is inaccurate — because the
numerical scale provides more clearly defined anchor points that should be interpreted the same by
everyone. We might not know if 10% of perceived productivity is the same as 10% of actual
productivity — but at least everyone knows that it’s 10%.
The counter to this argument is that people don’t have any way of estimating “10%” of their
productivity anyway. The reason we have to ask people about their productivity is that we
cannot measure or clearly define it numerically. Therefore it can be argued that there is no reason to believe that anyone
(in a job that is not easily quantifiable) has any idea what 10% of their productivity is, and they
have no way of estimating it. Indeed, as discussed in the Introduction (1.4), many aspects of
productivity for office workers are not really countable (Murphy, 2008a). On such grounds, it may
be suggested that the numerical measures of perceived productivity are meaningless. As an
example, consider two graded reports. One is graded an “A-”, the other a “C”. The “A-” is
clearly significantly better, but by how much? Convention at Victoria University (based on its point
system) is that a “C” is given for a score of 50/100, while an “A-” requires 75/100. However, to claim
that, say, the “A-” report is 50% better than the “C” one would be very odd. The same applies to
much knowledge-based work.
Ultimately, both types of scale just provide rough indications of effects, and the percentage
estimates of productivity effects given by them should not be taken as being much more accurate
than describing the effects as “small” or “large”. Indeed, running both kinds of scale side-by-side
could be a useful test to examine whether or not the different scales are actually different.
2.1.4 Summary
To summarise:
- Perceived productivity ratings are widely used, being relatively simple, quick, and cheap.
- There is evidence that perceived productivity may reflect actual productivity.
o It is reasonable to suggest that people have some idea of how the environment affects
their productivity.
o The general relationships between perceived productivity and environmental factors
are corroborated by laboratory studies using objective performance measures.
o Perceived productivity may itself influence actual productivity.
o Estimates of productivity effects made with CBE survey data were in line with
simple task performance effects identified in controlled laboratory studies — though
it should be noted that they are not measuring the same factors, even if they are
related.
- The accuracy of people’s judgements of performance, however, is highly questionable.
o People are poor at comparing themselves to others.
 “Above-average” and “below-average” effects, wherein people demonstrate
systematic bias in their self-assessments, are well documented.
 People tend not to account for other’s abilities when they are supposed to be
comparing themselves to others.
o People tend to overestimate performance on hard tasks, and underestimate on easy.
o Self-evaluations tend to be significantly higher than supervisory or peer evaluations,
suggesting bias.
o Supervisor and peer ratings also have limited reliability, as different people have different perspectives.
o Perceptions of perceived productivity can be readily distorted by behaviour and feedback that may in fact have the opposite effect on actual performance, making perceived productivity ratings potentially misleading.
o Correlations between subjective and objective performance measures are low.
o Some evidence suggests that subjective ratings may significantly exaggerate performance effects — and may make results misleading.
- Asking people about the effect of the environment on their productivity may be the best approach for acquiring productivity ratings.
o People are very poor at comparing themselves to others, and demonstrate significant biases.
o There is substantial evidence that people are poor at assessing performance, and are prone to significant biases and distortions that make subjective assessments inaccurate and potentially misleading.
o There is some evidence that questions about the effects of the environment can provide results in keeping with those found using objective performance tests. Asking about the effects of the environment on people’s productivity may also reduce bias.
o Strong correlations with environmental comfort suggest they can reasonably reliably identify effects of the office environment on people — though it may be argued that this could be because it is really just another aspect of environmental satisfaction.
- There is uncertainty around the accuracy of ratings, with significant variability in the relationship between environmental conditions and productivity, in buildings’ results, and in individuals’ responses.
- Long term recall periods are preferable to short ones — while long term ones may be less accurate and more prone to bias, questions about the short term are not generalizable, and are thus of little use.
- Self-assessment is not necessarily worse than supervisory or peer assessment.
o Self-assessment is prone to significant positive bias.
o Supervisory and peer ratings are not, however, necessarily more reliable.
o Questions about the effects of the environment on one’s productivity have to be self-assessed.
- Both ordinal and numerical scales are viable. Both merely give rough indications of the presence and magnitude of productivity effects.
Overall, this leads to three key conclusions about the use of perceived productivity ratings:
1) That perceived productivity ratings do, as Murphy (2008b) discussed, reflect something
about productivity. However, they also reflect something about people’s perceptions and
values, something about the social context, and, indeed, something about the rating scales
and questions used. They cannot be considered to be a simple, or accurate, measure of
productivity. They may, however, provide an indication.
2) That asking people about the perceived effect of environmental conditions on their
productivity in general — the approach used by occupant surveys such as those of the BUS
and CBE — may be the best way to acquire subjective productivity ratings (assuming that
the subject of interest is the effect of environmental conditions on productivity). At the very
least, there appears to be no strong argument in favour of alternative approaches. It may,
however, be worth trying to determine what exactly people are actually assessing when they
respond to the question as this is unclear.
3) That results should be analysed and interpreted carefully. There is significant uncertainty,
and small effects may not be reliable. Detailed examination may reveal problems with the
building that should be addressed, even if a building performs well on average.
2.2 Objective performance tests
There are also a number of objective performance measures that can be used. They are limited,
however, by the fact that they can only assess particular aspects of performance, and cannot readily be
converted into measures of overall performance or economic estimates.
2.2.1 Cognitive performance test batteries
Cognitive performance tests are commonly used in laboratory studies in order to objectively
measure people’s performance and cognitive abilities. There are a wide range of them, measuring
many different aspects of performance, including:
- Simple office work (typing, arithmetic) (e.g. Boray et al., 1989; Lan et al., 2011; Newsham et al., 2004; Veitch et al., 1991; Wargocki et al., 2000)
- Short-term memory (e.g. Heschong Mahone Group, 2003; Knez & Enmarker, 1998; Wang & Boubekri, 2010)
- Long-term memory (e.g. Knez, 1995)
- Creativity (e.g. Dow, 2003; Goncalo, Flynn, & Kim, 2010; Isen et al., 1987)
- Motivation (e.g. Boyce et al., 2006b)
- Problem solving (e.g. Isen et al., 1987; Knez, 1995; Smith & Broadbent, 1980)
- Speed of information processing (e.g. Lehrl et al., 2007)
As each test only assesses one facet of overall performance, batteries of multiple tests are
commonly used to get a better picture of how people perform, and how the environment may be
affecting them (e.g. Knez, 1995; Newsham et al., 2004).
As the tests measure activities or skills that may be used in office work, performance on them may
be logically connected to productivity. Better working memory, for instance, would be expected to
benefit all work that draws on it.
Such tests have been used in many studies to identify effects of different environmental conditions
on performance (e.g. Boyce et al., 2003; Ljungberg & Neely, 2007; Seppänen et al., 2006; Wyon,
2004).
Their exact relationship to productivity (whatever it is) is less clear, however. The effect of, for
instance, having better working memory would depend on the job someone was doing — and it
would be incorrect to conclude that just because someone got a 10% higher score on a particular
test, that it means that they would be 10% better at all their work. Moreover, the relationships
between the test performance and “actual” productivity cannot be established without being able to
measure the productivity of the jobs in question — the problems with which have already been
discussed. Thus, such tests merely provide indicators of factors that should affect productivity to
some unspecified degree.
The biggest issue with them, however, is one of practicality. Many of the tests require significant
amounts of time — as much as half an hour or even longer, and even many of the smaller ones
require about 10 minutes. There are a few that take less than 5 minutes (e.g. speed of information
processing, simple arithmetic tests, and a couple of the short-term memory tests) and so may be
viable, but available time is limited. Experience suggests that people are willing to spend up to
about 15 minutes on occupant surveys (Oseland, 2007), and so if most of that time is taken up by,
say, a survey about their responses to the environment then there may not be room for testing like
this.
2.2.2 Monitoring computer activity
Another possibility is to measure productivity by monitoring computer activity, measuring
keystroke rate, mouse clicks, correction rates (use of backspace) and amount of computer usage
(Hedge & Gaygen, 2010). This has the advantage of being an objective measure, and being able to
be passively monitored without requiring occupants to spend time filling out a survey or test. Such
techniques have been used before to identify possible environmental effects on performance (Hedge
& Gaygen, 2010).
However, it also has a number of issues. One significant limitation is that it does not measure non-computer-based activities — which can be a significant part of people’s work. In addition,
comparison and analysis of results is complicated by the fact that different jobs and tasks can be
expected to affect measures of activity rates. For example, reading reports on a screen requires very
limited interaction, and would measure as “low” activity, while typing up a report requires lots of
typing and would measure as “high” activity. This means that people’s results cannot easily be
compared. However, the changes in individuals’ activity rates — such as the change from one
building to another — may be measured (at least as long as they are doing the same job). As office
workers may carry out a range of different tasks in the course of their work, their activity would
have to be monitored over a prolonged period in order to average out the fluctuations due to task
variation.
It should also be noted that typing lots of words, for example, does not necessarily mean that the
work is better — and indeed, if the use of such a metric encourages occupants to try and focus on
typing fast at the expense of factors such as quality of thought and communication, there could even
be negative effects caused by the measure.
Computer activity can be expected to relate to productivity — if people work faster and make fewer
errors they would logically be more productive. Overall, it might be a useful indicator (at
least for jobs that involve a lot of repetitive computer tasks), but one should be very careful about
the analysis and interpretation of the results.
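As a rough sketch of what such passive monitoring could compute (the event format, field names, and numbers here are invented for illustration, not taken from Hedge & Gaygen's method), activity metrics such as keystroke rate and correction rate can be derived from a log of input events and compared for the same person across periods:

```python
from collections import Counter

def activity_metrics(events, minutes):
    """Compute simple activity-rate metrics from a list of input events.

    Each event is a string such as 'key', 'backspace', or 'click'
    (an invented log format, purely for illustration).
    """
    counts = Counter(events)
    keystrokes = counts["key"] + counts["backspace"]
    return {
        "keystrokes_per_min": keystrokes / minutes,
        "clicks_per_min": counts["click"] / minutes,
        # Share of keystrokes that are corrections (backspaces).
        "correction_rate": counts["backspace"] / keystrokes if keystrokes else 0.0,
    }

# Compare the same (hypothetical) person across two periods; only the
# within-person change is meaningful, since different jobs and tasks
# produce very different baseline activity rates.
hour_old_building = ["key"] * 950 + ["backspace"] * 50 + ["click"] * 120
hour_new_building = ["key"] * 1150 + ["backspace"] * 35 + ["click"] * 110

old = activity_metrics(hour_old_building, minutes=60)
new = activity_metrics(hour_new_building, minutes=60)
print(new["keystrokes_per_min"] - old["keystrokes_per_min"])  # typing rate change
print(new["correction_rate"] - old["correction_rate"])        # error rate change
```

As the text notes, such rates would need to be averaged over prolonged periods to smooth out task-to-task variation, and a higher typing rate does not by itself mean better work.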
2.3 Health based measures
It is well established that environmental conditions can affect people’s health — one example being
sick building syndrome symptoms affected by poor indoor air quality (Fisk, 2000; Raw et al., 1996;
Sundell et al., 2011; Wargocki et al., 2002). Lighting may cause issues such as eyestrain, headaches,
and stress (Boyce et al., 2003; Çakir & Çakir, 1998; Wilkins et al., 1989). Office type, and
perceived control over the environment have also been related to health effects (Boerstra et al.,
2013; Danielsson & Bodin, 2008).
There are a number of surveys used to assess the health of workers and the impacts on productivity.
Typically, they measure absenteeism, presenteeism, and the frequency of health related issues that
could affect people’s work.
2.3.1 Absenteeism
Absenteeism may be measured objectively through organisational records, or if such data is not
available, it may be self-assessed in surveys.
“During the past seven days, how many hours did you miss from work because of your
health problems? Include hours you missed on sick days, times you went in late, left early,
etc., because of your health problems. Do not include time you missed to participate in this
study.” (Reilly Associates, 2004)
Absenteeism has a number of advantages. It can be measured through organisational records — and
if it is, it does not require people to give up time filling in surveys. It is a quantifiable measure that
can be used to provide an economic estimate of productivity effects in terms of lost work hours
(Lofland et al., 2004). It reflects a clear effect on productivity — people who are not working are
not being productive. Moreover, absenteeism data is already collected by the government.
It does, however, have limitations. As with other measures, absenteeism is only a facet of overall
productivity — it does not measure effects that may occur to people’s performance at work, just the
losses caused by people being absent. High absenteeism rates may, however, be indicative of
disengagement from work and reduced job satisfaction, which could also reduce on-the-job
productivity (Sagie, 1998). Using records of absenteeism also has the disadvantage that it may delay
the evaluation of new building performance. Absenteeism is irregular — if someone is absent for a
day one week, it does not mean that they will be absent for one day every week (Hammer &
Landau, 1981). If someone has a contagious illness, it may spread throughout the office,
significantly increasing absenteeism for a period. Absenteeism is also known to be prone to
significant seasonal variation (Léonard et al., 1990). Because of this, it would be necessary to
measure absenteeism over a prolonged period of at least a year in order to account for such
variation. This means that if a new building is to be assessed, one must first wait a year in order to
iron out initial “teething” problems, and ensure that the building runs smoothly, and then one must
wait at least another year in order to gather the absenteeism data. Indeed, periods of even longer
than a year may be desirable. Chadwick-Jones et al. (1971) reported low correlations between
different years, suggesting that people’s absenteeism may vary significantly from year to year.
The variability also causes problems for the self-assessment of absenteeism. The variability means
that any measure of absenteeism over short periods is likely to be unreliable, and unable to be
generalised. However, there is some indication that people may be more prone to bias and
underestimation when asked to report absenteeism over longer periods (Johns, 1994b; Stewart et al.,
2004). Indeed, the literature on the accuracy of self-reported absenteeism suggests that its accuracy
is questionable, with studies reporting significant bias and underestimation in responses (Johns &
Xie, 1998; Johns, 1994a, 1994b). There are, however, some studies suggesting that self-reports can
be highly accurate (e.g. Revicki et al., 1994). Overall though, the accuracy of self-reports is
inconsistent. However, they may still at least be able to provide good indications of reality, as
correlations between subjective and objective measures can be reasonably high, ranging from .30 to
.92 (Johns, 1994b). Thus, while the numbers reported may not be accurate, higher self-reports
should be indicative of higher absenteeism rates.
Another issue is that absenteeism data tends to be highly skewed and truncated by a large number of
zero values (Hammer & Landau, 1981; Steel, 2003). Many workers may have no or very few
absences, while a few workers may have many (Steel, 2003). This can cause several problems:
1) It reduces the value of correlations that may be used to examine relationships between
absenteeism and other factors (Hammer & Landau, 1981). More complex statistical tests
may need to be used.
2) It can reduce statistical power, making it harder to identify differences (Hammer & Landau,
1981).
3) It can interfere with the ability to use simple statistical analysis techniques such as
regression models to analyse the data (Hammer & Landau, 1981).
To help reduce skewness, it has been suggested that it may be desirable to record absenteeism over
longer periods — up to 3 years or more (Steel, 2003). Larger sample sizes should also help to
smooth out the data (Steel, 2003).
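The skew described above is easy to reproduce (the proportions and distribution here are purely illustrative, not drawn from the cited studies): simulating a workforce where most people record no absences and a few record many yields a distribution whose mean sits well above its median, which is exactly what undermines simple correlational analysis.

```python
import random
import statistics

random.seed(7)

# Zero-inflated sketch: roughly 70% of workers record no absence days
# in the period; the rest draw from a long-tailed distribution.
days_absent = []
for _ in range(500):
    if random.random() < 0.7:
        days_absent.append(0)
    else:
        days_absent.append(min(30, int(random.expovariate(1 / 6)) + 1))

mean = statistics.mean(days_absent)
median = statistics.median(days_absent)
print(median)        # typically 0: most workers have no recorded absence
print(mean > median) # the long tail drags the mean upward
```

With more than half the sample at zero, the median is uninformative and the mean is driven by a minority of heavy absentees, which is why the text points to larger samples, longer periods, and more robust statistical models.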
Questions also surround the choice of absenteeism indices. Absenteeism can be measured in
different ways — popular indices are time-lost (measuring the number of hours or days lost) and
frequency (the number of absences in a time period, ignoring duration) (Steel, 2003). These different
measures can highlight different aspects of absenteeism. Time-lost measures, for instance, grant full weight
to long absences, and are held to be more reflective of issues such as sickness (Chadwick-Jones et al.,
1971). They hold the appeal of providing a measure of the amount of productive time lost.
However, they may be heavily distorted by incidents of serious illness that result in someone
being absent for weeks. Frequency measures, on the other hand, can avoid such distortions, and by
discounting longer absences that may be caused by sickness, may be more reflective of voluntary
absences related to motivational issues (Chadwick-Jones et al., 1971). Chadwick-Jones et al. (1971)
suggested that frequency measures may be more reliable and stable than time-lost measures. Steel
(2003) argued that their stability was about the same. Using multiple absenteeism indices may
therefore provide a better picture of the data.
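The difference between the two indices can be made concrete (the absence records below are hypothetical): each worker's history is a list of absence spells, with time-lost summing the days and frequency counting the spells.

```python
# Hypothetical absence records: each list holds the lengths (in days)
# of one worker's separate absence spells over the measurement period.
spells = {
    "worker_a": [1, 1, 2],   # three short absences
    "worker_b": [15],        # one long illness
    "worker_c": [],          # never absent
}

def time_lost(spell_lengths):
    """Time-lost index: total days absent (weights long spells fully)."""
    return sum(spell_lengths)

def frequency(spell_lengths):
    """Frequency index: number of separate absences, ignoring duration."""
    return len(spell_lengths)

for name, s in spells.items():
    print(name, time_lost(s), frequency(s))
```

Here worker_b dominates the time-lost index (15 days against worker_a's 4) but scores lower on frequency (1 spell against 3), illustrating how time-lost highlights sickness while frequency is more sensitive to repeated short, possibly motivational, absences.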
Nominally, objective records of absenteeism should be preferable to self-estimates, as they should
avoid the inherent bias and accuracy issues. However, there may be significant issues around the
reliability and rigour of organisational records. Folger & Belew (1985) discuss the potential issues,
and note that error depends on how, exactly, organisations record their absences. Some
organisations require workers to fill out timesheets daily, while in others managers may keep track
of absences (Folger & Belew, 1985). If record-keeping is sloppy, then many absences might not be
counted, resulting in underestimation. One of the major issues with absenteeism records is that they
are reliant on people reporting absenteeism honestly (Folger & Belew, 1985; Steel, 2003). People
may not report all of their absences, or may have friends cover for them (Folger & Belew, 1985).
Managers could bias their records by being more lenient towards certain people (Folger & Belew,
1985). Honesty is also a problem if one wants to separate out different reasons for absenteeism —
for instance, people may lie about being sick (Steel, 2003). While the concern in psychological
research about the separating out of absenteeism caused by sickness from absenteeism caused by
motivational issues is less of an issue here — as environmental conditions may affect both health
and motivation — it may be desirable to separate out external causes such as sick children or
transport issues that would not be expected to be affected by the office environment.
There may also be logistical and analytical issues depending on how the government’s absenteeism
data is stored. The data needs, at the very least, to be reportable on a building-by-building basis.
Overall, absenteeism is a potentially valuable measure, but it is also one which brings a number of
concerns about accuracy and practicality of measurement.
2.3.2 Presenteeism
The concept of presenteeism is based on workers “going to work despite illness” (Bergström et al.,
2009; Gosselin et al., 2013). This results in reduced productivity due to impairment caused by their
health condition (Brooks et al., 2010; Gosselin et al., 2013). It is a complementary measure to
absenteeism, examining the on-the-job productivity losses due to health problems. Research has
suggested that it may be the cause of even more productivity losses than absenteeism (Schultz &
Edington, 2007).
Presenteeism is much vaguer and more difficult to measure than absenteeism. Where absenteeism can be
clearly defined by people not being at work, measuring the productivity impacts of presenteeism
essentially requires the measurement of worker productivity. As such, just like other productivity
research, presenteeism is commonly assessed using subjective ratings of worker’s productivity, or
the effects of health problems on their productivity.
“During the past seven days, how much did health problems affect your productivity
while you were working?” (Reilly Associates, 2004)
Given this, the previous analysis of the issues around subjective productivity ratings applies here. It
may also be worth noting that other reviews have concluded that it is unclear what the best method
for measuring productivity and presenteeism is (Brooks et al., 2010; Prasad, Wahlqvist, Shikiar, &
Shih, 2004).
Some measures of presenteeism have, following the example of absenteeism, attempted to quantify
productivity losses in terms of unproductive hours. It is noted, however, that there is little evidence
that workers can accurately report such information (Mattke et al., 2007). Indeed, it would seem
likely that, as with absenteeism, people’s biases will lead them to significantly underestimate their
presenteeism. Additionally, as presenteeism is much less clearly defined, and it is harder for people
to be caught out, it is possible that the bias in reports could be significantly higher.
While putting productivity losses in terms of unproductive hours may appear to allow convenient
estimations of the costs involved, it should be noted that the methods for converting presenteeism
results into monetary estimates are questionable, and there is little consensus (Brooks et al., 2010;
Mattke et al., 2007; Schultz et al., 2009). The costs of presenteeism are a complicated issue, and
more than just the worker’s wages need to be considered. Overall, estimates of presenteeism costs
are no more than rough indications and should be treated with caution.
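To illustrate why such figures are rough, the common wage-based conversion (self-reported unproductive hours multiplied by an hourly wage) can be sketched as follows. The function name, wage, and working-week figures are all hypothetical assumptions, not values from the literature:

```python
# Illustrative sketch of a naive wage-based conversion of self-reported
# unproductive hours into an annual monetary estimate. All figures are
# hypothetical; as discussed in the text, such estimates ignore overheads,
# team knock-on effects, and reporting bias, and are rough at best.

HOURLY_WAGE = 30.0     # hypothetical average hourly wage
WEEKS_PER_YEAR = 46    # hypothetical working weeks per year

def presenteeism_cost(unproductive_hours_per_week, n_workers,
                      hourly_wage=HOURLY_WAGE, weeks=WEEKS_PER_YEAR):
    """Naive annual estimate: hours lost x wage x working weeks x workers."""
    return unproductive_hours_per_week * hourly_wage * weeks * n_workers

# e.g. 2 self-reported unproductive hours per week across 100 workers
estimate = presenteeism_cost(2.0, 100)
```

The sketch makes the caveat concrete: every input is a self-report or an average, so the output inherits all of their biases.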
Centre for Building Performance Research
Page 35 of 75
Measuring Productivity in the Office Workplace
The interpretation of presenteeism can also be complicated, and it should really be used to
complement absenteeism measures. As Johns (2010) points out, while presenteeism may have a
negative effect on productivity when compared to a healthy person, it may still be more productive
than absenteeism. Thus, while reduced presenteeism may look like an improvement in productivity,
if it is being replaced by an increase in absenteeism then the overall effect could be reduced
productivity. Similarly, if absenteeism is reduced, but presenteeism is increased, it could result in
productivity being better overall.
2.3.3
Frequency of health problems
Another way of looking at the potential impacts of buildings on their occupants' health — and thus
productivity and the related costs — is to look at the frequency of the various possible health
problems themselves.
This approach is commonly used in studies of sick building syndrome (e.g. Raw et al., 1990; Raw et
al., 1996), and air quality (e.g. Wargocki et al., 1999). Building surveys have found that when
people report suffering from multiple symptoms, they also report reduced productivity (Raw et al.,
1990).
In this method, people are simply asked how often they have to deal with various health-related
issues. These may include both physical ailments, and psychological issues. Some examples of the
issues that may be asked about in different surveys are shown below:
Health and Work Performance Questionnaire (HPQ) (Kessler et al., 2003; World Health Organisation, 2001):
- Feeling dizzy
- Feeling tired
- Trouble sleeping
- Headaches
- Back or neck pain
- Pain in joints etc.
- Muscle soreness
- Watery eyes, runny nose, or stuffy head
- Cough or sore throat
Health and Work Questionnaire (HWQ) (Halpern et al., 2000, 2001; Shikiar et al., 2004):
- Irritation
- Impatient with others
- Difficulty concentrating
- Exhausted
Various questionnaires looking at Sick Building Syndrome (Raw et al., 1996):
- Eye irritation
- Eye strain
- Runny nose
- Sore throat
- Breathing difficulty
- Chest tightness
- Rashes
- Dry skin
- Headaches
- Tiredness
- Poor concentration
Table 2: Examples of health issues asked about in the different surveys
Asking about specific health problems like this may be argued to have some advantages. For
instance, there is some evidence that breaking a general question (such as about health) down into
sub-categories can aid recall for irregular events and improve report accuracy (Menon, 1997). That
being said, research has also suggested that breaking topics down into small categories can cause
overestimation of the frequency of less frequent events, and that problems may be greater for longer
reference periods (Belli et al., 2000). In this case, given the vagueness of the scales used in these
questions (frequency described as “a little”, “a lot”), the value of “improved accuracy” is
questionable as the scale does not allow a high degree of accuracy anyway. Moreover, if long recall
periods (e.g. 6 months, a year) are desired to make the results generalizable and representative of
the average condition, then the questions would seem to be more at risk of the overestimation
problems as described by Belli et al. (2000).
It may also be suggested that specific effects may provide a more compelling argument that there
are concrete benefits. Being able to say that occupants are reporting less eyestrain and fewer headaches
may be more convincing than just saying that people think they are “healthier”.
Some health issues may also be linked to specific aspects of the environment, helping to identify
problem areas. For example, eyestrain may be related to lighting and glare (Hedge et al., 1995).
This may, however, only be of limited value if people are also being surveyed about their
satisfaction with the environment. If glare is causing eyestrain then the existence of problems in that
area could be identified from the questions about satisfaction with lighting and glare. If
musculoskeletal problems are a concern, then they could be identified in comments about the
furniture and workstations.
Measuring specific health issues is also, of course, useful if an organisation is concerned about
health costs or any specific conditions, as different conditions may be more costly than others, and
may need to be addressed in different ways (Schultz et al., 2009).
If one’s aim is to assess the effect of the environment on people’s productivity, however, then
surveying people on a number of different health issues can have significant disadvantages. Indeed,
the Office Environment Survey, which preceded the BUS, did in fact include such questions.
However, they were dropped from it, leaving just a general question about how the building affects
people’s health. According to Adrian Leaman (2013), they dropped the questions from the survey
because having so many of them made the results difficult to analyse, because the questions made
the survey take too much time, and because they found that the questions did not really tell them
much about building performance. Given that the subject of interest is the overall effect of the
building on people’s productivity, breaking the health measures up too much could just obscure
matters. For example, if, in a new building, people had more eyestrain but fewer headaches — what
would it mean overall? Such specific questions may not be an effective way to provide general
comparisons between buildings.
2.4
Time lost to issues affecting productivity
Another way of measuring productivity is to instead ask about how much time is lost to various
issues that would be expected to affect productivity.
For example, Laitinen et al. (1999; in Kemppila and Lonnqvist, 2003) asked questions such as:
“How often do you have to search for tools and materials in your work?”
This approach is conceptually similar to asking about the effects of the environment or health on
one’s productivity, but is more specific. For this approach, the questions are specific to particular
jobs. This is both an advantage and a problem — its advantage is greater detail and relevance to the
specific jobs; the problem is that it may require different surveys to be designed for different jobs.
An example of this kind of approach for office workers is in the OPN occupant survey (Office
Productivity Network, 2005).
It measures “downtime factors”, asking people to “Estimate the amount of your work time that you
consider is wasted per week (if any) due to the following factors”. Factors include:
- Waiting for printers, copiers.
- Waiting and searching for documents.
- Repeating work due to IT problems, glare, being disturbed, being too hot or too cold.
- Organising and walking to meetings.
- Unnecessary bureaucracy.
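A minimal sketch of how downtime-factor responses of this kind might be totalled into self-reported hours lost per week. The factor names and figures are hypothetical, loosely modelled on the OPN categories, and the accuracy caveats discussed in this section apply to any such totals:

```python
# Hypothetical self-reported downtime factors, in hours per week,
# loosely modelled on the OPN "downtime factor" categories.
downtime = {
    "waiting for printers, copiers": 0.5,
    "waiting and searching for documents": 1.0,
    "repeating work (IT problems, glare, interruptions)": 0.75,
    "organising and walking to meetings": 1.5,
    "unnecessary bureaucracy": 1.25,
}

# Total self-reported hours lost, and the share of a 40-hour week they represent.
total_hours = sum(downtime.values())
share_of_week = total_hours / 40
```

Such an aggregation is only as good as the underlying estimates, which, as noted below, people may not be able to provide accurately.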
However, as Mattke et al. (2007) noted about time-lost presenteeism measures, it is questionable
whether or not people can accurately estimate such measures — indeed, research suggests that
people often significantly underestimate how long tasks take, as well as overestimating them when they
have limited experience with them (Roy et al., 2005). Similarly, the issues discussed around
estimating the costs of presenteeism also apply here — while time-lost measures would seem to
invite cost estimates, they should be treated with caution, as the methods to do so are questionable
(Brooks et al., 2010; Mattke et al., 2007; Schultz et al., 2009).
Time lost may be able to be measured more accurately if what people were doing was being
constantly recorded, such as via detailed timesheets. However, such an approach might be
unpopular, and could require increased bureaucracy and expense. In addition, if based on self-reports, it would be reliant on people being honest about their activities — which would be
questionable, given the sensitivity of the question of how productive someone is being.
It might also be said that many of the relevant factors here are really more about management issues
rather than environmental ones, so they may not say much about the effects of the environment on
productivity. Moreover, the questions about environmental conditions — such as repeating work
due to glare — may be of limited value due to the question marks around their accuracy. It is not
clear that they would say any more than that, for example, glare might be a problem — which could
already be seen in a simple rating scale. Moreover, the specificity of the questions means that they
are very limited. Questions about repeating or correcting work due to glare ignore other potential
problems, such as people disengaging from work due to eyestrain and headaches. Indeed, time-lost
measures ignore a fairly important factor in environmental effects — specifically that poor
conditions may make it harder to concentrate and people may work more slowly (e.g. Wargocki et
al., 2000). Answering lots of questions does take up more time, as well as making analysis more
complicated, and trying to cover every potential issue could become excessively time consuming
and complicated very quickly.
Overall, while interesting, the method would seem to be likely to have similar issues to the detailed
surveys of health problems that were dropped from the BUS, in that the measures could make it
difficult to run and analyse the survey, and their results may be of limited value.
2.5
Affect measures
2.5.1
Mood
Mood may be defined as comprising dimensions of pleasure and arousal (Barrett & Russell,
1998). Positive moods have been linked to a wide range of beneficial behaviours and performance
effects (Isen, 2001; Russell & Snodgrass, 1991). Studies have found that people in positive moods
are more generous and helpful (Cunningham, 1979; Isen, 2001), more innovative, resolve
workplace conflicts more productively, cooperate better with others, are better at seeing others'
points of view, can get better outcomes in negotiations, are less defensive, and are more efficient
and thorough in analysing information and making decisions (Isen, 2001).
Mood has also been studied with respect to the environment, and while research is not conclusive, a
number of studies have suggested effects of lighting (Knez & Kers, 2000; Knez, 1995; Newsham et
al., 2004; Veitch et al., 2008) and daylighting on people’s mood (Boyce et al., 2003), also linking it
to performance (Veitch et al., 2011). Other studies have suggested that noise (Vastfjall, 2002), and
colour (Küller et al., 2006; Stone, 2001) may also affect mood.
The specifics of mood vary somewhat depending on how it is measured. Two well established
mood scales are Watson’s Positive and Negative Affect Schedule (PANAS) (Watson, Clark, &
Tellegen, 1988), and Russell and Mehrabian’s three factor semantic differential scale (Mehrabian,
1974; Russell & Mehrabian, 1977).
The Russell-Mehrabian scale has three scales, measuring pleasure, arousal, and dominance
(Mehrabian, 1974). In contrast, PANAS has two scales, measuring positive affect (mood), and
negative affect (Watson et al., 1988). The key difference is that the PANAS scales are actually
measuring positive and negative activated mood — its positive affect, for example, combines both
happiness and excitement (Barrett & Russell, 1998).
While mood may be linked to many potentially beneficial outcomes, the assessment of it has a
number of issues. Firstly, it should be noted that it will only be an indicator — positive moods may
be good, but it is not possible to put any kind of figure on exactly how good.
The biggest issue is that mood is influenced by so many factors that it can be very difficult to
reliably identify specific effects of the environment (Boyce et al., 2003). In addition, occupants'
mood may vary significantly from day to day. In order to identify reliable differences between
buildings, it would be necessary to measure mood repeatedly, on a number of different days. It
could also be useful to measure mood at both the beginning and end of the day in order to calculate
the average change in mood over the day, and thus help control the effects of the variation in initial
mood states (how people felt before they came to work). Such complications may make mood
impractical as an indicator of productivity.
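The repeated-measurement approach described above can be sketched as follows, with hypothetical morning and evening ratings on an arbitrary pleasure scale. Working with the within-day change partly controls for how people felt before they came to work:

```python
# Sketch of the averaging approach described in the text: rate mood at the
# start and end of each day and work with the mean within-day change, so
# day-to-day variation in people's initial state is partly controlled for.
# All ratings below are hypothetical values on an arbitrary 1-9 scale.

def mean_mood_change(daily_ratings):
    """daily_ratings: list of (morning, evening) rating pairs, one per day."""
    changes = [evening - morning for morning, evening in daily_ratings]
    return sum(changes) / len(changes)

ratings = [(6, 5), (7, 7), (5, 3), (6, 6), (7, 5)]  # one hypothetical week
average_change = mean_mood_change(ratings)
```

Even with this control, many days of measurement would be needed before a difference between buildings could be attributed with any confidence.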
2.5.2
Sleepiness/fatigue/alertness
It should be noted that sleepiness may be assessed as part of mood — the arousal factor in the
Russell-Mehrabian scale includes an assessment of alertness (Mehrabian, 1974). However, in the
absence of the mood tests, sleepiness may also be assessed, quickly and easily, on its own.
The Karolinska Sleepiness Scale is a simple 9-point scale where subjects rate how sleepy they feel
from “very alert” to “very sleepy” (Akerstedt & Gillberg, 1990). It has been used in a number of
studies, and has been validated against more objective measures of sleepiness (Kaida et al., 2006).
Sleepiness may affect productivity, as tired people are likely to find it harder to work and
concentrate, and be more prone to errors (Dean et al., 2010; Keller, 2009). Studies have indicated
that it may be affected by the environment — in particular lighting, which influences people’s
circadian rhythms (Begemann et al., 1997; Boyce et al., 2003; Webb, 2006). Studies have suggested
that lighting may affect fatigue and even sleep quality (Aries et al., 2010; Aries, 2005; Hubalek et
al., 2010; Viola et al., 2008), that thermal conditions may affect arousal (Smith & Bradley, 1994),
and that noise may increase fatigue (Vastfjall, 2002).
Its limitations, however, are much the same as mood’s. People’s sleepiness can vary significantly
from day to day, so in order to reliably identify differences between buildings, it may be necessary
to measure it on multiple occasions. This could be viable however, due to how quick and easy it is
to measure.
2.5.3
Job satisfaction
Job satisfaction is defined as an evaluation of how satisfied one is with their job or the various
aspects of it (Christian et al., 2011; Ritz, 2009; Warr et al., 1979). Questions may be asked about
satisfaction with elements such as one’s pay, the relationships with management and colleagues,
work hours, the work environment, job security, and more (Warr et al., 1979). The number of
factors measured may vary — Warr et al. (1979) measured 15 different factors, while Ritz (2009)
simply used a single measure of overall job satisfaction. The possibility of measuring it with a
single question makes it an attractive option, as this would allow job satisfaction to be easily added
on to a larger survey without significantly increasing the time required. Meta-analysis has found
that single-item measures of job satisfaction correlate well with multi-scale measures, suggesting
that single-item measures are a perfectly valid option in situations where, for example, time is
limited (Wanous et al., 1997). The trade-off is that while a single measure may be a convenient way
of measuring overall job satisfaction, it does not give any information about the facets of it —
which may be useful if managers want to know why people are or are not satisfied (Scarpello &
Campbell, 1983).
Job satisfaction is one of the most commonly used attitudinal measures, having been used in
thousands of studies (Wright, 2006). Studies have suggested that it may be influenced by the
environment (Danielsson & Bodin, 2008; Donald & Siu, 2001; Leather et al., 1998; Newsham et al.,
2009; Veitch et al., 2007; Veitch et al., 2010). It has also been linked to performance and
productivity. Meta-analysis by Judge et al. (2001) found a significant correlation between job
satisfaction and supervisory performance ratings — moreover, they found that the correlation was
stronger for more complex jobs. Satisfaction has also been linked to intention to quit (Caillier,
2011), absenteeism (Sagie, 1998), as well as turnover, customer satisfaction, and overall business
performance (Harter et al., 2002). Analysis has also suggested a negative correlation between
satisfaction and counterproductive work behaviours (Dalal, 2005).
It should be noted, however, that the relationships are not necessarily strong (Fisher, 2003) — Judge
et al. (2001) for instance only found a correlation of ~0.30 between satisfaction and performance.
Moreover, the various analyses that have been done indicate significant variability and uncertainty
in the relationship between satisfaction and performance (Fisher, 2003; Judge et al., 2001; Zelenski
et al., 2008).
Another potential issue is that job satisfaction may be strongly influenced by factors such as
management (Newsham et al., 2009), which could confound the identification of effects caused by
differences between buildings. For example, if, in a new building, workers showed higher job
satisfaction, it may be difficult to tell whether or not it was due to the building being better and not
because of better management. Indeed, it could be both.
The use of job satisfaction is limited to it being an indicator. While higher levels of satisfaction may
suggest improved productivity and organisational outcomes, they do not put any kind of
numerical value on it, and the relationship between the two is not clear enough to make accurate
predictions. It is, however, a very popular measure that has been linked to a number of positive
organisational outcomes, and possible environmental effects on it may be considered to be valuable.
2.5.4
Job engagement
Job engagement is defined as “a persistent and affective-motivational state of fulfilment” (Wefald et
al., 2012) that measures the investment of energy in one’s work (Christian et al., 2011). It is
characterised by elements such as motivation, effort, enthusiasm, and getting immersed in one’s
work (Wefald et al., 2012).
To our knowledge, the measure has not been used in research into the effects of the environment,
though it may be indirectly affected through job satisfaction. Some suggestion that environment
may affect motivation is found in the study of Stone (1998), who found that the presence of
windows increased perceptions that a room was “motivating”. Research by Veitch et al. (2011),
suggested that more satisfactory lighting could increase occupants' work engagement, which was
defined based on occupants' reported interest in reading an article, their motivation, and their use of
breaks between tasks.
Engagement would be logically expected to affect productivity — people applying more effort to
their work should be more productive. Studies have linked it to task performance, and have
suggested that while it does correlate highly with other measures such as satisfaction, it is a distinct
construct in its own right (Christian et al., 2011). Studies have also found that workers with higher
engagement have higher satisfaction and lower turnover intentions (Wefald et al., 2012).
The main reason it has been discussed here in spite of the limited evidence connecting it to
environmental effects is that staff engagement is already measured by the government, making it a
logical measure to use. However, the fact that the government apparently uses several different
surveys from different organisations such as Gallup, Right Management, Kenexa/JRA, and others,
could complicate matters as the different measures may not be equivalent. The potential issues
discussed around measuring job satisfaction also apply.
2.5.5
Intention to quit/turnover
Turnover can be a major problem. The loss of experience and expertise can reduce work output, and
the process of finding new employees and training them can be expensive, with some estimates that
the costs can be up to double an employee’s salary (Caillier, 2011). Indeed, statistically significant
negative correlations have been found between turnover rates and organisational performance (Park
& Shaw, 2013).
There has been some evidence that environmental conditions may affect turnover rates. Leather et
al. (1998) found that greater sunlight penetration was associated with better job satisfaction and
lower intention to quit. Turnover measures have not, however, been used much in environmental
effects research. A study by Veitch et al. (2010) also linked it to satisfaction with lighting and the
environment, with people with better lighting having lower intention to quit. Other than these two
studies, the evidence is largely indirect, through the fact that both environmental conditions
(Danielsson & Bodin, 2008; Donald & Siu, 2001; Leather et al., 1998; Newsham et al., 2009;
Veitch et al., 2007) and intention to quit have been linked to job satisfaction (Sagie, 1998;
Zimmerman & Darnold, 2009).
Turnover can be assessed subjectively or objectively. Intention to quit provides an indication of
how much turnover there could be, and can be measured quickly and easily. For example:
“Within the next 2 years, how likely are you to leave your current organization for a job in
another organization?” (Bright, 2008).
It is, however, only an indication of possible negative behaviour rather than an actual measure of
the behaviour.
Objective measures of turnover however do measure the actual behaviour, and could allow
estimates of their costs. The numbers of people leaving, and the costs of replacing them in terms of
advertising, interviewing, and training can be measured (Caillier, 2011). Of course, that is only part
of the story — the less tangible factors such as changing productivity due to lost experience are
more problematic, essentially requiring a measurement of the worker’s productivity.
Similar to absenteeism, turnover measures also raise questions around the accuracy of
organisational records — specifically, with regards to the types of turnover (Campion, 1991). A key
distinction in the literature is that between voluntary turnover and involuntary turnover (Shaw et al.,
1998). As Shaw et al. (1998) point out, voluntary turnover, or quitting, may be influenced by
different factors and may have different effects than involuntary turnover (firing). The distinction is
relevant here, as it is voluntary turnover that would be expected to be affected by the environmental
conditions. However, determining whether or not a worker is leaving voluntarily or involuntarily
can be problematic (Campion, 1991). Organisational records are not always trustworthy — they
may only record one reason of many; they may be recorded inaccurately to save face (e.g. “quit”
instead of “fired”); and some reasons may be classified in different ways, for example some people
have labelled pregnancy as voluntary while others have labelled it involuntary (Campion, 1991). Of
concern is that studies have suggested that records and former employees may not necessarily have
a high degree of agreement (Campion, 1991). Indeed, as Campion (1991) discusses, the distinction
between voluntary and involuntary may not always be clear — for example, if someone quits to
avoid being fired.
Further complications may be added by the fact that turnover is not necessarily bad for an
organisation — getting rid of or replacing poor workers may actually improve productivity
(Campion, 1991; Williams & Livingstone, 1994). The assessment of this, however, becomes more
complicated, as it requires some kind of performance assessment of the workers that is then linked
to the turnover records.
It should also be noted that if turnover rates are being acquired from various administrators, then it
is important to make sure they are all measuring turnover the same way (Castle, 2006). This issue
can, of course, be avoided by calculating the rates for oneself from the organisational records.
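As a sketch, one common definition of the turnover rate (separations over a period divided by average headcount) can be computed directly from the records. The function name and figures are hypothetical, and, as noted above, other administrators may use different definitions:

```python
# Minimal sketch of computing an annual turnover rate directly from
# organisational records, using one common definition: separations over
# the period divided by average headcount, expressed as a percentage.
# Figures are hypothetical; definitions vary between administrators.

def turnover_rate(separations, headcount_start, headcount_end):
    """Separations over the period / average headcount, as a percentage."""
    average_headcount = (headcount_start + headcount_end) / 2
    return 100 * separations / average_headcount

# e.g. 12 leavers over a year, with headcount falling from 100 to 92
annual_rate = turnover_rate(12, 100, 92)
```

Computing the rate oneself in this way guarantees that the same definition is applied across sites, which is the consistency issue Castle (2006) raises.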
Again similar to absenteeism, turnover rates vary over time, and may be prone to, for instance,
seasonal variation (Barry et al., 2008). Studies have found that 6-month measures of turnover
cannot be extrapolated out to a year, suggesting that records may need to be assessed over longer
periods to be able to give reliable results (Barry et al., 2008).
Like job satisfaction, turnover may be significantly influenced by other factors, which could
confound attempts to identify environmental effects. Research has suggested a number of reasons
for turnover, including job satisfaction, economic climate, equity, psychological contract,
management and more (Morrell et al., 2004). There is, however, generally a lack of precedents
looking at environmental effects, making it difficult to say how sensitive turnover rates may be to
changes in environmental conditions. There is some evidence supporting links between intention-to-quit and environmental conditions (Leather et al., 1998), but it should also be noted that intention-to-quit and turnover are only moderately correlated (0.45, Zimmerman & Darnold, 2009). That
being said, however, they may still be viable indicators that could provide useful information about
factors that may be considered to be important to organisations.
3
ASSESSMENT OF INTERNAL ENVIRONMENTAL CONDITIONS
When assessing the effects of the office environment upon people’s productivity, it is important not
just to assess productivity factors, but also to assess the environmental conditions. There are two
main reasons for this:
1) To make it possible to determine whether or not productivity effects are being caused by
environmental conditions in the building.
2) To identify problems.
If one were simply to measure productivity in a new building and compare it to the productivity in
the occupant’s old building then it may be difficult to say that the difference was because of better
environmental conditions and not, for instance, because of better management. If, however, one had
also surveyed the occupants and had found that they were more comfortable in the new building,
and more satisfied with the air quality and lighting, then it would be different. Then one could make
use of the literature on how those improvements to the environment are likely to affect people’s
productivity to argue that it is likely that the better design of the new building was at least partially
responsible for any improvements in productivity.
The known relationships between the environment and people’s performance may also be used to
evaluate the likely magnitude of productivity effects. For example, if the productivity results were
indicating that productivity was a lot higher in the new building, but the occupants’ satisfaction with
the building was only indicating small improvements, then it would suggest that the productivity
results may be overestimated, or that they may be enhanced by factors other than the
environment.
Environmental assessment is also very useful for identifying problems. Environmental issues
reducing performance may be fixable — if they can be identified. Results that just say that
productivity or job satisfaction is low will not be able to tell anyone this. An assessment of the
environment however could identify the air conditioning, for example, as being a problem.
The following sections discuss two key issues:
1) The key environmental factors that need to be assessed.
2) The advantages and disadvantages of the different methods available for assessment.
3.1
The key environmental factors
If the environment needs to be assessed, then what are the key factors that have to be measured?
The answer is, arguably, all of them. General factors known to be important, such as comfort
(Leaman & Bordass, 2006), ultimately end up involving almost all aspects of the design of the
building, with many factors being inter-related. For example, if indoor air quality is an important
factor, then it should be noted that it is affected by ventilation, location, occupant density,
maintenance, cleaning, material selection, where pollution sources are placed, and the general plan
of the building (Charles et al., 2004). The choice of measures really depends on how one is
approaching the situation — whether or not one is trying to measure the physical conditions, define
design elements, or evaluate people’s responses — as well as how much detail is needed. It is
possible to reduce the measures to several key “overall-factors”, especially if one is focussing on
people’s responses. Using the previous example, while indoor air quality might be affected by many
design elements, it can simply be assessed with a single question about how satisfied people are
with it — though this would, of course, give little detail about what is or is not working.
At minimum, however, to examine the effects of the indoor environment on productivity one should
assess the four primary general indoor environment factors — thermal comfort, indoor air quality,
acoustics, and lighting/visual comfort — as well as workstation ergonomics and comfort.
Environmental factors are summarised in the table below, which was based on key factors identified
from Charles et al. (2004) and Leaman & Bordass (2006). As noted by Bruhns (1996), there
may be hundreds of items to look at in an office building. This summary has focussed on key
elements related primarily to the indoor environmental factors that the literature has suggested
affect productivity, and does not claim to cover absolutely everything that can be studied in a
building evaluation. If one is interested in more detailed discussion of building evaluation and
design elements, then Baird et al. (1996) provide very comprehensive summaries of the building
elements, and how they can be assessed, and Charles et al. (2004) provide a thorough set of design
guidelines for offices.
Thermal comfort
- Environmental measures: air temperature; air movement; humidity; radiant temperature
- Design/physical components/levers: ventilation type; controls; good passive design; windows; shading; insulation; thermal mass; glazing types
- Behavioural factors: individual preferences; control; ability to adapt (clothing); expectations; management's responsiveness to problems
Indoor air quality
- Environmental measures: fresh air ventilation rates; contaminant levels (CO2, formaldehyde, lead, ozone, NO2, CO, radon, SO2, VOCs, mould, small particles)
- Design/physical components/levers: ventilation; location; occupant density; material selection; pollution sources; plan/layout; air delivery
- Behavioural factors: control; cleaning; maintenance; responsiveness
Acoustics
- Environmental measures: noise levels; speech intelligibility
- Design/physical components/levers: workstation size; layout; surface acoustics; partition height; ventilation systems; acoustic insulation; occupant density; sound-masking noise
- Behavioural factors: task type; individual preferences; etiquette; control; privacy; unwanted interruptions
Visual comfort (adequacy of lighting, daylighting)
- Environmental measures: illuminance; uniformity; glare
- Design/physical components/levers: luminaire selection; controls; lamp type; lighting layout; surface reflectances; partition heights; ballast type; layout; windows; shading; colour; building depth
- Behavioural factors: individual preferences; control
Personal control
- Design/physical components/levers: control usability; workgroup size; layout
- Behavioural factors: control; responsiveness; knowledge of building operation
Satisfaction with workspace
- Design/physical components/levers: storage; aesthetics; ergonomics; workstation size; layout; design intent
Table 3: Summary of key environmental factors
3.2 Environmental assessment methods
There are a number of different methods that can be used to assess the indoor environment. Two
key issues are discussed here: firstly, the different, yet complementary, roles of objective and
subjective measures; and secondly, the advantages and disadvantages of some of the more
commonly used subjective assessment tools.
3.2.1 Objective vs. subjective assessment
Objective and subjective assessments of the indoor environment play different roles, and are
fundamentally complementary. Their use, with regards to the assessment of the effects of the
environment on productivity, is described well in the quote below:
“For office buildings, the principal requirement is to provide an environment that enhances
occupants’ well-being and facilitates their productivity. The quality of the internal
environment is both objective and subjective. Instrumental measurement can provide
accurate and useful information on environmental conditions, but the subjective experience
of the building users should always be the final arbiter in the evaluation of those
conditions.” (Bruhns, 1996, p. 151)
Reliance on objective performance standards does not always work — for example, investigation
into sick building syndrome was driven by the discovery that buildings that nominally met the
standards of good design were getting poor responses (Wyon, 1994). Standards based on
laboratory studies may not always give accurate predictions for real buildings (Arens et al., 2010).
Arens et al. (2010) provide a good example of this: ISO standard 7730 allows conditioned office
buildings to be categorised into three classes of thermal environmental quality using the
Predicted Mean Vote (PMV) index. In theory, the better grade buildings should have more satisfied
occupants. However, field studies of a number of office buildings showed that overall there was no
difference between the different categories, with all of them showing dissatisfaction rates of about
20% on average.
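For illustration, the ISO 7730 banding works by limiting the magnitude of PMV, which in turn bounds the Predicted Percentage Dissatisfied (PPD). The sketch below uses the standard Fanger PPD formula and the category limits given in ISO 7730:2005; it is illustrative only, and not part of any survey tool discussed in this report:

```python
import math

def ppd(pmv: float) -> float:
    """Predicted Percentage Dissatisfied (%) from PMV (Fanger / ISO 7730)."""
    return 100.0 - 95.0 * math.exp(-0.03353 * pmv**4 - 0.2179 * pmv**2)

def iso7730_category(pmv: float) -> str:
    """Band a conditioned space by |PMV| using the ISO 7730:2005 classes."""
    magnitude = abs(pmv)
    if magnitude < 0.2:
        return "A"   # PPD below about 6%
    if magnitude < 0.5:
        return "B"   # PPD below about 10%
    if magnitude < 0.7:
        return "C"   # PPD below about 15%
    return "outside A-C"

# Even a 'perfect' PMV of 0 still predicts 5% dissatisfied occupants,
# so the roughly 20% dissatisfaction seen in field studies is well outside
# what the category labels alone would suggest.
neutral_ppd = ppd(0.0)
category = iso7730_category(0.3)
```

Note that the formula is symmetric in PMV: equal magnitudes of warm and cool discomfort produce the same predicted dissatisfaction, which is one of the simplifications field studies call into question.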
Indeed, it is worth noting that comparisons of different comfort indices, designed to evaluate the
objective measured environment, can give significantly different results, even if their general trends
are the same (Humphreys, 2005). What the best environmental conditions are depends on personal
preferences and the tasks people are carrying out — and indeed comfort is, at least partially, a social
construct that can change over time (Humphreys, 2005).
Similarly, lighting studies have shown that light level preferences vary significantly between people
(Begemann et al., 1997), with one study finding that “for any given fixed light level (between 100
and 800 lx, the range possible in this experiment), at most 50% of the sample had a light level
preference within 100 lx of that value” (Veitch et al., 2011). This makes it difficult to define any
particular light level as being “correct”, as well as highlighting the importance of providing people
with control over their environment, allowing the occupants to adjust conditions to their needs.
Surveys have found that comfort and perceived productivity tend to be higher in buildings where
the occupants have greater control (Leaman & Bordass, 2006).
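The flavour of the Veitch et al. (2011) finding can be reproduced with a simple coverage calculation. The preference values below are invented purely for illustration; the point is that when preferences are widely spread, no single fixed level sits within 100 lx of a majority:

```python
# Hypothetical preferred light levels (lx) for ten occupants; the spread,
# not the exact numbers, is what matters here.
preferences = [150, 220, 300, 340, 410, 480, 520, 600, 680, 760]

def coverage(fixed_level: int, prefs, tolerance: int = 100) -> float:
    """Fraction of occupants whose preference is within +/- tolerance lx
    of a single fixed light level."""
    return sum(abs(p - fixed_level) <= tolerance for p in prefs) / len(prefs)

# Search the 100-800 lx range for the best single setting: even the best
# fixed level satisfies only a minority to within 100 lx.
best = max(coverage(level, preferences) for level in range(100, 801, 10))
```

With this hypothetical spread the best achievable coverage is 40%, which is consistent with the "at most 50%" pattern reported in the study, and illustrates why individual control tends to outperform any single design setpoint.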
Thus, if one wishes to make sure that a building is actually providing a good environment for its
occupants, standards may not be reliable; the logical solution is to check the occupants' responses directly.
Surveying is also considered to be one of the easiest and cheapest methods for evaluating the indoor
environment (Peretti & Schiavon, 2011).
Objective measurements and descriptions are used to define the environmental conditions to which
the occupants are responding. They may be used to define what the environment actually is, and are
necessary if one wants to address problems, to learn about what one should and should not do in a
design, and to define design parameters. Objective measurements and observations may vary in the
level of detail used, and decisions as to what are the key details that need to be measured may
require expert judgement (Leaman et al., 2010). It may be more efficient to use, for example, a
survey to identify the key problem areas before looking into them in more detail as necessary
(Leaman et al., 2010).
3.2.2 Comparison of different subjective assessment tools
There are, it should be noted, a large number of subjective environmental evaluation tools (Baird et
al., 1996; Dykes, 2012; Oseland, 2007; Peretti & Schiavon, 2011).
Leaman et al. (2010), summarising commonly used evaluation techniques, suggest two main tools:
structured interviews with focus groups; and occupant surveys. Structured interviews can be good
for discussing issues in greater detail, but it is suggested that they may best be used after surveys
have been carried out, so that people know where to focus discussion (Leaman et al., 2010). To get
a broad picture of the building, one that summarises its effects on the occupants and identifies
problem areas, the occupant survey is the method of choice.
Even if one is just looking at indoor environmental quality surveys, there is still a large selection to
choose from. Many were custom-made for particular projects or studies.
Centre for Building Performance Research
Page 47 of 75
Measuring Productivity in the Office Workplace
A few of the more widely used, commercially available occupant surveys are briefly summarised
and discussed below:
Building Use Studies (BUS) Occupant Survey (Building Use Studies, 2011a, 2011b)

Covers workspace, design, window access, furnishings, storage, meeting rooms, safety, image,
work requirements, cleaning, thermal comfort, air quality, noise, lighting, perceived
productivity, perceived control, comfort, health, responsiveness to problems, and transport.
People are given space to make comments on many questions.

Key points:
- Questions about transport are not necessarily relevant to the office environment and productivity.
- Question about satisfaction with management's responsiveness to problems is not in the other surveys.
- Goes into a lot of detail about environmental factors — e.g. the air quality section asks people to rate draughtiness, humidity, freshness, odour, and overall satisfaction.
- Has already been used extensively in NZ, and NZ benchmarks are available (Dykes, 2012).
- Has been used in over 500 buildings around the world. Has been used since 1995, and was based on the earlier Office Evaluation Survey made in 1985.
- Paper-based survey recommended, though there is also a web version.

Center for the Built Environment (CBE) Occupant Indoor Environmental Quality Survey (Center for the Built Environment, 2013)

Covers workspace, design, window access, visual privacy, layout, furnishings, storage, meeting
rooms, safety, image, work requirements, cleaning, maintenance, thermal comfort, air quality,
noise, lighting, perceived productivity, actual controls, general satisfaction, perceived energy
efficiency, and satisfaction with specific control elements. People are given space to make
comments on many questions.

Key differences between it and the BUS are:
- It doesn't ask about the provision of meeting rooms.
- Productivity question uses a ±20% scale rather than a ±40% one.
- It asks about what the actual controls are rather than the perceived level of control, though it does also ask about satisfaction with specific controls such as blinds and thermostats.
- It asks about satisfaction with visual privacy, ease of interaction with colleagues, and the office layout.
- It breaks up questions about furnishings into comfort and ability to adjust instead of just asking a general question about usability.
- It goes into less detail about environmental factors such as thermal comfort and air quality, focussing mainly on overall satisfaction.
- It asks about cleanliness, the cleaning service, and building maintenance rather than just about the cleaning in general.
- Each section asks specifically if it enhances or interferes with people's ability to do their work — e.g. effects of thermal comfort, lighting, office layout.
- Questions about thermal comfort and air quality are not broken up into both summer and winter.
- Asks about perceived energy efficiency — not really relevant unless you are interested in whether or not employees think you are energy efficient.
- Is web-based rather than paper-based.
- Has been used in over 600 buildings. Has been used since 1996 (Dykes, 2012).

OPN Workplace Evaluation Survey (Office Productivity Network, 2005)

Covers furniture, storage, layout, meeting rooms, conferencing, catering facilities, IT
infrastructure, security, cleaning services, thermal comfort, air quality, air movement, noise,
privacy, lighting, glare, aesthetics, overall satisfaction, perceived productivity, downtime
factors, and workplace activities, with space to make comments at the end.

Key differences between it and the others are:
- Uses 5-point satisfaction scales rather than 7-point.
- Breaks down the environmental factor questions less than the BUS, but in a bit more detail than the CBE survey.
- Has a much bigger focus on facilities, asking about the different kinds of meeting rooms, conferencing facilities, IT infrastructure, etc.
- Asks about how both the facilities and the environmental conditions affect productivity.
- Uses the same ±40% perceived productivity scale as the BUS.
- Asks for a breakdown of the kinds of tasks people carry out.
- Unique section on "downtime factors" asks people to estimate how much time they lose to various issues such as waiting for printing, repeating work due to distractions, etc.
- Has a breakdown of how well the office supports various work activities — e.g. being creative, concentrating, having meetings, team-work.
- Does not ask about access to windows, the number of people in the room, or health, though it does ask what type of space people are in — a single room, shared room, or open plan.
- Does not ask about environmental controls.
- Only has a general comments section at the end, rather than ones for specific topics.
- Approach is very much productivity-focussed, with the questions being specifically about how well the facilities/environmental conditions support people's work.
- Questions about facilities and downtime factors are somewhat outside the scope of the effect of environmental conditions on productivity, but may still be useful.
- May be paper-based or web-based.
- Has been used in over 70 buildings, and was first developed in 1999. Used in the UK.

Overall Liking Score (OLS) survey (ABS consulting, 2013)

Covers thermal comfort, air quality, lighting, glare, window access, attractiveness, privacy,
noise, air movement, control, working space, health, work area, storage, facilities management,
general comfort, appearance, colleagues, and IT provision. Invites comments for each question.

Key differences are:
- As the name suggests, focusses on how much people "like" the office. "Liking scores" are calculated based on how much people like factors, and how important they say they are.
- Does not ask about productivity.
- Does not ask about the number of people in the space, or the maintenance and cleaning.
- Has a similar level of detail on environmental conditions as the OPN survey — more than the CBE, less than the BUS.
- May be paper-based or web-based.
- Has been used in about 100 UK buildings (ABS consulting, 2013). It dates back to 1992 (Dykes, 2012).

Table 4: Comparative summary of four occupant surveys
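The general idea behind the OLS's combination of liking and stated importance can be illustrated with a short sketch. The importance-weighted average below is a hypothetical scheme chosen for illustration, not the published OLS calculation, and the factor names and ratings are invented:

```python
def liking_score(ratings: dict) -> float:
    """Illustrative importance-weighted liking score.

    `ratings` maps a factor name to a (liking, importance) pair, each on a
    1-7 scale. This weighting is hypothetical, not the actual OLS formula.
    """
    total_importance = sum(importance for _, importance in ratings.values())
    weighted = sum(liking * importance for liking, importance in ratings.values())
    return weighted / total_importance

# Invented example: a disliked but important factor (noise) drags the score
# down far more than a liked but unimportant one (aesthetics) lifts it.
score = liking_score({
    "thermal comfort": (5, 7),
    "noise": (2, 6),
    "aesthetics": (6, 2),
})
```

The design intent such a weighting captures is that dissatisfaction with a factor people rate as important should dominate the overall picture of the office.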
While there is some variation in focus area and approach, the different surveys are generally fairly
similar. The basic approach of asking people to rate how satisfied they are with various
environmental factors is fairly standard, and all of them address the key environmental factors
described in Section 3.1. The OPN survey stands out the most for its unique “downtime factors”
section asking people to estimate how much time they waste due to various issues. That being said,
many of the issues it asks about are not really part of the building design, being management
issues instead, and as discussed in Section 2.4 (Time lost to issues affecting productivity), the
"downtime factors" may not be that useful as a means of estimating environmental effects.
One point of note is that the CBE survey is purely web-based, unlike the others. Web-based surveys
have the advantage of being cheaper and easier to deliver and analyse, as most of it can be done
automatically (Oseland, 2007). Web-based surveys have, however, tended to have significantly
lower response rates — 30% or lower, compared with the rates of 70% and higher typically achieved
by paper surveys (Building Use Studies, 2011b). It may be noted, however, that Oseland (2007)
suggests that this difference is historical, and that web-based surveys can now get comparable
response rates to paper-based ones.
Ultimately, all of the surveys will be able to tell you whether the occupants are satisfied with
the building, and whether they are unhappy with any specific factors. They all cover most of the relevant
environmental factors discussed in the summary in Table 3. The BUS and CBE surveys are the
most widely used, and have a strong presence in the academic literature (e.g. Baird, 2001; Gou et
al., 2013; Leaman & Bordass, 2006; Lenoir et al., 2012; Moezzi & Goins, 2011; Wargocki et al.,
2012). The OPN survey is of possible interest because of its strong productivity focus, and its
addressing of facilities and IT infrastructure issues, while the OLS may be less appropriate as it
does not address productivity. The BUS may be considered to be advantaged, however, by the fact
that it has already been used on a number of New Zealand buildings with the aim of developing a
robust set of benchmarks for the country (Dykes, 2012).
4 CONCLUSIONS

4.1 What are the factors that the literature suggests can be used to measure
productivity? What are the key behavioural and attitudinal factors that affect
productivity?
There is no clear definition or standard measure of productivity in the office environment. There is
great variation amongst different jobs and tasks, making it difficult to compare or aggregate them.
While productivity is at its roots an objective and quantifiable measure, relating inputs to outputs,
objective measures are often highly limited and inappropriate for many office jobs. Factors such as
quality and interpersonal relations are not readily countable, but may be very important. Thus,
overall productivity in the office cannot really be measured.
Because productivity cannot be measured simply, it is often defined more vaguely, in terms of
various elements — generally behaviours that may be related to productivity and which may
provide indications of improved organisational outcomes.
Researchers have used a number of such elements to assess the effects of the office environment on
occupants. These include:
1) Ratings of perceived productivity
2) Cognitive performance tests (e.g. working memory, processing speed, concentration)
3) Monitoring computer activity (e.g. keystrokes, mouse clicks)
4) Absenteeism
5) Presenteeism
6) Reported frequency of health issues
7) Time lost to issues affecting productivity
8) Mood
9) Sleepiness
10) Job satisfaction
11) Job engagement
12) Intention to quit
13) Turnover
Most of these elements are measured subjectively. This is because they are either a) inherently
subjective (e.g. mood, job satisfaction) or b) possibly impractical to measure objectively (e.g.
reported frequency of issues). It should be noted that the objective measures are not inherently
better than the subjective ones. They too are limited to only measuring aspects of the overall
productivity. Absenteeism, for instance, only measures the amount of productivity lost because
someone is not at work, and says nothing about how productive they are when they are present.
Ultimately, all the measures available are limited and only provide an indication of the effects on
overall productivity. This may, however, still be enough to say if a building is likely to be providing
significant improvements to its occupants' productivity. The pros and cons of the different measures
are summarised on the following pages.
Method: Perceived productivity ratings
Pros:
- Provide an indication of productivity effects.
- Surveys allow many people to be assessed relatively cheaply.
- Is already present in many occupant surveys such as the BUS and CBE.
- Can be assessed very quickly and easily (1 question).
- General question can be broadly used.
- Some indication that they can provide reasonable average estimates of simple performance effects.
- Is common practice.
- Relationships between environment and subjective measures are supported by objective research, suggesting it is a viable indicator.
- High correlation between perceived effects and environmental comfort suggests ratings may be able to reliably identify effects.
Cons:
- No validation of accuracy for knowledge work.
- Studies suggest people are poor at assessing their performance.
- Perceptions of performance can be majorly distorted by things like critical feedback.
- Relationships between objective and subjective ratings, where available, are generally weak.
- Some indication that subjective ratings may exaggerate productivity effects.
- Numerical estimates may appear to be more accurate than they are.

Method: Cognitive performance tests
Pros:
- Provide indications of productivity effects.
- May be done on computers.
- Cognitive effects may provide broad benefits to many tasks.
Cons:
- Only measure parts of productivity.
- Magnitudes of effects on productivity unclear.
- Just provide indications.
- Tests may require significant time, and may be impractical or expensive.

Method: Computer activity monitoring
Pros:
- Does not need more time from occupants.
Cons:
- Only measures a small part of productivity for most jobs.
- Ignores non-computer-based work.
- May be highly misleading.
- Difficult to work around factors such as task type.
- May cause counterproductive behaviour.

Method: Absenteeism
Pros:
- Quantifiable measure of productivity losses.
- Is very clear and straightforward.
- Can be used with surveys without needing people to give more time.
- Government already measures it.
Cons:
- Only measures part of productivity.
- Accuracy depends on the rigour of the administrative records.
- Different absenteeism indices (e.g. time lost or frequency) can give different results.
- Possible logistical issues around use of data (e.g. can it be aggregated by building?).
- May require records over prolonged periods (at least a year) to be reliable.

Method: Self-estimated absenteeism
Pros:
- Provides a quantifiable estimate of some productivity effects.
- May be the only way of getting absenteeism data.
- Surveys allow many people to be assessed relatively cheaply.
Cons:
- Accuracy questionable; studies indicate significant biases.
- Just provides an indication.
- Only measures part of productivity.

Method: Reported frequency of health problems
Pros:
- Provides an indication of productivity.
- Specific questions may be easier for people to answer accurately.
- Specific effects may provide a more compelling argument (i.e. fewer headaches and less eyestrain rather than better 'health').
Cons:
- Just provides an indication.
- Magnitude of effects on productivity unclear.
- Large number of questions may be time consuming.
- Ordinal scales somewhat vague.
- Experience from the development of the BUS suggests that it is hard to analyse, and doesn't provide very useful information.

Method: Time lost due to issues affecting productivity
Pros:
- Provides an indication of productivity losses.
- May provide an estimate of time lost.
- Specific questions may be easier for people to answer accurately.
- Specifics may provide useful guidance as to what issues need to be addressed to improve productivity.
- Measures included in OPN occupant survey.
Cons:
- People's ability to accurately estimate such things is questionable.
- Estimates may exaggerate the occurrence of rare events.
- If added to another survey, the large number of questions could be time consuming.

Method: Mood
Pros:
- Provides an indication of potential performance.
- Positive mood is linked to many valuable performance and behavioural outcomes.
- Surveys allow many people to be assessed relatively cheaply.
Cons:
- Just provides an indication.
- Influenced by many factors; difficult to identify environmental effects.
- Due to its high variability, would need to be assessed multiple times.
- May not really be practical.

Method: Subjective sleepiness
Pros:
- Provides an indication of productivity.
- Can be assessed very quickly and easily (1 question).
Cons:
- Just provides an indication.
- Due to low reliability, may need to be measured multiple times.

Method: Job satisfaction
Pros:
- Provides an indication of productivity.
- Is one of the most commonly used measures.
- Can be assessed very quickly and easily if necessary (1 question).
Cons:
- Just provides an indication.
- Relationship to productivity may not be as strong as people think.
- If multiple questions are used, it may take more time.

Method: Job engagement
Pros:
- Provides an indication of productivity.
- Government already measures it.
Cons:
- Just provides an indication.
- May be more strongly affected by other factors, which could hide environmental effects.
- Relatively weak evidence linking it to environmental effects.
- Use of different surveys may make comparisons difficult.

Method: Intention to quit
Pros:
- Provides an indication of possible productivity costs, i.e. turnover.
- Can be assessed very quickly and easily (1 question).
Cons:
- Just provides an indication.
- Distorted by things like restructuring; may be difficult to detect effects past confounding factors.

Method: Turnover
Pros:
- May allow estimation of some costs.
- Can be used with surveys without needing people to give more time.
Cons:
- May need long periods to get a reliable average.
- Distorted by things like restructuring.
- May be difficult to detect effects past confounding factors.
- Accuracy of organisational records may be questionable.

Table 5: Summary of advantages and disadvantages of various productivity measures
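The point that different absenteeism indices can give different results is easy to see in a small sketch. The absence records below are hypothetical and chosen for illustration only:

```python
def time_lost_rate(spells, scheduled_days: int) -> float:
    """Fraction of scheduled working days lost to absence.

    `spells` is a list of absence lengths in days, one entry per spell.
    """
    return sum(spells) / scheduled_days

def frequency_rate(spells) -> int:
    """Number of separate absence spells, regardless of their length."""
    return len(spells)

# Hypothetical records over one year of 220 scheduled working days:
# one occupant with a single long absence, another with many short ones.
anna = [10]              # one 10-day spell
ben = [2, 2, 2, 2, 2]    # five 2-day spells

# Both lose the same total time, so a time-lost index treats them
# identically, while a frequency index ranks them very differently.
same_time_lost = time_lost_rate(anna, 220) == time_lost_rate(ben, 220)
```

Frequency-based indices are sometimes preferred because short, frequent spells are thought to be more discretionary than long ones, which is precisely why the two indices can point in different directions.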
4.2 What are the key environmental factors affecting productivity?
Key environmental factors affecting productivity include thermal conditions, indoor air quality,
acoustics, lighting, workstation design and ergonomics, and the amount of control people have over
their environment.
Ultimately, these general factors touch on almost every aspect of office design. For example, indoor
air quality is affected by ventilation, location, occupant density, maintenance, cleaning, material
selection, where pollution sources are placed, and the general plan of the building.
A summary of these factors is in Section 3.1.
4.3 Are occupant surveys the best method for measuring and comparing
productivity?
With regards to the assessment of environmental conditions, and their effects on people, the answer
is yes. Occupant surveys are the best way to get a broad picture of how the occupants are
responding to the building, and how well they think it is serving their needs. This is vital because
many productivity indicators may be influenced by more than just the environment. An occupant
survey can confirm whether it is likely to be the building that is causing any identified effects
(rather than some other factor), as well as identifying problem areas.
Objective measurements and observations are used to define the environmental conditions and
design elements that people are responding to. They allow people to learn lessons about the effects
and success of design decisions, and define the specifics of problems that have been identified,
allowing them to be fixed. Thus, they complement occupant surveys.
For the measurement of productivity effects, it depends on what exactly one is trying to measure.
For most of the measures, the survey is indeed the best method, being a relatively simple and cheap
way of getting subjective reports from a large number of people. Some measures, however, may be
measured differently: absenteeism and turnover data may be acquired from organisational records;
computer activity is passively monitored by programs; and cognitive performance tests are tests
rather than surveys.
An occupant survey, such as those of the BUS or CBE, which measures both environmental
satisfaction and the perceived effects on productivity, is, however, an effective and practical method
for getting an indication of the productivity effects of a building. The method is very commonly
used, and studies over the years have consistently shown a high correlation between satisfaction
with environmental conditions and the perceived effects on occupant productivity and health — a
relationship which is corroborated by controlled laboratory studies. An occupant survey assessing
the environmental conditions is necessary to have any real confidence in the presence of possible
effects. Therefore, it may be reasonably suggested that if any single method were to be used, such
an occupant survey would be the best method for assessing productivity effects.
It should be emphasised, however, that the occupant surveys just provide an indication. They can
indicate if the building is probably having an effect on productivity, and if it is likely to be “small”
or “large”. They cannot, however, confidently say that there is, for example, a 10% improvement in
productivity. The specific measures of productivity have not been validated due to the lack of any
clear definition or standard measure of office productivity. Moreover, the validity of a numerical
estimate of productivity is questionable when many aspects are not readily countable. It should also
be noted that there may be considerable variation in reported effects. While surveys may report that
on average productivity is improved by a building, closer examination may reveal that, say, a third
of the occupants were actually reporting negative effects. This would indicate a need to improve
parts of the building as much as anything else, as well as suggesting that the mean productivity
effect of the building may not be true for everyone. Ideally, an occupant survey is not just a means
of “scoring” a building, but is also a tool to enable one to maximise the utility of the building for as
many of the occupants as possible.
While an occupant survey may be adequate to provide indications of productivity effects on its own,
other measures may still be valuable. Measures such as job satisfaction and absenteeism provide
indications of likely effects on productivity and areas that may be considered important to
organisational outcomes. Moreover, if positive effects were found on multiple factors, such as
absenteeism and an occupant survey, a stronger argument can be made that a building is providing
valuable benefits. However, people’s time is limited, and they may not be willing to do a lot of tests
and surveys. An occupant survey such as the BUS may take up most of the time people are willing
to spend, leaving little room for additional tests. Cognitive performance tests, and mood surveys,
might not be practical, as they may require too much work from people, or may vary too much from
day to day. However, there are some factors that can be measured simply and quickly with a few
questions, such as job satisfaction and intention-to-quit. There are also a number of factors that are
already measured by the government, such as job engagement and absenteeism, and it could be
useful, and expedient, to bring that data together to enable possible environmental effects to be
identified.
In order to define a robust productivity evaluation technique, it is necessary to rigorously test the
evaluation tool. If such an evaluation exercise is to be undertaken, then merely finding a correlation
between two independent measures of productivity, such as occupant surveys and absenteeism, is
not considered sufficient demonstration of corroborative evidence. Often in these circumstances a
third independent measure is used to triangulate the result, confirming that the correlation between
the first two measures is not a coincidence. Such independent measures exist for workplace
productivity: cognitive performance tests, or health surveys, could be used. However, using too
many of these measures could consume too much time and risk low participation as a result of
survey fatigue. It is not necessary that an operational tool incorporate this triangulation. It would be
sufficient to use this triangulation approach during the development of an operational tool based
upon, say, occupant surveys and absenteeism. Similarly, it would be a good idea to make sure that
the tests are responsive to objective changes in the environmental conditions. It may be argued that
the literature already provides such evidence. However, it would still be important to confirm the
correlations as part of the process of making any tools operational.
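The triangulation step described above amounts to checking that the pairwise correlations among three independent measures are mutually consistent. A minimal sketch using the Pearson coefficient follows; the per-building scores and measure names are invented for illustration:

```python
import math

def pearson(x, y) -> float:
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical per-building scores for three independent measures.
survey_productivity = [3.1, 3.8, 2.9, 4.2, 3.5]   # mean perceived productivity
absenteeism_days    = [9.0, 6.5, 10.2, 5.1, 7.8]  # expect a negative relation
cognitive_scores    = [51, 58, 49, 63, 55]         # mean test performance

r_survey_absence = pearson(survey_productivity, absenteeism_days)
r_survey_cog = pearson(survey_productivity, cognitive_scores)
r_absence_cog = pearson(absenteeism_days, cognitive_scores)
# If all three pairwise correlations point the same way, the agreement
# between any two measures is less likely to be coincidental.
```

In a real validation exercise the sample would need to be far larger than five buildings, and significance testing would be required; the sketch only shows the shape of the consistency check.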
5 REFERENCES
ABS consulting. (2013). Overall Liking Score Questionnaire. OLS survey. Retrieved 13 June 2013,
from http://www.ols-survey.com/
Akerstedt, T., & Gillberg, M. (1990). Subjective and objective sleepiness in the active individual.
The International journal of neuroscience, 52(1-2), 29–37.
Alba, J. W., & Hutchinson, J. W. (2000). Knowledge calibration: What consumers know and what
they think they know. Journal of Consumer Research, 27(2), 123–156.
Alby, V. (1994). Productivity: Measurement and management. Transactions of AACE International,
1994, MAT4.1.
Arens, E., Humphreys, M. A., de Dear, R., & Zhang, H. (2010). Are ‘class A’ temperature
requirements realistic or desirable? Building and Environment, 45(1), 4–10.
doi:10.1016/j.buildenv.2009.03.014
Aries, M. (2005). Human Lighting Demands - Healthy Lighting in an Office Environment.
Technische Universiteit Eindhoven, Eindhoven. Retrieved from
http://alexandria.tue.nl/extra2/200512454.pdf
Aries, M. B. C., Veitch, J. A., & Newsham, G. R. (2010). Windows, view, and office characteristics
predict physical and psychological discomfort. Journal of Environmental Psychology, 30(4),
533–541. doi:10.1016/j.jenvp.2009.12.004
Attaran, M., & Wargo, B. D. (1999). Succeeding with ergonomics in computerized offices. Work
Study, 48(3), 92–99.
Baird, G. (2001). Post-occupancy evaluation and Probe: a New Zealand perspective. Building
Research & Information, 29(6), 469–472. doi:10.1080/09613210110072656
Baird, G., Gray, J., Isaacs, N., Kernohan, D., & McIndoe, G. (Eds.). (1996). Building evaluation
techniques. New York: McGraw-Hill.
Barrett, L. F., & Russell, J. A. (1998). Independence and bipolarity in the structure of current affect.
Journal of Personality and Social Psychology, 74(4), 967–984. doi:10.1037/0022-3514.74.4.967
Barry, T. ‘Teta’, Kemper, P., & Brannon, S. D. (2008). Measuring Worker Turnover in Long-Term
Care: Lessons From the Better Jobs Better Care Demonstration. The Gerontologist, 48(3),
394–400.
Begemann, S. H. A., van den Beld, G. J., & Tenner, A. D. (1997). Daylight, artificial light and
people in an office environment, overview of visual and biological responses. International
Journal of Industrial Ergonomics, 20(3), 231–239. doi:10.1016/s0169-8141(96)00053-4
Belli, R. F., Schwarz, N., Singer, E., & Talarico, J. (2000). Decomposition can harm the accuracy of
behavioural frequency reports. Applied Cognitive Psychology, 14(4), 295–308.
doi:http://dx.doi.org/10.1002/1099-0720(200007/08)14:4
Bergström, G., Bodin, L., Hagberg, J., Aronsson, G., & Josephson, M. (2009). Sickness
Presenteeism Today, Sickness Absenteeism Tomorrow? A Prospective Study on Sickness
Presenteeism and Future Sickness Absenteeism. Journal of Occupational and
Environmental Medicine, 51(6), 629–638. doi:10.1097/JOM.0b013e3181a8281b
Boerstra, A., Beuker, T., Loomans, M., & Hensen, J. (2013). Impact of available and perceived
control on comfort and health in European offices. Architectural Science Review, 0(0), 1–12.
doi:10.1080/00038628.2012.744298
Bommer, W. H., Johnson, J. L., Rich, G. A., Podsakoff, P. M., & MacKenzie, S. B. (1995). On the
interchangeability of objective and subjective measures of employee performance: a meta-analysis. Personnel Psychology, 48(3), 587.
Boray, P. F., Gifford, R., & Rosenblood, L. (1989). Effects of warm white, cool white and full-spectrum fluorescent lighting on simple cognitive performance, mood and ratings of others.
Journal of Environmental Psychology, 9(4), 297–307. doi:10.1016/S0272-4944(89)80011-8
Boyce, P. R. (2003). Human Factors in Lighting (2nd ed.). London ; New York: Taylor & Francis.
Boyce, P. R., Hunter, C., & Howlett, O. (2003). The Benefits of Daylight through Windows. U.S.
Department of Energy. Retrieved from
http://www.lrc.rpi.edu/programs/daylighting/pdf/DaylightBenefits.pdf
Boyce, P. R., Veitch, J. A., Newsham, G. R., Jones, C. C., Heerwagen, J., Myer, M., & Hunter, C.
M. (2006a). Lighting quality and office work: two field simulation experiments: Authors’
response. Lighting Research and Technology, 38(4), 377–378.
doi:10.1177/147709380603800421
Boyce, P. R., Veitch, J. A., Newsham, G. R., Jones, C. C., Heerwagen, J., Myer, M., & Hunter, C.
M. (2006b). Lighting quality and office work: two field simulation experiments. Lighting
Research and Technology, 38(3), 191–223. doi:10.1191/1365782806lrt161oa
Brand, J. L., & Smith, T. J. (2005). Effects of Reducing Enclosure on Perceptions of Occupancy
Quality, Job Satisfaction, and Job Performance in Open-Plan Offices. Proceedings of the
Human Factors and Ergonomics Society Annual Meeting, 49(8), 818–820.
doi:10.1177/154193120504900806
Bright, L. (2008). Does Public Service Motivation Really Make a Difference on the Job Satisfaction
and Turnover Intentions of Public Employees? The American Review of Public
Administration, 38(2), 149–166. doi:10.1177/0275074008317248
Brooks, A., Hagen, S. E., Sathyanarayanan, S., Schultz, A. B., & Edington, D. W. (2010).
Presenteeism. Journal of Occupational and Environmental Medicine, 52(11), 1055–1067.
doi:10.1097/JOM.0b013e3181f475cc
Bruhns, H. (1996). CBPR Checklist. In G. Baird (Ed.), Building evaluation techniques (pp. 141–
160). New York: McGraw-Hill.
Brutus, S., & Derayeh, M. (2002). Multisource assessment programs in organizations: An insider’s
perspective. Human Resource Development Quarterly, 13(2), 187–202.
Building Use Studies. (2011a). BUS Occupant Survey. BUS Methodology.
Building Use Studies. (2011b). The Building Use Studies (BUS) Occupant Survey: Origins and
Approach Q&A. Retrieved from
https://docs.google.com/viewer?a=v&pid=gmail&attid=0.4&thid=13f0e2580bc5ee9c&mt=a
pplication/pdf&url=https://mail.google.com/mail/u/0/?ui%3D2%26ik%3D1436a24288%26
view%3Datt%26th%3D13f0e2580bc5ee9c%26attid%3D0.4%26disp%3Dsafe%26zw&sig=
AHIEtbQSFAXFAOWEJbxnEq28m1OCjX_xxQ
Burson, K. A., Larrick, R. P., & Klayman, J. (2006). Skilled or unskilled, but still unaware of it:
How perceptions of difficulty drive miscalibration in relative comparisons. Journal of
Personality and Social Psychology, 90(1), 60–77. doi:http://dx.doi.org/10.1037/0022-3514.90.1.60
Caillier, J. G. (2011). I Want to Quit: A Closer Look at Factors That Contribute to the Turnover
Intentions of State Government Employees. State and Local Government Review, 43(2),
110–122. doi:10.1177/0160323X11403325
Çakir, A., & Çakir, G. (1998). Light and Health. Berlin: ERGONOMIC Institute. Retrieved from
http://www.healthylight.de/Light_and_Health/Documents_files/1LightandHealth.pdf
Campion, M. A. (1991). Meaning and measurement of turnover: Comparison of alternative
measures and recommendations for research. Journal of Applied Psychology, 76(2), 199–
212. doi:http://dx.doi.org/10.1037/0021-9010.76.2.199
Castle, N. G. (2006). Measuring Staff Turnover in Nursing Homes. The Gerontologist, 46(2), 210–219.
Center for the Built Environment. (2013). Center for the Built Environment: Occupant Indoor
Environmental Quality (IEQ) Survey. Retrieved 17 June 2013, from
http://www.cbe.berkeley.edu/research/survey.htm
Chadwick-Jones, J. K., Brown, C. A., Nicholson, N., & Sheppard, C. (1971). Absence Measures: Their Reliability and Stability in an Industrial Setting. Personnel Psychology, 24(3), 463.
Charles, K. E., Danforth, A., Veitch, J. A., Zwierzchowski, C., Johnson, B., & Pero, K. (2004).
Workstation design for organizational productivity (No. NRCC 47343). Ottawa, Canada:
NRC Institute for Research in Construction and Public Works & Government Services
Canada. Retrieved from http://archive.nrc-cnrc.gc.ca/obj/irc/doc/pubs/nrcc47343/nrcc47343.pdf
Chaudhury, H., Mahmood, A., & Valente, M. (2009). The Effect of Environmental Design on
Reducing Nursing Errors and Increasing Efficiency in Acute Care Settings. Environment
and Behavior, 41(6), 755–786. doi:10.1177/0013916508330392
Christian, M. S., Garza, A. S., & Slaughter, J. E. (2011). Work Engagement: A Quantitative Review
and Test of Its Relations with Task and Contextual Performance. Personnel Psychology,
64(1), 89–136. doi:10.1111/j.1744-6570.2010.01203.x
Clausen, G., & Wyon, D. P. (2008). The Combined Effects of Many Different Indoor
Environmental Factors on Acceptability and Office Work Performance. HVAC&R Research,
14(1), 103–113. doi:10.1080/10789669.2008.10390996
Clements-Croome, D., & Kaluarachchi, Y. (2000). Assessment and Measurement of Productivity.
In Creating the Productive Workplace (1st ed., pp. 129–166). E. & FN Spon.
Clements-Croome, Derek (Ed.). (2006a). Creating the Productive Workplace (2nd ed.). Hoboken:
Taylor and Francis.
Clements-Croome, Derek. (2006b). Indoor environment and productivity. In Derek Clements-Croome (Ed.), Creating the Productive Workplace (2nd ed., pp. 25–54). Hoboken: Taylor
and Francis.
Connolly, T., Jessup, L. M., & Valacich, J. S. (1990). Effects of Anonymity and Evaluative Tone on
Idea Generation in Computer-Mediated Groups. Management Science, 36(6), 689–703.
doi:10.2307/2631901
Cunningham, M. R. (1979). Weather, Mood, and Helping Behavior: Quasi Experiments with the
Sunshine Samaritan. Journal of Personality and Social Psychology, 37(11), 1947–1956.
doi:10.1037/0022-3514.37.11.1947
Dalal, R. S. (2005). A Meta-Analysis of the Relationship Between Organizational Citizenship
Behavior and Counterproductive Work Behavior. Journal of Applied Psychology, 90(6),
1241–1255. doi:http://dx.doi.org/10.1037/0021-9010.90.6.1241
Danielsson, C. B., & Bodin, L. (2008). Office Type in Relation to Health, Well-Being, and Job
Satisfaction Among Employees. Environment and Behavior, 40(5), 636–668.
doi:10.1177/0013916507307459
Dean, B., Aguilar, D., Shapiro, C., Orr, W. C., Isserman, J. A., Calimlim, B., & Rippon, G. A.
(2010). Impaired Health Status, Daily Functioning, and Work Productivity in Adults With
Excessive Sleepiness. Journal of Occupational and Environmental Medicine, 52(2).
Retrieved from
http://search.proquest.com/docview/212686254/13EEDF53312417A7687/29?accountid=14
782
Djellal, F., & Gallouj, F. (2013). The productivity challenge in services: measurement and strategic
perspectives. The Service Industries Journal, 1–18. doi:10.1080/02642069.2013.747519
Donald, I., & Siu, O.-L. (2001). Moderating the stress impact of environmental conditions: The
effect of organizational commitment in Hong Kong and China. Journal of Environmental
Psychology, 21(4), 353–368. doi:10.1006/jevp.2001.0229
Dow, G. (2003). Creativity Test: Torrance Tests of Creative Thinking (1962). Retrieved 30 April
2012, from http://www.indiana.edu/~bobweb/Handout/d3.ttct.htm
Dunning, D., Meyerowitz, J. A., & Holzberg, A. D. (1989). Ambiguity and self-evaluation: The role
of idiosyncratic trait definitions in self-serving assessments of ability. Journal of Personality
and Social Psychology, 57(6), 1082–1090. doi:http://dx.doi.org/10.1037/0022-3514.57.6.1082
Dykes, C. (2012). User Perception Benchmarks for Commercial and Institutional Buildings in New
Zealand. (Masters Thesis). Victoria University of Wellington, Wellington. Retrieved from
http://researcharchive.vuw.ac.nz/handle/10063/2091
Evans, R., Haryott, R., Haste, N., & Jones, A. (1998). The long term costs of owning and using
buildings. London: The Royal Academy of Engineering. Retrieved from
http://www.raeng.org.uk/news/publications/list/reports/The_Long-Term_Costs_of_Buildings.pdf
Fisher, C. D. (2003). Why do lay people believe that satisfaction and performance are correlated?
Possible sources of a commonsense theory. Journal of Organizational Behavior, 24(6),
753–777.
Fisher, C. D. (2008). What If We Took Within-Person Performance Variability Seriously?
Industrial and Organizational Psychology, 1(2), 185–189. doi:10.1111/j.1754-9434.2008.00036.x
Fisk, W. J. (2000). Health and productivity gains from better indoor environments and their
relationship with building energy efficiency. Annual Review of Energy and the Environment,
25, 537.
Folger, R., & Belew, J. (1985). Nonreactive measurement: A focus for research on absenteeism and
occupational stress. Research in Organizational Behavior, 7, 129–170.
Frontczak, M. (2011). Human comfort and self-estimated performance in relation to indoor
environmental parameters and building features (Ph.D. Thesis). Department of Civil
Engineering, Technical University of Denmark. Retrieved from
http://www.byg.dtu.dk/upload/institutter/byg/publications/rapporter/byg-r260.pdf
Goffin, R. D., & Gellatly, I. R. (2001). A multi-rater assessment of organizational commitment: are
self-report measures biased? Journal of Organizational Behavior, 22(4), 437–451.
doi:10.1002/job.94
Goncalo, J. A., Flynn, F. J., & Kim, S. H. (2010). Are Two Narcissists Better Than One? The Link
Between Narcissism, Perceived Creativity, and Creative Performance. Personality and
Social Psychology Bulletin, 36(11), 1484–1495. doi:10.1177/0146167210385109
Gosselin, E., Lemyre, L., & Corneil, W. (2013). Presenteeism and absenteeism: Differentiated
understanding of related phenomena. Journal of Occupational Health Psychology, 18(1),
75–86. doi:http://dx.doi.org/10.1037/a0030932
Gou, Z., Prasad, D., & Siu-Yu Lau, S. (2013). Are green buildings more satisfactory and
comfortable? Habitat International, 39, 156–161. doi:10.1016/j.habitatint.2012.12.007
Hacker, D. J., Bol, L., Horgan, D. D., & Rakow, E. A. (2000). Test prediction and performance in a
classroom context. Journal of Educational Psychology, 92(1), 160–170.
doi:http://dx.doi.org/10.1037/0022-0663.92.1.160
Halpern, M., Shikiar, R., Rentz, A., & Khan, Z. (2000). Health and Work Questionnaire. Tobacco
Control. Retrieved from
http://tobaccocontrol.bmj.com/content/suppl/2001/09/13/10.3.233.DC1/halpern.pdf
Halpern, M., Shikiar, R., Rentz, A., & Khan, Z. (2001). Impact of smoking status on workplace
absenteeism and productivity. Tobacco Control, 10(3), 233–238. doi:10.1136/tc.10.3.233
Hammer, T. H., & Landau, J. C. (1981). Methodological Issues in the Use of Absence Data.
Journal of Applied Psychology, 66(5), 574.
Hancock, P. A., Ross, J. M., & Szalma, J. L. (2007). A Meta-Analysis of Performance Response
Under Thermal Stressors. Human Factors: The Journal of the Human Factors and
Ergonomics Society, 49(5), 851–877. doi:10.1518/001872007X230226
Harter, J. K., Schmidt, F. L., & Hayes, T. L. (2002). Business-Unit-Level Relationship Between
Employee Satisfaction, Employee Engagement, and Business Outcomes: A Meta-Analysis.
Journal of Applied Psychology, 87(2), 268–279. doi:10.1037/0021-9010.87.2.268
Haynes, B. P. (2008a). The impact of office layout on productivity. Journal of Facilities
Management, 6(3), 189–201.
Haynes, B. P. (2008b). Office productivity: A self-assessed approach to office evaluation. Built
Environment Division, Faculty of Development and Society, Sheffield Hallam University.
Retrieved from
http://www.prres.net/papers/Haynes_Office_Productivity_A_Self_Assessed_Approach_To_
Office_Evaluation.pdf
Haynes, B. P. (2008c). An evaluation of the impact of the office environment on productivity.
Facilities, 26(5/6), 178–195. doi:http://dx.doi.org/10.1108/02632770810864970
Hedge, A., Sims, W. R., & Becker, F. D. (1995). Effects of lensed-indirect and parabolic lighting on
the satisfaction, visual health, and productivity of office workers. Ergonomics, 38(2), 260–
290. doi:10.1080/00140139508925103
Hedge, Alan, & Gaygen, D. E. (2010). Indoor Environment Conditions and Computer Work in an
Office. HVAC&R Research, 16(2), 123–138. doi:10.1080/10789669.2010.10390897
Heneman, R. L. (1986). The Relationship Between Supervisory Ratings and Results-Oriented
Measures of Performance: A Meta-Analysis. Personnel Psychology, 39(4), 811–826.
doi:10.1111/j.1744-6570.1986.tb00596.x
Heschong Mahone Group. (2003). Windows and Offices: A study of office worker performance and
the indoor environment. California Energy Commission. Retrieved from http://www.h-m-g.com/downloads/Daylighting/A-9_Windows_Offices_2.6.10.pdf
Hubalek, S., Brink, M., & Schierz, C. (2010). Office workers’ daily exposure to light and its
influence on sleep quality and mood. Lighting Research and Technology, 42(1), 33–50.
doi:10.1177/1477153509355632
Humphreys, M. A. (2005). Quantifying occupant comfort: are combined indices of the indoor
environment practicable? Building Research & Information, 33(4), 317–325.
doi:10.1080/09613210500161950
Humphreys, M. A., & Nicol, J. F. (2007). Self-Assessed Productivity and the Office Environment:
Monthly Surveys in Five European Countries. ASHRAE Transactions, 113, 606–616.
Humphreys, M. A., Nicol, J. F., & Raja, I. A. (2007). Field Studies of Indoor Thermal Comfort and
the Progress of the Adaptive Approach. Advances in Building Energy Research, 1(1), 55–88.
doi:10.1080/17512549.2007.9687269
Isen, A. M. (2001). An Influence of Positive Affect on Decision Making in Complex Situations:
Theoretical Issues With Practical Implications. Journal of Consumer Psychology, 11(2), 75–
85. doi:10.1207/S15327663JCP1102_01
Isen, A. M., Daubman, K. A., & Nowicki, G. P. (1987). Positive Affect Facilitates Creative
Problem Solving. Journal of Personality and Social Psychology, 52(6), 1122–1131.
doi:10.1037/0022-3514.52.6.1122
Jääskeläinen, A., & Laihonen, H. (2013). Overcoming the specific performance measurement
challenges of knowledge-intensive organizations. International Journal of Productivity and
Performance Management, 62(4), 350–363. doi:10.1108/17410401311329607
Jessup, L. M., & Connolly, T. (1993). The Effects of Interaction Frequency on the Productivity and
Satisfaction of Automated Problem-Solving Groups. In Proceeding of the Twenty-Sixth
Hawaii International Conference on System Sciences (Vol. 4). Retrieved from
http://interruptions.net/literature/Jessup-HICSS93.pdf
Johns, G. (1994a). Absenteeism estimates by employees and managers: Divergent perspectives and
self-serving perceptions. Journal of Applied Psychology, 79(2), 229–239.
doi:http://dx.doi.org/10.1037/0021-9010.79.2.229
Johns, G. (1994b). How often were you absent? A review of the use of self-reported absence data.
Journal of Applied Psychology, 79(4), 574–591. doi:http://dx.doi.org/10.1037/0021-9010.79.4.574
Johns, G. (2010). Presenteeism in the workplace: A review and research agenda. Journal of
Organizational Behavior, 31(4). Retrieved from
http://search.proquest.com/docview/224881868/13E7D5FBB165CBC5C2A/9?accountid=14
782
Johns, G., & Xie, J. L. (1998). Perceptions of absence from work: People’s Republic of China
versus Canada. Journal of Applied Psychology, 83(4), 515–530.
Judge, T. A., Thoresen, C. J., Bono, J. E., & Patton, G. K. (2001). The job satisfaction–job
performance relationship: A qualitative and quantitative review. Psychological Bulletin,
127(3), 376–407. doi:http://dx.doi.org/10.1037/0033-2909.127.3.376
Kaczmarczyk, J., Melikov, A., & Fanger, P. O. (2004). Human response to personalized ventilation
and mixing ventilation. Indoor Air, 14, 17–29. doi:10.1111/j.1600-0668.2004.00300.x
Kaida, K., Takahashi, M., Åkerstedt, T., Nakata, A., Otsuka, Y., Haratani, T., & Fukasawa, K.
(2006). Validation of the Karolinska sleepiness scale against performance and EEG
variables. Clinical Neurophysiology, 117(7), 1574–1581. doi:10.1016/j.clinph.2006.03.011
Kamarulzaman, N., Saleh, A. A., Hashim, S. Z., Hashim, H., & Abdul-Ghani, A. A. (2011). An
Overview of the Influence of Physical Office Environments Towards Employees. Procedia
Engineering, 20, 262–268. doi:10.1016/j.proeng.2011.11.164
Keller, S. M. (2009). Effects of Extended Work Shifts and Shift Work on Patient Safety,
Productivity, and Employee Health. AAOHN Journal, 57(12), 497–502; quiz 503–4.
Kemppila, S., & Lonnqvist, A. (2003). Subjective productivity measurement. Journal of American
Academy of Business, Cambridge, 2(2), 531–537.
Kennedy, E. J., Lawton, L., & Plumlee, E. L. (2002). Blissful ignorance: The problem of
unrecognized incompetence and academic performance. Journal of Marketing Education,
24(3), 243–252.
Kessler, R. C., Barber, C., Beck, A., Berglund, P., Cleary, P. D., McKenas, D., … Wang, P. (2003).
The World Health Organization Health and Work Performance Questionnaire (HPQ).
Journal of occupational and environmental medicine / American College of Occupational
and Environmental Medicine, 45(2), 156–174.
Kildesø, J., Wyon, D., Skov, T., & Schneider, T. (1999). Visual analogue scales for detecting
changes in symptoms of the sick building syndrome in an intervention study. Scandinavian
journal of work, environment & health, 25(4), 361–367.
Kline, T. J. B., & Sulsky, L. M. (2009). Measurement and Assessment Issues in Performance
Appraisal. Canadian Psychology, 50(3), 161–171.
Knez, I. (1995). Effects of indoor lighting on mood and cognition. Journal of Environmental
Psychology, 15(1), 39–51. doi:10.1016/0272-4944(95)90013-6
Knez, I., & Enmarker, I. (1998). Effects of Office Lighting on Mood and Cognitive Performance
and a Gender Effect in Work-Related Judgment. Environment and Behavior, 30(4), 553–
567. doi:10.1177/001391659803000408
Knez, I., & Kers, C. (2000). Effects of Indoor Lighting, Gender, and Age on Mood and Cognitive
Performance. Environment and Behavior, 32(6), 817–831. doi:10.1177/0013916500326005
Koopmans, L., Bernaards, C. M., Hildebrandt, V. H., Schaufeli, W. B., de Vet Henrica, C. W., &
van der Beek, A. J. (2011). Conceptual Frameworks of Individual Work Performance.
Journal of Occupational and Environmental Medicine, 53(8), 856–866.
doi:10.1097/JOM.0b013e318226a763
Kruger, J. (1999). Lake Wobegon be gone! The ‘below-average effect’ and the egocentric nature of
comparative ability judgments. Journal of Personality and Social Psychology, 77(2), 221–
232. doi:http://dx.doi.org/10.1037/0022-3514.77.2.221
Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing
one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social
Psychology, 77(6), 1121–1134. doi:http://dx.doi.org/10.1037/0022-3514.77.6.1121
Küller, R., Ballal, S., Laike, T., Mikellides, B., & Tonello, G. (2006). The impact of light and
colour on psychological mood: a cross-cultural study of indoor work environments.
Ergonomics, 49(14), 1496–1507. doi:10.1080/00140130600858142
Laitinen, H., Hannula, M., Lankinen, T., Monni, T.-M., Rasa, P.-L., Räsänen, T., & Visuri, M.
(1999). The quality of the work environment and labor productivity in metal product
manufacturing companies. In D. Sumanth (Ed.), Productivity and Quality Management (pp.
449–459). Vaasa: Ykkösoffset Oy.
Lan, L., Wargocki, P., & Lian, Z. (2011). Quantitative measurement of productivity loss due to
thermal discomfort. Energy and Buildings, 43(5), 1057–1062.
doi:10.1016/j.enbuild.2010.09.001
Larsen, L., Adams, J., Deal, B., Kweon, B. S., & Tyler, E. (1998). Plants in the Workplace: The
Effects of Plant Density on Productivity, Attitudes, and Perceptions. Environment and
Behavior, 30(3), 261–281. doi:10.1177/001391659803000301
Leaman, A. (2013). Personal communication.
Leaman, A., & Bordass, B. (1999). Productivity in buildings: the ‘killer’ variables. Building
Research & Information, 27(1), 4–19. doi:10.1080/096132199369615
Leaman, A., & Bordass, B. (2006). Productivity in buildings: the ‘killer’ variables. In Derek
Clements-Croome (Ed.), Creating the Productive Workplace (2nd ed., pp. 153–180).
Hoboken: Taylor and Francis.
Leaman, A., Stevenson, F., & Bordass, B. (2010). Building evaluation: practice and principles.
Building Research & Information, 38(5), 564–577. doi:10.1080/09613218.2010.495217
Leather, P., Pyrgas, M., Beale, D., & Lawrence, C. (1998). Windows in the Workplace.
Environment and Behavior, 30(6), 739–762. doi:10.1177/001391659803000601
Lee, Y. S., & Guerin, D. A. (2010). Indoor environmental quality differences between office types
in LEED-certified buildings in the US. Building and Environment, 45(5), 1104–1112.
doi:10.1016/j.buildenv.2009.10.019
Lehrl, S., Gerstmeyer, K., Jacob, J. H., Frieling, H., Henkel, A. W., Meyrer, R., … Bleich, S. (2007).
Blue light improves cognitive performance. Journal of Neural Transmission (Vienna,
Austria: 1996), 114(4), 457–460. doi:10.1007/s00702-006-0621-4
Lenoir, A., Baird, G., & Garde, F. (2012). Post-occupancy evaluation and experimental feedback of
a net zero-energy building in a tropical climate. Architectural Science Review, 55(3), 156–
168. doi:10.1080/00038628.2012.702449
Léonard, C., Dolan, S. L., & Arsenault, A. (1990). Longitudinal examination of the stability and
variability of two common measures of absence. Journal of Occupational Psychology, 63(4),
309–316. doi:10.1111/j.2044-8325.1990.tb00532.x
Ljungberg, J. K., & Neely, G. (2007). Stress, subjective experience and cognitive performance
during exposure to noise and vibration. Journal of Environmental Psychology, 27(1), 44–54.
doi:10.1016/j.jenvp.2006.12.003
Lofland, J. H., Pizzi, L., & Frick, K. D. (2004). A review of health-related workplace productivity
loss instruments. PharmacoEconomics, 22(3), 165–184.
Loftness, V., Hartkopf, V., Gurtekin, B., Hansen, D., & Hitchcock, R. (2003). Linking Energy to
Health and Productivity in the Built Environment: Evaluating the Cost-Benefits of High
Performance Building and Community Design for Sustainability, Health and Productivity.
Presented at the Greenbuild Conference, Center for Building Performance and Diagnostics,
Carnegie Mellon. Retrieved from
http://www.usgbc.org/Docs/Archive/MediaArchive/207_Loftness_PA876.pdf
Mak, C. M., & Lui, Y. P. (2012). The effect of sound on office productivity. Building Services
Engineering Research and Technology, 33(3), 339–345. doi:10.1177/0143624411412253
Mann, S. L., Budworth, M.-H., & Ismaila, A. S. (2012). Ratings of counterproductive performance:
the effect of source and rater behavior. International Journal of Productivity and
Performance Management, 61(2), 142–156. doi:10.1108/17410401211194653
Marincic, J. L. (2011). Vague quantifiers of behavioral frequency: An investigation of the nature
and consequences of differences in interpretation (Ph.D. Thesis). The University of Nebraska-Lincoln, Nebraska, United States. Retrieved from
http://search.proquest.com/docview/905289410/abstract?accountid=14782
Mattke, S., Balakrishnan, A., Bergamo, G., & Newberry, S. J. (2007). A review of methods to
measure health-related productivity loss. The American journal of managed care, 13(4),
211–217.
Meehan, M. (2013). Measuring productivity in the office workplace. Wellington, N.Z: New Zealand
Government Property Management Centre of Expertise.
Mehrabian, A. (1974). An approach to environmental psychology. Cambridge: M.I.T. Press.
Meijer, E. M., Frings-Dresen, M. H. W., & Sluiter, J. K. (2009). Effects of office innovation on
office workers’ health and performance. Ergonomics, 52(9), 1027–1038.
doi:10.1080/00140130902842752
Menon, G. (1997). Are the Parts Better than the Whole? The Effects of Decompositional Questions
on Judgments of Frequent Behaviors. Journal of Marketing Research, 34(3), 335–346.
doi:10.2307/3151896
Meyer, J. P., Paunonen, S. V., Gellatly, I. R., Goffin, R. D., & Jackson, D. N. (1989).
Organizational Commitment and Job Performance: It’s the Nature of the Commitment That
Counts. Journal of Applied Psychology, 74(1), 152.
Moezzi, M., & Goins, J. (2011). Text mining for occupant perspectives on the physical workplace.
Building Research & Information, 39(2), 169–182. doi:10.1080/09613218.2011.556008
Moore, D. A., & Cain, D. M. (2007). Overconfidence and underconfidence: When and why people
underestimate (and overestimate) the competition. Organizational Behavior and Human
Decision Processes, 103(2), 197–213. doi:10.1016/j.obhdp.2006.09.002
Morrell, K. M., Loan-Clarke, J., & Wilkinson, A. J. (2004). Organisational change and employee
turnover. Personnel Review, 33(2), 161–173.
Murphy, K. R. (2008a). Explaining the Weak Relationship Between Job Performance and Ratings
of Job Performance. Industrial and Organizational Psychology, 1(2), 148–160.
doi:10.1111/j.1754-9434.2008.00030.x
Murphy, K. R. (2008b). Perspectives on the Relationship Between Job Performance and Ratings of
Job Performance. Industrial and Organizational Psychology, 1(2), 197–205.
doi:10.1111/j.1754-9434.2008.00039.x
Newsham, G., Brand, J., Donnelly, C., Veitch, J., Aries, M., & Charles, K. (2009). Linking indoor
environment conditions to job satisfaction: a field study. Building Research & Information,
37(2), 129–147. doi:10.1080/09613210802710298
Newsham, G. R., Brand, J., Donnelly, C. L., Veitch, J. A., Aries, M. B. C., & Charles, K. E. (2009).
Linking indoor environment conditions to organizational productivity: a field study (No.
NRCC-49714). Ottawa, ON: National Research Council Canada. Retrieved from
http://archive.nrc-cnrc.gc.ca/obj/irc/doc/pubs/nrcc49714/nrcc49714.pdf
Newsham, G. R., Veitch, J. A., Arsenault, C. D., & Duval, C. L. (2004). Effect of dimming control
on office worker satisfaction and performance. In Proceedings of the IESNA Annual
Conference. Tampa, FL. Retrieved from http://www.nrc-cnrc.gc.ca/obj/irc/doc/pubs/nrcc47069/nrcc47069.pdf
Office Productivity Network. (2005). OPN Workplace Evaluation Survey. Office Productivity
Network. Retrieved 13 June 2013, from
http://www.officeproductivity.co.uk/files/OPN%20Survey.pdf
Oseland, N. (1999). Environmental Factors Affecting Office Worker Performance: A Review of
Evidence. London: Chartered Institution of Building Services Engineers: DETR.
Oseland, N. (2007). British Council for Offices guide to post-occupancy evaluation. London:
British Council for Offices.
Park, T.-Y., & Shaw, J. D. (2013). Turnover rates and organizational performance: A meta-analysis.
Journal of Applied Psychology, 98(2), 268–309. doi:http://dx.doi.org/10.1037/a0030723
Paulus, P. B., Dzindolet, M. T., Poletes, G., & Camacho, L. M. (1993). Perception of Performance
in Group Brainstorming: The Illusion of Group Productivity. Personality and Social
Psychology Bulletin, 19(1), 78–89. doi:10.1177/0146167293191009
Peretti, C., & Schiavon, S. (2011). Indoor environmental quality surveys. A brief literature review.
Retrieved from http://www.escholarship.org/uc/item/0wb1v0ss#page-3
Pilcher, J. J., Nadler, E., & Busch, C. (2002). Effects of hot and cold temperature exposure on
performance: a meta-analytic review. Ergonomics, 45(10), 682–698.
doi:10.1080/00140130210158419
Prasad, M., Wahlqvist, P., Shikiar, R., & Shih, Y.-C. T. (2004). A review of self-report instruments
measuring health-related work productivity: a patient-reported outcomes perspective.
PharmacoEconomics, 22(4), 225–244.
Rashid, M., & Zimring, C. (2008). A Review of the Empirical Literature on the Relationships
Between Indoor Environment and Stress in Health Care and Office Settings. Environment
and Behavior, 40(2), 151–190. doi:10.1177/0013916507311550
Raw, G., Garston, W., & Leaman, A. (1990). Further findings from the office environment survey:
productivity. In Proceedings of the 5th International Conference on Indoor Air Quality and
Climate (Vol. 1, pp. 231–236). Presented at the Indoor Air ’90, Ottawa, Canada.
Raw, G. J., Roys, M. S., Whitehead, C., & Tong, D. (1996). Questionnaire design for sick building
syndrome: An empirical comparison of options. Environment International, 22(1), 61–72.
doi:10.1016/0160-4120(95)00104-2
Reilly Associates. (2004). Work Productivity and Activity Impairment Questionnaire: General
Health V2.0 (WPAI:GH). Retrieved 3 June 2013, from
http://www.reillyassociates.net/WPAI_GH.html
Revicki, D. A., Irwin, D., Reblando, J., & Simon, G. E. (1994). The Accuracy of Self-Reported
Disability Days. Medical Care, 32(4), 401–404. doi:10.2307/3766027
Ritz, A. (2009). Public service motivation and organizational performance in Swiss federal
government. International Review of Administrative Sciences, 75(1), 53–78.
doi:10.1177/0020852308099506
Roelofsen, P. (2002). The impact of office environments on employee performance: The design of
the workplace as a strategy for productivity enhancement. Journal of Facilities Management,
1(3), 247–264. doi:10.1108/14725960310807944
Rowan, M. P., & Wright, P. C. (1995). Ergonomics is good for business. Facilities, 13(8), 18.
Roy, M. M., Christenfeld, N. J. S., & McKenzie, C. R. M. (2005). Underestimating the Duration of
Future Events: Memory Incorrectly Used or Memory Bias? Psychological Bulletin, 131(5),
738–756. doi:http://dx.doi.org/10.1037/0033-2909.131.5.738
Russell, J. A., & Mehrabian, A. (1977). Evidence for a three-factor theory of emotions. Journal of
Research in Personality, 11(3), 273–294. doi:10.1016/0092-6566(77)90037-X
Russell, J. A., & Snodgrass, J. (1991). Emotion and the environment. In D. Stokols & I. Altman
(Eds.), Handbook of Environmental Psychology (Vols. 1-2, Vol. 1, pp. 245–280). Malabar,
Fla: Krieger Pub. Co.
Ryvkin, D., Krajč, M., & Ortmann, A. (2012). Are the unskilled doomed to remain unaware?
Journal of Economic Psychology, 33(5), 1012–1031. doi:10.1016/j.joep.2012.06.003
Sagie, A. (1998). Employee Absenteeism, Organizational Commitment, and Job Satisfaction:
Another Look. Journal of Vocational Behavior, 52(2), 156–171.
doi:10.1006/jvbe.1997.1581
Salonen, H., Lahtinen, M., Lappalainen, S., Nevala, N., Knibbs, L. D., Morawska, L., & Reijula, K.
(2013). Physical characteristics of the indoor environment that affect health and wellbeing in
healthcare facilities: a review. Intelligent Buildings International, 5(1), 3–25.
doi:10.1080/17508975.2013.764838
Satish, U., Mendell, M. J., Shekhar, K., Hotchi, T., Sullivan, D., Streufert, S., & Fisk, W. (Bill) J.
(2012). Is CO2 an Indoor Pollutant? Direct Effects of Low-to-Moderate CO2 Concentrations
on Human Decision-Making Performance. Environmental Health Perspectives.
doi:10.1289/ehp.1104789
Scarpello, V., & Campbell, J. P. (1983). Job Satisfaction: Are All the Parts There? Personnel
Psychology, 36(3), 577.
Schultz, A. B., Chen, C.-Y., & Edington, D. W. (2009). The Cost and Impact of Health Conditions
on Presenteeism to Employers: A Review of the Literature. PharmacoEconomics, 27(5),
365–78. doi:10.2165/00019053-200927050-00002
Schultz, A. B., & Edington, D. W. (2007). Employee Health and Presenteeism: A Systematic
Review. Journal of Occupational Rehabilitation, 17(3), 547–79.
doi:10.1007/s10926-007-9096-x
Schwarz, N., Hippler, H.-J., Deutsch, B., & Strack, F. (1985). Response Scales: Effects of Category
Range on Reported Behavior and Comparative Judgments. The Public Opinion Quarterly,
49(3), 388–395. doi:10.2307/2748649
Sensharma, N. P., Woods, J. E., & Goodwin, A. K. (1998). Relationships between the indoor
environment and productivity: A literature review. ASHRAE Transactions, 104, 686.
Seppänen, O., Fisk, W. J., & Lei, Q. H. (2006a). Ventilation and performance in office work.
Indoor Air, 16(1), 28–36. doi:10.1111/j.1600-0668.2005.00394.x
Seppänen, O., Fisk, W. J., & Lei, Q. H. (2006b). Effect of Temperature on Task Performance in
Office Environment. In Proceedings Cold Climate HVAC conference. Moscow.
Seppänen, Olli, Fisk, W. J., & Lei, Q. H. (2006). Effect of temperature on task performance in
office environment. Lawrence Berkeley National Laboratory. Retrieved from
http://escholarship.org/uc/item/45g4n3rv
Sharma, H. C. (2002). Can students predict their scores in exams? Journal of Natural Resources
and Life Sciences Education, 31, 96.
Shaw, J. D., Delery, J. E., Jenkins, G. D., & Gupta, N. (1998). An organization-level analysis of
voluntary and involuntary turnover. Academy of Management Journal, 41(5), 511–525.
Shikiar, R., Halpern, M. T., Rentz, A. M., & Khan, Z. M. (2004). Development of the Health and
Work Questionnaire (HWQ): an instrument for assessing workplace productivity in relation
to worker health. Work (Reading, Mass.), 22(3), 219–229.
Smith, A. P., & Broadbent, D. E. (1980). Effects of Noise on Performance on Embedded Figures
Tasks. Journal of Applied Psychology, 65(2), 246–248. doi:10.1037/0021-9010.65.2.246
Smith, A., Tucker, M., & Pitt, M. (2011). Healthy, productive workplaces: towards a case for
interior plantscaping. Facilities, 29(5-6), 209–223.
doi:10.1108/02632771111120529
Smith, R., & Bradley, G. (1994). The influence of thermal conditions on teachers’ work and student
performance. Journal of Educational Administration, 32(1), 34.
Smith, T. J., & Orfield, S. J. (2007). Occupancy Quality Predictors of Office Worker Perceptions of
Job Productivity. Proceedings of the Human Factors and Ergonomics Society Annual
Meeting, 51(8), 539–543. doi:10.1177/154193120705100801
Springer, C. J. (1997). Ergonomics at the video display terminal: Problems, solutions, benefits.
Professional Safety, 42(3), 30–32.
Steel, R. P. (2003). Methodological and operational issues in the construction of absence variables.
Human Resource Management Review, 13(2), 243–251. doi:10.1016/S1053-4822(03)00015-9
Stewart, W. F., Ricci, J. A., & Leotta, C. (2004). Health-related lost productive time (LPT): Recall
interval and bias in LPT estimates. Journal of Occupational and Environmental Medicine,
46(6), S12–S22.
Stone, N. J. (1998). Windows and Environmental Cues on Performance and Mood. Environment
and Behavior, 30(3), 306–321. doi:10.1177/001391659803000303
Stone, N. J. (2001). Designing effective study environments. Journal of Environmental Psychology,
21(2), 179–190. doi:10.1006/jevp.2000.0193
Story, A. L., & Dunning, D. (1998). The More Rational Side of Self-Serving Prototypes: The
Effects of Success and Failure Performance Feedback. Journal of Experimental Social
Psychology, 34(6), 513–529. doi:10.1006/jesp.1998.1362
Strassmann, P. A. (2004). Defining and Measuring Information Productivity. New Canaan,
Connecticut: The Information Economics Press. Retrieved from
http://www.strassmann.com/pubs/cw/rankings/ip_rankings_v3.pdf
Sundell, J., Levin, H., Nazaroff, W. W., Cain, W. S., Fisk, W. J., Grimsrud, D. T., … Weschler, C. J.
(2011). Ventilation rates and health: multidisciplinary review of the scientific literature.
Indoor Air, 21(3), 191–204. doi:10.1111/j.1600-0668.2010.00703.x
Szalma, J. L., & Hancock, P. A. (2011). Noise effects on human performance: A meta-analytic
synthesis. Psychological Bulletin, 137(4), 682–707. doi:10.1037/a0023987
Västfjäll, D. (2002). Influences of current mood and noise sensitivity on judgments of noise
annoyance. The Journal of Psychology, 136(4), 357–70.
Veitch, J. A., Newsham, G. R., Boyce, P. R., & Jones, C. C. (2008). Lighting appraisal, well-being
and performance in open-plan offices: A linked mechanisms approach. Lighting Research
and Technology, 40(2), 133–151. doi:10.1177/1477153507086279
Veitch, J. A., Newsham, G. R., Mancini, S., & Arsenault, C. D. (2010). Lighting and office
renovation effects on employee and organizational well-being (No. NRC-IRC Research
Report RR-306). Ottawa, ON: NRC Institute for Research in Construction.
Veitch, J. A. (2008). Investigating and Influencing How Buildings Affect Health: Interdisciplinary
Endeavours. Canadian Psychology, 49(4), 281–288.
Veitch, J. A., Charles, K. E., Farley, K. M. J., & Newsham, G. R. (2007). A model of satisfaction
with open-plan office conditions: COPE field findings. Journal of Environmental
Psychology, 27(3), 177–189. doi:10.1016/j.jenvp.2007.04.002
Veitch, J. A., Gifford, R., & Hine, D. W. (1991). Demand characteristics and full spectrum lighting
effects on performance and mood. Journal of Environmental Psychology, 11(1), 87–95.
doi:10.1016/S0272-4944(05)80007-6
Veitch, J. A., Stokkermans, M. G. M., & Newsham, G. R. (2011). Linking Lighting Appraisals to
Work Behaviors. Environment and Behavior. doi:10.1177/0013916511420560
Viola, A. U., James, L. M., Schlangen, L. J. M., & Dijk, D.-J. (2008). Blue-enriched white light in
the workplace improves self-reported alertness, performance and sleep quality.
Scandinavian Journal of Work, Environment & Health, 34(4), 297–306.
Viswesvaran, C. (2002). Absenteeism and Measures of Job Performance: A Meta-Analysis.
International Journal of Selection and Assessment, 10(1-2), 12–17. doi:10.1111/1468-2389.00190
Viswesvaran, C., & Ones, D. S. (2000). Perspectives on Models of Job Performance. International
Journal of Selection and Assessment, 8(4), 216–226. doi:10.1111/1468-2389.00151
Viswesvaran, C., Ones, D. S., & Schmidt, F. L. (1996). Comparative analysis of the reliability of
job performance ratings. Journal of Applied Psychology, 81(5), 557.
Wang, N., & Boubekri, M. (2010). Investigation of declared seating preference and measured
cognitive performance in a sunlit room. Journal of Environmental Psychology, 30(2), 226–
238. doi:10.1016/j.jenvp.2009.12.001
Wanous, J. P., Reichers, A. E., & Hudy, M. J. (1997). Overall job satisfaction: How good are
single-item measures? Journal of Applied Psychology, 82(2), 247–252.
doi:10.1037/0021-9010.82.2.247
Wargocki, P., Sundell, J., Bischof, W., Brundrett, G., Fanger, P. O., Gyntelberg, F., … Wouters, P.
(2002). Ventilation and health in non-industrial indoor environments: report from a
European multidisciplinary scientific consensus meeting (EUROVEN). Indoor Air, 12(2),
113–128.
Wargocki, P., Frontczak, M., Schiavon, S., Goins, J., Arens, E., & Zhang, H. (2012). Satisfaction
and self-estimated performance in relation to indoor environmental parameters and building
features, 1(1). Retrieved from http://www.escholarship.org/uc/item/451326fk
Wargocki, P., Wyon, D. P., Baik, Y. K., Clausen, G., & Fanger, P. O. (1999). Perceived Air Quality,
Sick Building Syndrome (SBS) Symptoms and Productivity in an Office with Two Different
Pollution Loads. Indoor Air, 9(3), 165–179. doi:10.1111/j.1600-0668.1999.t01-1-00003.x
Wargocki, P., Wyon, D. P., Sundell, J., Clausen, G., & Fanger, P. O. (2000). The Effects of
Outdoor Air Supply Rate in an Office on Perceived Air Quality, Sick Building Syndrome
(SBS) Symptoms and Productivity. Indoor Air, 10(4), 222–236. doi:10.1034/j.1600-0668.2000.010004222.x
Warr, P., Cook, J., & Wall, T. (1979). Scales for the measurement of some work attitudes and
aspects of psychological well-being. Journal of Occupational Psychology, 52(2), 129–148.
doi:10.1111/j.2044-8325.1979.tb00448.x
Watson, D., Clark, L. A., & Tellegen, A. (1988). Development and validation of brief measures of
positive and negative affect: The PANAS scales. Journal of Personality and Social
Psychology, 54(6), 1063–1070. doi:10.1037/0022-3514.54.6.1063
Webb, A. R. (2006). Considerations for lighting in the built environment: Non-visual effects of
light. Energy and Buildings, 38(7), 721–727. doi:10.1016/j.enbuild.2006.03.004
Wefald, A. J., Mills, M. J., Smith, M. R., & Downey, R. G. (2012). A Comparison of Three Job
Engagement Measures: Examining their Factorial and Criterion-Related Validity. Applied
Psychology: Health and Well-Being, 4(1), 67–90. doi:10.1111/j.1758-0854.2011.01059.x
Wilkins, A. J., Nimmo-Smith, I., Slater, A. I., & Bedocs, L. (1989). Fluorescent lighting, headaches
and eyestrain. Lighting Research and Technology, 21(1), 11–18.
doi:10.1177/096032718902100102
Williams, C. R., & Livingstone, L. P. (1994). Another look at the relationship between performance
and voluntary turnover. Academy of Management Journal, 37(2), 269.
World Health Organisation. (2001). Health and Work Survey. The World Health Organisation
Health and Work Performance Questionnaire. Retrieved 23 April 2013, from
http://www.hcp.med.harvard.edu/hpq/ftpdir/HPQ%20Employee%20Version%2081810.pdf
Wright, T. A. (2006). The emergence of job satisfaction in organizational behavior: A historical
overview of the dawn of job attitude research. Journal of Management History, 12(3), 262–
277. doi:10.1108/17511340610670179
Wu, S. M., & Clements-Croome, D. (2005). Critical reliability issues for building services systems.
In L. Cui & A. H. C. Tsang (Eds.), Proceedings of the 4th International Conference on
Quality & Reliability (pp. 559–565). Beijing: Beijing Inst Technology Pr. Retrieved from
http://centaur.reading.ac.uk/12460/
Wyon, D. P. (1994). Current Indoor Climate Problems and Their Possible Solution. Indoor and
Built Environment, 3(3), 123–129. doi:10.1177/1420326X9400300304
Wyon, D. P. (1996). Indoor environmental effects on productivity. In IAQ ’96 Paths to Better
Building Environments (pp. 5–15). ASHRAE.
Wyon, D. P. (2004). The effects of indoor air quality on performance and productivity. Indoor
Air, 14(Suppl 7), 92–101. doi:10.1111/j.1600-0668.2004.00278.x
Zelenski, J. M., Murphy, S. A., & Jenkins, D. A. (2008). The Happy-Productive Worker Thesis
Revisited. Journal of Happiness Studies, 9(4), 521–537.
doi:10.1007/s10902-008-9087-4
Zimmerman, R. D., & Darnold, T. C. (2009). The impact of job performance on employee turnover
intentions and the voluntary turnover process: A meta-analysis and path model. Personnel
Review, 38(2), 142–158. doi:10.1108/00483480910931316
Victoria University of Wellington
School of Architecture
Document Register 3.61

LOGO OR ACRONYM: CBPR, Centre for Building Performance Research
RECIPIENT'S REF/CODE:
AUTHOR'S REF/CODE:
SPONSOR'S REF/CODE:
TITLE AND SUBTITLE OF REPORT: Measuring Productivity in the Office Workplace
REPORT DATE: July 2013
AUTHOR(S): Sullivan, J., Baird, G., Donn, M.
ISSN/ISBN NUMBER:
AUTHOR ORGANISATION (name and address): Centre for Building Performance Research,
Victoria University of Wellington, P.O. Box 600, Wellington, New Zealand
SPONSORING ORGANISATION (name and address): New Zealand Government Property
Management Centre of Expertise (PMCoE)
DISTRIBUTION, report issued to: Sponsor, research organisations and interested parties on request
DISTRIBUTION, report available from: Author organisation and Sponsor organisation
KEYWORDS: Productivity, Measurement, Post-occupancy Evaluation, Literature Review,
Office Buildings, Indoor Environmental Quality, Behaviour
ENQUIRIES/COMMENT TO: Director, Centre for Building Performance Research
PAGES: 75
PRICE: on application
ABSTRACT: This report is a review of the literature, carried out by the CBPR for the New Zealand
Government Property Management Centre of Expertise, examining how the effects of office
buildings on their occupants' productivity could be measured. Figures, tables and references
are provided.
SUPPLEMENTARY NOTES: July 2013

REPRODUCTION OF COMPLETED PAGE IS AUTHORISED
Centre for Building Performance Research
Published by:
Centre for Building Performance Research, Victoria University of Wellington, P.O. Box 600, Wellington, New Zealand
Telephone +64 4 463 6200
Facsimile +64 4 463 6204