Seeing Where You Stand: From Performance Feedback to Performance Transparency

Ethan S. Bernstein, Harvard Business School, ebernstein@hbs.edu
Shelley Xin Li, Harvard Business School, xli@hbs.edu

Working Paper (April 2016)

ABSTRACT

This paper examines the consequences of replacing traditional performance feedback with transparent performance data, an increasingly popular trend in organizations today. Using data from a field experiment at a large service organization as it transitioned to making each employee’s real-time individual performance transparent, we explore the productivity implications of employees constantly seeing where they stand. We also investigate two novel relational moderators of the relationship between performance transparency and productivity—supervisor support and social comparison orientation—which we believe are understudied despite their increased importance in more transparent performance environments. We find that both moderate the positive effect of performance transparency on productivity. Employees with low supervisor support and low social comparison orientation benefit most from transparent performance data, suggesting that such data acts as a substitute for both managers and informal social comparison. We discuss the implications for the design of transparent feedback systems and suggest directions for future research.

KEYWORDS: performance feedback, transparency, productivity, field experiment

Copyright © 2016 by Ethan Bernstein and Shelley Xin Li. Working papers are in draft form. This working paper is distributed for purposes of comment and discussion only. Please do not cite without permission. It may not be reproduced without permission of the copyright holder. Copies of working papers are available from the authors.
Organizations have long used performance feedback—information about the effectiveness of one’s work behavior (Ashford & Cummings, 1983; Taylor, Fisher, & Ilgen, 1984)—to improve employee productivity (Alvero, Bucklin, & Austin, 2001; Balcazar, Hopkins, & Suarez, 1985; Ilgen, Fisher, & Taylor, 1979; Larson, 1989). Managers use traditional performance feedback systems to “cue” (Vroom, 1964) both employee motivation (Murphy & Cleveland, 1995) and learning (Barankay, 2012; Kluger & DeNisi, 1996) in efforts to raise performance. In today’s organizations, however, performance data that is more “transparent”—more detailed, more real-time, and shared with a broader audience of managers and/or peers—is increasingly expected to do the job of traditional performance feedback. In place of traditional performance reviews, we have employees with handheld computers (Amazon) or armbands (Tesco) that track and optimize every move (Head, 2014; Kantor & Streitfeld, 2015; Rawlinson, 2013), gamified leaderboards that make all tracked metrics public in real time (Bernstein & Blunden, 2015; Mollick & Rothbard, 2014), restaurant point-of-sale systems that scrape every transaction for signs of waitstaff fraud (Pierce, Snow, & McAfee, 2015), cameras that automatically track the smiles of card dealers and waitstaff in casinos (Peck, 2013), hand-soap and hand-sanitizer dispensers that use radio-frequency identification (RFID) to track who uses them and who does not (Dai, Milkman, Hofmann, & Staats, 2015), UPS trucks with sensors that record hundreds of measurements to capture every action (Goldstein, 2014), and many other innovations.
Studies suggest that transparent performance data has already displaced traditional performance feedback in nearly 10 percent of Fortune 500 companies (Cunningham & McGregor, 2015), making it trendy to debate whether annual performance reviews and traditional performance feedback are “dead” (e.g., Economist, 2016; Rock & Jones, 2015; Shook, 2015). “Big data” technologies, largely responsible for making organizational performance much more transparent in the last two decades (Healy & Palepu, 2001; Leuz & Wysocki, 2008), are now uprooting the traditional use of formal, face-to-face, supervisor-employee feedback (Buckingham & Goodall, 2015; Vara, 2015), replacing it with transparent individual performance data. Yet theory has not kept up with technology. While such individual performance transparency is frequently celebrated in practice on the assumption that it will raise productivity (e.g., Huhman, 2014; Lublin, 2011), it remains relatively unstudied. In particular, while the benefits of making organizational performance transparent are relatively unequivocal, making individual performance transparent seems likely to either enhance or undermine performance depending on certain moderators, many of which have been explored in the performance feedback literature (e.g., Kluger & DeNisi, 1996) but not in the context of transparent performance data. In addition, our review of prior performance feedback literature suggests that what we will call relational moderators—moderators based on an employee’s relations to others at work (such as his or her boss, peers, and subordinates)—have been particularly understudied (cf. Mahlendorf, Kleinschmit, & Perego, 2014). At the same time, because transparency increases the impact of surrounding individuals’ behavior on one’s own (Bernstein, 2012), such moderators have likely become more important than they once were.
Consider, for example, the difference between the dyadic nature (employee-supervisor) of a traditional performance feedback conversation and the multiplex nature (employee-all) of making performance transparent. The multiplex nature of the latter broadens and deepens the importance of relational moderators, as has been found in other fields (Gittell, Seidner, & Wimbush, 2010; Gittell, 2002; Perlow, Gittell, & Katz, 2004). We therefore explore two related research questions. First, what is the productivity impact of substituting transparent performance data for traditional performance feedback? Second, what is the impact of relational moderators—in this case, the employee’s level of supervisor support (vertical relation) and social comparison orientation (horizontal relation)—on that effect? We draw on a field experiment, including embedded participant observation, involving service mechanics in a large gas utility in the southeastern United States during a time when a group of employees went from traditional supervisor-employee performance feedback to having their individual performance data—tracked by their trucks and technological devices—made transparent to peers, supervisors, and the organization at large. Our approach allows us to extend the work of authors who have previously studied the impact of various interventions on the effectiveness of performance feedback, both in the lab and in the field. We, however, are the first to study the impact of substituting traditional performance feedback with transparent performance data in a setting where the feedback, rather than the incentives, is the focus of study. Our analysis is also novel in inquiring whether the productivity consequences of transparent performance data may vary considerably due not just to characteristics of the feedback itself or its delivery but to the relations of the people involved.
Such relational moderators could explain some of the wide variation in the productivity results of performance feedback systems and the mixed conclusions from prior research on those systems. More importantly, we propose that these relational considerations may partially explain why the revolution in transparent individual performance data has not consistently produced a revolution in productivity.

THEORETICAL BACKGROUND AND HYPOTHESIS DEVELOPMENT

From Periodic Performance Feedback to Real-time Transparency of Relative Performance

Numerous studies in psychology, organizational behavior, operations management, accounting, and economics have demonstrated the beneficial effects of feedback on individual performance in organizations (e.g., Dockstader, Nebeker, & Shumate, 1977; Florin-Thuma & Boudreau, 1987; Guzzo, Jette, & Katzell, 1985; Ilgen et al., 1979; Ivancevich & McMahon, 1982; Pritchard, Bigby, Beiting, Coverdale, & Morgan, 1981). For example, Ilgen et al. (1979) stated in their literature review that feedback is “essential for learning and for motivation.” Although systems designed to deliver performance feedback offer great promise to improve performance (Alvero et al., 2001; Balcazar et al., 1985; Fedor, 1991; Taylor et al., 1984), a long tradition of empirical research also demonstrates their ability to undermine it (Kluger & DeNisi, 1996), eroding the performance of the best employees (Haas & Hayes, 2006), the worst employees (Podsakoff & Farh, 1989), or anyone in between (Ivancevich & McMahon, 1982; Kluger, Lewinsohn, & Aiello, 1994). Kluger and DeNisi’s (1996) meta-analysis of empirical studies, accounting for 607 effect sizes and 23,663 observations, found that feedback interventions improved performance on average but that 38 percent of them decreased performance.
Because of this wide variation (Nadler, Cammann, & Mirvis, 1980), research has cycled between meta-review articles that attempt to integrate the inconsistent results of previous studies and new experiments demonstrating additional moderators, most of which fall into one of three categories: (1) attributes of the feedback itself, such as frequency (Chhokar & Wallin, 1984; Pampino Jr., MacDonald, Mullin, & Wilder, 2004), sign (positive versus negative framing) (Church, Libby, & Zhang, 2008; Cianci, Klein, & Seijts, 2010; Hossain & List, 2012; McFarland & Miller, 1994; Van-Dijk & Kluger, 2004; VandeWalle, Cron, & Slocum, 2001), specificity (Goodman & Wood, 2004), and scope (relative/ranked versus individual) (Barankay, 2012; Brown, Fisher, Sooy, & Sprinkle, 2014); (2) the recipient’s past performance (for example, a high versus low performer) (Bradler, Dur, Neckermann, & Non, 2013); or (3) complementary features attached to the feedback, such as goal setting (Becker, 1978; Ivancevich & McMahon, 1982; Locke, Shaw, Saari, & Latham, 1981; Pritchard, Jones, Roth, Stuebing, & Ekeberg, 1988), rules (Fedor, 1991; Haas & Hayes, 2006), and incentives (Ashton, 1990; Bandiera, Barankay, & Rasul, 2013; Hannan, Krishnan, & Newman, 2008; Lourenco, Narayanan, Greenberg, & Spinks, 2015; Newman & Tafkov, 2014; Petty, Singleton, & Connell, 1992; Sprinkle, 2000). Of these categories of moderators between feedback and performance, the one that research has consistently demonstrated to have the most positive and significant impact has been the third. Feedback enhances performance when it is specific to goals set (see, e.g., Locke et al., 1981, for a thorough review of the goal-setting literature) and incentives delivered. Yet those levers assume work for which it is practical to set a single goal, which can be arranged in the laboratory and occasionally in carefully selected field experiments, but rarely in everyday working life.
Employees generally do not have single-criterion goals, but are asked to sustain high levels of productivity over time across a changing portfolio of tasks. That makes traditional approaches to pairing goal-setting and performance feedback impractical or, worse, it converts goal setting into the time-consuming yet vapid check-the-box game of impression management hidden within most annual performance reviews (Culbert & Rout, 2010).

New technology, new questions. To address the impracticality of effective goal setting and the other moderators mentioned above, workplace transparency—made possible by advanced tracking technologies—has become a widespread managerial practice (Bernstein, 2014). The transparency of an individual’s performance has increased through systems that permit more people to more frequently see more about their own performance and the performance of others than ever before. As a result, disclosure of performance is becoming a substitute for goal setting and, increasingly, for performance feedback. In place of negotiating a particular performance criterion, activity becomes sufficiently instrumented and tracked that each person can see his or her performance relative to or ranked against that of coworkers (Buckingham & Goodall, 2015), literally seeing where he or she stands within the organization. What is the productivity impact of substituting transparent performance data for traditional performance feedback? We deliberately choose the word “transparent” not just because of its common use, but also to bring together existing theory on performance management and new work on single-criterion-task-based monitoring in operations management and on relative performance pay in economics, all three of which are increasingly related in practice but have remained relatively distinct in the scholarly literature.
For example, using multi-firm field experiment data from tens of thousands of waiters across hundreds of restaurants, Pierce et al. (2015) found that the use of information technology to monitor and detect theft not only reduced it by $24 per week but also increased total revenue by $2,975 per week. Staats, Dai, Hofmann, & Milkman (2015) found that RFID-based electronic monitoring of handwashing in hospitals had an initial positive effect on compliance ranging from 20 to 60 percent, although compliance declined over time and fell below pre-intervention levels after monitoring was removed. Researchers in economics have begun to study similar questions, focused specifically on incentives. The findings have been somewhat pessimistic. For example, Barankay (2012), in a careful field experiment involving highly incentivized (commissioned) furniture salespeople, found that removing individualized rank transparency—that is, no longer telling employees their own rank—increased performance of male (but not female) salespeople by 11 percent. Bandiera et al. (2013) found that introducing team-based incentives to piece-rate soft fruit pickers, which required transparently telling all the pickers their performance, reduced their productivity by 14 percent in the case of rank incentives (due to decreased performance of low performers) but increased it by 24 percent in the case of tournament prizes (due to increased performance of high performers). Others have found that disclosing relative performance can improve motivation but simultaneously encourages unproductive distortions in how work is done (Hannan, McPhee, Newman, & Tafkov, 2012), especially when combined with strongly incentivized (so-called “tournament” or “stack ranking”) compensation schemes (Hannan et al., 2008).
Such unproductive distortions can take the form of failing to increase performance except when close to crossing a threshold and winning a bonus (Delfgaauw, Dur, Non, & Verbeke, 2014) and can even go so far as sabotaging others’ work or artificially increasing one’s own performance (Charness, Masclet, & Villeval, 2013), particularly when compensation is involved (Brown et al., 2014). Newman & Tafkov (2014) found that transparent performance data has a negative effect on performance in reward tournaments but a positive effect in reward-and-punish tournaments, both effects driven by bottom and middle performers. Lourenco et al. (2015) found that physicians do less to improve the effectiveness of their e-prescriptions with transparent performance data and a strong negative incentive (the threat of being fired). Despite these high-quality studies, we find it hard to generate a convincing hypothesis as to whether the substitution of transparent performance data for traditional performance feedback will increase or decrease productivity in the absence of incentives and a single-criterion task. We therefore keep our research question about the main effect framed as a question rather than a hypothesis:

Research Question 1 (RQ1): What is the productivity impact of substituting transparent performance data for traditional performance feedback?

New technology, new moderators. Such transparency, by definition, implicates a wider spectrum of “others”; that is, other observers. The monitoring literature, including the studies cited above, has so far typically focused on supervisory oversight (or a Foucauldian perception of oversight) rather than on peer sharing, but as performance-tracking systems become the norm, monitoring (both by supervisors and by one’s peers) and feedback become part of the same managerial conversation about what data to make available to whom for the sake of productivity.
Put differently, making performance transparent tends to involve making the traditional performance feedback conversation far more public. While that does not diminish the importance of the moderators between performance feedback and performance summarized above, particularly in the case of incentives, it does implicate a new set of potentially important moderators: those that capture the relationship between the individual and the public, or what we term relational moderators. While the presence or absence of relative or ranked performance information (what we categorized above as the “scope” of feedback) could be considered a relational variable (Mahlendorf et al., 2014), we focus less on an attribute of the feedback and more on an attribute of the vertical (supervisory) and horizontal (peer) relationships implicated by the performance data. In this study, we chose to investigate two relational moderators—supervisor support and social comparison orientation—that are well-established in the literature on organizations but, so far as we are aware, have not been systematically studied in the context of performance feedback or transparent performance.

Supervisor Support: Vertical Effects of Transparent Performance

One key relational difference between traditional performance feedback and making performance transparent is the role of the supervisor. Traditionally, a supervisor is the employee’s primary source of performance feedback, even if that feedback has been gathered from others, as in a 360-degree review (DeNisi & Kluger, 2000). With transparent performance, however, the data goes straight to the employee, bypassing the supervisor’s filter. How will that affect performance? Conceptually, it depends on whether transparent performance data is a complement to or a substitute for the supervisor’s traditional feedback role.
Perceived supervisor support—that is, an employee’s view on how much his or her supervisor values his or her contributions and development (Kottke & Sharafinski, 1988)—has been empirically tied to in-role and extra-role performance (Eisenberger, Stinglhamber, Vandenberghe, Sucharski, & Rhoades, 2002; Jokisaari & Nurmi, 2009; Shanock & Eisenberger, 2006), so one would expect employees who have higher perceived supervisor support to also have higher past performance. It is unclear, however, how the introduction of transparent performance might change that relationship. On the one hand, transparent performance data can only reveal where one stands, not how to improve, so it requires complementary coaching from a supportive supervisor to translate increased motivation, from having room for improvement transparently revealed, into concrete steps for learning and improvement (Pfeffer, 2016). In other words, employees do not just need to see the apple on the tree, but they need a ladder to climb up to it. Much as telling an employee his or her rank in the organization without explaining how to improve it could make that employee less productive rather than more (Barankay, 2012), providing transparent performance data without supervisor support to improve could be frustrating, trigger feelings of helplessness (Martinko & Gardner, 1982), and thus undermine performance. We would therefore expect transparent performance data to benefit employees with lower supervisor support less than it benefits those with higher supervisor support. On the other hand, it has been suggested that transparent performance data operates as a substitute for the supervisor’s traditional performance feedback role (Hamel, 2011; Tapscott & Ticoll, 2003). For an employee who wishes to improve, finding out where you stand in comparison to coworkers can provide you with role models—high performers who may even be better coaches than a below-average supervisor.
Like transactive memory (Ren & Argote, 2011; Wegner, 1987), seeing where you stand could be a way to identify where the expertise lies. We would therefore expect performance transparency to benefit employees with low perceived supervisor support more than it benefits those who already have the role models they need in their more supportive supervisors. In fact, employees with high supervisor support might not only have less to gain but also more to lose. Prior research has shown that when your supervisor is responsible for performance feedback, your impression management behavior can significantly, if indirectly, influence your performance ratings if your supervisor likes you or feels similar to you (Wayne & Liden, 1995). Indeed, performance ratings can be positively biased solely through an employee’s propensity to ask for feedback, even if that inspires the employee to make strategic rather than productive changes in his or her behavior (De Stobbeleir, Ashford, & Buyens, 2011). Even in a context, like ours, in which performance ratings are neither subjective nor determined by a supervisor, supervisors can indirectly influence allegedly objective performance ratings (Wayne & Liden, 1995) by providing more resources, plum opportunities, or increased recognition to those whom they subjectively view as high performers. Our reading of the literature suggests that supervisor support has as much potential to moderate the relationship between transparent performance and productivity positively as negatively. We therefore offer paired hypotheses:

Hypothesis 2a (H2a): An increase in performance transparency will complement traditional performance feedback, so employees with more supportive supervisors will improve their productivity more than those with less supportive supervisors.
Hypothesis 2b (H2b): An increase in performance transparency will substitute for traditional performance feedback, so employees with less supportive supervisors will improve their productivity more than those with more supportive supervisors.

Social Comparison Orientation: Horizontal Effects of Transparent Performance

A second key relational difference involves the relationship between an employee and his or her peers. Festinger’s (1954) seminal paper proposed that, in the absence of clear standards of correctness, people evaluate themselves, their opinions, and their capabilities in comparison with others (see Gibbons & Buunk, 1999; Suls, Martin, & Wheeler, 2002; Wood, 1989 for detailed reviews of social comparison orientation theory). Although the “desire to learn about the self through comparison with others is universal,” the extent to which people do so has been shown to vary with the established scale of social comparison orientation, the Iowa-Netherlands Comparison Orientation Scale (INCOM) (Gibbons & Buunk, 1999: 199). While social comparison has been linked to the productivity effect of performance transparency in the lab (McFarland & Miller, 1994), we are aware of no field experiments. Yet a field experiment has the distinct advantage of involving employees who have worked together in an organizational context for much longer than the brief period of a lab experiment. If the “primary goal of social comparison is to acquire information about the self” (Gibbons & Buunk, 1999: 129) and if such information is typically used to evaluate and improve oneself, it would seem to follow that those more prone to social comparison are also more likely to increase their productivity when given more performance transparency.

Hypothesis 3a (H3a): When performance transparency increases, employees with a stronger social comparison orientation will improve their productivity more than those with a weaker social comparison orientation.
However, in real work environments (as opposed to lab contexts), it is quite possible that, even before employees formally receive any feedback, they already have a sense of how they compare with each other—from their supervisors’ comments, from feedback-seeking behaviors (Ashford, Blatt, & Walle, 2003; Ashford & Tsui, 1991; Ashford, 1986) such as informal discussions with supervisors or peers, or even just from observing each other. To the extent that a person with a greater social comparison orientation would be more likely to already have gathered performance information about others before it was provided formally, the effect of formally providing transparency could be smaller. It is also possible that heightened social comparison, combined with increased performance transparency, could lead to negative psychological consequences, such as diminished trust in coworkers (Dunn, Ruedy, & Schweitzer, 2012) and discouragement (Beshears, Choi, Laibson, Madrian, & Milkman, 2015), and to unproductive behaviors.

Hypothesis 3b (H3b): When relative performance transparency increases, employees with a stronger social comparison orientation will improve their productivity less than those with a weaker social comparison orientation.

METHODS

Research Setting

The context of our study is a service operation—a natural gas distribution company (referred to as GasCo) with 1,100 employees serving approximately 425,000 residential, commercial, and industrial customers in the southeastern United States. Most of GasCo’s employees are customer-facing, including the cadre of field-based service technicians, known as mechanics, on whom this study focuses. Mechanics spend their days on the road addressing requests from customers to turn on gas, turn off gas, repair gas appliances, repair leaks, and respond to emergencies.
They typically start their work day by logging into the system on their trucks and accepting orders (which sometimes involves calling the customers). A mechanic maps a path to the order (for example, a customer’s house), arrives on site, completes the order, then drives to the next order or takes a break. Although the tasks may appear straightforward and routine, mechanics identify themselves as professionals because of the risk inherent in any activity involving gas, the wide variability in the contexts, systems, and devices they are expected to safely diagnose and fix, and the systematic training they receive. Their tasks therefore fit our two criteria for this study, being sufficiently specified for performance metrics to be comparable across individuals, but sufficiently complex to permit wide variation in results based on capability and on criteria largely within the individual worker’s control. Like most gas utilities in the United States, GasCo had grown by acquiring dozens of municipal gas utilities. Not long before our study, it had consolidated the customer service organizations (including the mechanics) of all its acquisitions. GasCo’s mechanics, all of whom were represented by one of two unionized bargaining units, now worked from 11 work centers, ranging in size from 2 to 42 mechanics. Because the consolidation involved integrating previously autonomous organizations with different histories, the performance feedback systems also needed to be integrated by creating a consistent set of metrics. Through a bottom-up effort across all of the service mechanic organizations, GasCo created a single scorecard of metrics, collected automatically by technology in the mechanics’ trucks, to which the mechanics had collectively agreed.
The metrics included use of time (percentages of productive time, support time, and nonproductive time), number of on-time customer appointments, emergency response time, and whether a job was completed on the first trip. This agreement on metrics, however, produced a new dilemma: who should be allowed to see them? On the one hand, management agreed that seeing one’s own performance relative to peer performance might help employees know where to improve and motivate them to try. On the other hand, management feared that making individual performance transparent, and therefore common knowledge among coworkers, might lead to unintended consequences such as sabotaging others’ work or gaming the metrics (Charness et al., 2013). In the absence of a clear answer, we were invited to partner with the organization to run a pilot program to determine the effect of performance transparency at GasCo, starting with the mechanics. The setting provides two key advantages for an experimental study on performance transparency. First, GasCo’s work force did not face the high-powered economic incentives (positive or negative) that are characteristic of prior experiments investigating the effectiveness of performance feedback but not necessarily characteristic of most jobs. GasCo’s employees were unionized and, while mechanics participated in the employee incentive plan, its incentives were based on companywide objectives and thus were relatively disconnected from an individual’s day-to-day performance. This allowed us to observe the effect of performance transparency itself, decoupled from financial incentives or fear of career consequences. Despite the fact that many, if not most, jobs today lack strong financial incentives, we are aware of no other field experiment of performance transparency in which strong financial incentives were not present.
Second, the mechanics interact with customers and, to a certain extent, with other mechanics in their own work center, but rarely with mechanics in other centers. Randomization of the experimental intervention at the work-center level was therefore unlikely to suffer from “contamination” due to mechanics at different work locations interacting.

Data

Field experiment. Four of the 11 work centers were randomly selected for the treatment condition, which involved being able to access transparent performance information on individual scorecards. The other seven served as a control group operating at the status quo—exactly the same as the treatment group but without the performance transparency intervention. As a result of our random assignment, 31 mechanics fell within the treatment condition and 92 fell within the control group.

-------------------- Insert Figure 1 About Here --------------------

Figure 1 shows a sample screenshot of the mechanic’s view of the daily scorecard information in the treatment group (here, with the employees’ names disguised). Employee 1, for example, could see not only his or her own performance metrics for the previous day, but also the metrics for everyone else in his or her work center. The metrics were ranked from the highest to the lowest. Mechanics in the treatment group received an email every morning with a link to the scorecard information on an intranet webpage. They could access the information from their truck or from any computer. We tracked how often the intranet webpages were accessed to ensure the quality of the intervention—that is, to be sure the mechanics were actually accessing the performance data—although, due to both technological and human subjects limitations, we could not identify who had made any particular visit to the data. Mechanics in the control group continued to receive no formal relative-performance transparency. The experimental pilot ran from June 25, 2014 to August 29, 2014.
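As a minimal sketch of this design, cluster randomization at the work-center level can be illustrated as follows. The center labels, seed, and helper function are our own hypothetical illustrations, not GasCo's actual assignment procedure:

```python
import random

# Hypothetical sketch of work-center-level (cluster) randomization.
# Labels and seed are illustrative; the study's actual procedure is not shown.
def assign_treatment(work_centers, n_treated, seed=2014):
    """Randomly pick n_treated clusters; every mechanic in a treated cluster is treated."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    treated = set(rng.sample(sorted(work_centers), n_treated))
    return {wc: wc in treated for wc in work_centers}

# 11 work centers, 4 assigned to treatment, mirroring the experiment's proportions
centers = [f"WC{i:02d}" for i in range(1, 12)]
assignment = assign_treatment(centers, 4)
```

Randomizing whole clusters rather than individuals is what keeps treated and control mechanics from sharing a work center and contaminating each other's condition.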
Survey. In order to measure supervisor support and social comparison orientation, we administered a pre-experimental survey to all the mechanics. Consistent with Eisenberger et al. (2002) and Shanock & Eisenberger (2006), we used the established six-item instrument to measure perceived supervisor support. Consistent with numerous studies in organizational behavior, we used INCOM (Gibbons & Buunk, 1999) to measure social comparison orientation. As an additional control, we asked the mechanics to assess their own past performance on a variety of metrics, given prior evidence that one’s perception of one’s own past performance influences one’s expectations for future performance, which, in turn, have been shown to moderate the productivity impact of performance feedback (Northcraft & Ashford, 1990). For example, we asked “On ‘% of my day spent performing productive activities,’ compared to all 123 mechanics at GasCo, I think my performance would rank me ___ out of 123” and provided a sliding scale, from highest to lowest performer, on which to answer. By pairing a mechanic’s response with his or her actual past performance, we could holistically control for past performance, both actual and perceived. We sent the survey by email on June 19 (six days before starting the experimental intervention) and gave the mechanics four days to complete it. They could access it from their trucks or from any computer. We made clear that the survey was conducted by us, the researchers, not by GasCo and that none of the responses would be seen by anyone in the company. The email contained a link to an external survey website that recorded their responses. We sent the survey to 123 mechanics and received 63 responses, a response rate of 47 percent, which management told us was normal for this population. Comparing human-resources data for responding and nonresponding individuals revealed no bias in the type of individual who responded.
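One way to operationalize the pairing of perceived and actual rank is sketched below. The function name, sign convention, and normalization are our own illustrative assumptions, not the authors' actual coding:

```python
# Hypothetical sketch of pairing self-reported rank with actual rank.
# Rank 1 = best performer; names and scaling are illustrative assumptions.
def perception_gap(perceived_rank, actual_rank, n_mechanics=123):
    """Gap scaled to [-1, 1]; positive means the mechanic reported a better
    (numerically lower) rank than their actual one, i.e., overestimated themselves."""
    return (actual_rank - perceived_rank) / (n_mechanics - 1)

# A mechanic who believes they rank 20th but actually ranks 50th of 123
gap = perception_gap(perceived_rank=20, actual_rank=50)
```

Including both the actual rank and such a gap measure as regressors is one plausible way to "holistically control for past performance, both actual and perceived."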
We timed the survey to be a trigger, in both the treatment and control groups, for any Hawthorne effects (Adair, 1984; Jones, 1992; Roethlisberger & Dickson, 1939), whereby changes in behavior and performance result from the mere fact of managerial attention or environmental modification rather than from the specific treatment itself. By simultaneously triggering any such effects in both the treatment and control groups, we alleviated concerns that results in one condition relative to the other might be due to a Hawthorne effect.

Participant observation. In the spirit of the longstanding qualitative tradition of participant observation (Becker, 1958; Jorgensen, 1989; Spradley, 1980), one researcher was embedded in the workforce for the week of June 30, the second week of the field experiment. He was selected based on his ability to fit in with traditional recruits for the service mechanic role at GasCo, but he was trained in how to properly collect field notes (Emerson, Fretz, & Shaw, 1995) and in the fundamentals of participant-observation research. For a week, he rode, worked, ate, and hung out with other service mechanics as a typical apprentice (trainee), rotating amongst different mechanics in different work centers. His constant note-taking was not seen as unusual, but rather as typical for an apprentice. In the evenings, he synthesized his notes and added detail while the memories were still fresh. Because mechanics spend so much time driving from one site to another, there is ample time for casual conversation, and the latest change in their jobs—the newly transparent performance feedback—was a natural and frequent topic. These data therefore add significant texture to the quantitative results of the field experiment and survey. Our decision to embed a researcher was driven, in part, by our own experiences doing "ride-alongs" with the mechanics several months before the field experiment began.
Our goal at the time had been to understand the work firsthand so as to better design the experiment, but we also learned that service mechanics are very accustomed to having outsiders join them and thus are very good at showing a manager, stakeholder, or other guest only what he or she wants to see while getting their own work done. Based on our own experiences, we believed that we would only get the full story if we embedded a participant-observer who could earn the mechanics' trust.

Main regression model. To analyze the performance data, we used a difference-in-differences estimation model, which allows a precise yet simple analysis of the effect of the intervention on the treatment group relative to any changes experienced by the control group during the same period. The basic model is:

Yit = α + (β1 × Treatmentit) + (β2 × Postit) + (β3 × (Treatmentit × Postit)) + Σk (γk × Controlk,it) + εit.

Yit is the dependent variable: the performance metric at the employee(i)-workday(t) level. Treatmentit is an indicator variable that equals 1 if the employee was working in one of the four pilot sites. Postit is an indicator variable that equals 1 if the date was on or after June 25, 2014. Controlk,it denotes the set of control variables described below. The main estimation uses ordinary least squares (OLS) regressions. Consistent with Bertrand, Duflo, & Mullainathan (2004), standard errors of the coefficients are computed using the block-bootstrap method (clustering by work locations). If the effect of the treatment is positive, we should see a positive and significant β3 when Yit is the positive productivity indicator, % Productive Time, and a negative and significant β3 when Yit is the negative productivity indicator, % Nonproductive Time.

Measures. Table 1 shows the definitions and descriptive statistics for our key variables.

-------------------- Insert Table 1 About Here --------------------

Dependent variables.
To measure performance, we adopted GasCo's three standardized metrics: % Productive Time, % Support Time, and % Nonproductive Time. GasCo designed these metrics to help mechanics allocate time productively between these three categories, which collectively represent the entire work day. In our empirical analysis, we used % Productive Time and % Nonproductive Time as the dependent variables; because a work day is allocated entirely between the three categories of activity, showing changes in two is sufficient to reflect any change in how mechanics allocate their time. % Productive Time is a positive productivity metric and % Nonproductive Time is a negative productivity metric.1 We retrieved daily performance data from GasCo's archive between June 1, 2013 and August 29, 2014—389 calendar days before the intervention and 65 working days after the intervention.

1 An alternative way to view these metrics is that % Productive Time represents time spent to increase current productivity, % Support Time represents time spent on activities, such as maintenance and training, that might increase productivity or work quality, and % Nonproductive Time represents time spent on activities that are unlikely to increase productivity or work quality.

Independent variables. Our measures for perceived supervisor support and social comparison orientation, consistent with the design and previous use of those instruments, are calculated as a simple addition of the scores from each question, adjusted for reverse coding. We also incorporate a set of control variables based on demographic information obtained through GasCo's internal records, including tenure, age, gender, and race. We control for past performance in all of our regressions and, in our analysis of the main effect of performance transparency, evaluate different tiers of performers separately.
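The scale construction described above—a simple sum of item scores with reverse-coded items flipped—can be sketched as follows. The response range and which item is reverse-coded are hypothetical; the actual item keys come from the published instruments (Eisenberger et al., 2002; Gibbons & Buunk, 1999).

```python
def score_scale(responses, reverse_items, low=1, high=7):
    """Sum Likert responses, flipping items flagged as reverse-coded.

    A reverse-coded response r on a low..high scale contributes (low + high - r).
    """
    return sum((low + high - r) if i in reverse_items else r
               for i, r in enumerate(responses))

# Hypothetical six-item supervisor-support responses; item index 2 is
# treated as reverse-coded purely for illustration.
responses = [6, 5, 2, 7, 6, 5]
print(score_scale(responses, reverse_items={2}))  # 6+5+(8-2)+7+6+5 = 35
```

The same function scores the INCOM items; only the item count and reverse-coding key change.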
As an example of how past performance can affect the relationship between performance transparency and productivity, consider findings from the performance feedback literature that feedback can positively influence poorer performers while having little influence on better performers (Pritchard et al., 1981); in such cases, it is the employees who do not receive recognition who drive the increase in organizational performance, through peer pressure and a preference for conformity with higher-performing peers (e.g., Bradler et al., 2013; Schultz, Juran, & Boudreau, 1999). Indeed, people with different levels of past performance respond differently to different performance feedback mechanisms, including social comparison (McFarland & Miller, 1994). For example, for the highest performers, poorly designed feedback systems can reduce pressure and therefore performance (Haas & Hayes, 2006); for the lowest performers, they can undermine motivation by arousing feelings of self-doubt or helplessness (Kluger et al., 1994). Research has also shown that individuals can exhibit behaviors consistent with strong last-place aversion (Kuziemko, Buell, Reich, & Norton, 2014). Because some of these effects could be triggered either by actual past performance or by the gap between perceived and actual past performance, we incorporate both triggers into our regressions. Our measure of a mechanic's actual past performance comes from archival data on % Productive Time and % Nonproductive Time; self-evaluated past performance is a standardized ranking, between 0 and 100, based on his or her own evaluation in the survey.

RESULTS

Descriptive Statistics

On average, mechanics' productive (nonproductive) time over the sample period is 59.13 percent (31.70 percent), with good variation (the standard deviation is 24.36 percent for productive time and 23.41 percent for nonproductive time).
Nonproductive time showed a higher level of performance variation relative to its mean value. The median age and median tenure are 50 and 19 years, respectively. Fifty-two percent of the mechanics are white. Comparing all pre-experiment variables across the treatment and control groups, our randomization appears to have been successful, with the exception of two variables for which the ratio of the difference between the treatment and control groups' means was greater than 10 percent. We explicitly control for these unbalanced variables in our statistical analysis. The survey data showed good variation in the measures based on the ratio of standard deviation to mean. The mean value of the self-evaluation is 67 percent rather than 50 percent, consistent with the well-documented tendency to overestimate one's own relative performance (Kruger & Dunning, 1999).

-------------------- Insert Table 2 About Here --------------------

Table 2 tabulates the unconditional correlations between the dependent and independent variables. % Productive Time is negatively and significantly correlated (at the one-percent level) with tenure, age, past performance on % Nonproductive Time, and being white, and is positively and significantly correlated (at the one-percent level) with social comparison orientation, past performance on % Productive Time, self-evaluation, and perceived level of supervisor support. % Nonproductive Time is positively and significantly correlated (at the one-percent level) with age, past performance on % Nonproductive Time, and being white, and is negatively and significantly correlated (at the one-percent level) with past performance on % Productive Time, self-evaluation, and perceived level of supervisor support. Tenure is negatively correlated with social comparison orientation, self-evaluation, and perceived level of supervisor support (all statistically significant at the one-percent level).
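The balance screen described above can be sketched as follows. It reads the 10-percent rule as the gap between group means relative to the control mean—one plausible interpretation of the text—and the data are synthetic.

```python
import statistics

def imbalance_ratio(treat_vals, control_vals):
    """Gap between treatment and control means, relative to the control mean."""
    gap = statistics.mean(treat_vals) - statistics.mean(control_vals)
    return abs(gap) / abs(statistics.mean(control_vals))

def flag_unbalanced(variables, threshold=0.10):
    """Return names of pre-experiment variables whose imbalance exceeds the threshold."""
    return [name for name, (t, c) in variables.items()
            if imbalance_ratio(t, c) > threshold]

# Synthetic pre-experiment variables: 'tenure' is unbalanced here, 'age' is not.
variables = {
    "age":    ([50, 52, 48], [49, 51, 50]),
    "tenure": ([25, 24, 26], [19, 18, 20]),
}
print(flag_unbalanced(variables))  # variables to control for explicitly
```

Any variable flagged this way would then enter the regressions as an explicit control, as described in the text.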
Baseline Result: Does Performance Transparency Improve Productivity?

-------------------- Insert Table 3 About Here --------------------

Table 3 shows the main regression results. Columns 1 and 2 are analyses conducted on a broader sample of mechanics for whom we only have demographic and performance data. Columns 3 and 4 are analyses conducted on the sample for whom we also have survey data (and hence can construct measures for social comparison orientation, perceived level of supervisor support, and self-evaluation). For consistency, we use the latter sample for all subsequent analyses. In Table 3, Columns 1 and 3 use % Productive Time as the dependent variable, while Columns 2 and 4 use % Nonproductive Time. The coefficient on Treat x Post is the estimated treatment effect of the intervention on the performance metric. Positive coefficients on this interaction term in Columns 1 and 3 show that the effect of the intervention on % Productive Time was positive, although not statistically significant after we block-bootstrapped the standard errors, clustered by work groups. The negative and statistically significant coefficients on this interaction term in Columns 2 and 4 indicate a negative effect on % Nonproductive Time; that is, the intervention decreases the percentage of nonproductive time. Relative to the control group averages in the pre-intervention period, the results in Columns 3 and 4 indicate a 4.77-percent increase in % Productive Time and an 11.01-percent decrease in % Nonproductive Time. This suggests that, for our context, performance transparency does improve productivity. The improvement is greater for % Nonproductive Time in terms of both economic magnitude and statistical significance.
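The estimation strategy above—OLS on the difference-in-differences model, with standard errors from a block bootstrap that resamples whole work locations (Bertrand, Duflo, & Mullainathan, 2004)—can be sketched on synthetic data as follows. All numbers, the "true" effect of 5, and the number of bootstrap draws are illustrative, not from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def did_beta3(y, treat, post):
    """OLS coefficient on Treatment x Post in Y = a + b1*T + b2*P + b3*(T*P) + e."""
    X = np.column_stack([np.ones_like(y), treat, post, treat * post])
    return np.linalg.lstsq(X, y, rcond=None)[0][3]

# Synthetic employee-day panel across 11 work locations (4 treated).
n = 2000
loc = rng.integers(0, 11, n)
treat = (loc < 4).astype(float)
post = rng.integers(0, 2, n).astype(float)
y = 59.0 + 5.0 * treat * post + rng.normal(0, 10, n)  # true effect = 5

beta3 = did_beta3(y, treat, post)

# Block bootstrap: draw 11 locations with replacement and keep every
# observation of each drawn location, so within-location correlation survives.
draws = []
while len(draws) < 200:
    picked = rng.integers(0, 11, 11)
    if len({bool(l < 4) for l in picked}) < 2:
        continue  # a usable draw needs both treated and control locations
    idx = np.concatenate([np.flatnonzero(loc == l) for l in picked])
    draws.append(did_beta3(y[idx], treat[idx], post[idx]))
se = float(np.std(draws, ddof=1))

print(round(float(beta3), 2), round(se, 2))
```

Resampling clusters rather than individual observations is what distinguishes the block bootstrap from an ordinary bootstrap and is why it tolerates serially correlated errors within a work location.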
In order to ensure the quality of our intervention—that is, to be sure the mechanics were actually accessing the performance data, and thus that it is the access to performance data and not something unrelated that is driving the results—we obtained the number of views of the daily performance report for each of our four treatment sites during the experiment period. Across the four sites, the average employee accessed the daily report at least once every three days and in some cases far more frequently. The sites at which mechanics accessed the report most often were the ones that experienced the greatest productivity boost during the experiment, which indicates that it was, indeed, the access to the performance feedback that drove the treatment effects. According to conversations with our embedded observer, supervisors noticed that performance was improving too: one supervisor in the treatment condition told the observer that she "had seen the performance metrics improve from around equal parts nonproductive and productive time to 60 to 70 percent productive time."

Actual past performance. Table 4 shows the regression analyses conducted on subsamples of mechanics whose actual pre-intervention performance was either (a) greater than or equal to the median performance ("High Performer") or (b) below the median ("Low Performer"). Treatment effects for both % Productive Time and % Nonproductive Time are much stronger in magnitude and significance for the "Low Performers." Overall, these findings suggest that learning about one's performance brings about more improvement amongst the low performers.

-------------------- Insert Table 4 About Here --------------------

Self-perceived performance.
In Table 5, we split the sample into mechanics who, before the intervention, perceived their performance to be greater than or equal to what it actually was (Columns 1 and 3, labelled "AA") and those who perceived their performance to be less than it actually was (Columns 2 and 4, labelled "BA"). We find that the treatment effects for % Productive Time and % Nonproductive Time are much stronger in magnitude and statistical significance for those who had underestimated themselves.

-------------------- Insert Table 5 About Here --------------------

Supervisor Support

To test H2a and H2b, we ran the baseline regressions, splitting the sample into those who reported a high level of perceived supervisor support (a score on "supervisor support" greater than or equal to the sample median) and those who reported a low level of perceived supervisor support; these are labelled "HSS" and "LSS," respectively, in Table 6. We find treatment effects of greater magnitude and statistical significance for both % Productive Time and % Nonproductive Time for those who perceived a low level of supervisor support, consistent with H2b rather than H2a and suggesting that transparent performance data may serve as a substitute for supervisor support. Relative to the pre-intervention mean values for both productivity metrics, these results indicate a 16.43-percent increase in % Productive Time and an 18.71-percent decrease in % Nonproductive Time. As one mechanic said in front of the participant-observer, he liked that "low performers could approach high performers"—knowing now who they were—to learn how to improve if they "didn't have a good supervisor."

-------------------- Insert Table 6 About Here --------------------

Social Comparison Orientation

To test H3a and H3b, we ran the baseline regressions while splitting the sample by mechanics' social comparison orientation.
Columns 1 and 3 of Table 7 are the subsamples of mechanics whose social comparison-orientation scores were greater than or equal to the sample median ("High Social Comparison," or "HSC"). Columns 2 and 4 are the subsamples whose scores were less than the sample median ("Low Social Comparison," or "LSC"). The negative treatment effect on % Nonproductive Time is greater—both in magnitude (a 17.36-percent reduction from the pre-intervention mean) and in statistical significance—for the low social comparison group. These results are consistent with the interpretation that transparent performance data substitutes for informal social comparison, as is one supervisor's comment to the participant-observer that the newly available feedback simply "consolidated data that he already had" and that he had always "told his workers they could come to him [to see]." We therefore find support for H3b instead of H3a.

-------------------- Insert Table 7 About Here --------------------

There are two subfactors in the social comparison orientation measure: the first six items focus on comparing one's performance with others' performance and constitute the subfactor "Ability"; the other five focus on caring about others' opinions or thoughts and constitute the subfactor "Opinion" (Gibbons & Buunk, 1999). Table 7 also shows the analyses conducted on subsamples split by the scores on these subfactors. The productivity boost was higher for those with lower scores on "Ability"; that is, those less prone to compare their performance with that of others. But it was also higher for those with higher scores on "Opinion"; that is, those who care more about others' opinions or thoughts. We suspect that, for those who score lower on "Ability," the effect of the incremental information from the formal transparent relative performance data is greater than the effect of a lower tendency to collect benchmark information on their own.
However, for those who score higher on "Opinion," the effect of caring more about others' thoughts took the form of higher performance pressure (and thus a higher productivity boost) when they saw the transparent performance data.

Robustness Checks

We ran three robustness checks to address certain characteristics of our field experiment. First, the gas utility business is seasonal. In part because of that, we requested a much longer time series of pre-intervention performance data and re-ran the difference-in-differences regressions using month fixed effects. The treatment effects were similar. Second, our dependent variables, % Productive Time and % Nonproductive Time, are correlated (though not perfectly, since there is also the third category, "Support Time"), which means that the error terms in the two regressions are correlated. Correlated dependent variables do not cause bias in the estimation of coefficients, but running them as separate regressions could reduce efficiency (Kennedy, 2003). Therefore, we also used Seemingly Unrelated Regression Estimation (SURE) to estimate our main regressions; the results did not change. Third, our dependent variables have bounded values. We ran Tobit regressions, setting the lower and upper bounds at 0 and 100, and obtained similar results. For simpler interpretation, we kept OLS with block-bootstrapped standard errors (clustered by work locations) as our main specification.

DISCUSSION

Sixty years ago, digital computers made information readable. Twenty years ago, the Internet made it reachable. Ten years ago, the first search engine crawlers made it a single database. Now Google and like-minded companies are sifting through the most measured age in history, treating this massive corpus as a laboratory of the human condition. They are the children of the Petabyte Age. The Petabyte Age is different because more is different.
(Anderson, 2008: 1)

Performance transparency once meant the disclosure of organizational outcomes, but the steady advance of enabling technologies (for a review, see Kidwell & Sprague, 2009) has made it increasingly possible to make an individual's activities transparent within an organization. The old adage "you get what you pay for" has evolved to "you get what you measure" and, more recently, "you get what you make transparent." Such transparency is no longer just the purview of managers, but is increasingly open to all. While the ability to track more information does not necessitate open access to it, the two seem to have evolved together. That has implications for many aspects of management, but the effects are currently particularly acute for the field of performance feedback, where employee-supervisor discussions are being displaced by employee performance transparency.

Is such performance transparency productive? The results from our field experiment in a large service organization suggest cautious optimism. We find that performance transparency can boost productivity—especially by reducing nonproductive behaviors—even without strong financial incentives. However, even without the amplifying and sometimes distorting effect of strong financial incentives, we were able to replicate the large variation in results found in prior studies on performance feedback and performance transparency (Kluger & DeNisi, 1996). We observed positive, null, and negative effects depending, for example, on actual and perceived past performance or even on which metric we were analyzing. Specifically, we find that the productivity boost is greater, and in some cases only exists, for % Nonproductive Time of low performers and those who underestimated their performance. Increasing the transparency of positive productivity metrics makes top performers more determined to stand out.
Increased transparency of negative productivity metrics makes low performers more determined not to stand out—at least not in that way. Consistent with prior research on performance feedback, these results highlight the importance of taking prior performance and self-evaluation into account when assessing the impact of any system that disseminates performance information to employees.

Because of this variation, the implications of performance transparency—like the implications of performance feedback—are likely to be well understood only through careful studies of moderators. In this study, we argue that relational moderators—moderators based on an employee's relations to others at work (such as his or her boss, peers, and subordinates)—are particularly important (and understudied) when performance transparency increases the visibility and influence of surrounding individuals' behavior. We therefore focus on relational moderators and find powerful effects. Our relational moderator for vertical relations, supervisor support, showed a significant inverse relationship with productivity improvement in the transition from traditional performance feedback to performance transparency. In other words, supervisor support and performance transparency appear to be substitutes: employees with less supervisor support experienced a boost in productivity, while those with more supervisor support did not gain much from the new system. We leave open for future research the possibility that, especially in different contexts, a shift to performance transparency might lower productivity for employees with high supervisor support. At the same time, our findings do not undermine the current practitioner dialogue about ways in which transparent data can substitute for managers; indeed, we find that performance transparency is a substitute for supervisor support when such support is lacking.
There is an opportunity for future research to provide more evidence on which roles of a manager can be replaced by technologies that automatically track and provide performance (and other useful) information to employees, or on the conditions under which managerial support is a substitute versus a complement for performance transparency. Our relational moderator for horizontal relations, social comparison orientation, also showed a significant inverse relationship with productivity in the transition from traditional performance feedback to performance transparency. Those with a lower social comparison orientation experienced a greater boost in productivity, as those who actively seek social comparison were more likely to have already sought relative performance information before the intervention and thus had less to learn from the new system. While we had left open this possibility in our hypotheses, we expect many readers to find the result surprising, as researchers tend to believe that it is the employees who are high on social comparison orientation—who always want to know where they stand relative to others—who will benefit the most from increased performance transparency. But in our study, those employees were relatively disadvantaged by the transition to performance transparency. Future research could dive deeper and provide direct evidence on whether social comparison orientation leads to more feedback-seeking behavior in the absence of a formal feedback system, and whether informal information-seeking behaviors can be enhanced in a productive way by the introduction of a formal system that enhances performance transparency. The data from the self-assessments of past performance strengthen the low-social-comparison-orientation finding.
Three motives have been identified for making social comparisons (Wood, 1989): self-evaluation (choosing the most comparable benchmark in order to be accurate in one's view of the world), self-improvement (choosing a higher benchmark in order to improve), and self-enhancement (choosing a lower benchmark for the sake of one's self-esteem). In our study, those who underestimated their relative performance were also lower on social comparison orientation. It is possible that these mechanics were driven by a desire for self-improvement (since their perception was likely the result of consistently choosing a higher benchmark in social comparison) and were therefore motivated to improve when performance transparency increased. While we did not have a large enough sample of such people to run three-way interactions, we believe this would be a valuable avenue for future research.

One perspective that might help integrate the vertical and the horizontal relational moderators would be to consider how these relations shape the pre-existing channels, both formal and informal, through which performance information reaches individual employees. Organizations have a history, a context, and a fabric of social interactions that exist prior to the introduction of any new management system. In assessing the impact of a formal system that enhances performance transparency, one needs to take into consideration not only the attributes of the formal system and the people involved in it, but also any pre-existing mechanisms that could have served similar purposes within the organization. This highlights the importance of gaining evidence through carefully designed field experiments in addition to laboratory studies, which cannot replicate the tacit task knowledge and complex organizational context surrounding the introduction of a management practice.
This also casts new light on the analysis of some previously studied moderators of performance feedback (e.g., past performance, self-evaluation) and invites researchers to consider how these moderators could interact with these "background" channels for disseminating performance information, which exist prior to the introduction of a formal system.

Implications for Research

Our findings have implications for three streams of research. First, we contribute to theory in management and organizations on the productivity impact of performance transparency versus traditional performance feedback (e.g., Alvero et al., 2001; Balcazar et al., 1985; Ilgen et al., 1979; Kluger & DeNisi, 1996; Mas & Moretti, 2009; Roels & Su, 2014; Schultz et al., 1999; Song, Tucker, Murrell, & Vinson, 2015; Taylor et al., 1984). Whereas other studies show the effect of increased performance transparency (Barankay, 2012) combined with explicit financial incentives or in the context of strong financial incentives (Boudreau, Lakhani, & Menietti, 2015), this paper empirically identifies the effect of providing public relative performance information in itself, in a context of weak financial incentives and relatively little fear of career consequences. This study is also the first to specifically examine the replacement of traditional performance feedback with performance transparency. Second, we contribute to the growing body of work in behavioral operations (e.g., Bendoly, Donohue, & Schultz, 2006) on the impact of transparency-enhancing technology on worker behavior and productivity (e.g., Bendoly, 2013; Blader, Gartenberg, & Prat, 2015; Buell, Kim, & Tsay, 2015; Gino & Pisano, 2008; Pierce et al., 2015).
While other studies show the effect of transparency in the form of direct observation or system-enabled monitoring (e.g., Blader et al., 2015; Pierce et al., 2015; Staats et al., 2015), we provide empirical evidence on the effect of making transparent performance data common knowledge to employees other than the focal employee, highlighting the social nature of some transparency-enhancing technologies and their social-psychological effects on employee behaviors and productivity. Third, we contribute to an old yet recently thriving operations management literature on managing and motivating employees (e.g., Chen & Sandino, 2012; Loch & Wu, 2007), particularly on how factors unrelated to explicit financial incentives can affect individual behaviors (Ashraf, Bandiera, & Lee, 2014; Edelman & Larkin, 2014). Whereas other studies show that work design (Hackman & Oldham, 1975; Huckman, Staats, & Upton, 2009; Tan & Netessine, 2014) can significantly affect productivity, we show that shaping the information environment in the workplace by tracking and providing more transparent individual performance information can also affect behavior and productivity.

The Performance Implications of Seeing Where You Stand

Because the logic of performance transparency seems so compelling, its use is becoming increasingly common, either in addition to or instead of traditional supervisor-to-subordinate performance feedback. As the use of performance transparency grows, both management scholars and managers themselves will need to take into account how the success or failure of performance transparency is affected by the specific social and interpersonal environment in which it is put to use.
We have tried to take an initial step in that direction, identifying and examining two relational moderators—the vertical relationship with supervisors and the horizontal interactions with peers—that can make a difference in the productivity effects of enhancing performance transparency within real organizations. Our study shows that such feedback can confer comparatively more or less benefit on an employee depending on how supportive his or her supervisors already are and on the degree to which that employee views his or her performance in comparison to that of coworkers. Future research, and new managerial initiatives to increase performance transparency, could yield more fruitful outcomes with increased appreciation for the human interactions existing prior to the implementation of these exciting transparency-enabling technologies and increased attention to the relational moderators that allow us to account for them.

REFERENCES

Adair, J. G. 1984. The Hawthorne effect: A reconsideration of the methodological artifact. Journal of Applied Psychology, 69(2): 334–345. Alvero, A. M., Bucklin, B. R. & Austin, J. 2001. An objective review of the effectiveness and essential characteristics of performance feedback in organizational settings (1985-1998). Journal of Organizational Behavior Management, 21(1): 3–29. Anderson, C. 2008. The end of theory: The data deluge makes the scientific method obsolete. Wired. Ashford, S. J. 1986. Feedback-seeking in individual adaptation: A resource perspective. Academy of Management Journal, 29(3): 465–487. Ashford, S. J., Blatt, R. & Walle, D. V. 2003. Reflections on the looking glass: A review of research on feedback-seeking behavior in organizations. Journal of Management, 29(6): 773–799. Ashford, S. J. & Cummings, L. L. 1983. Feedback as an individual resource: Personal strategies of creating information. Organizational Behavior and Human Performance, 32(3): 370–398. Ashford, S. J. & Tsui, A. S. 1991.
Self-regulation for managerial effectiveness: The role of active feedback seeking. Academy of Management Journal, 34(2): 251–280. Ashraf, N., Bandiera, O. & Lee, S. S. 2014. Awards unbundled: Evidence from a natural field experiment. Journal of Economic Behavior & Organization, 100: 44–63. Ashton, R. H. 1990. Pressure and performance in accounting decision settings: Paradoxical effects of incentives, feedback, and justification. Journal of Accounting Research, 28: 148–180. Balcazar, F., Hopkins, B. L. & Suarez, Y. 1985. A critical, objective review of performance feedback. Journal of Organizational Behavior Management, 7(3-4): 65–89. Bandiera, O., Barankay, I. & Rasul, I. 2013. Team incentives: Evidence from a firm level experiment. Journal of the European Economic Association, 11(5): 1079–1114. Barankay, I. 2012. Rank incentives: Evidence from a randomized workplace experiment. Wharton Business School Working Paper. Becker, H. S. 1958. Problems of inference and proof in participant observation. American Sociological Review, 23(6): 652–660. Becker, L. J. 1978. Joint effect of feedback and goal setting on performance: A field study of residential energy conservation. Journal of Applied Psychology, 63(4): 428–433. Bendoly, E. 2013. Real-time feedback and booking behavior in the hospitality industry: Moderating the balance between imperfect judgment and imperfect prescription. Journal of Operations Management, 31(1): 62–71. Bendoly, E., Donohue, K. & Schultz, K. L. 2006. Behavior in operations management: Assessing recent findings and revisiting old assumptions. Journal of Operations Management, 24(6): 737–752. Bernstein, E. 2014. The transparency trap. Harvard Business Review, 92(10): 58–66. Bernstein, E. & Blunden, H. 2015. The sales director who turned work into a fantasy sports competition. Harvard Business Review. Bertrand, M., Duflo, E. & Mullainathan, S. 2004. How much should we trust differences-in-differences estimates?
Quarterly Journal of Economics, 119(1): 249–275. Beshears, J., Choi, J. J., Laibson, D., Madrian, B. C. & Milkman, K. L. 2015. The effect of providing peer information on retirement savings decisions. The Journal of Finance, 70(3): 1161–1201. Blader, S., Gartenberg, C. & Prat, A. 2015. The contingent effect of management practices. Working Paper (available at SSRN). Boudreau, K. J., Lakhani, K. & Menietti, M. E. 2015. Performance responses to competition across skilllevels in rank order tournaments: Field evidence and implications for tournament design. RAND Journal of Economics. Bradler, C., Dur, R., Neckermann, S. & Non, A. 2013. Employee recognition and performance: A field experiment. ZEW-Centre for European Economic Research Discussion Paper 13-017. Brown, J. L., Fisher, J. G., Sooy, M. & Sprinkle, G. B. 2014. The effect of rankings on honesty in budget reporting. Accounting, Organizations and Society, 39(4): 237–246. 28 Seeing Where You Stand Buckingham, M. & Goodall, A. 2015. Reinventing performance management. Harvard Business Review, 93(4): 40–50. Buell, R. W., Kim, T. & Tsay, C.-J. 2015. Creating reciprocal value through operational transparency. Harvard Business School Technology & Operations Mgt. Unit Working Paper, (14-115). Charness, G., Masclet, D. & Villeval, M. C. 2013. The dark side of competition for status. Management Science, 60(1): 38–55. Chen, C. X. & Sandino, T. 2012. Can wages buy honesty? The relationship between relative wages and employee theft. Journal of Accounting Research, 50(4): 967–1000. Chhokar, J. S. & Wallin, J. A. 1984. A field study of the effect of feedback frequency on performance. Journal of Applied Psychology, 69(3): 524–530. Church, B. K., Libby, T. & Zhang, P. 2008. Contracting frame and individual behavior: Experimental evidence. Journal of Management Accounting Research, 20(1): 153–168. Cianci, A. M., Klein, H. J. & Seijts, G. H. 2010. 
The effect of negative feedback on tension and subsequent performance: The main and interactive effects of goal content and conscientiousness. Journal of Applied Psychology, 95(4): 618–630. Culbert, S. & Rout, L. 2010. Get rid of the performance review: How companies can stop intimidating, start managing-and focus on what really matters. New York: Hachette Book Group. Cunningham, L. & McGregor, J. 2015. Why big business is falling out of love with the annual performance review. The Washington Post, August 17. Dai, H., Milkman, K. L., Hofmann, D. A. & Staats, B. R. 2015. The impact of time at work and time off from work on rule compliance: The case of hand hygiene in health care. Journal of Applied Psychology, 100(3): 846–862. De Stobbeleir, K. E., Ashford, S. J. & Buyens, D. 2011. Self-regulation of creativity at work: The role of feedback-seeking behavior in creative performance. Academy of Management Journal, 54(4): 811– 831. Delfgaauw, J., Dur, R., Non, A. & Verbeke, W. 2014. Dynamic incentive effects of relative performance pay: A field experiment. Labour Economics, 28: 1–13. DeNisi, A. S. & Kluger, A. N. 2000. Feedback effectiveness: Can 360-degree appraisals be improved? Academy of Management Executive, 14(1): 129–139. Dockstader, S. L., Nebeker, D. M. & Shumate, E. C. 1977. The effects of feedback and an implied standard on work performance. ERIC. Dunn, J., Ruedy, N. E. & Schweitzer, M. E. 2012. It hurts both ways: How social comparisons harm affective and cognitive trust. Organizational Behavior and Human Decision Processes, 117(1): 2–14. Economist, S. 2016. Schumpeter: The measure of a man. Economist, February 20. Edelman, B. & Larkin, I. 2014. Social comparisons and deception across workplace hierarchies: Field and experimental evidence. Organization Science, 26(1): 78–98. Eisenberger, R., Stinglhamber, F., Vandenberghe, C., Sucharski, I. L. & Rhoades, L. 2002. 
Perceived supervisor support: contributions to perceived organizational support and employee retention. The Journal of Applied Psychology, 87(3): 565–73. Emerson, R. M., Fretz, R. I. & Shaw, L. L. 1995. Writing ethnographic fieldnotes: xviii, 254 p. Chicago: Chicago, IL: University of Chicago Press. Fedor, D. B. 1991. Recipient responses to performance feedback: A proposed model and its implications. Research in Personnel and Human Resources Management, 9: 73–120. Festinger, L. 1954. A theory of social comparison processes. Human Relations, 7(2): 117–140. Florin-Thuma, B. C. & Boudreau, J. W. 1987. Performance feedback utility in a small organization: Effects on organizational outcomes and managerial decision processes. Personnel Psychology, 40(4): 693–713. Gibbons, F. X. & Buunk, B. P. 1999. Individual differences in social comparison: Development of a scale of social comparison orientation. Journal of Personality and Social Psychology, 76(1): 129–142. Gino, F. & Pisano, G. 2008. Toward a theory of behavioral operations. Manufacturing & Service Operations Management, 10(4): 676–691. 29 Seeing Where You Stand Gittell, J. H. 2002. Coordinating mechanisms in care provider groups: Relational coordination as a mediator and input uncertainty as a moderator of performance effects. Management Science, 48(11): 1408–1426. Gittell, J. H., Seidner, R. & Wimbush, J. 2010. A relational model of how high-performance work systems work. Organization Science, 21(2): 490–506. Goldstein, J. 2014. To increase productivity, UPS monitors drivers’ every move. NPR Planet Money Morning Edition (April 170. Goodman, J. S. & Wood, R. E. 2004. Feedback specificity, learning opportunities, and learning. The Journal of Applied Psychology, 89(5): 809–21. Guzzo, R. A., Jette, R. D. & Katzell, R. A. 1985. The effects of psychologically based intervention programs on worker productivity: A meta-analysis. Personnel Psychology, 38(2): 275–291. Haas, J. R. & Hayes, S. C. 2006. 
When knowing you are doing well hinders performance: Exploring the interaction between rules and feedback. Journal of Organizational Behavior Management, 26(1-2): 91–111. Hackman, J. R. & Oldham, G. R. 1975. Development of the job diagnostic survey. Journal of Applied Psychology, 60(2): 159. Hamel, G. 2011. First, Let’s Fire All the Managers. Harvard Business Review, 89(12): 48–59. Hannan, R. L., Krishnan, R. & Newman, A. H. 2008. The effects of disseminating relative performance feedback in tournament and individual performance compensation plans. The Accounting Review, 83(4): 893–913. Hannan, R. L., McPhee, G. P., Newman, A. H. & Tafkov, I. D. 2012. The effect of relative performance information on performance and effort allocation in a multi-task environment. The Accounting Review, 88(2): 553–575. Head, S. 2014. Mindless: Why smarter machines are making dumber humans. New York: Basic Books. Healy, P. M. & Palepu, K. G. 2001. Information asymmetry, corporate disclosure, and the capital markets: A review of the empirical disclosure literature. Journal of Accounting and Economics, 31(1): 405–440. Hossain, T. & List, J. A. 2012. The behavioralist visits the factory: Increasing productivity using simple framing manipulations. Management Science, 58(12): 2151–2167. Huckman, R. S., Staats, B. R. & Upton, D. M. 2009. Team familiarity, role experience, and performance: Evidence from Indian software services. Management Science, 55(1): 85–100. Huhman, H. R. 2014. 5 Ways HR Technology Can Improve Performance reviews. Entrepreneur, July 23. Ilgen, D. R., Fisher, C. D. & Taylor, M. S. 1979. Consequences of individual feedback on behavior in organizations. Journal of Applied Psychology, 64(4): 349–371. Ivancevich, J. M. & McMahon, J. T. 1982. The effects of goal setting, external feedback, and selfgenerated feedback on outcome variables: A field experiment. Academy of Management Journal, 25(2): 359–372. Jokisaari, M. & Nurmi, J.-E. 2009. 
Change in newcomers’ supervisor support and socialization outcomes after organizational entry. Academy of Management Journal, 52(3): 527–544. Jones, S. R. G. 1992. Was there a Hawthorne effect? American Journal of Sociology, 98(3): 451–468. Jorgensen, D. L. 1989. Participant observation: A methodology for human studies. Thousand Oaks, CA: SAGE Publications. Kantor, J. & Streitfeld, D. 2015. Inside amazon: Wrestling big ideas in a bruising workplace. The New York Times. Kennedy, P. 2003. A guide to econometrics. MIT Press. Kluger, A. N. & DeNisi, A. 1996. The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2): 254–284. Kluger, A. N., Lewinsohn, S. & Aiello, J. R. 1994. The influence of feedback on mood: Linear effects on pleasantness and curvilinear effects on arousal. Organizational Behavior and Human Decision Processes, 60(2): 276–299. 30 Seeing Where You Stand Kottke, J. L. & Sharafinski, C. E. 1988. Measuring perceived supervisory and organizational support. Educational and Psychological Measurement, 48(4): 1075–1079. Kruger, J. & Dunning, D. 1999. Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6): 1121–1134. Kuziemko, I., Buell, R. W., Reich, T. & Norton, M. I. 2014. “Last-place aversion”: Evidence and redistributive implications. The Quarterly Journal of Economics, 192(1): 105–149. Larson, J. R. 1989. The dynamic interplay between employees’ feedback-seeking strategies and supervisors’ delivery of performance feedback. Academy of Management Review, 14(3): 408–422. Leuz, C. & Wysocki, P. D. 2008. Economic consequences of financial reporting and disclosure regulation: A review and suggestions for future research. Available at SSRN 1105398. Loch, C. H. & Wu, Y. 2007. Behavioral operations management. Now Publishers Inc. Locke, E. 
A., Shaw, K. N., Saari, L. M. & Latham, G. P. 1981. Goal setting and task performance: 19691980. Psychological Bulletin, 90(1): 125–152. Lourenco, S. M., Narayanan, V. G., Greenberg, J. O. & Spinks, M. 2015. Can feedback hurt? Adverse impact of feedback under negative financial incentives. Harvard Business School Working Paper. Lublin, J. S. 2011. Transparency pays off In 360-degree reviews. Wall Street Journal, (December 8). Mahlendorf, M. D., Kleinschmit, F. & Perego, P. 2014. Relational effects of relative performance information: The role of professional identity. Accounting, Organizations and Society, 39(5): 331– 347. Martinko, M. J. & Gardner, W. L. 1982. Learned helplessness: An alternative explanation for performance deficits. Academy of Management Review, 7(2): 195–204. Mas, A. & Moretti, E. 2009. Peers at work. American Economic Review, 99(1): 112–145. McFarland, C. & Miller, D. T. 1994. The framing of relative performance feedback: Seeing the glass as half empty or half full. Journal of Personality and Social Psychology, 66(6): 1061–1073. Mollick, E. R. & Rothbard, N. 2014. Mandatory fun: Consent, gamification and the impact of games at work. The Wharton School Research Paper Series. Murphy, K. R. & Cleveland, J. 1995. Understanding performance appraisal: Social, organizational, and goal-based perspectives. Thousand Oaks, CA: SAGE. Nadler, D. A., Cammann, C. & Mirvis, P. H. 1980. Developing a feedback system for work units: A field experiment in structural change. Journal of Applied Behavioral Science, 16(1): 41–62. Newman, A. H. & Tafkov, I. D. 2014. Relative performance information in tournaments with different prize structures. Accounting, Organizations and Society, 39(5): 348–361. Northcraft, G. B. & Ashford, S. J. 1990. The preservation of self in everyday life: The effects of performance expectations and feedback context on feedback inquiry. Organizational Behavior and Human Decision Processes, 47(1): 42–64. Pampino Jr, R. N., MacDonald, J. 
E., Mullin, J. E. & Wilder, D. A. 2004. Weekly feedback vs. daily feedback: An application in retail. Journal of Organizational Behavior Management, 23(2-3): 21–43. Perlow, L. A., Gittell, J. H. & Katz, N. 2004. Contextualizing patterns of work group interaction: Toward a nested theory of structuration. Organization Science, 15(5): 520–536. Petty, M., Singleton, B. & Connell, D. W. 1992. An experimental evaluation of an organizational incentive plan in the electric utility industry. Journal of Applied Psychology, 77(4): 427–436. Pfeffer, J. 2016. Sorry, Uber. Customer service ratings cannot replace managers. Fortune, March 31. Pierce, L., Snow, D. & McAfee, A. 2015. Cleaning house: The impact of information technology monitoring on employee theft and productivity. Management Science, 61(10): 2299–2319. Podsakoff, P. M. & Farh, J. L. 1989. Effects of feedback sign and credibility on goal setting and task performance. Organizational Behavior and Human Decision Processes, 44(1): 45–67. Pritchard, R. D., Bigby, D. G., Beiting, M., Coverdale, S. & Morgan, C. 1981. Enhancing productivity through feedback and goal setting. DTIC Document. 31 Seeing Where You Stand Pritchard, R. D., Jones, S. D., Roth, P. L., Stuebing, K. K. & Ekeberg, S. E. 1988. Effects of group feedback, goal setting, and incentives on organizational productivity. Journal of Applied Psychology, 73(2): 337–358. Rawlinson, K. 2013. Tesco accused of using electronic armbands to monitor its staff. The Independent. Ren, Y. & Argote, L. 2011. Transactive memory systems 1985-2010: An integrative framework of key dimensions, antecedents, and consequences. The Academy of Management Annals, 5(1): 189–229. Rock, D. & Jones, B. 2015. Why more and more companies are ditching performance ratings. Harvard Business Review. Roels, G. & Su, X. 2014. Optimal design of social comparison effects: Setting reference groups and reference points. Management Science, 60(3): 606–627. Roethlisberger, F. J. & Dickson, W. J. 1939. 
Management and the worker: An account of a research program conducted by the Western Electric Company. Cambridge, MA: Harvard University Press. Schultz, K. L., Juran, D. C. & Boudreau, J. W. 1999. The effects of low inventory on the development of productivity norms. Management Science, 45(12): 1664–1678. Shanock, L. R. & Eisenberger, R. 2006. When supervisors feel supported: relationships with subordinates’ perceived supervisor support, perceived organizational support, and performance. Journal of Applied Psychology, 91(3): 689–695. Shook, E. 2015. Dump Performance Appraisals... and Help Employees Be Their Best. Huffington Post, August 4. Song, H., Tucker, A., Murrell, K. L. & Vinson, D. R. 2015. Adopting coworkers’ best practices: Public relative performance feedback as a tool for standardizing workflow. Harvard Business School Working Paper. Spradley, J. P. 1980. Participant observation. Belmont, CA: Wadsworth. Sprinkle, G. B. 2000. The effect of incentive contracts on learning and performance. Accounting Review, 75(3): 299–326. Staats, B. R., Dai, H., Hofmann, D. & Milkman, K. 2015. Process Compliance and Electronic Monitoring: Empirical Evidence from Hand Hygiene in Healthcare. Wharton Business School Working Paper. Suls, J., Martin, R. & Wheeler, L. 2002. Social comparison: Why, with whom, and with what effect? Current Directions in Psychological Science, 11(5): 159–163. Tan, T. F. & Netessine, S. 2014. When does the devil make work? An empirical study of the impact of workload on worker productivity. Management Science, 60(6): 1574–1593. Tapscott, D. & Ticoll, D. 2003. The naked corporation: How the age of transparency will revolutionize business: xv, 348 p. New York: Free Press. Taylor, M. S., Fisher, C. D. & Ilgen, D. R. 1984. Individual’s reactions to performance feedback in organizations: A control theory perspective. In Rowland, K. M. and Ferris, G. R. (Ed.), Research in Personnel and Human Resources Management, vol. 2: 81–124. Greenwich, CT: JAI Press. 
Van-Dijk, D. & Kluger, A. N. 2004. Feedback sign effect on motivation: Is it moderated by regulatory focus? Applied Psychology, 53(1): 113–135.
VandeWalle, D., Cron, W. L. & Slocum, J. W. 2001. The role of goal orientation following performance feedback. Journal of Applied Psychology, 86(4): 629–640.
Vara, V. 2015. The push against performance reviews. The New Yorker, July 24.
Vroom, V. H. 1964. Work and motivation. New York: Wiley.
Wayne, S. J. & Liden, R. C. 1995. Effects of impression management on performance ratings: A longitudinal study. Academy of Management Journal, 38(1): 232–260.
Wegner, D. M. 1987. Transactive memory: A contemporary analysis of the group mind. In Mullen, B. & Goethals, G. R. (Eds.), Theories of Group Behavior: 185–208. New York: Springer-Verlag.
Wood, J. V. 1989. Theory and research concerning social comparisons of personal attributes. Psychological Bulletin, 106(2): 231–248.

Figure 1. Screenshot of the Mechanic's View of the Daily Scorecard Information

Table 1: Dependent and Independent Variables

Definitions:
% Productive Time: A mechanic's productive hours as a percentage of the total available working hours in a work day. Productive time includes "en route" and "onsite" activities.
% Nonproductive Time: A mechanic's nonproductive hours as a percentage of the total available working hours in a work day. Nonproductive time includes "non-order travel, lunch, break, personal, and ready" time.
Tenure: Number of years the mechanic had worked in the company at the beginning of the intervention.
Age: The age of the mechanic (in years) at the beginning of the intervention.
White: = 1 if the ethnic group of the mechanic is White.
Social Comparison Orientation: The sum of the scores on the 11 social comparison orientation questions in the pre-experimental survey (adjusted for reverse coding).
Pre-intervention Average % Productive Time: The average "% Productive Time" in the pre-intervention period.
Pre-intervention Average % Nonproductive Time: The average "% Nonproductive Time" in the pre-intervention period.
Self-Evaluation: The self-reported percentile of pre-intervention performance on % Productive Time.
Supervisor Support: The sum of the scores on the supervisor support questions in the pre-experimental survey.

Summary statistics (Obs. = 11,120 for every variable):
Variable                                        Mean     Std. dev.   Median
% Productive Time                               59.125   24.360      66.112
% Nonproductive Time                            30.696   23.406      23.471
Tenure                                          18.521    8.281      19.000
Age                                             47.206    8.183      50.000
White                                            0.552    0.497       1.000
Social Comparison Orientation                   35.193    7.653      37.000
Pre-intervention Average % Productive Time      59.554   12.392      62.942
Pre-intervention Average % Nonproductive Time   30.643   12.555      28.642
Self-Evaluation                                 67.331   31.705      81.890
Supervisor Support                              24.164    4.007      24.000

Table 2: Correlations
(Variables numbered as in Table 1: I = % Productive Time, II = % Nonproductive Time, III = Tenure, IV = Age, V = White, VI = Social Comparison Orientation, VII = Pre-intervention Average % Productive Time, VIII = Pre-intervention Average % Nonproductive Time, IX = Self-Evaluation, X = Supervisor Support.)

         I        II       III      IV       V        VI       VII      VIII     IX       X
I      1.000
II    -0.771    1.000
      (0.000)
III   -0.042   -0.003    1.000
      (0.000)  (0.717)
IV    -0.029    0.040    0.706    1.000
      (0.002)  (0.000)  (0.000)
V     -0.138    0.074    0.031   -0.297    1.000
      (0.000)  (0.000)  (0.001)  (0.000)
VI     0.029    0.026   -0.180   -0.049   -0.059    1.000
      (0.002)  (0.006)  (0.000)  (0.000)  (0.000)
VII    0.508   -0.477   -0.079   -0.070   -0.256    0.042    1.000
      (0.000)  (0.000)  (0.000)  (0.000)  (0.000)  (0.000)
VIII  -0.447    0.537    0.008    0.102    0.133    0.069   -0.890    1.000
      (0.000)  (0.000)  (0.421)  (0.000)  (0.000)  (0.000)  (0.000)
IX     0.054   -0.117   -0.191   -0.088   -0.160    0.022    0.115   -0.204    1.000
      (0.000)  (0.000)  (0.000)  (0.000)  (0.000)  (0.020)  (0.000)  (0.000)
X      0.035   -0.052   -0.187   -0.222    0.113    0.482    0.094   -0.102    0.104    1.000
      (0.000)  (0.000)  (0.000)  (0.000)  (0.000)  (0.000)  (0.000)  (0.000)  (0.000)

(Note: Correlation coefficients are reported on the first line of each pair; two-tailed p-values are reported in parentheses on the second.)

Table 3: Does Performance Transparency Improve Productivity?
[The cell-level coefficient estimates and standard errors in Tables 3 through 7 are not legible in this copy; each table's title, column structure, and sample sizes are retained below.]
Columns (1) and (2) regress % Productive Time and % Nonproductive Time on Treat x Post, Treat, Post, Tenure, Age, White, and Male for the full sample (26,993 observations); columns (3) and (4) add Social Comparison, Prior Performance, Self-Evaluation, and Supervisor Support as controls (11,120 observations).

Table 4: Impact of Substituting Traditional Performance Feedback with Performance Transparency, Split by Actual Past Performance
Columns (1)-(2) report % Productive Time for higher and lower past performers; columns (3)-(4) report % Nonproductive Time for the same split.

Table 5: Impact of Substituting Traditional Performance Feedback with Performance Transparency, Split by Whether Perceived Performance is Higher or Lower than Actual Performance
Columns (1)-(2) report % Productive Time for mechanics who over-estimate and under-estimate their own performance (6,783 and 4,337 observations); columns (3)-(4) report % Nonproductive Time for the same split.

Table 6: Supervisor Support as a Moderator for the Impact of Substituting Traditional Performance Feedback with Performance Transparency
Columns (1)-(2) report % Productive Time split by higher versus lower supervisor support (subsamples of 7,159 and 3,961 observations); columns (3)-(4) report % Nonproductive Time for the same split.

Table 7: Social Comparison Orientation as a Moderator for the Impact of Substituting Traditional Performance Feedback with Performance Transparency
Columns (1)-(12) report % Productive Time and % Nonproductive Time for subsamples with high versus low social comparison orientation overall (HSC/LSC; 5,909 and 5,211 observations), high versus low ability-based social comparison (HSCA/LSCA; 7,396 and 3,724 observations), and high versus low opinion-based social comparison (HSCO/LSCO; 6,299 and 4,821 observations).
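The row labels that recur across Tables 3 through 7 (Treat x Post, Treat, Post, and the individual controls) imply a standard difference-in-differences design of the kind discussed by Bertrand, Duflo & Mullainathan (2004). As a sketch only (the exact specification, fixed effects, and standard-error clustering follow the paper's methods section), the estimating equation for mechanic i on work day t takes the form:

```latex
% Difference-in-differences sketch implied by the table row labels.
% Y_{it}: % Productive Time or % Nonproductive Time for mechanic i on day t.
% X_i: individual controls (Tenure, Age, White, Social Comparison,
%      prior performance, Self-Evaluation, Supervisor Support).
Y_{it} = \beta_0
       + \beta_1\,(\mathrm{Treat}_i \times \mathrm{Post}_t)
       + \beta_2\,\mathrm{Treat}_i
       + \beta_3\,\mathrm{Post}_t
       + X_i'\gamma
       + \varepsilon_{it}
```

Here the coefficient of interest is beta_1, the "Treat x Post" estimate reported in each table: the change in the outcome for treated mechanics after the introduction of performance transparency, relative to the contemporaneous change for untreated mechanics.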