A Framework for Thinking about Feedback and Evaluation & Learning

View 1

Learning requires feedback. In order to learn we need to know whether we are moving towards or away from our desired goals. Feedback provides information about the gap between where we are and where we want to be.

I'll use the terms "feedback" and "evaluation" interchangeably, with full awareness of the entailed abuse of ordinary language. In common parlance "evaluation" has a normative dimension. We do sometimes speak of "negative feedback," and on the surface this too seems to entail a normative judgment. This is a misinterpretation. "Negative feedback" is another phrase for "balancing feedback," feedback about our position with respect to some goal. This is the sort of feedback on which learning rests. Its opposite, positive feedback, is more appropriately labeled "reinforcing feedback": the sort of information that moves us to continue doing more of the same, regardless of the consequences.

In the sense in which I will use the term, "evaluation" is information about our distance from a particular goal. We evaluate our progress towards a desired end. When we evaluate a particular action we are providing, from our own point of view, feedback about its success as judged by the goals to which the action was directed.

View 2

Whether learning requires feedback – and the type of feedback it requires – depends upon what learning is. If learning is the increasingly effective ability to achieve some goal or solve a (type of) problem, then feedback is critical. We usually have to know what we're doing wrong in order to figure out how to do it right. Mistakes are an essential and inevitable part of this sort of learning.

But learning does not have to be goal-directed – at least not directed to some sort of external, actionable goal. Not all knowledge is "actionable knowledge." A different view of learning is that it leads to understanding rather than goal-dependent "success." Understanding may or may not have specific practical consequences. It involves drawing new connections and discovering (or creating) new relationships between aspects of our system of concepts and beliefs. In doing so we necessarily change that web of beliefs and assumptions. We modify, adapt, extend and build new concepts.

In this case the role of feedback is not to inform us about the gap between where we are and where we want to be. There is no previously defined "where we want to be" to be informed about. The role of learning, on this view, is to create a new and more productive understanding, something that did not previously exist. Feedback can tell us whether our attempts are interesting, consistent, traditional, and creative, but not whether they're approaching some antecedently recognized template. Feedback can also play another role: it can prime the search for understanding with new directions based upon different perspectives; it can widen the space of options we consider and force us to view our situation or problem in different and unexpected ways. One way in which this can happen is if feedback forces us to surface and question our own mental models.
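Before contrasting the two views further, the balancing/reinforcing distinction that View 1 leans on can be pictured with the simple goal-seeking loops of systems dynamics. The equations below are only an illustrative sketch, with x_t the current state, G the goal, and k and r adjustment rates chosen purely for illustration:

    x_{t+1} = x_t + k (G - x_t),   0 < k < 1     (balancing, or "negative," feedback)
    x_{t+1} = x_t + r x_t,         r > 0         (reinforcing, or "positive," feedback)

In the first loop the gap from the goal shrinks by a factor of (1 - k) each period, so the system homes in on G; in the second loop there is no goal at all, and the state simply keeps amplifying whatever it is already doing.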
These two views thus reflect different approaches to learning and different roles that feedback can play in the process. The first view is a sort of Platonic approach in which there is a specific goal that can be known, and it is the learner's job to discover and reach it. Feedback functions as a sort of intellectual radar that tells the learner how close he is to the ideal of knowledge at any given time. The game is over when the goal is realized. The second view is a sort of Nietzschean anti-realist approach according to which learning is an act of intellectual creation. Feedback functions as a catalyst for the process, enriching (or impoverishing) the environment in which it takes place. The two views also reflect the distinction between single- and double-loop learning.[1] Different views may be more appropriate for different domains (e.g., science and business vs. literature and art) and different purposes.

Perspective

The role of feedback is thus relative to the type of learning it is invoked to promote. But it's also relative to the perspective from which it's offered. "What's the point of providing feedback?" needs to be supplemented with "for whom?" as well as "for what?" Feedback plays very different roles for the receiver and the provider, and the point of offering feedback may differ significantly from the point of requesting it.[2]

The two views discussed above both reflect the learner's perspective: give me guidance or give me ideas. Of course, there may be other motives less directly, but no less importantly, tied to learning: emotional support, encouragement, personal validation, establishing mutual interest, etc. For the provider the situation looks quite different. What is he or she offering and getting out of the process? Unless there's a clear understanding about what the feedback is needed for, it's unlikely that it will serve the intended purpose. The provider must have a clear understanding of the rules of the feedback game as initiated by the requestor. This requires that the requestor have a very clear understanding of what he or she expects to achieve through the request.

What Feedback Ought to Be

In a normal classroom setting, evaluation is a technically simple matter. It is a function whose domain is the set of students in the class and whose range is a subset of the real numbers, the grades. A grade is somehow supposed to provide information about the gap between some current state of the student and the desired goal. The question to ask here is not "What's wrong with this picture?" but "What's right with it?" For many reasons it seems, at best, pointless. More often, it's positively counterproductive.

If we think of individuals, groups and classes as learning systems we're forced to admit the obvious: learning is a process that develops over time. One positive role evaluation can play is to help the system perform more effectively. To do this it must satisfy several conditions: it must be timely, it must be accurate, it must be complete,[3] and it must be relevant (i.e., to the situation and goals). If any of these conditions fails, the feedback will inhibit the development of the system. This is not to say that appropriate feedback guarantees success. It does not. It's also not to say that inadequate feedback inevitably results in system collapse. It doesn't. But a healthy and sustainable system requires a feedback mechanism that allows it to accurately situate itself within its environment in a reasonably timely manner. Systematically delayed or distorted feedback, almost by definition, guarantees extinction.
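As a toy illustration of why timeliness matters, consider a goal-seeking loop that acts on stale reports of the gap. This is a minimal sketch in Python; the function name, parameters and numbers are arbitrary, chosen only for illustration:

# Toy goal-seeking loop (illustration only; parameters are arbitrary).
# Each period the actor adjusts toward a goal using the reported gap;
# "delay" models feedback that arrives several periods late.
def run(goal=10.0, start=0.0, gain=0.6, delay=0, steps=30):
    history = [start] * (delay + 1)          # states the actor can "see"
    x = start
    for _ in range(steps):
        observed = history[-(delay + 1)]     # possibly stale report of the state
        x = x + gain * (goal - observed)     # close part of the (reported) gap
        history.append(x)
    return history

timely = run(delay=0)
late = run(delay=4)
print("timely feedback :", [round(v, 1) for v in timely[-5:]])   # settles near the goal
print("delayed feedback:", [round(v, 1) for v in late[-5:]])     # overshoots; swings keep growing

Nothing hangs on the particular numbers; the point is the one behind the Meadows injunction cited in the notes: the same corrective rule that is stabilizing when information is timely becomes destabilizing when the information is systematically delayed.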
So much for the background.

Who Gets Evaluated … By Whom … and How?

In OTL we have three levels in our learning system: individual, group and class. Entities at these levels (21 participants, 19 students, 2 instructors, 1 LOST, 6 LOGs) interact in a number of very complex ways: individuals interact one-on-one with other individuals; team members interact with groups and sub-groups; groups interact with other groups; and all interact to somehow give rise to that mysterious entity the "class," or whole organization.

On the general principle that feedback is information about the environment that helps us navigate, we can define a matrix of feedback types. You can provide feedback to your group and to the class. Conversely, the class and your group can provide information to you. You and your group(s) can provide feedback to other groups. Finally, each level – individual, group, class – can do a self-evaluation of its own activities. In the jargon of corporate America, this amounts to a sort of global and non-hierarchical 360° evaluation of all the entities (i.e., sub-systems) that comprise the system. (See the table below.)

So much for who gets evaluated by whom. A more challenging question is how the evaluation is done. In developing an evaluation procedure I suggest we begin with what Collins & Porras refer to as the "core ideology" of our organization. In any organization, success at the organizational level means, at least in part, success in realizing the organization's fundamental purpose. But we are not just any organization. Part of our purpose is to become a learning organization. This immediately yields several other broad parameters that guide evaluation.

Learning is a process. A learning organization is one in which the process of learning – scanning the environment, surfacing assumptions, modifying actions – is no less important than what is ultimately produced. In a learning organization how the organization behaves – its core values – is no less important in evaluating its performance than what it delivers. To put this another way, one that eliminates the process/product dichotomy: in a learning organization, evaluating what is accomplished must include a consideration of how it is accomplished. Process and deliverables are, from the standpoint of evaluation, inseparable.

Second, in a learning organization entities at all levels do what they do not because they are specifically told to do it. They make the choices they do because they believe these decisions will help the organization realize its purpose. A hallmark of a learning organization is the freedom it offers to individuals. Values and purpose (and perhaps a more localized envisioned future) provide a general context in which the organization unfolds. There is no specific blueprint for action. There is no commander in chief who constantly monitors the situation. The purpose and envisioned future are in command. Everything else is out of control.[4] This is life in the chaordic lane.

In order for this to work, however, individuals and groups must share a common understanding of the organization's purpose, and every member entity's goals and values must, in a very general way, be consistent with those of the organization. This means that evaluation criteria at every level – individual, group and class – must reflect the overall purpose and the core values that define the organization as a whole.[5] If they do not, sub-optimization is inevitable, learning across the organization will suffer and, if the learning disability is severe enough, the organization will fail.
Evaluation in OTL

To recap: if we want to develop useful evaluation mechanisms we first need to identify the characteristics – the signposts – that indicate the direction in which we're moving at all three levels. For each level we need to have at least very tentative answers to three questions: Where are we going? How will we know we're there? What sorts of things shall we look at to decide how close we are? In the case of a learning organization these answers must involve not only what is achieved (e.g., the quality of the deliverables) but how it is achieved (the process, "task two").

Deliverables are a snapshot. They are static, a slice of time. Learning organizations are not. They are entities that can adapt and grow. These dynamic qualities are reflected as much in the processes of the organization as in its products, if not more. They're made manifest in the way people relate, the sorts of control mechanisms in place, the degrees of freedom people have, the way the culture of the organization evolves, etc. And all of these are most directly visible in the structure and processes of the organization. These tell us the degree to which the purpose, values and concept of the organization are implemented in its practice.

Practically speaking, as our basis for evaluation we will need to translate the "How will we know we're there?" question (now broadly understood to include process as well as deliverable) to each level of the organization. For each cell in the table below there should therefore be both product and process (task 1 and task 2) questions that help us decide whether we are moving towards or away from a learning organization.

Note that the discussion of evaluation in this concluding section is tied to the first role of feedback – View 1 above – assessing the degree of success in achieving certain goals. It may at first glance appear more difficult to develop an evaluation instrument based on the second sort of feedback – View 2 above – according to which feedback helps us create a new understanding of the world. I suggest that this second role of feedback is part of what it means to be an organization that learns. To the degree that our evaluation criteria satisfactorily address our values and processes, they will also address the effectiveness of feedback in this second sense. But this is a topic for another discussion.

Summary Table: Who Gets Evaluated … By Whom … and How?

Rows are the TARGET of the feedback; columns are its SOURCE. Each cell lists what feedback from [SOURCE] about [TARGET] addresses.

Row 1 – Target: Individual
  Column 1 – Source: Individual (self-evaluation): Individual goals; personal growth
  Column 2 – Source: Group: Membership behavior; contribution to group goals/purpose
  Column 3 – Source: Class (Organization): Membership behavior; contribution to org goals/purpose

Row 2 – Target: Group
  Column 1 – Source: Individual: Support; receptiveness; openness
  Column 2 – Source: Group (self-evaluation): I. Group goals/productivity; II. Group process – learning org: collaboration, risk encouraged, failure embraced
  Column 3 – Source: Class (Organization): Membership behavior; contribution to org goals/purpose

Row 3 – Target: Class (Organization)
  Column 1 – Source: Individual: Support; receptiveness; openness
  Column 2 – Source: Group: Support; forum; resources; new ideas
  Column 3 – Source: Class (self-evaluation): I. Class goals – learning org; II. Class process – core values/learning org: collaboration, risk encouraged, failure embraced

Example – Row 1, Column 2 (Target: Individual, Source: Group): Feedback from the Group about an Individual

This cell represents evaluation of an Individual by the Group (feedback from the group about an individual). The relevant goals are those of the group, the source of the feedback. In addition to specific group deliverables, process criteria for group membership and participation may be relevant evaluation items.

Example – Row 2, Column 1 (Target: Group, Source: Individual): Feedback from an Individual about a Group

This cell represents evaluation of a Group (e.g., a LOG or LOST) by an Individual (feedback from the individual about the group). In this case the individual is providing the group with information about the degree to which the group has supported or inhibited the individual's contribution to the group's success (e.g., openness, receptivity to ideas, personal support) and, optionally, the degree to which the group has supported his or her individual goals.
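Purely as an illustrative sketch (the encoding and names are hypothetical, not part of the framework itself), the summary table can be treated as a lookup from (source, target) pairs to the criteria a feedback instrument might ask about:

# Illustrative sketch only: the summary table as a lookup from
# (source, target) pairs to candidate evaluation criteria.
CRITERIA = {
    ("Individual", "Individual"): ["Individual goals", "Personal growth"],
    ("Group", "Individual"): ["Membership behavior", "Contribution to group goals/purpose"],
    ("Class", "Individual"): ["Membership behavior", "Contribution to org goals/purpose"],
    ("Individual", "Group"): ["Support", "Receptiveness", "Openness"],
    ("Group", "Group"): ["Group goals/productivity",
                         "Group process: collaboration, risk encouraged, failure embraced"],
    ("Class", "Group"): ["Membership behavior", "Contribution to org goals/purpose"],
    ("Individual", "Class"): ["Support", "Receptiveness", "Openness"],
    ("Group", "Class"): ["Support", "Forum", "Resources", "New ideas"],
    ("Class", "Class"): ["Class goals (learning org)",
                         "Class process: core values, collaboration, risk encouraged, failure embraced"],
}

def criteria(source, target):
    """Criteria that feedback from `source` about `target` might address."""
    return CRITERIA[(source, target)]

# The Row 1, Column 2 cell discussed in the first example above:
print(criteria("Group", "Individual"))

Each entry would still need to be expanded into the product and process (task 1 and task 2) questions called for above; the sketch only fixes who is asking about whom, and against which cell of the table.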
Notes

1. Also referred to as "adaptive" and "generative" learning by Senge.
2. Cf. Seashore, "The Future of Feedback."
3. Cf. Meadows, "Thou shalt not distort, delay or sequester information," in "Places to Intervene."
4. To steal a phrase from … quote in Malcolm Gladwell's Blink.
5. See comments about the holographic view of the organization in Morgan.