Minutes*

Faculty Consultative Committee
Thursday, May 5, 1994
10:00 - 12:00
Room 238 Morrill Hall

Present: Judith Garrard (chair), Carl Adams, John Adams, Lester Drewes, Dan Feeney, James Gremmels, Kenneth Heller, Robert Jones, Morris Kleiner, Geoffrey Maruyama, Cleon Melsa, Michael Steffes, Irwin Rubenstein, Shirley Zimmerman

Regrets: Karen Seashore Louis, Toni McNaron, Mario Bognanno

Absent: none

Guests: Darwin Hendel, Jane Whiteside (Academic Affairs)

Others: Martha Kvanbeck (University Senate), Nancy Livingstone (St. Paul Dispatch & Pioneer Press), Maureen Smith (University Relations)

[In these minutes: Various issues; benchmarks and critical measures]

* These minutes reflect discussion and debate at a meeting of a committee of the University of Minnesota Senate or Twin Cities Campus Assembly; none of the comments, conclusions, or actions reported in these minutes represent the views of, nor are they binding on, the Senate or Assembly, the Administration, or the Board of Regents.

1. Faculty Governance Matters

Professor Garrard convened the meeting at 10:10 and asked for a vote--unanimously accepted--to close the meeting in order to discuss relationships with the Board of Regents. Following an hour-long discussion, it was agreed that Professors Garrard and Adams would raise several issues with President Hasselmo. It was also agreed that a discussion with outgoing Executive Director Barbara Muesing would be appropriate, in order to obtain a better idea of the role of the Regents and the context in which they do their jobs.

Professor Garrard then solicited from her colleagues suggestions for items in her report to the Board of Regents next week. Following that discussion, Committee members deliberated on several matters:

-- On the issue of faculty salaries, one Committee member observed that if one points out to legislators that the University ranks 30th of 30 top research universities, they are not always bothered; they inquire instead how the University compares to institutions ranked 31 to 50. Some appear to believe that a mediocre university is acceptable.

-- Having a mediocre university may be tolerable for legislators, but it is NOT acceptable for faculty. The University cannot rely on its ability to sell to the legislature the notion that this university should be among the top--that is not where their values are. The faculty are saying they want to be among the top ten and must promote an agenda that will accomplish that objective.

-- To some extent the goal of being in the top ten is a holy grail, and the legislature is correct to question it. What is needed is a demonstration of what top ten or leading institutions DO (for example, in providing leaders for society) compared to institutions ranked 31 - 50. The latter group certainly provides leaders, but someone needs to look at the numbers; it IS possible to quantify the differences and why the state needs an institution in the top group. Moreover, it is in part because of legislative initiatives that faculty expectations about making the University better have been raised, and the financial support required to accomplish those expectations must be provided. To raise expectations, and then bring them crashing down, is far worse than never to have raised them at all.

There appeared to be agreement among Committee members that comparative information about "great" and "mediocre" universities should be obtained--and the "consequences" of an institution BEING great versus mediocre.
Professor Garrard then reported on discussions taking place about the possibility of a new publication for faculty members, similar in format to a newspaper. This issue will come to the Committee in the near future, she said.

2. Benchmarks and Critical Measures

Professor Garrard next welcomed Dr. Darwin Hendel and Dr. Jane Whiteside from Academic Affairs to discuss benchmarks and critical measures.

Dr. Hendel began by telling the Committee that he and Dr. Whiteside are part of a three-person team (along with Acting Associate Vice President George Copa) leading the effort to develop institutional critical measures for evaluating accomplishment of the U2000 objectives. These measures will be reported to the Board of Regents for information in June and brought to the Board for action in July; the discussion with the Senate committees, among others, marks the beginning of the internal consultation process. It will be followed by consultation with external groups later in May.

Dr. Hendel then reviewed the handouts that had been provided to Committee members, including a two-page list of 21 proposed critical measures. He said that they had been developed by considering the measures proposed in the college strategic planning documents and budgets, by looking at information from other institutions, and by considering measures proposed by other University groups (such as the Research Strategic Planning Committee report). The 21 measures are thus a distillation of what many others have identified as important to their mission and the mission of the University. The proposed measures vary considerably in the specificity of their definition; some are relatively easy to establish as concrete measures, while others are, at present, more conceptual in nature and require further definition.

One Committee member applauded the effort to develop the measures but suggested it will be VERY important to identify clearly to whom the measures are aimed--the Board of Regents? Internal groups? The public? In addition, it must be clear what judgments are being sustained or what decisions will be made. Is the intent to measure progress on U2000? If so, that is fine, but it must be made clear. How is the information to be used? If that is not made explicit, people will have all kinds of ideas about how to use the information or who should use it. The LEVEL of analysis must also be clear--will it be for departmental action or for the Regents to assess progress on U2000?

It was also suggested, recalling the comments on comparing the consequences of having a "great" or "mediocre" university, that among the REASONS for using critical measures, FIRST should be that they "serve as a link between planning, performance, evaluation, and resource allocation."

One of the concerns expressed by some faculty, pointed out one Committee member, is that all of these measures are quantitative--but there is much more going on at the University than can be summed up in quantitative measures. What is being said when this complaint is made? Dr. Hendel responded that the principles used to develop the critical measures have included from the beginning the understanding that measures must be both quantitative and qualitative. The impact of the University on the state may not always be easily quantifiable (e.g., the assistance of University faculty in responding to last summer's floods).
The concern is that much work will go into counting things that may not be good measures of the University's quality or productivity.

The institution has a mission and a set of measures reflecting how the mission is being served, observed one Committee member. Faculty are rewarded individually for their performance within their units. Has Dr. Hendel seen instances where people were being held accountable for their neglect of the institution's mission? That level--the basis for faculty contributions within units--has not been reached, he responded, although some discussions are taking place.

When these measures are refined, it was then said, they will constitute marching orders on what the University will be doing. Dr. Hendel agreed that it is important to know that there will be a trickle-down effect from these 21 critical measures. In some areas that will be appropriate, but at present they are only looking at overall measures, and it is not their intent that these measures be used to judge individual faculty. But how can they NOT be so used, it was inquired? If there is a divergence between the individual faculty member and what the University says should be done, where will the tension be resolved? Is part of the concern, asked another, that units A and B will be compared on the basis of these measures? Absolutely, it was said. That must be the INTENT, if there is to be evaluation of how the University's resources are being used.

One Committee member then observed that there is much use of the term "workload" for faculty effort, and not as much attention to results. Faculty are good at describing what they do but not at describing the outcomes. Another said there need to be measures of output--high quality or low--and measures of value-added. What does the University contribute to a student? The University starts with a different group of students than does Harvard, for example. That should be picked up in one of the measures, Dr. Hendel said, one that calls for measuring the readiness of students for college and that tracks the diversity of students who enter.

Discussion then turned to the measure of the student experience, which includes both student satisfaction and faculty involvement in undergraduate teaching. One Committee member said the measures ignore the work of Alexander Astin [a professor at UCLA and a leading researcher on student outcomes in higher education] about what is important to the undergraduate experience. Another agreed, noting also that the environment where teaching takes place is unpleasant (the measures do not take into account "the dump we have to work in"). If there were comparisons to other institutions, the contrast would be sharp, it was said. There are objective measures that could be used, it was said, and the meaning of "student satisfaction" is unclear. One could hope students leave with RESPECT for the University, even while recognizing that improvements could be made. The measures also ignore the element of leadership--one of the criticisms made by constituents during the U2000 discussions was that University graduates do not become leaders. There is also nothing on higher-order learning and the knowledge base that students gain. The measures of post-graduate experience may address this, it was said, but the point isn't clear.

The danger of benchmarks such as these, one Committee member contended, is that they drive the agenda; if they are wrong, the University could go in the wrong directions.
Points were also made about a number of the other measures:

-- Apropos the measure of the uniqueness of the baccalaureate experience at the University (e.g., the number of students involved in research activities, internships, and international experiences), one Committee member pointed out that other institutions in the state offer such experiences. Uniqueness may not be the right measure, it was said. Moreover, said another, if that is an important measure, research and scholarship should also be judged by their impact on teaching--the effect should go both ways. The fact that one can be taught by Nobel laureates and those close to the forefront of their fields is what is important, said another Committee member.

-- In terms of the number of credits students take to obtain their degree, asked one Committee member, is it so bad that they may take more than what is required? The goal is educated people. The University needs to understand, with each measure, in what direction it wants to go, Dr. Hendel said. Is taking 187 credits, rather than 180, bad? This may come down to the question of efficiency versus what is learned. What might be a better measure is faculty contact hours; in some cases, students with a 5-credit course may have 8-9 contact hours; in others, carrying 12 credits, they may have only 7 or 8. This is an important number, especially when so many students learn orally rather than by reading. That is related to graduation rates, Dr. Hendel said, and also to accreditation requirements, over which the University has limited control.

-- Dr. Hendel pointed out that use of measures of the reputation of graduate and professional programs is suggested with some trepidation, given the imperfect ways in which reputations are determined.

-- Of the measures of outreach and public service, one Committee member pointed out that some faculty ask why they should respond to a request from the Department of Health when they could be working on a grant proposal to NIH. In addition, said another, the focus should not be solely on the State of Minnesota; this is a regional and national institution. And why, asked another, are talks and presentations not included, when they are just as easily measured?

-- Inclusion of interdisciplinary and applied programs and activities as a measure is dangerous, said one Committee member, if the benchmarks become a driving force, unless there is also a corresponding measure for DISCIPLINARY activities.

-- If the external market emphasizes a particular activity (for example, research), there may be a divergence between the emphasis placed on it by the University and by the external market--in other words, internal and external incentives may not be the same. This possible divergence in emphases could create conflict for faculty members or departments.

Professor Garrard thanked Dr. Hendel and Dr. Whiteside for joining the meeting.

3. Approval of Agenda

Professor Garrard then called for approval of the draft May 19 Faculty Senate docket; it was unanimously approved. She then adjourned the meeting at 12:10.

-- Gary Engstrand
University of Minnesota