
Adam Crawford

12/10/13

SAE767 Final

Part One: Case Study

Lakeville College is a small liberal arts college located in the northwestern United States with a population of approximately 3,000 students. Lakeville College has a rich history of instilling values of community, empathy, and introspection in its students. Its Volunteerism and Service Learning Center, Institute for Social Change Leadership Development, and campus food pantry program are crown jewels in Lakeville's array of programs designed to create, as Dr. Henry Eastman, President of Lakeville College, puts it, "empathic innovators."

There is one area, however, in which Lakeville College appears to be falling short of creating empathic innovators. For the last several years, the first-year experience program has had a reputation as a mostly unsuccessful program. Many students have said in passing that the "Introduction to the Lakeville Experience" courses (ILE101, ILE102), which all students are required to take in their first and second semesters, are pointless, boring, or an easy A. Recently, a respected senior at Lakeville wrote an op-ed for the school newspaper arguing that because the class produces no benefits for students, it should be abolished or drastically restructured.

The Student Government Association recently approved a resolution supporting this.

Dr. Eastman has asked you—Dr. Lana Winters, Director of the First Year Experience Program—to conduct a year-long assessment of how ILE101 and ILE102 are meeting their own learning outcome (to instill empathy in students), in order to address student (and now faculty) concerns that the courses are not contributing to the university's mission of creating empathic innovators.

The results of this assessment will be used to decide what to do with the courses—keep them, change them, or remove them.

Step 1: Define the Problem

The first-year experience courses (ILE101 & ILE102) at Lakeville College are not meeting their intended learning outcome of instilling empathy in students. This outcome ties directly into the university's mission, and a large share of the university's resources goes toward implementing these courses. To address student demands and to maintain the university's reputation as a generator of "empathic innovators," something must be done about these courses.

Step 2: Determine the Purpose of the Study

This study will assess the extent to which students become more empathic after taking ILE101/102. In addition, it will investigate which elements of the courses contribute positively to empathy development, and which do not.

Step 3: Determine Where to Get the Information Needed

The information will need to come from the students currently taking ILE101 & 102. Since this is an outcomes assessment, and no current data exist on empathy development at this stage of students' careers at Lakeville College, the information will need to come directly from these students.

Step 4: Determine the Best Assessment Method

To measure empathy development as a whole, I will use quantitative surveys to measure indicators of empathy development in students. Valid, pre-existing instruments will be used to measure empathy. This method will give a clear indication of whether these courses contributed, at least partially, to empathy development in students. I did consider using a rubric to measure empathy in essays assigned in class, similar to Wilson's study of empathy development through service learning experiences (see Documentation). Still, as Wilson indicates, this is a time-consuming method usually done with a small sample. The president wants a sweeping assessment of the entire ILE program, and a quantitative analysis, which can be tabulated and compared efficiently, serves that purpose better.

In order to determine which individual elements of the ILE101/102 curriculum contribute positively to empathy development, details are important. While a quantitative measurement of empathy development after each activity or assignment might be valid, it would be overly cumbersome and unrealistic to accomplish. The activities and assignments that do have a positive impact on empathy development should stand out in students' memories. Therefore, a limited number of focus groups designed to identify which elements of the course made the biggest impact on students as empathic individuals is the best method for this portion of the assessment.

Step 5: Determine Whom to Study

The students taking the first-year experience courses during the assessment will be the study population. These students' development of empathy (or lack thereof) is the entire focus of the study, and the only way to measure it is to assess the students themselves.

Step 6: Determine How the Data Will Be Collected

For the quantitative instruments, data will be collected at three points during the year: at the beginning of the academic year (before students have taken ILE101), at the end of the first semester (after completing ILE101), and at the end of the second semester (after completing ILE102). The surveys will be administered during one of the first class periods in ILE101, one of the last class periods in ILE101, and one of the last class periods in ILE102. This design yields a pre-test, outcomes for ILE101 alone, and the combined outcomes of ILE101 and ILE102.

For the focus groups, data will be collected by a note taker who will be present during the sessions. This person will record what students say, noting consensus and dissent. These notes will serve as a rough sketch of how students feel about the ILE101/102 curriculum.

Step 7: Determine What Instruments Will be Used

A literature review revealed that few pre-existing tools for measuring empathy would be valid and useful for this study. Shapiro, Morrison, & Boker (see Documentation) used two measures when assessing the empathy of medical students who participated in a poetry program. The first is the Empathy Construct Rating Scale (http://www.hrdpress.com/Empathy-Construct-Rating-Scale-5-Pack-ECRS), an 84-question, 6-point rating scale that measures such things as the ability to listen, reflect, and communicate with empathy. The second is the Balanced Emotional Empathy Scale (http://www.kaaj.com/psych/scales/emp.html), which measures the extent to which an individual can feel the pain or enjoyment of another individual.

In addition to these tools, I will develop a semi-structured outline of questions for leading the student focus groups. These questions will be designed to assess which elements of the classes did or did not contribute to any empathy growth that the previous measures indicate.

Step 8: Determine Who Should Collect the Data

ILE101/102 instructors will administer the quantitative surveys. As Alex Owens described in her presentation, it can be difficult to rely on others to implement your instrument in a time-effective manner. With this in mind, I will have Dr. Eastman communicate the importance of completing these surveys to the instructors to ensure participation. I will conduct the focus groups myself; focus groups take skill and preparation to administer, so I want to limit the number of people running them to ensure quality.

Step 9: Determine How the Data Will Be Analyzed

The selected instruments come with their own instructions for calculating results. Results from the beginning of the first semester will be compared with those from the end of the first semester and the end of the second semester to observe changes in students' levels of empathy. For the focus groups, I will review the notes from the sessions to pull out prominent themes, then construct these into a narrative discussing how each element of the current curriculum does or does not develop empathic students.
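The pre/post comparison described above can be sketched in a few lines of code. The following is a minimal illustration only, using made-up scores for five hypothetical students (the scales' own scoring instructions would govern how raw survey responses become these numbers). It computes the mean per-student change between two administrations and a paired t-statistic for that change.

```python
from statistics import mean, stdev
from math import sqrt

def mean_change(pre, post):
    """Average per-student change in empathy score between two administrations."""
    diffs = [b - a for a, b in zip(pre, post)]
    return mean(diffs)

def paired_t(pre, post):
    """Paired t-statistic for the pre/post differences (n - 1 degrees of freedom)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))

# Hypothetical scores for five students at the three collection points.
pre      = [40, 52, 47, 38, 45]   # start of ILE101 (pre-test)
post_101 = [44, 55, 50, 41, 47]   # end of ILE101
post_102 = [48, 58, 52, 45, 50]   # end of ILE102

print(mean_change(pre, post_101))  # average gain over ILE101 alone
print(mean_change(pre, post_102))  # cumulative gain over ILE101 + ILE102
print(paired_t(pre, post_101))     # strength of evidence for the ILE101 gain
```

In practice, the same comparison would be run on the full cohort's scale scores, with the t-statistic (or an equivalent test) indicating whether any observed gain is larger than chance variation.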

Step 10: Determine the Implications of the Study for Policy and Practice

This study will inform university leadership about the extent to which the current first-year experience program is succeeding (or failing), and which elements of the program contribute to that success (or failure). Moving forward, faculty and administrators will be able to keep what works and change or remove what does not, creating a purposefully designed, effective curriculum that students will find more engaging and rewarding.

Step 11: Report the Results Effectively

Reporting results is one of the most important steps in assessment. When reporting assessment results, it is important to keep in mind who the stakeholders are for the area being assessed. Students, faculty, staff, administrators, and community members can all be stakeholders in a university assessment project. In this situation, obvious stakeholders include Dr. Eastman (LC President) and the Student Government Association. Still, ILE faculty and the general student body will also be invested in finding out what is or is not working in this university-wide program.

As Zachery Holder explained in his presentation, IRB approval is an important element to consider in an assessment project, and can potentially throw the entire project into a tailspin if not handled proactively. With that in mind, I will work with Dr. Eastman to determine whether this information will be shared externally, and if so, work to receive IRB approval as soon as possible.

Part Two: Assessment Memo

MEMO

To: Dr. Dee Cisco

From: Adam Crawford

CC: Dr. Belinda McCarthy

Subject: MSU Assessment Culture and Proposed Initiatives

Missouri State University has in recent years taken a fairly common approach to assessment. In your words, we are "data rich and information poor." We've conducted surveys, facilitated focus groups, and developed rubrics—without a whole lot to show for it. Moving forward, I encourage you to identify and articulate what Student Affairs as a unit should be accomplishing. This will help set the stage for an assessment plan: in order to assess, you must know what you are assessing. With a clear mission and vision in mind, I suggest the following:

System Implementation

All units within student affairs (SA) should shift their focus from satisfaction and service delivery to student learning, student development, and outcomes. It is not enough for a student to be satisfied with a program or service. Satisfaction does not necessarily mean that students are engaging in critical thinking, personal development, or substantive learning; it means that they are comfortable. To achieve your mission as a division, the division must focus squarely on student learning and development.

With this paradigm shift, your division can begin to engage in outcomes-based assessment to determine whether students really are gaining the identity development and critical thinking skills we purport to teach them. The assessment plan should be consistent, widely known and understood, thorough, and manageable, and it should produce tangible results. Focus your efforts on one or two departments at a time, while preparing the remaining units for their own assessments.

System Evaluation

How do you know your assessment plan is working? When your staff members' views on assessment reflect trust, encouragement, and hope. In a successful assessment plan, assessment becomes cyclical and drives positive contributions and changes within the unit. The system will also reflect on itself, changing to meet new realities in technology, methodology, and the needs of the campus and division.

Barriers to Effective Implementation

One of the biggest barriers to effectively implementing an assessment plan is buy-in. How do you motivate an entire division—whose units are often understaffed, overworked, and too focused on their own projects to see the value in a robust assessment plan—to contribute to this new system? I believe there are three ways.

First, you must make the incentives of this assessment plan clear and tangible for SA staff. For example, if Campus Recreation can demonstrate how its programs effectively contribute to student learning and development, you can reward it with additional resources to expand existing projects or create new ones. If the Office of Student Engagement finds that it is not meeting its learning outcomes, you will provide the resources and guidance to overhaul what is not working and set OSE on a successful path. In both scenarios, positive change comes from effective assessment.

Second, you must instill assessment into the everyday work of your SA staff. Assessment is often perceived as a big, overwhelming undertaking that takes too much time and effort to do well. Instead, demonstrate and guide your units in how assessment is really about the little things: it can be spread out over a long period of time and broken into manageable pieces. This makes the actual doing of assessment seem possible.

Third, you must ensure that your staff members have adequate professional development in order to design, implement, and report assessment projects that are reliable, valid, and useful. This includes encouragement and financial support for attending conferences, seminars, trainings, and webinars related to assessment.

Collaboration with Faculty and Academic Affairs

Assessment provides a rich opportunity for student affairs professionals to partner with academic affairs. Why? If faculty can see how student affairs as a field actively and positively contributes to student learning, they will understand how we can become their greatest asset in ensuring that students learn both inside and outside the classroom. A collaboration between academic and student affairs, with assessment serving as the foundation, gives students a robust, well-rounded educational experience in which their academic, social, living, and extracurricular experiences all connect in a way that develops them holistically.
