STUDENT FEEDBACK IN SOCIAL WORK EDUCATION: AN ASSESSMENT OF
CALIFORNIA STATE UNIVERSITY SACRAMENTO’S
DIVISION OF SOCIAL WORK
A Thesis
Presented to the faculty of the Division of Social Work
California State University, Sacramento
Submitted in partial satisfaction of
the requirements for the degree of
MASTER OF SOCIAL WORK
by
Rachel Sarah Marine
SPRING
2014
STUDENT FEEDBACK IN SOCIAL WORK EDUCATION: AN ASSESSMENT OF
CALIFORNIA STATE UNIVERSITY SACRAMENTO’S
DIVISION OF SOCIAL WORK
A Thesis
by
Rachel Sarah Marine
Approved by:
__________________________, Committee Chair
Dr. Jude Antonyappan
__________________________, Second Reader
Dr. Dale Russell
____________________________
Date
Student: Rachel Sarah Marine
I certify that this student has met the requirements for format contained in the University
format manual, and that this thesis is suitable for shelving in the Library and credit is to
be awarded for the thesis.
__________________________, Graduate Coordinator
Dale Russell, Ed.D., LCSW
Division of Social Work
___________________
Date
Abstract
of
STUDENT FEEDBACK IN SOCIAL WORK EDUCATION:
AN ASSESSMENT OF CALIFORNIA STATE UNIVERSITY SACRAMENTO’S
DIVISION OF SOCIAL WORK
by
Rachel Sarah Marine
The purpose of the study was to assess the student feedback component of the implicit
curriculum in the Division of Social Work at California State University Sacramento.
The study examined the use and perceived efficacy of seven identified feedback channels
and sought to identify barriers to effective feedback and suggestions to improve feedback
within the Division. The study surveyed a non-probability, purposive sample of 73
Master of Social Work (MSW) students. Two key informant interviews with faculty
members were also conducted for this study. The tools for data collection included a
structured questionnaire for the students and an interview guide for the key informants.
Study findings indicate that teaching evaluations were the most commonly used feedback
channel, followed by in-class discussions, email/online communication, and informal
conversations. Participation in governance was the least used feedback channel, with
80% of respondents having never given feedback through this channel, while
participation in both social work and other student organizations was listed as rarely
used by students as channels for providing feedback. Response trends reflect that the
more commonly used feedback channels, except for teaching evaluations, were perceived
as the most efficacious (taken into consideration by, important to, and acted upon by the
Division faculty and/or staff) by the students. There was a moderate correlation (r = .412)
between frequency of use and perceived efficacy of feedback channels.
Student responses to open-ended questions and key informant interviews revealed some
of the barriers to participation in feedback channels and to efficacious feedback. Barriers
included students' fear of being penalized through grades or viewed negatively, a culture
or environment in which students are not comfortable providing feedback, limited time
and resources, and a lack of accountability, both of faculty and staff for responding to
feedback and of students for providing constructive feedback.
Suggestions to improve student feedback in social work education included implementing
policies that formalize existing feedback channels, make students aware of them, and
emphasize how the feedback students provide is utilized.
Specific recommendations based on the findings include facilitation of communication between
students and Division faculty and staff, improved accountability, and promotion of
increased participation in feedback channels. The research findings strongly indicate the
need for increased discourse between students and faculty about student feedback within
the MSW program at CSUS. The study proposes a targeted instrument for the assessment
of student feedback that schools of social work can maintain for purposes of assessment
of the student feedback component of the implicit curriculum.
_______________________, Committee Chair
Dr. Jude Antonyappan
_______________________, Date
TABLE OF CONTENTS
Page
List of Tables .............................................................................................................. ix
List of Figures .............................................................................................................. xi
Chapter
1. STATEMENT OF THE PROBLEM ………………………...……………………1
Background of the Problem ……… ..................................................................2
Statement of the Research Problem ...................................................... 5
2. REVIEW OF THE LITERATURE……………………………………………...13
Implicit Curriculum…………………………………………………………...14
Implicit Curriculum and Social Work Values………………………...15
Theoretical Frameworks and Perspectives….…………………………….......17
Critical theory and Empowerment frameworks………………………17
Person/Student Centered Learning………………………………...…19
Systems Theory……………………………………………………….21
The Importance of Student Feedback in Curriculum Development…….........23
Forms of Student Feedback…………………………………………………..24
Teaching Evaluations…………………………………………………24
Student Participation in Governance…………………………………27
Barriers to Student Involvement………………………………………....…...29
Methods of Assessment for Implicit Curriculum and Student Feedback….....31
Conclusion………...……………………………………………………….....31
3. METHODS…………………………………………………………………….…34
Study Objectives ……………………………………………………………..34
Study Design …………………………………………………………………35
Sampling Procedures ………………………………………………...………36
Data Collection Procedures ……………………………………………...…..37
Instruments …………………………………………………………………..38
Data Analysis ………………………………………………………………...40
Quantitative Data Analysis…………………………………….……..40
Qualitative Data Analysis………………….…………………………41
Protection of Human Subjects …………………………….…………………42
Limitations……………………………………………………………………43
4. STUDY FINDINGS AND DISCUSSIONS………………………………………….44
Overall Findings ……………………………………………………………..45
Specific Findings ……………………………………………………………..51
Teaching Evaluations………………………………………………....51
Email/Online Communication………………………………………..57
Class Discussion……………………………………………………...60
Informal Conversations…………………………………………….…64
Participation in Social Work Student Organizations……………..…..67
Participation in Other Non-Social Work Student Organizations…….70
Participation in Governance………………………………………….72
General Solutions Questions……………………………………….....76
Correlations…………………………………………………………...77
Key Informant Interviews………………………………………..…...78
Interpretations of the Findings ……………………………………………….82
Primacy of Teaching Evaluations Among Feedback Channels…...….83
Interpretation of Barriers and Solutions for Utilization of
Feedback Channels…………………………………………………...85
5. CONCLUSION, SUMMARY, AND RECOMMENDATIONS …………….....96
Summary of Study ……………………………………………………….…..96
Implications for Social Work …………………………………………….…..97
Recommendations …………………………………………………………..101
Limitations ………………………………………………………………….103
Conclusion ………………………………………………………………….104
Appendix A. Student Surveys……………………………………………………...106
Appendix B. Key Informant Interview Protocol…………………………………...115
References ................................................................................................................116
LIST OF TABLES
1. Feedback Channels by Percentage of Respondents………………………………46
2. Mean Score for Frequency of Use of Feedback Channels…………………....…..47
3. Mean Score for Efficacy Variables for Each Feedback Channel...……………….49
4. Correlation between Use and Perceived Efficacy of Feedback Channels……......50
5. Scores on Teaching Evaluation Variables by Percentage of Respondents…….....52
6. Descriptive Statistics for Teaching Evaluation Variables………………………...53
7. Results of One-Sample T-Test for Teaching Evaluations Variables……...……...55
8. Scores on Email/Online Communication Variables by Percentage of
Respondents……………………………………………………….................58
9. Descriptive Statistics for Email/Online Communication Variables……………...59
10. Scores on Class Discussion Variables by Percentage of Respondents…………..61
11. Descriptive Statistics for Class Discussion Variables…………………………....62
12. Scores on Informal Conversation Variables by Percentage of
Respondents………………………………………………….……………....65
13. Descriptive Statistics for Informal Conversation Variables…………………...…66
14. Scores on Participation in Social Work Student Org. Variables by Percentage
of Respondents………………………………...……………………………..67
15. Descriptive Statistics for Participation in Social Work Orgs. Variables……..…..68
16. Results of One-Sample T-Test for Participation in SW Student
Organization……………………………………………………..………..….69
17. Participation in Other Non-Social Work Student Org. Variables by Percentage
of Respondents………….……………………………………………………71
18. Results of One-Sample T-Test for Participation in Other Student Org…...……...72
19. Scores on Participation in Governance Variables by Percentage of
Respondents…………………………………………………………………..73
20. Descriptive Statistics for Participation in Governance Variables………………...74
21. Results of One-Sample T-Test for Participation in Governance Variables………75
22. Correlations Among Teaching Evaluation Variables………………………..........78
LIST OF FIGURES
Figures
Page
1. Correlation between Frequency of Use and Perceived Efficacy of Feedback
Channels…….………………………………………………………….……..51
Chapter 1
STATEMENT OF THE PROBLEM
Social work education consists not only of the content of the curriculum delivered
in the educational setting, but also the context in which the curriculum is delivered. This
context, referred to as the implicit curriculum, consists of the social environment,
organizational structure, and academic culture of the institution and conveys additional
information to students, especially about the values of the profession, thereby
contributing to student’s identity formation and professional socialization. Because the
profession of social work is a value-centered profession, it follows that the implicit
curriculum or educational context should embody social work values, for example social
justice, the dignity and worth of the person, the importance of human relationships,
integrity, and competence as well as an empowerment perspective. Following the
empowerment perspective, a cornerstone of social work curriculum, an empowering
educational environment would encourage, elicit, and act upon student feedback and
input into the system.
Despite the empowerment approach central to social work curriculums and
reinforced by the Council on Social Work Education (CSWE), schools of social work
struggle to create an educational environment which facilitates the engagement of
students as stakeholders in their own educational process. From a systems perspective
we can identify whether feedback is blocked or whether it flows into the system,
producing change in response to new input. Improving student input in social work
education is essential to ensuring that the values and perspectives taught in the explicit
curriculum are exemplified in the educational environment. In this way an educational
environment in which student feedback and input is valued contributes to the professional
socialization of social workers as empowered stakeholders, enabling them to go on to
empower others in their social work practice.
Although the Council on Social Work Education requires accredited schools of
social work to evaluate their own implicit curriculum, the field lacks tools for assessment
as well as knowledge of best practices for improving the implicit curriculum, especially
in the context of student engagement. The present study assesses the student feedback
component of the implicit curriculum among masters students in one school of social
work, California State University Sacramento, and seeks to identify and evaluate the use
and perceived efficacy of channels for student feedback, as well as barriers to student
feedback within the Division of Social Work system. In this way the present study
contributes to the assessment and development of the student feedback component of the
implicit curriculum within diverse schools of social work.
Background of the Problem
As Holosko et al. (2011) contend, there is oftentimes a disconnect between
explicit messages and values being taught in curriculum and implicit values exemplified
in the academic culture. Brainard and Brislen (2007) argue that this incongruity between
implicit and explicit curriculum may encourage students to become “professional and
ethical chameleons”. In the context of social work for example, the explicit curriculum
may encourage students to empower their clients in the field, while the educational
environment renders them powerless in their own institutions. For example, students are
encouraged to organize clients to advocate for their interests to those in power, while
students may themselves be unorganized or face a hierarchical system that lacks adequate
channels for student feedback. This kind of paradox leads to the chameleon-type
behavior referred to by Brainard and Brislen in which students receive mixed messages
from contradictory explicit and implicit curriculums.
The Council on Social Work Education uses the Educational Policy and
Accreditation Standards (EPAS) to accredit baccalaureate and master’s level social work
programs. EPAS establish requirements for accreditation among four features of an
integrated curriculum design: program mission and goals; explicit curriculum; implicit
curriculum; and assessment (CSWE, 2008). According to the CSWE, the implicit
curriculum is “composed of the following elements: the program’s commitment to
diversity; admissions policies and procedures; advisement, retention, and termination
policies; student participation in governance; faculty; administrative structure; and
resources” (CSWE, 2008, p.10). While accreditation standards require social work
programs to assess the explicit curriculum, schools are not currently required to measure
the success of the implicit curriculum.
The Council on Social Work Education acknowledges student feedback and
participation as essential components of the implicit curriculum or educational context.
Indeed the implicit curriculum is an important aspect of CSWE’s most recent 2008
Educational Policy and Accreditation Standards (EPAS) which outline the requirements
schools of social work must meet for accreditation by CSWE. While the CSWE EPAS
encourage schools of social work to embody social work values in the implicit
curriculum and outline suggestions for implicit curriculum assessment, the field has yet
to identify best practices in assessment and improvement of student input components of
the implicit curriculum.
If social workers are to truly empower individuals, families, and communities to
engage with their social environments, including the systems of power in which they are
immersed, they must do more than read about empowerment in class. Rather, these
students should have opportunities to actively participate in their own educational
institutions, to challenge power structures, and to introduce new ideas. A feminist
perspective on empowerment, the very one taught in schools of social work,
insists that rather than engaging in a struggle over limited power, individuals and groups can all
become more powerful for the benefit of all.
There are many approaches to eliciting and utilizing student feedback. Furlong
and Cartmel look critically at what they term the “student as consumer” paradigm in
which individual choice or consumer rights can disguise structural inequalities (as cited
in Read et al., 2003), while Fielding (2006) warns that student feedback may serve more
to produce measurable outcomes of school ranking or achievement than to respond
to student input. Several researchers find that faculty assessments alone might not
be as valuable as a dynamic approach involving conversations with faculty about one's
education (Theall & Franklin, 2001; Marsh, 2007; Knol, 2012; Kulik & McKeachie,
1975; Kember et al., 2002; Rotem & Glasman, 1979; Lenze, 1996). Others note the
barriers to student involvement in governance and policy-making decisions (Obondo,
2000; Flutter, 2007). These dynamics point to the variety of approaches to student
feedback and their respective challenges.
Statement of the Research Problem
Assessment of the use of various modes of providing student feedback, the
perceived efficacy of those modes, and ways in which stakeholders believe the flow of
student feedback in the system can be improved is essential to a deliberate effort to align
the implicit curriculum with the competencies that comprise a professional social work
education. The lack of knowledge of best practices in creation, assessment, and
improvement of the student feedback component of the implicit curriculum in social
work education makes it difficult for schools of social work to comply with CSWE
accreditation standards and to convey social work values through the educational context in
which the curriculum is taught.
Study purpose. This study assessed student awareness of and use of existing
channels for student feedback within the CSUS Division of Social Work, the perceived
efficacy of feedback provided through these channels, as well as barriers to student input
and suggestions for improvement. It thereby contributed to the development of a model
for assessment of student input components of the implicit curriculum, as well as
suggestions for improving student feedback in social work professional education.
Results of this study can provide insight that the Division of Social Work at Sacramento
State can draw on moving forward, especially in preparation for re-accreditation; the
results can also be used to understand and enrich student feedback in schools of social
work everywhere, and indeed are applicable to education in general.
Theoretical framework. This research project is grounded in critical theory and
empowerment frameworks; person-centered and student-centered approaches to teaching
and learning; and systems theory.
Critical theory is concerned with how social institutions and structures reinforce power
inequalities on the basis of factors such as race, class, age, or gender. Critical theory
demands critical reflection on taken-for-granted assumptions which undergird foundations
of power and inequality in society. In this way critical theory borrows concepts from
critical pedagogy, feminism, Marxism, and postmodernism.
Critical theory’s concern with critical awareness of power and how it operates
encompasses an empowerment approach central to the value base of social work
practice and the mission of most schools of social work. Empowerment is described by
Lee (2001) as having “three interlocking dimensions: 1) the development of a more
positive and potent sense of self, 2) the construction of knowledge and capacity for a
more critical comprehension of the web of social and political realities of one’s
environment, and 3) the cultivation of resources and strategies, or more functional
competence, for attainment of personal and collective goals” (p. 34). In other words, an
empowerment approach focuses on the critical education of individuals and groups and
their mobilization to challenge power and oppression in society. In this way it aligns
closely with Freire’s radical or critical pedagogy, which centers on concientizacion or
the bringing of social injustices and power dynamics to the awareness of a group that
can then take action against the oppressors (1990).
Building on critical theory and an empowerment perspective, some educational
theorists call for education that challenges power differences between students, teachers,
and administration. In a similar critique of power in society, person or student-centered
learning approaches value continuous engagement with power dynamics. For example in
Sennett’s (1999) student-centered model of dialogic schools, polyphony or the use of
many voices heard through multiple forums for public conversation can facilitate radical
reconfiguring of hierarchical power structures often present in educational settings.
Fielding (2006), argues that student voice in person centered learning environments
should be student driven, staff supported, and an ongoing endeavor.
Systems theory offers a useful framework for examining organizations in the
context of their larger environment. Systems theory asserts that there are universal
principles of organizations that hold true for all systems (as cited in Mizikasi, 2006).
Systems have four main characteristics: they are goal oriented, they have inputs from
their environment, they produce outputs to achieve their goals, and they receive feedback
from the environment about outputs (Banathy, 2000). Open systems interpret and respond
to feedback from their environments (Hutchinson, 2010), and cannot survive without
continual interaction (Mizikasi, 2006).
The aforementioned theoretical frameworks support both the purpose and
approach of this study. An empowerment approach, with its roots in critical theory, is
central to the CSWE EPAS followed by all accredited schools of social work, mandating,
for example, that schools of social work teach students to “recognize the extent to which a
culture’s structures and values may oppress, marginalize, alienate, or create or enhance
power or privilege” (EP 2.1.). It therefore follows that a critical approach to education
should be manifest in both the explicit and implicit curriculum, and deal directly with
issues of power within the organizational system.
Student feedback, as an essential component of an implicit curriculum that
facilitates student empowerment, is also an important element of person or student
centered learning. Like critical education, student-centered learning requires that students
engage in a dialogic and reciprocal process with faculty and staff in traditional power
roles.
Systems theory is a useful framework for the examination of the existence, use,
and perceived efficacy of channels for student feedback taking place in the present study.
Systems theory provides a theoretical basis for examining an organization, such as the
Division of Social Work, as a whole greater than the sum of its parts, responding to
feedback from its stakeholders and greater environment. In this way, it helps us identify
where feedback is
effective and where it may be blocked within the organizational system.
Definition of terms. Explicit curriculum: “The explicit curriculum constitutes the
program’s formal educational structure and includes the courses and the curriculum”
(CSWE, 2008, p.3).
Implicit curriculum: the educational environment in which the explicit curriculum is
presented. According to the Council on Social Work Education, the implicit curriculum
on social work schools is composed of the “program’s commitment to diversity;
admissions policies and procedures; advisement, retention, and termination policies;
student participation in governance; faculty; administrative structure; and resources”
(CSWE, 2008, p.10).
Student feedback/input: student views or opinions regarding aspects of the educational
experience including but not limited to curriculum, teaching quality and style, field
experience, governance, faculty, administrative structure, and resources.
Feedback channel: A formal or informal institutional policy or practice that directs
information from the system’s environment or outputs back into the system for response.
Examples of student feedback in an educational setting might include faculty evaluations,
participation in governance or policy decisions, or formal or informal discourse with
faculty and staff.
The Council on Social Work Education (CSWE): the accrediting agency for social work
education in the US.
Educational Policy and Accreditation Standards (EPAS): Standards of professional
competence used by the Council on Social Work Education to accredit baccalaureate and
master's level social work programs, including standards for program mission and goals,
explicit curriculum, implicit curriculum, and assessment.
Professional social work education: Social work education which culminates in either a
bachelor's or master's degree in social work from a qualifying institution of higher education.
Efficacy variable: Refers to the following variables or the combination thereof for each
identified feedback channel: 1) “I believe the Division of Social Work faculty and/or
administration take the feedback I provide in (feedback channel) into consideration”, 2)
“I believe the information I provide in (feedback channel) is important to Division faculty
and staff”, and 3) “I believe the Division of Social Work takes action based on the
feedback I provide in (feedback channel)”.
Assumptions. This research assumes that there is a gap in the literature wherein
student feedback forms an essential component of the implicit curriculum, yet best
practices in assessing and improving student feedback are not identified
for implementation within schools of social work. This study further assumes that the
Division of Social Work functions as any other organizational system under systems
theory and that students not only receive outputs and serve as outputs of that system, but
also actively participate in the system through the contribution of their feedback. It
further assumes that the social work values and EPAS outlined by the CSWE are
desirable aspects of social work education and the profession of social work, worthy of
continuous assessment in schools of social work.
Social work research justification. While the Council on Social Work Education
offers suggestions for implicit curriculum assessment, best practices in assessment and
improvement of student input components of the implicit curriculum have not been
ascertained in the research, and indeed few studies exist which examine the prevalence or
impact of student participation and feedback in social work education. The present study
seeks to contribute to the development of a model for assessment of student input
components of the implicit curriculum, as well as suggestions for improving student
feedback in social work education.
Results of this study will not only provide insight that the Division of Social
Work at Sacramento State can draw on moving forward, but can also be used to
understand and enrich student feedback in schools of social work everywhere.
Furthermore, assessing, challenging, and enhancing student feedback in social work
education encourages the professional development of social work students as
empowered and actively engaged stakeholders informed by an educational environment
which embodies social work values such as the dignity of the individual, and looks
critically at power dynamics within systems. In this way social work schools can uphold
the profession’s commitment to human and community well-being, respect for human
diversity and quest for social justice.
Study limitations. The study is limited to identifying and evaluating feedback
channels as well as barriers to student feedback within the Division of Social Work at
Sacramento State. It does not generate new policies or models for facilitating
student feedback, nor does it examine multiple methods for assessing student feedback.
Rather, the study itself is an assessment of student feedback which identifies the
various feedback channels available for student feedback, the extent to which these
channels are used by students, the perceived efficacy of feedback channels in producing
change, as well as the barriers to student feedback within the system studied.
The study uses non-random, purposive sampling to identify survey and interview
participants; therefore, findings are not generalizable to other populations.
Chapter 2
REVIEW OF THE LITERATURE
The 2008 Educational Policy and Accreditation Standards (EPAS) of the CSWE
symbolize a departure from past approaches to curriculum design and program
assessment. Whereas social work education and accreditation standards had heretofore
focused on content (i.e. what is taught in programs), the emphasis is now on outcomes, or
more specifically, social work competencies, that graduates of social work programs are
expected to acquire in their schooling. Programs applying for accreditation or reaccreditation, therefore, must demonstrate how their program facilitates mastery of each
of the competencies and maintain multiple measures for how each competency is
assessed. In addition to competency-based outcomes, the 2008 EPAS call for evaluation
of not only the “explicit curriculum”, or course content and associated competencies, but
also the “implicit curriculum”, or the “educational environment in which the explicit
curriculum is presented” (EP 3.0).
From student surveys, to exit interviews, to alumni surveys, student feedback is a
large component of social work education assessment required for accreditation by the
Council on Social Work Education (CSWE) (Holloway, 2008). Aside from generating
quantifiable feedback for assessment of learning outcomes, student input and
participation lead to a more equitable and person-centered education and contribute to the
overall learning environment or culture, known as the implicit curriculum (CSWE, 2008).
The present study seeks to assess, from a systems perspective, student awareness of and
use of existing channels for student feedback within the CSUS Division of Social Work,
and their perceived efficacy.
Implicit Curriculum
According to Hafferty’s (1998) conceptualization, the implicit or hidden
curriculum includes factors of the social environment, organizational structure and
academic culture that contribute to a student’s identity formation and professional
socialization. An important aspect of these definitions is that they view, as Giroux and
Penna claim, education as situated in the larger socio-political context (Giro-ux & Penna,
1979). In this way the implicit curriculum reflects the larger environmental contexts of
the institution, the profession, and society through program structure and processes.
According to the CSWE, the implicit curriculum is “composed of the following
elements: the program’s commitment to diversity; admissions policies and procedures;
advisement, retention, and termination policies; student participation in governance;
faculty; administrative structure; and resources” (CSWE, 2008, p.10). It is argued here
that a systems perspective facilitates the evaluation of an important aspect of the implicit
curriculum: the availability, efficacy, and use of feedback channels for student input
within the social work program. This literature review frames the discussion of
student feedback in terms of CSWE accreditation standards for the implicit curriculum;
critical theory, empowerment, and person or student centered learning frameworks; social
work professional values; and existing literature on student feedback and participation in
social work and other schools of education.
Implicit Curriculum and Social Work Values
Sambell and McDowell (1998) describe the implicit or hidden curriculum as
referring to practices, rules, regulations, jargon, the physical environment, and routines
that send messages to students about the values, attitudes, and behaviors of the
institution and the learning environment (as cited in Holosko, Skinner, MacCaughelty, &
Stahl, 2011). Similarly, Holosko et al. discovered that they could only assess implicit
curriculum through values rather than competencies in their attempt to build the implicit
curriculum of their university’s BSW program. Despite the distinction between explicit
and implicit curriculum, they note that the two remain closely linked and affect one
another (Holosko et al., 2011). From this perspective, it follows that social work
programs would strive to embody social work values set forth by the CSWE in their
learning environment and culture, while at the same time, each school would differ in the
ways in which its implicit curriculum embodies social work values as well as the values
of the greater systems in which the program is embedded. These values, as well as the
values inherent in critical pedagogy and empowerment frameworks, inform the argument
for open feedback channels for student input, and are discussed below.
The CSWE maintains that social work values including service, social justice, the
dignity and worth of the person, the importance of human relationships, integrity,
competence, human rights, and scientific inquiry should “underpin the explicit and
implicit curriculum and frame the profession’s commitment to respect for all people and
the quest for social and economic justice” (EP 1.1). Yet Holosko et al. (2011) contend
that there can be a disconnect between explicit messages and values being taught and
implicit values exemplified in the academic culture. Similarly, in their study of medical
curriculum, Brainard and Brislen (2007) argue that incongruity between implicit and
explicit curriculum may encourage students to become “professional and ethical
chameleons”. For this reason, it is essential to critically assess the implicit curriculum and
the messages it communicates to students. To do so, the present study applies a systems
perspective to evaluate a critical component of the implicit curriculum: the role of student
input or feedback into the system. Critical pedagogy, an empowerment perspective, and
person centered or student centered learning are frames that contextualize and inform the
import of this inquiry.
Theoretical Frameworks and Perspectives
Critical Theory and Empowerment Frameworks
Critical theory is a useful framework that lends itself toward the argument for
more radical student input in education. Borrowing from critical pedagogy, feminism,
Marxism, and postmodernism, critical reflection involves:
“a commitment to questioning assumptions and taken-for-granteds
embodied in both theory and professional practice, and to raising
questions that are moral as well as technical in nature and that are
concerned with ends at least as much as with means; an insistence
on foregrounding the processes of power and ideology that are
subsumed within the social fabric of institutional structures,
procedures, and practices, and the ways that inequalities in power
intersect with such factors as race, class, age, or gender; a
perspective that is social rather than individual, just as the nature of
our experience as individuals is social; and an underlying aim of
realizing a more just society based on fairness and democracy,
reflected in work and education as well as in social life generally.”
(Reynolds, 1999, p. 538).
Thus the critical approach is ultimately concerned with critical awareness of
power and how it operates. In this way it encompasses an empowerment approach.
Empowerment is described by Lee (2001) as having “three interlocking dimensions: a)
the development of a more positive and potent sense of self, b) the construction of
knowledge and capacity for a more critical comprehension of the web of social and
political realities of one’s environment, and c) the cultivation of resources and
strategies, or more functional competence, for attainment of personal and collective
goals” (p. 34).
Empowerment theory builds upon Freire’s radical or critical pedagogy, which
aims to empower individuals to examine critically the society and culture in which they
exist, and ultimately take action against the oppressive elements (1990). In this
approach a critical perspective is the result of a dialogical process of reciprocal
exchange that forms the foundation of the student-teacher relationship. Freire
maintains that knowledge has structural and relational dimensions and encodes values
and ideologies that normalize domination and oppression or exploitation (Reynolds,
1999).
An empowerment approach, with its roots in critical theory, aligns with the CSWE-
defined social work competency 2.1.4, “Engage diversity and difference in practice”,
which requires that social workers “recognize the extent to which a culture's structures
and values may oppress, marginalize, alienate, or create or enhance power or privilege”
(EP 2.1.). It therefore follows that a critical approach to education should be manifest in
both the explicit and implicit curriculum, and deal directly with issues of power within
the organizational system.
Person/Student Centered Learning
Building on critical theory and an empowerment perspective, some educational
theorists call for education that places students in the power center. Smyth argues that in
order for schools to be true learning organizations, they must invest students with what
Warren (2005) terms “relational power”. Relational power draws upon trust and
cooperation and acknowledges that learning involves the power to get things done
collectively by confronting, rather than denying power inequalities (Smyth, year, p.292).
Student-centered approaches to education place central value on the individual and
therefore align with the social work values of the dignity and worth of the person, the
importance of human relationships, as well as the competency of engaging diversity and
difference through the acknowledgement of the culture’s power structures (CSWE,
2008).
Fielding’s (2006) typology of the interpersonal orientation of organizations
incorporates critical theory’s notions of power by presenting organizations in terms of
their use of persons in relationship, a concept he borrows from MacMurray (as cited in
Fielding, 2006). Functional or instrumental relationships are encounters that take place in
order to achieve a purpose, whereas personal relations exist to help us “be and become
ourselves in and through our relations with others” (see Macmurray). Fielding argues that
in person centered learning organizations the functional is expressive of the personal such
that means are transformed by the ends (2006). Applying this to social work education,
we could argue that in order to be typified as person centered learning organizations,
schools of social work would be organizations in which the end goal of mastery of social
work competencies necessitates means which themselves embody social work values and
competencies.
Fielding contrasts the ideal of person centered learning organizations with what
he terms high performance models of schooling. In these systems, the personal is used
for the sake of the functional in that students are “included or excluded, valued or not,
primarily on the basis of whether they contribute to organizational performance of the
school” (2006, p.302). In this model, student voice is a matter of “managerial
desirability” in that “there is an unwavering corporate requirement on all teachers that
student views are systematically sought on matters to do with the curriculum and formal
learning” (Fielding, 2006, p. 306). Similarly, Furlong and Cartmel argue that in the new
“student as consumer” paradigm, individual choice or consumer rights can disguise
structural inequalities (as cited in Read et al., 2003). Certainly we can see how the call for
multiple modes of assessment for multiple competencies could lead to multiple systems
for collecting student feedback while leaving students feeling that “the schools do not
care about them as persons, but only about them as bearers of results and measurable
outcomes”. The same, he argues, is true of teachers (Fielding, 2006, p. 306).
While Fielding warns against allowing the drive for performance accountability to
supplant the more lofty goals of person-centered education, this ideal must be balanced
against organizational demands, namely the necessity of accreditation for a social work
program. In order to prevent the ends from supplanting the means, he argues that student
voice should follow Sennett’s (1999) model of dialogic schools by incorporating
polyphony (many voices); civility manifested through multiple forums for public
conversation; and carnival: moments in which norms of engagement and role relations
are turned upside-down (as cited in Fielding, 2006, p.308). Fielding warns that in
emphasizing interpersonal dynamics of education such as relationships, it is important to
include not only the voices of all students, but also the “silences, inertia and fear that
speak to the alienation and indifference of those who choose to remain silent” (Fielding,
2006, p.311). Instead, he argues that student voice in person centered learning
environments is student driven, staff supported, and often a joint endeavor. As opposed
to what Fielding sees as brief, one-way, and information focused attempts at student
engagement, the new wave of student voice requires significant occasions where students
lead dialogue as equals and contributors to their education (2006).
Systems Theory
The systems approach stems from “general systems theory” by the biologist
Ludwig von Bertalanffy (1969) which asserts that there are universal principles of
organization that hold true for all systems (as cited in Mizikasi, 2006). The primary
principle of systems theory is that the whole is greater than the sum of the parts, the
whole determines the nature of the parts, and the parts are interconnected and must be
understood in relation to the whole (Mizikasi, 2006). According to Banathy (2000),
systems have four main characteristics: 1) systems are goal oriented; 2) systems have
inputs from their environment; 3) systems have outputs to achieve their goals; and 4)
there is feedback from the environment about the output (as cited in Mizikasi, 2006).
Open systems are those which interpret and respond to feedback from their
environments (Hutchinson, 2010), and cannot survive without continual interaction
(Mizikasi, 2006). For example, Mizikasi explains that institutions of higher education are
exposed to external interaction and influences from external accreditation systems and
other external systems such as the labor market and society, through which they can acquire
new properties (Mizikasi, 2006). When feedback is blocked, the system cannot meet
changing needs or respond to a changing environment. Thus, a systems approach serves
as a functional framework for assessing student feedback within the Division of Social
Work at California State University, Sacramento.
The Importance of Student Feedback in Curriculum Development
Because students are the primary stakeholders in social work education, their
input should be viewed as essential in the development of the implicit and explicit
curriculums. This was acknowledged by Holosko, MacCaughelty and Stahl (2010) as
they constructed components of the implicit BSW curriculum at their university. The key
value assumptions which they agreed upon in construction of the implicit curriculum
included the value of student voice as well as the need for the implicit curriculum to be
driven by the explicit curriculum, noting that “the implicit curriculum, in a sense, is a
wraparound model to the formal or explicit curriculum; as a result, it needed to be related
both to enhancing the overall school’s learning culture as well as the curriculum”
(Holosko, MacCaughelty & Stahl, 2010, n.p.). In this way they acknowledge the
importance of student feedback in the implicit curriculum as it relates to explicit
curriculum development.
As the CSWE accreditation standards for the explicit curriculum changed in 2008
from content-based to competency-based standards, programs now have greater
flexibility in curriculum design. The 2008 EPAS require that programs address each
program’s context as well as the profession’s purpose and values in its mission and goals,
which in turn shape the content of the curriculum (Holloway, Black, Hoffman & Pierce,
2009). Thus the social work profession’s purpose and values as well as the unique
values, mission and goals of the individual institution, which comprise the program
context, all inform the development of the explicit curriculum. It is therefore essential
that programs evaluate the extent to which student feedback flows within the system and
acknowledge how the program context including its values, mission and goals, shape and
are shaped by the existing feedback system. In order for social work programs to ensure
that the implicit curriculum and explicit curriculum are in alignment with one another and
with social work values, student feedback must inform curriculum redesign and
development, especially as programs revisit the explicit curriculum in terms of
competencies.
Forms of Student Feedback
Teaching Evaluations
Perhaps the most common form of student feedback in higher education systems
is through faculty and instructor evaluation. The CSWE advises social work programs to
assess the implicit curriculum through measures such as “instructor effectiveness
measures in course evaluations, advisor evaluations, and student evaluations of field
experience, program ambiance evaluations and the like” (Holloway, 2013). Yet, several
studies conclude that student ratings do not have a significant impact on teaching
improvement (Kulik & McKeachie, 1975; Kember et al., 2002). Rotem and Glasman
(1979) found that feedback from student ratings was not effective for improving
university teacher performance because of a) the source of the feedback, b) feedback
content that was not informative or specific enough and did not focus on behavior that
could be changed, and c) recipient characteristics such as obstinacy and conceit (as cited
in Knol, 2012).
Theall and Franklin (2001) found that student ratings are often misinterpreted,
misused, or not used at all. Cohen (1980) conducted a meta-analysis of student evaluation
studies, and found that augmentation (targeted instructions for improving performance
given by an expert consultant) is the only variable that had a significant effect on the
degree of teacher improvement. Marsh (2007), in a large-scale longitudinal study of
fifteen teachers over thirteen years, also suggested external consultation can make a
substantial difference in teaching effectiveness, remarking that “teachers don’t know how
to fully utilize the feedback without external assistance” (p.788). Lenze (1996) also wrote
about the impact of educational consultation.
Knol (2012) designed an experimental study to examine the roles of feedback
and consultation on teaching and learning in a University of Amsterdam psychology
program. He assigned 75 professors to either an experimental condition with
intermediate feedback plus consultation, an intermediate feedback-only condition, or a
control condition with neither feedback nor consultation. Results indicated that the
feedback-plus-consultation group was most satisfied with their evaluations, was
satisfied with the consultation, and found evaluations to be significantly more useful.
Furthermore, feedback plus consultation produced significant improvement in student
evaluations of teaching and learning, while the feedback-only condition produced
some, but not significant, improvement in evaluations, and the control condition produced
no change. These findings imply that student questionnaires alone may not be enough for
teaching to improve.
Not only do student evaluations of faculty not necessarily result in improved
teaching, they may have negative effects on instructor morale. McKeachie (1997)
concluded that when professors perceive the ratings to be low it may have a negative
effect on their motivation. Arthur (2009) found four reactions to negative student
feedback: shame (it’s my fault and I can’t do anything about it), blame (it’s their fault and
I can’t do anything about it), tame (it’s about them, but I can respond to their needs) and
reframe (it’s to do with me, but I can learn and develop as a result). These studies
highlight the power imbalances in student feedback systems and indicate the need for
more effective mechanisms of student feedback that are framed constructively, heard, and
acted upon by faculty and/or administration.
Student Participation in Governance
Another highly studied arena of student participation in higher education is in
university governance. Student involvement in governance can facilitate the evaluation
of curricula and teaching practices through the identification and correction of
weaknesses in programs and instruction (Lee, 1987, as cited in Menon, 2003).
In a large study of students in university governance by the Council of Europe
Project on Education for Democratic Citizenship (CC-HER Bureau, 2000, as cited in
Menon, 2003), researchers conducted interviews with students and other stakeholders at fifteen
European and fifteen US colleges and universities to measure the extent of student
participation in university governance and satisfaction with institutional practices. They
found that student participation in governance was very limited in involvement and
influence. Concerns included a lack of transparency and lack of student consultative
processes such as decision making on academic and community issues. There was also
concern that a small elite group of students dominated student opinion. Furthermore, the
faculty acknowledged that the university did not provide adequate information to its
students, particularly information on students’ rights. While students participated in
committees at many levels, their presence was often viewed as a formality to satisfy
statutory requirements. Student input had a tendency to be restricted to cases which affect
student interests in the short-term. In response to their findings, the researchers
recommended a more active role for student representatives, encouraging them to prepare
and present an analysis of important problem situations and put forward alternative
solutions along with their major strengths and weaknesses (as cited in Menon, 2003).
A similar study on student involvement in policy-making in higher education was
conducted by Obondo (2000) at Kenyatta University and the University of Nairobi (as
cited in Menon, 2003) using focus group interviews and questionnaires from 45
administrators and 100 students. Results indicated that most students were not at all
involved in decisions regarding policies and their implementation at their universities.
Barriers to participation included organizational constraints such as “unnecessary
bureaucracy” and lack of adequate information as well as mistrust of student
representatives.
According to Flutter (2007), learning cultures often fail to incorporate learners in
developing the educational environment. Menon (2003) argues that students may not be
in a position to effectively promote the interests of their groups. For example, the
participation of students in boards can lead to conflict of interest and students may lack
sufficient knowledge and experience in matters of decision-making. Other reasons cited
for limiting student involvement included students’ presumed lack of interest in matters
of governance, the potentially negative effect of their involvement on their performance,
the limited time of their enrolment, and the need to exclude them from the discussion of
sensitive issues such as student grading and faculty promotion (Menon, 2003). Also,
student representatives may not adequately represent the student public, and their
involvement in personnel and tenure decisions may lead to confrontation and antipathy
between them and academic staff.
literature on student participation in governance presents several reasons why student
feedback may be blocked in the education system.
Barriers to Student Involvement
The literature on student participation in education supports the invocation of
critical pedagogy and person-centered education perspectives. For example, Astin notes
that frequent interaction with faculty is more strongly related to satisfaction with college
than any other type of involvement or, indeed, any other student or institutional
characteristic (Astin, 1964). He contends that students who interact frequently with
faculty members are more likely than other students to express satisfaction with all
aspects of their institutional experience, including student friendships, variety of courses,
intellectual environment, and even the administration of the institution. Therefore he
encourages finding ways to promote greater student involvement with faculty (Astin,
1984).
Academic writing on diversity in higher education highlights power relations of
students entering the university system. As Grant (1997) notes, students have to learn to
be ‘independent’ in terms of lack of direct supervision, and at the same time they learn
their ‘place’ as ‘subordinates’ in the hierarchical academy. Consequently, inequality leads
to lack of student confidence in the importance or usefulness of their own opinions or
ideas (as cited in Read, Archer, & Leathwood, 2003). Non-traditional students are the
most alienated in this system. Read et al. (2001) contend that there is a need for
initiatives to focus on ‘cultural’ aspects of the academy such as methods and styles of
teaching and learning. In this way they invoke the CSWE education policy 3.1 Diversity,
which necessitates that the program’s commitment to diversity is reflected in its learning
environment.
In an attempt to evaluate the implicit curriculum at their school of social work,
Grady, Powers, Despard, and Naylor (2011) developed an instrument called the Implicit
Factors Survey (IFS), which consisted of open-ended and demographic questions. Sixty-four
final-year MSW students completed the survey and results were coded and analyzed
using open and axial coding and analytical techniques and inductive processes.
Qualitative responses to the question of what students would change were classified into
three categories: curriculum, instructors, and assignments, with students reporting they
wanted a more flexible and broad curriculum, more practical instruction with consistent
quality among the faculty, and more practical and purposeful requirements. Further,
results indicated that the classroom environment (which included engaging classes,
opportunities for critical thinking, and safe and respectful learning environments)
received less positive ratings, as did instructor expectations and feedback
to students. In this way, this study highlighted the importance of effective feedback
channels in social work graduate education.
Methods of Assessment for Implicit Curriculum and Student Feedback
The Business School at Loughborough University in England set out to document
the student feedback system in their school including students’ confidence in the system
and competence in contributing to it. They collected data from several sources including
interviews with staff and student representatives, analysis of institutional documents relating
to student feedback processes, attendance at meetings involving aspects of student
feedback processes, three focus groups with students including one composed of student
representatives, direct observation of the administration, and feedback questionnaires.
In their approach to evaluating the implicit curriculum of their social work
program, Edwards et al. (2004) utilized student focus groups to identify variables. From
this work, they developed a quantitative survey to use in assessment (Edwards, LaSala, &
Battle, 2006). The present study identified existing channels for student feedback in the
Division of Social Work and developed a survey that assesses student awareness and use
of, as well as the perceived efficacy of, the identified channels for student feedback.
Conclusion
In conclusion, the reviewed literature builds a case for the value of student
feedback channels in social work education. The Council on Social Work Education
accreditation standards mandate that accredited social work programs address not only the
content of what is taught (explicit curriculum), but also the learning environment or
institutional culture (implicit curriculum) in which it is presented. The learning environment,
they maintain, conveys information about social work values; therefore, it follows that student
feedback channels should be in line with CSWE EPAS, including affirmation and respect
for diversity (EP 3.1) and student participation (EP 3.2.9).
Yet while accreditation standards require social work programs to assess the
explicit curriculum, schools are not currently required to measure the success of the
implicit curriculum, only to demonstrate that assessment measures are in place. Thus,
while the Council on Social Work Education outlines suggestions for implicit curriculum
assessment, best practices in assessment and improvement of student input components
of the implicit curriculum have not been ascertained in the research, and indeed few
studies exist which examine the prevalence or impact of student participation and
feedback in social work education. Among the studies available on student participation
in general, much of the research is theoretical rather than practical, and few studies
address assessment of student feedback. The present study seeks to fill this gap in the
literature by examining student awareness and use of existing channels for student
feedback within the CSUS Division of Social Work, and their perceived effectiveness,
and thereby contribute to the development of a model for assessment of student input
components of the implicit curriculum. At the same time, the study seeks to identify
barriers to improved flow of student feedback within the system as well as suggestions
for improvement of student feedback components of the implicit curriculum in social
work education.
The present study is framed in critical, empowerment, and person- or student-centered
perspectives, which further enhance the argument for multiple effective
channels for student voice in education. Systems theory provides a useful framework for
assessment of student awareness and use of, as well as perceived efficacy of the channels
for student feedback. Student surveys and key informant interviews constituted the primary
measurement instruments in this research design.
Chapter 3
METHODS
Study Objectives
The primary objectives of this study were to assess the use of various channels
of providing student feedback within the Division of Social Work at CSUS and the
perceived efficacy of those modes of providing feedback, and also to identify barriers to
the utilization of these feedback channels as well as ways in which stakeholders believe
the flow of student feedback in the system can be improved. The overall objective of
assessing these feedback channels is to understand the student feedback component of the
implicit curriculum with the ultimate goal of using this understanding to align implicit
curriculum with the educational goals of the social work program. Results of this study
will provide insight that the Division of Social Work at Sacramento State can utilize for
program improvement and in preparation for re-accreditation. Additionally these results
can enrich understanding of feedback in social work graduate education more broadly.
While CSWE requires that accredited schools of social work formally assess the
explicit curriculum in terms of social work competencies, they are merely advised to
maintain an assessment plan focused on dimensions of the implicit curriculum
(Holloway, 2009). Yet best practices and instruments for the assessment of the implicit
curriculum have yet to be identified either by CSWE or in the literature. Therefore, a
secondary purpose of this study was to develop and implement an instrument for
assessment of student feedback components of the implicit curriculum. The instrument
developed for this study invoked principles of systems theory by evaluating feedback
based on the various pathways or channels through which feedback serves as input into
the system.
Study Design
The study is both descriptive and exploratory. It is descriptive in that it seeks to
describe the Division of Social Work at CSUS with regard to the flow of student
feedback within this system. This description includes describing the frequency of use of
each of the identified feedback channels. The study is exploratory in that it seeks to
satisfy the researcher's curiosity and desire for better understanding of the environment of
student feedback within the Division as well as the utility of assessing this environment
from a systems perspective by measuring the use and perceived efficacy of feedback
channels. Furthermore, it is exploratory inasmuch as it may lead to the development of
methods of studying student feedback to be employed in subsequent studies and in
self-assessments of the implicit curriculum of social work programs recommended by the
CSWE for accreditation.
The research design consists of student surveys and key informant interviews
with faculty, combining both qualitative and quantitative methods. Instruments and
analysis are described below. One of the goals of this study was to develop a tool to
evaluate student feedback through the assessment of use and perceived efficacy of
feedback channels. This tool, the student survey, was then used to explore the
environment of student feedback within the Division of Social Work.
Sampling Procedures
Non-probability, purposive sampling was used for selecting student participants
for the survey portion of the research project. The researcher requested permission of
each MSW Social Work Practice instructor to enter Social Work Practice courses
(SWRK 204A/B and SWRK 204C/D) in order to administer the student survey. Four
professors responded to the researcher’s request to administer surveys to their students
during class time. Participation was entirely voluntary for students in these classes.
Overall there were seventy-four MSW student participants in the study.
In order to provide additional insight into the research question from an alternate
viewpoint, the researcher conducted detailed interviews with key informants, two faculty
members. Snowball sampling was used to identify interview participants. Interviews
were unstructured, yet followed an interview protocol consisting of key questions.
Interviews were recorded and notes were taken. No material incentives were provided
for any participants in this study.
Data Collection Procedures
Survey data was collected over the span of two weeks during class time of four
MSW Social Work Practice classes. Four professors responded to the researcher’s
request to administer surveys to their students during class time. The researcher entered
the classrooms at an agreed upon time toward the end of the class period. The researcher
introduced the topic of study to the students, explained informed consent, and
distributed consent forms and surveys.
Each survey was assigned a distinct number and no identifying information was
collected about participants. Participation was entirely voluntary for students in these
classes. After completing surveys and consent forms, students submitted them to the
researcher in two separate piles. Surveys were kept confidential in a locked cabinet to
which only the researcher has access. Overall there were seventy-four MSW student
participants in the study.
Key informant interviews were conducted with two professors in the Division of
Social Work. Interviews were conducted at a time convenient for the professor and the
researcher and took place in the professors’ offices. Interviews were unstructured,
although the researcher used an interview protocol to guide the direction of the
interviews. The researcher took notes throughout the interview and also recorded
interviews. Interviews lasted approximately thirty minutes.
Instruments
The student survey was developed based on the concept of feedback channels,
rooted in systems theory. From a systems perspective, feedback channels are information
pathways within a system that feed information from the environment back into the
system, leading to either homeostasis or change. In the context of the Division of Social
Work as a system, feedback channels are ways that students can provide feedback or
input to faculty and/or staff in order to effect change in the system.
The researcher identified several existing feedback channels within the Division
of Social Work that served as the primary factors evaluated in the questionnaire.
Feedback channels were identified in the literature and through the researcher’s
experience as a student within the Division. Additionally, the survey allows participants
to identify additional modes of giving feedback into the system that could be
incorporated into future assessment tools.
A survey design was chosen as an appropriate tool for collection of confidential
data regarding the attitudes and behaviors of individuals as the unit of analysis. The
survey was developed such that for each of the feedback channels identified participants
would rate on a Likert scale to what extent they had used that feedback channel as a form
of student input, to what extent they believe that Division faculty and/or administration
take the feedback they give through that specific feedback channel into consideration, to
what extent they believe that information they provide through that feedback channel is
important to Division faculty and staff, and to what extent they believe the Division takes
action based on the feedback they provide through that specific feedback channel.
Participants are asked to answer these scaled questions for each of the following feedback
channels: teaching evaluations of faculty members (mid-semester and end-of-semester
evaluations); informal conversations with faculty or staff as a form of student input;
in-class discussions; email or online communication; participation in SWSA or Phi Alpha
(social work student organizations); participation in other (non-social work) student
organizations; and participation in governance (i.e., student government, faculty meetings, and
governing boards). Following the scaled questions, participants were asked to identify
barriers to using each of the feedback channels and suggestions for improving each channel
for student feedback.
Demographic indicators were limited to year of study in the social work program,
age, and years of experience as a social worker. Students were asked to answer yes or no
to a variety of questions about their experiences providing feedback into the Division of
Social Work. Finally, participants were asked in open-ended questions to identify
methods of sharing feedback not covered in this survey, any barriers to using these
additional methods of sharing feedback, and solutions to increase the volume and efficacy
of student feedback in social work education. A copy of the survey is included in the
appendix.
While key informant interviews with faculty members were unstructured, the
researcher developed an interview protocol to guide the direction of the interviews.
Questions contained in the protocol and interviews aimed at eliciting barriers students
face in giving effective feedback and suggestions for overcoming these barriers from the
point of view of faculty. The interview protocol is included in the appendix.
Data Analysis
Quantitative Data Analysis
Analysis of quantitative data was conducted using SPSS software. All
quantitative variables were entered into SPSS and identified as ordinal, nominal, or scale
within the program. The researcher then entered quantitative data for each student
survey, with individual students identified only by number, thereby safeguarding the
confidentiality of participants.
Following the completion of data entry for quantitative data, the researcher ran
several statistical tests in SPSS. For variables measured at the ordinal level, tests such as
t-tests were calculated. For example, t-tests for independent samples were
used to determine whether there was a significant difference in the mean scores on select
variables between MSWI and MSWII students as unique groups. These variables
included combined scores for use of all feedback channels as well as combined scores for
efficacy variables for all feedback channels. Additionally, one-sample t-tests were
performed for each of the variables pertaining to feedback channels in order to determine
whether there were significant differences between the mean scores of these variables and
the neutral value of three, indicating a response of "sometimes".
Correlations were calculated to determine to what extent the variation in
perceived efficacy of feedback provided through the feedback channels could be
accounted for based on the extent to which feedback channels were used. Similarly
correlations were calculated between the four variables used to measure each of the seven
assessed feedback channels. Pearson's r was calculated for these interval-level data, and
scatterplots were constructed with R² estimation.
Frequencies and summaries, including mean scores and standard deviations, were
calculated for all variables pertaining to the seven feedback channels measured at the
ordinal level. These scores could range from one, indicating an average response of
"never," to five, indicating an average response of "always." Similarly, percentages were
calculated for each category of response for each variable. More specifically, the
percentage of respondents who answered never, rarely, sometimes, often, and always was
reported for each variable related to the feedback channels.
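The procedures above were carried out in SPSS. As a minimal, illustrative sketch only, an equivalent independent-samples t-test and Pearson correlation could be computed in Python as follows; the file name and column names (survey.csv, year, use_score, efficacy_score) are hypothetical placeholders rather than the study's actual variable names.

```python
# Illustrative sketch only; the study's analyses were run in SPSS.
# File and column names are hypothetical placeholders.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey.csv")

# Independent-samples t-test: combined use score, MSW I vs. MSW II students
msw1 = df.loc[df["year"] == 1, "use_score"].dropna()
msw2 = df.loc[df["year"] == 2, "use_score"].dropna()
t_stat, p_val = stats.ttest_ind(msw1, msw2)
print(f"t = {t_stat:.3f}, p = {p_val:.3f}")

# Pearson correlation between combined use and combined efficacy scores
paired = df[["use_score", "efficacy_score"]].dropna()
r, p = stats.pearsonr(paired["use_score"], paired["efficacy_score"])
print(f"r = {r:.3f}, R^2 = {r ** 2:.3f}, p = {p:.3f}")
```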
Qualitative Data Analysis
Qualitative responses to open-ended questions embedded in student surveys were
transcribed by the researcher into word processing software. The data was coded in order
to identify emerging themes. Once themes were identified, the researcher organized
qualitative responses into each identified category and calculated sums for each category
based on quantity of utterances/responses coded into each category.
Protection of Human Subjects
The Human Subjects application was completed and submitted to the California
State Institutional Review Board for approval. Approval was received in writing on
November 7, 2013 with the project approved as exempt under 45 CFR 46.101(b)(2) (for
Tests, Surveys, Interviews) and assigned the protocol # 13-14-014.
There was minimal risk of discomfort or harm to participants as the probability and
magnitude of harm or discomfort is no greater than what might be encountered in daily
life. The subject matter addressed in the research is not considered sensitive and
therefore disclosure of participant views on the subject pose minimal psychological or
social risks. The research involved only the use of survey and interview procedures, did
not involve children, and the information obtained was not recorded in a manner that
human subjects can be identified, nor could any disclosure of the human subjects’
responses outside of the research reasonably place the subjects at risk of criminal or civil
liability or be damaging to the subjects’ financial standing, employability, or reputation.
Precautions were taken to ensure the voluntary participation of subjects including
written informed consent. Confidentiality was maintained through anonymity of survey
subjects and safeguarding of all data in a locked cabinet to which only the researcher has
access. All data will be destroyed by June 2014, and shared study results will not contain
any identifying information.
Limitations
The study is limited by the utilization of non-random, purposive sampling of survey
and interview participants. Therefore, findings cannot be generalized to other
populations. Thus while the findings in this study may provide insight into the
environment of student feedback within the Division of Social Work at CSUS, the results
of this study can be challenged on the grounds that the sample population was not truly
representative. In this way the results of the study cannot be directly applied to other
social work programs and institutions of higher education. This study should be
replicated with larger, random samples of respondents.
Chapter 4
STUDY FINDINGS AND DISCUSSIONS
The purpose of this study was to assess student feedback components of the
implicit curriculum within the Division of Social Work at CSUS. More specifically the
study aimed to assess student use and perceived efficacy of existing channels for student
feedback and to identify barriers to student input and suggestions for improvement. This
assessment was conducted through the development and utilization of student surveys
administered among MSW students as well as interviews with two key informants who
are faculty members. The goals of the study were to explore the environment for student
feedback within the Division, contribute to the development of a model for assessment of
student feedback components of the implicit curriculum, and develop suggestions for
improvement of student feedback in social work education.
Assessment of the use of various channels for student feedback, the efficacy of
those channels, and ways in which stakeholders believe the flow of student feedback in
the system can be improved is essential to a deliberate effort to align the implicit
curriculum with the competencies that comprise a professional social work education.
The lack of knowledge about best practices for the assessment and improvement of the
student feedback component of the implicit curriculum in social work education makes it
difficult for schools of social work to comply with CSWE accreditation standards and to
convey social work values through the educational context in which the curriculum is taught. Results of
this study may provide insight that the Division of Social Work can utilize for program
improvements, and preparation for re-accreditation. Furthermore, while results of this
study are not generalizable to a larger population, they may provide insight and a systems
approach to student feedback that can inform practices in other institutions of higher
education.
Overall Findings
This study assessed the use and perceived efficacy of channels for student
feedback within the Division of Social Work at CSUS as well as barriers to and
suggestions for improvement of their utilization. The data from students was collected
with a survey designed by this researcher and key informant interviews were conducted
with two faculty members in order to provide additional insight into this topic. Seventy-three
MSW students participated in the student survey portion of this study. Fifty-two
students were in the first year of the MSW program and twenty-one were in the second
year; no students were in any other year of study. Respondents ranged in age from 22 to
56, with a mean age of 29 (standard deviation of 9); the most common age was 24, reported
by 21% of respondents. The largest group of respondents (43%) had no work experience as a
social worker outside of field work.
Students were asked to rate the frequency of use and to evaluate the perceived
efficacy of several different feedback channels on a Likert scale of 1-5, with one being
never and five representing the category of always. The most commonly used feedback
channel was teaching evaluations with 45% of students reporting that they always use
this feedback channel, no students reporting that they never use teaching evaluations as a
channel for providing feedback, and only 4% reporting that they rarely do so. The
second most commonly used feedback channel was email/online communication
followed by class discussions. Student responses for frequency of use of each feedback
channel are displayed by percentage of students in Table 1.
Table 1
Feedback Channels by Percentage of Respondents
Rating       Teaching     Email/Online    In-Class     Informal      SW Student   Non-SW        Governance
             Evaluations  Communication   Discussions  Conversation  Org.         Student Org.  Participation
Never             0            16              5            12           57           43            80
Rarely            4            12             15            20           16           22             3
Sometimes        23            32             40            37           10           27             9
Often            27            22             30            21            7            5             7
Always           45            15              5             7            5            3             1
Total           100            96             95            96           95          100           100
Missing           0             4              5             4            5            0             0
Mean scores were calculated for responses on the item “I have used (feedback
channel) as a form of student input” for each of the seven identified feedback channels.
Possible scores could range from 1 indicating that the respondent had never used that
feedback channel, to 5 indicating that the student always used that feedback channel, with
3 as the neutral value indicating that the respondent sometimes used that feedback
channel. Teaching evaluations were used most frequently with an average score of 4.1,
indicating that on average students give feedback through teaching evaluations often.
Email/online communication, class discussions, and informal conversations all had
average scores very close to three indicating that on average, students sometimes use
these feedback channels. On the other end of the spectrum, participation in social work
student organizations and non-social work student organizations was rarely used as a
feedback channel, with average scores between 1.5 and 2, while participation in
governance was the least used form of feedback, with a mean score of 1.5.
These results are summarized in Table 2.
Table 2
Mean Score for Frequency of Use of Feedback Channels
Teaching      Email/Online    In-Class      Informal      Participation    Non-SW         Participation
Evaluations   Communication   Discussions   Conversation  in SW Student    Student Org.   in Governance
                                                          Org.
   4.14           3.08            3.17          2.90           1.82             1.53           1.46
A score was calculated for each student comprised of the sum of responses on all
frequency of use questions, specifically responses to the prompt “I have used (feedback
channel) as a form of student input” for each of the feedback channels identified in the
surveys.
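As a concrete sketch of this scoring step (illustrative only; the item names below are hypothetical placeholders for the seven use items, and the actual scoring was done in SPSS):

```python
# Sketch of the combined frequency-of-use score; column names are hypothetical
# stand-ins for the seven "I have used (feedback channel)..." items coded 1-5.
import pandas as pd

USE_ITEMS = [
    "use_teaching_eval", "use_email_online", "use_class_discussion",
    "use_informal_conversation", "use_sw_org", "use_non_sw_org", "use_governance",
]

def combined_use_score(responses: pd.DataFrame) -> pd.Series:
    # skipna=False leaves the score missing when any of the seven items is missing,
    # so incomplete surveys are not treated as low scores.
    return responses[USE_ITEMS].sum(axis=1, skipna=False)
```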
The researcher conducted an independent-samples t-test to determine whether there was a
difference in the average score on the use of feedback channels based on year of
study. The null hypothesis was that the average score on these variables would not be
different between first-year and second-year MSW students. The researcher was unable
to reject the null hypothesis at the set alpha of .05, t(68) = -.515, p > .05. This indicates
that use of feedback channels did not vary significantly based on year of study in the MSW program.
A score was calculated for each student composed of the average of responses on
all efficacy-related questions, specifically questions asking to what extent students
believed their feedback was important to, taken into consideration by, and acted
upon by Division faculty and/or staff for each of the feedback channels identified in the
survey. Possible scores range from one, indicating that respondents on average never
believe that the feedback given through the feedback channel is efficacious (important to,
considered by, and acted upon by the Division) and five indicating on average
respondents always believe their feedback is efficacious, with three as the neutral value
indicating that on average respondents sometimes feel that feedback given through that
feedback channel is efficacious.
Efficacy of feedback given through email/online communication, class
discussions, and informal conversations approached a mean score of three, indicating that
on average students sometimes feel feedback given through these channels is efficacious.
On average, students felt that feedback given through participation in both social work
student organizations and non-social work student organizations was rarely efficacious.
Teaching evaluations had the highest efficacy score, with a mean score of 3.3, indicating
that on average, students feel that feedback given in teaching evaluations is sometimes
efficacious. These results are displayed in Table 3.
Table 3
Mean Score for Efficacy Variables for Each Feedback Channel
Teaching      Email/Online    In-Class      Informal      Participation    Non-SW         Participation
Evaluations   Communication   Discussions   Conversation  in SW Student    Student Org.   in Governance
                                                          Org.
   3.28           2.95            2.93          2.99           2.40             2.08           2.15
The researcher conducted an independent-samples t-test to determine whether there was a
difference in students' perceptions of the efficacy of student feedback based on year of
study. The null hypothesis was that the average score on these variables would be equal
for first-year and second-year MSW students. The researcher was unable to reject the
null hypothesis as the p value was greater than .05, t(55) = 1.12, p = .272. This indicates
that perceived efficacy of student feedback does not vary significantly based on year of
study in the MSW program.
A statistically significant, strong positive correlation was found between the magnitude of
use of the various feedback channels and their perceived efficacy, r(57) = .64, p < .01.
This indicates that the variation in use of feedback channels can be
accounted for by the perceived efficacy of those channels. Thus students who give
feedback more frequently are more likely to believe that this feedback is important to and
taken into consideration by faculty and/or staff and that action is taken as a result of their
feedback. These results are indicated in Table 4. Figure 1 also represents these data in a
scatterplot, which indicates the R² to be .412, with the inference that perceived efficacy of
the feedback mechanisms can explain roughly 41% of the variation in the use of feedback channels.
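For reference, the reported R² follows directly from squaring the correlation coefficient reported above:

$$R^2 = r^2 = (0.642)^2 \approx 0.412,$$

that is, the two combined scores share roughly 41% of their variance.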
Table 4
Correlation between Use and Perceived Efficacy of Feedback Channels
                                        Use Score    Efficacy Score
Use Score        Pearson Correlation        1            .642**
                 Sig. (2-tailed)                          .000
                 N                         70              57
Efficacy Score   Pearson Correlation     .642**             1
                 Sig. (2-tailed)          .000
                 N                         57              57
**. Correlation is significant at the 0.01 level (2-tailed).
Figure 1. Scatterplot of combined use scores against combined efficacy scores for the feedback channels (R² = .412).
Specific Findings
Teaching Evaluations
Of the feedback channels examined in the survey, teaching evaluations were used
the most often by students, with 43% of respondents reporting that they always use this
kind of feedback and an additional 48% reporting they sometimes or often use teaching
evaluations as a form of feedback, for a combined total of 91% of students who report
they at least sometimes use teaching evaluations as a channel for giving feedback.
Mean scores were calculated for variables measuring use and efficacy of teaching
evaluations, with possible scores ranging from one, representing "never," to five,
representing "always," with three as the neutral value representing "sometimes." As
discussed previously, on average respondents use teaching evaluations often to give
feedback. Additionally, on average students feel that teaching evaluations are often
important to faculty and/or staff. However, mean scores for the efficacy variables indicate
that on average students only sometimes feel that feedback provided in teaching
evaluations is taken into consideration or that action is taken based on the feedback.
These results are summarized in Tables 5 and 6.
Table 5
Scores on Teaching Evaluation Variables by Percentage of Respondents
Rating       Used to Provide   Feedback Important   Feedback Taken into   Action Taken Resulting
             Feedback          to Division          Consideration         from Feedback
Never              0                  0                    1                      1
Rarely             4                 11                   21                     33
Sometimes         23                 34                   43                     43
Often             27                 33                   23                     14
Always            45                 22                   12                      8
Total            100                100                  100                    100
Missing            0                  0                    0                      0
Table 6
Descriptive Statistics for Teaching Evaluation Variables
                           N    Minimum   Maximum   Mean   Std. Deviation
T.E.s Used                 73      2         5      4.14        .916
T.E. Consideration         73      1         5      3.25        .969
T.E.s Important            73      2         5      3.66        .946
Action Taken from T.E.s    72      1         5      2.94        .933
Valid N                    72
The researcher conducted a one sample t-test to compare the average scores on
the variable “teaching evaluations used as form of student feedback” with the neutral
value of 3 in the item used to measure this variable. The null hypothesis was that the
average score on the variable would be equal to the neutral value of three. The t-test
results indicated that the difference between the sample mean and the test value of
three was statistically significant at the .01 level (M = 4.14, SD = .92), t(72) = 10.59,
p < .01. This indicates that in this sample the students use teaching evaluations as a form
of providing feedback significantly more than the neutral value. Teaching evaluations
were the only form of student feedback whose frequency of use proved significantly
higher than the neutral value in one-sample t-tests.
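The one-sample comparison against the scale midpoint can be sketched as follows (illustrative only; the study's tests were run in SPSS, and the column name te_used is a hypothetical placeholder):

```python
# Illustrative one-sample t-test of the "teaching evaluations used" item
# against the neutral scale value of 3; names are hypothetical placeholders.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey.csv")      # hypothetical coded survey data
te_used = df["te_used"].dropna()    # 1-5 Likert responses

t_stat, p_val = stats.ttest_1samp(te_used, popmean=3)
print(f"M = {te_used.mean():.2f}, t({len(te_used) - 1}) = {t_stat:.2f}, p = {p_val:.4f}")
```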
Not only are students more likely to use teaching evaluations as a form of student
feedback than other feedback channels, but they also believed that teaching evaluations
were important to staff more than sometimes. A one-sample t-test comparing the
average score on the variable “Teaching evaluations are important to staff” with the
neutral value of three revealed a significant difference between the sample mean and
the test value of three. The null
hypothesis was that the average score on the variable would be equal to the neutral value
of three. The researcher was able to reject the null hypothesis, finding a significant
difference between the average score on this variable and the neutral value of three at the
.01 level (M = 3.66, SD =.956), MD = .657, t(72) = 5.94, p<.01. This indicates that in
this sample the students have confidence that teaching evaluations are important to
faculty and staff most of the time.
Likewise, a statistically significant difference was found between the neutral value
of three and the mean score for the variable “teaching evaluations are taken into
consideration by faculty and staff” in a one-sample t-test (M = 3.25, SD = .969), MD =
.247, t(72) = 2.18, p < .01. However, significance was not found in a one-sample t-test with
a test value of three for the variable “I believe the Division of Social Work takes action
based on the feedback provided through teaching evaluations.” These results are
displayed in Table 7.
Table 7
Results of One-Sample T-Test for Teaching Evaluations Variables
Test Value = 3
                      t       df   Sig. (2-tailed)   Mean Difference   95% CI Lower   95% CI Upper
T.E. Used           10.59     72        .000              1.17              .923           1.35
T.E. Important       5.94     72        .000               .658             .437            .878
T.E. Consideration   2.18     72        .000               .247             .021            .473
Action from T.E.s    -.505    71        .615              -.056            -.275            .164
Students were asked in open-ended questions to indicate the barriers to using
teaching evaluations as a form of student feedback as well as the suggestions for
improving this form of student feedback. The most often cited barrier to using teaching
evaluations as a form of feedback was a belief that they would not or may not be taken
into account, with twenty-two respondents indicating that this was a barrier. Of these
twenty-two responses, nine respondents indicated that faculty don’t look at the feedback
provided in evaluations while fourteen indicated that this form of feedback often does not
result in change, or in any case, students are not aware of the changes, if any, that take
place as a result of their input. For example one student noted that “students can feel
unheard (and) don’t see anything come from their feedback”, while others noted
“oftentimes the evaluations are ignored”, “students do not see actions taken”, and
“sometimes people really don’t care”. The quantitative data underscore these concerns,
indicating that 32% of respondents believe that action is rarely taken as a result of the
feedback provided in teaching evaluations and 40% feel that this feedback sometimes
results in action.
Eleven participants felt that fear of being penalized, for example by negative grades,
or of offending staff was a barrier to using teaching evaluations as a form of student
feedback. Similarly, five respondents stated concerns that teaching evaluations were not
anonymous ways of conveying feedback. An additional barrier, cited by five
respondents, is inadequate time either to complete the evaluations or to truly reflect
before doing so.
Part of the problem may be that students don’t understand what happens with
teaching evaluations after they are completed. Who sees them? What do they do with
them? Do they make a difference? Eight students remarked that students aren’t aware of
the process of what happens with their feedback stating for example “students may not be
aware of the process, what is done with student results of feedback” and “students are
unaware of what occurs after feedback is given and how it affects faculty”.
Proposed solutions to improving the use of teaching evaluations as a form of
student feedback largely centered on the concept of accountability. Students want their
feedback to result in change. While several students suggested that professors simply take
the feedback into consideration, others suggested exposing the results of evaluation for
public scrutiny, perhaps ascribing a rating or letter grade to professors. Six respondents
suggested evaluations be placed online. Four students requested additional measures for
confidentiality and anonymity, while three suggested allotting more class time to
evaluations.
The most frequently offered suggestions were related to increasing
communication, suggested by twelve respondents. Student/teacher meetings, public
faculty meetings, town hall meetings, and a student forum were all suggested as ways to
have additional discussions and “more communication” between faculty and students.
The request for additional face time between faculty and students to receive student
feedback can be considered as another means of increasing faculty accountability (as
faculty are made to respond directly to feedback in a forum or meeting), but not
anonymity.
Eight respondents felt that the teaching evaluations should ask different questions,
stating for example “some of the questions should be changed,” “(teaching evaluations)
leave little room for suggestions,” “(the evaluation form) does not allow students to
respond to the program as a whole,” and “may not provide a clear picture of what is
occurring.” These responses indicate that students may have valuable feedback about the
formalized methods of obtaining student feedback themselves.
Email/Online Communication
The second most often used feedback channel was email or online communication
with 37% of respondents indicating that they always or often used these methods to give
feedback, while 32% of respondents indicated they sometimes use email as a form of
feedback. The majority of students felt that faculty and/or staff sometimes considered
feedback given through email or online communication important, while 17% felt they
rarely did, 27% felt they often did, and less than 10% felt they never or always considered
this feedback important. Nearly half (47%) of respondents felt that feedback was
sometimes taken into consideration while over a quarter felt that it rarely was and an
additional 9% felt this feedback was never taken into consideration. Nineteen percent felt
that feedback given through email or online communication was often or always taken into
consideration. While the largest share of respondents (43%) indicated that they felt faculty
or staff sometimes took action as a result of feedback given in this manner, nearly 32%
indicated that faculty rarely or never did so, while only 17% felt that action was taken as
a result of this feedback often or always. All of these results are indicated in Table 8.
Table 8
Scores on Email/Online Communication Variables by Percentage of Respondents
Rating       Used to Provide   Important to   Taken into        Action Taken
             Feedback          Division       Consideration     from Feedback
Never             16                7               9                 8
Rarely            12               17              26                24
Sometimes         33               40              47                43
Often             23               27              13                12
Always            15                9               6                 5
Total            100              100             100                92
Missing            0                0               0                 8
Mean scores were calculated for variables measuring use and efficacy of
email/online communication as a form of feedback, with possible scores ranging from one,
representing "never," to five, representing "always," with three as the neutral value
representing "sometimes." Results indicate that on average students sometimes use this
feedback channel and sometimes believe their feedback is important, taken into consideration,
and acted upon by the Division, as indicated by mean scores very close to three for
these variables. These results are summarized in Table 9.
Table 9
Descriptive Statistics for Email/Online Communication Variables
                                              N    Min   Max   Mean   Std. Deviation
Email/Online Used as Form of Fdbk             73    1     5    3.08       1.28
Email/Online Fdbk Taken into Consideration    70    1     5    2.91        .959
Email/Online Fdbk Important                   70    1     5    3.13       1.03
Action Taken from Email/Online Fdbk           70    1     5    2.81        .967
Valid N                                       70
The most frequently identified barrier to using email or online communication as
a form of student feedback was lack of response or uncertainty of when and if there
would be a response. Seventeen respondents stated that email messages go unseen or
without response by faculty and staff, stating for example “emails don’t get responses”,
“many faculty do not check their email”, and “person can filter past emails”. Others felt
that responses come too slowly with six respondents citing slow response time as barriers
to using this form of student feedback.
Other responses referenced fears of being misinterpreted or judged. Four
respondents noted that messages could be perceived the wrong way in email. Five
students feared a negative reaction from faculty or staff stating for example “I’ve heard
negative reactions that other students have received”, “I feel like the professor’s tone
inhibits me form emailing them”, and “the social work office does not like the amount of
emails they receive and often turn students away telling us to stop emailing”. As is true
for other feedback channels, anonymity was once again a concern, with seven students
citing this as a barrier to using email or online communication as a form of student
feedback.
The primary suggestion for improving email or on-line communication as a form
of student feedback was to increase the response rate by faculty and staff, with nine
responses in this category. Some respondents suggested merely that faculty and staff
respond while others suggested they should be required to do so. One student suggested a
way to make this feedback anonymous, while another echoed the fear of
retaliatory grades by suggesting “do it when we have been graded”. An additional two
responses pointed to the general environment for feedback stating for example “just
generally (create) an environment that is more open to feedback”.
Class Discussion
Class discussion was the next most frequently used channel for providing
student feedback, with 35% of respondents stating that they used class discussions as a
form of feedback always or often. The usage of class discussion as a form of feedback
was normally distributed, yet skewed toward more frequent use with the bulk of students
(40%) reporting that they sometimes use this method of feedback, 30% indicating that
they often do, 15% indicating that they do so rarely, and 5% of students indicating they
always or never use class discussions as a channel for providing feedback. This indicates
that use of class discussions as a form of input may vary from instructor to instructor or
class to class. The majority of students felt that feedback given in class discussions was
sometimes important to faculty and/or staff and sometimes taken into consideration by
faculty and/or staff. Nevertheless, 39% of respondents felt that action was never or rarely
taken based on this feedback, while an additional 39% felt that feedback given in class
discussions sometimes results in action. These results are indicated in Table 10.
Table 10
Scores on Class Discussion Variables by Percentage of Respondents
Rating       Used to Provide   Important to   Taken into        Action Taken
             Feedback          Division       Consideration     from Feedback
Never              6                6              10                10
Rarely            15               19              25                29
Sometimes         42               43              38                39
Often             32               24              22                17
Always             6                8               6                 6
Total            100              100             100               100
Missing            0                0               0                 0
Mean scores were calculated for variables measuring use and efficacy of in-class
discussions as a form of student feedback, with possible scores ranging from one,
representing "never," to five, representing "always," with three as the neutral value
representing "sometimes." Results indicate that on average students sometimes use
this feedback channel and sometimes believe their feedback is important, taken into
consideration, and acted upon by the Division, as indicated by mean scores very close
to three for these variables. These results are summarized in Table 11.
Table 11
Descriptive Statistics for Class Discussion Variables
                                                  N    Min    Max    Mean   Std. Deviation
Class Discussion Used as Form of Fdbk             72   1.00   5.00   3.17       1.28
Class Discussion Fdbk Taken into Consideration    72   1.00   5.00   2.89        .959
Class Discussion Fdbk Important                   72   1.00   5.00   3.10       1.03
Action Taken from Class Discussion Fdbk           72   1.00   5.00   2.79        .967
Valid N                                           72
The most often cited barriers to using in-class discussion as a form of student
feedback centered on fear of sharing one's opinion openly in front of the class, with
twenty-three responses coded into this category. “Students might be scared to share their
honest thought in class”, “can be intimidating”, “embarrassment”, “not everyone feels
safe in class to speak up”, “not willing to bring up concerns in front of peers” and “hard
to talk in front of entire class and teacher” were all responses in this category. Part of this
fear is fear of being judged by other students or fear of clash of opinions. For example,
one student stated, “some students dominate the classroom discussion”, while another
noted “many dissenting or conflicting opinions”. Likewise one student shared that
“students disagree with your input and that makes it difficult for the professor to really
hear what you have to say”.
Similar to these fears of giving feedback in front of the class is a fear of retaliation
by the professor and an associated concern for anonymity, cited by seven respondents.
These concerns all suggest that the classroom environment, rather than being a place of
inclusivity and openness, may hinder students from feeling safe or comfortable with
providing feedback openly in front of faculty and peers. Additionally, as is the case for
teaching evaluations as a form of student feedback, several students noted that the
feedback provided in in-class discussion may not be taken into consideration or acted
upon.
These views are reflected in the quantitative data wherein 32% of students feel
like feedback given in class discussions is never or rarely taken into consideration and
35% feel that this feedback is sometimes taken into consideration. Likewise, 37% of
respondents indicated that action is never or rarely taken based on this feedback while an
additional 37% felt that action was sometimes taken as a result of this feedback. While
one-sample t-tests with a neutral value of three for the efficacy variables “feedback taken
into consideration”, “feedback important to faculty and staff” and “action is taken as a
result of feedback” did not prove significant at the .01 level, these scores indicate that for
the majority of respondents feedback given in classroom discussions is only rarely or
sometimes taken into consideration or acted upon by faculty and staff.
Suggestions for improving in-class conversations as a method of student feedback
largely addressed the primary barriers identified to using this method of feedback,
namely the classroom environment. Eight respondents referred to faculty or the
classroom environment being more open suggesting for example, “providing a more
inclusive space”, “engage students to speak up”, “listening to all students….” and
“making sure each student’s voice is heard and providing a safe environment”. Six
respondents had specific suggestions for ways to elicit feedback in in-class discussions:
One suggested using index cards to write down issues and concerns, while another
suggested the instructor write down summary points and clarify them with the class
before submitting this information to the department. Two responses suggested
formalizing this form of student feedback, one by making these discussions a part of
every course, and another by making it a class assignment or group task.
Informal Conversation
Twenty-seven percent of respondents indicated that they always or often used
informal conversations with faculty as a method of providing student feedback, while the
largest share, 36%, indicated that they sometimes used this feedback channel. Responses on
the efficacy variables (feedback important, taken into consideration, action taken
resulting from feedback) were largely concentrated in the neutral value “sometimes” or a
mean score close to three. While students were more likely to think that feedback given
in informal conversations was often or always important to faculty than rarely or never,
students were more likely to feel that faculty and/or staff rarely or never take this
feedback into consideration or take action based on this feedback than sometimes or
always.
Table 12
Scores on Informal Conversation Variables by Percentage of Respondents
Rating       Used to Provide   Important to   Taken into        Action Taken
             Feedback          Division       Consideration     from Feedback
Never             12                7               7                 7
Rarely            20               14              20                31
Sometimes         37               42              52                42
Often             21               24              14                13
Always             7               13               7                 7
Total             96              100             100               100
Missing            4                0               0                 0
Mean scores were calculated for variables measuring use and efficacy of informal
conversations as a form of student feedback, with possible scores ranging from one,
representing "never," to five, representing "always," with three as the neutral value
representing "sometimes." Results indicate that on average students sometimes use
this feedback channel and believe their feedback is important, taken into consideration,
and acted upon by the Division some of the time, as indicated by mean scores very close
to three for these variables. These results are summarized in Table 13.
Table 13
Descriptive Statistics for Informal Conversation Variables
                                                      N    Min    Max    Mean    Std. Deviation
Informal Conversation Used as Form of Fdbk            73   1.00   5.00   2.904       1.09
Informal Conversation Fdbk Taken into Consideration   71   1.00   5.00   2.944        .955
Informal Conversation Fdbk Important                  71   1.00   5.00   3.21        1.07
Action Taken from Informal Conversation Fdbk          71   1.00   5.00   2.82         .990
Valid N                                               71
The primary barriers to using informal conversations as a form of student
feedback were perceived ineffectiveness of this method, lack of time, concerns of
anonymity or retaliation, and general discomfort approaching faculty or staff in this way.
Eighteen respondents noted that time was a barrier, many of them indicating that they
could not make it to professors' office hours, especially if they are held on days students
are in field. Thirteen students cited concerns over anonymity or retaliation by professors
who don't like what they hear from students, five of whom specifically mentioned fear of
retaliatory grades. Nine responses were coded as discomfort with this feedback
method. For example one student noted “some people may feel intimidated to provide
their input and therefore may minimize a concern when in fact it is more serious”, this
was echoed by the concern “it’s hard to be honest”. Other students noted feeling
“intimidated”, “nervous”, or “tense”. One student noted the general environment is one of
“unwillingness to hear from students…the environment of unwelcoming students”.
Only ten students provided suggestions for improving informal conversations as a
method for providing student feedback. These ranged from increasing faculty availability
(3 responses), to having a suggestion box or comment cards (2 responses), to revoking
tenure and taking a “more student centered approach”. The scarcity and variety in
responses may point to the uniqueness of each student-faculty relationship such that
while several students may feel barriers include feeling uncomfortable or concerned over
lack of anonymity, solutions to these barriers may be difficult to prescribe.
Participation in Social Work Student Organizations
Sixty percent of respondents had never used participation in social work student
organizations as a way of giving feedback while 28% reported they did so rarely or
sometimes. Only 13% said they often or always provide feedback by participation in
social work student organizations. These results are displayed in Table 14.
Table 14
Scores on Participation in Social Work Student Org. Variables by Percentage of
Respondents
Rating       Used to Provide   Important to   Taken into        Action Taken
             Feedback          Division       Consideration     from Feedback
Never             60               36              33                36
Rarely            17               13              13                16
Sometimes         11               33              39                34
Often              7                5               8                 5
Always             6               13               8                 8
Total            100              100             100               100
Missing            0                0               0                 0
Mean scores were calculated for variables measuring use and efficacy of
participation in social work student organizations as a form of student feedback, with
possible scores ranging from one, representing "never," to five, representing "always,"
with three as the neutral value representing "sometimes." Whereas average
scores hovered around three for email/online communication, in-class discussion, and
informal conversations, average scores on variables related to this feedback channel were
closer to two, indicating that students on average only rarely use this form of feedback or
perceive it to be efficacious. These results are summarized in Table 15.
Table 15
Descriptive Statistics for Participation in Social Work Orgs. Variables
                                                      N    Min    Max    Mean   Std. Deviation
SW Org. Participation Used as Channel for Feedback    72   1.00   5.00   1.82       1.21
SW Org. Participation Fdbk Taken into Consideration   64   1.00   5.00   2.45       1.25
SW Org. Participation Fdbk Important                  61   1.00   5.00   2.46       1.37
Action Taken from SW Org. Participation Fdbk          61   1.00   5.00   2.33       1.25
Valid N                                               61
A one sample T-test was conducted to compare the average scores on the efficacy
variables for feedback given through participation in social work student organizations
with the neutral value of three in the item used to measure this variable. The null
hypothesis was that the mean score of efficacy variables would be equal to three. The
researcher was able to reject the null hypothesis at the .01 level with t scores and p values
indicated in Table 16.
Table 16
Results of One-Sample T-Test for Participation in SW Student Organization
Test Value = 3
                                                          t      df   Sig. (2-tailed)   Mean Difference   95% CI Lower   95% CI Upper
Participation in Student Organization used as input     -8.25    71        .000              -1.18            -1.47           -.895
Participation in Student Organization Consideration     -3.51    63        .001               -.547            -.858           -.236
Participation in Student Organization Important         -3.08    60        .003               -.541            -.893           -.189
Action Taken from Participation in Student Org.         -4.27    60        .000               -.672            -.992           -.353
The greatest barrier to using participation in the Social Work Student Association
(SWSA) or Phi Alpha Social Work Honor Society as methods for giving feedback was
lack of time for involvement in these organizations, with ten respondents indicating that
this is a barrier to participation. Several students mentioned that they cannot make the
scheduled meetings, while others remarked that they simply didn’t have enough time to
commit. Likely due to the inability of many to participate, five respondents noted that
these organizations are not truly representative of the entire student body. One student
felt that the feedback does not go directly to faculty while another felt that the faculty
does not support or acknowledge these student organizations. It was also noted by one
student that a certain GPA and minimum hours of participation are required for these
clubs.
Suggestions for improving participation in SWSA or Phi Alpha as methods of
student feedback centered on finding ways to involve more students. For example, two
respondents suggested the clubs have more meeting times, while three respondents
suggested the clubs find other ways to involve those who can’t attend meetings. One
student suggested faculty become more involved in student organization activities, while
another suggested a greater role for student organizations in channeling student input into
the Division of Social Work.
Participation in Other Non-Social Work Student Organizations
Participation in other non-social work student organizations as a form of feedback
was very rarely used with 67% of respondents indicating that they had never given
feedback through participation in these organizations, and 11% indicating that they rarely
did so. Only 3% indicated that they always give feedback through participation in other
non-social work student organizations, while none indicated they often did so. The
largest share of respondents (45%) indicated that they believe faculty or staff have never
taken action based on the feedback provided through participation in non-social work
student organizations; 19% felt that action was rarely taken as a result of this feedback,
and 28% indicated that action was sometimes taken. Only 9% of respondents felt that
feedback provided through participation in other student organizations always or often
resulted in action by faculty or staff. These results are indicated in Table 17.
Table 17
Participation in Other Non-Social Work Student Org. Variables by Percentage of
Respondents
Rating       Used to Provide   Important to   Taken into        Action Taken
             Feedback          Division       Consideration     from Feedback
Never             71               42              43                45
Rarely            11               15              22                19
Sometimes         15               29              27                28
Often              0                9               5                 6
Always             3                5               3                 3
Total            100              100             100               100
Missing            0                0               0                 0
One sample T-tests were performed to compare the average scores on all the
variables measuring use and perceived efficacy of feedback provided via participation in
non-social work student organizations with the neutral value of three in the item used to
measure this variable. These analyses revealed differences between the sample means and
the test value of three that were statistically significant at the .01 level.
This indicates that, in this sample, use of and perceived efficacy of feedback provided via
this channel are significantly different from the neutral value of three, reflecting infrequent
use and low ratings of perceived efficacy of feedback provided via this feedback channel.
Results of the one-sample t-tests are displayed in Table 18.
Table 18
Results of One-Sample T-Test for Participation in Other Student Org. Variables
Test Value = 3
                                                              t       df   Sig. (2-tailed)   Mean Difference   95% CI Lower   95% CI Upper
Participation in Other Student Org. used as input           -13.16    71        .000              -1.47            -1.70          -1.25
Participation in Other Student Org. Consideration            -7.47    66        .000               -.985            -1.25           -.722
Participation in Other Student Org. Important                -5.31    64        .000               -.800            -1.10           -.499
Action Taken from Participation in Other Student Org.        -6.84    64        .000               -.954            -1.23           -.675
p < .01
Participation in Governance
Participation in governance was rarely used as a form of student feedback with
80% of respondents stating that they never participated in governance. Only 1% said
they always used this form of feedback, 7% said they often did so, 9% said they
sometimes did so, and 3% said they rarely used this form of student feedback.
Almost half (49%) of all respondents felt that feedback given through
participation in governance was never important to or taken into consideration by the
Division, while 28.6% and 29% believed that this was sometimes the case. Fifty-one
percent of respondents felt that action was never taken as a result of feedback given
through participation in student governance while 8% felt that action was rarely taken as
a result of this method of feedback. Twenty-seven percent of respondents felt that this
feedback sometimes resulted in action while only 14% felt that action was taken often or
always as a result of feedback given via participation on governance as indicated in Table
19.
Table 19
Scores on Participation in Governance Variables by Percentage of Respondents

             Participation in       Feedback Given in       Feedback Given in      Action Taken Resulting from
             Governance Used to     Participation in        Participation in       Feedback Given in
             Provide Feedback       Governance Important    Governance Taken       Participation in Governance
                                    to Division             into Consideration
Never                80                    49                      45                       51
Rarely                3                     8                      11                        8
Sometimes             9                    29                      28                       27
Often                 7                     8                       8                        8
Always                1                     6                       8                        6
Total               100                   100                     100                      100
Missing               0                     0                       0                        0
Mean scores were calculated for variables measuring use and efficacy of
participation in governance as a form of student feedback, with possible scores ranging
from one to five and three as the neutral value representing “sometimes”. Results
indicate that on average students rarely use this feedback channel and believe their feedback is
rarely important to, taken into consideration by, or acted upon by the Division, as indicated by
mean scores close to or below 2 for these variables. These results are summarized in Table 20.
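To illustrate how these means follow from the percentage distributions, the mean of 1.46 reported in Table 20 for use of governance as a feedback channel can be recovered (to rounding of the published percentages) by weighting each scale point by the corresponding percentage in Table 19:

\[
\bar{x} \approx \frac{(1)(80) + (2)(3) + (3)(9) + (4)(7) + (5)(1)}{100} = \frac{146}{100} = 1.46
\]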
Table 20
Descriptive Statistics for Participation in Governance Variables

                                                          N      Min     Max     Mean    Std. Deviation
Participation in Governance Used as form of Fdbk          71     1.00    5.00    1.46    1.01
Participation in Governance Fdbk Taken into
Consideration                                             64     1.00    5.00    2.22    1.31
Participation in Governance Fdbk Important                63     1.00    5.00    2.14    1.29
Action Taken from Participation in Governance Fdbk        63     1       5       2.11    1.30
Valid N                                                   63
One-sample t-tests were conducted to compare the average scores for all variables
related to feedback given through participation in student governance with the neutral
value of three in the item used to measure each variable. The null hypothesis was that the
mean scores for these variables would be equal to the neutral value. The t-test results
indicated that the difference between the sample means and the test value of three
was statistically significant at the .01 level for all of these variables. This indicates that
students in this sample give feedback through this channel significantly less
often than the neutral value of three, or “sometimes”, and similarly have less confidence in
the efficacy of delivering feedback through this channel. These results are indicated in
Table 21.
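The t statistics in Table 21 follow from the descriptive statistics in Table 20 through the standard one-sample formula. For example, for use of governance as a form of feedback (mean 1.46, standard deviation 1.01, n = 71), the computation below reproduces the reported value of -12.79 to within rounding of the published mean and standard deviation:

\[
t = \frac{\bar{x} - \mu_0}{s/\sqrt{n}} = \frac{1.46 - 3}{1.01/\sqrt{71}} \approx -12.8
\]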
Table 21
Results of One-Sample T-Test for Participation in Governance Variables

Test Value = 3
                                              t        df    Sig.         Mean         95% Confidence Interval
                                                              (2-tailed)   Difference   of the Difference
                                                                                        Lower        Upper
Participation in Governance used
as a form of input                          -12.79     70    .000         -1.54        -1.77        -1.30
Participation in Governance taken
into Consideration                           -4.75     63    .000          -.781       -1.11         -.453
Participation in Governance Important        -5.26     62    .000          -.857       -1.18         -.531
Action taken resulting from
Participation in Governance                  -5.44     62    .000          -.889       -1.22         -.56
p < .01
Nine participants identified barriers to using participation in student governance
as a method for providing feedback. Two students noted that there is not enough time to
participate, while another student disclosed that they had not even heard of opportunities
to participate. Similarly one student felt the opportunity to participate should be
advertised to all, while another shared that there is little student involvement in
governance. As in other feedback channels, concerns were voiced that participation can
be “used against you” and that students may be “intimidated by faculty”, along with the
concern that “the school may not support student decisions”.
Six students gave suggestions for improving student participation in governance,
including “more awareness of our rights and when these meetings are”, “more two-way
communication”, “a more inclusive space, welcoming student feedback”, and two
suggestions of seeking student input regularly.
General Solutions Questions
Twenty-seven participants responded to the question “What are some solutions to
increase student input into social work education and consideration of student input?”
Several suggestions were given including having additional meetings or forums such as
town hall meetings, cohort discussions or presentations, online forums, or class
discussions. Two respondents suggested a general review of the program be part of the
feedback process.
Ten responses were coded as directly addressing the culture or environment of the
Division that would facilitate feedback. Five of these responses specifically called upon
faculty to create this environment, for example by creating a “comfortable atmosphere for
sharing” or “environment of openness”, having a “more student-centered approach” and
“taking the time to really get to know their students”. One student suggested embedding
feedback in the explicit curriculum stating, “teach it and promote/encourage through class
assignments/projects because we often don’t get time to think about this system enough”.
Four students suggested more outreach and promotion of opportunities to give feedback
suggesting for example “make opportunities more publicized”, “raise awareness of ways
to be involved” and “increase outreach and communication”. These suggestions all point
to the development of the implicit curriculum as one that supports and institutionalizes an
environment rich in student feedback.
Correlations
Almost all of the variables measuring the frequency of use and perceived efficacy
of each of the identified feedback channels were significantly correlated with one another
within each feedback channel. The one feedback channel for which significant correlations
were not found among all variables was teaching evaluations. Use of teaching
evaluations as a form of feedback was not significantly correlated with any of the
efficacy variables (r(73) = .227, p = .053; r(73) = .199, p = .092; r(72) = .107, p = .372).
These data are displayed in Table 22. Thus, while many students use this method of
providing feedback, it does not follow that many students believe feedback given in
teaching evaluations is efficacious. This may be attributed to the fact that teaching
evaluations are the one method of providing student feedback that is explicitly elicited
from students and institutionalized within the Division of Social Work; students are
therefore giving feedback in this way regardless of the perceived influence of that
feedback, whereas utilization of other feedback channels may be prompted precisely by
the perceived efficacy of their use. This finding will be further discussed under
“Interpretation of the Findings”.
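The coefficients in Table 22 are Pearson correlations with two-tailed significance tests. The following sketch shows how such a coefficient would be computed; the variable names and paired values are hypothetical illustrations, not the study data.

```python
# Illustrative Pearson correlation between two Likert-coded items,
# e.g., use of teaching evaluations and belief that action results
# from them. The paired values are hypothetical, not the study data.
from scipy import stats

evals_used   = [5, 4, 5, 3, 5, 4, 2, 5, 5, 3]  # 1 = Never ... 5 = Always
action_taken = [3, 2, 4, 2, 3, 3, 1, 2, 4, 3]

r, p = stats.pearsonr(evals_used, action_taken)
print(f"r = {r:.3f}, p (2-tailed) = {p:.3f}")
# In Table 22 the correlations between use and the efficacy variables
# were non-significant: frequent use of evaluations did not imply a
# belief that the feedback leads to action.
```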
Table 22
Correlations Among Teaching Evaluation Variables

                                              T.E.s Used as    Teaching Evals    Teaching     Action Taken
                                              Form of Input    Taken into        Evals        Resulting
                                                               Consideration     Important    from T.E.s
T.E.s Used as         Pearson Correlation          1               .227            .199          .107
Form of Input         Sig. (2-tailed)                              .053            .092          .372
                      N                           73               73              73            72
Teaching Evals        Pearson Correlation         .227              1              .609**        .743**
Taken into            Sig. (2-tailed)             .053                             .000          .000
Consideration         N                           73               73              73            72
Teaching Evals        Pearson Correlation         .199             .609**           1            .613**
Important             Sig. (2-tailed)             .092             .000                          .000
                      N                           73               73              73            72
Action Taken          Pearson Correlation         .107             .743**          .613**         1
Resulting from        Sig. (2-tailed)             .372             .000            .000
T.E.s                 N                           72               72              72            72
**. Correlation is significant at the 0.01 level (2-tailed).
Key Informant Interviews
Two faculty members were interviewed as key informants in order to offer this
study an institutional perspective. For purposes of anonymity they are referred to here as
Faculty Member A and Faculty Member B. Many of the ideas discussed by faculty
echoed the concerns of students. Without being explicitly questioned about specific
feedback channels, many of those assessed in the survey were brought up by the key
informants throughout the interviews.
According to both faculty members the type of feedback is crucial in determining
whether it results in change. Faculty Member A stated, “the type of input is the question.
Some people don’t like the grade, others feel there are too many assignments…I think it
is more productive to ask what do we really need?” Similarly Faculty Member B
commented on the quality of feedback, placing the onus on students to give meaningful
and constructive feedback: “Students don’t critically examine, self-reflect on their
contribution to the educational process. (We need to) move beyond feedback to a
discourse”.
One aspect of the end-of-semester teaching evaluations that challenges the
usefulness of this form of feedback according to Faculty Member A is that the students
are emailed the link to the evaluations and need to complete them on their own time.
This faculty member felt that students who had an okay experience tend to not take the
time while those who do take the time tend to have either very positive or very negative
views of the class, but they don’t always share the reasons why. Faculty Member A
shared “over the years, I get so few well-thought out responses”. Faculty Member B
echoed these concerns stating, “because it’s blind students are not accountable” adding
“it’s easy for faculty to devalue them”. At the same time, this faculty member shared that
students don’t realize the importance of this feedback stating, “it’s big. It’s the main
instrument for promotion”. This resonates with survey data wherein students shared that
they didn’t know what became of the feedback given in teaching evaluations.
Key informants also referred to in-class discussions, social work student
organization participation, and participation in governance as feedback channels. Faculty
Member A suggested that after a class is done, faculty could hold a roundtable discussion
to really hear what students think and elicit “honest and tangible” responses. Similarly
Faculty Member B discussed an educational philosophy that is more collaborative and
dialogic with smaller classes to facilitate these discussions.
Faculty Member A discussed the struggles of utilizing participation in student
social work organizations as a form of feedback. This faculty member’s concerns echoed
those of many students, including students not having enough time to get involved, and
the challenges of having a single entity speak with one voice on behalf of the student
body. Faculty Member B felt that faculty should be more involved in student
associations and both faculty members spoke of the waxing and waning membership,
power, and efficacy of these organizations over the years.
When discussing participation in governance, Faculty Member A echoed student
concerns that these meetings are intimidating. This faculty member added a perspective
not mentioned in student surveys: that when students participate in governance
“the student representatives have a vote, but their main job is not to look at curriculum,
curriculum design”. He noted that other schools of social work have an advisory board of
only students to look at the curriculum and thought an advisory board could contribute to
the flow of feedback within the Division specific to curriculum.
Both key informants discussed the culture or environment in which student input
is received. This was reflected in two ways: the attitudes and openness of faculty
members, and the larger structural forces that constrain the environment.
Both key informants felt that not all faculty members are open to student
feedback. It was shared that some faculty look down on other faculty who are more open
to student feedback, viewing them as being “too nice to students”. Faculty Member A
referred to the empowerment goal of social work stating “I don’t know if we teach
empowerment very well…(faculty) teach students to be advocates but yet when they
advocate too much they don’t like that advocacy too much in an academic setting…when
students become empowered they are problems to some faculty”. This faculty member
similarly acknowledged the fear of retaliatory grading shared in many student responses
saying, “students truly believe their grades will suffer. There is a lack of trust in the
system and in the integrity of the professor”. In the same vein this faculty member
touched on concern about accountability for faculty who are tenured stating “(being a
professor) is a wonderful job for life even when you’re not wonderful anymore. That’s
great protection for faculty but not great protection for students”. Specific reference to
the institution of tenure came up once in student surveys, with one respondent feeling
that tenure should be revoked, likely with the goal that faculty members be held more
accountable for being responsive to feedback.
Both informants referred to larger structural factors that constrain the flow of
feedback including the culture of the institution and the larger economic system of our
society. In reference to the culture of feedback both informants spoke in favor of
increased interaction outside of the roles and norms of the academic setting. Faculty
Member A shared stories of a time when faculty members would “take students out for a
beer and talk about things”. Now he laments, “culture is different between students and
faculty and we put up these walls…we’ve lost the ability to connect outside of roles and
ability to talk openly and honestly without it being an evaluative situation”. Similarly
Faculty Member B thought “faculty and students should interact with each other outside
the context (of graduate school)” saying “interacting in many different contexts people
get to see each other at a more human level”. These ideas are similar to the kind of
forums, town halls, and class discussions that were suggested in student surveys. The
cultural constraints that restrict this more open interaction were viewed by one faculty
member as stemming from “patriarchical, sexist ideas that are non-collaborative and view
the professor as the expert”. It was emphasized that “(the program is) overwhelmingly
women yet we have lots of male teachers. Need to look at gender ideas and be more
embracing of feminism”.
These cultural shifts in student-faculty relationships, it was argued, result from
structural changes (sociopolitical-economic) both within the university as an institution
and within the greater society. For example the use of more part-time faculty, limited
time, and greater teaching loads placed on full-time faculty are reflections of budget cuts.
Faculty Member B stated “students can locate larger issues in the faculty instead of
getting angry at the economic system” which he argued put pressure on the faculty, cuts
university resources and leaves students “busy just trying to get through”. “Students and
faculty should work together to confront larger political issues”. Faculty Member A
noted the challenges of effecting change in a large institution stating, “the university is
very large, like a big cargo ship headed in a certain direction for many years. It’s hard to
turn the ship around”. Many student respondents referred to time and resource
constraints without explicitly invoking greater economic or political structures within our
society.
Interpretation of the Findings
Primacy of Teaching Evaluations Among Feedback Channels
Of the feedback channels assessed, teaching evaluations were the most frequently
used with 43% of respondents reporting that they always use this feedback channel and
an additional 48% reporting they sometimes or often use teaching evaluations as a form
of feedback. The popularity of the use of teaching evaluation as a vehicle for student
feedback may reflect the fact that teaching evaluations, both mid-semester and end-of-semester, are formalized methods of feedback within the system of the Division of Social
Work. As opposed to other identified feedback channels such as informal conversations
or emails, which require the initiative of the students and are not provoked by the faculty
or staff, teaching evaluations are sent electronically to all students at the end of the
semester and administered in many classes during the semester in paper form. While
students are not required to complete the evaluations, they are asked to do so. Teaching
evaluations are the only feedback that is uniformly elicited from all students.
The average score calculated for the efficacy variables associated with teaching
evaluations was 3.3, indicating that on average students believe the feedback they provide
in teaching evaluations is sometimes efficacious as measured by the variables “I believe
the Division of Social Work faculty and/or administration take the feedback I give in
teaching evaluations into consideration”, “I believe feedback I provide in teaching
evaluations is important to the Division”, and “I believe the Division of Social Work
takes action based on the feedback I provide in teaching evaluations”. While on average
students only sometimes feel that this feedback is taken into consideration or acted upon,
they are more likely, on average, to believe that it is important to Division faculty and
staff, as reflected in a mean score for that efficacy variable approaching four and a significant
difference in a one-tailed t-test between the neutral value of three and the actual mean
score for that variable.
Key informant interviews pointed out that this feedback is indeed important to
faculty and staff as teaching evaluations are considered for purposes of promotion and
tenure. Yet despite the perceived importance of teaching evaluations to faculty and staff,
many students are unaware of how they are used within the division. Communicating
information about the use of student feedback is an important component of building an
implicit curriculum that honors and encourages student feedback.
The aforementioned findings are in line with the research identified in the
literature review pertaining to teaching evaluations as a tool for student feedback. For
example, while teaching evaluations may be important to faculty members, several
studies conclude that student ratings do not have a significant impact on teaching
improvement (Kulik & McKeachie, 1975; Kember et al., 2002; Rotem & Glasman, 1979).
Thus other programs have a similar problem connecting this feedback to action. Key
informant faculty members likewise voiced concerns about the paucity of constructive,
well thought-out feedback provided in teaching evaluations. The limited openness of
faculty to such feedback echoes Rotem and Glasman’s finding that feedback from
student ratings was not effective for improving university teacher performance because
the feedback was neither informative nor specific to behavioral change and because
recipients resisted the feedback (as cited in Knol, 2013).
Student concerns that the feedback they provide, not only through teaching
evaluations but also through other feedback channels, would be misinterpreted or taken
the wrong way are validated in the literature, wherein Theall and Franklin (2001)
conclude in their study of student ratings of faculty that such ratings are often
misinterpreted, misused, or not used at all.
Interpretation of Barriers and Solutions for Utilization of Feedback Channels
Student fears. The implicit curriculum is defined as the educational environment
or context in which the explicit curriculum is delivered. Unfortunately, for many CSUS
MSW students, the educational environment within the Division of Social Work is one of
fear. Qualitative responses identifying barriers to using the feedback channels largely
indicated that students feared sharing their feedback, feeling perhaps that their grades
would be penalized. For example, eleven respondents cited these fears as barriers to
using teaching evaluations as a channel for student feedback, while an additional five
respondents cited concerns for anonymity. Similarly twenty-three respondents cited fear
of speaking up as a barrier to giving feedback through in-class discussion while seven
specifically cited fear of retaliatory grading or lack of anonymity for this feedback
channel. Barriers for utilizing email or online communication included fears that
messages could be perceived the wrong way or that professors and staff would react
negatively in addition to concerns for anonymity cited by seven respondents. For
informal conversations, thirteen students shared concerns over anonymity or retaliation
by professors who don’t like what they hear from students, five of whom specifically
mentioned fear of retaliatory grades, while nine respondents indicated they were
uncomfortable with this feedback method.
Culture/environment. Student fears regarding retaliatory grading and anonymity
as well as general discomfort giving feedback indicate that the environment or culture of
feedback within the Division of Social Work at CSUS may be an area for intervention in
order to improve student feedback within the system. Yet this educational context or
environment can be viewed in the context of larger societal factors as the system receives
input from the larger systems with which it relates and within which it is embedded
(Giroux & Penna, 1979). For example, the culture and policies of the university itself
inform and constrain Division policies.
In a similar manner, a key informant pointed out that the greater capitalist
economy puts economic pressures on public education, resulting in the use of more part-time faculty and greater teaching loads placed on full-time faculty due to budget cuts
within the university. Similarly, students are pressured by the economic system to work
while attending school and in general may have less time to attend office hours, student
organizations, or serve on governing bodies. It is therefore not surprising that limited
time was an often-cited barrier to giving feedback. For example, eighteen respondents
noted that time was a barrier to providing feedback through informal conversations with
faculty, many of them indicating that they could not make it to professors’ office hours
due to field work or other constraints.
The concerns that emerged in the data regarding the culture or environment for
providing student feedback can be viewed from a systems perspective wherein
institutions of higher education are exposed to external interaction and influences for
example from external accreditation systems and other external systems such as the
labor market and society (Mizikasi, 2006). Thus the Division of Social Work, while
attempting to imbue its implicit curriculum with the values and requirements of the
accrediting body (CSWE), also finds its ability to do so constrained by the greater
economic and social systems within which it is situated.
The critique of the larger environment in which the Division and the university
are situated as well as the power dynamic between students and professors is in keeping
with the critical perspective discussed in the literature review. A critical approach
requires “an insistence on foregrounding the processes of power and ideology that are
subsumed within the fabric of our institutional structures, procedures, and practices, and
the ways that inequalities in power intersect…”(Reynolds, 1999). By suggesting, as one
key informant suggested, that students and faculty locate problems in the larger social
and economic systems, students and faculty take a critical approach to education.
Role of faculty. In spite of the constraints imposed upon faculty, staff, and
students by larger systems, many respondents, including faculty who were interviewed
felt that the faculty could do more to create an environment in which student feedback is
elicited and valued. For example respondents noted that faculty could “engage students
to speak up”, “listening to all students….” and “making sure each student’s voice is heard
and providing a safe environment” for in-class discussions. One student shared that what
was needed is an “environment of openness; it starts with faculty and staff”. As Grant
(1997) points out, faculty members have power over students within a hierarchical
system. He argues that this inequality leads to lack of student confidence in the
importance or usefulness of their own opinions or ideas (as cited in Read, Archer, &
Leathwood, 2003). In this way both students and faculty have an awareness of power
differentials between faculty and students, an awareness associated with the critical
approach discussed in the literature.
Role of students. Despite the emphasis on faculty responsibility for overcoming
barriers to student feedback, respondents acknowledged an important role for students in
overcoming these barriers. Some students suggested more collaboration between faculty
and students; for example, one student wrote that it is up to both “students and faculty working
together to create a comfortable atmosphere for sharing”. An additional respondent
noted, “it’s not only the students responsibility (to provide feedback), education is
relational”, but added that “faculty do not take the time to really get to know the
students”. This call for closer faculty-student relationships aligns with Astin’s
(1984) research, which shows that students who interact frequently with faculty members
are more likely than other students to express satisfaction with all aspects of their
institutional experience. Therefore he encourages finding ways to promote greater student
involvement with faculty.
Such an approach to enhancing student engagement in providing feedback is in
keeping with the empowerment perspective outlined in the literature and prominent in
the explicit curriculum. Empowerment, as described by Lee (2001), involves the
development of a more positive and potent sense of self, the construction of knowledge
and capacity for a more critical comprehension of the web of social and political
realities of one’s environment, and the cultivation of resources and strategies, or more
functional competence, for attainment of personal and collective goals. In this way an
empowerment approach aligns with both critical and systems perspective by addressing
the interrelatedness of systems, and encouraging the awareness and ability to act to
achieve personal and collective goals, such as the improved integration of student
feedback into social work education.
Awareness of feedback systems. As Holosko, Skinner and MacCaughelty (2010)
built the implicit BSW curriculum as faculty members of their university, they took care
to make information and resources available to students, largely by posting information
on the program website, including a student glossary and field advising FAQ sheet,
which they developed. In the present study, many respondents shared that they were
unaware of how the feedback they provide is utilized. Several students shared that they
didn’t know what teaching evaluations were used for, while a key informant echoed this
concern. For example, nine students indicated that faculty don’t look at teaching
evaluations at all, while seventeen students stated that email/online communications go
unseen or without response by faculty and staff, stating for example “emails don’t get
responses”, “many faculty do not check their email”, and “person can filter past emails”.
Others felt that responses come too slowly with six respondents citing slow response
time as barriers to using this form of student feedback. Additionally, the variation in
scores on the efficacy variables indicates discrepancies in students’ beliefs about how the
feedback is considered and utilized, and indeed whether it is taken into account at all.
Based on these responses, it follows that one way to improve student feedback is
to make it clear to students the available channels for student feedback and how the
feedback they provide through these channels will be utilized. For example, students
should be informed of how the teaching evaluations they complete will be used by
faculty, such as for tenure and promotion purposes. Similarly, students should be informed
of the various opportunities for participation in governance and the importance of their
participation; for example, all students should be aware that they have a vote on
faculty committees. All students, regardless of their participation, should understand the
structure and function of student social work organizations within the larger system,
including their role in providing feedback. For example, one suggestion for improving
student participation in governance was “more awareness of our rights and when these
meetings are”. While in-class discussions and informal conversations are more specific
to individual classes and students, the Division could make it a policy to inform students
that these feedback channels are accessible and assure students that feedback provided in
these ways will be honored and that students will not be penalized by providing feedback
in these ways.
Accountability for responding to feedback. Student concerns about the extent
to which their feedback is important to faculty and staff, taken into consideration, and
results in change could be assuaged by creating structures within the system that hold
faculty and/or staff accountable for utilizing the feedback received. Students had several
suggestions focused on accountability of faculty and staff. These ranged from revoking
tenure, to grading professors and publicizing results, to requiring faculty to respond to
emails.
By the same token, key informant interviews with faculty suggested the need for
holding students accountable for the feedback they provide. For example, one faculty
member noted that student feedback may be that they are unhappy with their grade or feel
there are too many assignments rather than more constructive feedback, stating, “it’s more
productive to ask ‘what do we really need’?”. The second key informant echoed this
concern stating “over the years, I get so few well-thought out responses” in reference to
teaching evaluations.
Accountability for providing meaningful feedback. One way to both hold
faculty members accountable to student feedback and to hold students accountable for
providing constructive, well thought-out feedback is to encourage feedback through face-to-face methods, or as one faculty member suggested, “move beyond feedback to a
discourse”. This may be accomplished when students and faculty address inequalities in
power and take a collaborative approach to improving education by building “relational
power” (Warren, 2005) which draws upon trust and cooperation and acknowledges that
learning involves the power to get things done collectively by confronting, rather than
denying power inequalities. Likewise, student-centered approaches to education or
person centered learning, discussed in the literature review (Fielding, 2006), call for an
educational environment in which student voice is student driven, staff
supported, and often a joint endeavor. Whereas existing channels for student feedback
may be characterized by under-utilization, lack of information, limited efficacy, and fear,
in Fielding’s vision of person centered learning, the educational environment would
provide significant occasions where students lead dialogue as equals and contributors to
their education (2006). Of course this can only happen when students are empowered to give
feedback and feel safe from retaliatory grading. Thus Fielding warns that in emphasizing
interpersonal dynamics of education it is important to pay attention not only to the voices of
all students, but also to the “silences, inertia and fear that speak to the alienation and
indifference of those who choose to remain silent” (Fielding, 2006, p.311).
Formalizing feedback channels. Of the feedback channels studied, only teaching
evaluations were formalized within the Division. A common suggestion to improve
student feedback and overcome barriers to using existing feedback channels was to
formalize in-class discussions as an explicit component of the implicit curriculum.
Examples of suggestions for doing so include making class discussions focused on
student feedback a part of every course and reporting to the department the feedback
shared in these discussions. Perhaps institutionalizing this form of student feedback
would teach students, through the implicit curriculum, that their feedback is valuable. In
this way the implicit curriculum can empower students to engage as active stakeholders
in their own education.
Representation of students in feedback channels. Whereas all students can
participate in providing feedback through teaching evaluations, informal conversations,
and email/online communication, student organizations and governance structures are
representative bodies by design. Yet because these feedback channels are drastically
underutilized compared to other methods (57%, 43%, and 80% of respondents reported
they never participated in student social work organizations, other student organizations,
and governance respectively) it calls into question the capacity of these structures to
represent the student body. While lack of time and other barriers make it difficult to
improve participation, improvement of the implicit curriculum is dependent upon both
enhancing the ability of these structures to represent the student body and compensating
for their inherent inability to do so completely.
These concerns over the lack of representativeness of student organizations and
governing bodies are discussed in the literature. For example, the Council of Europe
Project on Education for Democratic Citizenship (CC-HER Bureau, 2000, as cited in
Menon, 2003), which measured the extent of student participation in university
governance and satisfaction with institutional practices at fifteen European and fifteen
US colleges and universities, found concerns that a small elite group of students
dominated student opinion. Similarly, it found that lack of information about governing
structures was a barrier to participation. It was argued, as may be the case at CSUS, that
student participation in governance may be viewed merely “as a formality to satisfy
statutory requirements” (Menon, 2003).
Social work values. Following CSWE, values including service, social justice, the
dignity and worth of the person, the importance of human relationships, and integrity should be
communicated through the implicit curriculum as well as explicitly in the content of courses.
Thus, while the CSWE requires that social work values underpin the explicit and implicit
curriculum and frame the profession’s commitment to respect for all people and the quest for
social and economic justice (EP 1.1), there are many barriers to the realization of these goals
within the Division. Indeed, as Holosko et al. (2011) argue in their own study of implicit
curriculum, there can be a disconnect between the explicit messages and values being taught and
the implicit values exemplified in the academic culture. The data collected in this study uncovered
some of the root causes of this apparent disconnect and elicited suggestions for improvement.
Awareness of and action against identified barriers to the use of feedback channels, as well as
work to improve the efficacy of feedback provided therein, can assuage some of the apparent
disconnect between the social work values and competencies prescribed by CSWE and the
reality of the culture of student feedback as a component of the implicit curriculum.
Chapter 5
CONCLUSION, SUMMARY, AND RECOMMENDATIONS
Summary of Study
This study assessed the student feedback component of the implicit curriculum
within the Division of Social Work at CSUS from a systems perspective, analyzing
channels for student feedback within the system. Using an anonymous questionnaire,
MSW students were asked to rate their frequency of use of seven feedback channels and
perceived efficacy of the feedback provided through each of these channels. Students
were also asked to identify barriers to providing feedback through these channels and
provide suggestions to improve the use and efficacy of the channels. Feedback channels
included teaching evaluations of faculty, email/online communication, in-class
discussion, informal conversations with faculty or staff, participation in social work and
non-social work student organizations, and participation in program governance. Key
informant interviews with two faculty members provided additional insight about the
flow of feedback within the system.
The data collected in this study indicate that while students make some use of
teaching evaluations, email/online communication, in-class discussion and informal
conversation as channels for providing feedback into the Division of Social Work,
participation in student organizations and governance are rarely used as methods for
providing input. Most notable in preventing students from increased participation in
these feedback channels are fears of speaking up and/or receiving negative reactions
(such as retaliatory grades), belief that their feedback is not valued or acted upon, and
lack of time and awareness preventing them from participation in these feedback
channels. Several suggestions were provided for addressing these barriers, for example
by confronting the culture/environment of power inequality, lack of resources, and fear;
providing more information about available channels for providing feedback; improving
accountability of students for providing constructive feedback and accountability of
faculty and staff for responding to feedback. In this way, student and key informant
recommendations aligned with key concepts discussed in the literature review including
student or person centered learning, critical and empowerment perspectives, and systems
theory, as well as the results of similar assessments of student feedback.
Implications for Social Work
An empowerment approach spans both micro and macro social work practice by
beginning at the individual level with the development of a more positive and potent
sense of self and leading to the development of a critical approach and response to social
and political realities. Results of this study indicate that while students learn about
empowerment in the explicit curriculum, they may not be empowered to fully participate
as stakeholders in their own education. Similarly, faculty and staff may feel they lack the
resources to respond to student input. The empowerment of both these stakeholders
requires the development of a critical awareness and the cultivation of resources or
strategies for the attainment of personal and collective goals (Lee, 2001).
Following Freire’s conception of radical or critical pedagogy, a critical
perspective is the result of a dialogical process of reciprocal exchange, which forms the
foundation of the student-teacher relationship. Thus empowerment, from the
perspective of critical pedagogy, involves a joint endeavor by both faculty and students
to challenge the power conventions that dictate their roles within a society that
normalizes hierarchical structures of power and normalizes domination and oppression
(Reynolds, 1999). While many respondents emphasized the importance of the role of
faculty in addressing the culture or environment for student feedback, indeed both
students and faculty can play a role in promoting critical awareness of factors which
stifle student feedback and developing resources and strategies to overcome these. Thus
neither faculty nor students are responsible for the empowerment of the other, yet
empowerment may begin at any level at any time, and indeed may begin with one
person.
With regard to social work practice, both faculty and social work students alike
can work to enhance “critical comprehension of the web of social and political realities
of one’s environment” that forms the basis for empowerment (Lee, 2001, p.34). This
awareness can be achieved in dialogic processes outlined by both Freire in his
conception of radical pedagogy as well as Fielding in his discussion of person centered
learning (Reynolds, 1999; Fielding, 2006). Sennett’s (1999) model of dialogic schools
calls for the incorporation of polyphony, or many voices, speaking through multiple
forums for conversation as well as moments of carnival in which norms of engagement
and role relations are turned upside-down (as cited in Fielding, 2006, p.308).
Engagement of students and faculty in critical conversation falls within the realm of
social work practice in engagement, assessment, intervention, and evaluation (EP
2.1.10) and aligns with EP 2.1, “engage diversity and difference in practice”, which
requires that social workers ”recognize the extent to which a culture’s structures and
values may oppress, marginalize, alienate, or create or enhance power or privilege” (EP
2.1.).
At the Division level, student feedback can be enhanced through policies that
provide students with clear information about available channels for student feedback.
For example many students shared that they did not know what is done with teaching
evaluations, and probably did not know the importance they hold in retention, tenure
and promotion of faculty members. Similarly some students shared that they did not
know whether or not their emails were read, and many were unaware of opportunities to
participate in student organizations or governance. CSWE maintains that the implicit
curriculum should be “manifested through policies that are fair and transparent in
substance and implementation” (EP 3.0). Thus a logical step for macro practice at the
university level is to clearly inform students of policies surrounding the availability of
channels for providing feedback and the use of feedback provided therein.
One way to increase transparency and awareness of opportunities to provide
feedback and of the policies that govern response to feedback within the system is to
create formal policies that embed student feedback in the explicit as well as implicit
curriculums. For example, by requiring class discussions with the explicit purpose of
eliciting honest feedback in a safe environment, by establishing policies for responding to
student emails, by expanding office hours and promoting use of office hours as
opportunities to provide feedback to professors, by making information about the
structure and function of student organizations and governance structures widely available,
and by devising ways to incorporate students with limited ability to participate more
fully. These are just a few methods, suggested by students and faculty members in this
study, that student feedback systems can be improved at the division or departmental
level. As stated in CSWE’s own definition of the implicit curriculum, “heightened
awareness of the importance of the implicit curriculum promotes an educational culture
that is congruent with the values of the profession” (EP 3.0).
Aside from formal policies at the Division level, macro practice can work to
enhance the culture or environment for student feedback by addressing the greater
environment in which the Division is situated. This may include engaging in policy
practice (EPA 2.1.8) to address university, state, and federal policies that limit resources
available to the Division, most notably time and money. Again, following both critical
and empowerment approaches, the first step in this process is development of critical
awareness of the political and economic realities as well as policies that limit the
resources and thereby restrict the flow of student feedback within the Division.
Recommendations
This study collected data from student surveys to examine the use and perceived
efficacy of seven identified feedback channels among MSW students at California State
University Sacramento and supplemented this data with interviews with key informants.
Future studies could expand the scope of this analysis by also incorporating BSW
students and additional faculty members, and by utilizing additional methods of data
collection. For example, focus groups could be used to identify feedback channels as
well as brainstorm barriers to their use and suggestions for improvement. Additional
interviews with students or more faculty members could also enhance the data.
Furthermore the use of a random sample of participants of sufficient numbers would
make the results of this study generalizable to a larger population. Replication of this
study with larger, randomly chosen and representative samples could confirm the results
of this study.
Much of this study was framed through the lens of CSWE standards for the
implicit curriculum, of which it was argued that student feedback is an essential
component. Due to the centrality of social work values and professional competencies to
the CSWE Educational Policies and Accreditation Standards (EPAS), it follows that an
assessment of the values and competencies of social work students as well as faculty and
staff could be incorporated into future assessments of student feedback in social work
education. For example, understanding what values students and faculty hold dear and
the extent to which they feel these values inform the explicit and implicit
curriculums of the MSW program could lead to enhanced assessment of the implicit
curriculum.
Additionally, future researchers may consider additional demographic and
background data on participants in order to determine if any of these factors are
associated with outcomes of use or perceived efficacy of student feedback.
It is further recommended that future studies refine the primary instrument
developed to measure student feedback in this study. For the purposes of this study a
questionnaire was created in order to assess the flow of student feedback within the
Division of Social Work system at CSUS. This questionnaire should be subjected to tests of
reliability and validity and refined in future research in order to determine the
usefulness of this tool as a measure of student feedback.
Finally, while this study elicited suggestions for improvement of student feedback
within a particular social work program, future studies could examine the effectiveness of
such suggested interventions to improve student feedback. This approach would lead to
the development of best practices in development and implementation of the implicit
curriculum, especially in regard to student feedback. One model particularly well suited
to this endeavor is participatory action research wherein research participants are engaged
in cycles of action and reflection and the research results not only in additional
knowledge, but also in actual outcomes. For example, such a research model could
engage faculty, staff, students, and other stakeholders in a collaborative endeavor to
understand and improve student feedback in their program. In this way, this approach
encompasses an empowerment perspective and engages students in social work
competencies of engagement, assessment, intervention, and evaluation (EP 2.1.10) while
incorporating diversity (EP 2.1.4).
Limitations
The study is limited to identifying and evaluating feedback channels as well as
barriers to student feedback within the Division of Social Work at Sacramento State
University. It does not generate new policies or models for facilitating student feedback,
nor does it examine multiple methods for assessing student feedback. Rather, the study
itself is an assessment of student feedback that assesses the extent to which these identified
feedback channels are used by students, the perceived efficacy of feedback channels in
producing change, barriers to student feedback within the system studied, and
suggestions to improve the flow of feedback through these channels.
The study employed non-random, purposive sampling to identify survey and
interview participants. Therefore findings are not generalizable to other populations.
Likewise, the perspective provided by key informants does not offer ideas that can be
generalized to a larger population. In this way, the study is limited in its applicability to
other social work programs and institutions of higher education. This study should be
replicated with larger, random samples of respondents in order to produce generalizable
data and confirm results of the present study.
Conclusion
This study assessed the student feedback component of the implicit curriculum in
the Division of Social Work at California State University Sacramento with the purpose
of assessing the use and perceived efficacy of seven identified feedback channels. It also
sought to identify barriers to effective feedback and suggestions to improve feedback
within the Division. The study was grounded in critical theory and empowerment
frameworks, as well as a systems perspective to addressing feedback within an
organization, and addressed student feedback as a component of the implicit curriculum
as defined by the Council on Social Work Education (CSWE).
Study findings indicated that teaching evaluations were the most commonly used
feedback channel followed by in-class discussions, email/online communication, and
informal conversations, while participation in governance was the least used feedback
channel, and participation in both social work and other student organizations were listed
as rarely used by students. Overall, the more commonly used feedback channels were
also perceived as the most efficacious by students.
Notable barriers to providing effective feedback identified by participants
included fear of being penalized by grades or being viewed negatively, a culture or
environment in which students aren’t comfortable providing feedback, limited time and
resources, and lack of accountability both of faculty and staff for responding to feedback,
and of students for providing constructive feedback. Suggestions to improve student
feedback in social work education included implementing policies that formalize and
make students aware of existing feedback channels, facilitation of communication
between students and Division faculty and staff, measures to improve accountability, and
promotion of increased participation in feedback channels. The research findings
strongly indicate the need for increased discourse between students and faculty about
student feedback within the MSW program at CSUS.
Finally, the study proposes a targeted instrument for the assessment of student
feedback that schools of social work can maintain for purposes of assessment of the
implicit curriculum.
Appendix A
Student Surveys
Implicit Curriculum in Social Work Education:
MSW Student input and feedback at California State University Sacramento
Part One: Demographic information
1. What year of study are you in within Social Work? Please circle one
a. MSW I
b. MSW II
c. MSW other
d. Other field of study
2. What is your age?
_________________
3. How many years of experience do you have working (full or part-time paid
employment) as a social worker or in a social work setting (please do not count volunteer
or internship work)?
________________
Part Two: Student Input
Please circle yes or no for the following questions:
1. I have provided my feedback/input to the division of Social Work faculty and
staff
Yes / No
2. I believe Division of Social Work faculty and/or administration take the feedback
I give into consideration
Yes / No
3. I believe the feedback I provide is important to Division faculty and staff
Yes / No
4. I believe the Division of Social Work takes action based on the feedback I provide
Yes / No
Please turn page
Part Three: Feedback Channels
The following questions pertain to channels for student feedback in the Division of Social
Work. Please indicate on the 5-point scale your viewpoint of each feedback channel
Teaching evaluations of faculty members (mid-semester and end-of-semester evaluations)
(Scale: 1 = Never, 2 = Rarely, 3 = Sometimes, 4 = Often, 5 = Always)
I have used teaching evaluations as a form of student input    1   2   3   4   5
I believe Division of Social Work faculty and/or administration take the feedback I give in teaching evaluations into consideration    1   2   3   4   5
I believe the information I provide in teaching evaluations is important to Division faculty and staff    1   2   3   4   5
I believe the Division of Social Work takes action based on the feedback I provide in teaching evaluations    1   2   3   4   5
What are some of the barriers to using teaching evaluations as a method for providing
student feedback/input to Division faculty and staff?
Do you have any suggestions for improving this method of student feedback?
Informal Conversations with faculty or staff
(Scale: 1 = Never, 2 = Rarely, 3 = Sometimes, 4 = Often, 5 = Always)
I have used informal conversations with faculty or staff as a form of student input    1   2   3   4   5
I believe Division of Social Work faculty and/or administration take the feedback I give in conversations with faculty or staff into consideration    1   2   3   4   5
I believe the information I provide in conversations with faculty or staff is important to Division faculty and staff    1   2   3   4   5
I believe the Division of Social Work takes action based on the feedback I provide in conversations with faculty or staff    1   2   3   4   5
What are some of the barriers to having conversations with faculty or staff as a method
for providing student feedback/input to Division faculty and staff?
Do you have any suggestions for improving this method of student feedback?
In-Class Discussions
(Scale: 1 = Never, 2 = Rarely, 3 = Sometimes, 4 = Often, 5 = Always)
I have used in class discussions as a form of student input    1   2   3   4   5
I believe Division of Social Work faculty and/or administration take the feedback I give in in class discussions into consideration    1   2   3   4   5
I believe the information I provide in in class discussions is important to Division faculty and staff    1   2   3   4   5
I believe the Division of Social Work takes action based on the feedback I provide in in class discussions    1   2   3   4   5
What are some of the barriers to using in class discussions as a method for providing
student feedback/input to Division faculty and staff?
Do you have any suggestions for improving this method of student feedback?
Email or online communication
(Scale: 1 = Never, 2 = Rarely, 3 = Sometimes, 4 = Often, 5 = Always)
I have used email or online communication as a form of student input    1   2   3   4   5
I believe Division of Social Work faculty and/or administration take the feedback I give in email or online communication into consideration    1   2   3   4   5
I believe the information I provide in email or online communication is important to Division faculty and staff    1   2   3   4   5
I believe the Division of Social Work takes action based on the feedback I provide in email or online communication    1   2   3   4   5
What are some of the barriers to using email or online communication as a method for
providing student feedback/input to Division faculty and staff?
Do you have any suggestions for improving this method of student feedback?
Participation in SWSA or Phi Alpha (Social work student organizations)
(Scale: 1 = Never, 2 = Rarely, 3 = Sometimes, 4 = Often, 5 = Always)
I have used participation in social work student organizations as a form of student input    1   2   3   4   5
I believe Division of Social Work faculty and/or administration take the feedback I give in social work student organizations into consideration    1   2   3   4   5
I believe the information I provide through participation in social work student organizations is important to Division faculty and staff    1   2   3   4   5
I believe the Division of Social Work takes action based on the feedback I provide in social work student organizations    1   2   3   4   5
What are some of the barriers to using participation in social work student organizations
as a method for providing student feedback/input to Division faculty and staff?
Do you have any suggestions for improving this method of student feedback?
Participation in other (non-social work) student organization
(Scale: 1 = Never, 2 = Rarely, 3 = Sometimes, 4 = Often, 5 = Always)
I have used participation in other student organizations as a form of student input    1   2   3   4   5
I believe Division of Social Work faculty and/or administration take the feedback I give in other student organizations into consideration    1   2   3   4   5
I believe the information I provide through participation in other student organizations is important to Division faculty and staff    1   2   3   4   5
I believe the Division of Social Work takes action based on the feedback I provide in social work student organizations    1   2   3   4   5
What are some of the barriers to using participation in other student organizations as a
method for providing student feedback/input to Division faculty and staff?
Do you have any suggestions for improving this method of student feedback?
Participation in governance (ex: student government, faculty meetings and governing boards)
(Scale: 1 = Never, 2 = Rarely, 3 = Sometimes, 4 = Often, 5 = Always)
I have used participation in governance student organizations as a form of student input    1   2   3   4   5
I believe Division of Social Work faculty and/or administration take the feedback I give through participation in governance into consideration    1   2   3   4   5
I believe the information I provide through participation in governance is important to Division faculty and staff    1   2   3   4   5
I believe the Division of Social Work takes action based on the feedback I provide through participation in governance    1   2   3   4   5
What are some of the barriers to using participation in governance as a method for
providing student feedback/input to Division faculty and staff?
Do you have any suggestions for improving this method of student feedback?
Part Four: Comments
Are there methods of sharing feedback with the Division of Social Work not covered in this survey? Please identify them here.
Please identify any barriers to using these additional methods of sharing feedback.
What are some solutions to increase student input into social work education and the consideration of that input?
Thank you for your participation in this study!
Appendix B
Key Informant Interview Protocol
Interviews will be unstructured; however, guiding questions may include the following:
-How long have you taught in the social work division at CSUS?
-Are you full-time or part-time?
-Do you think student input is important to faculty and staff in the Division of Social Work?
-Do you think students have effective mechanisms for providing feedback to Division faculty and staff? What are some of those feedback mechanisms?
-What are some of the barriers to student input in the Division?
-Have you seen the quality or quantity of student input change throughout your time at
CSUS? How so?
-What are some ways that student input in social work education can be improved?
-What’s your vision for the future of student-faculty relationships within the Division?
References
Aguado, T., Ballesteros, B., & Malik, B. (2003). Cultural diversity and school equity. A
model to evaluate and develop educational practices in multicultural education
contexts. Equity & Excellence in Education, 36(1), 50-63.
Astin, A. W. (1984). Student involvement: A developmental theory for higher education.
Journal of College Student Personnel, 25(4), 297-308.
Banathy, B. H., & Jenlink, P. M. (1996). Systems inquiry and its application in education.
Handbook of Research for Educational Communications and Technology, 74-92.
Brainard, A. H., & Brislen, H. C. (2007). Viewpoint: Learning professionalism: A view
from the trenches. Academic Medicine, 82(11), 1010-1014.
CC_HER Bureau (2000). Universities as Sites of Citizenship and Civic Responsibility,
Document DGIV/EDU/HE 36. Strasbourg: Council of Europe.
Cohen, P. A. (1980). Effectiveness of student-rating feedback for improving college
instruction: A meta-analysis of findings. Research in Higher Education, 13(4),
321-341.
Council on Social Work Education (2008). Educational Policy and Accreditation Standards. Retrieved from http://www.cswe.org/File.aspx?id=13780
Edwards, R. L., LaSala, M.C., & Battle, D. (2008). Evaluating the Implicit Curriculum:
How Values are Transmitted in Social Work Education: Proposal for Alternative
Affirmation Project. Retrieved from http://www.cswe.org/File.aspx?id=30811
Feldman, R. A. (2009). “Reinventing Social Work Accreditation”: Write On! Research
on Social Work Practice, 19(1), 124-126.
Fielding, M. (2005) Putting Hands Around the Flame: Reclaiming the radical tradition in
state education. Forum, 47(2&3), 61–69.
Fielding, M. (2006). Leadership, radical student engagement and the necessity of person-centred education. International Journal of Leadership in Education, 9(4), 299-313.
Flutter, J. (2007). Teacher development and pupil voice. Curriculum Journal, 18(3), 343-354.
Giroux, H. A., & Penna, A. N. (1979). Social education in the classroom: The dynamics
of the hidden curriculum. Theory & Research in Social Education, 7(1), 21-42.
Grady, M. D., Powers, J., Despard, M., & Naylor, S. (2011). Measuring the implicit
curriculum: initial development and results of an MSW survey. Journal of Social
Work Education, 47(3), 463-487.
Hafferty, F. W. (1998). Beyond curriculum reform: Confronting medicine's hidden
curriculum. Academic Medicine, 73(4), 403-7.
Holloway, S. (2008). Some suggestions on educational program assessment and
continuous improvement, revised for the 2008 EPAS. Alexandria, VA: Council on
Social Work Education Commission on Accreditation.
Holloway, S. (2009). Some suggestions on educational program assessment and
continuous improvement for the 2008 EPAS. Retrieved from
http://www.cswe.org/File.aspx?id=31582
Holloway, S., Black, P., Hoffman, K., & Pierce, D. (2009). Some considerations of the
import of the 2008 EPAS for curriculum design. Retrieved October 29, 2009.
Holosko, M., Skinner, J., MacCaughelty, C., & Stahl, K. M. (2010). Building the implicit
BSW curriculum at a large southern state university. Journal of Social Work
Education, 46(3), 411-423.
Hutchison, E. D. (2010). Dimensions of human behavior: The changing life course.
Thousand Oaks, CA: Pine Forge.
Kember, D., Leung, D. Y., & Kwan, K. (2002). Does the use of student feedback
questionnaires improve the overall quality of teaching? Assessment & Evaluation
in Higher Education, 27(5), 411-425.
King, M., Morison, I., Reed, G., & Stachow, G. (1999). Student feedback systems in the business school: A departmental model. Quality Assurance in Education, 7(2), 90-100.
Knol, M. H. (2013). Improving university lectures with feedback and consultation.
Kulik, J. A., & McKeachie, W. J. (1975). The evaluation of teachers in higher education.
Review of Research in Education, 3, 210-240.
Lee, J. A. (2001). The empowerment approach to social work practice: Building the
beloved community. Columbia University Press.
Macmurray, J. (1933). Interpreting the Universe. London: Faber.
Menon, M. E. (2003). Student involvement in university governance: A need for
negotiated educational aims? Tertiary Education and Management, 9(3), 233-246.
Mizikaci, F. (2006). A systems approach to program evaluation model for quality in
higher education. Quality Assurance in Education, 14(1), 37-53.
Obondo, A. T. (2000). Politics of Participatory Decision-Making: The Case of Kenyatta University and the University of Nairobi. French Institute for Research in Africa, Les Cahiers.
Read, B., Archer, L., & Leathwood, C. (2003). Challenging cultures? Student
conceptions of 'belonging' and 'isolation' at a post-1992 university. Studies in
Higher Education, 28(3), 261-277.
Reynolds, M. (1999). Critical reflection and management education: rehabilitating less
hierarchical approaches. Journal of Management Education, 23(5), 537-553.
Warren, M. R. (2005). Communities and schools: A new view of urban education reform.
Harvard Educational Review, 75(2), 133-173.
Wood, D. D. (1993). Faculty, Student, and Support Staff Participation in College
Governance: An Evaluation.