CHEMISTRY PROGRAM: Assessment of student learning activities

CHEM I, II, III, IV
(1) The faculty associated with the teaching of CHEM I and IV, General Principles (Marc
Richard, Brian Rogerson, Louise Sowers, Rogers Barlatt, Kristen Hallock-Waters and Elizabeth
Pollock) decided to introduce WileyPLUS online homework in the general chemistry sequence.
After three semesters of this practice, it became important to determine whether increased
student engagement with CHEM I was having an impact on student learning. Using the first
exam of the semester as a baseline indicator of how freshman students initially approach studying,
Brian Rogerson noticed an improvement in performance compared to the cohort taking the
course in the prior academic year, who were not assigned WileyPLUS homework. Importantly, a
similar trend was observed in the median scores for the American Chemical Society (ACS) final
examination. During semesters when WileyPLUS was not used, the median
score was below the national median. However, during the two semesters since WileyPLUS was
implemented, the median score was at or above the national median. Determining whether the steady
assignment of online homework is actually having an impact on learning will require an analysis
of whether the student cohorts being compared are indeed similar. High school class rank,
math SAT scores, and the proportion of students who were actually first-semester
freshmen might shed additional light on the significance of these preliminary results.
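A minimal sketch of how such a cohort comparison might be carried out, assuming a hypothetical per-student data file (the file name, column names, and cohort labels below are illustrative placeholders, not the Program's actual records):

# Sketch: compare the pre- and post-WileyPLUS CHEM I cohorts on background
# covariates and on ACS final-exam scores. All names below are hypothetical.
import pandas as pd
from scipy.stats import mannwhitneyu

df = pd.read_csv("chem1_cohorts.csv")          # hypothetical student records
pre = df[df["cohort"] == "no_wileyplus"]       # prior-year cohort
post = df[df["cohort"] == "wileyplus"]         # cohort assigned online homework

# Are the cohorts comparable on background measures?
for covariate in ["math_sat", "hs_rank_percentile", "first_semester_freshman"]:
    print(covariate, round(pre[covariate].mean(), 2), round(post[covariate].mean(), 2))

# Nonparametric test of whether ACS scores shifted between cohorts
stat, p = mannwhitneyu(pre["acs_score"].dropna(), post["acs_score"].dropna(),
                       alternative="two-sided")
print(f"median pre = {pre['acs_score'].median()}, "
      f"median post = {post['acs_score'].median()}, p = {p:.3f}")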
(2) We are also investigating the utility of more qualitative assessments. For example, Marc
Richard designed IDEA questions that probed student perception of the entire WileyPLUS
online package and how it helped them learn chemistry. The aggregate data for all CHEM I
lecture sections during fall 2009 (n=170) clearly showed that the vast majority of students
answered “More true” or “True” in response to the following three questions:
(a) WileyPLUS had an overall positive impact on my learning
(b) WileyPLUS helped reinforce material from lecture and lab
(c) WileyPLUS helped me remain engaged with material
This has encouraged us to proceed with introduction of WileyPLUS into the teaching of CHEM
IV where similar assessments are now being conducted by Rogers Barlatt and Kristen Hallock-Waters. Furthermore, Brian Rogerson is also testing the impact of WileyPLUS online homework
and associated resources on students in his Biochemistry course. This is the first time such a
pedagogical strategy has been tried in this junior-level course.
(3) Kristen Hallock-Waters will continue the use of the ACS examination for the full year of
general chemistry (CHEM I + IV) to assess the extent to which students carry over into CHEM IV
what they learned in CHEM I.
(4) On the organic chemistry side, Ada Casares continued her research collaboration with Dr.
John Penn of West Virginia University on Computer Instruction and Assessment in organic
chemistry (CHEM II). As a follow-up to their past collaborative study "Tools for performing
organic reaction mechanisms over the web”, she carried out some preliminary assessment of
what Stockton students learn in CHEM II using the online instructional databases that constitute
a fundamental part of the way she teaches the course. By comparing student performance on
the online tests versus standard “paper” exams with questions that were not in the database,
she tentatively concluded that students using computer instruction were making progress with
respect to the learning goals of the course.
(5) In addition, Jon Griffiths shared with the CHEM program the results of a 15-year study using
the ACS examination in organic chemistry as an assessment tool to gauge learning after
students completed the CHEM II + III sequence. The test was administered as a final exam for
the CHEM III course. As with CHEM I, he found that the median performance was below the
national median. Interestingly, when he compared the median score of the period spanning the
first 9 years versus the median score of the most recent 6 years, he found that the latter score
was significantly lower, meaning that recent cohorts were performing less well than earlier ones.
One interpretation was that this may be the result of an increased number of students taking
CHEM III due to recent changes in chemistry requirements made by some NAMS programs.
Whether this resulted in a decrease in the average level of academic preparedness of CHEM III
student cohorts remains to be determined. A more detailed analysis of class demographics is
required to settle this question.
(6) Based on our assessment workgroup discussions, Shanthi Rajaraman has included
questions in her CHEM II and CHEM III exams to test the following:
1. Structure/property correlation (qualitative goal)
2. Plotting and interpreting graphed information (quantitative goal)
To assess students' understanding of graphs, she has been incorporating questions involving
graphs and figures from which students are required to interpret information and arrive at answers.
By layering questions on certain topics, Shanthi has been able to see where the students are
getting a concept and where they are losing it. These questions are a regular feature
of her tests; she is able to accurately gauge where students have not grasped a concept
and also judge the level of challenge a student can take on, based on layered questions
that lead from lower-level tasks (involving slightly more than direct reproduction of
presented information) to higher-level tasks (synthesis questions).
(7) An organic chemistry workgroup has recently been formed that is beginning to review and
revise CHEM II laboratories in order to introduce the assessment of specific learning outcomes
for each laboratory exercise. The workgroup is made up not only of full-time faculty, but also
includes all part-time and adjunct faculty.
Experiential Chemistry (A GNM course)
This course was taught again after a brief absence from the schedule. Marc Richard is taking
over the reins of this course from Jon Griffiths, who designed and taught it for many years and is
now retiring. This GNM course is unique in that it is the only general studies course taught entirely in
the laboratory. A major revision of the course text, originally written by Jon Griffiths, has been
completed. In addition, Marc Richard has begun a two-year assessment of the course in
collaboration with the Center for Workshops in the Chemical Sciences. An outside evaluator
administered a survey and conducted face-to-face interviews with students. This past summer
Marc participated in a week-long workshop conducted by the Center, and the evaluator will be
returning to conduct a second assessment.
Philosophy and Religion Program
Assessment Document
Working Copy
September 2010
Prepared by:
Prof. Lucio Angelo Privitello, Ph.D.
Associate Professor of Philosophy and Religion, and
Coordinator of the Philosophy and Religion Program
I. Opening Statement:
The study of philosophy entails cultivation in the history of the field along with
the use of, and proficiency in, interpretive abilities founded upon critical thinking and
analytic techniques. In terms of the Program’s most current brochure - the study of
philosophy is undertaken to help develop student capacity for systematic reflection on the
human condition through philosophical analysis of values, with an application, through
proven models, of what they have learned to their lives led as citizens and professionals.
While information is part of this process - historical, theoretical, laws of thought,
argument analysis, and interpretative techniques - the most important lessons in the study
of philosophy entail openness to the wonder of existence, to being in the world, and to the
importance of developing interpretative, analytic, and normative skills so that critical
reflection becomes an intellectual imperative carried out as a joy. For most of the
aforementioned key terms and areas of specialty there are measurable performances, and
yet, as part of the philosophical forma mentis, the exclamation “I know that I do not
know", when uttered in sincerity, is the very materiality of the measure that overwhelms
and at the same time grounds all practical and theoretical problems of philosophy as well
as learning outcome rubrics.
As an undergraduate program in philosophy our goal is to prepare students for the
further professional study of philosophy, along with its practice as a pedagogical art,
while maintaining a vision that a philosophical education serves as a thread within the
labyrinth of cultural emergence. If philosophy was once the Queen of the Sciences, today,
and with our help, it may at least be the Prince of the Humanities. This call to a noble love
of culture serves as preparation for our students no matter what goals they have in mind,
and yet hones and tests the skills of those who desire to pursue a graduate degree in
philosophy or religion.
II. Program Survey:
Faculty:
Full time: 5
Adjunct: 2
Majors: 19 (as of Sept. 2010)
Minors: 26 (as of Sept. 2010)
General Studies students: (the number who on average take
courses in Philosophy and Religion has yet to be compiled)
III. Goals:
Philosophy majors interested in continuing with their studies in the field should
graduate with a certain level of mastery in the historical study of the field, and guided by
faculty members, begin to realize their particular interests (and deepen the use of
philosophical methods and analysis) so that selections of Graduate School study may be
realistically approached, and successfully begun. Because of the size of our Program we
are able to act as mentors in what can be called “apprenticeships in creativity” for the
good of the ideals of our students. For philosophy minors (requiring five courses) the
emphasis should be on the interdisciplinary application of philosophical methods, history,
and interpretative skills to their chosen discipline and area of interest. For majors and
minors as well as general studies students, a community of learning is displayed in
classes as the respect for dialectical debate, argument analysis, and the love of learning
and sharing ways of knowing. For students who take philosophy courses as part of their
GEN program, or as an elective, a goal is to provide an understanding of the nature of
philosophy as practice and theory, and to deepen their awareness of the interdisciplinary
aspects of critical reading and writing.
IV. Expected Outcomes:
For majors and minors in philosophy:
1. Theory of the philosophical and religious field(s):
a. the history and contemporary state of the field and issues
b. the understanding of the various fields and sub-fields within the disciplines
c. the ability to grasp principles, arguments, and methodologies
2. Practice in philosophical reflection and interpretation:
a. recognizing and analyzing arguments
b. clarity and confidence in oral and written expositions
c. ability to integrate and relate various approaches
d. ease in the research and critical selection of materials
3. Appreciation of Philosophy and Religion (for General Study students, and
majors and minors):
a. the ability to wonder, doubt, and reflect with joy
b. the widening of their cultural background and values
c. the improvement of communication and research skills
d. the development of a community of learners
e. the confidence to make contributions to their social arena
V. Dissemination:
Our Program brochure contains a basic outline of expected learning outcomes for
prospective students. As part of this first introduction, each Program member includes
these expectations, and adds detail to each of them, in their presentations at the Stockton
Open House gatherings that are scheduled twice a semester for prospective students and
parents. These presentations include information on what our more recent graduates are
doing, a record of admissions to graduate schools, placement records, and career choices.
This helps inspire our prospective students and gives some reassurance to parents
while furthering our expected learning outcomes. By keeping an updated list of the
accomplishments, graduate programs, and degrees of our Program graduates, we can
further disseminate and discuss which factors in our aims have proven most effective in
encouraging students to continue their education.
From our Program meeting discussions we maintain notes and periodically
incorporate the resulting changes into the updated and edited versions of our Program
brochure. The brochure is revised about once every two years on average. We wish to be
able to make changes more frequently, and for this reason we are planning a more direct,
interactive, and timely use of our Program and Division website link. In brief, the
program's primary aim is to help develop student capacity for systematic reflection on the
human condition through philosophical analysis of values. This aim goes hand-in-hand
with an application, through proven models, of what students have learned to their lives
led as citizens and professionals.
Course syllabi for Philosophy and Religion courses as well as GEN courses taught
by program members include course objectives for learning outcomes. We place our
course overviews and goals 'front-and-center' on the syllabus as "Course Description",
"Course Objectives", and "Course Goals", and follow these with "Requirements" and
"Assessment" describing how these goals are to be attained. As part of our introduction to a
course, and before and after assignments, we outline goals and learning expectations.
These objectives include a) appreciation for the field and study of philosophy
and religion, b) striving to raise interpretative and critical thinking abilities in order to
interpret texts and findings, c) the use of analysis and methods, d) participation in
discussions, e) the development of a critical understanding of students' own beliefs and the
beliefs and views of others, and f) the joy of exploring and becoming comfortable with raising
value questions about the world they live in, raising their consciousness of its affective
and social dimensions. In the detailed responses to student papers, the actions taken by
each program member are clearly evident and display a wide range of ways of assessing progress
and pitfalls. The incorporation of "re-writes" (and, for some courses, multiple versions of
papers) is our touchstone for shared Program-wide assessment findings. For proposed
courses for both the Philosophy and Religion Program and GEN courses there is a
presentation to a college-wide committee that reviews learning goals, the kind of courses
(Tradition, Thematic, Experiential), and general competencies and content experience.
The dissemination, discussion, and evaluation of learning measures at this level are
extremely intense and most rewarding. Review for courses over five years old (both
within the Program and in GENs) is also where we review and re-evaluate learning
outcomes and course effectiveness.
VI. Program Assessment History (thumbnail version):
The Philosophy and Religion Program has, for some years, standardized the
format and expectations for the Senior Seminar, our capstone course. Students are
required to give a formal presentation on a text of philosophical significance used within
the course, along with an oral (and written) commentary on a fellow student's paper.
The requirement to demonstrate facility in using advanced research methods and to produce a
15-20 page scholarly paper on a topic at that level has proven successful. As a Program
we have been collecting the Senior Seminar papers in electronic format and
continue to store them on CDs. As will be mentioned below under "Discussion", the new
course “Philosophical Research Methods” will allow us to take action on what we have
learned and shared as a Program from collecting the Senior Seminar papers. The need
for a mid-point course to boost facility with advanced research has already shown itself
to be a positive step in the growth of the assessment stage we have already implemented at the
Senior Seminar level.
The Program has conducted "exit interviews", administered to students
to indicate their familiarity with major themes and figures in the history of philosophy.
Part of the interview allowed students to report on the Program’s strengths and
weaknesses, and to indicate preferences for types of courses they would like to see
offered. Thanks to the consistent call for offerings in Religion we prepared for, and then
hired our specialist in the field, Prof. Siecienski. This has proven to be a deeply positive
outcome of that part of the exit strategy. Due to this we have decided to adjust the exit
strategy, and recalibrate it into what we are calling a “student narrative” (as mentioned
under “Assessment Measures”).
For a fuller version of our Program’s Assessment activities, discussions, course
load, advising load, and actions, we refer you to the Philosophy and Religion Program
2009-2010 Coordinator’s Report, prepared for and delivered to our Division Dean in May
2010, especially pages 2, 4-5, and 15-20.
VII. Assessment Measures:
Our learning goals (see archived syllabi and the Philosophy and Religion Program
2009-2010 Coordinator’s Report) enable students to understand the nature of the theories
and methods acquired within our courses for use as Philosophy majors and minors. As
part of these goals we count a) corrected and graded papers (originals and re-writes)
where students are given guidelines for improvement, b) in-class presentations that
follow a clear format, c) research projects, and d) the Program’s “Capstone” Senior
Seminar course that is a culmination and intensification of these goals. Points a through c
occur in every class throughout the unfolding of a semester. While it is
rare (due to the size of our College, Division, and Program) to have courses filled only by
Philosophy majors and minors (with the exception of the PHIL 4600 Senior Seminar, and
perhaps PHIL 3830 "Major Thinkers/Major Themes"), we must balance these goals against the
level of difficulty and intensity of focus in paper topic choices. In response to
this aspect of a class, "topic choices" with a range of difficulty are commonly used.
Both program courses and GEN courses taught by Philosophy and Religion program
members stress a structure for co-curricular learning, and provide a rich learning
experience for each enrolled and dedicated student. This is an evident match between our
courses and our Program objectives as outlined in our brochure and syllabi, as well as a
working partnership with our Division programs. Examples of these are found in our
syllabi archive.
Basic examples of assessment measures begin with class
discussion, student presentations, student commentaries on presentations, as well as
faculty-student meetings (out-of-class interactions), papers, and research. These measures
are used during and throughout each semester course.
As a projected tool for assessment the Philosophy and Religion Program will be
implementing a "student narrative", most likely begun before the midpoint of their
Philosophy and Religion course studies, (and returned to by the student) as a way to gain
more insight into how Philosophy and Religion majors and minors have experienced and
reacted to the courses, level, and workload. This has been based on our Program
discussion on the “exit interview”, and is a re-evaluation of our original document.
VIII. Discussion (Findings):
The learning goals are evaluated, developed and selected from faculty experience,
study, and reflection, and from questions related to student evaluations from the IDEA
forms as well as from faculty-student discussions at the end of the course. This
information is discussed at our Program meetings. Course objectives and work/reading
loads have been adjusted according to the constructive feedback of student questions.
These adjustments are recorded in the yearly Program’s Coordinator’s Report by placing
“Revised” after a specific course. By sharing these findings at our Program meetings, and
in conversations during the semester, we are able to reevaluate both the dissemination
and the learning measures. The archive of syllabi attests to these changes.
i) Assessment Measures (IDEA)
a. For each class taught by Program members we receive
assessment relevance rating scores on the twelve learning objectives
that make up the IDEA form. Students also provide short
written comments. This information is released to faculty.
ii) Frequency
a. For every course (by semester) for untenured faculty
b. For selected courses (by semester) for tenured faculty
iii) Findings
a. Program faculty discusses student comments and adjusts
learning goals and pedagogical aspects accordingly.
b. Discussion is currently underway about making copies of IDEA
forms for all Senior Seminars available to the Program faculty
member next in line to teach the course.
For philosophy majors, the implementation of a portfolio is one of the goals we
continue to work towards. This allows students to save their work from the various course
levels, so we may engage them in noticing their development. This factor should also be
stressed when it comes to student course notebooks. This works best for philosophy
majors. Because our program is relatively small, we are able to engage students and
develop self-assessments because they show up in various courses that we teach, at
faculty office hour visits, at their student Philosophy Clubs, and the PST Honor Society
of Philosophy, as well as at the two Program dinner "socials" we offer. We
are able to assess their work, and show them actual progress, as they move from
introductory to upper-level courses due to the size and teacher-student ratio. We are also
able to discuss how they experienced and gauged our pedagogical approach. Because of
the size of our Program, and the co-curricular spirit of our College, we seek to
accommodate different learning styles, and in a few of the courses this takes the form of
projects that cross the curriculum into the fields of arts, theater, music, and poetry. We
have received positive feedback from professors in various fields outside of philosophy
about how some of our projects have inspired and captivated their students.
Within each of the courses run by Program faculty, we share the technique of
showing students what represents excellent work in the field. This usually is in the form
of peer-reviewed articles, student articles in undergraduate journals of philosophy,
published Readers, and seminal texts from the history of philosophy. This particular
aspect is presently being refined as we move our new course "Methods" into its second
academic sailing, and it will make up part of a "Methods Reader". Given how thoroughly we
have discussed the components of this new course, we expect great power and range from it.
Having recently implemented the new Philosophy/Religion program course
entitled “Philosophical Research, Writing, and Methodology” – or “Methods” for short,
we feel that assessing student learning will become clearer and more accessible. This is
presently “in the works”. The “Methods” course is now being taught for the second time.
This course is required at a specific time in a student’s learning process, and comes
before students may take upper-level philosophy and religion courses and definitely
before taking the senior seminar course. As a midpoint stage to the course progression,
the methods course will allow students to critically review their past work and the work
they will be doing in a course that is all about outcomes and learning goals. It is at this
stage that a creation and subsequent critique of the “portfolio” of student work would
work best and prove most suitable for student learning. It is also a stage where the
“student narrative” will become a guide to both faculty and student assessment. This will
be a powerful tool, if not our most powerful tool, for assessing the changes and progress
in student work. The syllabus for this course, as well as the original course proposal, is
archived and serves as a working document and testament to our commitment to achieving
excellence in the field.
As part of the work of program members, assessment, evaluation, re-evaluation,
and action also take place at the level of student philosophy clubs and extra-curricular
activities. For instance, with the Stockton Philosophical Society (SPS), a student-run
society, we are able to gauge how students take examples and topics from class and
initiate discussions among themselves. We notice which topics, problems, and theories
are raised most frequently. One faculty member is present at these meetings and shares
the outcome at our Program meetings. This is also part of what happens at the
“Philosophy Goes to the Movies” where students use parts of theories to further
understand and enjoy film making techniques and ideas. As a Program we have been able
to see actual student self-evaluations and changes in the approach to the SPS meetings.
We have seen and noted the changes that students have made to the structure of their
meetings, and discuss this at our Program meetings for use in the revision of course
aspects.
For philosophy majors and minors as well as interested Stockton students from all
majors, there also exists a more structured philosophy society, the Phi Sigma Tau
International Honor Society in Philosophy. This is an internationally and nationally
recognized society, with membership based on a 3.00 GPA and three completed philosophy
courses. There is a student President, Vice-President, and Secretary, and one Faculty Advisor.
This provides our Program with a clear gauge of how students have responded and desire to
continue with readings and discussions outside of the classroom. This society (PST) is an
honor society where students graduate with recognition for academic achievement. The
National Newsletter publishes outlines of the meeting topics. Students from our Program
have consistently been recognized for their work, and have also published book reviews
(for their peer-reviewed undergraduate journal Dialogue). We are happy to find how
students re-evaluate the structure of the PST meeting, and share their decisions with
Program faculty.
Students in our Program also voluntarily attend lectures by visiting speakers
invited by our program in the field of philosophy, as well as attend the Classical
Humanities Society Lecture Series at Stockton (six lectures per academic year). This
evidence of care for culture, the learning process, and the development of student
interest continues with our majors and minors through their participation in local philosophy
conferences such as "The Greater Philadelphia Consortium Undergraduate Conference", as well
as conferences in NY and RI. These are very insightful markers for us because they are
purely voluntary as far as student attendance and interest are concerned.
experience of participating in these conferences students have proposed changes to their
own Philosophy Society (SPS, PST), and wish to expand their reach and organize a
Regional Undergraduate Philosophy Conference. These kinds of actions have given us
much to discuss on the vibrancy of our pedagogical approach.
IX. Precipitating Factors, Concerns, and Assessment Components:
For philosophy majors, fair assessment is best gained by students following the
course progression (Introductory, Mid-level, Upper-level courses) thus picking up tools
and mastering levels of difficulty to prepare them for the direct measurement of their
work within the Senior Seminar with the capstone project. As it stands, we wish to
outline and implement more rigorous "pre-requisites" so as to guide students on this path,
and stress the need for introductory and mid-level courses to be completed before more
theoretical/historical upper-level courses are entered. A "Recommendation for
Philosophy Majors" has been proposed, and can be found on pages 49-50 of our 2009-2010
Coordinator's Report, submitted in May 2010.
Due to the size of our College, Division, and Program, our limited ability to offer
multiple sections of courses, and the restriction on low-enrollment courses, we remain
in discussion about the best way to obtain this course progression. We must be allowed to
adjust course enrollment numbers for this possibility. Without this change toward a structured
course progression, assessment is weakened and ultimately suffers. This problem most
clearly shows itself with transfer students and their previous courses and credits, and with
students who enter mid- and sometimes upper-level philosophy courses without prior
courses in philosophy.
Our greatest need is to make sure that our new "Methods" course continues to run
(adjusting enrollment quotas), for without this quasi-Dantean mid-point gauge of student
development and practice in the tools and requirements outlined here,
assessment in the truest sense of the term would remain in a
pedagogical Limbo.
Part of our continuing vision for student involvement and, in turn, assessment of
student learning is to have funds available for undergraduate assistantships rewarding
dedicated and industrious philosophy and religion students with a glimpse of the
workings of course preparation, the running of our scholarly societies (the Classical
Humanities Society being a key example), and lecture preparation. This would prepare
students for future study and employment, and bring us closer to the deepest factor in a
philosophical training, namely mentorship and apprenticeship in creativity and values.
As a final concern for assessment outcomes, and with a focus on the preparation
of collected materials, the Program has discussed the need for a paid course release for an
assigned faculty member (one course within two semesters) so as to pool, copy, organize,
and report on the assessment factors outlined in this document.
This course release would enable us to direct our materials and findings, student papers,
evaluation forms, and everything else that counts toward evaluation to a faculty member
who has been granted the time and office to compose and deliberate on the findings.
ASSESSMENT OF STUDENT LEARNING
CHEMISTRY PROGRAM (2003-2007)
A. Learning goals
What are the particular levels of knowledge, skills, and abilities that our students attain as a
result of their experiences with the Chemistry Program curriculum? In response to this question,
during Spring 1999, the program came up with a list of skills/outcomes we wanted our students
to have/meet when they graduated. The categories were: Basic Laboratory Techniques,
Intermediate Lab Techniques, Instrumental Skills, Chemical Concepts, Quantitative (Science)
Skills, Technical Communication, and Decision Making Skills. This comprehensive list can be
found in Appendix 1.
These goals were primarily intended for our majors (Chemistry [CHEM] and
Biochemistry/Molecular Biology [BCMB]) and were in need of further deliberation. However,
we made no further progress as a program and instead individual faculty experimented with
classroom assessment techniques. With the advent of the institutional assessment of student
learning initiative, in spring 2003 the program decided to charge a committee/workgroup with
(1) identifying a set of learning goals that would be applicable to a broader student population
and (2) formulating an assessment plan. The committee members are Kelly Keenan, Kristen
Hallock-Waters, Shanthi Rajaraman, Bob Olsen, Ada Casares and Brian Rogerson.
The committee identified eight broad learning goals that need to be assessed:
1. Data Analysis Skills (graphing, data interpretation)
2. Molecular Representation
3. Basic Measurement Skills
4. Knowledge of Intramolecular and Intermolecular Forces
5. Recognizing the Properties of Substances: Structure/Properties Correlation
6. Knowledge of Solutions
7. Understanding How to Separate Compounds
8. Understanding Atomic, Molecular and Macromolecular Sizes.
This list is not meant to be comprehensive; it simply constitutes a reasonable starting point.
There is also no suggestion of relative importance at this stage. Each of these broad goals was
expanded into a subset of concepts that are described in detail in Appendix 2. This expansion
resulted in some degree of overlap across the main learning goals. Furthermore, discussions with
program faculty led to the recognition that this overlap was not only necessary but desirable as it
may help student learning and also help us achieve our desired learning outcomes. It should be
noted that since assessment of student learning will always be a work in progress, these goals
will be subject to constant modification, particularly as a result of feedback from assessment
instruments, or because of changing pedagogical interests of Chemistry Program faculty, and at
yet other times because of programmatic changes in the curriculum.
A preliminary discussion concerning appropriate assessment instruments to measure these
learning outcomes resulted in the narrowing of the scope of our initial assessment effort. The
assessment workgroup submitted a recommendation that we initially assess two basic qualitative
goals and two basic quantitative goals:
Qualitative:
1) Understanding what chemical formulas mean and how to properly represent the corresponding
chemical structures.
2) Understanding how to separate compounds.
Quantitative:
1) Understanding what the calculations required for the preparation of a solution are (using
written protocols).
2) Plotting data and interpreting graphed information.
These four goals are described in more detail in Appendix 3 and have been approved by the
Program. The workgroup suggested that these four concepts be used to model the assessment
process because they are among those amenable to assessment across the chemistry curriculum.
The idea is to evaluate how the understanding of the concept matures as the student progresses
through the curriculum. Informally, we have always made the claim that students “learn
something” by the time they graduate. Our intention then, is to provide direct evidence that this
is actually the case.
B. Assessment instruments for measuring the learning outcomes of our majors
We then considered how the learning goals would be assessed, which student populations would
be assessed, and at what stage(s) in their college education they would be assessed. Our thinking
has come full circle in at least one respect. We started out wanting to assess student learning in
our majors, but then broadened the effort to include a number of majors that we serve, such as
Biology, Environmental Studies and Marine Science. However, we are focusing once again on
our majors. One reason is to keep assessment manageable. Another is that our majors tend to be
very consistent about the sequence in which they take their courses. For example, a very limited
survey of the transcripts of 2004 and 2005 BCMB graduates shows that ~75% took CHEM I, II,
III and IV in sequence (while ~13% each took the I, IV, II, III and the I, II, IV, III sequences).
CHEM I and IV correspond to the first and second semester of general chemistry, while CHEM
II and III correspond to the first and second semester of organic chemistry. A third reason we
wish to focus on our majors is that it is easier to track the progress in student understanding when
following students who continue to take more chemistry courses, and these, of course, are our
majors.
As a way to monitor student learning, the assessment workgroup has decided to use embedded
questions to assess the progress of student understanding of the two qualitative and two
quantitative learning goals described earlier. Student understanding will be assessed as they
progress through the Chemistry Program curriculum. We wish to test the idea that multiple
exposures to a particular concept in different course contexts result in a better understanding of
the concept. The idea is to assess global programmatic influences (rather than individual faculty
influences) on students. For instance, questions about plotting data and interpreting graphs will
be placed in the CHEM I → CHEM II → CHEM IV → BIOCHEM → BLM (Biochemistry
Laboratory Methods) course sequence, because it is in these courses where students are plotting
data and interpreting graphs. Furthermore, the questions will be course-specific, with
expectations higher in the more advanced courses.
These instruments were discussed by the workgroup during the summer and early fall of 2006.
We have asked all interested faculty to identify candidate questions in their tests for this purpose.
At this point we are not concerned with format, just with identifying questions that meet the spirit of
the assessment, although we did have detailed discussions on how best to construct these
questions. As a way to assuage any fears, we have pointed out that whether a student does well
(or poorly) on an embedded question while in a particular class (i.e. with a particular teacher) is
not what should concern the Program. Rather, what matters is whether students show consistent
progress over time in their understanding of the concept and hopefully by the time they graduate.
Again, what is being addressed is the influence of the program as a whole. The hypothesis is that
exposure in different contexts (i.e. different courses) is the key to fully understanding a concept
or mastering a skill. If a particular student does not “get it” while in a particular course, they may
do so in the next course. Students have different ways of learning and faculty have different
ways of teaching, so it should be obvious that it is our joint effort (and not our isolated efforts)
that plays the major role in ensuring that our graduates are well prepared.
In order to make this experiment manageable we are planning to assess the learning exhibited by
cohorts of students (rather than by following individual students), and samples of students (rather
than all students).
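As a minimal sketch of how cohort-level results from such embedded questions might be tabulated across the course sequence (the file layout, column names, and goal label below are hypothetical placeholders, not the actual assessment records):

# Sketch: summarize embedded-question performance, by course, for one learning
# goal; each row of the hypothetical CSV is one student's score (fraction of
# full credit) on an embedded question tagged with a course and a goal label.
import pandas as pd

sequence = ["CHEM I", "CHEM II", "CHEM IV", "BIOCHEM", "BLM"]
scores = pd.read_csv("embedded_question_scores.csv")   # hypothetical data file

graphing = scores[scores["goal"] == "graph_interpretation"]
summary = (graphing.groupby("course")["score"]
           .agg(["count", "mean", "median"])
           .reindex(sequence))
print(summary)   # does cohort understanding mature along the sequence?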
During Fall 2006 and Spring 2007, workgroup members have been testing such embedded
question instruments to assess two learning goals that the program identified as essential: 1)
Understanding what chemical formulas mean and how to properly represent their chemical
structure (a qualitative goal), and 2) Plotting and interpreting graphed information (a quantitative
goal). Data from tests in CHEM I lab, CHEM II lecture, and Biochemistry lecture have been
collected and analyzed. Additional data will be collected and analyzed by the end of the spring
2007 semester. Indeed, all of the CHEM program faculty have been invited to participate to
increase the course coverage and example questions. We will then evaluate how effective the
instrument is at identifying proficiencies or deficiencies in student learning. Our expectations are
that the results from such assessments will guide how we modify our teaching to address any
deficiencies we may identify in student learning outcomes.
C. Earlier/Current assessment of student learning efforts
A number of us (Olsen, Hallock-Waters and Rogerson) are already experimenting with several
American Chemical Society standardized tests (Toledo, First-term and Two-term exams) for
measuring learning outcomes in general chemistry. We are assessing skills and knowledge in a
number of areas at the beginning and end of CHEM I (single course analysis), or the CHEM I,
IV sequence (combined course analysis). In the latter case, we are trying to determine the degree
to which students taking CHEM IV have retained any understanding from CHEM I. This work is
experimental in nature and aimed at determining whether these instruments are appropriate for
basic assessment of learning outcomes in general chemistry.
One critical issue for the Chemistry Program has been freshman performance in CHEM I, the
very first chemistry course students take at Stockton, and very likely the one that leaves a lasting
impression on students concerning our Program. We have noticed (as others have done at other
institutions) a significant withdrawal frequency reflecting a substantial fraction of students
struggling with college-level chemistry. Our concern translated into a study that we carried out
over several semesters during the 2002 and 2003 academic years involving most of the CHEM I
section instructors.
The study was led by Bob Olsen and Brian Rogerson, the results of which were presented at the
2004 Day of Scholarship by Dr. Olsen. We asked the question: Can success in CHEM I be
predicted? We attempted to answer this question by administering the Toledo Chemistry
Placement Examination during the first week of class. It is a standardized test designed by the
American Chemical Society, which consists of 20 questions on general mathematics, 20
questions on general chemical knowledge and another 20 questions on specific chemical
knowledge. Student background is assumed to include one year of high school chemistry and one
year of high school algebra. Not surprisingly, students tend to perform better on the general
rather than the specific chemical knowledge sections. However, what interested us was their
performance in the general mathematics section. When distributions of the student Toledo math
scores were plotted versus their final grades for the course, a trend suggesting a link between
quantitative skills and success in the CHEM I course became evident. Such a correlation was
also evident, but less striking when distributions of student math SAT scores were plotted versus
their final grades.
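A minimal sketch of how the relationship between Toledo math sub-scores and final grades might be summarized, assuming a hypothetical per-student file (the file name, column names, and grade-point mapping below are illustrative placeholders, not the study's actual data):

# Sketch: relate Toledo math sub-scores to CHEM I final grades (hypothetical data).
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("chem1_toledo.csv")                    # one row per student
grade_points = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0, "W": 0.0}
df["grade_points"] = df["final_grade"].map(grade_points)

# Distribution of Toledo math scores within each final-grade group
print(df.groupby("final_grade")["toledo_math"].describe()[["count", "mean", "50%"]])

# Rank correlation as a single summary of the trend
rho, p = spearmanr(df["toledo_math"], df["grade_points"], nan_policy="omit")
print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")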
As we proceed with our assessment of student learning plan, it may be possible (with further
study) to use the Toledo math scores to identify students at risk in CHEM I. Such early
intervention may help put these students on a path that leads to more favorable learning
outcomes in CHEM I and future college chemistry courses.
D. Additional learning outcomes we wish to assess
We are also considering assessing student proficiencies at the senior research project/internship
level. We would like to develop a common assessment rubric that will measure student
understanding of the research experience in their written theses and oral presentations. Among
the desired learning outcomes we wish to measure are hypothesis formulation, understanding of
project significance, data interpretation, grasp of relevant data, awareness of related literature,
and understanding of the application of basic biological/chemical principles in their research
projects. We hope this will allow us to introduce some consistency into the research experience
particularly when it comes to assessing intramural versus extramural student projects. The plan
would be to have research mentors/sponsors meet at the end of each semester to discuss the
rubric scores and determine which goals require greater attention. Modifications would then be
introduced into our teaching of subsequent research students to remedy any past outcome
deficiencies.
There are also non-curricular instructional goals to consider (adapted from Angelo and Cross’
Teaching Goals Inventory and listed in Appendix 4) such as developing a commitment to
accurate work and improving oral presentation skills. Some of the listed goals will be readily
amenable to assessment if instruments are designed or chosen properly for the qualitative and
quantitative goals listed above. Other goals may require their own assessment instruments.
E. How do we define success?
While the Program has not decided what will constitute a “satisfactory” percentage of students in
terms of meeting our learning goals (or other measured outcomes), the Middle States assessment
booklet “Options and Resources” can serve as a guide. The Teaching Goals Inventory (page 23)
suggests a rating scale that could be used for the goals chosen by the Program:
Essential: A goal you always/nearly always try to achieve (76% - 100% of the time)
Very Important: A goal you very often try to achieve (51% - 75% of the time)
Important: A goal you sometimes try to achieve (26% - 50% of the time)
Unimportant: A goal you rarely try to achieve (1% - 25% of the time)
Not Applicable: A goal you never try to achieve
If writing a good scientific manuscript is identified as an essential program goal, or if a particular
concept in the chemistry curriculum is identified as an essential program goal, we could ask if it
is being achieved > 75% of the time. The Program, of course, could come up with its own
guidelines and definitions of success. Middle States allows for this, and it is our prerogative.
Also, we know that a number of our CHEM and BCMB graduates go on to graduate school.
Others get jobs in academia or industry as lab technicians or research assistants. If graduate
school acceptance and placement in field-related jobs is identified as an essential program goal,
we can ask if it is being achieved > 75% of the time. In those cases where graduate school
acceptance requires a good score on an examination, it could be argued that entrance exam
performance is connected to what they learned at Stockton. In contrast, gaining employment says
little, if anything, about any student learning that may have occurred while at Stockton.
Employer satisfaction surveys also do not address what students learned while at Stockton. More
direct measures are necessary to determine whether learning is taking place. This is why we are
developing the embedded question assessment instrument (described above). If assessment
results indicate that certain learning goals are not being met, the program will introduce
curricular changes in an attempt to correct any such deficits in future student cohorts.
F. Obstacles to assessment of non-majors
It is useful to be reminded of what (and how many) chemistry courses Stockton students take.
The list below also highlights the broad student population that we serve (our majors are marked
"(Majors)" below). The Chemistry Program teaching responsibilities affect the following numbers of
Stockton graduates: ~10 chemistry majors/year, ~15 BCMB majors/year, ~120 biology
majors/year, ~30 environmental studies majors/year and ~ 30 marine science majors/year. The
challenge we face in assessing student learning in such a broad student population is significant,
mostly because non-majors take a limited number of chemistry courses and in many cases, can
do so at any time during their tenure at Stockton (i.e. as early as their freshman year or as late as
their senior year).
BCMB (Majors)
CHEM I, II, III, IV, Biochem, Biochem Lab Meth., BCMB 4800 (Senior research)
Survey of Inst., Adv. Biochem., P. Chem
BIOLOGY (Biotech track)
CHEM I, II, III, IV, Biochem (Survey of Inst. and Biochem. Lab Meth.)
BIOLOGY (General/Integrative track)
CHEM I, II, IV, CHEM III or Biochem
BIOLOGY (Pre-med track)
CHEM I, II, III, IV, Biochem (Survey of Inst. and Biochem. Lab. Meth.)
BIOLOGY (Pre-PT, and MPT-accepted)
CHEM I, II
CHEMISTRY (Standard track, ACS) (Majors)
CHEM I, II, III, IV, Inorg, Org. Lab., Lab. Meth. I, II, P. Chem I, II, CHEM 4800 (Senior
research), Biochem
CHEMISTRY (Environmental track) (Majors)
CHEM I, II, III, IV, Envl, Lab. Meth. I, II, Atmospheric, CHEM 4800 (Senior research)
ENVIRONMENTAL STUDIES
CHEM I, CHEM II or CHEM IV
GEOLOGY
CHEM I, CHEM II or CHEM IV
MARINE SCIENCE (Marine Biology, Marine Resource Management tracks)
CHEM I, II
MARINE SCIENCE (Oceanography track)
CHEM I, IV
PUBLIC HEALTH
CHEM I
Chemistry Assessment: Biochemistry
Biochemistry is a class that cuts across traditional disciplines of chemistry and biology.
Students in the class have taken prerequisites in chemistry as well as biology. It was observed
that students thought of some molecules as belonging to biology and others to
chemistry. In particular, students tended to regard DNA and genes as the realm of biology
and proteins as the realm of chemistry. Semester after semester, the
students scored poorly on questions that asked them to relate a protein to its gene.
In order to address this, two changes were made in the class. The first involved the use of
articles that were related to class material but not specifically covered in class. For example,
one of the topics in Biochemistry involves serine proteases. An assigned article describes the
link between one such serine protease and the role it plays in Alzheimer's disease. The article
described both the gene involved and the role of the protein it encodes, and how this knowledge
led to potential medicines.
Another change in Biochemistry is the use of presentations. Students were put into groups and
assigned topics which often dealt with various diseases. Students were asked about the
defective protein in the diseases as well as what approaches can be used to treat them. Some
treatments aim to replace or overcome the loss of the protein and others aim to add a normal
version of the gene. As with the articles, the questions asked about these diseases cut
across the biology and chemistry disciplines. It has become important for students to master
this understanding of the relationship between the protein and the gene now that entire
genomes have been sequenced and efforts are underway to identify all of the proteins they encode.
Since the articles and the presentations have been introduced, all students have improved on
those questions that cut across disciplines and address proteins and genes. The
overall average score on these questions is now similar to that for other material.
Chemistry Assessment use: CHEM I (CHEM 2110, Chemistry I, General Principles)
The decision by the CHEM I/IV workgroup (six faculty members) to use the WileyPLUS online
homework system is an example of how a group of faculty put assessment data to work. There
was a recognition that students were not engaged with the material. Student surveys revealed
that a substantial proportion of them were spending very little time studying for the course and
were not bothering to do homework (most instructors assigned it, but did not grade it). This
lack of engagement was reflected in their quiz and exam grades. When one of the instructors in
this group started doing daily (non-graded) in-class assessments, he noticed a clear drop in
withdrawal frequency, but unfortunately only a very modest improvement in academic
performance at the low end of the grade scale (an increase in Cs and Ds). This was very
disappointing, but reviewers argued that the in-class assessments were "helping" the marginal
students because they were serving as a partial substitute “for the study routine that is the
staple of the more able student”. This in-class assessment research was presented at the 5th
Annual Lilly Conference on College and University Teaching (2001) and published in the Journal
of Chemical Education (J. Chem. Ed. 2003, 80, 160-164).
Members of the CHEM I/IV workgroup always talked about the lackluster performance of our
CHEM I students. Evidence for a lack of student engagement was mounting and so the group
implemented the WileyPLUS online homework system, which is gaining widespread use at
other institutions.
During the three semesters since the implementation of this tool, the workgroup surveyed
CHEM I students about their perceptions of WileyPLUS. Students report that they are more
engaged and that the steady regimen of homework helps reinforce the material. This is
consistent with the improved scores that one instructor is getting on his final exam, a
standardized American Chemical Society (ACS) test. The ACS test measures the proficiency of
the students who did not withdraw from the class.
Many questions remain to be addressed. For example, whether the ACS data is generally true
for all CHEM I instructors (not all instructors use this tool as the final exam) and whether class
demographics have an impact (the freshman/non-freshman ratio varies from section to
section). The impact of class module and whether it was a fall or spring offering is slowly
averaging itself out: the instructor in question has taught CHEM I at many different times, in all
kinds of rooms, in both fall and spring.
Use of WileyPLUS correlates with an improvement in the median score on a standardized test
[Figure: ACS Test Results (CHEM I, n=233), No WileyPLUS (Fall01, Spr02, Fall03, Spr04, Spr06, Fall07, Spr08). Histogram of number of students versus number of correct answers (out of 70), with class and national medians indicated.]
[Figure: ACS Test Results (CHEM I, n=86), With WileyPLUS (Fall08, Spr09, Fall09). Histogram of number of students versus number of correct answers (out of 70), with class and national medians indicated.]
Chemistry Assessment use: Physical Chemistry
When the instructor for Physical Chemistry I taught it for the first time in Fall 2007 he noticed
that students were having a very hard time with the mathematics of physical chemistry. This
was true both of the mechanics of calculus (i.e. how to take an integral or derivative) and of
the conceptual understanding of the calculus (what exactly does a derivative mean). He
observed this through class discussion. He was spending more class time than he had expected
going over material from Calculus I and when asked, students did not understand what these
mathematical observations meant in a chemistry context. In addition, he noticed on
homeworks and exams that a major obstacle to understanding the chemistry was the math,
and students were not performing well.
To address this situation, the instructor applied for and received funding and training through
the Summer Tech Academy to develop mathematics tutorials for Physical Chemistry students.
The instructor prepared a series of five video podcasts (available on the course blog) that had
several goals:
1. Help students with the mathematics so the underlying chemistry concepts are clear
2. Develop an understanding of both the concepts of calculus and the mechanics
3. Use online tutorials to cover the math topics outside of class and on-demand
4. Include problem solving strategies whenever possible
The tutorials outlined the mechanics that students should have developed in their previous calculus
courses as well as what these things meant in the context of physical chemistry (conceptual
understanding).
In Fall 2008 (the first semester with the podcasts) the instructor administered a pre- and post-semester math skills inventory to assess the impact of the tutorials on the students'
understanding of both the conceptual ideas and the actual mechanics. An increase in most
areas was observed (both conceptual and mechanical), especially in the area of partial
derivatives (which is a new concept not covered in the prerequisite calculus classes).
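A minimal sketch of how such a pre/post comparison might be analyzed, assuming a hypothetical inventory file with one row per student per topic (the file name, column names, and topic labels are illustrative placeholders, not the instructor's actual inventory):

# Sketch: paired pre/post comparison of math-skills inventory scores by topic.
import pandas as pd
from scipy.stats import wilcoxon

inv = pd.read_csv("pchem_math_inventory.csv")   # hypothetical: columns topic, pre, post
for topic, grp in inv.groupby("topic"):
    gain = grp["post"] - grp["pre"]
    stat, p = wilcoxon(grp["post"], grp["pre"])  # paired, nonparametric test
    print(f"{topic}: mean gain {gain.mean():.2f} (n = {len(grp)}, p = {p:.3f})")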
The instructor shared the results of this work in a seminar entitled “Screencast tutorials in
mathematics and problem solving for physical chemistry” at the 2009 Day of Scholarship and at the
2010 Biennial Conference on Chemical Education in Denton, TX.
Communication Studies Program Assessment of Student Learning
Introduction
As a collective faculty, we have not conducted any formal research regarding student learning;
however, numerous Communication majors have achieved notable accomplishments, both
inside and outside of the college. We believe that much of what we do in the Communication
Studies Program is based on theories of applied learning. We therefore conclude that our students
have been able to successfully apply what they have learned in their Communication courses
here at Stockton. Below is an incomplete list of some of those accomplishments. We can also
produce copies of some of this material if requested.
Student Presentations
In the spring of 2007, approximately 30 students from the Communication Seminar Course
attended the 11th Annual New Jersey Communication Association Conference, held at Kean
University. Approximately 20 of those students presented on a panel entitled, “Communication
and Popular Culture.” Presentation titles included: MTV and its Impact on Music, Reality
Television, and Youth; The Beatles Impact on Popular Music; The McDonald’s Corporation and
its Impact on Family Interaction; The Impact of AOL Instant Messaging on Interpersonal
Communication; and The Impact of Cell Phones on Interpersonal Communication.
Jobs in the Field & Graduate Programs – 1998 – 2010
1998
• Andrea Gabrick (Fall 1998, Program Distinction, Magna Cum Laude) was offered a full-time
position in a media organization in Arlington, Virginia before she had completed the
internship there.
1999
• Janel Keller (Spring 1999, Program Distinction) was offered a part-time position after her
internship at Shore Memorial Hospital in Somers Point. The intern site created the job for her
as they were very pleased with her internship performance.
• Susan Evrard (Spring 1999, Program Distinction), who was a senior reporter and editor for
ARGO, was offered a reporter position at the Asbury Park Press. According to Allison Garvey
of AP Press, “With each assignment Sue exhibited a professionalism and curiosity that signal
a real calling to a career as an outstanding beat reporter.”
• Amanda Keyes (Spring 1999) became an assistant manager at Unitel-Wireless
Communications Systems in Egg Harbor Township.
• Jennifer Wilson (Spring 1999) was offered a formal position for camera work at WMGM,
NBC, Channel 40.
• Kevin Barrett (Spring 1999) received a position for data management in Philadelphia.
• Joshua Gershenbaum (Spring 1999) was hired as a post-production technician at Superdupe
Recording Studio at Madison Ave., New York.
• Lori Rao (Spring 1999, Program Distinction) received a full-time position at Edison Media
Research.
• Andrea Levick (Fall 1999, Program Distinction) was asked to take on a part-time position
within the Programming Department of 100.7 WZXL because of her excellent work in data
management and public relations.
• Debra Bessetti became a part-time copywriter for Equity Communications after her
internship.
• Marc Strozyk was asked to stay at Ginn Computer Company, Ocean City, with a starting
salary of $800 a week.
• Regina Higbee was offered a good position at Bell Atlantic Network Services in South Plainfield.
• Bethany Farms was accepted onto the production team of a Philadelphia TV station.
• Matthew David Finelli (Fall 1999) accepted a job at a local advertising agency.
• Tim Campbell (Fall 1999) got a job at TNS as a part-time cameraman.
2000
• Anthony Fiumara was awarded a full-time job at his internship company with an annual
salary of $38,000.
• Pamela Reistle received a full-time job offer from her internship company Compas Inc. in
Cherry Hill.
• Frank Tedesco, who completed his internship at WMGM, NBC, Channel 40 in 1999, received
a full-time job offer as a radiographer.
• Jennifer Wilson also got a video production staff position after her internship at WMGM.
• Shawn Rody (Fall 2000, Program Distinction) was hired for a TV production position in
Manhattan after he completed his internship at Cronkite Productions Inc. in New York.
• Kathryn Player (Spring 2000, Program Distinction, Summa Cum Laude) received admission
notices with financial scholarship from several prestigious universities. She chose to complete her
graduate study in Sports Management at Temple University with a full assistantship.
2001
• Wesley Schnell (Spring 2001, Program Distinction) played a leading role in the Communication
Society and maintained a GPA of 3.9. He received a number of scholarships and other honors
during his studies at Stockton and was listed in Who's Who. With strong recommendations
from James Shen and Joe Borrell, he began his internship at the CNN headquarters
in Atlanta, Georgia, and completed it in Washington, D.C., working in political communication and
congressional reporting.
• Llewelyn Glover (Spring 2001, Program Distinction), a minority student, did extremely well in
his communication and theatre studies. With reference letters from James and others, he
was admitted with a scholarship to the film school of Chapman University in California.
• Elizabeth Makoski completed her internship at the New Jersey Motion Picture and Television
Commission in Newark, NJ. Her media-related experience and computer skills won her a
position as a manager associate at Commerce Bank before she walked at
commencement. Her starting salary was $33,000.
• Right after her graduation Kari Biehler was hired as a commercial account executive by
Pitney Bowes, a Fortune 500 company. She impressed the employer with her solid computer
application skills, and the marketing and media experience she obtained from her internship
at the Press of Atlantic City.
• Christopher Nagle landed a DJ job on the strength of his internship experience at K Rock 98.7 & The Coast
WCZT 94.3.
• Guy Zompa got his PR job in Absecon before graduation.
• Nicole Muscarella impressed her Manhattan supervisor with her oral, written, and computer skills
and organizational insight during her internship at Koch Entertainment on Broadway, New
York. She was offered a formal position at $41,000 before she finished her degree
studies at Stockton.
• When Lisa Easter completed her internship at Atlantic Cape Community College, she was
asked to take an administrative position with a salary in the mid $30,000s. She instead found a better
position at Slack Inc. in Deptford as a marketing research assistant.
• Nick DeLuca found a corporate position soon after his internship at KYW.
• Christina Soffer (Fall 2001, Program Distinction) built a strong performance record during her
internship at a real estate business in northern New Jersey. She received a job offer from WHD
(Whole Home Direct) at $35,000 with full benefits right after her graduation.
2002
• Brian Garnett interned at 102.7 WJSE, Somers Point, NJ, the previous summer. Because of his excellent
work performance, he accepted a job offer from the radio station before his graduation from
Stockton.
• Kelly Washart (Spring 2002, Program Distinction) was offered a position as PR Assistant at
Parker & Partners Marketing Resources, L.L.C., after her internship performance impressed
the company supervisors.
• Damon Scalfaro received a job offer from the Australian Trade Commission, with a salary in the low $30,000s,
after he completed his internship at the Embassy of Australia in Washington, D.C.
• Kim Pinnix (Spring 2002, Program Distinction, Cum Laude) got a full-time position of
marketing assistant at Seashore Gardens Living Center because of her internship experience
at American Society for Engineering Education in Washington D.C.
• Suzanne Wavro (Spring 2002, Program Distinction, Cum Laude, Transfer Scholar
Award, Antoinette Bigel Studies Endowed Scholarship) was accepted into the Stockton Education
Program.
• Lisa Boccuti (Fall 2002, Program Distinction, Cum Laude) accepted a job as a marketing
associate at a pharmaceutical company in South Jersey.
• Katy Smith (Fall 2002, Program Distinction, Magna Cum Laude) received a job offer at
Access Response, Inc., an advertising company in Toms River, on the strength of her successful
internship experience in PR. As a new alumna, she went on to offer internship positions to
communication majors in 2003.
2003
• Lapell Chapman finished his internship at Pleasantville High School, where he operated the
Media Distribution System, performed on-site camera operations for sports events and
other activities, and controlled the sound system for board of education meetings. His
excellent performance earned him a dual position as a K-12 teacher and director of the TV
station with a starting salary of $45,000.
• Kelly Vance became the promotions and marketing coordinator of the Atlantic City
Boardwalk Bullies Hockey Team when she had completed her internship in promotion and
advertising there.
• James Dunn was sent to WAYV 95.1, Atlantic City for his radio internship, and he got a job
offer there due to his outstanding work performance in radio production.
• Robyn Porreca's internship experience with WRAT 95.9 in South Belmar helped her gain a
position on the promotion support staff.
• Melissa Gonzalez, who impressed her internship supervisor at Millennium Radio Group with
her outstanding work in radio promotion, copywriting, and advertising, found a conversion
sales representative position for R.J. Reynolds with Wright Consulting in Philadelphia at
$32,000. She worked as a city manager with another communication alumna, Kelly Rich.
• Kari Spodofora had her internship at Gilmore & Monahan Attorneys & Law, Toms River.
Her positive attitude and work performance in legal communication management were greatly
appreciated by the lawyers. With her education at Stockton and the internship experience, she
was admitted into the Law School of Rutgers, Camden, in fall 2003.
• Jennifer Kane (Spring 2003, Program Distinction, Magna Cum Laude) found an
administrative job at Ocean County College. She was later accepted as a graduate student in
the MAIT Program at Stockton.
• Sharon Giordano (Fall 2003, Program Distinction, Magna Cum Laude) was hired by Stockton
as Assistant Director for Community of Scholars in 2004.
• Stephanie Grossman was offered a job at the Government Affairs Office of the ExxonMobil
Corporation in Washington, D.C., in February 2003. Within six months she was promoted to
Congressional Staff Assistant, working with Exxon's lobbyists for the House and
Senate. In spring 2004, she was accepted into the Columbian Graduate School of Public
Administration at George Washington University.
2004
• Kelly Lenzi did her internship at the Miss America Organization in Atlantic City. Her
outstanding performance and communication capability won her a producer position before
she finished the seasonal work there.
• When Erin Foster finished her internship at ImageMaker, Brick, NJ, she got a job at Lanmark
Advertising in Eatontown in October 2003. She started as a marketing associate at $29K.
• Leah Kuchinsky got a job at AC Coin & Slot as their sales coordinator after she completed
her internship at Taj Mahal.
• Laurie Emhe did her internship at the Maryville Foundation in Franklinville, NJ, working in fundraising
and social investigation for Camden County Freeholder Chair Thomas Gurick. Her
independent research work and grant writing were considered efficient and professional,
“providing valuable information concerning capitalization of foundations.” In October 2003 she was
awarded a position as Human Resource Manager/Grant Writer for Maryville
Addiction Services (non-profit) in Williamston. Her starting salary was $37,500, to be
increased to $40,000 after three months.
• Esther Guzman did her internship at UniVision. Her positive attitude and work performance
in communication management were greatly appreciated by the Hispanic media organization.
She was offered a job position soon afterwards.
• Jessica Batha (Spring 2004, Program Distinction, Cum Laude), who did an advertising/PR
internship, found a job as an assistant to an Account Executive.
• Cindy Coppola, who received a very positive internship evaluation from the Miss America
Organization, was accepted by both California State University at Los Angeles and
California State University at Fullerton in fall 2004. She went to the latter for her graduate
study in Communication/Public Relations.
• Kim Troncone (Fall 2004, Program Distinction, Summa Cum Laude), with dual degrees in
Communication and Education, obtained a teaching position in a public school in South
Jersey.
2005
• Cannella Autumn, who interned at the American Cancer Society in Philadelphia, was hired as a
marketing sales associate at Fashion Biz, NYC, in fall 2005.
• Terrence Field got a job at Human Capital Management, Inc., Valley Forge, PA, an IT
consulting business, after completing the internship in spring 2005.
• Allison L. Venezio did her internship at KYW-TV 3, CBS, Philadelphia. She not only participated in
many production events but also aired a show that she directed and produced.
• Sarah Hamilton was admitted to the Master's Program in Journalism at Regent University, Virginia
Beach, VA.
2006
• Sharon Giordano and Bojan Zilovic were admitted into Stockton's MAIT Program in fall 2006.
• Barbara Knopp-Ayres was hired by Absegami High School as a Media Assistant.
• Lisa Diebner interned at WMCN DT-44 TV in Atlantic City and was hired as a
production assistant at Music Choice, New York City, on October 16, 2006.
• Scott Holden became a Staff Reporter for The Central Record in Medford, NJ (reported by e-mail, November 16, 2006).
• Lauren Shapiro received an admission notice from the graduate school of the Communication
Department of Johns Hopkins University.
• Brian Coleman, who interned in Washington D.C. in fall 2006, received a job offer as
Continuity of Operations Specialist for the Department of Defense, Washington
Headquarters Services. Nominated by the Mass Communications Advisor to be featured in
the e-newsletter that the Washington Center regularly sends to prospective interns, Brian was
asked to deliver a speech as the Mass Communications representative at the scheduled TWC
commencement on December 11, 2006.
2007
• Elizabeth M. Runner's application to the Peace Corps was accepted by the organization's
Administration Office in Washington, D.C.
• Vanya Kasakova, a spring 2007 communication graduate, was accepted into the MAIT Program.
• Jill Pokus, a communication graduate and Program Distinction honoree in May 2005, was
hired as Associate Center Manager and Marketing Manager of Kaplan Test Prep and
Admissions with a starting salary of $36,000.
• Communication senior Aja Graffa interned with CNN in Washington D.C. in fall 2007. Aja
received a job offer from CNN right after she completed her internship.
2008
• Darlene Dobkowski was admitted into the fall 2008 Graduate Program in Communication at
Emerson College.
• Jeff Schiffer was accepted into the Graduate School of Communication at Monmouth University
in spring 2008.
• Jaclyn Malley, a 2006 communication graduate and an associate producer at Red Films, New
York City, was admitted into the Graduate School of Corporate Communication, Monmouth
University, for fall 2008.
• Sherri Alexandersen, one of James's former advisees, who completed her MAIT studies at
Stockton with the honor of the Founders Award in May 2007, landed a job as a Multimedia
Designer for the aerospace firm Lockheed Martin in fall 2007.
• Kimberly Pinnix, another of James's former advisees, graduated from Drexel University with an
M.S. degree in Public Communication (Publishing track, GPA: 3.88).
• In 2008, at least five communication majors were working at the Press of Atlantic City
as full-time or part-time employees, including three recent graduates, Rachel Guevara,
Jennifer Hults, and Kelly Ashe, and the recipient of the Charles C. Reynolds Endowed
Journalism Scholarship, Brittany Grugan.
• Darlene Dobkowski, a fall 2007 graduate now studying at the graduate school of journalism
at Emerson College, received an internship position at www.boston.com.
2009
• Reyanna Turner was accepted at The University of Maryland, College Park, and at Towson
University.
• Joshua Hoftiezer was admitted into the fall 2009 graduate program (writing) of the
communications department at Rowan University.
• Graduating senior Jessica Grullon interviewed Governor Corzine during her fall 2009
internship with NBC40 (WMGM-TV). She received a very positive evaluation from the news
director for her media writing and Spanish-speaking skills.
• Justin Dolan was hired by MTV in New York City in 2009.
• Bojan Zilovic, a 2005 graduate, was hired by ACCC as an assistant professor of
communication in 2009.
• Jennifer Giardina, a summer 2009 graduate, received a position as a part-time Production
Assistant with NBC40 (WMGM-TV) on November 4, 2009.
• Aneta Soponska completed a post-graduation practicum at The New York Times, New York, in fall 2009.
She is now working full-time in Poland.
• Celeste George, a 2004 graduate, served as the art director of Artcraft Promotional Concepts
in Moorestown, New Jersey
2010
• In spring 2010, Christa Cycak interned with MTV in New York City. She took on many daily
responsibilities for production management projects at Comedy Central:
o Coordinated a number of producers and free-lance workers for TV shows
o Designed spreadsheets for nationwide contacts and crew travel
o Prepared financial packets for the accounting department
o Assisted with TV editing
o Assisted with different projects, including a new reality show in downtown New York
City
• During his internship with Atlantic Prevention Resources in fall 2009, William Walsh
helped produce a documentary supporting a ban on smoking in Atlantic City casinos.
• Lyndsey Watkins worked with the NJ Department of Environmental Protection in Leeds
Point. Her internship involved data management and web site improvement.
• Christy Barbetto interned at Fastenal in Hammonton, NJ, and received a manager's position
before she completed the internship in spring 2010.
• Jennifer Jade Lor performed an internship in the Southern New Jersey Office of U.S. Senator
Robert Menendez during the summer of 2010. She received a very positive evaluation for
“her effective communication and organization skills.”
• Emily Lingo did her internship at Clear Channel Communications in Philadelphia in the
summer of 2010. She updated the station's website, coordinated many events, and was
commended for her good communication management skills.
• Andrew Moore went through the WLFR Internship and is now one of the WLFR student
managers. He currently writes for the music magazine Origivation (http://www.origivation.com/).
Student Publications
• Thirteen Communication majors were published in China Daily during the 2009–2010
academic year. China Daily, the only national English-language newspaper in China,
is a prestigious publication with a daily circulation of more than 200,000 copies. It is
distributed in more than 150 countries and regions and is read by political leaders, corporate
executives, educators, and other elite groups around the world.
• Students in the class Writing for the Media (COMM 2103) have been publishing in the
Press of Atlantic City since 2005, with an average of five letters published each semester. Eighteen
letters were published in fall 2009 and spring 2010.
• Hristina Ninova produced about 200 bylined stories during her internship at the prestigious
Washington-based newspaper The Washington Examiner. She also ran a
column for that newspaper while she was working there.
• Five students were published in 21st Century, the education edition of China Daily.
Environmental Studies Assessment
I’ve attached the most recent material that we have submitted to the dean. At the most
recent meeting, the dean indicated that survey information from alumni or employers
was not what he was looking for regarding assessment. Currently, the program is
reviewing core courses, trying to agree on learning objectives. We are reluctant to
consider a senior-level examination, having done that in the past with very mixed results,
in part because our students select from a wide variety of electives.
Based on what our students seem capable of doing upon graduation through reviewing
their work, we’ve agreed to concentrate more on computer literacy in our core courses.
We’ve agreed to take our senior level course (Environmental Issues, ENVL 4300/4305)
and move the two sections closer together and provide more training in geographic
information systems such that all students will have some exposure to GIS upon
graduation.
In our 2000 level courses, we are now emphasizing using Microsoft Excel much more
heavily in our labs. The LO’s for one of the core courses (Physical Geography, ENVL
2100/2105) seem reasonably close among the various sections taught. There are
significant differences in some aspects of the LO’s for Ecological Principles, particularly
the lab. Jamie and I are a subcommittee trying to meld the LO’s more effectively.
Agreeing on the LO’s for the introductory level course seems more problematic at this
point. It’s a required course for ENVL majors, a course taken by EDUC students, and an
ASD course taken by a variety of students, some from NAMS but many from ECON and
BSNS. It's also been taught by many different faculty members and in some cases was
team-taught.
By our next Program meeting, we will present the LO’s for Ecological Principles and lab.
Each faculty member who has taught the Introductory level course will send me his/her
LO’s for that course, and we intend to produce a common core from that.
The following is the section on assessment from the most recent Coordinator’s Report.
According to Dr. Sonia Gonsalves, Environmental Science was one of the early
adopters of assessment and one of the earlier programs that used our data to
make changes in our curriculum. Changes in the Environmental Issues lab are
noted above. Other changes are discussed in previous Coordinators Reports or
Assessment Summaries.
This year, we discussed our progress with Dr. Gonsalves and our plans for
obtaining more data. We designed a brief telephone survey that we intend to
use to interview employers and perhaps, graduate school advisors. Our
intention was to have members of the faculty each contact one or two people
per week, using a list of some of the firms that have hired our graduates as well
as consulting firms and governmental agencies in the mid-Atlantic region.
Unfortunately, pressures on time have prevented us from beginning these
interviews. We intend to begin these surveys in August by having the
Coordinator contact subjects and will try again to use our faculty to contact
employers.
Several of us have adopted grading rubrics for assignments and papers. These
are provided to students, enabling them to complete assignments knowing
exactly what the standards are for success.
I can supply rubrics if you’re interested. One is used to grade student laboratory
reports.
2009 Assessment Summary for the ENVL Program
Michael Geller, Coordinator
Environmental Science Program
A. Measures used in conducting assessment to date: Alumni Survey—collecting
information from alumni focusing on
1. How well we prepare students in terms of skills, including
a. Writing
b. Oral communication
c. Quantitative skills
d. Working in groups
2. How well our Program prepares them in the different tracks (Biological
Resources, Environmental Chemistry, Social Science and Planning, GIS, Soils and
Water Resources)
3. The utility of the Senior Project/Internship
4. The value of specific courses taught within the Program
B. Implementation of Changes
1. Moved Environmental Issues course to senior year as a capstone course
a. This move was in part based on student comments in our survey indicating
the importance of group work
b. Implementation of Group Projects in Environmental Issues to help students
integrate college work with real world problems
c. Better preparation for becoming professionals to include resume building
2. Began collecting student portfolios to evaluate success in enhancing students’
writing skills
3. Introduction of more two credit courses to broaden the subjects students study
in our major and increased the number of courses students must take at the
3000/4000 level
4. Survey indicated the importance of writing; as a result, faculty are requiring more
writing assignments in Core courses (Introduction to ENVL, Physical Geography,
and Ecological Principles)
5. Move toward a more coordinated approach to Senior Projects and Internships
(survey indicated this was a valuable experience and we want to strengthen
that). Changes to include having the students’ paper or poster include
a. Discussion of the purpose of the organization (i.e., NGO, governmental
agency, or private company) for interns, the laws and regulations relevant to
the organization, and the student’s role in the organization along with the
environmental problem addressed
b. Discussion of methods and techniques learned
c. Summary of work completed
d. Critical evaluation of organization, i.e., how well this organization addresses
the environmental problem and what changes to the organization the intern
would recommend that would enhance this
C. Perceived problems with Assessment using the Alumni Survey
1. Hard to replicate with a good sample size; statistically unreliable
2. The survey is still worth doing every two or three years
D. Alternatives to Survey
1. Considering using a pre-test/post-test, administering these in the Introductory
course and again in the senior capstone course, Environmental Issues, to
measure how well students master
a. common content of the program and
b. skills needed to succeed as an environmental professional
2. Develop consistent grading rubrics for papers and essay or short-answer
questions, and where possible provide these to students in the course
syllabi
3. Work to establish common learning goals across the Core courses where these
are lacking
History Program Assessment of Student Learning
I. Senior Thesis
a. Assessment Measures
i. The program engaged an outside evaluator, Gail Savage of St. Mary's College of
Maryland, to read a sampling of senior thesis papers. Her task was to judge the
effectiveness of the History Program's conceptual history teaching in training
students in (1) hypothesis formation, (2) historiography, (3) research, (4) inference
construction, (5) significance, and (6) writing clarity.
ii. All theses completed in the spring were collected in alphabetical order by student
name. A random sampling, every other thesis in the collection, was then sent to
Gail Savage for assessment.
b. Frequency
i. Theses from spring 2006 were sent for assessment in summer 2006.
ii. Theses from spring 2008 were sent for assessment in summer 2008.
iii. Theses from spring 2011 will be sent for assessment in the summer of 2011. The one-year
delay is to assess whether the rubric matches the stated goals of Richard Stockton College.
c. Precipitating Factors
i. Prior to 2005, the History program shifted the focus of 4000-level courses to
conceptual history seminars on Belief, Identity, Power, and Nature, and instituted a
required senior thesis for all majors. An outside evaluator for the senior thesis
was thought an appropriate assessment measure for the effectiveness of
conceptual history teaching.
d. Findings
i. 2006 Savage Report
1. Overall assessment: some of the better theses reflected the themes of
“belief” and “identity” rather than “power” and “nature.” The report expressed
concern that a single thesis advisor was responsible for 52 theses.
2. Recommendations:
a. Strengthen hypothesis formation
b. Focus on scholarly journals to strengthen historiography
c. Link inferences to strengthen argumentation
d. Mandate CMS as style for footnotes and bibliography
e. Incorporate reference to thematic concepts in assessment rubric
3. Action:
a. Reduced size of Historical Methods and Thesis Seminar by
increasing advising to two professors
b. Mandated CMS formatting for footnotes and bibliography
c. Altered rubric to better incorporate thematic concepts
ii. 2008 Savage Report
1. Overall assessment: improved quality of the theses resulting from
implementing previous recommendations and re-organization of the
program.
2. Recommendations:
a. Special attention to hypothesis formation to help students
distinguish between topic and hypothesis
b. Special attention to linking inferences to nature of the source
material to strengthen argumentation
3. Action:
a. Re-evaluation of core skills and development of a checklist
focusing on acquisition at specific class levels
b. Discussed creation of possible ISH (Introductory Seminar in
History) classes based on model of current Transfer Seminar
courses
c. Discussed if current rubric matches goals of Stockton
d. Discussed possible need to change outside evaluator for different
perspective
II. CLA
a. Assessment Measures
i. Campus wide proctored essay exam given to seniors in the spring semester.
Participation is voluntary. Skills assessed include identifying issues and
information, organizing information, making a persuasive argument or an
objective analysis about an issue, presenting clear and well thought out
recommendations for a course of action, and explaining the basis and rationale
for these recommendations.
ii. Reports are based on sample size of 100 seniors.
iii. 14 history majors participated in 2008
iv. 19 history majors participated in 2010
b. Frequency
i. Spring 2008
ii. Spring 2010
c. Precipitating Factors
i. (Information not available as to why Stockton uses the CLA)
d. Findings
i. Spring 2008
1. (data not available)
ii. Spring 2010
1. History students spent an average of 40 minutes on the exam
2. Average GPA of test takers was 3.33
3. Performance task score = 1149; analytic writing score = 1257
4. Stockton senior history majors scored slightly better than the average Stockton
student on the performance task.
5. They scored slightly worse than the national average on the
performance task.
6. They scored much higher, on average, than seniors at Stockton on
analytic writing.
7. They scored slightly better than the national average on analytic writing.
III. IDEA
a. Assessment Measures
i. Mandatory assessment for all faculty, who provide a relevance rating (Essential,
Important, Minor or no importance) for each of 12 learning objectives included
on the IDEA form.
ii. Students rate their progress on these objectives and provide additional
information about teaching methods, course characteristics, and their own
characteristics.
iii. Scores are released to individual faculty.
b. Frequency
i. Every course for untenured faculty
ii. Selected courses for tenured faculty
c. Precipitating Factors
i. (Information not available as to why Stockton uses the IDEA)
d. Findings
i. History has requested a compilation of IDEA scores for all 4000 level courses for
the last three years as another method of evaluating conceptual history teaching.
ii. Upon receipt of IDEA scores, a program meeting will be scheduled to discuss
results and develop an action plan.
IV. Self-Assessment for Individual Classes
a. Assessment Measures
i. Prof. Rosner assessed classroom use of Google applications including gmail,
Google docs, Google calendar, iGoogle, and Google groups. A questionnaire was
given to all three of her classes in Fall ’09.
b. Frequency
i. One time assessment in Fall ’09.
c. Precipitating Factors
i. Prof. Rosner attended a Tech Academy workshop in the summer of 2009.
ii. Prof. Rosner sought an easier method of dialoguing with the class that could be
preserved for future reference.
iii. Prof. Rosner wanted to address an issue raised by a survey of graduating
Stockton students indicating they had not improved their knowledge of
technology while at Stockton.
d. Findings
i. 95% thought Google docs were very useful for studying for exams
ii. 25% made specific recommendation that it would be more useful if all students
participated in answering sample questions.
iii. 74% found a gmail account useful for the course
iv. 60% used Google calendar for the syllabus, often citing that it was too cumbersome
to maintain
v. 36% used iGoogle as a home page
vi. A total of 8 students used Google groups for online discussion, citing a preference
for classroom discussion in HIST and GAH courses
e. Action
i. All students were required to answer sample questions in spring of 2010.
ii. Google calendar was dropped from use in spring of 2010.
iii. The use of iGoogle was omitted in spring of 2010.
Assessment Plan
Department of History
for History majors and for Social Studies majors with a primary concentration in History
OVERVIEW OF PLAN
The History Department has adopted a qualitative assessment plan that seeks to
measure how well a given cohort of majors has mastered a defined set of skills and
abilities (see below). Assessment centers around a capstone experience in the Senior
Seminar in which students are expected to demonstrate the skills and abilities that
represent key outcomes expected of History majors and History concentrators in the
Social Studies program (the vast majority of Social Studies majors). The purpose of this
plan is to promote dialogue among history faculty addressing how well these skills and
abilities have been acquired by our majors and how instruction could be improved to
better achieve these outcomes.
Each semester, the instructor of the seminar will present a plan for assessing these
skills and abilities in their students. To do that, instructors will integrate course
requirements from their course syllabus with the department's list of skills and abilities,
showing which assessment tools will be used to assess the various skills and abilities
that all majors should acquire. Student mastery of these skills and abilities will then be
assessed during the course of the semester, eventuating in a written report evaluating
the overall skill level demonstrated by this cohort of students. The purpose of the report
is not to recapitulate assessment of individual students, but to record observations and
draw conclusions about the overall range and level of competency demonstrated by the
students as a group.
The attached documents represent a sample of how this plan will work, drawn from
the seminar Paul Harris taught in spring 1998.
SKILLS AND ABILITIES
HISTORY AND SOCIAL STUDIES MAJORS SHOULD HAVE
A. Reading Comprehension and Cognitive Skills
History and Social Studies majors should be able to:
• identify the main point or thesis in a piece of historical writing.
• analyze how authors develop their theses and support them with evidence.
• recognize and evaluate differences in historical interpretation among different authors.
B. Historical Thinking Skills
History and Social Studies majors should be able to:
• recognize potential sources of bias in historical writings.
• understand and interpret events in their appropriate historic context.
• understand and interpret relations of cause and effect and other sequential relations.
• understand the complexity of human motivations and appreciate cultural differences in patterns of behavior and ideation.
• synthesize a variety of evidence into a coherent and plausible account of events.
C. Research Skills
History and Social Studies majors should be able to:
• recognize the difference between primary and secondary sources, and understand the uses and importance of each type.
• select and refine an appropriate topic for a given assignment.
• identify a variety of different kinds of source materials that could shed light on a particular topic.
• use the library and various bibliographic aids to identify and locate different sources relevant to a particular topic.
• evaluate which of their sources are the most authoritative.
• compile and annotate a bibliography, and present it in proper format.
• conduct an oral history interview.
D. Written Communication Skills
History and Social Studies majors should be able to:
• formulate a thesis on the basis of insights gained from research.
• develop their thesis in an organized and logical progression.
• use appropriate evidence to support points.
• cite their sources properly.
• summarize points made in source materials, and make the connections between different points of view and their own.
• recognize the shortcomings of their evidence and anticipate possible objections.
• respond constructively to criticism and make appropriate revisions.
• write clear and grammatical prose.
• critically evaluate the work of other students.
E. Oral Communication Skills
History and Social Studies majors should be able to:
• respond clearly and thoughtfully to questions and comments in class discussion.
• draw upon and summarize reading materials in ways that address larger themes and issues.
• deliver an effective oral presentation.
• critically evaluate the work of other students.
F. Computer Literacy
History and Social Studies majors should be able to:
• produce a paper using word processing software.
• use e-mail.
• conduct research using the World Wide Web in addition to traditional source materials.
Assessment Report Spring 1998 Paul Harris
This report is based on the performance of nine students in History 401, the capstone seminar for History
majors and Social Studies majors with a primary concentration in History. Eight of the nine students were
either History or Social Studies majors. The topic was Religion and American History.
A. Reading Comprehension and Cognitive Skills
1. identify the main point or thesis in a piece of historical writing.
The listserv questions that students posted every week demonstrated that most students were very able
to identify key issues for discussion and to understand the main points of the readings. Research papers
also were very strong in using secondary sources in ways that demonstrated understanding of the
literature.
2. analyze how authors develop their theses and support them with evidence.
Questions posed on the listserv demonstrated a moderately good ability to pose critical challenges toward
the way authors argue and support their theses, and these critical abilities improved over the course of
the semester.
3. recognize and evaluate differences in historical interpretation among different authors.
This was a more difficult challenge for students. We often had to work fairly hard in class discussion to get
students to see how two authors’ views on related topics could be compared. Over the course of the
semester, students did improve markedly in their ability to make connections between different readings.
B. Historical Thinking Skills
1. recognize potential sources of bias in historical writings.
Students generally did not engage in critical discussions of historical literature in their written work.
However, their selection and use of sources generally showed awareness of potential bias.
2. understand and interpret events in their appropriate historic context.
The History and Social Studies majors in the seminar were clearly superior in this regard compared to the
non-major, although all of them could stand to improve.
3. understand and interpret relations of cause and effect and other sequential relations.
Papers contained some very perceptive remarks in this vein, but understanding the relationships and
connections between events and historical developments is a very high level reasoning skill that
undergraduates cannot be expected to master.
4. understand the complexity of human motivations and appreciate cultural differences in patterns
of behavior and ideation.
The seminar placed great emphasis on the diversity of religious cultures and expressions in American
history, and students seemed very receptive to that. I would characterize them as open-minded but
understandably limited in the development of historical empathy.
5. synthesize a variety of evidence into a coherent and plausible account of events.
I would say that one of the most pleasant surprises of the seminar for me was the ability students
demonstrated in writing clear and coherent historical narratives.
C. Research Skills
1. recognize the difference between primary and secondary sources, and understand the uses and
importance of each type.
One student admitted during the course of the semester that she did not understand the difference
between primary and secondary sources, although she is a bright and hard-working senior Social Studies
major. Students clearly do not get enough exposure to working with primary sources in their other History
courses, and this deficiency was very evident in their papers. Despite repeated urgings, probably the
single greatest deficiency of final papers was the limited use they made of primary source materials.
2. select and refine an appropriate topic for a given assignment.
We worked extensively on this throughout the semester, largely through regular individual conferences. It
was difficult to get students to refine their topics to the point where they might actually produce something
halfway original. Although few of the papers included real tangents, many of them remained rather
broadly conceived. This shortcoming is related to their discomfort working with primary sources. They
were clearly more at ease extracting general points from secondary sources.
3. identify a variety of different kinds of source materials that could shed light on a particular
topic.
A variety of different kinds of source materials did make their way into people’s bibliographies, although
this varied greatly depending on the initiative of the student and the nature of the topic.
4. use the library and various bibliographic aids to identify and locate different sources relevant to
a particular topic.
On the whole, students still tend to rely too heavily on simple database searches for books in the MSU
library.
5. evaluate which of their sources are the most authoritative.
The principal strength of the bibliographies students compiled on their research topics was the success
they demonstrated in identifying the most authoritative and important works in their area.
6. compile and annotate a bibliography, and present it in proper format.
Their bibliographies were also generally sound mechanically, and the annotations were better than I
expected, given that this was something new to them.
7. conduct an oral history interview.
None of the students used oral history for their papers, although one did a survey of relatives, inquiring
about their experiences growing up Catholic, and the results were fascinating.
D. Written Communication Skills
1. formulate a thesis on the basis of insights gained from research.
This was also something that we worked on extensively. Most of the papers ultimately did have a
reasonable thesis, which the students more or less stuck to over the course of their papers, but sustained
development of a single thesis for twenty pages is a very difficult task for undergraduates.
2. develop their thesis in an organized and logical progression.
Having students turn in a first draft was clearly useful in helping students to see where their papers did
not flow very well, although success in this area varied significantly from one student to the next.
3. use appropriate evidence to support points.
This is another area where the non-major in the seminar had much greater trouble than the majors.
Needless to say, all papers contained points that were not well supported, but only the one student used
evidence in an entirely inappropriate way.
4. cite their sources properly.
Majors still don’t grasp the fine points of citing sources, but they have come a long way from their
freshman years.
5. summarize points made in source materials, and make the connections between different
points of view and their own.
They were generally successful in summarizing, but not skilled in dealing with sources that did not share
their points of view.
6. recognize the shortcomings of their evidence and anticipate possible objections.
Anticipating possible objections to their arguments is a very high level thinking skill that few
undergraduates master.
7. respond constructively to criticism and make appropriate revisions.
The amount of revision that students made between their first draft and final paper varied tremendously,
but the overall results were disappointing. Students generally responded to specific criticisms without
fundamentally rethinking what they were doing, which is not surprising.
8. write clear and grammatical prose.
The policy of the History Department to require writing in every class was nowhere more in evidence than
in the general strength of students’ prose.
9. critically evaluate the work of other students.
We did not do this with written assignments.
E. Oral Communication Skills
1. respond clearly and thoughtfully to questions and comments in class discussion.
One of the clear weaknesses of students’ preparation for the seminar was in this area. Although it was a
small group of bright students, most remained reluctant to participate in class discussions. My sense is
that students lack confidence in their ability to "think on their feet" and develop ideas in give-and-take
exchanges with others, and this is a confidence that must be gained by experience. The most successful
discussions were those that took on a more conversational quality when students began to express
personal views. The least successful were those involving more formal analysis of reading materials. I
don’t think this was because students were not doing the reading, although they never do it as thoroughly
and carefully as one would like.
2. draw upon and summarize reading materials in ways that address larger themes and issues.
Although I received many perceptive questions on the listserv that demonstrated fairly good critical
reading skills, students were less able than I would have hoped at drawing upon reading materials in
discussion.
3. deliver an effective oral presentation.
Ability and comfort in this area varies a great deal, but the oral presentations on the whole were better
than I expected. Students were encouraged to use audio-visual aids, and several took the trouble to
develop hand-outs, transparencies, or timelines. Probably the greatest weakness of oral presentations
was not slowing down to explain difficult or obscure points.
4. critically evaluate the work of other students.
Comments that students made about others’ oral presentations were generally tactful, perceptive, and
appropriate.
F. Computer Literacy
1. produce a paper using word processing software.
None of the students seemed to have any great difficulty in this area.
2. use e-mail.
It took several weeks before most of the students really got into the rhythm of posting weekly discussion
questions on the course’s listserv. Part of the problem was that when we first went to the computer lab for
instruction in doing this assignment, the university server broke down in the middle of the lesson, which
created considerable uncertainty about how well this would work. It was some time before I felt
comfortable in trying to enforce the electronic postings very strictly. Otherwise, I would rate this exercise
as a big success.
3. conduct research using the World Wide Web in addition to traditional source materials.
Students clearly have varying levels of interest and comfort working with the World Wide Web in particular.
GENERAL RECOMMENDATIONS
To my mind, two things stand out from my experience in teaching this seminar that the department as a
whole would do well to address:
1. Students do not get enough experience working with primary sources in researching and writing
papers.
2. Students do not get enough experience participating in class discussion. They need to learn to think on
their feet and be more comfortable speaking in front of a group.
In the Laboratory
Using Pooled Data and Data Visualization To Introduce
Statistical Concepts in the General Chemistry Laboratory
As our department worked to strengthen the laboratory
component of the first-semester general chemistry course, we
decided to link several experiments more closely with conceptual
statistics as a unifying theme. Here I describe revising a standard
stoichiometry experiment by adapting graphical techniques
(1–5) associated with exploratory data analysis. It is the emphasis on graphical techniques that differentiates our approach
from other papers about statistics in the introductory laboratory
(6–8) that have appeared in the Journal.
The basic stoichiometry experiment is preparation of magnesium oxide by direct reaction of the elements (magnesium is
burned in air) and subsequent determination of its empirical
formula. We have found the inclination of students to report an
empirical formula consistent with an expected result rather than
with their data to be a persistent difficulty. For example, many
students would associate an O/Mg ratio (amounts are per unit
amount compound) of 0.845 with MgO rather than Mg4O3 or
Mg5O4. The instructor controls the list of reasonable formulas
in our graphical approach and can include discussion of how the
list was determined at his or her discretion.
Robert J. Olsen, Division of Natural and Mathematical Sciences, The Richard Stockton College of New Jersey, Pomona, NJ 08240-0195; Robert.Olsen@stockton.edu

Figure 1. Example number line for O/Mg = 0.845 (amounts indicated are per unit amount compound). The reference lines labeled to the right correspond to empirical formulas selected by the instructor as reasonable. Line segments emanating from the data point to the left are drawn assuming empirical formulas MgxOy with 1 ≤ x,y ≤ 5 are allowed; the nearest empirical formula is Mg5O4. The data point to the right illustrates the outcome if allowed empirical formulas are restricted to 1 ≤ x,y ≤ 4, in which case the nearest empirical formula is Mg4O3.
Method
Each student calculates the O/Mg ratio from his or her
data and then plots a point at the calculated O/Mg ratio on the
provided number line (see Figure 1). Next, each student draws
line segments outward from this point in both directions until
a labeled reference line is reached, thus locating the empirical
formula in best agreement with the data.
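The same locate-the-nearest-formula step can be expressed numerically. The sketch below is an illustration only, not the published procedure or its handouts; the sample masses, molar masses, and the instructor-approved formula list are assumptions chosen for the example.

```python
# Illustrative sketch only; not the published procedure or its spreadsheet.
# Computes the O/Mg mole ratio from a (hypothetical) mass of Mg burned and mass
# of oxide recovered, then finds the nearest instructor-approved empirical formula.

MOLAR_MASS_MG = 24.305  # g/mol
MOLAR_MASS_O = 15.999   # g/mol

def o_to_mg_ratio(mass_mg_g, mass_oxide_g):
    """Mole ratio n(O)/n(Mg) from the mass of Mg and of the oxide product."""
    mol_mg = mass_mg_g / MOLAR_MASS_MG
    mol_o = (mass_oxide_g - mass_mg_g) / MOLAR_MASS_O
    return mol_o / mol_mg

# Reference lines the instructor deems reasonable (formula label -> O/Mg value)
allowed_formulas = {"MgO": 1.000, "Mg5O4": 0.800, "Mg4O3": 0.750}

ratio = o_to_mg_ratio(0.243, 0.379)  # hypothetical student data (grams)
nearest = min(allowed_formulas, key=lambda f: abs(allowed_formulas[f] - ratio))
print(f"O/Mg = {ratio:.3f}; nearest allowed formula: {nearest}")
```

With these invented numbers the ratio comes out near 0.85, so the nearest allowed formula is Mg5O4 rather than MgO, which is exactly the situation the number-line exercise is designed to make visible.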
A student whose individual result leads to an empirical
formula other than MgO may be reluctant to report the formula
that is consistent with the data. This reluctance can be overcome
by moving beyond individual results, which we do by pooling
the class’s data and continuing the analysis with this larger data
set. Many recent papers (9–15) in this Journal have featured data
pooling in the introductory laboratory. To analyze the pooled
data, we use the same number line as before, although now the
data for an entire class is plotted as a 1-D scatterplot (see Figure
2). Both the central tendency and the dispersion of the data are
evident in a scatterplot.
Students repeat the steps they used to analyze their individual data—applying these steps to the mean of the class data—
and go on to attach confidence to this result by identifying the
number of allowed empirical formulas that fall in the interval
containing the middle 80% of the points. The only empirical
formula in this range in the scatterplot to the left is MgO, so
students in this section can say that they have found the empirical formula of magnesium oxide to be MgO with a reasonable
degree of confidence. By comparison, two additional empirical
formulas (Mg4O3 and Mg5O4) fall within this range in the scatterplot to the right, so students in this section must conclude
that they have not unambiguously determined the empirical formula of magnesium oxide.

Figure 2. Example scatterplots of pooled data. Points plotted as filled circles make up the middle 80% of the students' data measuring O/Mg ratios (ratio amounts are per unit amount compound). Points plotted as open circles are the smallest and largest 10% of the students' data measuring O/Mg ratios. The mean is plotted as a thick, dashed, horizontal line.
These two data sets highlight that imprecise data necessarily leads to imprecise conclusions, a lesson that we feel is
more valuable than confirming that magnesium oxide is MgO.
The instructor can exercise latitude regarding the choice of the
percentiles that determine the subset of points being analyzed.
Restricting the analysis to points between the 10th and 90th
percentiles balances retention of a reasonably large fraction
of the students’ data against elimination of extreme outliers,
whose presence can unduly increase the number of identified
empirical formulas.
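As a rough sketch of how the pooled-data step might be automated, the following assumes an invented set of class ratios and the same instructor-chosen formula list as above; it is not the spreadsheet described in the supporting material.

```python
# Illustrative sketch only; the class data below are invented.
import numpy as np

allowed_formulas = {"MgO": 1.000, "Mg5O4": 0.800, "Mg4O3": 0.750,
                    "Mg3O2": 0.667, "Mg5O3": 0.600}

class_ratios = np.array([0.97, 1.02, 0.99, 1.05, 0.93, 1.01,
                         0.88, 1.10, 0.98, 1.00, 0.95, 1.03])

low, high = np.percentile(class_ratios, [10, 90])   # middle 80% of the data
inside = [f for f, r in allowed_formulas.items() if low <= r <= high]

print(f"class mean O/Mg = {class_ratios.mean():.3f}")
print(f"middle 80% interval: [{low:.3f}, {high:.3f}]")
print(f"allowed formulas in the interval: {inside}")
# One formula in the interval -> the class can report it with reasonable confidence;
# more than one -> the pooled data do not single out an empirical formula.
```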
Summary
We have found that 1-D scatterplots are an effective way to
emphasize the central tendency and dispersion of a data set, two
notions basic to understanding its statistical properties. More
experienced students are engaged by this new angle rather than
feeling that they are repeating their high school work. At the
same time, the new aspects of the data analysis do not increase
the workload of the experiment significantly for the remaining
students because a visual, intuitive approach to statistics puts
them on a comparable footing with their more experienced
peers.
Approaching data analysis and interpretation through
data visualization converts the experiment from a conventional confirmatory format (i.e., verifying that the formula
of magnesium oxide is MgO) to a style akin to that of a discovery experiment. Appropriate customization of the basic
1-D scatterplot allows a similar graphical component to be
added readily to the data analysis and interpretation associated with other experiments typical of the general chemistry
laboratory. We include graphical data analysis in an experiment comparing the volume of water delivered by a beaker, a
graduated cylinder, and a pipette; in an experiment comparing
the acid-neutralizing capacity of two brands of antacid; and
in an experiment in which the gas constant R is determined.
We plan to include it in an experiment exploring Hess's law and
an experiment in which the equilibrium constant of a reaction
is measured.
Hazards
Magnesium burns with an extremely bright flame that can
cause permanent eye damage if it is viewed directly. Porcelain
crucibles should be handled with care throughout the experiment because they remain hot enough to cause serious burns
long after they are no longer glowing red.
Acknowledgments
I would like to thank Tom Olsen, Brian Rogerson, and
Simeen Sattar for helpful discussions and critical readings of
the manuscript; Justine Ciraolo, Doreen Goldberg, Ed Paul,
and Brian Rogerson for adopting a pooled-data approach to
this experiment in their laboratory sections; and the students
in the many sections of Chemistry 2115 who have done this
experiment as it has evolved to its present form.
Literature Cited
1. Cleveland, W. S. The Elements of Graphing Data, revised ed.;
Hobart Press: Summit, NJ, 1994.
2. Cleveland, W. S. Visualizing Data; Hobart Press: Summit, NJ,
1993.
3. Tufte, E. R. The Visual Display of Quantitative Information, 2nd
ed.; Graphics Press: Cheshire, CT, 2001.
4. Tufte, E. R. Envisioning Information; Graphics Press: Cheshire,
CT, 1990.
5. Tufte, E. R. Visual Explanations: Images and Quantities, Evidence
and Narrative; Graphics Press: Cheshire, CT, 1997.
6. Spencer, R. D. J. Chem. Educ. 1984, 61, 555–563.
7. Marino, F. J. Chem. Educ. 1988, 65, 445–446.
8. Salzsieder, J. C. J. Chem. Educ. 1995, 72, 623.
9. Ricci, R. W.; Ditzler, M. A. J. Chem. Educ. 1991, 68, 228–231.
10. Ricci, R. W.; Ditzler, M. A.; Jarret, R.; McMaster, P.; Herrick, R.
S. J. Chem. Educ. 1994, 71, 404–405.
11. Ditzler, M. A.; Ricci, R. W. J. Chem. Educ. 1994, 71, 685–688.
12. Herrick, R. S.; Nestor, L. P.; Benedetto, D. A. J. Chem. Educ.
1999, 76, 1411–1413.
13. Sadoski, R. C.; Shipp, D.; Durham, B. J. Chem. Educ. 2001, 78,
665–666.
14. Moss, D. B.; Cornely, K. J. Chem. Educ. 2001, 78, 1260–1262.
15. Sanger, M. J.; Geer, K. J. Chem. Educ. 2002, 79, 994–996.
Supporting JCE Online Material
http://www.jce.divched.org/Journal/Issues/2008/Apr/abs544.html
Abstract and keywords
Full text (PDF) with links to cited JCE articles
Supplement
Student handouts: experiment write-up and data spreadsheet
Instructor notes, including: further discussion of the experiment;
a summary of the results obtained by 17 laboratory sections during
six semesters (Fall 2002–Spring 2006); suggestions about how
analysis of the pooled data can be extended; a description of the
spreadsheet used to pool the data; and information about assessment of the experiment during the 2006–2007 academic year
In the Classroom
Effectiveness of a Daily Class Progress Assessment Technique
in Introductory Chemistry
Brian J. Rogerson
Chemistry Program, Natural Sciences and Mathematics, The Richard Stockton College of New Jersey,
Pomona, NJ 08240-0195; Brian.Rogerson@stockton.edu
During my first year of teaching, a significant proportion of freshman students in my introductory (first semester) chemistry course did not perform well and either failed
or withdrew from the course. This phenomenon has been
observed in other sections of the course taught by a number
of different teachers. Colleagues have suggested a number of
factors that may be contributing to such an outcome, including poor academic preparedness, a lack of good study habits,
and the fact that students have off-campus jobs that significantly reduce the time available for their college work. However, as a novice teacher, I was also concerned about my
teaching effectiveness and how it impacted the performance
of my students. Although each term I administered several
quizzes and exams, the class as a whole did not seem constantly engaged in the work. Further, any feedback provided
on these tests came too late for the students, since errors had
already translated into poor grades. This was frustrating, not
only for the students, but also for me. With the harm already done, there was no easy way to revisit the material and
reevaluate student understanding after remediation. My attempts to monitor student understanding by conventional
means, for instance, by encouraging oral participation in class
or conducting workshops before exams, had mixed results.
It was during this time that I joined a study group at Stockton College that was using Angelo and Cross’s Classroom
Assessment Techniques (1) to improve teaching effectiveness
and student learning. During my second year of teaching, I
tested a new rapid feedback technique designed to give my
introductory chemistry students an opportunity to constantly
monitor their understanding of the material as the class progressed. I also hoped that student performance would improve if I became aware at the end of each class of where
students were having difficulties so that I could take corrective action and resolve the confusion before formal testing.
Methods

The introductory chemistry class at our institution is divided into sections with a maximum capacity of 48 students, 70% of whom are freshmen, the subject population of this study. Lecture classes meet three times a week and class periods are 1 hour and 15 minutes long. The small class size and longer meeting time compared to larger institutions suggested that analyses of frequent surveys would be feasible provided they were limited in scope. A daily class progress assessment was developed with this objective in mind. It is conceptually similar to the technique reported by Holme (2), with several differences as described herein. At the end of every class period, students were asked to answer, in writing, brief questions about material that had just been discussed in class. My intent was to continuously survey all students for their understanding of basic ideas. I wanted the technique to accomplish three tasks: (i) to obtain feedback from all students in the class, not just the more vocal ones, (ii) to obtain feedback immediately after each class, thereby creating an expectation in students that they needed to make an effort to understand the material presented every time they came to class, and (iii) to give feedback to students on their answers to the assessment questions. To keep the technique simple and to enhance a positive learning atmosphere, the assessments were not graded and focused on concepts I expected my students to understand before they walked away from class. Some examples of questions are shown in Figure 1. These can usually be prepared in a couple of minutes; they must be kept simple and straightforward and must pertain to the key points of each class.

Not more than ten minutes before each class ended, two copies of the question(s) were handed out. It took about five minutes for students to record their answers on both copies. Students returned one copy of the assessment, anonymously if they wished, and retained the second copy so that they could check their work at the beginning of the next class when the correct and incorrect answers were discussed. Anonymity appeared to reduce performance pressure and helped students focus on answering the questions. Students were allowed to check their notes and to get help from their neighbors in order to answer. As the term progressed I noticed that some students did check their notes at least some of the time. In general, students made a genuine effort to answer the questions on their own, a behavior I encouraged, which differs from Holme's format (2).

Although it may appear time consuming, this is a remarkably simple and quick assessment technique. Before the next class I would take about thirty minutes to analyze the results. Since these assessments were not graded, the evaluation process was very straightforward and quick. I determined how many students got the assessment right, and then proceeded to categorize the incorrect answers. Often different students made the same mistake, but there was always a subset of unique answers. Depending on the nature of the errors and how many students made them, I spent more or less time discussing them. Students' actual answers were always summarized on an overhead. It should be kept in mind that these were questions students would find in a quiz, except they were not being asked to answer them days or weeks after the topic was discussed, but rather, immediately after the topic was covered in class. Thus, the assessment not only gave me an indication of whether students understood what had just been discussed but provided information about student engagement and attentiveness in class. It also assessed my effectiveness during that class. The discussion of the incorrect answers and any further explanations always took place at the beginning of the following class and took less than ten minutes. This discussion accomplished the dual role of correcting students' misconceptions and acting as a reminder of what was discussed in the prior class.

Figure 1. A sampling of questions compiled from different class assessments.
• How many significant figures are there in the following measurements?
  a) 0.0560 L
  b) 5.5 × 10^4 km
  c) 10.0 ns
  d) 0.003 g
• Give two reasons why K is more reactive than Li.
• Why is it that AlCl3 is the empirical formula of the ionic compound made up of aluminum ions and chloride ions? Why not AlCl, AlCl5, or Al2Cl?
• You are studying a compound that is made up of Na, Cr, and O. Following the strategy outlined in class you have determined how many moles of each are present in the compound: 0.761 moles of Na, 0.763 moles of Cr and 2.68 moles of O. What is the empirical formula of this compound?
• In the lab you are studying this reaction:
  2 NO2Cl(g) ⇌ 2 NO2(g) + Cl2(g)
  and at equilibrium you found the following concentrations: [NO2Cl] = 0.00106 M, [NO2] = 0.0108 M, and [Cl2] = 0.00538 M.
  a) Write the equilibrium expression for this reaction.
  b) Calculate the value of Kc.
  c) Calculate the value of Kc for the reverse reaction.
Results
Student grades during five consecutive semesters of introductory chemistry were examined. During this time, the
instructor, textbook, course content, and the order in which
chapters were taught did not change. The grading was based
on an absolute scale (not curved) and was derived exclusively
from a similar number of quizzes and exams with similar levels
of difficulty during the two and a half year study. Only during terms 3 and 4 was the daily class progress assessment
implemented. During terms 1, 2, and 5 no assessments were
used. As can be seen from Figure 2, the average withdrawal
(W) frequency observed for freshmen during the semesters
when class assessments were not administered was 26.7%
(24/90). At Stockton College students are allowed to withdraw very late in the term (up until three weeks before classes
end), after they have been tested several times. Generally, students that withdraw are failing the course and so the combined failure and withdrawal frequency observed, in this case
34.5% (31/90), best represents the subset of students that
was performing poorly. The total number of freshmen enrolled in terms without assessments was 90. In contrast, during the two semesters when the daily class progress
assessments were administered, the withdrawal frequency decreased to 6.7% (4/60). While a slight increase in the failure
frequency was observed, the combined failure and withdrawal
frequency fell to 16.7% (10/60), half of what was observed
without this technique. The total number of freshmen enrolled in terms with assessments was 60. Not surprisingly,
the gain associated with the use of this technique was reflected
in an improved performance by the marginal students since
an increased frequency of students earning C and D grades
was observed. Also, the fact that A and B grade frequencies
did not change significantly between the two groups (20.0%
vs 16.7% for A grades, and 21.1% vs 21.7% for B grades)
suggests that students who earn these grades were not being
affected by the technique.
To determine whether the drop in withdrawal frequency
was statistically significant, a chi-square analysis was applied
to the two sample enumeration data shown in Table 1. The
χ2 value was determined by the formula
χ² = Σ (|observed − expected| − 0.5)² / expected
The χ2 calculated was 8.202. The χ2 value for α = .005 and
df = 1 is 7.879; thus it can be said that the proportion of
students that withdrew was smaller when the class assessment
was used (p < .005). However, such a decrease would be
meaningful only if there was no significant increase in the
failure frequency. Therefore, it made more sense to place students that failed and withdrew in one group. Table 2 shows
the data that were subjected to chi square analysis. The χ2
calculated was 4.872. The χ2 value for α = 0.05 and df = 1
is 3.841; thus it can be said that the combined failure and
withdrawal frequency (in other words, the number of students that were performing very poorly) decreased when the
class assessment was used (p < .05).
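To make the arithmetic easy to check, the following sketch (Python is assumed here; the article itself contains no code, and the function name is ours) reproduces the continuity-corrected chi-square values quoted above from the 2 × 2 counts summarized in Tables 1 and 2.

# Minimal sketch (assumed Python; not part of the original article) reproducing
# the continuity-corrected chi-square statistics quoted in the text.
def yates_chi_square(table):
    """table: 2x2 list of observed counts, e.g. [[66, 24], [56, 4]]."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            chi2 += (abs(observed - expected) - 0.5) ** 2 / expected
    return chi2

# Table 1: stayed in class vs withdrew, without vs with assessments
print(round(yates_chi_square([[66, 24], [56, 4]]), 2))   # ~8.2 (critical value 7.879 at alpha = 0.005, df = 1)
# Table 2: passed vs failed or withdrew
print(round(yates_chi_square([[59, 31], [50, 10]]), 2))  # ~4.9 (critical value 3.841 at alpha = 0.05, df = 1)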
Figure 2. Freshman performance without assessments (black bars) and when daily class assessments were used (gray bars). Frequencies were calculated based on the performance of 90 freshmen during three semesters without assessments and 60 freshmen during two semesters with assessments. Withdrawals are designated by W. (Axes: Frequency (%) versus Letter Grade, A through F and W.)

Assessment sheets collected during term 4 were chosen for a more detailed analysis. Since the technique was designed so that students have the option of signing or anonymously returning the assessments, we could not always distinguish between freshmen and non-freshmen responses. Nevertheless, an examination of all responses from the class proved to be instructive. As shown in Figure 3, the fraction of students that returned assessments, either signed or anonymously, varied between 97.8% (assessment no. 2 at the beginning of the
term) and 48.9% (assessment no. 19, toward the end of the
term). The frequency with which students returned signed
assessments declined over time, while the fraction of students
returning assessments anonymously fluctuated considerably
around 30%. The increase over time in the number of students that did not return assessments can be mostly accounted
for by non-freshmen who continued to perform poorly and
were withdrawing from class. During this particular term,
2/29 freshmen and 7/16 non-freshmen withdrew from class.
In other words, the withdrawal frequency for freshmen during this semester when the assessment technique was in use
was 6.9%, whereas for non-freshmen it was 43.8%. The poor
performance of non-freshmen is beyond the scope of the current discussion, but it underscores the fact that freshmen and
non-freshmen are two very distinct populations. It must be
remembered that introductory chemistry is a course that students normally take during their first year in college. In any
event, toward the end of the term, 20% of the total number
of students (9/45) had withdrawn from class, which inflated
the fraction of non-returned assessments (Figure 3). If one
corrects for this, then the fraction of students that returned
assessments, either signed or anonymously, varied between
98% and 61%. Although some students were choosing not to return their assessments, they presumably still reaped the benefit of the review process associated with them. While attendance was not mandatory, there were no obvious absenteeism problems that might have significantly contributed to the lower number of returned assessments. This attendance policy was in effect during both the assessment and non-assessment semesters. The lack of attendance records is the reason why the data in Figure 3 are based on the total number of students rather than just those in attendance.

Table 1. Freshmen Withdrawal Outcomes

Freshmen                Stayed in Class    Withdrew      Total
Without assessments     66 (73.17)         24 (16.83)     90
With assessments        56 (48.78)          4 (11.22)     60
Total                  122                 28            150

Note: Expected values are in parentheses next to the observed values.
The types of errors observed for the questions in Figure
1 varied greatly. In response to the question on significant
figures, students often say that measurements such as 0.0560
L and 0.002 g have 4 and 3 significant figures respectively,
or that 5.5 × 104 km has 5 significant figures. In response to
why K is more reactive than Li, a common answer was “because it is lower in the group” without explaining what this
meant. When writing formulas for ionic compounds or when
asked to write the ions that make up such compounds, students often did not remember ion charges and had difficulties with the structures of polyatomic ions. They would split
them into monoatomic ions. This was a chronic problem with
the marginal students. When asked to use mole ratios to determine the formula Na2Cr2O7 students correctly arrived at
the 1:1:3.5 ratios but then some of them “approximated”, as
in NaCrO3 or NaCrO4. When asked to write the equilibrium expression for a reaction, some students would multiply the substance concentrations by their coefficients in the
balanced equation, and when calculating the constant for the
reverse reaction some would write the new expression and
go through the whole calculation again. These are just a few
examples, but in every case, these responses were obtained
after class discussions that I thought had ensured such responses would not occur.
Table 2. Combined Freshmen Failure and Withdrawal Outcomes

Freshmen                Passed         Failed or Withdrew    Total
Without assessments     59 (65.43)     31 (24.57)             90
With assessments        50 (43.62)     10 (16.38)             60
Total                  109             41                    150

Note: Expected values are in parentheses next to the observed values.

Figure 3. Assessment returns from all students (including non-freshmen) during the second term the assessment technique was implemented. Solid line: students that returned signed assessments. Dashed line: anonymous assessments. Gray line: the sum of students that either did not return an assessment, were absent, or withdrew. 100% of the students are accounted for at each time point (assessment). (Axes: Students (%) versus Assessment Number.)

Table 3. Assessment Answers

                All Students                       Freshmen
Answers         Signed (%)      Anonymous (%)      Signed (%)
                (n = 426)       (n = 305)          (n = 315)
Correct         62.4            51.8               62.2
Incorrect       37.6            48.2               37.8

Students that signed their responses were more likely to be correct when answering their assessments than those students responding anonymously (Table 3). Conceivably, students responding anonymously did so because they were unsure of themselves, a notion that is consistent with the higher fraction of incorrect responses observed in this group. Also, freshmen and non-freshmen were equally likely to sign
their responses. I found that on average 73.9% (315/426)
of the signed responses were coming from freshmen. Since
freshmen constitute about 70% of the class, signing a response
appears to follow the class demographic.
Official student evaluations of teaching suggested that
students who remained in class (after the withdrawal date
deadline) had a more positive attitude toward the course when
the assessments were used. Without the assessment, 50.9%
of respondents rated their “course as a whole” experience with
a score of 6 or 7 (1–7 scale, 7 being the highest possible score).
However, when assessments were used, a larger proportion
of students (68.1%) rated their experience with these high
scores. Interestingly, student rating for the “instructor’s overall
performance” category remained constant, with 86.1% of students giving the instructor scores of 6 and 7 when no assessments were used, while 88.7% did so when assessments were
used. Again, because of the anonymous nature of these evaluations, data could not be obtained exclusively from freshmen.
All we know is that during the semesters when the assessments were not used, an average of 67.9% of all the students
who stayed in class and answered the evaluations were freshmen, whereas when the assessments were used, 74.8% were
freshmen.
Discussion
The simplest interpretation of these results is that freshmen students experiencing difficulties with the material were
being helped by this technique. Being formally questioned
about material that was just discussed in class, each and every time the class met, placed an unfamiliar pressure on students. As the term progressed, students appeared to welcome
the opportunity to check their understanding of what was
taught in each class and, of course, students were not the only
ones to get feedback. This technique has helped my teaching as well. I now have a way of telling immediately at the
end of each class whether students are understanding my explanations, examples, et cetera and finding out where students are having difficulties. This feedback is immediate and
is obtained from the majority of students attending class.
These assessments attempt to survey all the students in the
class, not just the more vocal ones as occurs when prompting the class for questions. The inclusive nature of this technique cannot be emphasized enough; it is one of its most
important features and has helped me more than anything
else in taking the pulse and gaining insights into the progress
of the class. While this technique is not novel conceptually
(2, 3) it has some distinguishing features. For instance, students are being asked questions about what just transpired
in class and are not “studying” for the assessment.
When I first implemented this technique I was surprised
at how very simple questions would still reveal misconceptions or misunderstandings in a significant proportion of the
students. Even after classes in which I felt I had explained
something very well and thoroughly, there were students for
whom the answer to the assessment was not obvious. At the
other end of the spectrum, a student would occasionally complain that the questions on the assessments were too easy compared to questions on tests. However, the point of the
technique was to ensure that all students in the class understood the fundamentals before proceeding to the next discussion. Indeed, I have yet to receive a 100% correct response
to an assessment. This has taught me to never take anything
for granted. The number of student errors on these assessments can vary greatly. There are times when 90% of the respondents will get the answer correct, but there are also plenty
of occasions when one sees 65%, 50%, or even only 10–20%
correct answers. There are times when poor results are predictable. However, there have been a number of occasions
when such results came as a total surprise. This was critical
information that I did not have beforehand (4). A very low
comprehension level suggests that the information was not
properly conveyed to the class or that I incorrectly assumed
the class knew background material. Therefore, these daily
class assessments are opportunities for clarification in the form
of new explanations as well as reinforcement of skills and previously covered material, particularly for the less vocal students. Presumably, this technique helped raise the baseline
level of understanding for a larger proportion of students than
was the case without the assessment.
There is great diagnostic value in analyzing a student’s
incorrect answers as discussed at a recent conference by
Gonsalves et al. (5). Reflecting on why a student provided
an incorrect response to an assessment can serve to improve
both teaching and learning in a way that tests cannot. It also
allows for early intervention so that information transfer can
be modified to ensure that a misconception does not gain a
foothold in the student’s mind and interfere with the student’s
learning of further material. Admittedly, this daily class
progress assessment is just a first step in a multistep process that improves student retention and helps the marginal students meet a minimum proficiency level to pass the course.
Clearly, additional teaching strategies will have to be tested
to determine whether student learning can be further improved (6, 7); after all, these are still marginal students. Yet,
even a modest improvement as described herein indicates that
these marginal students are not refractory to classroom strategies aimed at improving student learning. Strategies for increasing interaction among students will be important to test
since these have been suggested to improve learning (2, 8,
9). However, since introductory chemistry is a course that
services many majors (students majoring in chemistry are rare
in these classes) and since this first semester chemistry course
will be the only chemistry many students will have at Stockton, the assessment technique that is discussed here can already be viewed as fulfilling a useful role.
Students looked forward to the class assessments, since
on those rare occasions when I was unable to administer one,
a number of students lightheartedly complained. Their feeling of anticipation at finding out how they did on an assessment was often quite evident, suggesting that this technique
may be helping students develop self-assessment skills (10).
End-of-term surveys revealed that students believed the assessments helped them gauge their progress at understanding the material and strongly endorsed them. Many said
knowing that an assessment would be coming up at the end
of class helped them pay attention, and many also suggested
that continuous feedback should be a regular feature in all
classrooms. My hope was that the student response to an incorrect answer on an assessment would be additional work
outside of class or a consultation with the instructor if the
in-class explanation of the incorrect answers remained unclear.
Anecdotal evidence suggests this was the case. However, a
subset of students remained who still withdrew or failed the
course. More often than not, these students either lacked the
needed quantitative skills or the time (or maturity) to study
owing to ambitious credit loads, full-time jobs, or busy extracurricular activities.
My expectation was that eliminating the class assessment
would result in higher withdrawal frequencies again. Indeed,
during the fifth term when I decided to test this notion, I
found this to be the case. My (presumably) more experienced
teaching did not help the marginal freshmen students the way
the assessment technique did during terms 3 and 4. As one
anonymous reviewer stated: “For the marginal students, the
review and reflection of the daily progress assessment provides a replacement, in part, for the study routine that is the
staple of the more able student.” Indeed, I consider poor
studying skills the biggest problem I face. While the insights
I gained during the assessment semesters became part of my
teaching during this fifth term, they were not effective in the
way student reflection and review were when assessments were
in use.
Summary
The benefits derived from this technique include: (1)
encouraging student reflection and review, (2) identifying,
before formal testing, areas where students are having difficulties, (3) assessing the effectiveness of information transfer, and (4) student appreciation for the unrelenting
nongraded feedback. Admittedly, this technique has some impact on the instructor’s ability to cover all the material in the
syllabus, but only requires that a few adjustments be made
toward the end of the course to ensure all the material is covered. I plan to use rapid feedback strategies in every course I
teach because they provide valuable information on how well
I am teaching and how students are learning. I can no longer
see myself teaching without them.
Acknowledgments
I wish to thank Ed Paul, Kelly Keenan, Ellen Clay, Jamie
Cromartie, and Yitzhak Sharon for critically reviewing the
manuscript and anonymous reviewers for their helpful suggestions. I also thank Stockton’s Institute for the Study of
College Teaching for its support of first-year faculty members. Part of this work was presented at the 5th Annual Lilly
Conference on College and University Teaching (April 2001),
at Towson University, Towson, MD.
Literature Cited
1. Angelo, T. A.; Cross, K. P. Classroom Assessment Techniques: A
Handbook for College Teachers, 2nd ed.; Jossey-Bass: San Francisco, CA, 1993.
2. Holme, T. J. Chem. Educ. 1998, 75, 574–576.
3. Frellich, M. B. J. Chem. Educ. 1989, 66, 219–223.
4. Steadman, M. Using Classroom Assessment to Change Both
Teaching and Learning. In Classroom Assessment and Research:
An Update on Uses, Approaches, and Research Findings; Angelo,
T., Ed.; Jossey-Bass: San Francisco, CA, 1998; pp 23–35.
5. Gonsalves, S.; Ince, E.; Kubricki, S.; Mathis, S. Student’s Incorrect Answers as Diagnostic Teaching-Learning Opportunities: A Discipline Based Study; Paper presented at the Lilly
Conference on College and University Teaching; University
of Maryland, 2000.
6. Cottell, P.; Harwood, E. Do Classroom Assessment Techniques
(CATs) Improve Student Learning? In Classroom Assessment and
Research: An Update on Uses, Approaches, and Research Findings; Angelo, T., Ed.; Jossey-Bass: San Francisco, CA, 1998;
pp 37–46.
7. Duffy, D. K.; Duffy, J. J.; Jones, J.W. Journal on Excellence in
College Teaching 1997, 8, 3–20.
8. Lemke, J. Talking Science: Language, Learning and Values;
Ablex: Norwood, NJ, 1990.
9. Johnson, D. W.; Johnson, R. T.; Smith, K. A. Change 1998,
30, 27–35.
10. Wiediger, S. D.; Hutchinson, J. S. J. Chem. Educ. 2002, 79,
120–124.
Journal of Chemical Education • Vol. 80 No. 2 February 2003 • JChemEd.chem.wisc.edu
Journal of Computational Science 1 (2010) 55–61
Diagnostics and rubrics for assessing learning across the computational
science curriculum
J. Russell Manson, Robert J. Olsen ∗
School of Natural Sciences and Mathematics, The Richard Stockton College of New Jersey, Pomona, NJ 08240-0195, United States
Article history: Received 24 March 2010; accepted 26 March 2010.
PACS: 01.40.Di, 01.40.G−, 01.40.gb

Abstract
We describe our experiences with learning assessment in a new computational science program. We
report on the development and pilot testing of assessment tools in both core and cognate courses. Specifically, we detail a diagnostic assessment that predicted success in our introductory computational science
course with reasonable reliability; we give an account of our use of an existing assessment tool to investigate how introducing computational thinking in a cognate course influences learning of the traditional
course material; and we discuss rubric development for project evaluation.
Keywords:
Computational science education
Computational thinking
Curriculum development
Assessment
Placement diagnostics
Force Concept Inventory (FCI)
Rubrics
1. Introduction
As an emerging discipline, computational science does not yet
have a customary curriculum. Graduate curricula were surveyed
first [1] and recommendations have been made for competencies to form the core of an undergraduate curriculum [2,3]. In
addition, the way in which coursework is apportioned among
existing disciplines and the extent to which courses overlap in
the undergraduate curriculum has been analyzed [4]. A handful
of textbooks written specifically for undergraduate computational science courses have appeared [5–9]. The content of
newly developed undergraduate courses in computational science depends on determining which core competencies are not
being adequately developed in cognate courses already offered
in traditional majors. The shortfalls that are identified are met
both by distributing the topics in the new courses introduced
with the major and by renovating existing cognate courses where
possible.
∗ Corresponding author. Tel.: +1 609 626 5583; fax: +1 609 626 5515.
E-mail addresses: Russell.Manson@stockton.edu (J.R. Manson),
Robert.Olsen@stockton.edu (R.J. Olsen).
doi:10.1016/j.jocs.2010.03.012
The New Jersey Commission on Higher Education approved a
program in computational science at Stockton in February 2006;
the entering class of Fall’07 was the first cohort that was able to
select the undergraduate major. The second author has taught CPLS
2110 (introduction to computational science) each fall semester
since Fall’07 and also taught the course in Spring’07 in preparation for the formal initiation of the major that fall. Enrollments in
CPLS 2110 prior to Fall’09 were quite small; the first two undergraduate computational science degrees will be awarded during
the 2010–2011 academic year.
The computational science (CPLS) program at Stockton has close
ties to the physics (PHYS) program. Therefore an early opportunity to expand the computational content of cognate courses
came when CPLS faculty were asked to teach the classical mechanics course that is a standard part of the undergraduate physics
major. The course was duly renamed PHYS 3220 (computational mechanics) and taught by the first author in Spring’08 and
‘09.
The first cohort of degree candidates was admitted into the M.S.
component of the program in Spring’10. All students entering with
a B.S. degree take CPLS 5100 (introduction to modeling and simulation) in the first semester. CPLS 5200 (scientific visualization)
is a required course that will be taught by the second author in
Fall’10.
2. Motivation
2.1. The need for assessment: CPLS 2110
The major will take root to the extent that science, mathematics, and computer science majors see the introductory courses as
electives that add value to their curricula. In other words, computational science programs must embrace the role of providing
service courses to larger, longer-established majors. This is neither
new nor unique. Chemistry serves biology, physics serves chemistry and biology, and mathematics serves physics, chemistry and
biology.
In the Fall’08 semester, we made a concerted effort to expand
the audience of CPLS 2110 beyond CPLS majors. We replaced
first-semester calculus as a co-requisite with pre-calculus as a prerequisite at the request of colleagues in the environmental science
program. Subsequent discussions among the CPLS faculty highlighted the importance of developing an assessment tool that would
give an early warning to students needing to remediate basic mathematics skills.
2.2. The need for assessment: PHYS 3220
PHYS 3220 focuses on Newtonian mechanics at a medium
to advanced level. In Spring’08 the first author introduced a
sequence of computational projects involving mechanics problems
of increasing difficulty. The projects ranged from modeling projectile motion to modeling motion of connected multiple rigid bodies
(a complete list is given in Table 1).
Does renovating an existing course by taking a computational
approach detract from the traditional content, which is after
all the raison d’être of the course? This question motivated a
study utilizing the Force Concept Inventory (FCI) [10], a well-established diagnostic tool that assesses mechanics knowledge,
which we used as an anonymized pre- and post-test. The pre-test
results alert the instructor to misconceptions about Newtonian
mechanics held by the class as a whole. Comparison of pre- and
post-test results was used to determine whether student understanding was affected by the increased emphasis on computational
thinking.
2.3. Embedded assessment: a pancurriculum rubric
CPLS faculty make extensive use of projects in both CPLS and
cognate courses. Project work reflects current best practice in
computational science education [2,3]. Reports for computational
science projects tend to follow a standard model across the curriculum regardless of who teaches the course. Broadly speaking,
they include an introduction, model description, model testing,
model application and a conclusion. The first author decided to
develop a rubric for computational science projects for use in PHYS
3220.
Table 1
Topics of the projects in PHYS 3220 and the associated computational concepts.

Topic | Computational concept
Motion of a projectile | Solving ODEs using Euler's method
Motion of an object sliding down an inclined plane with friction and bouncing against a spring | Limitations of Euler's method and solving ODEs using Runge–Kutta methods
Motion of a satellite orbiting Earth | Long-time accuracy of numerical ODE solvers
Motion of a rigid body | Multiple model realizations for optimization and simulation
Motion of multiple connected rigid bodies | Solving stiff DAEs
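As a rough illustration of the first project topic in Table 1, the sketch below (our own hypothetical Python example, not code from the course, which relied on tools such as MATLAB) integrates simple projectile motion with Euler's method.

# Illustrative sketch only (not from the article): Euler's method applied to
# projectile motion, the first project topic listed in Table 1.
import numpy as np

def euler_projectile(v0=20.0, angle_deg=45.0, dt=0.01, g=9.81):
    """Integrate x'' = 0, y'' = -g with forward Euler until the projectile lands."""
    vx = v0 * np.cos(np.radians(angle_deg))
    vy = v0 * np.sin(np.radians(angle_deg))
    x, y = 0.0, 0.0
    trajectory = [(x, y)]
    while y >= 0.0:
        x += vx * dt          # forward Euler update for position
        y += vy * dt
        vy -= g * dt          # gravity is the only force considered here
        trajectory.append((x, y))
    return np.array(trajectory)

path = euler_projectile()
print(f"range ~ {path[-1, 0]:.1f} m after {len(path)} steps")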
Concerns about the granularity of both the categories and the
scoring scale of the rubric have led the authors to collaboratively
design a more robust rubric which could be used across the computational science curriculum. A decided benefit of a pancurriculum
rubric is that it reinforces a student’s computational skills through
consistent emphasis on core competencies throughout the four
years of undergraduate study.
3. Methods
3.1. Tools for assessment: CPLS 2110
The second author designed a brief assessment of basic skills
that was administered on the first day of the Fall’09 semester.
Questions gauged geometric understanding of the meaning of
the slope and y-intercept of a straight line; recognizing common functions (mx + b, sin x, and e^x) when plotted as data sets
as they might appear if obtained as experimental results (i.e.,
with noise); and associating a (global) minimum, a (local) maximum, a point with positive slope, and a point with negative
slope on the graph of a function with labels describing the rate of
change as smallest in magnitude, largest in magnitude, positive and
negative.
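To make the "noisy data" items concrete, the sketch below (a hypothetical Python/NumPy illustration, not the actual diagnostic instrument) generates the kind of data sets described: common functions sampled with simulated experimental noise.

# Hypothetical sketch (not the actual diagnostic): linear, sinusoidal, and
# exponential trends with added measurement noise, as described in the text.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 4, 40)

noisy = {
    "mx + b": 2.0 * x + 1.0 + rng.normal(0, 0.5, x.size),
    "sin x": np.sin(x) + rng.normal(0, 0.1, x.size),
    "e^x": np.exp(x) + rng.normal(0, 2.0, x.size),
}
# Students would be shown plots of arrays like these and asked to match each
# data set to its underlying function.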
On the first and last days of the semester students completed
a survey, shown in Table 2, aimed at ascertaining their experience
with and attitudes toward computing. Questions are paired in the
survey, asking first about coursework in general and second about
math and science coursework in particular. Responses were on a
seven point Likert-style scale, with 1, 4 and 7 corresponding to
never, sometimes and regularly, respectively.
3.2. Tools for assessment: PHYS 3220
The FCI [10] is a tool developed to help assess facility with
and misconceptions about Newtonian thinking as a way to explain
motion and its causes. It consists of 30 conceptual questions in
multiple choice format and has been extensively studied and promoted [11]. Given its widespread use, the first author chose to
administer the FCI to see whether undertaking the aforementioned
computational projects (Table 1) was fostering the ability to apply
Newtonian thinking to problems in mechanics. We expected that
implementing computational models of various mechanics problems and analyzing the results with graphing and visualization
tools (e.g., movies in MATLAB) would be an aid to understanding
the mechanics concepts.
The FCI was administered twice in Spring’09, once at the outset of the course and again at the end. Results of the FCI were not
included in the final grade, so the assessment was of low stakes. To
further alleviate test anxiety, the FCI was administered in such a
way that the instructor was able to pair the results without identifying individual students.
Table 2
Survey questions about experience with and attitudes toward computing. Odd-numbered questions Q1, Q3, etc. omit the words in parentheses; even-numbered questions include them.

(Q1, 2) How often have you used a computer when doing an assignment in a (math or science) course?
(Q3, 4) How often have you used spreadsheet (e.g., Excel) software in a (math or science) course?
(Q5, 6) How often do you use a computer, even if it is not required, to do an assignment in a (math or science) course?
(Q7, 8) How often have you found that using a computer helped you understand a concept in a (math or science) course?
Table 3
Example rubric category. The scale is labeled by the category name and followed by guidelines describing excellent (9–10, A range), good (7–8, B range), satisfactory (5–6, C
range), and poor (0–4, D and F range) work.
Model construction
The flow of information in the model is easily followed in an excellent project, is followed with some effort in a good project, is followed only with significant effort in a
satisfactory project, and is followed only with considerable difficulty in a poor project.
3.3. Tools for assessment: a pancurriculum rubric
The aforementioned rubric was first used in PHYS 3220 in the
Spring’09 semester and contained six categories (accuracy, learning
and understanding, narrative, introduction, analysis and conclusion). Each category was assigned a number from 0 to 3, giving a
total score that ranged from 0 to 18. Despite providing a somewhat
more objective way of assessing computational science projects,
the rubric was often found to be too coarse-grained. Since the
second author was assigning projects in CPLS 2110, the idea of
developing a robust pancurriculum rubric arose.
The authors worked together to modify the original rubric, subdividing the categories to provide a more fine-grained instrument.
The modified rubric had 18 categories evaluated on a 0–10 scale,
giving a maximum possible score of 180. Table 3 contains a sample
category. To test the modified rubric the authors created a pool of
10 exemplary projects, five each from the most recent instances of
CPLS 2110 (Fall’09) and PHYS 3220 (Spring’09). This pool was then
graded by both authors.
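A minimal sketch of the bookkeeping behind the modified rubric is given below (Python assumed; the scoring helper is our illustration rather than the authors' grading tool, though the category names are those listed later in Table 4): each of the 18 categories is scored from 0 to 10, giving a maximum total of 180.

# Illustrative sketch (not the authors' tool): per-category scores on the
# modified rubric, each 0-10, totaled to a maximum of 180.
RUBRIC_CATEGORIES = [
    "Subject mastery", "Awareness of limitations", "Integration of math and science",
    "Critical analysis", "Data visualization", "Model construction", "Model integrity",
    "Documentation", "Distinctive features", "Degree of difficulty", "Introduction",
    "Results and discussion", "Conclusion", "Bibliography", "Spelling", "Grammar",
    "Word usage", "Formatting",
]

def total_score(scores):
    """scores: dict mapping every rubric category to an integer in 0..10."""
    assert set(scores) == set(RUBRIC_CATEGORIES)
    assert all(0 <= s <= 10 for s in scores.values())
    return sum(scores.values())  # 180 is the maximum possible total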
Fig. 1. Course grade versus diagnostic score. There is a strong correlation (r = 0.91)
between the diagnostic score and the course grade, indicating that the diagnostic is
a useful predictor of success in CPLS 2110.
4. Results and discussion
4.1. Assessment outcomes: CPLS 2110
14 of 17 students (82%) answered questions about slope and
y-intercept of a line correctly. Given “noisy” data following exponential, linear, and sinusoidal trends and equations for each
function, 14 of 17 students (82%) correctly matched the exponential data and function, 13 of 17 students (76%) correctly matched
the linear data and function, and 16 of 17 students (94%) correctly
matched the sinusoidal data and function. Of the seven students
who answered one or more of these questions incorrectly, three
withdrew from the course almost immediately, one withdrew early
in the semester, and the remaining three obtained the three lowest
course grades.
On the question involving rates of change and slope at a point
on a curve, 13 of 17 students (76%) identified the minimum with a
slope of smallest magnitude and 2 of 17 students (12%) identified
the maximum with a slope of smallest magnitude. Just 4 of the 13
students selecting the minimum realized that the maximum should
be likewise identified, and neither of the two students selecting the
maximum realized that the minimum should be likewise identified.
Although the question clearly stated that more than one point could
be associated with a label, it is likely that habit built from years of
test taking caused students to answer this question quickly rather
than carefully. Redesigning the question to ask students to rank the
rate of change at each point may provide better information. 10 of
17 students (59%) correctly identified the function as having a positive rate of change at the point with positive slope and a negative
rate of change at the point with negative slope. Several incorrect
responses appear to be due to differences between the slopes at
the labeled points being insufficiently pronounced; a graph with
more distinct features will be used in subsequent versions of the
assessment.
Fig. 1 demonstrates that course grade correlates well with the
score on the diagnostic assessment. One student, represented by
the open circle, fared considerably worse in the course than performance on the diagnostic would imply. The diagnostic provides
relatively little resolution at the top end of the scale, as indicated
by the clustering of points at diagnostic scores above 90. The self-assessments revealed by the survey of Table 2 were not positively
correlated with course grade.
4.2. Assessment outcomes: PHYS 3220
We present the sorted differences between after and before FCI
scores in Fig. 2. Nine pairs of tests were collected. Although two
students appear to have diminished facility with Newtonian thinking, the majority (seven) appear to have benefited from the course
and from the computational projects. A paired t-test indicates that
the improvement is significant at the 90% confidence level. As used
here, the FCI indicates that including computational projects in
a mechanics course (a computational science cognate) does not
detract from learning mechanics concepts and appears to help most
students.
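The paired comparison described above can be written in a few lines; the sketch below (Python/SciPy assumed, with "before" and "after" as placeholders for the nine matched FCI score pairs, which are not tabulated in the article) shows the form of the test.

# Sketch only (assumed Python/SciPy; the nine FCI score pairs are not
# reproduced here): a paired t-test on matched pre/post scores.
from scipy import stats

def paired_fci_test(before, after, alpha=0.10):
    """Return the t statistic, p-value, and whether the mean gain is
    significant at the stated level (90% confidence in the article)."""
    t_stat, p_value = stats.ttest_rel(after, before)
    return t_stat, p_value, p_value < alpha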
Fig. 2. Change in FCI score. Nine students completed the FCI assessment at the beginning and end of PHYS 3220 in Spring'09. Seven of the nine scores increased; the difference in scores (before–after) is significant at a 90% confidence level according to a paired t-test.

Fig. 3. Evolution of the pancurriculum rubric. Categories are arranged in columns corresponding to the original rubric (first column, Sections 2.3 and 3.3), the expanded rubric (second column, Sections 3.3 and 4.3), and the rubric after further refinement (third column, Sections 4.3 and 5.1). Arrows between columns show the elaboration and merging of categories as the rubric evolved. Categories in the current rubric are constellated into the supercategories to the far right.

The small sample size is certainly of statistical concern; however, this is a problem for computational science assessment not only now, owing to low enrollments in courses in this new field,
but also likely in the future, owing to the relatively small class
size imposed by the necessity of teaching in computer classrooms.
Further work is required to tease out how much understanding is
gained from the computational project work versus other course
activities.
4.3. Assessment outcomes: test-driving the pancurriculum rubric
We are not aware of any other pedagogical studies in computational science wherein two individual faculty members compare
their grading of the same student material in a systematic way
and this makes the results particularly interesting. Considering that
students in the two courses were given independent guidelines
for their projects without reference to the rubric, the correlation
(r = 0.63) between the total scores we assigned to the 10 projects
is reassuring. We selected the same project as best and identified
the same two projects as least accomplished. Moreover, we concurred on four of the top five projects and four of the bottom five
projects. We evaluated two projects near the middle of the score
distribution rather differently. The correlation between our rankings is fairly strong (r = 0.76), which is significant at the 98% level
(p = 0.02; two-tailed t-test). This lends support to the notion that
experienced teachers of computational science can spot good and
bad projects “at a hundred paces”.
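The agreement statistics quoted above can be computed as sketched below (Python/SciPy assumed; grader_a and grader_b stand in for the two authors' total scores on the ten pooled projects, which are not listed in the paper).

# Sketch (assumed Python/SciPy; the ten project scores are not reproduced):
# Pearson correlation of the total scores and a rank correlation analogous
# to the authors' correlation between rankings.
from scipy import stats

def grader_agreement(grader_a, grader_b):
    """Return (Pearson r, p) for the totals and (rank correlation, p)."""
    pearson_r, pearson_p = stats.pearsonr(grader_a, grader_b)
    rank_r, rank_p = stats.spearmanr(grader_a, grader_b)
    return (pearson_r, pearson_p), (rank_r, rank_p)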
When scores for all categories and all projects are compared
in a two-tail t-test, the p-value is 0.19. This indicates that we
cannot conclude with confidence that the scores are drawn from
different populations, i.e., there is no significant difference overall in the grades we assigned when all categories are integrated.
This provides some justification for the idea of different instruc-
tors using this project rubric in different computational science courses. As noted previously, the advantage of such a policy is that students find best practices and skills for their discipline being reinforced over the four years of their computational science education.

Table 4
Pancurriculum rubric categories and evaluation criteria.

Subject mastery: An excellent project demonstrates complete mastery of the subject material and has no technical mistakes. A poor project has major conceptual misunderstandings or several technical errors.

Awareness of limitations: Full awareness of the limitations of both the model and the methods is evident in an excellent project; a list of situations in which the model should not be used is included. At most, a poor project mentions only briefly limitations of either, but not both, of the model or the method.

Integration of math and science: An excellent project links, through equations and narrative, the mathematical and physical concepts. A poor project makes at best a minimal attempt to link the mathematical and physical concepts.

Critical analysis: In an excellent project all results are scrutinized in view of known limitations of the model and method; results are neither oversold nor undersold. One or more results are accepted unquestioningly in a poor project.

Data visualization: Figures in an excellent project have accurately labeled axes, an appropriate choice of point markers and line styles, clearly distinguished data sets, accurate captions that highlight the most distinctive features of the data, effective use of color, minimal whitespace, and minimal clutter. Many aspects of the figures in a poor project can be improved. Figures with unlabeled axes or multiple data sets that are not distinguished ensure that this aspect of the project will be rated as poor.

Model construction: In an excellent project, the rationale behind the model and how the model works is lucid and unambiguous. In a poor project, it is not clear how or why the model works, even if it does provide plausible answers.

Model integrity: The accuracy of the model as well as its fidelity to applicable scientific laws, exact solutions and mathematical or experimental benchmarks is demonstrated in an excellent project. The model is shown to produce accurate results in no more than the simplest of cases in a poor project.

Documentation: The documentation accompanying an excellent project allows another modeler with similar experience to use or modify the model after reading the documentation. A poor project does not describe how to use the model or has uncommented code.

Distinctive features: Any of several features distinguish an excellent project from a good project. Examples of such features include especially thorough analysis of model integrity, particularly effective figures, unusually insightful discussion of model limitations, perceptive identification of further questions that the model might be used to answer, and identification of modifications required to make the model more broadly applicable.

Degree of difficulty: Any of several traits differentiate a difficult project from an easy project. Projects that involve investigating a system of greater complexity than that of a typical homework problem, comparing results obtained from more than one method, or using a method not discussed in class all qualify as difficult. A project that requires effort equivalent to a homework problem is an easy project.

Introduction: An excellent introduction engages the reader even if he or she is not knowledgeable about the problem at hand. A poor introduction deters the reader from reading further.

Results and discussion: The results and discussion section of an excellent project guides the reader through the key results and explicitly refers to the figures and tables in support of the analysis; in a poor project, key results are omitted from the results and discussion section or no reference is made to the figures and tables.

Conclusion: The conclusions section of an excellent project accurately and concisely summarizes the key results and includes cross-references to relevant parts of the results and discussion section. The conclusions section of a poor project gives an inaccurate account of the results.

Bibliography: The bibliography of an excellent project contains accurate references to the specified number of authoritative sources. The bibliography of a poor project consists of only the course text.

Spelling: An excellent project contains no errors that spellchecking the document would detect and no more than two spelling errors per page of text. The spelling errors in a poor project are so numerous that the reader is distracted from the content.

Grammar: An excellent project contains no more than one grammatical error per page of text. The grammatical errors in a poor project are so numerous that the reader is distracted from the content.

Word usage: In an excellent project, word use is apt and enhances the presentation. Words are used incorrectly and phrasing is immature in a poor project.

Formatting: In an excellent project, equations are typeset and figures and tables are integrated at appropriate points in the manuscript. Handwritten equations or figures and tables gathered at the end of the manuscript are characteristic of a poor project.
Delving more deeply into the data reveals some differences. We
consider individual categories of the rubric for which our scores
are not well correlated as having ambiguously defined criteria. The
refinements that have resulted from this closer analysis are shown
in Fig. 3 and Table 4.
5. Ongoing work
The process of developing a computational science curriculum
and the supporting assessment tools has been a thought-provoking
and illuminating experience, and we encourage others in the wider
computational science community to undertake similar projects if
they have not already done so.
5.1. Next steps: CPLS 2110, PHYS 3220 and the pancurriculum
rubric
Definite next steps for each of the three initiatives described in
this manuscript are already underway.
(1) We plan to further test and refine the pancurriculum rubric and
advocate for its adoption in all CPLS courses and in any cognates
for which it is appropriate. The first author is teaching PHYS
3220 and the second author is teaching PHYS 3352 (nonlinear systems), another cognate course, this semester (Spring’10).
Projects are an integral part of both courses. We have included
the rubric as part of the instructions to our students for their
projects and we will grade the projects jointly (i.e., we will effectively be team-teaching these courses with regard to project
evaluation). We will gladly provide the current version of the
rubric to anyone in the computational science community who
wishes to contribute to its further testing and refinement.
Table 5
Guidelines for design of effective data visualizations.

(1) Is the graphic constructed so that it can be interpreted accurately? Considerations include placement of tick marks, scale accuracy, and discriminability of plotting symbols.
(2) Is the graphic designed so that the intended task can be accomplished efficiently? Considerations include placement of legends and labels, choice of scale (linear, logarithmic, etc.) and inclusion of trend lines. Important features of the data should be noticed first. Objects should be grouped so as to take advantage of human perceptual capabilities.
(3) Is the data emphasized? Considerations include data encoding (position, size, color and shape of plotting symbols), efficient use of space (i.e., minimization of whitespace), and data/ink ratio.
(4) Is the graphic suited to the intended audience? The primary consideration is presumed background knowledge.
(2) The diagnostic assessment developed for CPLS 2110 shows
promise as an indicator of preparedness for beginning study
in computational science. Reflecting on student work from the
Fall’09 semester, facility with units seems to be a discriminator that is not currently part of the diagnostic. Before using the
diagnostic in Fall’10, we will incorporate this topic and make
the modifications mentioned in Section 4.1.
(3) We intend to use the CPLS 2110 diagnostic as the foundation for
a series of assessments of increasing sophistication for use in all
of our courses, including those at the graduate level. The development and testing of this computational thinking inventory
will be a large component of our future work.
(4) The use of the FCI assessment tool in PHYS 3220 has yielded
some interesting, albeit preliminary, results. Some concepts
the tool examines are more closely tied to the computational
projects in the course than others. We will lump the questions
into groups by concept and analyze the results at this medium-grained level with a view toward extracting more information
about the influence of computational thinking on cognate learning.
Our most immediate efforts involve testing the pancurriculum
rubric and sharpening the use of the FCI. Table 4 contains the categories of the refined pancurriculum rubric and the criteria by which
each category is evaluated. Most categories are rated from excellent (A) to poor/failing (D/F), with excellent corresponding to 9–10
and poor/failing to 0–4. Exceptions are “distinctive features” and
“degree of difficulty”. We include them to give the rubric a bit
of open-endedness so that it does not become merely a checklist
when used by students. We envision using these two categories to
distinguish good from excellent work.
5.2. First steps: CPLS 5200 project rubric
There has been steady accumulation of monographs about
design of effective data graphics since the early 1990s (references
[12–15] are a partial list). As computer hardware and software have
increased in power and availability and datasets have increased in
complexity, scientific visualization has grown from an early focus
on statistical graphics to encompass data and information visualization.
The visualization literature contains many recommendations
for design of an effective visualization (see, for example, references
[16–20]). However, effectiveness has been defined in a variety
of ways [19], and recommendations from different sources are
sometimes in conflict with one another. We have synthesized the
available recommendations into a list of guidelines (see Table 5)
for students to use as they work on visualization projects in CPLS
5200.
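As a small, purely illustrative example of the Table 5 guidelines (Python/Matplotlib assumed; this sketch is ours, not course material), the snippet below labels the axes, distinguishes the data from a fitted trend line, and keeps decoration to a minimum.

# Illustrative sketch (not from the course): a plot that follows several of
# the Table 5 guidelines: labeled axes, a legend, a distinguishable trend
# line, and minimal clutter.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 25)
y = 1.8 * x + 3.0 + rng.normal(0, 2.0, x.size)      # noisy "measurements"
slope, intercept = np.polyfit(x, y, 1)              # simple linear trend

fig, ax = plt.subplots(figsize=(5, 3.5))
ax.plot(x, y, "o", label="measured data")           # data emphasized as points
ax.plot(x, slope * x + intercept, "-", label="linear fit")
ax.set_xlabel("time (s)")                           # accurate axis labels with units
ax.set_ylabel("displacement (m)")
ax.legend(frameon=False)                            # legend without extra clutter
fig.tight_layout()                                  # minimize wasted whitespace
fig.savefig("guideline_example.png", dpi=150)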
Projects will consist of the visualization itself as well as an
accompanying essay that details how the guidelines informed the
design of the visualization. The primary evaluation criterion will
be attentiveness to the guidelines, with particular attention being
paid to awareness of any trade-offs that were involved in the design.
Projects in this course are narrower in scope than the projects in
CPLS 2110, PHYS 3220 and PHYS 3352, so the pancurriculum rubric
discussed in Sections 2.3, 3.3, 4.3 and 5.1 is not an appropriate
assessment tool.
We will develop a rubric for data visualization projects by elaborating the guidelines of Table 5. Rubrics function not only as
summative assessment tools but also as formative assessment tools
if students are involved in designing the rubric [21]. Well-chosen
criteria yield a rubric that permits authentic assessment of authentic tasks [22]. Discussion of criteria that reflect best practices helps
define the discipline, an important step as students progress from
novice to expert. Converting guidelines to a rubric provides the
opportunity for incorporation of the research literature into the
course materials; the realization that many open questions attach
to what seems to be an everyday task (e.g., graphing data) is another
important step in the transition from novice to expert.
Acknowledgement
The authors would like to acknowledge the support of the U.S.
Department of Education (FIPSE) for this work through grant award
P116Z080098.
References
[1] SIAM Working Group on CSE Education, Graduate education in computational
science and engineering, SIAM Review 43 (1) (2001) 163–177.
[2] Ralph Regula School of Computational Science, Summary of undergraduate
minor program requirements, 2006 [cited 7 January 2010, online].
[3] SIAM Working Group on CSE Undergraduate Education, Undergraduate computational science and engineering education, 2006 [cited 8 January 2010,
online].
[4] O. Yasar, R.H. Landau, Elements of computational science and engineering education, SIAM Review 45 (4) (2003) 767–805.
[5] R.H. Landau, A First Course in Scientific Computing: Symbolic, Graphic, and
Numeric Modeling Using Maple, Java, Mathematica, and Fortran90, Princeton
University Press, 2005.
[6] A.B. Shiflet, G.W. Shiflet, Introduction to Computational Science: Modeling and
Simulation for the Sciences, Princeton University Press, 2006.
[7] G. Strang, Computational Science and Engineering, Society for Industrial and
Applied Mathematics, Philadelphia, Pennsylvania, USA, 2007.
[8] R.H. Landau, J. Paez, C.C. Bordeianu, A Survey of Computational Physics: Introductory Computational Science, Princeton University Press, 2008.
[9] C.F. Van Loan, D.K.-Y. Fan, Insight Through Computing: A MATLAB Introduction
to Computational Science and Engineering, Society for Industrial and Applied
Mathematics, Philadelphia, Pennsylvania, USA, 2010.
[10] D. Hestenes, M. Wells, G. Swackhamer, Force concept inventory, The Physics
Teacher 30 (3) (1992) 141–158.
[11] D. Hestenes, I. Halloun, Interpreting the FCI, The Physics Teacher 33 (8) (1995)
502.
[12] E.R. Tufte, The Visual Display of Quantitative Information, 2nd Edition, Graphics
Press LLC, Cheshire, Connecticut, USA, 2001.
[13] W.S. Cleveland, The Elements of Graphing Data, Revised edition, Hobart Press,
Summit, New Jersey, USA, 1994.
[14] S.M. Kosslyn, Graph Design for the Eye and Mind, Oxford University Press, 2006.
[15] H. Wainer, Picturing the Uncertain World: How to Understand, Communicate,
and Control Uncertainty through Graphical Display, Princeton University Press,
2009.
[16] H. Wainer, How to display data badly, The American Statistician 38 (2) (1984)
137–147.
[17] S.M. Kosslyn, Graphics and human information processing: a review of five
books, Journal of the American Statistical Association 80 (391) (1985) 499–512.
[18] S.G. Eick, Scientific visualization, overviews, methodologies, and techniques,
in: G.M. Nielson, H. Hagen, H. Müller (Eds.), Scientific Visualization, Overviews,
Methodologies, and Techniques, IEEE Computer Society, Washington, DC, 1997,
pp. 191–210.
[19] Y. Zhu, Measuring effective data visualization, in: G. Bebis, et al. (Eds.), Advances
in Visual Computing, Third International Symposium, ISVC 2007, Part II,
vol. 4842 of Lecture Notes in Computer Science, Springer-Verlag, 2007, pp.
652–661.
[20] T. Munzner, A nested model for visualization design and validation, IEEE Transactions on Visualization and Computer Graphics 15 (6) (2009) 921–928.
[21] H.G. Andrade, Using rubrics to promote thinking and learning, Educational
Leadership 57 (5) (2000) 13–18.
[22] K. Montgomery, Authentic tasks and rubrics: going beyond traditional assessments in college teaching, College Teaching 50 (1) (2002) 34–39.
J. Russell Manson is a civil engineer with 15 years of
experience in undergraduate and graduate education.
His primary research interests are estuary modeling and
stream metabolism. He is the founding director of the
master’s degree program in computational science at The
Richard Stockton College of New Jersey.
Robert J. Olsen is a chemist with 15 years of experience in
undergraduate education. His primary research interest is
modeling chemically reacting systems, particularly from
a dynamical systems viewpoint. He is a founding member of the computational science program at The Richard
Stockton College of New Jersey.
LANG Program Assessment of Student Learning
2010-2011
Measures
The rubrics for foreign language proficiency established by The American Council on the Teaching of
Foreign Languages (ACTFL) provide the basis for our assessment process.
Novice Level: these students can form sentences in the target language, but these sentences are
often memorized formulas. When pressed to communicate more fully, students at this level have
trouble articulating ideas and resort to communication in words, word pairs, or sentence
fragments.
Intermediate Level: these students can reliably communicate at the sentence level. This
communication involves creative expression, in that the students take words they have learned
and form them into coherent sentences that they have never read or heard before. They can
also ask a variety of questions with interrogative pronouns or adverbs. With these abilities, they
can make themselves understood by sympathetic native speakers of the language. When
pressed to communicate more fully, students at this level experience frustration and revert to
expressing themselves in short, simple sentences.
Advanced Level: these students can reliably communicate in paragraph-length discourse: sentences linked through conjunctions or other narrative devices. They can describe things or
persons in detail and narrate stories in the past, present, and future time frames. They can also
handle a conversational situation that involves some complication or resistance on the part of
the interlocutor. When asked to support an opinion or hypothesize, however, these students
experience a breakdown in their ability to communicate.
Superior Level: whereas students at the advanced level talk mostly about their own
experiences, students at the superior level can speak on a broad range of subjects. Indeed, they
can support opinions on abstract topics and hypothesize about them. They produce no
systematic grammatical errors.
ACTFL divides all of the above levels, except for superior, into three sublevels: low, mid, and high.
Students at the low sublevel can barely manage communication at their level during an interview (10
minutes for novice; 15-20 minutes for intermediate; 25-30 minutes for advanced). Those at the high
sublevel spend at least half of the interview communicating at the next highest level, but cannot
maintain that higher level throughout the entire interview.
Precipitating Factors
Because of the federal No Child Left Behind law, new school teachers must prove themselves “highly
qualified” in some area of study. For foreign language teachers, New Jersey chose ACTFL’s Oral
Proficiency Interview (OPI) as one of its measures of this qualification. OPI ratings are based on the
levels indicated above. New Jersey considers potential language teachers “highly qualified” if they rate at
the advanced-low level in their chosen language. Not all language students, nor even all majors, at
Stockton will become language teachers; however, the State’s choice of the OPI made our selection of
assessment criteria easier. On the one hand, familiarizing faculty and students with the rubrics will aid
in our preparation of future teachers, and on the other, the ACTFL rubrics focus on the language
functions that a student can perform rather than on some less productive criterion, such as grammatical
precision or error minimization. Preparing students to function in a second language is central to our
mission.
Cultural knowledge does not figure within the ACTFL rubrics, except at the highest level. To cover this
type of knowledge, the State requires that potential teachers take a written exam, the Praxis II.
However, the success rate of our students on this test of reading and cultural knowledge is such that
adding such a test to our assessment measure would be superfluous.
Frequency
Since November 2005, three members of the Program have received ACTFL sponsored OPI tester
training, and one of these faculty members has become a certified OPI tester. All three began
conducting interviews with their students as part of their training beginning in 2006. Since 2007, all
majors planning to undertake the teacher certification process in New Jersey have participated in
practice interviews with trained faculty. The French section of the program has made helping students
make the transition from the intermediate level to the advanced level an explicit part of all of its
3000-level courses. The Spanish section has created a course, LANG 3257 (Proficiency Methodology in
Spanish), devoted to this same end.
Since 2008, the Program has conducted OPIs with students at the elementary and intermediate levels.
The Program anticipates using the OPI as an assessment tool for courses at all levels on an annual basis
for the foreseeable future.
Assessment in the Literature Program
Marion Hussong, Deb Gussman, and Adalaine Holton
September 27, 2010
Several years ago, the LITT Program developed a comprehensive assessment rubric for
the Senior Seminar. The rubric assessed students’ performance on aspects such as
writing, bibliographic and research skills, literary analysis, and critical thinking. Using
the rubric gave faculty a clear and consistent matrix for evaluating senior theses. Students
appreciated the rubric for the same reason, as feedback showed at the time. In 2005, Deb
Gussman wrote an article for the September issue of Evidence that introduced the LITT
Program’s transition to using the rubric and traced the program’s further steps toward
integrating assessment.
Following the pilot, various LITT faculty read randomly selected samples of student
theses from subsequent Senior Seminars and compared grading results across the
program. By and large, we found no significant grading discrepancies among our faculty.
Since then, multiple instructors have used and continue to use the Senior Seminar rubric.
In the few years since the initiation of the rubric, the LITT program observed a recurrent
weakness in senior theses: Our students struggle with bibliographic skills. We therefore
set out to strengthen our students’ research and bibliographic proficiency in the gateway
courses that lead to the Senior Seminar capstone experience.
The Introduction to Research in Literature course, a program requirement, teaches our
students how to construct an annotated bibliography. Students practice how to effectively
use MLA format and learn how to write concise, fluid, informative annotations. Despite
this introduction, it was obvious that not all our students had mastered these skills when
they enrolled in Senior Seminar. Therefore, the program decided to reinforce
bibliographic skills in representative 3000-level courses, which act as gateway courses to
Senior Seminar. Our objective is for students to master the skill before they take Senior
Seminar.
Instructors within the LITT Program now use a variety of effective approaches to teach
bibliographic skills in Introduction to Research and reinforce them at the 3000-level.
Recently, the online program Zotero, used by several faculty members, has shown real
promise.
We are currently piloting assessments that will help us find out whether we are meeting
our objective. In Spring 2010, Marion Hussong developed an assessment tool to trace
students’ progress in that area. The assessment was first tested in her LITT 3602 class,
Literature After the Holocaust. The students received an assignment that required three
annotated bibliographies of course readings, spaced about one month apart. Students
received a 3-part rubric on adherence to MLA format and annotation writing. After
students completed the first installment, the instructor completed the rubric and provided
feedback. Students then revised their annotated bibliography and resubmitted it together
with the second part of the assignment. The process was repeated with the third installment,
which was due near the end of the semester.
This approach gave the students the opportunity to practice and perfect their skills, while
the instructor monitored their improvement. If a student showed mastery of the skill after
the second assignment, he or she did not need to submit the third installment and received
gratis points, thus avoiding busywork. The results of Marion’s pilot were very
encouraging. The cycle of practice, followed by feedback and reinforcement seemed to
work well: At the end of the semester, all students had mastered the skill sufficiently well
to leave the instructor reassured that they would be ready for Senior Seminar. (It should
be noted, however, that some students needed rather extensive one-on-one tutoring from
the instructor to reach that objective.)
Regarding our students’ ability to write effective annotated bibliographies, the next step
for our program should be to consider our instructors’ varied approaches to teaching
bibliography, to examine Marion’s pilot rubric, to consider alternatives, and to adopt one or
more assessment tools that will provide us with data on our students’ readiness in that
skills area.
Currently, LITT faculty members use rubrics to evaluate a wide spectrum of assignments,
among them research and reader response papers, bibliographic assignments,
presentations and performances, technology assignments, and creative writing projects.
Some of these rubrics are used with success by more than one instructor.
At our May 2010 retreat, the program decided to create a survey to assess alumni
perceptions of the major and its preparation for careers. Possible survey questions
included: What track were you in? What did you plan to do after graduation? What did
you end up doing after graduation? In what ways did the LITT major prepare you for
your current career?
The program has scheduled an assessment retreat for December 2010 to develop and
adopt strategies for assessment across the program. For instance, student progress toward
mastery of bibliographic skills could be traced from the introduction of that skill in
Introduction to Research in Literature, through representative 3000-level courses, to
Senior Seminar. The program will also look into comparable strands of assessment to
follow our students’ development in other important skills areas. Finally, we will work on
designing the alumni survey that we agreed on at the May retreat. The program has not
decided yet on the best approaches but looks forward to the retreat to develop an
assessment plan that makes sense for our students and faculty.
MARS Program Learning Objectives
Learning Objective (Desired Outcome): Cognitive: Students should be generally aware of the
interaction and importance of the fields of marine geology, marine chemistry, and physical
oceanography.
Measure: Direct: senior-year general MARS test.
Result (Actual Outcome): In the past, a weakness in oceanography was noted. Over the last three
years, the mean scores (out of 10) on non-biological questions have risen from 7.2, to 7.4, to 8.4.
Action Taken: The program faculty added another required course in oceanography during AY 2007.

Learning Objective (Desired Outcome): Cognitive: Students should be familiar with the important
groups of marine organisms, as well as their interaction with other organisms and the physical
marine environments.
Measure: Direct: senior-year general MARS test.
Result (Actual Outcome): Over the last three years, the mean scores (out of 10) on biological
questions have remained steady (7.6, 7.4, and 7.7, respectively).
Action Taken: Faculty reviewed results at the annual meeting; general agreement to continue
emphasizing these concepts.

Learning Objective (Desired Outcome): Affective: Alumni should be satisfied with their RSC
education in general, and their marine science education in particular.
Measure: Indirect: program review self-study (every five years).
Result (Actual Outcome): 56 alumni responded to the last survey. On a scale from 1 to 10, the
average rank for their college courses was 8.40, while the average for the MARS courses was 8.96.
Action Taken: Results shared with external consultant, Dean, and Provost. Program sent a copy of
results to the Alumni Office.
Marine Science Assessment use
Update on program and faculty activity with regard to the assessment of learning outcomes.
The MARS program has had various assessment tools in place for many years. Recently, the
program assessment process has been revamped and now includes four components: Analysis of IDEA
data, Analysis of Internship Reports, Analysis of accumulated knowledge (a test given to seniors
during the spring semester), and Analysis of Alumni Progress (a survey done once every five
years). We intend to use this model for several years, and it will be re-evaluated as a tool
during the next five-year self-evaluation.
What has the program and/or its faculty learned to date?
One of the first things we noticed was that students did not seem to be retaining information
and appreciation of the non-biological (oceanography and marine geology) aspects of marine
science. This has been addressed by adding a second required oceanography course to be taken
later in the student’s academic career, and we have begun to teach a marine geology course
that can be used as an alternative to the physical geology course.
What changes have the program and/or its faculty implemented as a result of their assessment
activities?
One instructor changed the sequence of material in BIOL/MARS 3105 (Biostatistics) so that
some of the procedures (for example, chi-square tests to compare predicted and observed
proportions) are taught earlier in the semester so they can be applied to other courses (for
example, BIOL 2110, Genetics).
Another instructor who teaches GNM 1123 (Fisheries in Crisis) experimented with different
ways of acclimating the class to data (including images of the variables or organisms being
presented); then, as the class became more comfortable with the material, this instructor was
able to introduce actual scientific publications from Science and Nature as outside readings. In
MARS 2201 (Introduction to Marine Biology) additional tools were added (extra review sheets,
lecture outlines, etc.), so rather than lowering the bar on the material, the instructor devoted a
good amount of energy to building student confidence and encouraging new study habits. In
Fisheries Science and Management (MARS 3307) this same instructor realized that a “hook”
was needed to keep the labs interesting, so many of the computer-based exercises were turned
into “challenges” (for example, one successful lab challenged students to uncover spurious
correlations by pitting standard fisheries data against bogus predictor variables).
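As an illustration of the kind of "bogus predictor" challenge described above, the short Python sketch below shows how a trending but unrelated variable can correlate strongly with fisheries data. The series, variable names, and numbers are invented for illustration only; they are not the course's actual materials.

import numpy as np

rng = np.random.default_rng(42)

years = np.arange(1990, 2010)
# Declining landings plus noise (hypothetical), and an unrelated rising trend.
cod_landings = 50_000 - 1_500 * (years - 1990) + rng.normal(0, 2_000, years.size)
bogus_predictor = 100 + 3 * (years - 1990) + rng.normal(0, 5, years.size)

# Both series trend over time, so they correlate strongly (here, negatively)
# even though one cannot plausibly cause the other.
r = np.corrcoef(cod_landings, bogus_predictor)[0, 1]
print(f"Correlation between landings and bogus predictor: {r:.2f}")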
Mathematics Assessment- Starting the assessment of student learning process
Starting this semester (Fall 2010), the Mathematics Program is determined to create a list of
learning goals for its base courses: Precalculus and Calculus I, II, and III.
We have a team of faculty working together. The first step is to create the list of desired
learning outcomes for each of these courses. The program has sent the lists to Dean Weiss and
is waiting for his comments. We will then determine which assessment methods will be used.
The Mathematics Program is just beginning this task.
Physics Assessment
American Physical Society Standardized Tests
Instructors are already experimenting with several American Physical Society standardized tests
for general physics concepts. We are assessing skills and knowledge in a number of areas at the
beginning and end of certain courses or sequences of courses (namely PLS I, PLS II, PHYS I and
PHYS II). However, the work is experimental in nature and aimed at determining whether these
instruments are appropriate for basic assessment of learning outcomes in general physics.
Assessment of WebCT and Blackboard in Physics Teaching
Several of the program members continue to use WebCT and Blackboard in the teaching of
Physics for Life Sciences I and II. The electronic classrooms assisted in the use of computer
simulations to demonstrate principles of physics. WebCT homework and quiz assignments allow
the students to receive instant feedback on their work. Some preliminary assessment has
tentatively concluded that students using computer instruction which gives instant feedback
were making progress with respect to the learning goals of the course. However, whether the
steady assignment of online homework is actually having an impact on learning will require an
analysis of whether the student cohorts being compared are indeed similar. High school class
rankings, math SAT scores and determining how many of the students were actually first
semester freshmen, might shed additional light on the significance of these preliminary results.
Faculty Direct Assessment Activities:
Physics faculty are engaged daily in many types of direct assessment activities, such as
faculty-developed in-class content-specific assessment, carefully designed homework sets as a
weekly assessment tool, end-of-class assessment, pre-lab assessment, lab final project
assessment, student senior project assessment, etc.
1. In-class content specific assessment as a daily assessment tool
Instructors have developed their own topic-specific post-learning assessment to evaluate the
students’ learning outcomes on the specific topics covered in the course. This type of
assessment was employed at least once a week. This type of assessment takes about five
minutes each time. Usually each assessment contains three simple questions related to the
specific topics covered in that class and/or in that chapter and/or in that week. The student
assessments are collected and analyzed carefully. The instructors summarize how well the
students performed on each question and where most students had difficulty. At the
beginning of the next class the instructor spends about five minutes reviewing the topics
with which most students had difficulty. The students’ feedback has shown that this technique
had an overall positive impact on their learning, helped reinforce material from lecture and
helped them remain engaged with material. This has encouraged us to proceed with
introduction of the topic-specific assessment into the teaching of the intermediate and
advanced courses where similar assessments are now conducted by other physics faculty
members.
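The per-question tally described above can be computed very simply. The following Python sketch is illustrative only: the response data are hypothetical and the 60% review threshold is an assumed cutoff, not the instructors' actual rule.

from collections import Counter

# Each entry lists the question numbers (1-3) a student answered correctly.
responses = [
    {1, 2}, {1}, {1, 2, 3}, {2}, {1, 2}, {1}, {1, 3}, {1, 2}, {1}, {1, 2},
]

correct_counts = Counter(q for answered in responses for q in answered)
n_students = len(responses)

for question in (1, 2, 3):
    pct = 100 * correct_counts[question] / n_students
    flag = "  <- review at the start of next class" if pct < 60 else ""
    print(f"Question {question}: {pct:.0f}% correct{flag}")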
2. Carefully designed homework sets as a weekly assessment tool
Students in every single physics lecture course and GNM course offered by a physics faculty
member have weekly homework assignments. The homework problems are carefully designed
to assess the students’ understanding of the concepts and to foster critical thinking and
problem-solving ability. Each homework set is collected weekly and graded with detailed
feedback and solutions. The preliminary assessment has shown that the strict homework
requirement has had an overall positive impact on student learning, helped reinforce material
from lecture, and helped students remain engaged with the material. This encourages us to keep
using frequent homework as both a teaching tool and an assessment tool.
3. Pre-lab assessment in PLS I & PLS II labs and in PHYS I & PHYS II labs
Based on our assessment workgroup discussions, Fang Liu has designed her own pre-lab
assessment test in her PHYS I lab to test the following:
- Equipment skills (in using air tracks, camcorders, oscilloscopes, multimeters, etc.)
- Data analysis concepts and skills (uncertainty, significant figures, error propagation,
proficiency in using Excel, video making, etc.)
- Plotting and interpreting graphed information (quantitative goal)
- Basic mathematical skills (qualitative goal)
These questions are also a regular feature in another instructor’s end-of-class student learning
assessment. This instructor is able to accurately gauge where the students have not grasped the
concept and also judge the level of challenge a student can take on, based on these layered
questions leading from lower level (something that involves slightly more than direct
reproduction of presented information) to higher level (synthesis questions).
4. Lab Final Project Assessment
As an assessment, students in both the algebra-based Physics for Life Sciences lab and the
calculus-based PHYS lab are required to design their own projects. In the project, each student is
assessed on the following aspects:
1) Knowledge of basic Physics concepts (at the Introductory and Intermediate levels)
2) Data taking and data analyzing
3) Understanding equipment
4) Knowledge of research methods and of effective techniques for the written and oral
presentation of research findings
The preliminary assessment has shown that the students in these introductory courses have a
good knowledge of basic physics concepts. However, many of them still need more training in
taking and analyzing data. This has resulted in more instructional time on data analysis during
the laboratory sessions.
5. Student senior project to assess the upper level physics majors
We require our junior students to attend a bi-weekly research methods colloquium and our
senior physics students to carry out a senior project. The project is both proposed and
defended at the physics colloquium. The project results are required to be presented at the
NAMS annual poster session and are followed by a written report in the format of a Master’s
thesis. We also encourage our students to actively seek summer Research Experiences for
Undergraduates (REU) opportunities.
The assessment results have indicated that applying physics is key to helping students
understand the fundamental principles and laws. We have been working hard to better
equip our physics laboratory courses, and as a result, laboratory course teaching is more
effective. In addition, we will continue to ask our students to design their own original
experiments and to write up their results in a publishable format, which could significantly
enhance their understanding of fundamental physics concepts and strengthen their ability to
apply these concepts in the solution of problems.
These direct assessments give us immediate feedback that can be used right away to fine-tune
class activities, learning goals, and learning opportunities, which effectively helps improve
learning outcomes.
Assessment at the Program Level
Meanwhile, as a collective effort of the program, we have continued to develop and test several
different assessment tools at the program level.
1. Major Field Test
We created an assessment test which contains GRE (Graduate Record Examination) physics
subject test type of questions. It was given to all physics majors at the end of spring 2008. The
students who took this test ranged from the freshman level to the senior level. The test
consisted of approximately 16 five-choice questions, some of which were grouped in sets and
based on such materials as diagrams, graphs, experimental data, and descriptions of physical
situations. The aim of the test was to determine the extent of the examinees' grasp of
fundamental principles and their ability to apply these principles in the solution of problems.
Most test questions can be answered on the basis of a mastery of the first year of
undergraduate physics, which covers the following topics: classical mechanics, electromagnetism,
optics and wave phenomena, and thermodynamics.
The freshmen did surprisingly well on the material they had just learned in the first year of their
physics course, answering at least 60% of the problems correctly. In contrast, some of the junior or
senior level students did not score as high as we had expected. This could be caused by the fact
that this test was administered with virtually no warning. Overall we feel that these results
show that our students are retaining a fair amount of the core material. Meanwhile, we
believe that the test result would be more meaningful if we had notified the students in
advance so they could take the test more seriously.
2. Modular Exams I &II
Instead of the single exit exam that is given at many institutions to graduating seniors, we feel
that two topically modular exams will better assist us in assessing the learning outcomes of our
students at different levels. It will help the faculty to revise their courses so as to be most
effective. Students will take these exams seriously and will be able to benefit from the
evaluation of their individual performance outcomes. This type of assessment can serve as a
diagnostic tool to assist the students to prepare for the advanced physics courses.
Exam I will assess the student’s progress after having taken the introductory and intermediate
physics courses. The classes covered will include Physics I, Physics II, and Physics III. These
courses constitute gateway courses for upper-division physics courses, and therefore a clear
understanding of these subjects is essential for every physics major.
To make sure our data will be meaningful, we have come up with an incentive to encourage the
students to take the assessment test more seriously. We will give this test to the students right
after they have completed their Physics III final exam, and performance on it will earn an extra
percentage toward their Physics III final grade. We believe that this will serve as a good incentive
to encourage genuine effort on the test, thereby ensuring more meaningful assessment data.
Following the guidelines of our program learning goals, currently we are designing the
questions to be included in this modular exam. The Modular Exam I will be administered in fall
2010 for the students who have taken the Physics I, II and III series.
Modular Exam II will assess the upper division courses such as optics, thermodynamics and
statistical mechanics, quantum mechanics, laboratory methods, and specialized topics. A bank
of questions will be compiled and evaluated by the physics faculty.
3. Curriculum Re-evaluation
The Physics Program is currently re-evaluating the curriculum to provide physics students with
both depth and breadth. The program used to offer an application-oriented physics course,
Applications of Physics. It is proposed that this course be offered again to enable physics
students to see more applications of physics.
School of Social and Behavioral Sciences
Program Assessment: Fall 2009 - Spring 2010
Criminal Justice
Goals & Objectives
Knowledge of the application and analysis of statistics.
Measures
Direct measure: pre- and post-test assessments of statistical knowledge given in the first
and last week of class in CRIM 2145 (Statistics for CJ).
Outcomes
Overall, students scored better on the post-test than they did on the pre-test. There were
some areas in which students did not perform well.
Reasons/Hypotheses
Students did NOT excel at any of the “Summation/Reporting Data” variables. They
were also fairly weak on the “Causality” variables.
Action
Statistics will only be taught by full-time faculty. The faculty will focus on
incorporating more guidance in these two areas in fall 2010.
Goals & Objectives
Knowledge of the application of APA in research writing.
Measures
Direct measure: pre- and post-test assessments of APA skills given in the first and last
week of class in CRIM 2141 (Research & Evaluation).
Outcomes
Overall, students scored significantly better on the post-test than they did on the pre-test.
Nine (of 21) questions were significantly better; one was worse.
Reasons/Hypotheses
Students performed better on the formatting questions but had a lot of trouble with the
in-text and reference page citations.
Action
Research Methods will only be taught by full-time faculty. The faculty will focus on
teaching and reinforcing in-text and reference page citations in fall 2010.
Goals & Objectives
Knowledge of Criminal Justice topics immediately prior to graduation and after
completion of all required CRIM classes.
Measures
Direct measure: pilot test given to students in two capstone courses in spring 2010; 75
questions (15 from each of 5 core courses).
Outcomes
Students scored between 40% and 61% correct in each of the sections. The average was
51% on the tests.
Reasons/Hypotheses
Students did not “prepare” for the pilot since it was not part of the capstone grade.
But the test did provide a starting point for refining the instrument for fall 2010.
Action
Results for each core section will be scrutinized by the faculty committee and the
instrument will be revised for the fall 2010 semester. Faculty will also offer study
sessions for students.
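As an illustration of the pre/post comparison described above, the short Python sketch below applies a paired t-test to matched pre- and post-test scores. The scores are invented placeholders, not CRIM 2145 data.

from scipy import stats

pre  = [55, 60, 48, 70, 62, 58, 66, 51, 73, 64]   # percent correct, first week (hypothetical)
post = [68, 72, 60, 80, 70, 65, 74, 59, 85, 71]   # percent correct, last week (hypothetical)

# Paired t-test compares each student's post-test score with their own pre-test score.
t, p = stats.ttest_rel(post, pre)
print(f"Paired t = {t:.2f}, p = {p:.4f}")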
Economics
Goals & Objectives
Have students developed the ability to think critically about the validity of data and the
biases inherent in the use and presentation of data?
Measures
A CLA-model test was administered in the Senior Seminar in Economics to assess the
above question.
Outcomes
Students were able to identify and discuss misleading interpretations of data
presented as percentages and rates of change, and of correlation and causality. Some
of the students were also able to identify issues relating to the reliability of data. For
instance, one of the pieces of information the students are asked to evaluate is an
anecdotal account that is used as a generalized statement.
Reasons/Hypotheses
Overall, the results indicate that the students were able to identify all the relevant data
issues in the test. This affirms that the learning goals are being satisfied.
Action
This generated some discussion about inadequate sample sizes and the use of
survey data, particularly in the news media. Also, results were shared through a
report in Evidence.
Gerontology
Goals & Objectives
Do students graduating with a GERO minor know more about dementia than students
who have not taken any GERO courses at Stockton?
Measures
Direct: Alzheimer's Disease Knowledge Scale, a 30-item true/false scale (Carpenter et al.,
2009).
Outcomes
GERO students scored significantly higher on the scale (M = 24.53, SD = 3.64) than
Stockton students who had not taken Gerontology courses (M = 19.8, SD = 2.63).
The difference between the two groups was statistically significant (t(37) = 4.63,
p < .001).
Reasons/Hypotheses
To assess knowledge about an important psychological topic related to aging.
Biopsychosocial content is required for Program of Merit status to be awarded by the
Association for Gerontology in Higher Education, and a psychological course is not
required of all GERO minors although the topic is covered in many elective courses.
Action
Results were satisfactory and indicate that the curriculum adequately covers the topic
of dementia for GERO minors.
Goals & Objectives
Determine if students report learning about biopsychosocial and public policy areas of
aging.
Measures
Indirect: self-report by students graduating with the GERO minor.
Outcomes
On a 5-point Likert scale (1 = disagree strongly, 5 = agree strongly):
Biological: 4.6; Psychological: 4.7; Social: 4.5; Public Policy: 4.3
Reasons/Hypotheses
Determine needs to adjust curriculum requirements in essential areas of
gerontology.
Action
Will not adjust the curriculum; consider use of direct measures of biopsychosocial
theory and public policy content to determine competency.
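For illustration, the group comparison reported above can be reproduced from summary statistics alone. The group sizes in this Python sketch are assumptions chosen only to match the reported degrees of freedom (t(37)); they are not the study's actual sample sizes.

from scipy import stats

gero_mean, gero_sd, gero_n = 24.53, 3.64, 20      # GERO minors (n assumed)
other_mean, other_sd, other_n = 19.80, 2.63, 19   # comparison group (n assumed)

# Independent-samples t-test computed from means, SDs, and assumed group sizes.
t, p = stats.ttest_ind_from_stats(gero_mean, gero_sd, gero_n,
                                  other_mean, other_sd, other_n,
                                  equal_var=True)
print(f"t({gero_n + other_n - 2}) = {t:.2f}, p = {p:.4f}")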
Political Science
Goals & Objectives
To develop the ability of our students to think critically and rigorously, and analyze
complex arguments thoughtfully.
Measures
Senior Seminar assessment and faculty collaboration.
Outcomes
Improving.
Reasons/
Hypotheses
An enhanced focus on critical thinking in multiple introductory and advanced
program courses.
Action
Continue current efforts.
Goals & Objectives
To effectively express their views, knowledge and analysis in written communication.
Measures
Senior seminar and faculty collaboration.
Outcomes
Very Good.
Reasons/
Hypotheses
Inclusion of writing skills in multiple program courses. Increase in POLS courses
with ‘W2’
designation.
Action
Continue current efforts with possible effort to increase focused writing instruction
and assignments in advanced courses.
Political Science, continued
Goals & Objectives
To effectively express their views, knowledge and analysis in oral communication.
Measures
Senior Seminar.
Outcomes
Fair.
Reasons/
Hypotheses
While most students demonstrate strong verbal skills, a minority remain less-skilled.
Action
Possible enhancement of presentation and discussion requirements in some
program courses.
Goals & Objectives
To understand and critically evaluate the application of sophisticated quantitative
analysis to social and political questions.
Measures
Research Methods and Senior Seminar outcomes assessment.
Outcomes
Good and improving.
Reasons/
Hypotheses
Increased assignment and discussion of quantitative research in several program
courses.
Action
Continue to assign quantitative readings in program courses.
Goals & Objectives
To be able to apply basic quantitative analysis to general social and political questions.
Measures
Research Methods.
Outcomes
Good.
Reasons/
Hypotheses
Requirements of Research Methods as a core course for all majors, and maintenance
of high standards in this course. And use of basic statistical methods in several other
program courses.
Action
Continue to enhance the use of basic statistical methods in appropriate program
courses.
Goals & Objectives
To establish the knowledge, understanding and enthusiasm for engaged citizenship on
the part of our graduates.
Measures
Graduating student focus groups and surveys of graduates.
Outcomes
Good and improving.
Reasons/
Hypotheses
Recent enhancement and expansion of the Washington DC program, adding local
internships in Trenton, New Jersey, and the incorporation of service and engagement
projects in some program courses provide markers for assessing how POLS majors
utilize experiential knowledge to understand current social/political issues and
further their own pre-professional development.
Action
Increasing use of service and engagement projects by program faculty. Continued
enhancement of local and state internship options.
Goals & Objectives
To provide students with an understanding of local, national and global political and
social issues and policies.
Measures
Graduating student focus groups and surveys of graduates.
Outcomes
Very Good for international and national. Fair for Local.
Reasons/
Hypotheses
Strong program courses that ensure a full scope of instruction in political and social
issues on the national and international stage.
Action
Continue current efforts, and consider ways of improving our instruction of state and
local issues.
Political Science, continued
Goals & Objectives
To provide our students with the skills and general preparation necessary for
professional success.
Measures
Graduating student focus groups and surveys of graduates.
Outcomes
Very good for general life and career skills, less successful on specific career
transition support.
Reasons/Hypotheses
Lack of adequate career transition support.
Action
Current initiatives will help improve early student awareness of curricular options
and increase guidance for graduating seniors in their transitions.
Psychology
Goals & Objectives
To assess whether students can identify a major flaw in an experimental design.
Measures
Online performance task.
Outcomes
Over 100 students performed the task online. Results did not show any relationship
between their performance on the task and the number of psychology courses they
had taken (r = .13, p > .10).
Reasons/Hypotheses
These data suggest that we can improve as a program in teaching the essentials of
experimental design.
Action
Before firm conclusions can be made, the task needs a few modifications, followed
by new data collection.
Goals & Objectives
To assess students' critical thinking skills.
Measures
CLA Crime Reduction performance task and Drunk Driving performance task,
constructed by Sara Martino and Connie Tang.
Outcomes
Currently evaluating the data this summer [2010] to see if the course shows an
increase in critical thinking skills.
Reasons/
Hypotheses
Action
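A minimal sketch of the correlational check reported for the design-flaw task above (r = .13, p > .10). The course counts and task scores in this Python example are invented placeholders, not the program's data.

from scipy import stats

courses_taken = [1, 2, 2, 3, 4, 4, 5, 6, 7, 8]              # psychology courses completed (hypothetical)
task_scores   = [62, 55, 70, 64, 58, 72, 66, 60, 75, 68]    # percent correct on the task (hypothetical)

# Pearson correlation between courses taken and task performance.
r, p = stats.pearsonr(courses_taken, task_scores)
print(f"r = {r:.2f}, p = {p:.3f}")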
Goals & Objectives
In Nancy Ashton's distance ed. sections of PSYC 3322 - Lifespan Development: to see if
there were grade differences when the stimulus was a video clip vs. a still photo or
chart.
Measures
Grades on student papers.
Outcomes
There were no differences.
Reasons/
Hypotheses
Action
Goals & Objectives
To see if there are differences in student performance between distance ed. and face-to-face sections of Nancy Ashton's PSYC 3322 - Lifespan Development classes.
Measures
Student grades.
Outcomes
In the process of comparing outcomes; no results yet.
Reasons/
Hypotheses
Action
Psychology, continued
Goals & Objectives
In Nancy Ashton's distance ed. sections of PSYC 3322 - Lifespan Development: to see if
there is a relationship between concepts in required discussion postings and
correct/incorrect quiz answers.
Measures
Student discussion postings and quiz results.
Outcomes
Analysis not yet complete.
Reasons/
Hypotheses
Action
Goals & Objectives
In Jessica Fleck's PSYC 2215 - Cognitive Psychology: to assess effectiveness of critical
thinking exercises.
Measures
End-of-semester surveys.
Outcomes
Most students liked the exercises and felt that they enhanced their understanding of
research design and theories in cognition. Students were also generally more
prepared for group discussions than in the past.
Reasons/
Hypotheses
Action
Goals & Objectives
In Jessica Fleck's Seminar in Cognitive Neuroscience (PSYC 3641): to gauge student
level of preparation prior to class.
Measures
Critical thinking exercises.
Outcomes
Students’ preparation was far superior to that of prior semesters and it allowed
discussions to include most students in the class.
Reasons/
Hypotheses
Action
Goals & Objectives
In Jessica Fleck's Experimental Psychology (PSYC3242): to assess the effectiveness of
Personal Response Systems (PRS) for student learning of APA style during in-class
exercises.
Measures
Pre/post-test design of APA knowledge.
Outcomes
Results showed a significant improvement in students’ understanding of the material.
Reasons/
Hypotheses
Action
Goals & Objectives
Jennifer Lyke and Michael Frank: to compare student performance in online and face-to-face sections of PSYC 3392 - Theories of Counseling.
Measures
Student performance on 10 weekly quizzes.
Outcomes
Preliminary analyses show no difference between classes, thus far.
Reasons/
Hypotheses
Action
Psychology, continued
Goals & Objectives
In Julia Sluzenski's Experimental Psychology class, to measure:
1) ability to identify the type of research design,
2) ability to critique the flaws in a research design, and
3) knowledge of ethics in social science research.
Measures
Three pre-post assessments, two using open-ended questions to assess students'
knowledge, and the other a performance-based task where students had to critically
analyze a flawed research design.
Outcomes
Students improved significantly in all areas, and the gains were large. Professor
Sluzenski was especially impressed with their gains in #2 (ability to critique the flaws
in a research design), which was a more performance-based assessment than a pure
knowledge-based assessment.
Reasons/Hypotheses
Students were given practice on the questions (not the exact questions, but similar
ones), and were told that the practice was relevant to their final exam (the post-test
was in fact part of their final). Since the assessment was part of their grade, they
may have taken it more seriously.
Action
Professor Sluzenski has since made the course almost completely about skills and
problem solving. For instance, instead of having to read a textbook on a regular
basis and take multiple-choice exams, students spend their time on
performance-based skills, such as reading and summarizing journal articles,
writing two different kinds of research papers, designing an experiment, and taking
performance-based exams (e.g., an exam on the use of SPSS software).
Social Work
Goals & Objectives
The Social Work Program’s formal curriculum design is structured and delivered in a
way that the Mission and Goals are operationalized into 10 core competencies and their
respective 41 practice behaviors, which are then translated into course objectives,
content, and assignments.
Measures
Our assessment plan to measure the competencies, as operationalized through
measurable practice behaviors, is comprised of multiple measures: 1) a Self-Efficacy
Survey, which includes all practice behaviors; 2) a Senior Field Placement Evaluation
completed by field instructors and students together, which includes all competencies
and practice behaviors; 3) an Exit Survey of students, which includes all
competency-associated practice behaviors and seven items to assess the implicit
curriculum; and 4) an Alumni Survey, which also includes all competency-associated
practice behaviors and seven items to assess the implicit curriculum.
Outcomes
Upon review of our findings for the fall 2009 and spring 2010 semesters, all
assessment measures used indicated that all competencies and practice behaviors met
the benchmark we established (mean score of three or greater). Therefore, these
data affirm our explicit curriculum.
Reasons/Hypotheses
These measures may need additional revisions to assure that we appropriately
evaluate competencies and practice behaviors. For instance, it would be valuable to
see comments from field instructors about students’ attainment of practice behaviors.
Furthermore, the self-efficacy measure, exit survey, and alumni survey in their
current forms assess student (and graduate) self-perception of the extent to which
they have attained the competencies and practice behaviors.
Action
Our End-of-the-Year Program Retreat, held in June of 2010, provided us the
opportunity to reflect upon findings for the academic year. We determined that for
the next academic year we would raise the benchmark to a mean of 4.00. Most
recent results also led us to consider administering the Self-Efficacy Survey to
students in the introductory sequence of social work courses and students in the
junior sequence of courses. This would allow us to measure students’ perception of
their ability to perform practice behaviors associated with Program competencies
across the Social Work Program curriculum.
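A minimal sketch of the kind of benchmark check described above. The practice-behavior labels and ratings in this Python example are invented; only the 3.00 benchmark (to be raised to 4.00) comes from the report.

practice_behavior_ratings = {
    "Practice behavior A": [4, 5, 3, 4, 4],   # hypothetical ratings on a 5-point scale
    "Practice behavior B": [3, 4, 4, 5, 4],
    "Practice behavior C": [2, 3, 3, 4, 3],
}

benchmark = 3.00  # benchmark stated in the report; to be raised to 4.00 next academic year

for behavior, scores in practice_behavior_ratings.items():
    mean_score = sum(scores) / len(scores)
    status = "meets benchmark" if mean_score >= benchmark else "below benchmark"
    print(f"{behavior}: mean {mean_score:.2f} ({status})")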
Social Work, continued
Goals & Objectives
To assess our implicit curriculum or learning environment.
Measures
Seven items added as part of the Exit Survey.
Outcomes
Findings showed that on the seven-item scale we administered to students as part of
the Exit Survey, all items reached a benchmark of 4.00 or better.
Reasons/Hypotheses
These findings affirm our implicit curriculum or learning environment.
Action
We plan to continue to build on the strengths of our learning environment to assure
even better student performance.
Goals & Objectives
To design a performance task assessment capable of evaluating the higher order
cognitive skills of critical thinking, analytical reasoning, and problem solving.
Measures
CLA-style performance task assessment (pilot), administered pre/post to seniors in
the Senior Fieldwork course, first in Fall 2009 and again in Spring 2010.
Outcomes
Each performance task was assigned an ID code (to eliminate rater bias) and scored
using a rubric. Results are currently being analyzed.
Reasons/Hypotheses
Action
We will meet as a group to resolve score discrepancies among raters and to revise
the task and rubric, if necessary. We anticipate making revisions that will allow us to
assess student performance on a number of competencies and associated practice
behaviors, which will provide us with a more thorough understanding of student
achievement of competencies than is provided by self-report measures alone.
Sociology/Anthropology
Goals & Objectives
To measure a range of outcomes, both qualitative (field research, written projects) and
quantitative (statistical literacy), in two upper-level, mandatory courses.
Measures
The Program agreed to conduct a holistic, blind reading of final projects from the
courses "Field Methods" or "Research Methodologies" using a rubric that includes
competencies related to qualitative and quantitative research methods.
Outcomes
Our first set of data was evaluated in 2009 and met our outcome expectations. This
year’s data are being examined.
Reasons/Hypotheses
Action
Discussions are underway to add other rubrics to our Program Assessment.
Web Links
Assessment Use
1. General Studies Assessment
http://intraweb.stockton.edu/eyos/gens/content/docs/2010SummaryofAssessmentintheSchool
ofGeneralStudies.docx
2. Institute for Faculty Development EVIDENCE
http://intraweb.stockton.edu/eyos/page.cfm?siteID=187&pageID=7
[Figure: Summary of WileyPlus student feedback, Fall 2009 (N = 170). Panels: "WileyPlus had an overall positive impact on my learning"; "WileyPlus helped reinforce material from lecture & lab"; "WileyPlus helped me remain engaged with material"; "Encountered few technical difficulties using WileyPlus"; "E-book use"; "Use of other e-book features"; "Assignments on WileyPlus per week"; "Book purchased".]
THE RICHARD STOCKTON COLLEGE OF NEW JERSEY
LIBRARY
SUMMARY OF
LibQUAL+® RESULTS
2005 & 2008
LIBRARY ASSESSMENT COMMITTEE
March 20, 2009
Committee Members
Chair: Jianrong Wang
Kerry Chang-FitzGibbon
Carolyn Gutierrez
David Lechner
Richard Miller, Ph.D.
David Pinto
Mary Ann Trail
Introduction

The library continues to be very active in gathering data and feedback from library users and from members of the college community. We also closely review data generated through our acquisition and public service functions. Our entire assessment process seeks to use all available information to help evaluate all aspects of library performance and to assist in setting priorities for future development.

This report compares the results of two campus-wide surveys focused solely on the library. These are the LibQUAL+® library user satisfaction surveys conducted in 2005 and 2008. This instrument is used by academic libraries throughout the country, and our results are compared to national norms as well as the results of participating academic libraries in New Jersey, all of whom are members of the Virtual Academic Library Environment (VALE) NJ, the state’s academic library consortium.

The library faculty and administration have worked together to scrutinize the data gathered through these two surveys. We have summarized our findings and major recommendations in this brief report. Although most of our work has focused on the library’s shortcomings identified through this process, there was a great deal of positive feedback received from grateful library users. Anyone interested in reviewing the data in more detail may request the original LibQUAL+® reports, as well as any supporting documentation.

All of the recommendations found on page 9 of this report are being pursued. There have been some changes made in library operations resulting from these surveys, many of which were small and readily enacted. Some recommendations, such as adding student group study rooms, require coordination well beyond the walls of the library. These changes take time, but are no less a priority for the library.

We note here that there are many other survey results and forms of feedback received and reviewed through the library assessment process. This report deals with the results of just the two most comprehensive and formal surveys.

David Pinto
Director of the Library
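The survey analysis that follows reports, for each LibQUAL+® question, users' minimum, desired, and perceived levels of service and the gaps between them. The short Python sketch below illustrates how such adequacy and superiority gap scores are typically computed; the item names and ratings used here are invented, not Stockton's survey data.

# Adequacy gap = perceived - minimum; superiority gap = perceived - desired.
ratings = {
    "Quiet space for individual activities": {"minimum": 6.6, "desired": 8.0, "perceived": 6.3},
    "Space for group study/research":        {"minimum": 6.4, "desired": 7.8, "perceived": 6.7},
}

for item, r in ratings.items():
    adequacy = r["perceived"] - r["minimum"]
    superiority = r["perceived"] - r["desired"]
    zone = "red zone (below minimum)" if adequacy < 0 else "meets minimum expectation"
    print(f"{item}: adequacy {adequacy:+.1f}, superiority {superiority:+.1f} -> {zone}")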
LibQUAL+® SURVEY REPORTS, 2005 vs. 2008

SUMMARY

A LibQUAL+® survey comprises three dimensions: Affect of Service, Information Control and Library as Place. For each dimension users are asked about their desired level of service, the minimum level of service they would accept, and what they perceive to be the actual level of service provided. Additionally, it includes local questions specific to Stockton users. The Library Assessment Committee analyzed the data from the 2005 and 2008 surveys. The following report details the results as well as important points of comparison between Stockton Library and other libraries participating in the Virtual Academic Library Environment consortium of New Jersey academic libraries. Through the analysis we found that faculty, students and staff had higher expectations in 2008 than in 2005. Affect of Service rated significantly higher in 2008 than in 2005. In addition, results revealed that there was a greater demand for more electronic resources, journals, adequate modern equipment and space for quiet study and group work.

OBJECTIVES

The objectives of the Committee in analyzing the LibQUAL+® data were:
- To identify areas in which feasible improvements might be made
- To offer constructive suggestions on how these improvements might be accomplished
- To highlight the areas that received positive comments from respondents

1. Respondents

In 2005, 345 undergraduate students, 16 graduate students, 61 faculty, 9 library staff and 26 college staff members participated in the survey. In 2008, 239 undergraduate students, 42 graduate students, 82 faculty, 10 library staff and 38 college staff members responded to the survey. The number of graduate student, faculty and staff respondents in 2008 increased; however, overall, the total number of participants dropped by 10%. Table 1 in the Appendix illustrates a comparison of the groups' participation in the two surveys.

2. Analysis of the Three Dimensions

The dimensions were analyzed by user groups. Within each group, a comparison was made of the results of the 2005 and 2008 surveys. Additionally, an evaluation was conducted between Stockton and other VALE institutions' 2008 survey results. Indications and trends are summarized when applicable. Finally, recommendations are offered in areas in which improvements are feasible.

2.1. Affect of Service (AS)

Questions related to this dimension:
AS-1. Employees who instill confidence in users
AS-2. Giving users individual attention
AS-3. Employees who are consistently courteous
AS-4. Readiness to respond to users' questions
AS-5. Employees who have the knowledge to answer user questions
AS-6. Employees who deal with users in a caring fashion
AS-7. Employees who understand the needs of their users
AS-8. Willingness to help users
AS-9. Dependability in handling users' service problems

2.1.1. Dimension summary
- Data indicated that the expectations of all groups in Affect of Service significantly increased in 2008 (see Table 2, Appendix).
- Overall, the Library's service in this dimension was rated significantly higher in 2008.

2.1.2. Summary by user groups

Undergraduate
Undergraduates' minimum and desired levels of service rose in 2008, but so did their perception of how the library is providing service. Currently, the library is holding its own, but, if this trend continues, it must keep pace with expectations. Undergraduate comments identified areas in which the library might improve. There were several comments about staff speaking too loudly or in an unprofessional manner, and there was one criticism of unhelpful student workers.

Graduate
The desire of graduate students for faster response to users' questions increased dramatically, but their perception of the library's readiness to respond did not significantly change.

Faculty
Due to the small number of faculty respondents, no significant changes were noted between the 2005 and 2008 surveys. However, scores for "Employees who are consistently courteous" decreased, and so did "Dependability in handling of users' service problems" in 2008.

Staff
There were no significant changes in the staff's ratings of Affect of Service. However, "Employees who instill user confidence" rose 0.39 points, while "Readiness to respond to users' questions" and "Dependability in handling of users' service problems" decreased 0.36 and 0.38 points respectively.

2.1.3. Comparison with VALE
- Stockton undergraduates' and faculty's minimum and desired expectations were higher than those of their VALE peers. Graduate students' expectations were slightly lower than VALE's.
- The perception of service of all groups in this dimension was higher than VALE's, with the exception of graduate students, who rated this area lower (Table 2.1, Appendix).
- Stockton staff responded that in "Readiness to respond to users' questions" the library even met their desired expectation. Since meeting users' desired expectations is very difficult to achieve, the library deserves to be commended on this point. All other areas were considered below desired expectations by both Stockton and VALE users. However, the overall Stockton scores were higher than VALE scores.

2.1.4. Spotlight areas (positive areas): All nine areas in this dimension.

2.1.5. Alerted area (area for improvement): If any area was deemed lower than another, it was "Employees who understand the needs of their users" (AS-7).
2.1.6. Recommendations
It would seem that in Affect of Service, the professionalism and courtesy of the library staff were questioned more than their ability. Recommendations therefore are focused on service over knowledge.
1) Meet with the entire library staff to discuss the LibQUAL+® results.
2) Hold workshops for Public Services employees devoted to improving service.

2.2. Information Control (IC)

Questions related to this dimension:
IC-1. Making electronic resources accessible from my home or office
IC-2. A library Web site enabling me to locate information on my own
IC-3. The printed library materials I need for my work
IC-4. The electronic information resources I need
IC-5. Modern equipment that lets me easily access needed information
IC-6. Easy-to-use access tools that allow me to find things on my own
IC-7. Making information easily accessible for independent use
IC-8. Print and/or electronic journal collections I require for my work

2.2.1. Dimension summary
- Overall, all user groups' expectations in Information Control were higher in 2008 than in 2005 (Table 3, Appendix), with significantly higher expectations for remote access to electronic resources and the Library's web site, modern equipment, and making information easily accessible for independent use.
- On average, perception of Information Control service for all user groups scored higher in 2008 than in 2005. Remote access to electronic resources and making information easily accessible for independent use showed significant positive differences.
- Due to users' higher expectations in 2008, library service in Information Control failed to meet their minimum expectations, especially for modern equipment and print and/or electronic journals.

2.2.2. Summary by user groups

Undergraduate
- The expectations of the undergraduate students have been rising.
- They rated their expectations of having "The printed library materials I need for my work" significantly higher in 2008 than in 2005.
- They considered the library's print and/or electronic journals below their minimum expectations.
- They were less happy about remote access to electronic resources, the library's web site, and modern equipment in 2008 than in 2005.

Graduate
- The minimum expectations of the graduate students in Information Control have also been rising, with the expectation of modern equipment significantly higher.
- Desire for remote access, user-friendly tools and access to the library's web site was higher in 2008.
- Perception of the library's Information Control service overall was rated higher in 2008, with the library's web site rating the highest.
- Nevertheless, in 2008, all areas were considered below minimum expectations, with the exception of "Making information easily accessible for independent use," which scored above minimum expectations.
Faculty
• Faculty expectations in Information Control have also been rising, except for print materials.
• Faculty were slightly unhappy about the library’s web site, print materials, electronic resources, and easy‐to‐use tools.
• Faculty’s perception of service in most of Information Control was below their minimum expectations.
• Modern equipment, making information easily accessible for independent use, and print and/or electronic journals were seen as slightly better in 2008 than in 2005.
Staff
• Staff expectations were lower overall in 2008 than in 2005, except for “remote access to electronic resources,” “making information easily accessible for independent use,” and “print and/or electronic journals,” which were higher.
• They were more satisfied with Information Control in 2008 than in 2005, particularly in the area of print and/or electronic journals.
• Although the desired expectations of staff were not met in either year, the gap between desired and perceived service was narrower in 2008, especially for remote access to electronic resources.

2.2.3. Comparison with VALE
• The expectations of all Stockton user groups were slightly higher than those of their VALE peers in this dimension.
• Stockton students were slightly unhappier in all areas of this dimension than their VALE peers, except for remote access to electronic resources and the library’s web site. They were especially unhappy about a perceived lack of availability of print and/or electronic journal collections.
• Stockton faculty rated Information Control higher than VALE faculty did on most questions, except for modern equipment, easy‐to‐use access tools, and print and/or electronic journals.

2.2.4. Spotlight areas:
• “Making electronic resources accessible from my home or office.”
• “Having the printed library materials I need for my work.”
• “Making information easily accessible for independent use.”

2.2.5. Alerted areas:
• “The electronic information resources I need.”
• “Modern equipment that lets me easily access needed information.”
• “Print and/or electronic journal collections I require for my work.”

2.2.6. Recommendations
1) Form a focus group and open a communication channel with graduate students and faculty.
2) Design and distribute questionnaires to these two groups to find out why they are less satisfied in certain areas.

2.3. Library as Place
Questions related to this dimension:
LP‐1. Library Space inspires study/learning
LP‐2. Quiet space for individual activities
LP‐3. Comfortable and inviting location
LP‐4. A getaway for study/learning/research
LP‐5. Space for group study/research

2.3.1. Dimension summary
When all groups surveyed are included, Stockton Library met or exceeded the minimum for all five questions in 2008, although the overall minimum expectations increased (Table 4, Appendix). The overall picture, however, is misleading. The results are very different when the responses of undergraduate and graduate students are examined separately from those of faculty and college staff. Undergraduates found the library to be less than adequate on all five questions, while graduate students rated it less than adequate in all but one area, “Comfortable and inviting location.” The comments of respondents offer insight into what lies behind the numbers: noise was mentioned most frequently, followed by poor lighting, the need for more individual and group study rooms, and more seating.

2.3.2. Summary by user groups
Undergraduate
• The minimum expectation increased for all five questions, with a significant increase for “Library as a getaway for study/learning/research.”
• Although not statistically significant, the perceived and desired service scores increased for all five questions.
• The negative gap between the minimum and perceived scores widened for all five questions. Although the changes were not statistically significant, the fact that the adequacy scores are in the red zone indicates that these five questions should be investigated further.
Graduate
• All of the minimum expectations rose in 2008, indicating an increase in student expectations, except for “Library Space inspires study/learning.”
• In all but one question, the perception of the Library as Place was below minimum expectations in 2008, especially as a "quiet space for learning and research" and "space for group study/research."
• The desire for quiet space, a comfortable location, and a getaway for study and learning increased. Minimum expectations for the library to provide quiet space rose from 2005 to 2008 while the perception of it providing that space decreased.
• Space for group study and research is important to graduate students. Their expectations rose in 2008 and, while the Library scored slightly higher than in 2005, it remains in the negative range, with a difference of 0.63 between the minimum expectation and the perceived score.
• The library surpassed minimum expectations as a comfortable and inviting location, but the minimum expectations rose more than the perception. The perception score in 2008, while still in the positive range, was lower than in 2005.
Faculty
• Minimum expectations for faculty on all five questions were lower in 2008.
• Desired service scores increased slightly in all areas except “comfortable and inviting location” and “space for group study and research.”
• Perceived service increased in all areas with the exception of “quiet space” and “getaway for study, learning and research.”
Staff
• Staff perceived the library as having improved since 2005 in “Quiet space for individual activities,” “Comfortable and inviting location,” and “A getaway for study/learning/research.”
• They perceived the library as doing worse in “Library Space inspires study/learning” and “Space for group study/research.”
• The lower level of the library received lower ratings for “library as a place that inspires study and learning” and “Space for group study/research.”

2.3.3. Comparison with VALE
• Minimum expectations of Stockton undergraduate students on Library as Place in questions 1 through 4 were higher than those of their peers in other VALE libraries, particularly for the library as a getaway for study/learning and research, but lower for space for group study/research.
• Stockton graduate students had lower minimum and desired expectations on Library as Place than their peers at other VALE libraries.
• Stockton undergraduates perceived their Library as Place as below their minimum expectations in all areas, with the largest negative score for availability of quiet space for individual activities, while VALE undergraduate students perceived their Library as Place as better than the minimum on all five questions (Table 4.1, Appendix).
• Stockton graduate students rated the library as below their minimum expectations on every question except the library as a comfortable and inviting location, although the perceived score on this question was lower than in the 2005 survey.
• Faculty and staff had lower minimum and desired expectations than their colleagues at other VALE institutions.

2.3.4. Spotlight areas
The only area that deserves a qualified spotlight is “A comfortable and inviting location.”

2.3.5. Alerted areas:
• “Quiet space for individual activities” (noise was the most common complaint).
• “Library space that inspires study and learning” (inadequate space and a shortage of seating).
• “A getaway for study, learning or research.”
• “Community space for group learning and group study” (need for more group study rooms and quiet study space).
• “A comfortable and inviting location” (poor lighting makes the library unattractive and difficult for reading; uncomfortable furniture, or not enough comfortable furniture).

2.3.6. Recommendations:
1) Hold sessions with staff on noise awareness/reduction.
2) Improve lighting.
3) Find areas for placement of group and individual study rooms.
4) Arrange focus group sessions to investigate further what each user group would like from their Library as Place.
5) Increase comfortable seating.

2.4. Local Questions
1. Availability of online help when using my library's electronic resources
This question was not applicable.
2. Contribution to the intellectual atmosphere of the campus
Undergraduates are the most positive of the three groups on this question, with both faculty and graduate students holding negative perceptions.
3. Making me aware of library resources and services, and
4. Teaching me how to locate, evaluate and use information
The numbers differ, but the trends are the same for questions 3 and 4. Faculty and graduate students rate both of our efforts fairly positively. The undergraduates, on whom we concentrate our instruction efforts, are much more critical.
5. Efficient interlibrary loan / document delivery
Undergraduates are not as concerned as graduate students with ILL; graduate students showed the most concern, while faculty’s perception of ILL is the highest. The survey currently being conducted by ILL should offer more information.

KEY FINDINGS
1) Users had higher expectations in 2008 than in 2005. Affect of Service and Information Control expectations were significantly higher in 2008.
2) Affect of Service was rated significantly higher in all areas in 2008 than in 2005. The Library is seen as improved in this dimension.
3) Users want more electronic resources and journals.
4) Users want adequate, modern equipment.
5) There is a need for quiet study space and space for group work, as well as more seating.
6) Users wish to have better lighting and comfortable furniture.

RECOMMENDATIONS
1) Hold a general library staff meeting to discuss the LibQual+® results. Initiate regular workshops for Public Services employees devoted to improving service.
2) Form focus groups to find out the needs of students and faculty.
3) Improve lighting and increase seating, including comfortable seating.
4) Investigate areas for placement of group and individual study rooms.

APPENDIX

Table 1: Comparison of RSC Population and Respondents, 2005 vs. 2008

Group       Year   Population   Respondents
Undergrad   2005   6,579        345
Undergrad   2008   5,746        239
Graduate    2005   423          16
Graduate    2008   576          42
Faculty     2005   410          59
Faculty     2008   455          82
Staff       2005   650*         26
Staff       2008   701          38
*Estimate.
Table 1.1: Comparison of RSC Population and Respondents in Chart, 2005 vs. 2008
[Chart of population and respondent counts for each user group, 2005 vs. 2008.]

Table 2: RSC Users’ Overall Minimum Expectations, Affect of Service, 2005 vs. 2008
[Chart of minimum expectation scores for AS‐1 through AS‐9, 2005 vs. 2008.]

Table 2.1: Users’ Perceived Service, Affect of Service, RSC vs. VALE, 2008
[Chart of perceived service scores for AS‐1 through AS‐9, RSC vs. VALE.]

Table 3: RSC Users’ Overall Minimum Expectations, Information Control, 2005 vs. 2008
[Chart of minimum expectation scores for IC‐1 through IC‐8, 2005 vs. 2008.]

Table 3.1: Users’ Perceived Service, Information Control, RSC vs. VALE, 2008
[Chart of perceived service scores for IC‐1 through IC‐8, RSC vs. VALE.]

Table 4: RSC Users’ Overall Minimum Expectations, Library as Place, 2005 vs. 2008
[Chart of minimum expectation scores for LP‐1 through LP‐5, 2005 vs. 2008.]

Table 4.1: Users’ Perceived Service, Library as Place, RSC vs. VALE, 2008
[Chart of perceived service scores for LP‐1 through LP‐5, RSC vs. VALE.]
Note: Source of tables: LibQual+® Surveys, 2005 and 2008. For convenience, the three dimensions are listed below:

Affect of Service (AS)
AS‐1. Employees who instill confidence in users
AS‐2. Giving users individual attention
AS‐3. Employees who are consistently courteous
AS‐4. Readiness to respond to users’ questions
AS‐5. Employees who have the knowledge to answer user questions
AS‐6. Employees who deal with users in a caring fashion
AS‐7. Employees who understand the needs of their users
AS‐8. Willingness to help users
AS‐9. Dependability in handling users’ service problems

Information Control (IC)
IC‐1. Making electronic resources accessible from my home or office
IC‐2. A library Web site enabling me to locate information on my own
IC‐3. The printed library materials I need for my work
IC‐4. The electronic information resources I need
IC‐5. Modern equipment that lets me easily access needed information
IC‐6. Easy‐to‐use access tools that allow me to find things on my own
IC‐7. Making information easily accessible for independent use
IC‐8. Print and/or electronic journal collections I require for my work

Library as Place (LP)
LP‐1. Library Space inspires study/learning
LP‐2. Quiet space for individual activities
LP‐3. Comfortable and inviting location
LP‐4. A getaway for study/learning/research
LP‐5. Space for group study/research
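For readers unfamiliar with how the gap figures cited in this report (for example, the 0.63 difference between the minimum expectation and the perceived score) are derived, the following is a minimal illustrative sketch of the standard LibQual+® gap calculations: the adequacy gap is perceived minus minimum, and the superiority gap is perceived minus desired. The scores used below are hypothetical placeholders, not survey results.

# Minimal sketch of the LibQual+(R) gap calculations referenced in this report.
# The scores below are hypothetical placeholders, not actual survey data.

def gap_scores(minimum: float, desired: float, perceived: float) -> dict:
    """Return the two standard gap measures for a single question."""
    return {
        # Adequacy gap: perceived service relative to the minimum acceptable level.
        # Negative values correspond to the "red zone" flagged in the summaries above.
        "adequacy_gap": round(perceived - minimum, 2),
        # Superiority gap: perceived service relative to the desired level.
        # This is usually negative; meeting desired expectations is rare.
        "superiority_gap": round(perceived - desired, 2),
    }

# Example with made-up scores on the 1-9 survey scale:
print(gap_scores(minimum=6.8, desired=7.9, perceived=6.2))
# -> {'adequacy_gap': -0.6, 'superiority_gap': -1.7}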
ARTV Program Assessment of Student Learning
Like most American colleges and universities, Stockton College is currently
involved in an ongoing study of outcomes assessment. The Visual Arts Program has
been a willing participant in this college-wide effort.
Introduction
The Visual Arts Program utilizes long-standing practices of outcomes
assessment (though not always articulated in these terms) that are widely practiced
in studio art curricula. We use various techniques to establish an ongoing system of
assessment of our students’ work in the areas of practice, process, and
conceptualization. Nearly every visual arts studio course uses a series of problem
solving assignments, group critique, and individual review, in addition to the use of
grades and written feedback to provide students with multiple layers of evaluation
and commentary. Our studio classes meet for additional contact hours (beyond the
standard hours for a full credit lecture course) to accommodate this pedagogy.
The concepts of portfolio, student centered learning, and assessment of specific
skills and knowledge are thoroughly integrated into our teaching and curriculum.
We are continually seeking methods to improve our teaching, assessment, and
outcomes. Art careers often require a “long view” to measure success in the context
of post-graduate activity. Our alumni are active professionals in areas that
include gallery and museum exhibitions, self-employed freelance work, advertising
agencies, design companies, photography studios, graduate study in the arts, art
education, museum work, and academic positions at colleges and universities
throughout the United States. In addition to careers in the arts, our students have
pursued a wide range of professional activity, including business, social services,
healthcare, information technology, and other fields.
The portfolio is the single most important aspect of evaluation in the arts. We use
the portfolio (a collected sample of the student’s best works) as the basis for many
evaluations of student progress and teaching effectiveness. Most classes (with the
exception of Art History and Arts Management) require the student to submit works
of art, which lead to either a final portfolio review for the term or both a midterm
and final portfolio review.
I. Entrance Portfolio Review
Our program has an entrance or Incoming Portfolio Review. All students who wish
to declare a major in the ARTV program are required to submit a portfolio. Students
without a portfolio may choose a provisional status while taking the courses
necessary to create an acceptable entrance portfolio.
a. Assessment Measure
1. Candidate portfolio is placed online for faculty review.
The submission consists of twelve images; still-life drawings and a self-portrait
drawing must be included in the application.
b. Frequency
1. Portfolios are reviewed every semester for each applicant, whether
freshman, transfer, or already matriculated. We have eighty-six portfolios online so far.
c. Precipitating factors
1. We have always offered an optional portfolio review for freshmen, and
required one for transfers in order to allocate transfer credits.
2. This portfolio documentation of incoming student work can provide an
initial baseline for comparison to later portfolio documentation.
3. At one point, Vice President Nazzaro promised us that up to ten students
could be admitted to the college based on our recommendation, even if their SAT
scores and grades were otherwise lower than usual.
d. Findings
1. Overall assessment: Of the 86 applicants, 72 were accepted and 14 did not
meet the criteria and were asked to resubmit. Six were given strong recommendations
to the Admissions office for acceptance to the college based on the portfolio review.
There was a mix of freshman and transfer applicants.
e. Recommendations
1. Continue with portfolio reviews for applicants. Increase the ease of
submission to include in-person reviews as well as electronic submission.
2. We are not sure that Admissions is currently heeding our
recommendations at all. It would be appropriate to reinstate some influence of the
program's recommendations on admissions decisions for Visual Arts majors.
II. Junior Portfolio Review
a. Assessment Measures
1. Our program has transformed the Junior Portfolio Review into an entrance
or Incoming Portfolio Review (see above).
In the past, the entire faculty reviewed portfolios during the Junior Portfolio
Review and through the production of the Senior Project exhibition, which is
required of every student. The Junior Portfolio Review has been a program practice
for more than a decade and is not unlike the old junior writing exam held
college-wide. All art students with a specific number of credits were required to
submit a portfolio, and the entire faculty reviewed it. Proficient students were
permitted to register for Senior Project in their track, marginal students had to
resubmit the portfolio at the end of the term, and deficient students had to
successfully complete a specified course or series of projects before proceeding to
Senior Project.
b. Frequency
1. Every semester at Preceptorial advising time, for over ten years.
c. Precipitating factors
1. This was a way to validate student readiness for the Senior Project classes.
d. Findings
1. Roughly 10-20% were asked to resubmit portfolios after taking requisite
courses.
2. We have decided to no longer use the junior portfolio as an evaluation instrument
for Senior Project. Though it did give us a broader view of ARTV junior work on
the whole, and of the kinds of learning taking place in other classes, we have
found that passing grades (currently C or better) in the prerequisite junior classes in a
particular track or concentration are the primary indicators of success in the Senior
Project class.
III. Assessment of ARTV Studio Course work
a. Assessment Measure
1. Most classes (with the exception of Art History and Arts Management)
require the student to submit works of art, which lead to either a final portfolio
review for the term or both a midterm and final portfolio review. Most of our classes
go beyond the common practice in arts curricula by providing students with an
individual review of assignments, group critiques, and a grade.
Students build portfolios for each class as a gradual culmination of skills and
effort. The faculty evaluation of the portfolio is based upon clearly stated
expectations for work created during the course. These expectations are stated in
various ways that are appropriate for the content of each course.
The two semesters of Senior Project in the former BA and the current BFA
culminate in a senior exhibition of a focused body of work. For most this body of
work becomes the portfolio that they present to potential employers, graduate
schools and other career paths upon graduation.
b. Frequency
1. As a program, we ask that every faculty member provide a clear
and detailed syllabus; in addition, many faculty use other means of communicating
expectations and methods of evaluation. Each faculty member also gathers
examples of student work to retain for their own records.
2. We had outside evaluations of student work as part of NASAD consultant
Joseph M. Ruffo’s report for our five-year program review in 2007, and from external
examiner Kurt Wisneski as part of our state application for our proposed BFA in 2009.
c. Precipitating factors
1. Some classes use specific rubrics for the grading of student work while
others use more generally stated goals. In every case the faculty provides students
with examples of excellent work as a means of establishing a specific understanding
of expectations.
d. Findings
1. Student work was reviewed by our visiting external examiner Kurt
Wisneski, Professor in the Fine Arts Department at the University of Massachusetts
Dartmouth, in June 2009. Professor Wisneski approved the submission of our BFA proposal. His
evaluation of student art work was positive. “The paintings, drawings, sculpture,
and visual designs represented quality in my opinion. From what I saw as far as the
formally exhibited artwork on the walls (a very nice endeavor, by the way) and
loose artwork on tables, the curriculum discussed at the External Examination
meeting is being applied to students in their solutions. So In my opinion, the
curriculum and instruction is well thought out and the results stemming from the
classes represent a product from a quality program”.
2. Joseph M. Ruffo, Professor Emeritus, Department of Art and Art History at
the University of Nebraska-Lincoln, reviewed student work exhibited throughout the
building, in open display areas, display cases, and classroom studios, and on CDs
provided by faculty. His review
“indicated that student work, overall, appears to demonstrate full engagement with
conceptual thinking, risk-taking, and the utilization of the broad range of cultures
and history. Classroom visits showed students actively engaged with instructors and
classmates. The work appeared to strongly advance toward development of
individual viewpoint with good conceptual ideas, techniques and a certain degree of
aesthetic style.
The foundations student work appeared to display competent explorations.
There were examples that suggested student accomplishment with skill level and
engagement with formal concerns.
In most areas students appear to be gaining the knowledge, skills, craft, and
the abilities expected for their needs. They seem to be able to apply this knowledge
to produce work appropriate for specific media.”
Professor Ruffo indicated the painting samples were less accomplished, but
demonstrated acceptable levels. He noted that with the advent of the BFA there
would be higher expectations from NASAD with respect to competencies
and outcomes. However, it must be noted that the painting program was in a faculty
transition year in 2007, and the suggested improvements were in place for the
subsequent review in 2009 by visiting examiner Professor Wisneski.
Professor Ruffo concluded: “The work in photography, printmaking, visual
communications illustration and graphic design was very accomplished from both a
technical level and from a standpoint of creative investigation.”
3. We concur with Professors Wisneski and Ruffo on the whole. However,
the variety of portfolio types and selection methods in our records of class
work makes broad-based comparative assessment over time difficult.
e. Recommendations
1. Implement use of Picasa for online documentation of Senior Project classes.
Use the same classification categories of work across the Visual Arts tracks.
2. Phase in implementation of online documentation to include samples from
core and junior level studio classes.
3. Rationale
We believe that the value of Picasa is that it provides an assessment tool that
is consistent with the pedagogy of studio arts instruction as described above.
Assessing visual arts requires professional judgment. There is no good quantitative
measure for assessing student performance in the arts independent of professional
judgment. Indeed, attempts to impose quantitative assessment measures on art have
proven counterproductive, limiting students' creative responses rather than
fostering them.
"... they [quantitative assessment techniques] can be difficult to
apply to some of the more unusual outcomes and inhibit some
of the more creative responses...that reflect contemporary practices
in art and design. In effect, when there is over -reliance on criteria
they act as a regulatory device through which both teaching and
learning practices are normalized. " - Problems of Assessment In
Art and Design, Trevor Rayment, ed., pp. 15 - Rachel Mason, John
Steers
"...given that there is no finite definition of art, the process of art
education must allow for idiosyncrasy, divergence, and uniqueness."
- Problems of Assessment In Art and Design, Trevor Rayment, ed.,
pp. 20 - Rachel Mason, John Steers
We believe that organizing class results within Picasa will provide a means of
assessment that is broader in scope across classes and across time periods.
Instructors and other outside evaluators will have an opportunity to compare
assessment strategies or evaluative criteria over greater breadth, depth and history.
This system could also provide for more complete analysis over time of individual
student development, course development, and program development.
The beauty of Picasa, or any searchable visual database, is that it is not a
substitute for professional judgment. It simply extends the reach of that judgment
across time, and among peers. Using this newly available technology, we are in a
position to evaluate our students and our program in a way that was simply not
feasible even 5 years ago.
Once we phase in use of Picasa to take a reasonable sample across ARTV
classes, we anticipate being able to use this system for accreditation purposes. In
this way, we will not only be able to see how the students are doing, but how we are
doing. As a program, we all teach different subjects, which require different
techniques, and have different standards. Nevertheless, using Picasa, we anticipate a
more robust and nuanced peer-to-peer dialogue about evaluation criteria. Beyond
the individual, we can begin to understand what constitutes good work in the eyes
of the program as a whole. Finally, with its online aspect, using Picasa opens the
possibility of having student work evaluated more extensively by peers and colleagues
outside the college.
IV. Assessment of Art History
Assessment of learning outcomes in the area of art history involves assessing
three groups of students: studio art majors (who comprise the majority of ARTV
majors), non-art majors who take art history courses as electives, and art history
majors.
a. Assessment Measures
1. All three groups of students are assessed through papers and exams over
the course of the semester, resulting in a final grade that reflects the individual
student’s abilities and improvement with a fair degree of accuracy. Papers are
graded on the quality of the student’s research and writing, as well as the student’s
synthesis of material and understanding of the topic. Tests involve both
quantitative/factual means of assessment and qualitative/written forms of
assessment. In-class discussion is used to help students gain verbal fluency with
terms and concepts.
b. Frequency
1. Most are over the course of each semester. Art history majors also
participate in a two-semester senior project. In the first semester they create a
portfolio of written assignments, including website essays, press releases and wall
signage, exhibition reviews, exhibition proposals, and “article reports” that involve
summarizing a variety of theoretical articles on topics in art history (for example,
feminist, political, psychological, and formal points of view). During the second
semester of senior project, art history majors research and write a paper on a topic
chosen in consultation with their advisor.
Assessment of the paper is currently based on a grade determined by the quality of
the student’s research and writing, as well as the student’s synthesis of material and
understanding of the topic.
c. Precipitating factors
1. Standards of assessment for BA Art History majors.
d. Findings
1. In recent years, the art history cohort has averaged between 3 and 7
students graduating per year, a few of whom have gone on to graduate school and
careers in the field.
Biology Program - Update on Learning Outcomes & Assessment
October 19, 2010
Prepared by M. Lewis, with assistance from K. York and L. Smith
The faculty:
During the last few years, the Biology Program has undergone a drastic turnover in faculty. Most of the
senior faculty who were hired in the early years of Stockton retired and many new and replacement
faculty were hired. As of Fall 2010, the last of the original Biology faculty, Professor Wood, has entered
the Transition Program. Currently, we have 13 faculty, roughly half of whom are either untenured or
have received tenure within the last few years.
Changes in the curriculum:
Over the last ten years, some minor changes have been made to the requirements for the Biology major.
These changes include the requirement of "C" or better in all core courses and an increase in the
amount of chemistry and/or physics required. Our "Preparation for Research" course (PFR) was also retooled several years ago.
One critical change occurred during the last year: we switched the sequence of our two introductory
courses. Instead of beginning with the theory of evolution and the biodiversity of organisms (formerly
BIOL 1100/1105 Organisms and Evolution, and changed to BIOL 1400/1405 Biodiversity and Evolution),
we begin with cellular and molecular levels of biological organization formerly presented in the second
of our core courses (BIOL 1200/1205 Cells and Molecules) followed by BIOL 1400/1405. Increasingly,
the evidence for evolution and the biodiversity of organisms is based on cellular and molecular processes
along with natural history and organismal-level knowledge. Therefore, we think that teaching cellular
and molecular biology before the theory of evolution and the biodiversity of organisms may enhance and
clarify student understanding of basic biology. In addition, our students generally have greater exposure
in high school to cellular and molecular biology than to the theory of evolution.
Thus, students are likely to be more comfortable with BIOL 1200 Cells and Molecules in the fall
semester of their first year, a critical point in an entering freshman’s successful orientation to college.
As in the past, students take BIOL 2110/2115 (Genetics) after completing the two introductory courses.
Program-wide Assessments:
For many years, the Biology Program has assessed scientific literacy, scientific skills (e.g., statistical
analysis, familiarity with the metric system, graph interpretation), and knowledge/attitudes of critical
concepts (e.g., evolutionary theory) within our required junior-level Preparation for Research course
(BIOL 3600). Our decision to change the order, and eventually the content, of our core courses was partially
based on many years of data from this particular assessment instrument. The results from these
assessments showed that our students were not acquiring and retaining the knowledge covered in
our introductory courses.
To assess the effectiveness of the change in the sequence of our introductory courses, we have
developed an assessment tool to be administered in another of our core courses, Genetics (BIOL 2110).
In the academic year of 2010/2011, students taking Genetics will be a mix of those who experienced the
old and new sequence of Cells and Molecules (1200/1205) and Biodiversity and Evolution (1400/1405),
along with transfer students. Data are collected so that we can determine whether each student took
the old O&E/C&M sequence, the new C&M/B&E sequence, both intro courses at another
institution, or one intro course at another institution and one at Stockton (and which course they took
here). Thus, we will be able to quantify and compare not only how old- and new-sequence students
respond to questions that focus on the retention of knowledge from our introductory course sequence,
but also how these students fare in comparison to students who transferred in with one or both
introductory courses taken elsewhere. A copy of our assessment instrument appears in the appendix of
this report.
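The planned comparison amounts to grouping survey responses by introductory-sequence cohort and comparing performance on the content questions. The sketch below illustrates that tabulation under stated assumptions: the records, field names, and scoring are hypothetical placeholders, not the program's actual data or analysis procedure.

# Illustrative sketch only: hypothetical records, not actual survey data.
# Each record notes which intro sequence a student took (survey items 6-12)
# and how many content questions (items 13-29) they answered correctly.

from collections import defaultdict
from statistics import mean

responses = [
    {"cohort": "old_OE_then_CM", "correct": 11},
    {"cohort": "new_CM_then_BE", "correct": 13},
    {"cohort": "transfer",       "correct": 9},
    {"cohort": "new_CM_then_BE", "correct": 12},
]

# Group the scores by cohort.
by_cohort = defaultdict(list)
for r in responses:
    by_cohort[r["cohort"]].append(r["correct"])

# Compare the mean number of content questions answered correctly per cohort.
for cohort, scores in by_cohort.items():
    print(f"{cohort}: n={len(scores)}, mean correct={mean(scores):.1f}")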
Professor Sedia and Professor York attended the Assessment Workshop in the summer of 2010. Karen's
project was focused on program assessment and analysis of the Preparation for Research data, while
Kathy's project focused on developing an instrument for assessment of GNM courses.
Other program-wide assessment projects target specific courses or concepts. For example, Professor
Lague evaluates student understanding of evolutionary concepts at the beginning of his upper-level
Human Evolution course (BIOL 3240) to assess retention of knowledge from introductory classes, as well
as general attitudes towards evolution. At the end of the course, students are given the same test to
determine whether the course has changed their understanding and/or attitudes towards evolution.
Most importantly, the Biology Program has decided that it is important to review what we are currently
teaching in our core courses to assess whether it still meets the needs of our students (both major and
non-majors) today. We are working to ensure that we have broad program agreement about specific
learning goals for these courses. Additional assessment projects include assessing our required, junior-level Preparation for Research course and assessing the curricular choices of our graduates.
A primary goal of all of our assessment activities is to develop a curriculum map to track where key
concepts are offered in Biology. Currently, we are at the point of identifying those key concepts.
Initial Objectives:
In our efforts to develop a curriculum map, we are currently working on identifying Learning Objectives
(LOs) for the three core courses. It is particularly important to determine whether the LOs form a
coherent structure that allows courses to build upon one another. While repetition is key to
learning, we will be able to determine whether any LOs are being repeated across courses
more than the program feels necessary. We can also determine whether there are additional LOs that
should be added in to the core courses and whether we need a fourth core course to accommodate the
additional material.
The Biology faculty have divided into three groups: one for each core course. The following faculty
members volunteered to chair an LO committee for each core course:
a. Biodiversity & Evolution (B&E) - chaired by Professor Harmer-Luke
b. Cells & Molecules (C&M) - chaired by Professor Burleigh
c. Genetics - chaired by Professor York
Each committee is populated by faculty who regularly teach that particular core course.
Preliminary Results:
We began by examining syllabi from all faculty teaching core courses. While C&M and Genetics syllabi
did not show much variation, B&E syllabi indicated that while the vast majority of material was taught
by all faculty, there was some variation in how much physiology and ecology was taught by different
individuals. While work on the LOs for C&M and Genetics progresses, B&E faculty are currently working
on LOs for the shared content of their course. LOs for more variable B&E content (i.e., physiology and
ecology) will be created and their relevance debated. Once all LOs are in place for all courses, we can
proceed with any necessary restructuring.
In addition to the work on LOs, Professor York shared some additional data from her summer
Assessment Workshop as another step towards overall curriculum analysis. Professor York analyzed
what students in the 2009-2010 academic year had taken for their upper-level courses, both by type and
by specific course. The majority of students had taken an upper-level "plants" course. Fewer had taken
upper-level "evolution," "biostats," and similar courses.
Key issues that came up during the first few meetings include:
1. Should the amount of physiology be increased (or required) in core courses and, if so, how?
(based on requirements in various standardized tests, e.g., GRE)
2. Should the amount of ecology be increased (or required) in core courses and, if so, how? (based
on requirements in various standardized tests, e.g., GRE)
3. Do we need a 3rd "intro" course before Genetics to accommodate increased amount of
information within our field?
4. Although we re-organized the Preparation for Research course several years ago, how are
we working to increase the understanding and appreciation of scholarly literature in both our
lower and upper-level courses? What type of scientific writing and reading skills are our majors
getting across BIOL courses?
5. Upper-levels: What upper-levels are our students taking and why? What are our students
actually learning in upper-level courses? What concepts (or LOs) cross-cut our upper-level
courses? Should upper-level courses be ordered into categories that students have to choose
from or are we okay with students specializing while at the BA/BS level? (This may first
necessitate a discussion of what our degree is all about, particularly when an increasing number
of students are applying to some type of graduate school.)
6. What field, internship, independent study, lab, etc. experiences are our students getting after
leaving the core courses?
We plan to address these questions and any others that arise as we work towards developing our
learning outcomes and a curriculum map.
Conclusion:
In sum, the Biology faculty are excited about examining our curriculum. While we could easily churn out
learning outcomes for our current courses, we prefer to re-examine our curriculum to determine
whether these learning outcomes are still appropriate and relevant for students in their post-Stockton
career. Having clear learning outcomes that all faculty have agreed upon will bring us more in line with
the intent of the original faculty: a shared knowledge of what a specific core course is about and a clear
understanding of what constitutes tolerable variability between sections. These learning outcomes will also
provide a general framework for thinking about how upper-level courses, independent research, and
internship experiences relate to one another, thus allowing the building of our curriculum map.
Appendix: Tool for assessing impact on knowledge retention from the change in the sequence
of our introductory courses.
Name: __________________________
Intro Biology Assessment— Fall 2010
This survey will be used to evaluate student learning in the introductory biology courses. The
results will NOT affect your grade, but accurate information is important to how we teach core
biology courses. Thank you for taking the time to carefully and thoughtfully complete this
survey. Please mark your answers on the scantron sheet. You may write clarifying
comments/notes directly on the survey.
1. How many academic years have you completed at The Richard Stockton College of NJ?
a. Less than one full academic year
b. One
c. Two
d. Three
e. Four or more
2. How many academic years have you completed at other institutions of higher education?
(Please do not include dual-enrollment courses in high school, AP credit, etc.)
a. None
b. One or less than one full academic year
Name the Institutions below:
c. more than one up to two academic years
d. between two and three academic years
e. More than three academic years
3. What is your current class status at Stockton (total earned credits)?
a. Freshman: up to 32 earned credits
b. Sophomore: 33 to 64 earned credits
c. Junior: 65 to 96 earned credits
d. Senior: 97 to 128 earned credits
e. More than 128 earned credits, or Nonmatriculated student or other (explain below)
Explain ___________________________
4. While in high school did you take AP Biology?
a. Yes
b. No
5. If yes, and you took the AP exam, what score did you receive on the AP exam?
a. 5
b. 4
c. 3
d. 2
e. 1
6. Where did you take BIOL 1200 Cells and Molecules or equivalent?
a. The Richard Stockton College of New Jersey
b. Another 4 year institution
c. Community or county college
d. I have not yet completed BIOL 1200 Cells and Molecules or equivalent
7. What was the highest grade you earned in BIOL 1200 Cells and Molecules or equivalent?
a. A
b. B
c. C
d. D
e. F or W
8. Where did you take BIOL 1100 Organisms and Evolution, BIOL 1400 Biodiversity and
Evolution, or equivalent?
a. The Richard Stockton College of New Jersey
b. Another 4 year institution
c. Community or county college
d. I have not yet completed BIOL 1100 Organisms and Evolution, BIOL 1400 Biodiversity
and Evolution, or equivalent
9. What was the highest grade you earned in BIOL 1100 Organisms and Evolution/BIOL 1400
Biodiversity and Evolution, or equivalent?
a. A
b. B
c. C
d. D
e. F or W
10. If taken at Stockton did you need to repeat BIOL 1100 / BIOL 1400 or BIOL 1200 to earn a
C or better?
a. Repeated BIOL 1200 Cells and Molecules
b. Repeated BIOL 1100 Organisms and Evolution
c. Repeated BIOL 1400 Biodiversity and Evolution
d. Repeated both BIOL 1200 Cells and Molecules and BIOL 1100 Organisms and
Evolution or 1400 Biodiversity and Evolution
e. Did not repeat either course.
11. In which order did you take the introductory biology sequence, regardless of where you took
your introductory biology classes?
a. BIOL 1200 Cells and Molecules or equivalent then BIOL 1100 Organisms and Evolution,
BIOL 1400 Biodiversity and Evolution, or equivalent
b. BIOL 1100 Organisms and Evolution, BIOL 1400 Biodiversity and Evolution, or equivalent
then BIOL 1200 Cells and Molecules or equivalent.
12. When did you complete the introductory sequence of courses?
a. Spring or Summer 2010
b. Fall 2009
c. Spring or Summer 2009
d. Fall 2008
e. Prior to Summer 2008 state when _____________________
Choose the one alternative that BEST completes the statement or answers the question.
Record your selection on the scantron sheet. You may write clarifying comments/notes
directly on the survey.
13. Which of the following statements best distinguishes hypotheses from theories in science?
a. Theories are hypotheses that have been proven.
b. Hypotheses are guesses; theories are correct answers.
c. Hypotheses usually are relatively narrow in scope; theories have broad explanatory
power.
d. Hypotheses and theories are essentially the same thing.
e. Theories are proven true in all cases; hypotheses are usually falsified by tests.
14. The structure of the amino acid asparagine is shown below with the side chain circled.
Which of the following statements about the side chain of asparagine (as shown) are correct?
a. The side chain can form H-bonds.
b. The side chain can form ionic bonds.
c. The side chain is hydrophobic.
d. The side chain can form disulfide bridges.
Briefly explain your choice for 14.
15. Which aspect of the structure of the DNA molecule carries hereditary (genetic) information?
a. The helical nature of the molecule.
b. The hydrogen bonding between bases on each strand of the helix.
c. The specificity of the complementary base pairs.
d. The sequence (order) of bases along a single strand of the helix.
e. The deoxyribose sugar.
16. In which way are plants and animals different in how they obtain energy?
a. Animals use ATP; plants do not.
b. Plants capture energy from sunlight; animals capture chemical energy.
c. Plants store energy in sugar molecules; animals do not.
d. Animals can synthesize sugars from simpler molecules; plants cannot.
17. Your muscle cells, nerve cells, and skin cells have different functions because each kind of
cell:
a. contains different kinds of genes.
b. is located in different areas of the body.
c. activates different genes.
d. contains different numbers of genes.
e. has experienced different mutations.
18. Which of the following would NOT be found in a prokaryotic cell?
a. DNA
b. Cell wall
c. Cell membrane
d. Ribosomes
e. Endoplasmic reticulum
19. How similar is your genetic information to that of your parents?
a. For each gene, one of your alleles is from one parent and the other is from the other
parent.
b. You have a set of genes similar to those your parents inherited from their parents.
c. You contain the same genetic information as each of your parents, just half as much.
d. Depending on how much crossing over happens, you could have a lot of one parent's
genetic information and little of the other parent's genetic information.
20. In a diploid organism, what do we mean when we say that a trait is dominant?
a. It is stronger than a recessive form of the trait.
b. It is due to more, or a more active gene product than is the recessive trait.
c. The trait associated with the allele is present whenever the allele is present.
d. The allele associated with the trait inactivates the products of recessive alleles.
e. The allele is present more frequently in individuals in the population.
21. Which of the following statements best describes the evolution of pesticide resistance in a
population of insects?
a. Individual members of the population slowly adapt to the presence of the chemical by
striving to meet the new challenge.
b. All insects exposed to the insecticide begin to use a formerly silent gene to make a new
enzyme that breaks down the insecticide molecule.
c. Insects observe the behavior of other insects that survive pesticide application, and
adjust their own behavior to copy that of the survivors.
d. Offspring of insects that are genetically resistant to the pesticide become more abundant
as the susceptible insects die off.
22. Natural selection produces evolutionary change by:
a. changing the frequency of various versions of genes.
b. reducing the number of new mutations.
c. producing genes needed for new environments.
d. reducing the effects of detrimental versions of genes.
23. How can a catastrophic global event influence evolutionary change?
a. Undesirable versions of genes are removed.
b. New genes are generated.
c. Only some species survive the event.
d. There are short-term effects that disappear over time.
24. Bird guides once listed the myrtle warbler and Audubon’s warbler as distinct species.
Recently these have been classified as eastern and western forms of a single species, the
yellow-rumped warbler. Which of the following pieces of evidence, if true, would be cause for
this reclassification?
a. The two forms interbreed often in nature, and their offspring have good survival and
reproduction.
b. The two forms live in similar habitats.
c. The two forms have many genes in common.
d. The two forms have similar food requirements.
e. The two forms are very similar in coloration.
25. In using molecular data to generate phylogenies, several assumptions are required. Which
of the following assumptions would NOT be essential in generating a molecular phylogeny?
a. Mutation rates are constant over time and constant in all species.
b. Proteins with similar amino acid sequences reflect common ancestry rather than
coincidence.
c. Mutations do not change the amino acid sequence of proteins.
d. The changes in amino acid sequence used to calculate molecular phylogenies do
not cause changes in function.
26. Considering all animals (invertebrates and vertebrates), which of the following is true?
a. All animals have a separate mouth and anus.
b. All animals have a nervous system.
c. All animals have their mouth at their “head” end.
d. All animals are heterotrophic and digest their food.
Use the following brief description of Venezuelan guppies to answer Questions 27-29.
Venezuelan Guppies
Guppies are small fish found in streams in Venezuela. Male guppies are brightly colored with
black, red, blue, and iridescent (reflective) spots. Males cannot be too brightly colored or they
will be seen and consumed by predators, but if they are too plain, females will choose other
males. Natural selection and sexual selection push in opposite directions. When a guppy
population lives in a stream in the absence of predators, the proportion of males that are bright
and flashy increases in the population. If a few aggressive predators are added to the same
stream, the proportion of brightly colored males decreases with about five months (3-4
generations). The effects of predators on guppy coloration have been studied in artificial ponds
with mild, aggressive, and no predators, and by similar manipulations of natural stream
environments.
27. A typical natural population of guppies consists of hundreds of guppies. Which statement
best describes the guppies of a single species in an isolated population?
a. The guppies share all of the same characteristics and are identical to each other.
b. The guppies share all of the essential characteristics of the species; the minor variations
they display don’t affect survival.
c. The guppies are identical on the inside, but have many differences in appearance.
d. The guppies share many essential characteristics, but also vary in many features.
28. Fitness is a term often used by biologists to explain the evolutionary success of certain
organisms. What feature would a biologist consider to be most important in determining
which guppies were “most fit”?
a. Large body size and ability to swim quickly away from predators.
b. Excellent ability to compete for food.
c. High number of offspring that survived to reproductive age.
d. High number of matings with many different females.
29. In guppy populations, what are the primary changes that occur gradually over time?
a. The traits of each individual guppy within a population gradually change.
b. The proportions of guppies having different traits within a population change.
c. Successful behaviors learned by certain guppies are passed on to offspring.
d. Mutations occur to meet the needs of the guppies as the environment changes.
School of Education- Teacher Education Assessment Activities
Over the course of the 2009-2010 academic year, the School of Education faculty and staff
worked to develop a shared understanding of the Danielson Framework, which was adopted
as the Stockton Components of Professional Practice. Faculty and staff representing all areas
of the School completed teaching performance assessment training to establish School
norms and inter-rater reliability for the assessment framework.
The student teaching task force developed new instruments for use in assessing student
teachers. These were introduced in a Student Teacher Supervisor Training program in
August 2010, facilitated by Ron Tinsley and Andre Joyner. All supervisors
worked toward achieving inter-rater reliability using the Danielson Framework, as adopted
and incorporated into the Stockton Components of Professional Practice.
The overall goal of Education at Richard Stockton College is to develop competent, caring,
qualified teachers. The post-baccalaureate initial certification program is designed to help
qualified degree holders become competent novice teachers.
The Teacher Education Program at Stockton uses a developmental approach toward
teacher competency development built upon the work of Charlotte Danielson in her book:
Enhancing Professional Practice: A Framework for Teaching (2007, ASCD: Alexandria, VA).
Danielson’s work provides teachers a well-defined path for achieving exemplary practice,
identifies what effective teachers know and do, and provides a common language for
describing and discussing excellence in teaching and learning. Danielson’s work is based
upon empirical research and aligns with both NBPTS and ETS assessments, such as
Praxis III and Pathwise. Many school districts in the Stockton region have adopted
Danielson’s framework into their evaluation systems.
This framework was adopted to be used as the Stockton Components of Professional
Practice by unanimous vote of the Teacher Education Program faculty in May 2008 and
now serves as the foundation for our overall assessment system. Meeting all of these
components ensures that our program completers have met our program’s claims.
The Education Program Faculty approved the following claims for use in our TEAC Inquiry
Brief Proposal. Assessment of our student learning outcomes must validate these claims.
1) Our novice teachers demonstrate competence in the subject matter they will teach.
2) Our novice teachers understand and apply appropriate pedagogy.
3) Our novice teachers demonstrate caring teaching practices in diverse classroom settings.
The claims are validated using multiple assessment measures. Our assessment system is
comprehensive and makes use of standardized measures, student feedback, faculty
evaluations of student learning, and supervisors’ and cooperating teachers’ evaluations of
student teaching performance in the classroom. More specific information related to
assessment measures can be found in the annual coordinator reports:
http://intraweb.stockton.edu/eyos/page.cfm?siteID=84&pageID=45