Documentary Bodies and Bodily Documents: Notes from an Attempted Assessment
by Zack Anderson
Introduction
The idea for this paper came from a panel presentation at last year’s Disembodied Poetics
conference at Naropa. This panel, titled “Bending the Source: Research, Poiesis, and Document
Fluidity,” examined the concept of the document in the context of poetic production. The
panelists—Kristin Cerda, Athea Merredyth, Merete Mueller, and Andrea Spofford—argued that
a document is a sort of intermediary vessel, containing what they termed “surrogates of ideas”
during the process of composition.
Cerda, Merredyth, Mueller, and Spofford were also
interested in exploring the ways that digital forms alter the state of documents. As we have all
experienced, a digital document becomes much more fluid as text is cut, copied, pasted,
translated on the page, and changed in appearance. During this panel, Merete Mueller suggested
that the poet’s work is that of excavation, becoming an archive of language and experience and
digging poems out of this assemblage. In other words, the poet’s body itself becomes a fluid
document. This paper also emerges from my experience of attempting to construct and perform
an assessment of our writing center at the University of Wyoming. This paper—part narrative,
part exploratory investigation into pedagogical possibility—takes the idea of the fluid document
as its theme. The first section will explain and describe the process we undertook for our
assessment and the obstacles that we encountered along the way. In the second half, I will
attempt to deal with some of the implications of thinking about the messy boundaries between
body and document in the context of both writing center assessment and pedagogy.
Description of UW Writing Center Assessment
Some of you may already be familiar with the assessment process. For those who may
not be, I’d like to describe my experience. In the fall of 2014, I was given the task of creating
and implementing an assessment of our writing center with one of my colleagues and with the
guidance of our director. We began by identifying a subject group. We settled on a graduate-level health sciences class because we had already been seeing fairly high writing center traffic from those students.
Although the sample size was fairly small, we felt that the relative
homogeneity of the subject pool would help us to evaluate writing improvement. The instructor
was very eager to work with us and agreed to continue to encourage the students to use the
writing center. Our research plan did not involve any contact with the students. It was an
outreach course, so most of these graduate students had been using our online consultations. As
a result, we were already in possession of the initial drafts, which our consultants had
commented on and sent back. Our method was to select one or two specific characteristics
(paragraph structure, source integration) and then compare the early drafts that our consultants
had worked with to the final drafts that the students handed in to the instructor. We would assign
a random number to each student so as to preserve anonymity. This process would give us
qualitative data on the effectiveness of our consultations for this specific population.
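The anonymization step described above can be sketched in a few lines of code (a minimal illustration in Python; the function name and the student labels are hypothetical and were not part of our actual protocol):

```python
import secrets

def assign_random_ids(students):
    """Map a random numeric code to each student so that early and
    final drafts can be compared across time without exposing names.
    The resulting key would be stored separately from the coded drafts."""
    ids = {}
    for name in students:
        # Draw codes until we find one not already in use.
        while True:
            code = secrets.randbelow(10000)
            if code not in ids:
                ids[code] = name
                break
    return ids

key = assign_random_ids(["student_a", "student_b", "student_c"])
```

The point of the sketch is simply that the code-to-name key, not the drafts themselves, is the confidential artifact, which is why it is kept apart from the coded documents.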
The next step in our process was obtaining permission from the Institutional Review
Board to conduct our study. The Board determined that even though we only intended to study
documents—textual artifacts—the study was still classified as using human subjects. The Board
also mandated that my co-researcher and I, as well as our director, undergo training online for
human subject research. We completed this training through the Collaborative Institutional
Training Initiative. We made our way through fifteen required modules (out of 34 total) on such topics as the Belmont Report, “History and Ethical Principles,” research with
prisoners, and internet-based research. As you can gather, most of these modules were entirely
irrelevant to our project. I completed the training process over the course of several days, taking
an estimated 4-6 hours to do so. In the section titled “Defining Research with Human Subjects,”
the training program informs us that “According to the federal regulations 45 CFR 46.102 (Protection of Human Subjects 2009), a human subject is ‘a living individual about whom an investigator (whether professional or student) conducting research obtains (1) Data through intervention or interaction with the individual, or (2) Identifiable private information’” (CITI Program).
Hence, our research was classified as human subject research in order to preserve confidentiality, even though, as writing center staff, we already had access to the students’ names and early drafts.
We encountered other difficulties as well. The greatest delay to the study has been a lack
of response from the students, from whom we were required to obtain consent. As of this
conference, we have yet to receive the permission we need from the students. More broadly, we
find ourselves in the great quandary of writing center assessment. As Diana Bell and Alanna
Frost write in “Critical inquiry and writing centers: a methodology of assessment,”
“Postsecondary institutions increasingly call upon writing center directors to engage in the
institutional language of quantitative and outcomes assessment. Despite an awareness of the
limited resources most centers are allocated, institution administrators often require directors to
provide assessment data to justify--usually in quantitative terms--the existence of the writing
center for reasons of funding, space, and allocation of intellectual capital resources” (Bell and
Frost). In order to address this increased demand for quantitative assessment, Bell and Frost
propose studies based on retention and graduation rates as justification for the writing center.
However, drawing this sort of correlation seems problematic, since those students in the habit of
using resources like the writing center are probably predisposed to higher retention and
graduation rates in the first place. Our method of looking at highly specific and concrete
characteristics of writing at least aims to avoid these kinds of sweeping generalizations.
Document/Body Fluidity
My purpose is not to blame the IRB process for the problems that we have faced in trying
to assess our writing center, and I do hope that my narrative of our experience will be useful to
those like us who are new to the assessment game. However, what I want to point out is the
strange underlying factor that affects not only how we go about assessing ourselves, but also
informs how we think about the consultant-client relationship in the writing center. This factor
appears in the blurry boundaries of classification between documents and the living bodies that
produce them. Human bodies require protection from research-related risks. But by extending
the same protections to texts, the documents become embodied and the writers’ bodies
documented. The documents we are attempting to study are not dead, fixed artifacts of a writer’s
activity, but a half-living, fluid extension of the writer, like a segment of a worm that has been cut off and continues to move on its own.
Effect on Assessment
As I think my narrative has shown, the conflation of document and body has significant
implications for designing an effective assessment of the writing center. The very nature of
quantitative measurement is predicated on the idea of solid boundaries and relatively fixed
categories. A researcher can easily compile records of writing center clients, arranging them in
categories like age, sex, discipline, and class standing. We can compare these records to other
quantitative measures like GPA or graduation rate.
However, we are always left with
correlations that occlude other significant contexts like economic background, secondary
schooling, presence or absence of mentor-figures who help to cultivate scholarly habits like
using the writing center, etc.
These sorts of assessments also seem to create a wooden,
mechanical version of intellectual processes. As we all know, the writing process is messy.
Teaching writing is messy. And assessing writing is messy. If we conceive of writing as a
feedback loop between the fluid, self-revising bodies of the writer and the document, quantitative
analysis makes little sense. Instead, we should be looking at very specific textual skills inscribed
in the motion of writers through academic space over time. Do writing center conferences
improve argumentative skills like building theses? Do writing center clients develop better
organizational abilities? Better integration of source text? Better acquisition of spelling and
conventions? By examining specific writing skills over a period of time, we can use qualitative
analysis to track writing center effectiveness.
Interaction With Process-Based Pedagogy
Let us return to the idea that documents—especially digital ones—are infinitely mutable.
Our clients come to our writing centers with drafts. Our clients ask questions, we provide
suggestions, the drafts change. But the writers change too, with each appointment. And so do
the consultants. Nancy Grimm describes the problems posed by what she calls an “autonomous
model of literacy” which regards literacy as an individual skill. Grimm writes, “The pressure to
prove that writing center ‘intervention’ makes a difference in student writing is part of the
autonomous model” (Grimm 45). Grimm is arguing for a model that views literacy as a
collection of social practices. In other words, literacy is highly interactive. Scholars also refer to
this dynamic as “scaffolding.”
Jo Mackiewicz and Isabelle Thompson, in their paper
“Instruction, Cognitive Scaffolding, and Motivational Scaffolding in Writing Center Tutoring,”
explain the application of this term: “Defined generally, scaffolding metaphorically refers to a
learning opportunity in which a more expert tutor teaches a less expert student to answer a
question, correct an error, or perform a task without telling the student the answer or doing the
work for him or her. The tutor acts as a scaffold, helping the student to do things he or she cannot
perform alone” (Mackiewicz and Thompson 54-55). Although Mackiewicz and Thompson’s
definition foregrounds the power imbalance between tutor and student, the concept of
scaffolding seems compatible with the concept of the fluid body/document. Both consultant and
client are fluid entities that self-revise based on the presence of the other over the course of the
consultation. For instance, in working with ESL students, we are often called upon to play the
role of cultural informant, giving the student relevant information in order to make meaning in
the context of American culture. This is not a unidirectional process, though. When the
consultant realizes that this cultural information must be made explicit, the consultant must self-revise to re-orient in a context where American culture is not implicitly understood in the
background. This act of re-orientation also helps to flatten the power relations in the writing
center that can interfere with our ideal of collaboration.
Posthumanism, Application to Writing Center
The emergent, uncanonized field of posthumanism may leave us with uncertain, albeit
exciting, possibilities for integrating the document/body hybrid into our ideas about writing
center pedagogy.
In the article “Toward a Posthumanist Education,” Nathan Snaza and
colleagues write, “While the human may seem to be an old, sure, stable idea…its contingent,
historical character is becoming increasingly clear as a rising tide of research we may call
posthumanist erodes its fabricated borders from the animal, the machine, and the thing” (Snaza et al. 42). Posthumanism is a mode of thought that questions the binaries of human/nature,
man/machine, and, conceivably, author/text. Let us extend this to our discussion of writing
assessment. Posthumanism rests on an Object-Oriented Ontology, which holds that everything is
an object. Snaza et al. write that “OOO maintains that relations are independent of pre-formed
identities. That is, we can only know things by the relations into which they enter, by the
contacts they forge, and effects they are able to produce” (47). The posthuman, the Object-Oriented Ontology, and the document/body all seem to preclude the viability of quantitative writing center analysis. We can, however, conduct studies like that of Mackiewicz
and Thompson, who evaluate cognitive scaffolding in extremely specific contexts. The blurring
of author and text, document and body allows us to reconceptualize the writing center as a point
of contact, an intersectional space with document/bodies feeding back and forth. The author is
not dead. Just not where we thought.
Bibliography
Bell, Diana Calhoun, and Alanna Frost. "Critical inquiry and writing centers: a methodology of
assessment." The Learning Assistance Review 17.1 (2012): 15+. Academic OneFile. Web. 13 Apr. 2015.
Cerda, Kristin, Athea Merredyth, Andrea Spofford, and Merete Mueller. “Bending the Source: Research, Poiesis, and Document Fluidity.” Jack Kerouac School of Disembodied Poetics, Naropa Institute, Boulder, CO. 11 October 2014. Conference Presentation.
Grimm, Nancy Maloney. “In the Spirit of Service: Making Writing Center Research a ‘Featured Character.’” The Center Will Hold: Critical Perspectives on Writing Center Scholarship. Ed. Michael A. Pemberton and Joyce Kinkead. Logan, UT: Utah State UP, 2003.
Mackiewicz, Jo, and Isabelle Thompson. “Instruction, Cognitive Scaffolding, and Motivational Scaffolding in Writing Center Tutoring.” Composition Studies 42.1 (2014): 54-78.
Snaza, Nathan, et al. “Toward a Posthumanist Education.” Journal of Curriculum Theorizing 30.2 (2014): 39-55.