Pergamon
Computers in Human Behavior, Vol. 10, No. 4, pp. 511-527, 1994
Copyright © 1994 Elsevier Science Ltd
Printed in the USA. All rights reserved
0747-5632/94 $6.00 + .00
Writing as Process and Product:
The Impact of Tool, Genre, Audience
Knowledge, and Writer Expertise
Sarah E. Ransdell
Florida Atlantic University
C. Michael Levy
University of Florida
Abstract - Two experiments investigated the impact of writing tool (word
processing or handwriting), genre (narrative or exposition), and audience
(familiar or unfamiliar) on measures of writing quality, syntactic complexity,
and the number and type of initial text production revisions. In the first, 84
undergraduates with little word processing experience wrote letters by hand or
computer. The 64 subjects in Experiment 2 were experienced college writers who
always wrote by computer. Subjects composed more syntactically complex letters
of higher rated quality to an unfamiliar audience than to a familiar one.
Handwritten letters were of higher rated quality than word processed ones.
Although there were more total revisions when using a word processor, there
were more text-preserving than meaningful revisions. The number and distribution
of revisions also depended upon the writers' level of experience. The Hayes and
Flower (1980) model of the writing process remains a useful heuristic, but our
data indicate that it warrants extension.
Requests for reprints should be addressed to Dr. Sarah E. Ransdell, 2912 College Ave., Florida
Atlantic University, Davie, FL 33314. E-mail: RANSDELL@FAUVAX
Research on the psychology of writing has ranged from studies primarily interested
in analysis of the written product to those whose goal is to explain the writing process as it unfolds through time. Verbal protocol analysis, pioneered by Hayes and
Flower (1980), was one of the first process-tracing methods. Evidence emerging
from protocol analysis suggests that strategic knowledge plays a major role in all
phases of writing. Different kinds of strategic knowledge can influence the written
product, but the mechanisms by which such knowledge operates are far from clear.
Given the complexity of adult composition, strategic knowledge probably does not
combine in simple, additive ways. For example, the information stored about audience knowledge may be organized quite differently from knowledge about rhetorical genres, such as how to write a narrative or persuasive letter.
Some of what a writer knows about an audience may be only basic demographic
information. Other information about an audience, such as what they know about
the topic of a composition, may need to be inferred. Thus, the process of writing
results in a dynamic interaction among the writer’s knowledge representations. Just
as it is possible that strategic knowledge structures can interact during the writing
process, it is possible that these structures can influence the products of writing in
complex ways that the study of individual structures, no matter how rigorous, will
never be able to detect. Thus, studies that focus only on the contribution of a single
variable, such as audience knowledge, can provide only narrow conclusions about
either the writing process or the written product.
The present research investigates the role of several kinds of knowledge structures, as well as writing tool, on both writing process and product. Our goal in this
research is to examine how the strategies that result from writers' access to their
knowledge structures affect multiple attributes (e.g., quality, syntactic complexity) of compositions, as well as the kinds of revisions that are created during text production. We
also aim to assess, using a uniform experimental methodology, the relative contributions and interactions of these variables, which have been studied in virtual isolation from one another in research that has little methodological overlap, as shown in
Table 1.
Most experimental research has focused on the impact of writing assignment and
audience knowledge in terms of measures such as syntactic complexity (Crowhurst
& Piche, 1979; Hunt, 1983; Langer, 1984), cognitive effort or capacity (Kellogg,
1987; Reed, Burton, & Kelly, 1985), draft to draft revisions (Bean, 1983; Bridwell,
1980; Collier, 1983; Lutz, 1987), and writing quality (Rubin & Rafoth, 1986; Witte
& Faigley, 1981). Process-oriented research has been mainly restricted to protocol
analysis (e.g., Hayes & Flower, 1980) and to cognitive effort studies (Kellogg,
1987; Reed, Burton, & Kelly, 1985). Very few studies have focussed specifically
on the relationship between process and product measures in order to explain the
complexity of written language production.
The methodology that we use showcases word processed writing because it is
particularly amenable to analysis of both process and product. The increasing use
of the word processor as a writing tool calls for steps to determine its influence
on written language. In a review of recent research, Cochran-Smith (1991) concluded that the effects of word processing clearly interact with preexisting skills
and strategic knowledge. While there have been reports of systematic differences
between word processed and handwritten text (Collier, 1983; Joram, Woodruff,
Bryson & Lindsay, 1992; Lutz, 1987), the few experimental studies that exist
focus on either the cognitive processes or the products themselves, but usually
not both.
Table 1. Cited References by Focus, Method, Variables Studied, and Type of Subjects

Product focus:
- Witte & Faigley (1981). Independent: Expertise. Dependent: Length, t-units, clause length, errors, quality. Subjects: 5 high-knowledge, 5 low-knowledge adults.
- Langer (1984). Independent: Topic knowledge. Dependent: Clause length, quality. Subjects: 99 10th-grade students.
- Crowhurst & Piche (1979). Independent: Audience(1), genre(2), age(3). Dependent: Clause length(1,2,3), t-units(2,3). Subjects: 120 6th-grade students, 120 10th-grade students.
- Reed et al. (1985). Independent: Expertise(1), genre(2). Dependent: Cognitive engagement(1,2), clause length(1,2), t-units(1,2), quality(1,2). Subjects: 21 low-, 21 average-, 21 high-knowledge first-year undergraduates.
- Rubin & Rafoth (1986). Independent: Social cognitive ability(1). Dependent: Quality(1). Subjects: 35 students in a first-year college composition course.
- Reed (1992). Independent: Expertise(1), genre(2), time (draft)(3). Dependent: Quality(1,2,3), anxiety(1,2,3), apprehension(1,2,3). Subjects: 16 low-, 16 average-, 16 high-ability Education students with no word processing experience.

Process focus:
- Bean (1983). Independent: Not applicable. Dependent: Testimonials. Subjects: 4 first-year college students with no word processing experience.
- Bridwell (1980). Independent: Draft(1). Dependent: Revisions, quality(1), length(1). Subjects: 100 12th-grade students.
- Kellogg (1987). Independent: Topic knowledge(1). Dependent: Processing time, cognitive effort(1). Subjects: 30 high-knowledge, 30 low-knowledge undergraduates.

Product and process focus:
- Bridwell-Bowles et al. (1987). Independent: Tool, style, time (session). Dependent: Quality, length, self-reports. Subjects: 8 professional writers (graduate TAs) with word processing experience.
- Collier (1983). Independent: Tool(1). Dependent: Revisions(1), quality. Subjects: 4 college composition students with no word processing experience.
- Lutz (1987). Independent: Expertise(1), tool(2). Dependent: Total time(2), revisions(2), length(2). Subjects: 4 professional writers, 3 experienced writers (graduate TAs) with word processing experience.
- Hawisher (1987). Independent: Tool(1), topic(2). Dependent: Revisions(1,2), quality(2). Subjects: 20 advanced first-year college students.
- Redd-Boyd & Slater (1989). Independent: Audience(1). Dependent: Revisions(1), attitude scores(1), audience awareness and adaptation. Subjects: 87 undergraduates.
- Joram et al. (1992). Independent: Tool(1), time of revision(2), instructions prior to 2nd draft(3), skill of subjects(4). Dependent: Quality(3,4), creativity, nature of revisions, various measures from thinking-aloud protocols. Subjects: 31 average and above-average 8th-grade students.

Note. Parenthesized numerals indicate where the researcher(s) reported statistically significant relationships between the variables sharing the same numeral.

WORD PROCESSING, REVISION, AND QUALITY
The most widely observed effect of word processing on writing is that it results in
greater amounts of revising (Collier, 1983; Lutz, 1987). However, the correlation
between absolute number of revisions and writing quality is not always strong, and
so many researchers categorize revisions into those that change the meaning of the
text and those that do not (e.g., Faigley & Witte, 1984). Several researchers have
reported that word processed writing tends to contain a higher proportion of text-preserving revisions relative to revisions that actually change the content or meaning of the
text (Bridwell-Bowles et al., 1987; Lutz, 1987). Hawisher (1987) observed a negative correlation between the number of text-preserving revisions and improvement
in writing quality in first to final drafts created by word processing. She also found
a positive relationship between the amount of meaningful revisions and writing
quality (meaningful and text-preserving changes do not always “trade off”).
These reports must be interpreted cautiously because a design decision limits
their generalizability: Revision was measured by changes from one completed draft
to the next. A more fine-grained process approach would look at revisions at the
point of initial text production. Only with this latter approach is it possible to categorize the complete set of revisions that occur. In fact, the relative ease with which
writers can make point-of-entry modifications in word choice, order, and organization is perhaps a large part of what makes composing with word processing unique.
The present research is concerned with the process of revision as it occurs within
a draft. We use a special-purpose terminate-and-stay-resident (TSR) program
(Ransdell, 1990) to record when each keystroke is made and later replay the composition in real time. The technique enables us to study various types of text and
idea manipulations at the point of utterance. This approach has been used by
Bridwell-Bowles and her associates (Bridwell-Bowles, Johnson, & Brehe, 1987)
but has been limited to case study procedures and to experienced computer users
(but see Bonk & Reynolds, 1992 for a study of an adolescent population).
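The essentials of this record-and-replay technique can be sketched in a few lines of modern code. The Python sketch below is only an illustrative analogue of the DOS-resident program of Ransdell (1990): the class and method names are our own assumptions, and a real logger would hook the keyboard at the system level rather than being fed keys explicitly.

import time
from dataclasses import dataclass

@dataclass
class Keystroke:
    t: float    # seconds elapsed since the writing session began
    key: str    # the character typed, or a code such as "BKSP" or "DEL"

class KeystrokeLog:
    """Record each keystroke with a timestamp; replay the composition later."""

    def __init__(self) -> None:
        self.events: list[Keystroke] = []
        self.start = time.monotonic()

    def record(self, key: str) -> None:
        # Called once per keypress by the (hypothetical) keyboard hook.
        self.events.append(Keystroke(time.monotonic() - self.start, key))

    def replay(self, speed: float = 1.0) -> None:
        # Re-emit the keystrokes at their original pace (speed > 1 is faster),
        # which is what lets an analyst watch revisions at the point of utterance.
        prev = 0.0
        for ev in self.events:
            time.sleep((ev.t - prev) / speed)
            prev = ev.t
            print(ev.key, end="", flush=True)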
In a cognitively challenging task environment, when the mental resources needed to perform a subtask (such as revising) are reduced, the frequency with which
that subtask occurs may increase. For example, many have observed that correcting
keystroking errors and making minor surface changes are much easier to accomplish with a
word processor than on a typewriter. Accordingly, we anticipated that our moderately proficient touch typists would revise more when the properties of their writing
tool facilitated rather than hindered revisions. Specifically, we predicted that writers would exhibit more revising when they composed using a word processor than
when handwriting, but that a smaller proportion of such revisions would change the
meaning of the text.
AUDIENCE AND SYNTACTIC COMPLEXITY
Constraints on idea and text production can also be imposed upon writers by altering audience knowledge. Crowhurst and Piche (1979), Redd-Boyd and Slater
(1989), Reed, Burton, and Kelly (1985), and Rubin and Rafoth (1986) have reported how the audience for a composition can increase syntactic complexity and improve
quality. Very few past studies have simultaneously compared product indices to
process measures such as the nature and quantity of revisions. Our approach examines the impact of audience familiarity on writing quality, syntactic complexity, and
type and amount of revision.
Assigning a specific audience to writers' essays has been found to increase motivation and the use of audience-based strategies, and actually to improve the writing's persuasiveness (Redd-Boyd & Slater, 1989). Furthermore, when writing in a persuasive genre, writers are most likely to benefit from knowledge of audience because
they can tailor their arguments to appeal to what they know about the reader
(Crowhurst & Piche, 1979; Rubin & Rafoth, 1986).
Skilled writers are often more aware of audience than less skilled ones (Hayes &
Flower, 1980; Rubin & Rafoth, 1986). Assigning a distant, unfamiliar, or adult
audience frequently leads writers to produce syntactically more complex documents, as revealed by longer average clause length (Crowhurst & Piche, 1979). This
may reflect an impression-management function - putting one's best foot forward
- when writing to authority figures (Kirsch, 1991). We predict that our writers
will create letters of greater quality and syntactic complexity when writing to an
unfamiliar audience than to someone they know well. From the writer’s perspective, the point of revising is to produce a better document. It is an open empirical
question, however, whether there is a strong positive correlation between the extent
of meaningful revisions and the rated quality of a finished written product. To the
extent that such a correlation generally exists and to the extent that writers are
motivated somehow to compose at their best for an unfamiliar audience, we should
find a relationship between audience and the quantity of meaningful revisions. This
relationship may hold for revisions made during the creation of a single draft of a
document or between two drafts of the same document, or both.
GENRE OF DISCOURSE
A third factor suggested by the Hayes and Flower model that may influence writing is the set of genre requirements of the assignment. Reed, Burton, and Kelly (1985)
asked students to write in one of three genres (descriptive, narrative, or persuasive)
while performing a secondary task, responding to a tone, which served as a measure of cognitive engagement. Writing in the narrative genre produced the fastest
reaction times to the tone, indicating the least amount of engagement. Subsequently,
Reed (1992) found that both high- and low-ability writers produced their best
essays when writing narratives and their worst when writing persuasive essays.
These results corroborate a finding in Ransdell’s (1989) study where 86% of the
writers described persuasive writing as being more difficult than narrative.
In addition to simple effects, interactions between genre and audience familiarity
have also been reported. Crowhurst and Piche (1979) found significantly longer
average clause length in argumentative essays when the intended audience was less
familiar to the writer (i.e., a teacher) than when the audience was very familiar
(i.e., a best friend). In contrast, in narrative writing, clause length was not affected
by audience. Crowhurst and Piche suggest that persuasive writing demands greater
attention to audience than narrative or description.
All of these findings are also consistent with an explanation that is based upon
the writer’s knowledge of the genre. Because knowledge is commonly considered a
continuous rather than a binary entity, these data may reflect the fact that a typical subject's knowledge of narrative production rules is more extensive,
more practiced, and more automatically invoked than the rules for creating persuasive documents. Either conceptualization, however, leads to the same set of conclusions: writing in a narrative genre should result in more meaningful revisions and
higher rated quality than writing in the persuasive genre.
In summary, we predict that measures of quality, revision, and syntactic complexity will vary as a function of writing tool, genre, and audience familiarity.
Hayes and Flower (1980) described a process model that makes no explicit predictions about written products. An extension of this theory suggests that when writers
have access to strategic knowledge, they will bring that knowledge to bear in composing higher quality documents.
Thus, we explicitly manipulated the subjects’ knowledge of their audience, the
writing tool, and genre. In Experiment 1 we vary tool, audience familiarity, and
genre between subjects. In Experiment 2 we examine the effects of the writers’
expertise in terms of college writing experience in combination with the effects of
audience and genre knowledge.
EXPERIMENT 1
Method
Subjects. The subjects were 84 introductory psychology students1 at the University
of Maine who voluntarily participated for extra credit in their class. Experience
with word processing, typing ability, and history of college writing courses were
collected in an initial class period. Subjects were invited to participate only if they
had little or no word processing experience, but fairly good self-reported typing
ability. Sixty-eight percent of the subjects had written at least one paper for a college assignment, 67% were taking a freshman composition course at the time, 59%
reported writing at least two personal letters a month, and 91% considered themselves to be at least average writers compared to their college peers. Only seven
subjects had ever used any word processing software that they could name, and
none had used the program used in this study.
Design. The design was a 2 (writing tool: handwriting or word processing) x 2
(audience familiarity: familiar or unfamiliar) x 2 (genre: narrative or exposition)
between-subjects factorial experiment. Holistic quality ratings, syntactic complexity as measured by mean clause length, and the total number and type of point-of-text-production revisions served as dependent measures.
Apparatus. Some subjects wrote using a popular word processing program on
IBM-PC compatible microcomputers. The word processor was used in a simple
and generic form; that is, the screen was initially blank and the subjects were
informed about only an extremely limited subset of editing commands on a color-coded template. The other subjects wrote on notebook paper using indelible pens.
Reed (1992) has noted the difficulty in directly comparing word processing with
handwriting. Note that several characteristics of our procedure for collection, analysis, and interpretation attempt to take this inherent difficulty into account.
Procedure. All students were first trained in some of the basics of the word processing software, including how to insert or delete text and how to save their document.
Half of the subjects wrote by hand using the pen and paper provided and half
used the computers and the word processing software. Those writing by hand were
asked to make any changes or revisions visible by crossing them out with a single
line. All subjects wrote for 20 min; they were warned when only 5 min remained.
One of our aims was to devise a paradigm in which the writer’s knowledge of the
topic and of the genre would not limit the production of prose. We, therefore, selected a topic about which students at most universities would have had direct experience over an extended period of time: college course selection and registration.
Half of the subjects wrote a letter to a family member or close friend; the
remainder wrote a letter to the president of the University of Florida. All subjects
were told they would be given copies of their letters to send in order to encourage
them to be as realistic as possible. To protect their privacy, subjects were told not to
sign their names to the letters.
Half of all subjects wrote a descriptive narrative. They were asked to describe
the problems that they had encountered in selecting and registering for classes at
the University of Maine. In addition to describing the process, they were asked to
include what they actually did and how they felt about their personal experience.
The other subjects were asked to write an argumentative exposition. They were
asked to argue for some innovative ways that the university could improve how
students select and register for classes. They were asked to state as many arguments as possible to support their ideas. Cards with genre and audience instructions
were placed in front of each subject’s work space.
All handwritten letters were transcribed into word processed form so raters could
be blind as to tool. Two judges independently rated each letter on 13 dimensions of
writing quality based upon a tool developed in holistic quality assessment at the
University of Maine as part of English placement examinations (Nees-Hatlen,
1989; see Appendix A). Training sessions for the raters closely followed the procedures of the placement exams so as to maximize interrater reliability.
Each judge rated each letter one dimension at a time. Overall interrater reliability was r = .84 for total quality scores across the four raters. A total quality score
was calculated for each subject by averaging the 13 evaluations of each letter
across both judges, then converting these to percentages. This interrater reliability
is consistent with previous research (see Cooper, 1977).
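The aggregation just described is simple enough to state as code. The following Python sketch assumes each judge supplies thirteen 1-5 ratings per letter; the function name and the example values are illustrative, not the study's actual scripts.

from statistics import correlation, mean  # statistics.correlation needs Python 3.10+

def total_quality_score(ratings_by_judge: list[list[int]]) -> float:
    """Average the 13 dimension ratings (1-5) across all judges for one
    letter, then express the result as a percentage of the maximum (5)."""
    flat = [r for judge in ratings_by_judge for r in judge]
    return 100.0 * mean(flat) / 5.0

# Interrater reliability is the Pearson r between two judges' total scores
# over the set of letters (the values below are made up for illustration).
judge_a = [71.0, 65.0, 80.0, 58.0, 69.0]
judge_b = [69.0, 68.0, 77.0, 60.0, 72.0]
r = correlation(judge_a, judge_b)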
The analysis of point of text production revisions measured in the present study
followed a commonly used taxonomy of revision types (Faigley & Witte, 1984).
The main distinction was whether or not a revision changed the meaning of the
text. Within the bounds of meaningful changes are those that reflect changes in the
topics or concepts that are included in the essay as well as those that change the
general organization or outline of the text. Text-preserving changes include both
surface changes such as those of spelling, tense, and punctuation, as well as additions, deletions and substitutions that preserve meaning.
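For scoring, this taxonomy reduces to a small lookup structure. A minimal Python rendering, with category labels paraphrased from Faigley and Witte (1984) and the dictionary keys our own shorthand, might look like this:

from enum import Enum

class RevisionCategory(Enum):
    MEANINGFUL = "changes the meaning of the text"
    TEXT_PRESERVING = "leaves the meaning of the text intact"

# Finer-grained revision types mapped onto the binary distinction
# that the analyses in this study actually score.
REVISION_TAXONOMY = {
    "topic or concept change": RevisionCategory.MEANINGFUL,
    "reorganization of the outline": RevisionCategory.MEANINGFUL,
    "spelling, tense, or punctuation": RevisionCategory.TEXT_PRESERVING,
    "meaning-preserving addition, deletion, or substitution": RevisionCategory.TEXT_PRESERVING,
}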
Results: Experiment 1
Writing tool. Separate 2 x 2 x 2 between-subjects ANOVAs were conducted on
each of the dependent measures. The first revealed a main effect for writing tool;
handwritten letters were judged to have higher holistic quality (mean score = 71%)
than word processed letters (mean = 65%) [F(1, 68) = 6.65, p < .01]. No interactions involving tool were
significant for quality scores (see Table 2).
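An analysis of this design can be reproduced with standard tools. The sketch below uses Python's statsmodels package on a hypothetical data frame df with one row per subject; the column names are our assumptions, not the study's original variable labels.

import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

def three_way_anova(df: pd.DataFrame, dv: str) -> pd.DataFrame:
    """2 x 2 x 2 between-subjects ANOVA with all interaction terms.
    df needs columns 'tool', 'audience', 'genre', and the dependent measure."""
    model = ols(f"{dv} ~ C(tool) * C(audience) * C(genre)", data=df).fit()
    return anova_lm(model, typ=2)

# e.g., three_way_anova(df, "quality") yields an F and p value for each
# main effect and interaction, of the kind reported in the text.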
Syntactic complexity (as measured by mean clause length) was correlated with
quality scores (Pearson r = .32, p < .004). This significant correlation
between the two product measures was the only such relationship among the four
dependent measures. Syntactic complexity was not, however, influenced by writing
tool [F < 1].
Subjects made significantly more total revisions within word processed letters
(mean = 50) than within handwritten ones (mean = 6.7) [F(1, 68) = 162.6, p < .0001].
However, after removing the types of revisions that did not influence meaning,
such as typos and spelling changes, handwritten letters (mean = 6.7) actually contained more revisions than did word processed letters (mean = 4.1) [F(1, 68) =
8.88, p < .004]. The assigned genre had no reliable influence on any of the four
dependent measures.
Audience. Holistic quality was affected by audience familiarity, with letters to an
unfamiliar audience receiving a higher score (71%) than those to a familiar audience
(65%) [F(1, 68) = 5.85, p < .01].
Letters written to an unfamiliar audience contained significantly more words per
clause (mean = 8.3) than letters to a familiar audience (mean = 6.8) [F(1, 68) =
21.10, p < .0001].
Table 2. Mean Quality Scores, Clause Length, and Point-of-Utterance Revisions by
Tool, Genre, and Audience in Experiment 1

Factor               Quality (%)   Clause length (words/clause)   Revisions (all)   Revisions (meaningful)
Tool (H vs. WP)      71 > 65       7.5 = 7.5                      6.7 < 50          6.7 > 4.1
Genre (N vs. E)      68 = 68       7.5 = 7.6                      27.1 = 29.3       4.5 = 6.2
Audience (U vs. F)   71 > 65       8.3 > 6.8                      29.9 = 26.8       6.9 > 4.0

Note. The following abbreviations are used in the table: H = Handwritten, WP = Word
processed; N = Narrative genre, E = Expository genre; U = Unfamiliar audience, F =
Familiar audience. Revisions (meaningful) = meaningful revisions only.
Furthermore, in analyses involving all revisions, audience was
not a significant effect [F < 1]. In an ANOVA containing only meaningful changes
in word processed letters, audience was a reliable factor. An average of 6.9 revisions occurred in letters to an unfamiliar audience versus 4.0 to a familiar one [F(1, 76)
= 9.64, p < .003]. Unlike clause length, the total number of revisions was unrelated
to overall quality (Pearson r = -.18, p = .1), nor were meaningful revisions alone related to
quality (r = -.02). Neither revision measure was related to clause length, nor were
the two measures of revision correlated with one another.
A significant interaction occurred between audience and tool in the analysis
of meaningful revisions [F(1, 76) = 4.05, p < .04]. Handwritten letters to an
unfamiliar audience contained relatively more meaningful revisions than word processed letters to a familiar audience.
The analysis of total revisions produced a significant three-way interaction
between audience, genre, and tool [F(1, 76) = 4.64, p < .03]. The most striking pattern was that handwritten, expository letters to an unfamiliar audience contained
fewer total revisions than word processed narrative letters to a familiar audience.
Summary and Discussion of Experiment 1
A summary of the results shows that subjects composed more syntactically complex letters to an unfamiliar audience than to a familiar one. While the writers
revised more often when using a word processor than when writing by hand, they
made more text-preserving than meaningful revisions.
Holistic quality was positively correlated to syntactic complexity. No other
measures, however, were significantly correlated. Some dependent measures
showed no effect of writing tool. Syntactic complexity was powerfully affected by
the manipulation of audience familiarity but was not affected at all by writing tool. The total
number of revisions increased with word processing, but our real-time record
allowed us to see that the bulk of these changes were text preserving. Other studies looking at draft-to-draft revisions have suggested this result (Collier, 1983;
Lutz, 1987).
Hunt (1983) has suggested that mean clause length is the most parsimonious way to
measure syntactic maturity or complexity, particularly for high school and college-age writers. As writers mature, they tend to consolidate ideas into larger clauses
rather than simply into longer sentences. Our results are, thus, similar in magnitude
to those Hunt reported for 12th grade students.
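Mean clause length itself is a straightforward words-per-clause ratio once clause boundaries are identified. The experiments relied on trained judges for segmentation; the Python sketch below substitutes a crude punctuation-and-conjunction heuristic purely to illustrate the metric, and should not be mistaken for the judges' criteria.

import re

def mean_clause_length(text: str) -> float:
    """Approximate mean clause length (words per clause). Splitting on
    punctuation and common subordinators is a rough stand-in for the
    human clause segmentation used in the experiments."""
    segments = re.split(
        r"[.;:!?,]|\b(?:and|but|because|although|while|which|that)\b",
        text,
        flags=re.IGNORECASE,
    )
    clauses = [s.split() for s in segments if s.strip()]
    if not clauses:
        return 0.0
    return sum(len(c) for c in clauses) / len(clauses)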
EXPERIMENT 2
In this experiment, we examine the effects of writer expertise in combination with
the effects of audience and genre. Because there are many reports of differential
cognitive processing by experts and novices in domains ranging from encoding in
short-term memory (Chase & Ericsson, 1982) and problem solving (Chi, Glaser, &
Rees, 1982) to decision making (Northcraft & Neale, 1987) and reading (Daneman,
Just, & Carpenter, 1982), differences between the writing processes and products of
experts and novices are anticipated. Studies that make such comparisons, however,
often compare college faculty and professional writers to undergraduates, seldom
recognizing that such groups differ in several noteworthy dimensions that might
influence the writing quality in a specific domain.
In the present study, we targeted individuals who had equivalent verbal SAT test
scores, were from the same cohort, but who differed on dimensions that conceivably could influence their performance and their products. Specifically, we
focussed on a group of students who had taken multiple college courses that
required extensive writing and who often wrote letters in their free time, and compared them to a group who had deferred completing the university requirement for
these classes with heavy writing demands and who seldom wrote letters in their
free time.
Method
Subjects. Nearly 1,000 students enrolled in a general psychology course at the
University of Florida were given a survey to determine the extent of their experience with word processing programs, their typing speed, the college-level courses
they had taken that required extensive writing2 and the number of personal and
business letters written each month. Those people who had some prior experience
using a word processing package and who stated that they could touch type at least
10 words per min were included in the candidate pool. Those invited to participate
met one of the following criteria: (1) they had taken no more than one course
requiring extensive writing and wrote an average of no more than one business and
personal letter per month, or (2) they had taken two or more courses requiring
extensive writing and wrote an average of four or more business and personal letters per month. Those who met the first criterion are referred to as “Low Expertise”
and those who met the second are identified as “High Expertise.”
Usable data were collected from 64 subjects; data from six other subjects were
voided owing to equipment malfunctioning and procedural errors. Subjects were
tested in groups of two to six.
Materials and Procedure
Because they all had prior word processing experience, it was necessary only to
familiarize subjects with the keyboard layout of the IBM PS/2 computers and to
give them practice in typing and editing a sample paragraph. Half of the subjects
were then asked to write to a close friend or relative (familiar audience) about
selecting and registering for classes at the University of Florida. The remaining
subjects wrote on the same topic, but addressed their letter to the President of the
University of Maine, an unfamiliar audience. Within each audience group, half of
the subjects wrote a narrative letter describing their experiences and their feelings;
half wrote an expository letter arguing for changes and improvements. Low-expertise and high-expertise writers were equally represented in each of these four
groups. There were eight subjects initially in each of the 2 x 2 x 2 (Audience x
Genre x Expertise) cells of the experiment.
Immediately after the subjects were given their writing assignment, the TSR program was
started to enable monitoring and recording of the writers' keystrokes. When the
subject had written for 20 min, the researcher gave each subject a printed copy of
their first draft to refer to as they tried to “strengthen and improve the quality of
their letter” by using the editing features of the program to create a second draft.
The subjects spent the final 15 min editing their documents on the screen.
The quality rating procedure was the same as in Experiment 1. Two of the three
judges had evaluated the letters composed in the earlier experiment; the assessments
of a third trained judge were included to resolve differences in ratings between the
first two judges. A second pair of judges determined the length of each clause written, using the same criteria reported in the earlier experiment. Five additional independent judges initially assigned each letter a value along a 7-point Likert scale
indicating whether it was primarily narrative or expository or some mixture of the
two genres. Several weeks later they reread each letter and used another 7-point scale
to record their estimate of the writer’s expertise; they were to focus on the quality of
the exposition and the structure of the document, ignoring typing errors. Analyses
were conducted on median ratings of judged genre and expertise.
Results: Experiment 2
Level of expertise. Subjects initially categorized as low in expertise reported writing an average of 1.2 business and personal letters per month and had taken an
average of 0.6 college courses that required substantial (6000-word minimum)
amounts of writing. In contrast, subjects categorized as high in expertise reported
writing an average of 3.7 letters per month and having taken an average of 2.3
courses requiring substantial writing. The high- and low-expertise subjects were
reliably different on these two dimensions (p < .001), but did not differ in reported
verbal SAT or ACT scores or reported typing speed. Our judges’ independent
assessment of the writers’ expertise provided a measure of the validity of the preexperimental assignment to high- and low-expertise groups.
Subjects assigned to the low-expertise group produced letters that were judged
as being written by poorer writers than those assigned to the high-expertise group
(means = 3.8 vs. 4.7, along a 7-point scale with 1 = poor). While this effect was
significant [F(1, 114) = 10.24, p < .002], it is important to note that the judges did not
evaluate the writers as either extraordinarily good or poor.
Assignment to high or low expertise corroborated the quality assessment results.
The letters written by high-expertise subjects were rated as superior in quality to
those written by low-expertise subjects [F(1, 122) = 5.83, p < .02]. A significant
expertise x draft interaction [F(1, 122) = 10.7, p < .001] revealed that the nearly 5-point superiority in overall quality for the letters written by the high-expertise subjects on the first draft was reduced to about 4 points for the second draft. The average clause length was longer in the letters written by the high-expertise subjects
[F(1, 56) = 4.8, p < .02], but only in the first drafts.
Audience. Audience was a powerful variable in this experiment, influencing not
only the syntactic complexity of the subjects' letters, but also their length and the number of revisions. Letters to unfamiliar audiences contained significantly longer
clauses than those to familiar audiences [F(1, 122) = 22.7, p < .001]. Letters
addressed to a friend or relative were also reliably longer than those addressed to a
university president [means = 370 vs. 314 words, F(1, 122) = 12.1, p < .001]. This
significant relationship occurred in the first drafts and increased when subjects prepared their second drafts. The revised letters to friends and relatives were 21%
longer than the original versions, but the second drafts to the university presidents
were only 11% longer than the initial drafts, producing a significant audience x
draft interaction [F(1, 122) = 8.1, p < .01].
Genre. As in the previous experiment, the effects of genre on the process and
products of this writing assignment were small. For example, expository letters
contained longer clauses than narrative letters and they were also judged as higher
in quality than narrative letters, but these differences were not reliable. Inspecting
the letters, we saw few strongly presented arguments in the expository letters.
Therefore, as noted above, we asked a new set of independent raters to read each
letter and judge whether it was primarily narrative, primarily expository, or contained various amounts of each genre. Subjects were successful in writing to the
task assigned when they used the narrative genre (mean judged genre = 1.4, where
1 represents a primarily narrative letter containing no argumentative, expository
elements). In contrast, they were not particularly successful in producing purely
expository prose. The mean judged genre was 4.5, indicating that while there were
expository elements present, they occurred only slightly more frequently than narrative elements. The difference between these judged genre means was highly significant [F(1, 114) = 104.0, p < .001].
Independent of the genre that they were assigned, subjects tended to fashion letters containing more narrative elements when they were writing to a familiar audience than to an unfamiliar audience [judged genre means = 2.6 vs. 3.3, F(1, 114) =
6.5, p < .01]. There were strong social pressures to write something in this very
public setting. The subjects seemed to resort to using the more familiar, well-used,
and accessible schema for composing narrative prose rather than write nothing
at all. The four-way interaction between audience,
genre, expertise, and type of revision [F(1, 40) = 7.19, p < .01] was the only significant interaction. In general, it indicated a pattern of more meaningful revisions by the high-expertise subjects
when they wrote expository letters to an unfamiliar audience than
by low-expertise subjects (who made about the same number of meaningful revisions regardless of audience and genre). This relationship was almost completely
reversed in the pattern of text-preserving revisions. In that case, the high-expertise
subjects made a relatively small number of revisions (regardless of genre and
audience), but the low-expertise subjects varied greatly across the combinations of audience
and genre.
Revisions to the second draft were assessed by comparing the first and second
drafts word by word. In this analysis, a “meaningful revision” included adding or
deleting a single word or adding or deleting a multiword phrase that constituted a
clause. Overall, subjects made an average of 1.3 text-preserving changes to the second draft, but an average of 11.9 meaningful changes [t(59) = 13.9, p < .001].
Neither the number of meaningful nor the number of text-preserving changes were
reliably influenced by genre, audience, or expertise.
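A word-by-word comparison of this kind can be approximated with a sequence alignment over the two drafts. The Python sketch below uses the standard library's difflib; the rule that one-for-one word swaps count as text-preserving while insertions, deletions, and longer rewrites count as meaningful is our simplified reading of the scoring rule above, not the judges' full protocol.

import difflib

def classify_draft_changes(draft1: str, draft2: str) -> dict[str, int]:
    """Align the two drafts word by word and tally revision categories."""
    w1, w2 = draft1.split(), draft2.split()
    counts = {"meaningful": 0, "text-preserving": 0}
    matcher = difflib.SequenceMatcher(a=w1, b=w2)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag in ("insert", "delete"):
            # Added or deleted words and phrases are scored as meaningful.
            counts["meaningful"] += 1
        elif tag == "replace":
            # A single word swapped for a single word (e.g., a spelling
            # correction) preserves the text; longer rewrites do not.
            if (i2 - i1) == 1 and (j2 - j1) == 1:
                counts["text-preserving"] += 1
            else:
                counts["meaningful"] += 1
    return counts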
Summary and Discussion of Experiment 2
Writer expertise had some significant effects on quality, revision, and clause
length. "High" expertise subjects' letters were rated as higher in quality and contained a larger percentage of meaningful revisions relative to text-preserving ones.
Audience familiarity remained a robust factor influencing clause length, but the
"high" expertise subjects were less affected by this variable, suggesting that they
generally use strategic knowledge about audience in their written work.
The four-way interaction between audience, genre, expertise, and type of revision in Experiment 2 was the only reliable “higher-order” interaction. In general, it
showed that there were more meaningful revisions by the high-expertise subjects
when they wrote expository letters to an unfamiliar audience than by
low-expertise subjects. The low-expertise subjects made about the same number of
meaningful revisions, regardless of audience and genre. This pattern was almost
completely reversed when considering text-preserving revisions only. In this case,
the high-expertise subjects made a relatively small number of revisions (regardless
of genre and audience), but the low-expertise subjects varied greatly across the combinations of
audience and genre. This result was predicted on the basis of several studies indicating that less skilled writers spend relatively more time manipulating text than
they do creating ideas (Bean, 1983; Bridwell, 1980; Hayes & Flower, 1980;
Hillocks, 1986; Sommers, 1979).
GENERAL DISCUSSION
A major strength of the present study is that several results from previous research
have been replicated in two well-controlled experiments. Writing to an unfamiliar
audience improved writing quality, increased syntactic complexity of compositions, and led to a higher proportion of meaningful revisions than when writing to a
familiar audience. High-expertise college writers wrote compositions of higher
quality and greater syntactic complexity than low-expertise writers. The compositions of high-expertise writers also differed from those of low-expertise writers in containing a higher number of meaningful revisions, especially when writing expositions and when writing
to an unfamiliar audience. Added to these results suggested by previous research
are the important effects of writing tool. Word processed writing contained more
total point-of-utterance revisions, but was of poorer quality and contained fewer
meaningful revisions than handwritten writing.
The Hayes and Flower (1980) model remains a useful heuristic, but our experimental data indicate that it warrants extension. On the basis of our findings, we
propose that writing tool should be included as an important factor in models of the
composition process and that the relationship between product and process measures should be considered.
For example, it is important to note that the dependent measures (syntactic complexity, quality, and amount and type of revision) were generally not correlated.
These measures are dissociable in terms of which task environment factors and pertinent strategic knowledge influenced them. For example, overall quality was related only to syntactic complexity and, surprisingly, neither the total amount nor the type of revision
was related to quality.
The use of the word processor in these studies was critical in permitting real-time analysis of process measures like amount and type of revision. Revision clearly increases when word processing is used as an alternative to handwriting, but the
present studies indicate that the revising seldom changes the meaning or holistic
quality of the text.
In general support of Hayes and Flower’s (1980) theory, writing genre and audience familiarity were shown to influence planning, sentence generation, and
reviewing as revealed by writers’ adjustments in syntactic complexity (a product
measure) and point of text production revision (a process measure). Writers’ strategic knowledge must include information suggesting that letters to unfamiliar audiences, especially “university presidents,” must be more formal and polished than
letters to “Mom and Dad.” This knowledge, then, directs and guides the writer to
make more revisions and incorporate syntactically more complex structure when
writing to unfamiliar audiences.
It is important to keep in mind that strategy use based on genre and audience
knowledge has been shown to have greater impact on quality than writing tool in
several studies (Bridwell, 1980; Collier, 1983; Kellogg & Mueller, 1989; Lutz,
1987). Our evidence suggests that strategic knowledge is dependent on the writer’s
ability to coordinate the simultaneous demands of genre, audience, and writing tool
familiarity in order to compose successful letters.
The clause length analyses revealed that structurally more sophisticated letters
were written to the unfamiliar than to the familiar audience. Interestingly, a separate count of the number of task-related ideas created in Experiment 2 (i.e., distinct
points made related to the assigned topic) revealed no effect of audience. Thus,
these shorter and more sophisticated letters to the unfamiliar audience did not contain fewer task-relevant ideas. They were simply less verbose.
The familiar and unfamiliar audiences differed, however, on many dimensions
besides familiarity; and in an attempt to maximize a potential effect, these other
dimensions were allowed to vary freely. Nevertheless, it should be recognized that
while the familiar audience was heterogeneous (including peers and older family
members), the unfamiliar audience was not. Even though the experimenter made
no references to personal characteristics of the unfamiliar university president, all
but one letter was addressed to a man. During debriefings, it was apparent that subjects spontaneously constructed a profile of a university president: a middle-aged
man with considerable power, prestige, and literary competence. Having demonstrated a technique capable of documenting that knowledge of one’s audience can
influence a composition, we must subsequently determine which combinations of
components were most contributory to fully understand the process.
For all dependent measures, genre was not as robust an effect as was audience
familiarity. Many writers used narrative information in letters that were supposed
to focus on argument, though obviously some narrative content is necessary to
argue a point. Few, however, introduced argumentation into their narrative letters.
When rated genre was evaluated in Experiment 2, we found that subjects tended to
create letters containing more narrative elements when writing to a familiar audience than to an unfamiliar one, regardless of their assigned genre. These results
may, in part, be due to the difficulty our writers had in producing argumentative
text at all.
As Ransdell (1989) noted, college students judge writing argumentative essays
to be a more difficult task than writing a descriptive narrative. It is not yet clear whether
this judged difficulty (and the relative paucity of argumentation in the present studies)
reflects an inherent difference between these two genres. It is certainly plausible that they
are merely indicative of vastly more opportunities to use the narrative rather than the argumentative genre. Certainly, everyday conversations tend to be more narrational than
expository. And there is a rich literature in cognitive psychology that establishes
that cognitive effort needed for successful production of a task varies inversely
with knowledge of that task.
Although students seemed to take the experimental task seriously, they were
writing under time constraints not typically important when they write ordinary
business or personal letters. They were also required to write their letters in an
atypical public setting. These compromises to a study of spontaneous writing in a
natural environment were specifically made to enable a ubiquitous adult activity to
be examined under controlled conditions.
The Hayes and Flower (1980) model has helped to guide writing research for
more than a decade. The model has many virtues: it is intuitively plausible, parsimonious, data based, and sufficiently detailed to generate unidirectional predictions. It is an exemplar of Level 4 (process description) inquiry as described by
Bereiter and Scardamalia (1987). The model must be amended, however, to
account for the dissociable contributions of writing tool, audience knowledge,
rhetorical genre, and writing expertise that we report here to influence both process
and products. Whether the Hayes and Flower model can be continuously revised to
incorporate new findings and still serve as a heuristic for generating research is an
open question. A more important long-term problem for this model is that it lacks
the precisely drawn components that would permit it to be proven wrong.
Relative to the fields of human memory, attention, and reading, the field of writing is still relatively nascent. As the evolving qualitative and quantitative evidence
on writing processes and products accumulates, some of it appears to be converging. This may signal that the time has come for the emergence of theories of writing having the strength and power of Anderson's ACT* (1983) or a neural network
(e.g., McClelland & Rumelhart, 1986) that can help guide the next generation of
writing research.
Acknowledgements - The authors would like to thank Robert Tennyson for serving as editor on this
manuscript and two anonymous reviewers for their helpful comments.
NOTES

1. Forty and 44 subjects were originally run in two separate replication trials. Original experiment
served as a factor in initial analyses, and because it did not have any reliable effects, was collapsed.
2. At the University of Florida, undergraduates must complete six or more courses in which they
write at least 6,000 words. These were operationally defined as the courses involving "extensive" writing.
REFERENCES

Anderson, J. R. (1983). The architecture of cognition. Cambridge, MA: Harvard University Press.
Bean, J. C. (1983). Computerized word-processing as an aid to revision. College Composition and Communication, 34, 146-148.
Bereiter, C., & Scardamalia, M. (1987). The psychology of written composition. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
Bridwell-Bowles, L., Johnson, P., & Brehe, S. (1987). Composing and computers: Case studies of experienced writers. In A. Matsuhashi (Ed.), Writing in real time: Modelling production processes. Norwood, NJ: Ablex.
Chase, W. G., & Ericsson, K. A. (1982). Skill and working memory. In G. H. Bower (Ed.), The psychology of learning and motivation (Vol. 16, pp. 1-58). New York: Academic Press.
Chi, M. T., Glaser, R., & Rees, E. (1982). Expertise in problem solving. In R. J. Sternberg (Ed.), Advances in the psychology of human intelligence (Vol. 2). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
Cochran-Smith, M. (1991). Word processing and writing in elementary classrooms: A critical review of related literature. Review of Educational Research, 61, 107-155.
Collier, R. M. (1983). The word processor and revision strategies. College Composition and Communication, 34, 149-155.
Crowhurst, M., & Piche, G. L. (1979). Audience and mode of discourse effects on syntactic complexity in writing at two grade levels. Research in the Teaching of English, 13(2), 101-109.
Daneman, M., Carpenter, P. A., & Just, M. A. (1982). Cognitive processes and reading skills. Advances in Reading Language Research, 1, 83-124.
Faigley, L., & Witte, S. (1984). Measuring the effects of revisions on text structure. In Research on written composition: New directions for teaching (pp. 95-108). Urbana, IL: ERIC Clearinghouse.
Hayes, J. R., & Flower, L. S. (1980). Identifying the organization of writing processes. In L. W. Gregg & E. R. Steinberg (Eds.), Cognitive processes in writing. Hillsdale, NJ: Lawrence Erlbaum Associates.
Hayes, J. R., & Flower, L. S. (1986). Writing research and the writer. American Psychologist, 41, 1106-1113.
Hillocks, G., Jr. (1986). Research on written composition: New directions for teaching. Urbana, IL: ERIC Clearinghouse.
Hunt, K. W. (1983). Sentence combining and the teaching of writing. In The psychology of written language. Chichester: John Wiley and Sons.
Joram, E., Woodruff, E., Bryson, M., & Lindsay, P. (1992). The effects of revising with a word processor on written composition. Research in the Teaching of English, 26, 167-193.
Kellogg, R. T. (1987). Effects of topic knowledge on the allocation of processing time and cognitive effort to writing processes. Memory & Cognition, 15, 256-266.
Kellogg, R. T., & Mueller, S. (1989). Cognitive tools and thinking performance: The case of word processors and writing. Paper presented at the annual meeting of the Psychonomic Society, November, 1989.
Kirsch, G. (1991). Writing up and down the social ladder: A study of experienced writers composing for contrasting audiences. Research in the Teaching of English, 25, 33-53.
Langer, J. A. (1984). The effects of available information on responses to school writing tasks. Research in the Teaching of English, 18, 27-43.
Lutz, J. A. (1987). A study of professional and experienced writers revising and editing at the computer and with pen and paper. Research in the Teaching of English, 21(4), 398-421.
McClelland, J. L., Rumelhart, D. E., & the PDP Research Group. (1986). Parallel distributed processing: Psychological and biological models (Vol. 2). Cambridge, MA: MIT Press.
Nees-Hatlen, V. (1989). Personal communication on procedures for judging holistic quality.
Northcraft, G. B., & Neale, M. A. (1987). Experts, amateurs, and real estate: An anchoring-and-adjustment perspective on property pricing decisions. Organizational Behavior and Human Decision Processes, 39(1), 84-97.
Ransdell, S. E. (1989). Producing ideas and text with a word processor. The Computer-Assisted Composition Journal, 4, 22-28.
Ransdell, S. E. (1990). Using a real-time replay of students' word processing to understand and promote better writing. Behavior Research Methods, Instruments, and Computers, 22, 142-144.
Redd-Boyd, T. M., & Slater, W. H. (1989). The effects of audience specification on undergraduates' attitudes, strategies, and writing. Research in the Teaching of English, 23, 77-108.
Reed, W. M. (1992). The effects of computer-based writing tasks and mode of discourse on the performance and attitudes of writers of varying abilities. Computers in Human Behavior, 8, 97-119.
Reed, W. M., Burton, J. K., & Kelly, P. P. (1985). The effects of writing ability and mode of discourse on cognitive capacity engagement. Research in the Teaching of English, 19(3), 283-297.
Rubin, D. L., & Rafoth, B. A. (1986). Social cognitive ability as a predictor of the quality of expository and persuasive writing among college freshmen. Research in the Teaching of English, 20(1), 9-21.
Sommers, N. (1979). Revision in the composing process: A case study of college freshmen and experienced adult writers. Dissertation Abstracts International, 5374-A.
Witte, S. P., & Faigley, L. (1981). Coherence, cohesion, and writing quality. College Composition and Communication, 32, 189-204.
APPENDIX A

Holistic Quality Rating Dimensions

CONTENT OF THE ESSAY: Weaknesses - writer uninvolvement with topic
and unacknowledged bias; Strengths - engagement with topic and awareness of
other views.
PURPOSE/AUDIENCE/TONE: Weaknesses - unclear or unrealized purpose,
inappropriate or inconsistent tone; Strengths - focus and intent clear and consistent, and language and tone appropriate.
WORDS/CHOICE AND ARRANGEMENT: Weaknesses - awkward or faulty
sentences; Strengths - readable, unambiguous sentences.
ORGANIZATION AND DEVELOPMENT: Weaknesses - few examples as
support, fragmentary thoughts, intent of paragraphing unclear; Strengths - adequate support and elaboration, sense of completeness and closure, and meaningful
paragraphing.
STYLE: Weaknesses - choppy, difficult-to-read prose, tendency to play safe
with words and ideas; Strengths - fluent, readable prose, occasional willingness to
be daring in thought or word.
TECHNICAL QUALITY/MECHANICS: Weaknesses - immature sentences,
strange idioms, poor grammar, spelling; Strengths - sustained point of view, tenses, grammatical accuracy.

Note. Each dimension was rated on a 5-point scale, with 5 being strongest and 1
being weakest.