SPECIFICATIONS FOR HUMAN FACTORS REPORTS
(better known as Superman Solves a Puzzle)

by: Dr. Paul Green
University of Michigan (Ann Arbor)
Department of Industrial & Operations Engineering (IOE)
and the Transportation Research Institute

for IOE 333, 334, 433, 436, 491

August, 2003

TABLE OF CONTENTS

BACKGROUND
GENERAL REQUIREMENTS
DETAILED REQUIREMENTS
    Cover
    Table of Contents
    Introduction
    Test Plan
        Test Participants
        Test Materials and Equipment
        Test Activities and Their Sequence
    Results
    Conclusions
    References
    Appendix
FOR MORE INFORMATION
SAMPLE JOURNAL ARTICLES
SOME WEB RESOURCES
LAST STEP

BACKGROUND

Human factors/ergonomics courses expose students to human performance data, research methods, analytic procedures, and report writing. Writing reports is an activity most engineers detest but all engineers must do well if they want to succeed. This document contains complete specifications for writing human factors/ergonomics research reports. It also includes an abbreviated hypothetical report ("Superman Solves a Puzzle").

Technical reports are used by society to define knowledge and by engineers, managers, and others to make design decisions. To make decisions based on technical reports, they need to know exactly what was done and how. Decision makers need to know what was investigated. They also need to be able to judge if the method was appropriate for their application of interest, and if not, how the data might be adjusted to be useful. Engineers and managers also need to know the complete results and any recommendations from the report authors.

Students still ask, "Good grief, why are you using this detailed format?" In the past, when what was desired was left unsaid, students did not know what to include in their reports and frequently omitted necessary information. Neither students nor faculty members were happy with that arrangement. These specifications have made it easier for everyone. Remember, a Pascal program will bomb if a semicolon is missing.
If that level of perfection is required to communicate with computers, it should be reasonable to expect similar precision in communicating with people. The difference between good and poor engineering is in the details. While other courses may ask for reports that differ superficially, the essence of how a report is written will not differ. Further, life after the University of Michigan will not involve taking tests, but there will be many reports to write, for which this document is a guide. Read this document carefully before starting the first few reports you write. After the report is written, use this document as a checklist.

GENERAL REQUIREMENTS

1. All human factors/ergonomics reports will follow this format unless otherwise stated. If there are conflicts between the requirements in Superman and those you may see elsewhere (in another class, in a book, etc.), the requirements of Superman take precedence for this class.

2. THE GOAL IS TO PROVIDE ENOUGH INFORMATION IN THE REPORT SO THAT A REASONABLY KNOWLEDGEABLE PERSON COULD DUPLICATE YOUR METHODS AND RESULTS WITHIN THE LIMITS OF STATISTICAL ERROR. (Scientists refer to this as "publicly replicable.") This includes engineers without specific training in human factors and no knowledge of the particular experiment. (The instructor is not the intended audience.) If in doubt about how to present something and specific instructions do not appear in this document or the course pack, rely on this general rule for guidance. Think about the decisions that might be made based on the report and what the reader should be told.

3. As you consider how to present information, bear in mind that "user friendliness/reader friendliness" is important.

4. The entire report will be written in English in a sentence format. Each sentence should contain a subject and a verb, and should be a complete thought. Ideas should occur in a logical sequence and be connected. Use plain, clear writing. "Flowery" language has no place in technical reports.

5. Since the report describes a completed experiment, use the past tense (for example, "Participants responded by...").

6. Write in the third person (he, she, they), not the first person (I, we). For example, use "Subjects 1 and 2 made fewer errors," not "We made fewer errors."

7. Make sure the pages of the report are securely fastened together. Pages from reports held together with paper clips or stuffed in folder pockets are easily lost (or recycled).

8. Said the king to the queen, "If thou knowest not each page by name, number thy pages." The cover does not get a number, but pages in the appendix are numbered and are included in the total page count.

9. Consistently use one set of units, preferably metric (kilograms, meters, etc.), not English units (pounds, feet, inches), as the primary measurement system. To help the reader unfamiliar with the metric system, English units may also be shown in parentheses. If the equipment presented the data in English units, then provide the metric units in parentheses.

10. When referring to numbers, the accepted rule is to express them as words if they are the first word of a sentence (since digits cannot be capitalized). Elsewhere, use digits if any value in the sentence is 10 or more, or if more than 3 values are in the sentence. Personally, I always use digits except for the first word of a sentence because the text is easier for non-native English readers to understand. Choose either way, but be consistent.

11. Make reports faxable.
Do not use color or gray scale (or gray backgrounds on figures). For some fax machines gray may become black or white, resulting in no contrast with other graphical or text elements.

12. The Engineering College assumes that students have learned how to use Word, Excel, PowerPoint, and a drawing application in high school (or before). If you need a demo of a drawing application, contact your instructor. If you do not know how to use Word, arrange to meet with either your instructor (along with any other students who may lack such knowledge) well before the first assignment is due.

13. Twelve point type is preferred for the body of the report, but 10 point might be acceptable in some cases. Larger sizes may be desired for the cover. Finally, be sure the text in figures and tables is big enough to be easy to read.

DETAILED REQUIREMENTS

1. Every laboratory report will have a cover. Use laser printer paper for the cover, not cardboard or plastic. On the cover will appear:

• the experiment title

Provide a title that is specific, complete, and brief. (For example, use "High Temperatures and Short Term Memory Loss" and not "Heat" or "The Effects of Artificially Induced High Temperature on Recall in a Brown-Peterson Task in a Laboratory Setting.") The best title, the title you should use, may not be the title given in the course pack. When in doubt, select titles that are specific over those that are brief. Imagine the reader is viewing a list of titles and sees yours. Can they determine, using only the title, if your report pertains to the question they are trying to answer?

Experience has shown that creating the title should occur after the report is basically done. Consider the following approach.

a. The authors should brainstorm (that means no criticism) to identify key words that should appear in the title. The conclusions section is a good source for such words. Think about the audience. Is the audience primarily researchers (interested in methods), practitioners (interested in conclusions), or a mixture of both?

b. From the list of potential key words, identify those that are most important.

c. Construct several possible titles using the key words and select the best one. Optional structures include asking a question ("How does age ..."), giving an answer, describing the factors examined ("Effects of ..."), or two phrases separated by a colon. Pay special attention to the first word (often used to search alphabetized lists), what happens if the title is truncated, and the use of obscure words in the title.

• the author(s) - the name of the student or students who wrote it

• the instructor's name

• the course title, section number (including the University and Department name ... do not abbreviate), and the meeting day and time (for courses with multiple sections)

• the date submitted

The section information is strictly for the grader's and instructor's convenience. More than once students have "lost" reports (for example, at the library), but, because the cover page was complete, their reports managed to wander to the appropriate faculty office.

2. Every laboratory report more than eight pages long (not counting the cover) will have a Table of Contents. It will be the first page after the cover page. On it will appear the words "Table of Contents" along with all of the section and subsection names and the pages where they start. For the convenience of the grader, put the initials of the authors of each section named in the Table of Contents.
Do not include Figure or Table numbers as Table of Contents entries.

3. Every laboratory report will have four sections entitled "Introduction," "Test Plan," "Results," and "Conclusions," and may contain two sections entitled "References" and "Appendix," in that order.

4. The Introduction must describe (a) the problem, (b) why the problem is important, (c) the purpose or object of the study (the questions addressed), and (d) previous research. Be specific. (For students in IOE 333 and 334, reviewing previous research is not required.) Often example applications strengthen your arguments. The potential benefits of the research should also be mentioned.

Remember, you are writing for a professional audience, so the objectives are the scientific objectives (e.g., "Are the lighting levels ... adequate?"), not the expected learning/educational objectives (e.g., learn how to use light meters). Stating the objectives as who, what, when, where, why, or how questions is essential.

The research objectives, the questions to be addressed, may be explicitly stated in the assignment description. At other times, an assignment may only generally describe the questions, in which case refinement of the questions is necessary, or, because of the way the experiment was executed, some modification of the stated question or questions is needed. Often, after carrying out an experiment, something may be learned regarding a question that was not part of the original questions to answer (for example, how a test protocol might be improved). Adding those questions to the introduction is appropriate.

An example introduction section follows:

INTRODUCTION

Manufacturers are constantly seeking ways to reduce costs so as to remain in business. A significant fraction of those costs, especially for assembly operations, is related to labor, in particular the cost of hiring employees and employee productivity once hired. Hiring decisions are often made based on selection tests, tests that must be equitable to meet legal requirements and reliable for their intended purpose. To date, there have been few selection tests whose scores have been highly correlated with on-the-job evaluations (Sanders and McCormick, 1987). Given that it costs $10,000 to train each new worker, effective worker selection can lead to significant cost reductions for an employer.

Because human tests can be costly, Smart and Not (1971) have proposed using animals of different intelligence levels instead of people to evaluate the validity of selection tests. It is suspected, however, that differences between people and animals may decrease with practice. Others have proposed that it would be informative to test those who are allegedly very intelligent (Successful Superheros, 1968; Smith, Smith, Smith, and Smyth, 1991). In utilizing them as subjects, care must be taken to avoid their use of special powers (Pittenger, 1983).

This report examines several new tests that may overcome those limitations. Three questions were examined: Are the overall results for animals, regular employees, and superheros the same? Are the results for individuals consistent from trial to trial? Do the test scores correlate with on-the-job performance?

A good report will have a strong connection between (1) the questions or issues raised in the introduction, (2) the test method described, (3) the results addressing those questions, and (4) the topics in the conclusions section.

5. The Test Plan section should describe how the experiment was conducted.
The Test Plan section will consist of 3 subsections: "Test Participants," "Test Materials and Equipment," and "Test Activities and Their Sequence." (Most professional journals use the terms "methods," "subjects," "apparatus," and "procedure" for these sections. The substituted terms should be easier for students to understand.) Within the Test Plan section, choose an order for these three subsections that facilitates the flow of ideas.

6a. The Test Participants subsection will state who participated in the experiment, how many subjects there were, and how they were selected (volunteers? paid?). Selection procedures may relate to the motivation of subjects, which in turn could affect the quality of the data collected. Major demographic characteristics such as age and sex are always included, and key physical descriptions (e.g., handedness) and incapacities (e.g., visual defects) may be included. If a human characteristic might have an effect on the data, then mention it. So, for example, if the experiment concerned sensitivity to various odors (a smelly study?), you probably would not mention how many participants were left-handed (unless they responded by pressing buttons), but you would verify that participants did not have olfactory dysfunctions.

Use text, not tables, to describe the participants. Except where participants are well known and what they are known for is important to the experiment (e.g., a study of the throwing biomechanics of baseball players), refer to participants by number (preferred) or by their initials, not their names, to assure privacy. If there are more than four subjects, consider using ranges and means to describe them instead of listing characteristics for each subject.

6b. The Test Materials and Equipment subsection will describe all of the materials used. In brief, if someone was going to order, build, or reconstruct the materials and equipment, what would they need to know? How can graphics be used? Reportable items include the size, color, contrast, and luminance of materials on slides or in test booklets and the frequency/intensity composition of auditory stimuli. If complex visual stimuli are used, examples should be included as figures. Information about stimuli (for example, font) is more important than equipment specifics (for example, which computer was used). The materials are more important than the equipment in most cases.

How the test hardware was arranged and connected should also be identified (possibly using a figure). Sometimes a logical arrangement is sufficient, but if viewing or listening distance is an issue, then a dimensioned drawing is necessary. Also mention the software used and provide source information. If there is a user's manual for the package, give a citation for it. Use paragraphs, not an outline.

Be sure to identify the manufacturer, model and/or model number, name, size, and arrangement of each piece of equipment. Since all equipment having the same model number (or catalog number) is assumed identical, do not include serial numbers. A serial number is generally not meaningful to others. Depending on the situation, the serial number may either uniquely identify that item (every item made by that company has a unique serial number) or do so only in combination with the model number. So a Sears model xyz 8-inch circular saw, serial number 125, may be the 125th copy of that saw. Think about where you might put some of the details on a figure to shorten the explanatory text.
In general, if you get the same model number item (or a similar item), you should be able to satisfy the public replicability requirement.

6c. The Test Activities and Their Sequence subsection should summarize each step of the test. It should describe the experimental tasks (including stimulus duration, intertrial intervals, the nature of feedback to participants, etc.), the exact instructions to participants for key phrases (shown in quotes), the order of experimental conditions (e.g., which trial blocks occurred first, how many trials were in each block, etc.), and the method of randomization. One strategy for keeping this section short is to put some of the details describing each test block in a table. Where a standard procedure was followed (e.g., a manufacturer's calibration instructions), summarize it. Please do not rehash handouts or manuals or describe the process by which the protocol was developed ("First, we brainstormed about the problem. Then...") in any detail. You may find it helpful to include critical phrasing given to subjects in quotes.

In describing the design of an experiment, use terms consistent with practice in the behavioral sciences. For the "button pressing" experiments use the following terms:

trial - the presentation of usually 1 item of information (slides, a tone, etc.) to which a person immediately responds
block - a group of trials that occurs in a continuous sequence, often 10-20 trials
condition - one or more blocks of trials over which some parameter of interest does not change (for example, speedometer type or veiling luminance)

The Test Activities and Their Sequence section should say what you did and how you did it. The most common mistakes students make in the Test Plan section, especially in the initial reports, relate to not providing enough detail. If you are unsure if an item should be included, include it.

A sample Test Plan section follows:

TEST PLAN

Test Participants

There were two groups of participants, all of whom worked part time in the Hood Ornament Assembly Department of Global Motors for 1 year. None of the participants had experience in assembling puzzles.

The "superheros" group included 1 right-handed male humanoid from the planet Krypton (S). He was reported to possess X-ray vision. He was reputed to be "faster than a speeding bullet, more powerful than a locomotive, and able to leap over tall buildings in a single bound." He was a volunteer and not paid for his time.

The "animals" group consisted of 1 male flying squirrel (R) and 1 male moose (B), both of unknown ages. The squirrel was right pawed; the moose, right hoofed.

Test Materials and Equipment

The test material was a Frostbite Enterprises model IC 3-dimensional puzzle. The puzzle was a full-size replica of the famous Kirwood Derby. Solution times were measured using a Mattel Toy Company model JLM "Just Like Mommy's" hourglass egg timer. The parts assembly sequence was recorded by the experimenter using a custom program (Computer-Nerd, 1986) running on a CRAY-1 computer under the Cray Timesharing Operating System. Supervisor's evaluations were made using the Global Motors Supervisor's Standard Annual Appraisal, Promotion Evaluation, and General Review Form (GMSSAAPEGRF) (Global Motors, 1991).

Test Activities and Their Sequence

The participant was seated on a large, open, raised platform located in the center of a main traffic intersection in downtown Gotham City.
Data were collected on clear, warm, sunny days (approximately 20 degrees C), except for the superhero's last trial (snowing, -5 degrees C). Before starting each trial, the disassembled puzzle was placed in a drum that was shaken to randomize the arrangement of the pieces. When that process was complete, the participant was instructed to "assemble the puzzle as rapidly as possible" and to "ignore the crowd's jeers." The contents of the drum were then dumped on a table before the participant, the timer was started, and he began. When the participant correctly assembled all of the pieces, the timer was stopped. The pieces were then placed back in the drum to be shaken. The participant was given a 5-minute break before beginning again. Each participant assembled the puzzle 5 times.

Supervisor's ratings of on-the-job performance were obtained from the GMSSAAPEGRF database from Global Motors. Only the end of the year evaluation for 1999 was available.

7. The Results section summarizes the data collected and, as was noted earlier, must thoroughly address each of the objectives, issues, or questions listed in the introduction. If only a few data points were collected and the purpose of the study was to collect those numbers (as in the case of the photometry experiment), then the data belong in the results. If a limited amount of data was collected and it does not appear in figures or tables, then the raw data may go in the appendix. If reams of data were collected (as in the case of most of the computer-controlled experiments), the raw data should not be included in the report. If any data were discarded before analysis, provide a well-supported rationale.

Usually included in the results section are tables and figures. They are an excellent way to present the data when a simple mean in the body of the text is inadequate. When several conditions are being compared, they should appear in the same table or figure. Guidelines on figure design appear in Gillan, Wickens, Hollands, and Carswell (1998). When you refer to a figure or table, point out what you want the reader to observe. Don't just say, "the results are in Figure 1." (Notice the F in Figure and the T in Table are capitalized.) Do not refer to tables as "Chart 1" or figures as "Plot 1," "Graph 1," or "Exhibit 1." Figures and tables always appear on the page on which they are cited or, if adequate space is not available, on the next page. Use cut and paste functions to insert information from Excel, MacDraw, and other applications into Word so pages are properly numbered.

Tables are arrays of data with headings to identify each entry. Every table should have a number and title that appears above the table. Exact duplication of tables in the course pack may not be the best way to present the results. An example table (Table 1) follows:

Table 1. Performance of Participants in Test Battery (Means)

                                                        Participant
Test (units)                              JK     RN     GF     JC     RR     GB
Garbonzo Bean Counting (beans/hr)         3205   2610   2885   3195   2950   3100
Long Distance Run (furlongs/fortnight)    175    75     80     230    100    200
Frisbee Toss (meters)                     120    5      -5     120    40     100

Figures are graphic illustrations of the data and include plots, histograms, sketches, pictures, and drawings. Every figure must have a number and title, though for some unknown reason, that information appears below the figure, the opposite of tables. If there are 3 items or fewer, showing them as a figure (e.g., a histogram with 3 bars) is a waste of space. Figure axes must be labeled with legible text and units must be given.
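To make these figure requirements concrete, the following is a minimal, hypothetical sketch of how such a plot could be produced. It uses Python and matplotlib purely as an illustration (an assumption on the editor's part; the course assumes Excel or a similar tool), and the data values are invented to roughly match the sample results shown later in this document.

    import matplotlib.pyplot as plt

    # Hypothetical solution times (hours) per trial, loosely patterned on Figure 2.
    trials = [1, 2, 3, 4, 5]
    superhero = [9.5, 5.0, 4.5, 4.0, 9.5]
    moose = [4.5, 4.0, 3.5, 3.2, 3.0]
    squirrel = [4.0, 3.5, 3.0, 2.8, 2.5]

    fig, ax = plt.subplots(figsize=(5.0, 3.5))  # modest size; do not use a whole page
    # Black only (faxable); each line coded by line style and marker, and labeled.
    ax.plot(trials, superhero, color="black", linestyle="-", marker="o", label="Superhero")
    ax.plot(trials, moose, color="black", linestyle="--", marker="s", label="Moose")
    ax.plot(trials, squirrel, color="black", linestyle=":", marker="^", label="Squirrel")
    ax.set_xlabel("Trial")                   # axis labels...
    ax.set_ylabel("Solution Time (hours)")   # ...with units
    ax.set_xlim(0, 6)
    ax.set_ylim(0, 10)
    ax.legend(frameon=False)
    fig.tight_layout()
    # The caption ("Figure N. ...") goes below the figure in the report text, not inside the plot.
    fig.savefig("puzzle_solution_time.png", dpi=300)

Whatever tool is used, the same checklist applies: are the axes labeled with units, is every line coded and labeled, and will the figure survive a fax machine?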
Choose a figure style that best represents what you want to show. Often this is not the default output from Excel. For distributions, histograms and cumulative plots are appropriate. For showing the relationship between variables, scatter or line plots are appropriate. If you select a jagged line plot, limit the number of dependent measures shown (the items for which jagged lines are shown) to 6 (or fewer) and make sure each line is coded (dashed, solid, etc.) and/or labeled. With more than 6 lines, trends are difficult to spot and figures become busy. Jagged line plots should be used when data points are connected in time (trial 1, 2; minute 1, 2; etc.), though they often are appropriate in other situations.

Be economical in developing figures. Think about where placing multiple variables on the same figure may aid in seeing new relationships and save space. If multiple figures are used, make sure the axes are consistent. (So 2 figures referring to the same variable should have the same spacing and range.) To avoid wasting paper, do not use a whole page for a figure. A sample figure (Figure 1) follows:

Figure 1. Test Furniture

Averages (mean, median, mode), ranges, and measures of variability are often worth including. Since most 333, 334, and IOE 436 students have not had statistics, they need not provide results from t-tests, Analysis of Variance, or regression analysis. Some statistics are expected of IOE 433, 534, and 535 students. Unless computations are unique or new, do not include any calculations in the report. Always provide the formula used in making a computation, except for a common statistic such as a mean. For example: "the Pluto Performance Pace (PPP) was computed using formula (1)....etc."

             5
    PPP  =   ∑  (t x pop_i x h)                                   (1)
            i=1

where
    PPP   = Pluto Performance Pace
    t     = current temperature in Nome, Alaska
    pop_i = population of the ith of the 5 largest cities in the U.S.
    h     = number of hairs on Superman's head

When reporting raw data and calculations, be wary of the number of significant digits used. The sum of one and one is not 2.000000 even though your calculator suggests otherwise. Finally, in describing the findings, do not force the reader to memorize abbreviations (for example, display #1, condition A). Spell it out (for example, the Ford Speedometer).

A sample Results section follows:

RESULTS

Individual Differences

Figure 2 shows the relationship between puzzle solution time and practice for each participant. The mean time for the superhero to solve the puzzle was 6.5 hours. Public (1973) reported that "average citizens" took only 1.7 hours to perform the same task. Similarly, the mean times for the moose and squirrel were also less than that of the superhero (3.6 and 3.2 hours, respectively).

[Figure: solution time (hours) versus trial for the Superhero, Moose, Squirrel, and Average Citizens (Public, 1973)]
Figure 2. Puzzle Solution Time Versus Practice

Changes with Practice

As shown in Figure 2, the performance of average citizens showed steady improvement with practice, as did that of the squirrel and, to some extent, the moose. The improvement in superhero performance was more substantial, dropping from 9.5 to 4.0 hours by trial 4. However, performance for the superhero on trial 5 was the same as trial 1, 9.5 hours. (Trial 5 began after almost 24 hours of continuous effort. The subject was quite tired at this point.) The superhero in this study chose the trial and error method for solving the puzzle.
It was less expedient than the more sophisticated analyses of average citizens. The procedures used by the moose and squirrel defy description.

Correlation of Test Scores with Performance

Figure 3 shows the relationship between puzzle solution times and supervisor's performance ratings. There was no relationship between the two sets of data.

[Figure: performance rating versus solution time (hrs) for the Superhero, Average Citizens, Squirrel, and Moose]
Figure 3. Puzzle Solution Times Versus Supervisor's Performance Ratings

8. The Conclusions (note plural) should draw out key findings contained in the previous results section and emphasize the implications of those findings. The most common reasons for low scores on the first report following the Superman format are (1) the questions to be addressed are not stated explicitly in the introduction, or (2) the questions were not thoroughly addressed in the results and conclusions sections.

The Conclusions section should not be a simple restatement of the results nor a place to present additional results. For example, in the results section one might say, "the mean response time for the low luminance condition was 305 milliseconds, for moderate luminance was 375, and for high luminance it was 424." In the conclusions one might say, "Increasing luminance led to improved performance. Therefore, in designing widgets...." One rarely refers to tables or figures in the Conclusions section.

General recommendations also appear in the Conclusions section, and so do comments on the merits of the study. Avoid personal comments and petty fault finding. ("I did not learn anything from this study.") Sometimes, the outcome of an experiment may be that one cannot determine if there are differences between conditions. Comments about the experimental method are desired. Where a critique is offered, realize that one never has infinite funds, subject time, or the ideal equipment for an experiment. Are the sources of error or flaws in the method or data serious enough to discount the findings? If there are any, support your concerns with data or concrete evidence, not speculation or a laundry list of possibilities. For example, let's imagine you were examining the ease of reading 2 displays, but you believe the data from 1 of the displays were flawed because the display came loose from its mounting and vibrated at the end of the experiment. If that is the case, then you should note exactly when the change occurred, compare the 2 situations, and provide quantitative data on the amount of display vibration in the 2 situations. You should not draw conclusions about situations you did not test unless you have cited supporting research literature or similar strong evidence.

A sample Conclusions section follows.

CONCLUSIONS

According to these data, the results for the three groups were not the same. Average citizens solved puzzles more rapidly than superheros, both the first time a puzzle is attacked and after considerable practice. Their performance was also superior to that of the moose and the squirrel, though the difference between average citizens and the two animals was constant. This suggests that the performance of average citizens can be predicted using adjusted scores from the moose and squirrel.

Average citizens had better performance because the solution method they chose was more appropriate. In addition, the unusual experimental conditions of this study should be taken into account (see Farout, 1983).
It is the experimenter's view that the sub-zero temperature (the subject was clad only in tights and a light cape) and swirling snow present during the experiment served to degrade the subject's ability to handle the puzzle pieces and obscured the partially assembled puzzle. The subject also complained of a lack of familiarity with the test materials, a situation that did not occur in the Public (1973) study. "It would have taken me a lot less time to put the puzzle together if it looked a little more like Lois Lane."

As was indicated above, there were trial-to-trial differences, especially for the superhero, whose improvements with practice were substantial, except for the last trial. Beyond that, the results were consistent from trial to trial.

Finally, there was no correlation between the puzzle test score and on-the-job performance (as indicated by supervisor's performance ratings). Therefore, the Puzzle task should not be used to select workers for assembly tasks. These results might be challenged on the basis that the test conditions were not representative of the working conditions for which the findings are intended to be applied. Certainly, the difficult test conditions led to poorer performance. However, there is no reason to believe that there is an interaction between solution time, performance rating, and test condition difficulty, so the same conclusions about the merits of the Puzzle task would be reached under more favorable test conditions.

9. Documents are listed in the References section so they can be easily retrieved by the reader. Include only documents cited in the text. (Most are in the introduction.) So, if you consulted a book but did not mention it in your report, do not list it. When referring to a source in the text, give the author and date in parentheses (e.g., McCormick and Sanders, 1982). If taking a quote word for word from a text, the citation in the text should give the page number of the citation (e.g., McCormick and Sanders, 1982, p. 93) and quote marks should surround the text.

In the reference section, list entries alphabetically by author or lead item. To facilitate retrieval, make sure your listings are complete. For books, the author(s), title, place of publication, publisher, and date of publication should be provided in that order. Web sites are too new for common practice to be established, so follow the principles of easy retrieval and consistency with other references. This suggests listing the author of the site (or the organization it represents), the date of the retrieval, the page or site name, and the web site address. Sample references follow:

REFERENCES

Computer-Nerd, A. (1985). "User's Manual for the IDSFA (It Doesn't Stand for Anything) Data Recording Program." Ann Arbor, MI: Galactic Software.

Farout, O.W.O.W. (1983). Human performance and smoke. Lecture presented in Chemical Engineering 256 (Toxicology), Holy Cow University, Plasticville, MI, February 13, 1983.

Gillan, D.J., Wickens, C.D., Hollands, J.G., and Carswell, C.M. (1998). Guidelines for Presenting Quantitative Data in HFES Publications, Human Factors, 40(1), 28-41.

Global Motors (1991). Global Motors Supervisor's Standard Annual Appraisal, Promotion Evaluation, and General Review Form (GMSSAAPEGRF) (GM form W399X321-zzz/2 revision 6a.1), Detroit, MI: Global Motors, Personnel Department.

McCormick, E.J. and Sanders, M.S. (1987). Human Factors in Engineering and Design (6th ed.), New York: McGraw-Hill.

Pittenger, J.B. (1983).
On the plausibility of Superman's x-ray vision, Perception, 12, 635-639.

Public, J.Q. (1973). People solve puzzles, Puzzle Review, 14(2), 147-148.

Smart, I.M. and Not, U.R. (1971). How to spot a dummy, Journal of Experimental Brain Research, 30(1), 82-88.

Smith, A.A., Smith, B.B., Smith, C.C., and Smyth, D.D. (1991). Batman is not birdbrain, Brains, Brains, Brains, 3(3), 1-2.

Successful Superheros (1950). Gotham City Enquirer, February 31, p. 1.

UMTRI Human Factors Division Driver Interface Group (December, 2002). Driver Interface Group Home Page (http://www.umich.edu/~driving).

Finally, when you refer to a source that cites another, make sure your reference indicates what you examined. So, if the text said "...Aardvark (1982)...," the reference should be

Aardvark, A. (year). "title," Journal, volume(number), pages

if you looked at Aardvark, or

Aardvark, A. (1982), etc., pages, as cited in McCormick, E.J. and Sanders, M.S., Human Factors in Engineering and Design, etc.

if you did not.

10. The "Appendix," if included, will contain all secondary information and usually the raw data. It will not serve as a dumping ground for figures and tables. They belong in the text.

FOR MORE INFORMATION

American Psychological Association (1994). Publication Manual of the American Psychological Association (4th ed.), Washington, DC: American Psychological Association.

Benford, H. (1967). The Literate Naval Architect (3rd ed.), Ann Arbor, MI: Department of Naval Architecture and Marine Engineering, University of Michigan, November.

Human Factors and Ergonomics Society (1992). Author's Guide, Santa Monica, CA: Human Factors and Ergonomics Society.

Strunk, W., Jr. and White, E.B. (1972). The Elements of Style (2nd ed.), New York: Macmillan. (A 78 page masterpiece that sells for about $2.)

Turabian, K.L. (1973). A Manual for Writers of Term Papers, Theses, and Dissertations (4th ed.), Chicago, IL: The University of Chicago Press. (This is probably what you used in high school. There are major differences between her style and styles used for technical writing.)

SAMPLE JOURNAL ARTICLES

If you are still uncertain what to do, look at these articles for ideas. If that does not prove to be satisfactory, go to the UMTRI Library and look at the reports and papers Paul Green has written.

Miller, G.A. (1956). The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information, The Psychological Review, March, 63(2), 81-97. (Violates some of the rules but is well written.)

Roscoe, S.N. (1979). When Day is Done and Shadows Fall, We Miss the Airport Most of All. Human Factors, 21(6), 721-731.

Smith, S.L. (1979). Letter Size and Legibility. Human Factors, 21(6), 661-670. (This paper does not follow all the rules but it is very interesting.)

SOME WEB RESOURCES

Results of a Yahoo search using "technical writing" and a Google search using "publication style guides."

The Mining Company (2000). Guide to Web Resources for Technical Writers (http://techwriting.miningco.com/arts/techwriting/).

Society for Technical Communication (2000). Society for Technical Communication home page (http://www.stc-va.org/).

Dewey, R.A. (2000). Welcome to Psych Web (http://www.psychwww.com/).

Dewey, R. (2001). APA Publication Manual Crib Sheet (http://www.wooster.edu/psychology/apa-crib.html).

University of Chicago (2000). Citation and Style Guides for Psychology (http://www.lib.uchicago.edu/e/su/psych/citing.html).

LAST STEP

11. The last task of an author before submitting a report is to proofread it carefully.
A wise decision is to have someone unfamiliar with the report's content read it as a final check.