Project guide - The Enabled Web

Guide to the Low Vision Research Project
Understanding and operating the system
Tom Jewett
Research team
Principal investigator: Dr. Wayne Dick; research advisor: Dr. Kim Vu; research
associate and webmaster: Tom Jewett; researchers: Blake Arnsdorff, Elyse Hallett,
Zach Roberts, John Sweet; research associate, parts 2 and 3: Dr. Alvaro Monge.
Research conducted at the Center for Usability in Design and Accessibility (CUDA),
California State University, Long Beach.
Version date: 9/03/14
Overview
Goal: In part 1 (completed) and part 2 (current), we compare two methods of text
enlargement used by Web readers with low vision (visual acuity worse than 20/60).
The HTML pages are identical for both methods.
• The first is a leading commercial screen enlarger, which is provided by the CSULB campus to low-vision readers. This product simply enlarges the screen content bit-by-bit, with no manipulation of the page layout.
• The second is text enlargement with the ordinary controls provided by modern browsers (we are using Firefox), coupled with accessible design techniques that can be used by all readers, low vision or not. These techniques include elements of so-called "adaptive design" and "responsive design," and also go beyond the "sufficient techniques" recommended by the W3C.
• In part 3 (future), we will use the adaptive design to compare the reading stamina of low vision participants using "normal" Web styles versus styles that are customized for each participant in aspects such as color and layout.
Tasks: each study participant completed six tasks (part 1) using the screen enlarger
and six tasks using browser controls. Tasks were taken from standardized sources
to permit statistically valid comparison with larger populations. There are four task
types and two different task sets for each type:
• Data entry, using a typical HTML form that might be found on a retail Web site. Participants were given fictitious data to enter.
• Math (graph interpretation). There are two of these tasks per condition.
• Reading comprehension. There is one short and one long reading per condition. Screen design of these tasks (in part 1) exactly mirrored the standard printed page layout, using numbered lines for reference.
• Proof-reading, using a standardized test given to U.S. Postal Service applicants.
• In parts 2 and 3, only the reading tasks will be used. The page layout is modified specifically for on-screen use by all readers, based on information gained from part 1.
Variables compared:
• Task completion time (quantitative)
• Accuracy (quantitative)
• Qualitative information about the lab experience will be collected in all parts; part 2 will also include:
  o Phase 1: demographics
  o Phase 2: academic survival interview
  o Phase 3: in-lab quantitative study as in the other parts
Task balancing: In part 1, each of the 16 participants ran a unique sequence of tasks; the sequences were balanced in three independently-varying dimensions (a sketch of the resulting 16 sequences follows this list):
• Condition: screen enlargement first, browser controls second, or vice versa
• Task set: set one first, set two second, or vice versa
• Order of tasks within each task set (data entry, math, comprehension, proof-reading)
• In parts 2 and 3, only the reading task will be used; balance will be by condition and task set only.
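To make the balancing arithmetic concrete: two condition orders, two task-set orders, and four within-set task orders give 2 × 2 × 4 = 16 unique sequences, one per participant. The sketch below (in PHP, like the rest of the system) simply enumerates those combinations; the four task orders shown are a hypothetical Latin square, since the actual orders used in part 1 are not listed in this guide.

<?php
// Illustrative only: enumerate 16 balanced sequences
// (2 condition orders x 2 task-set orders x 4 task orders).
// The four task orders below are a hypothetical Latin square;
// the study's actual orders may differ.
$conditionOrders = [
    ['screen enlarger', 'browser controls'],
    ['browser controls', 'screen enlarger'],
];
$taskSetOrders = [
    ['set one', 'set two'],
    ['set two', 'set one'],
];
$taskOrders = [
    ['data entry', 'math', 'comprehension', 'proof-reading'],
    ['math', 'comprehension', 'proof-reading', 'data entry'],
    ['comprehension', 'proof-reading', 'data entry', 'math'],
    ['proof-reading', 'data entry', 'math', 'comprehension'],
];

$participant = 1;
foreach ($conditionOrders as $conditions) {
    foreach ($taskSetOrders as $sets) {
        foreach ($taskOrders as $tasks) {
            printf("P%02d: conditions [%s], sets [%s], tasks [%s]\n",
                $participant++,
                implode(', ', $conditions),
                implode(', ', $sets),
                implode(', ', $tasks));
        }
    }
}
// Prints 2 x 2 x 4 = 16 unique participant sequences.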
Participants: participants in this study are volunteer students from the CSU Long Beach campus. Each one is compensated with a small gift card for on-campus purchases. To achieve a sufficient sample size in part 1, the participants were fully sighted; this part also served to refine our testing materials and methods.
• Low vision conditions were simulated by using a mini-tablet display placed at a distance representing the self-reported extreme limit of the participant's visual acuity for a non-enlarged page.
• Participants had the use of a standard keyboard and mouse connected to the display, as would a low vision reader using a large screen.
• Task pages were specifically designed to present typical challenges to reading that are encountered in everyday Web use by low vision readers.
• Parts 2 and 3 will be performed with a smaller sample of true low vision participants, using the visual environment that is currently provided to them on campus.
Presentation sequence
Guided by members of the research team, each participant accomplishes a sequence
of activities, which we term a "run." Much of this sequence is facilitated by a custom-designed computer program in which a web-based front end communicates with a
back-end database. Both of these will be explained in more detail later in this
document.
• In a conference room environment, the participant completes necessary paperwork such as consent forms, and the researcher provides an overview of the study.
• Moving to the research test station, the researcher enters starting information for the run.
• The participant is then shown an orientation page, and the researcher will adjust the display position and measure the participant's visual acuity limit. The researcher will also configure the tablet (part 1) and provide instruction on the use of keyboard controls needed for the starting condition (screen enlarger or browser controls).
• The researcher enters the measured visual acuity limit on the starting page, then begins the actual run.
• For each of the first-condition tasks, pages will be presented in sequence as described above. Each page will begin with a popup, ensuring that the participant is ready to begin (at which time the completion "clock" will start). Completion time is recorded when the participant clicks (or presses Enter) on the "Next" button.
• After the first tasks are completed, a "Short Break" page is presented, from which the researcher will again show the orientation page, re-configure the tablet (part 1), and provide instruction on controls needed for the second condition (browser controls or screen enlarger).
• The second sequence of tasks proceeds as did the first. When the "Finish" page appears, the participant is thanked and excused; the researcher may optionally begin another run.
Publications
Analysis of part 1 data has resulted so far in two publications:
• John Sweet et al., "A Preliminary Investigation into the Effectiveness of Two Electronic Enlargement Methods", submitted to the 30th International Technology & Persons with Disabilities Conference.
• Elyse Hallett et al., "The usability of magnification methods: A comparative study between screen magnifiers and responsive web design", invited paper for HCI International, 2015.
Future publications will of course include findings from subsequent parts of the
study. Results will also be communicated to relevant policy-making organizations.
Computer variables
This and the following section are intended for programmers who wish to
understand the automated working of the system. The variables listed here are used
by both the program and the database. For a complete description of the database,
please see our Data Dictionary and Data Diagram. Also available is the complete
MySQL code used to set up all tables and all values in static-information tables.
• Setup variables (set by the researcher in the setup.html page and stored in the lv_runs table):
  o Researcher ID: runby
  o Run type: runtype (Practice, Research, or Single Page)
  o Participant ID: partid
  o Presentation sequence: sequence
  o Single page name for testing: pageSelect
• Setup variable (set automatically by the database in the lv_runs table):
  o Run ID: runid (a unique identifier for each run of any type)
• Hidden summary variables set by JavaScript on each task page and recorded in the lv_runpages table (see the sketch after this list):
  o Completion time: timeOnTask
  o Number of keystrokes: keycount
  o Number of mouse clicks: mousecount
  o Actual sequence of keys pressed: keystrokes
• Hidden variable built in (hard-coded) on each page and recorded in the lv_runpages and lv_runpageanswers tables:
  o HTML page name: thispage
• Hidden variables added by the sequencing program to each page; used by the program to store returned values and find the next page in sequence:
  o Run ID: runid (from above)
  o Presentation sequence: sequencenmbr (sequence from above)
  o Page order in the sequence: pageorder (from the lv_sequencepages table)
  o Run type: runtype (from above)
• Visible variables on each task page:
  o Question answers (names vary by page; values are stored in the lv_runpageanswers table; accuracy is computed by comparison with the lv_pageanswers table and the question count in the lv_pages table).
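As an illustration of how these values travel from a task page into the database, the following minimal sketch stores one page's hidden summary and sequencing variables in the lv_runpages table. It is not the project's actual code: the PDO connection details and the column names are assumptions, and only the variable and table names listed above come from this guide.

<?php
// Minimal sketch (not the project's actual code) of storing one task page's
// hidden summary variables in lv_runpages. Column names are assumed to
// match the variable names listed above.
$db = new PDO('mysql:host=localhost;dbname=lowvis', 'user', 'password'); // placeholder credentials

$stmt = $db->prepare(
    'INSERT INTO lv_runpages
         (runid, thispage, pageorder, timeOnTask, keycount, mousecount, keystrokes)
     VALUES (?, ?, ?, ?, ?, ?, ?)');

$stmt->execute([
    (int) $_POST['runid'],        // run ID added by the sequencing program
    $_POST['thispage'],           // HTML page name hard-coded on the page
    (int) $_POST['pageorder'],    // page order within the sequence
    (int) $_POST['timeOnTask'],   // completion time, set by JavaScript on the page
    (int) $_POST['keycount'],     // number of keystrokes
    (int) $_POST['mousecount'],   // number of mouse clicks
    $_POST['keystrokes'],         // actual sequence of keys pressed
]);

// Accuracy is computed separately by comparing the visible answer fields
// (also submitted in $_POST) against lv_pageanswers for this page.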
Program flow
This section provides a natural-language summary of the operation of the lowvis.php script; a code sketch follows the outline below. Full text listings of the PHP code and the JavaScript code are also available.
• Read the name of the calling page
• First call (no page name) or call from the Finish page:
  o Output the Setup page
• All other calls to the script:
  o Call from the Setup page:
    ▪ Read setup variables
    ▪ Authenticate researcher (exit if not authorized)
    ▪ Store setup variables in the lv_runs table
    ▪ Read the unique runid generated by lv_runs
    ▪ Find (from the number in lv_sequencepages) and output the first task page content
    ▪ Output the hidden sequencing variables and end of page
  o Call from any other page:
    ▪ Read summary and sequencing variables
    ▪ For a single page run:
      ▪ Read and store task answers in lv_runpageanswers
      ▪ Compute accuracy
      ▪ Store summary variables in lv_runpages
      ▪ Output stored values as an HTML page
    ▪ For a practice or research (full) run:
      ▪ Read and store task answers in lv_runpageanswers
      ▪ Compute accuracy
      ▪ Store summary variables in lv_runpages
      ▪ Find (from the number in lv_sequencepages) and output the next task page content
      ▪ Output the hidden sequencing variables and end of page
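The outline above can be pictured as a single dispatcher. The sketch below is a simplified, self-contained illustration of that branching structure, not the actual lowvis.php listing: every function is a hypothetical stub, and the "setup" and "finish" page-name strings are assumptions; only the flow itself follows the outline.

<?php
// Simplified sketch of the dispatch logic outlined above.
// All functions below are hypothetical stubs standing in for real code.
function outputSetupPage()                { echo "setup page\n"; }
function authenticateResearcher($id)      { return $id !== ''; }   // real check queries the database
function storeRun(array $post)            { return 1; }            // INSERT into lv_runs; return generated runid
function storeAnswers($page, array $post) { /* INSERT into lv_runpageanswers */ }
function computeAccuracy($page, array $post) { return 1.0; }       // compare with lv_pageanswers / lv_pages
function storeSummary($page, array $post, $accuracy) { /* INSERT into lv_runpages */ }
function nextPage($sequence, $pageorder)  { return 'next-page'; }  // look up lv_sequencepages
function outputTaskPage($page, array $post) { echo "task page: $page\n"; } // content + hidden variables

$callingPage = $_POST['thispage'] ?? '';
$runtype     = $_POST['runtype'] ?? '';

if ($callingPage === '' || $callingPage === 'finish') {
    // First call (no page name) or call from the Finish page.
    outputSetupPage();
} elseif ($callingPage === 'setup') {
    // Call from the Setup page: authenticate, create the run, show the first task page.
    if (!authenticateResearcher($_POST['runby'] ?? '')) {
        exit('Not authorized');
    }
    $runid = storeRun($_POST);
    outputTaskPage(nextPage($_POST['sequence'] ?? '', 0), $_POST);
} else {
    // Call from a task page: record answers and summary values, then continue.
    storeAnswers($callingPage, $_POST);
    $accuracy = computeAccuracy($callingPage, $_POST);
    storeSummary($callingPage, $_POST, $accuracy);
    if ($runtype === 'Single Page') {
        echo "recorded values for $callingPage\n";   // single-page run: just show what was stored
    } else {
        outputTaskPage(nextPage($_POST['sequencenmbr'], $_POST['pageorder']), $_POST);
    }
}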
Download