Journal of Neuroscience Methods 222 (2014) 15–23
Contents lists available at ScienceDirect: Journal of Neuroscience Methods
journal homepage: www.elsevier.com/locate/jneumeth

Clinical Neuroscience

An initial validation of the Virtual Reality Paced Auditory Serial Addition Test in a college sample

Thomas D. Parsons a,∗, Christopher G. Courtney b
a Department of Psychology, University of North Texas, Denton, United States
b Department of Psychology, University of Southern California, Los Angeles, United States

Highlights
• We validate a Virtual Reality Paced Auditory Serial Addition Test (VR-PASAT).
• The VR-PASAT requires sustained attention at an increasingly demanding rate.
• The VR-PASAT is an attentional processing measure.
• The VR-PASAT provides unique information not tapped by traditional attention tasks.

Article history: Received 26 August 2013; Received in revised form 6 October 2013; Accepted 9 October 2013.

Keywords: Neuropsychological assessment; Ecological validity; Paced Auditory Serial Addition Test; Virtual environment

Abstract
Background: Numerous studies have demonstrated that the Paced Auditory Serial Addition Test (PASAT) has utility for the detection of cognitive processing deficits. While the PASAT has demonstrated high levels of internal consistency and test–retest reliability, its administration has been known to create undue anxiety and frustration in participants, with a resulting degradation of PASAT performance. The difficult nature of the PASAT may also decrease the probability that participants return for follow-up testing.
New method: This study is a preliminary attempt at assessing the potential of a PASAT embedded in a virtual reality environment. The Virtual Reality PASAT (VR-PASAT) was compared with a paper-and-pencil version of the PASAT as well as other standardized neuropsychological measures.
The two modalities of the PASAT were administered to a sample of 50 healthy university students between the ages of 19 and 34 years. Equivalent distributions were found for age, gender, education, and computer familiarity.
Results: Moderate relationships were found between the VR-PASAT and other putative attentional processing measures. The VR-PASAT was unrelated to indices of learning, memory, or visuospatial processing.
Comparison with existing method(s): Comparison of the VR-PASAT with the traditional paper-and-pencil PASAT indicated that both versions require the examinee to sustain attention at an increasingly demanding, externally determined rate.
Conclusions: Results offer preliminary support for the construct validity (in a college sample) of the VR-PASAT as an attentional processing measure and suggest that this task may provide some unique information not tapped by traditional attentional processing tasks.
© 2013 Elsevier B.V. All rights reserved.

The Paced Auditory Serial Addition Test (PASAT) is a serial addition task developed as an aurally mediated alternative to visually mediated assessments of attention processing (Sampson, 1958; Sampson and MacNeilage, 1960). The PASAT has been found to have robust correlations with tests of executive function (e.g., Stroop; Trails B; Card Sorting; Spikman et al., 2001; Sherman et al., 1997). Gronwall and Sampson (1974) extended the PASAT to clinical populations in the 1970s with emphasis upon the impact of traumatic brain injury (TBI) on speed of information processing.

∗ Corresponding author at: Clinical Neuropsychology and Simulation (CNS) Lab, Department of Psychology, University of North Texas, 1155 Union Circle #311280, Denton, TX 76203, United States. Tel.: +1 940 565 4329. E-mail address: Thomas.Parsons@unt.edu (T.D. Parsons). http://dx.doi.org/10.1016/j.jneumeth.2013.10.006
The PASAT also has been used to investigate the ways in which neurocognitive processing is affected by multifarious neurological conditions (e.g., mild traumatic brain injury, multiple sclerosis, chronic fatigue syndrome, lupus, hypoglycemia, renal transplant, depression, and schizophrenia; see Tombaugh, 2006 for a review). The typical administration of the PASAT involves the presentation of a series of single-digit numbers in which the two most recent digits must be summed. For example, if the digits ‘5’, ‘3’ and ‘2’ were presented, the participant would respond with the correct sums, which are ‘8’ and then ‘5’. Participants are required to respond prior to the presentation of the next digit for a response to be scored as correct. While studies have found evidence for the utility of the PASAT for neuropsychological assessment of multiple sclerosis (Kujala et al., 1995; Benedict et al., 2002), Parkinson’s disease (Dujardin et al., 2007), epilepsy (Prevey et al., 1998), systemic lupus erythematosus (Shucard et al., 2004), attention deficit hyperactivity disorder (Schweitzer et al., 2000), and traumatic brain injury (O’Jile et al., 2006), a number of potential normative considerations have emerged as possible confounds. Studies have found increasing age to be negatively related to PASAT performance (Brittain et al., 1991; Roman et al., 1991; Diehr et al., 1998, 2003; Wiens et al., 1997), especially after age 50 (Roman et al., 1991). A general exception to this trend can be found in studies that involve young adults (Wingenfeld et al., 1999). Clinically meaningful sex differences in PASAT performance have not been found (Brittain et al., 1991; Wiens et al., 1997; Wingenfeld et al., 1999; Diehr et al., 2003). While some studies (Brittain et al., 1991; Wiens et al., 1997) have found the effects of education to be marginal, Diehr et al.
(1998) found age, education, and ethnicity to be significant predictors of PASAT performance. A further issue for the PASAT has been the use of number lists and the resulting impact of math ability (Chronicle and MacGregor, 1998; Gronwall and Wrightson, 1981; Hiscock et al., 1998; Royan et al., 2004; Tombaugh et al., 2004). Royan et al. (2004) and Tombaugh et al. (2004) found that scores on a math test accounted for a greater amount of variance for more difficult lists than for an easier list. A potential solution to this issue is to reduce the complexity of the answers (Johnson et al., 1988) and/or slow the inter-stimulus intervals (ISIs). For example, a modified version of the PASAT that uses only the 2.0 s and/or 3.0 s trials has been incorporated into multiple sclerosis (MS) test batteries (Rudick et al., 1997; Rao, 1990; Benedict et al., 2002). In addition to the potential demographic effects, a number of neuropsychologists have criticized the PASAT’s tendency to elevate levels of stress in participants (Aupperle et al., 2002; Deary et al., 1994; Diehr et al., 2003; Holdwick and Wingenfeld, 1999; Hugenholtz et al., 1988; Iverson et al., 2000; Kinsella, 1998; Roman et al., 1991). The negative impact of the PASAT on mood and affect has led neuropsychologists to advise that the PASAT generally should not be administered until the end of a neuropsychological battery (Holdwick and Wingenfeld, 1999; Hugenholtz et al., 1988) and should not be administered to participants who are highly anxious or experiencing post-traumatic stress symptoms (Kinsella, 1998; Roman et al., 1991). The robust stress response elicited by the PASAT has even led to its use as a laboratory inducer of psychological stress and fatigue (Johnson et al., 1997). For example, Lejuez et al. (2003) developed a modified computer version of the PASAT (PASAT-C) that used three presentation rates (i.e., 3.0 s, 1.5 s and 1.0 s) to elicit graded levels of psychological stress.
Stress was measured using (1) subjective ratings on a visual analog scale (0–100); and (2) objective psychophysiological measurement of skin conductance response. Results revealed elevations over baseline for both subjective and objective measures. Given the above, modified versions of the traditional PASAT have emerged. Promise has been found in the use of short forms and the Adjusting-Paced Serial Addition Test (Adjusting-PSAT; Tombaugh, 1999) for reducing discomfort and stress effects by shortening the task. Another option that may reduce psychological stress is the use of virtual gaming environments, which offer an advanced computer interface that allows participants to become immersed within a computer-generated simulation. As technology advances, the potential of virtual gaming environments for assessment and rehabilitation of human cognitive processes is becoming recognized. Virtual gaming environments that are absorbing and interesting have been found to have a valuable effect on mood, health, and recovery (Prins et al., 2011). According to Gamberini et al. (2008), virtual gaming environments provide alternative realities in which users’ stress levels may be reduced as they step back from the real world. In fact, virtual gaming environments have been found to be helpful for psychotherapeutic assessment and treatment of stress reactions due to specific phobias and post-traumatic stress disorder (Parsons and Rizzo, 2008a). Research has shown that virtual gaming environments have the potential to lower cortisol levels (Dandeneau et al., 2007). Since virtual gaming environments allow for precise presentation and control of dynamic perceptual stimuli, they may have promise for providing ecologically valid assessments that combine the veridical control and rigor of laboratory measures with a verisimilitude that reflects real life situations (McGeorge et al., 2001; Renison et al., 2012).
Additionally, enhanced computational power allows for accurate recording of a range of neurobehavioral responses in a perceptual environment that systematically presents complex stimuli. Such simulation technology appears to be distinctively suited for the development of ecologically valid assessments, in which stimuli are presented in a consistent and precise manner. As a result, participants are able to manipulate three-dimensional objects in a virtual environment that proffers a range of potential task demands (Schultheis et al., 2002). A number of virtual environment-based neuropsychological assessments have been developed and validated. Examples of virtual reality-based cognitive assessments include: attention (Law et al., 2006; Parsons et al., 2007); spatial abilities (Beck et al., 2010; Moffat, 2009); episodic memory (Plancher et al., 2012); retrospective memory (Parsons and Rizzo, 2008b); prospective memory (Knight and Titov, 2009); spatial memory (Astur et al., 2004; Goodrich-Hunsaker and Hopkins, 2010); and executive functions (Armstrong et al., 2013; Henry et al., 2012; Jovanovski et al., 2012). The Virtual Reality PASAT (VR-PASAT) joins these projects and focuses on refined analysis of neurocognitive testing, using a virtual gaming environment to assess attention processing in the context of traversing a virtual city. Specifically, the primary aim of the present study was an initial examination of the convergent and divergent validity of the VR-PASAT using the methodology provided by the multitrait–multimethod matrix (Campbell and Fiske, 1959). The use of this matrix approach with multiple neurocognitive measures allows for the simultaneous investigation of convergent validity (i.e., the extent to which different neurocognitive measures of attentional processing are related) and discriminant validity (i.e., the extent to which neurocognitive measures of domains other than attentional processing are unrelated).
The use of the multitrait–multimethod matrix gave us the advantage of being able to examine method variance (i.e., the degree to which scales are correlated because they use the same method of measurement rather than because they share valid trait variance). In our assessment of convergent validity, we hypothesized (1) that performance on the VR-PASAT would correlate significantly with performance on traditional neuropsychological measures of attentional processing. Given the addition of being immersed in a virtual gaming environment, we hypothesized (2) that the correlations would be moderate rather than high; and (3) that participants would endorse less discomfort and stress on the VR-PASAT when compared to the PASAT 200 (see Diehr et al., 1998). In our assessment of discriminant validity, we hypothesized (4) that correlations between the VR-PASAT and traditional neuropsychological measures of domains other than attentional processing would not be statistically significant.

1. Methods

We acquired data on the implementation of a VR-based PASAT (i.e., VR-PASAT) in a college-aged sample that also received a traditional (paper-and-pencil; computerized) neuropsychological battery. We aimed to assess the psychometric properties of the VR and paper-and-pencil measures. Hence, scores were correlated with demographic and other performance measures administered. Standard correlational analyses using a brief demographic survey and standardized (paper-and-pencil; computerized) neurocognitive tests aided our initial assessment of both the concurrent and divergent validity properties of this form of assessment. This study with college students expands considerably upon a previously published initial validation with active duty military (Parsons et al., 2012).

1.1. Participants

The University of Southern California’s Institutional Review Board approved the study.
A total of 50 college-aged subjects participated in the study. Given that some research has shown age and education to be significant predictors of PASAT performance (Diehr et al., 1998), the age range of participants was 19–34 years (age: M = 25.58; SD = 4.25) and the education range of participants was 12–19 years (education: M = 13.82; SD = 1.93). Participants were 75% male. No significant differences were found for age, gender, education, self-reported symptoms of depression, or computer familiarity. After informed consent and basic demographic information were obtained, participants responded to questions designed to measure computer experience and usage activities: how frequently they use a computer (e.g., “How many hours per week do you spend on the computer?”); their perceived level of computer skill on a Likert scale (1 – not at all to 5 – very skilled); their video game usage (e.g., “How many hours per week do you spend playing video games?”); and what type of games they play (e.g., role-playing, strategy, sports, etc.). Participants were also given a medical health history form to assess the presence of any mental or physical disorders that may have hindered their performance. All subjects were free of histories of neurologic disease or injury, psychiatric illness including substance abuse or dependence, or self-reported specific developmental disorders. No participants were excluded for responses given on this form.

1.2. Design and measures

Experimental sessions took place over a 2-hour period. Participants completed the VR-PASAT as part of a neuropsychological battery administered under standard conditions. In addition to a standardized traditional (paper-and-pencil; computerized) neuropsychological battery, participants completed two versions of the PASAT: (a) the PASAT 200, a digitally recorded auditory presentation of the PASAT (Diehr et al., 1998); and (b) the VR-PASAT. Testing occurred in a quiet, climate-controlled environment in a university-owned computer lab.
The order in which the various PASAT tests were administered was counterbalanced across subjects. Participants completed the simulator sickness questionnaire, which includes a pre- and post-VR exposure symptom checklist (Kennedy et al., 1992). Participants were asked to rate their preference for assessment: “From the following options, please place a check mark next to your most preferred mode of assessment: (1) audio recording where you had to add numbers together; (2) the computerized assessments where you watched stimuli on a monitor; or (3) the virtual reality assessment where you added numbers together while traveling through the virtual city.”

1.2.1. Traditional paper-and-pencil measures

Paced Auditory Serial Addition Test (PASAT-200): Each participant completed the PASAT-200 (Diehr et al., 1998) in the following manner: a digital recording presenting auditory stimuli (i.e., numbers) in four sequences of 50 digits was used (maximum possible score for each sequence was 49). The four sequences had presentation rates of 3.0, 2.4, 2.0, and 1.6 s per digit, respectively. Each sequence of 50 digits was unique. The digital recording had approximately a 15-s delay between sequences. The PASAT-200 (Levin et al., 1982; Diehr et al., 1998) was chosen over the original PASAT-244 version (Gronwall, 1977; Gronwall and Sampson, 1974). The PASAT-200 is a revised 200-item version that was introduced by Levin et al. (1982) and extended by Diehr et al. (1998). While both versions follow the typical administration (four series of numbers presented at increasing speed), during the PASAT-244 the same series of 61 pseudo-random numbers is presented four times. Brittain et al. (1991) suggested that this increases practice effects and decreases the usability of the PASAT-244 for repeat testing. Contrariwise, the PASAT-200 uses four series of 50 digits, and each series is unique.
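The fixed administration schedule just described lends itself to a brief illustration. The sketch below is not the authors' software and its names are illustrative; it simply generates digit onset times for the PASAT-200 as specified above: four unique 50-digit sequences at 3.0, 2.4, 2.0, and 1.6 s per digit, with roughly a 15-s delay between sequences (the digit lists themselves are not reproduced here).

```python
# Hypothetical sketch of the PASAT-200 presentation schedule (timings
# taken from the text; sequence contents omitted).

RATES = [3.0, 2.4, 2.0, 1.6]    # seconds per digit, one rate per sequence
DIGITS_PER_SEQUENCE = 50        # 49 sums are therefore possible per sequence
INTER_SEQUENCE_DELAY = 15.0     # approximate delay between sequences (s)

def onsets():
    """Yield (sequence_index, digit_index, onset_time_s) for all 200 digits."""
    t = 0.0
    for s, rate in enumerate(RATES):
        for d in range(DIGITS_PER_SEQUENCE):
            yield s, d, t
            t += rate
        t += INTER_SEQUENCE_DELAY
```

For example, the first digit of the second sequence begins after the fifty 3.0-s presentations plus the 15-s pause, i.e., at 165 s into the recording.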
D-KEFS (Color–Word Interference Test): Each participant was presented with the following stimuli from the D-KEFS Color–Word Interference Test (Delis et al., 1997): (a) a “color naming” card with 50 colored blocks; (b) a “word reading” stimulus card with 50 color words printed in black ink; (c) a “color–word inhibition” card with 50 color names printed in a discrepant ink color; and (d) a “color–word inhibition/switching” card, in which the subject performed the interference task if and only if the words (50 total words) did not have a box drawn around them. We followed the D-KEFS manual’s prescribed approach to administration and used the D-KEFS “Scoring Assistant” software for scoring of the Color–Word Interference Test.

1.2.2. Automated Neuropsychological Assessment Metrics (ANAM4)

The ANAM4 is a library of tests that has recently been normed on over 107,500 participants ranging from 17 to 65 years of age (Vincent et al., 2012). The ANAM4 TBI provides automated measures of fundamental neurocognitive functions including response speed, attention/concentration, immediate and delayed memory, spatial processing, and decision-processing speed and efficiency (ANAM, 2007). We followed the ANAM (2007) manual’s approach to administration and scoring of the ANAM tests. Brief descriptions of each test follow (see also Vincent et al., 2012):

Code Substitution—Learning Phase: This measure assesses learning and immediate memory of symbol-digit pairs. The participant presses designated buttons to indicate whether the pair in question represents a correct or an incorrect mapping. In the “Learning Phase”, the defined pairs are presented on the screen simultaneously with the symbol-digit stimulus in question.

Code Substitution—Delayed Memory: This measure assesses memory for symbol-digit pairs from the set of previously memorized (Code Substitution—Learning Phase) symbol-digit pairs.
The participant presses designated buttons to indicate whether they remember the pair from the “Learning Phase”. The same comparison stimuli used in the Code Substitution—Learning Phase are again presented individually but without the target cue.

Procedural Reaction Time: This task measures attention and reaction time by presenting a number (i.e., 2, 3, 4, or 5) on the monitor display using a large dot matrix. The participant is required to press designated buttons to indicate whether the number presented is a “low” number (2 or 3) or a “high” number (4 or 5).

Mathematical Processing: This measure assesses working memory and the participant’s ability to solve arithmetic problems that involve three single-digit numbers and two operators (e.g., “4 − 1 + 2 =”). The participant presses designated buttons to indicate whether the answer to the problem is “less than 5” or “greater than 5”.

Matching to Sample: This test assesses visuo-spatial abilities. The participant views a pattern produced by eight shaded cells in a 4 × 4 sample grid. The sample is then removed and two comparison patterns are displayed side by side. One grid is identical to the sample grid and the other grid differs by one shaded cell. The participant is required to press a designated button to select the grid that matches the sample.

Stroop Task: This task requires the subject to press a computer key labeled red, green, or blue to identify each color stimulus presented. There are three possible blocks of 50 trials for this test. In the first block, the words RED, GREEN, and BLUE are presented individually in black type on the display. The participant is instructed to read each word aloud and to press a corresponding key for each word (“red” = 1; “green” = 2; and “blue” = 3). In the second block, a series of XXXXs is presented on the display in one of three colors (red XXXXs, green XXXXs, or blue XXXXs).
The participant is instructed to say the color of the XXXXs aloud and to press the corresponding key based on color. In the third block, a series of individual words (“RED,” “GREEN,” or “BLUE”) is presented in a color that does not match the name of the color depicted by the word. The participant is instructed to say the color of the word aloud rather than reading the actual word and to press the response key assigned to that color.

1.2.3. Virtual Reality Paced Auditory Serial Addition Test

The VR-PASAT is a measure of cognitive function that aims specifically to assess auditory information processing speed and flexibility, as well as calculation ability. Two monitors were used: (a) one for displaying the Launcher application, which is used by the examiner administering the test; and (b) another for displaying the participant’s view of the virtual environment in the head-mounted display (HMD; eMagin Z800 with an InterSense InertiaCube 2+ attached for tracking). To increase the potential for sensory immersion, a tactile transducer was built using a three-foot-square platform with six Aura bass shaker speakers (AST-2B-04, 50W Bass Shaker) attached. The tactile transducer was powered by a Sherwood RX-4105 amplifier with 100 watts per channel × 2 in stereo mode. Animation software was utilized for development of the virtual city environment. The environments were rendered in real time using a graphics engine with a fully customizable rendering pipeline, including vertex and pixel shaders, shadows, bump maps, and screen space geometric primitives. A MATLAB scoring program (Wu et al., 2010) and human–computer interface (Clinical Neuropsychology and Simulation Interface; CNS-I) were employed for data acquisition (Wu et al., 2013). The CNS-I also allowed for key events in the environment to be logged and time stamped with millisecond temporal accuracy. The VR-PASAT is presented in a Virtual City.
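The millisecond time-stamping of key events that the CNS-I provides can be illustrated with a minimal sketch. This is a hypothetical stand-in, not the authors' interface, whose code is not reproduced in this text:

```python
import time

class EventLog:
    """Hypothetical sketch of time-stamped event logging of the kind
    described for the CNS-I (class and method names are illustrative)."""

    def __init__(self):
        self.t0 = time.perf_counter()   # reference point for elapsed time
        self.events = []

    def log(self, label):
        # store elapsed time since start in whole milliseconds
        ms = round((time.perf_counter() - self.t0) * 1000)
        self.events.append((ms, label))
        return ms
```

A high-resolution monotonic clock such as `time.perf_counter` is the natural choice here, since wall-clock time can jump and would corrupt millisecond-level event intervals.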
Single digits are first aurally presented every 3 s (3 PASAT) and then every 2 s (2 PASAT). The participant must add each new digit to the one immediately prior to it. The test result is the number of correct sums given (out of 49 possible). The participant follows a guide through the five zones of the city and does not stop at any point during the walkthrough scenario. Navigation through the scenario uses a common USB Logitech game pad device. While the participant is following the guide, s/he hears background chatter and a number presented at varying intervals. Instructions for the VR-PASAT task can be found in Appendix A. Each section of the VR-PASAT has a maximum of 49 correct answers (i.e., 50 digits are presented for each part). Participant responses are recorded via the computer’s sound board and by the examiner. Metrics include: the number of correct responses per presentation rate; number of non-responses; number of errors; number of suppression failures (i.e., adding to the sum of the last addition rather than to the last number heard); number of consecutive correct responses; longest series of correct responses; and reaction time data.

1.3. Data analytic considerations

After completion of each assessment, examiners compiled scores for the number of correct responses for each trial (maximum = 49) and the total number of correct responses summed over all trials (composite score). In addition to these scores, the examiner calculated the average time per correct response (dividing total trial time by the number correct for each trial and averaging the results). Further, each trial duration was calculated by multiplying the duration of the ISI by 60 (e.g., 2.0 s × 60 = 120 s). In addition to the aforementioned PASAT scoring procedures, examiners also looked at percent correct and latency of responding. The calculation of the percentage of correct responses provides an alternate approach that may be helpful for cross-study comparison.
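The serial-addition scoring rule and the per-trial scores described above can be sketched as follows. Function and field names are illustrative, not the authors' CNS-I code, and the trial-duration convention (ISI multiplied by 60) follows the text:

```python
def score_trial(digits, responses, isi):
    """Hypothetical sketch of per-trial PASAT scoring.
    digits: the presented digits; responses: the participant's answer
    after each digit from the second onward (None for a non-response);
    isi: inter-stimulus interval in seconds."""
    # each correct answer is the sum of the two most recent digits
    targets = [a + b for a, b in zip(digits, digits[1:])]
    n_correct = sum(1 for r, t in zip(responses, targets) if r == t)
    trial_seconds = isi * 60    # trial duration as defined in the text
    return {
        "n_correct": n_correct,                               # maximum 49
        "percent_correct": 100.0 * n_correct / len(targets),
        "avg_time_per_correct": trial_seconds / n_correct if n_correct else None,
    }
```

Applying the worked example from the introduction (digits 5, 3, 2 with responses 8 and 5) yields two correct sums, i.e., 100% correct for that two-sum fragment.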
An important development in PASAT administration (included in the VR-PASAT) is computer automation that permits measurement of the speed at which a participant responds as well as recording of the number of correct responses (Tombaugh, 1999; Wingenfeld et al., 1999). Feature extraction and optimal response classification for the VR-PASAT responses were examined using a MATLAB scoring program (i.e., CNS-I) adapted specifically for this study (Wu et al., 2013). This allowed for assessment of performance validity (suboptimal effort) and screening for outliers to establish data integrity: (a) identification of outliers as observations exceeding three standard deviations from the median reaction time; (b) exclusion of observations that are in both the top 1% in speed and simultaneously in the bottom 1% of accuracy; and (c) filtering and pattern recognition assessment for establishing feature sets using support vector machine classifiers.

2. Results

2.1. Comparison of PASAT modalities

Table 1 presents the descriptives for VR-PASAT and PASAT 200 response times and percentage correct for performance during 3.0 s trials, 2.0 s trials, and composite scores. Given that all variables were found to be normally distributed, parametric Pearson r correlations were utilized for relational analyses, with attendant results for comparison between PASAT modalities depicted in Table 2. As indicated, there is a significant relationship among all PASAT scores for the VR-PASAT and PASAT 200 modalities. All participants stated that they preferred the VR-PASAT modality over that of the PASAT 200.

2.2. Comparison of VR-PASAT to traditional neuropsychological measures

Table 3 presents descriptives for the standardized neuropsychological tests. Again, given that all variables were found to be normally distributed, parametric Pearson r correlations were utilized for relational analyses, with attendant results for comparison of the VR-PASAT with traditional neuropsychological measures found in Table 4.
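The data-integrity screens (a) and (b) described above can be sketched as follows. This is a simplified, hypothetical rendering: observations are treated as per-trial pairs of reaction time and correctness, and the "bottom 1% of accuracy" criterion is approximated here by an incorrect response:

```python
import statistics

def screen_responses(rts, correct):
    """Hypothetical sketch of the screens in Section 1.3.
    rts: per-trial reaction times in seconds; correct: 1/0 per trial.
    Returns indices of trials surviving both screens."""
    med = statistics.median(rts)
    sd = statistics.pstdev(rts)
    # threshold for the fastest 1% of observations in this sample
    fast_cut = sorted(rts)[max(0, int(0.01 * len(rts)) - 1)]
    keep = []
    for i, (rt, ok) in enumerate(zip(rts, correct)):
        if abs(rt - med) > 3 * sd:      # (a) > 3 SD from the median RT
            continue
        if rt <= fast_cut and not ok:   # (b) implausibly fast and wrong
            continue
        keep.append(i)
    return keep
```

In practice such cut-offs would be computed per participant and per presentation rate; the sketch only shows the shape of the two exclusion rules.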
There is not a significant relationship between age or education and VR-PASAT scores. Convergent validity was evident via modest correlations noted among comparisons of VR-PASAT performance: (1) paper-and-pencil Stroop (D-KEFS Stroop) Inhibition was correlated with both VR-PASAT 2.0 s and 3.0 s; (2) D-KEFS Stroop Inhibition/Switching scores were correlated with VR-PASAT 3.0 s, but not VR-PASAT 2.0 s; and (3) ANAM computer-automated Stroop scores, Mathematical Processing, and Procedural Reaction Time were correlated with the VR-PASAT at all levels. Divergent validity was evident via a lack of significant correlation among levels (2.0 s and 3.0 s) of the VR-PASAT and measures of learning (Code Substitution—Learning Phase) and memory (Code Substitution—Recall Phase).

Table 1
Descriptives for PASAT modalities (within each condition, values are listed for # Correct, RT, and % across the Mean, SD, SE, and Var columns).
VR-PASAT 3 s: 39.62 4.91 .66 10.44 1.48 .17 1.48 .21 .02 108.89 2.18 .03
VR-PASAT 2 s: 32.96 4.12 .55 10.79 1.64 .18 1.53 .23 .03 116.49 2.69 .03
VR-PASAT composite: 72.58 4.46 .60 18.95 1.33 .16 2.68 .19 .02 359.27 1.77 .02
PASAT 200 3 s: 39.22 4.13 .78 10.05 1.23 .20 1.42 .17 .03 100.99 1.52 .04
PASAT 200 2 s: 33.56 3.36 .67 10.36 1.38 .21 1.47 .20 .03 107.31 1.91 .04
PASAT 200 composite: 72.78 3.71 .73 18.58 1.14 .19 2.63 .16 .03 345.07 1.29 .03
Note: For all analyses, N = 50. SE = Standard Error; SD = Standard Deviation; Var = Variance.

Table 2
Pearson’s r correlation coefficients between VR-PASAT scores and PASAT 200.

                            VR-PASAT 3.0 s       VR-PASAT 2.0 s       VR-PASAT composite
                            RT        %          RT        %          RT        %
PASAT 200 3.0 s   RT        .970**   −.952**    .571**   −.577**    .860**   −.856**
PASAT 200 3.0 s   %        −.936**    .967**   −.579**    .605**   −.845**    .877**
PASAT 200 2.0 s   RT        .594**   −.622**    .911**   −.858**    .852**   −.852**
PASAT 200 2.0 s   %        −.591**    .633**   −.811**    .923**   −.788**    .874**
PASAT 200 comp.   RT        .849**   −.862**    .822**   −.791**    .948**   −.938**
PASAT 200 comp.   %        −.831**    .876**   −.786**    .842**   −.905**    .961**
Note: For all analyses, N = 50. RT = response time; % = percentage; comp. = composite.
* Correlation is significant at the 0.05 level (2-tailed).
** Correlation is significant at the 0.01 level (2-tailed).

Table 3
Descriptives for neuropsychological tests.

                                          Mean       SD       SE          Var
D-KEFS Color–Word Interference Test
  Inhibition                              49.60     12.96     1.83       168.08
  Inhibition/Switching                    60.52     13.77     1.95       189.64
Automated Neuropsychological Assessment Metrics
  Procedural Reaction Time               571.62     73.14    10.34      5349.63
  Code Substitution—Learning            1038.76    215.02    30.41    46,233.90
  Code Substitution—Delayed Memory      1208.82    344.42    48.71   118,627.86
  Mathematical Processing               2778.60    568.69    80.43   323,409.88
  Matching to Sample                    1590.04    418.29    59.16   174,965.71
  Stroop                                 831.19    206.80    29.25    42,767.12
Note: For all analyses, N = 50. SE = Standard Error; SD = Standard Deviation; Var = Variance.

Table 4
Pearson’s r correlation coefficients between VR-PASAT scores and neuropsychological tests.

                                     VR-PASAT 3 s       VR-PASAT 2 s       VR-PASAT composite
                                     RT       %         RT       %         RT       %
Age                                −.158     .151     −.229     .210     −.201     .203
Education                          −.223     .218      .022    −.139     −.101     .041
D-KEFS Inhibition                   .431*   −.421**    .530**  −.492**    .566**  −.512**
D-KEFS Inhibition/Switching         .356*   −.332*     .241    −.266      .367**  −.334*
ANAM Stroop                         .345*   −.347*     .386**  −.412**    .436**  −.426**
ANAM Mathematical Processing        .406**  −.440**    .526**  −.516**    .541**  −.536**
ANAM Procedural Reaction Time       .432**  −.387**    .278    −.362**    .420**  −.419**
ANAM Code Substitution—Learning    −.071     .082      .041    −.074      .003     .003
ANAM Code Substitution—Delayed      .037    −.060      .098    −.183      .082    −.137
ANAM Matching to Sample             .128    −.138      .146    −.136      .184    −.153
Note: For all analyses, N = 50. RT = response time; % = percentage.
* Correlation is significant at the 0.05 level (2-tailed).
** Correlation is significant at the 0.01 level (2-tailed).
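The significance flags in Tables 2 and 4 follow directly from the sample size: with N = 50 (df = 48), a Pearson r is significant two-tailed at the .05 level when |r| exceeds roughly .28, and at the .01 level when |r| exceeds roughly .36. A short sketch of the standard conversion from a critical t value to a critical r (the critical t values below are textbook constants, not taken from the article):

```python
import math

def r_critical(n, t_crit):
    """Two-tailed critical Pearson r for n pairs, given the critical t
    value for df = n - 2 (standard identity r = t / sqrt(t^2 + df))."""
    df = n - 2
    return t_crit / math.sqrt(t_crit ** 2 + df)

# two-tailed critical t for df = 48: 2.011 (alpha = .05), 2.682 (alpha = .01)
R_05 = r_critical(50, 2.011)
R_01 = r_critical(50, 2.682)
```

This is why, for example, an r of .356 is flagged * while .367 reaches ** in Table 4.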
Further, there was no correlation between the VR-PASAT levels (2.0 s and 3.0 s trials) and a measure of visuospatial abilities (Matching to Sample).

2.3. Predictive relations between VR-PASAT and traditional measures of attentional processing

In order to further clarify the relationships between the VR-PASAT and the attentional processing measures, two stepwise multiple regression analyses were conducted, with five attentional processing measures (the D-KEFS Stroop Inhibition and Inhibition/Switching, and the ANAM Stroop, Mathematical Processing, and Procedural Reaction Time) and education serving as predictor variables and the VR-PASAT 3.0 s Trial (first analysis) and the VR-PASAT 2.0 s Trial (second analysis) serving as the criterion variables. The first stepwise analysis yielded a significant regression equation that accounted for 25.5% of the variance in VR-PASAT 3.0 s Trial scores (R2 = 0.255), F(2,47) = 9.37, p < 0.001. ANAM Procedural Reaction Time was the first variable to enter the equation, accounting for approximately 19% of the variance in VR-PASAT 3.0 s Trial scores (R2 = 0.186), followed by Mathematical Processing, which accounted for an additional 10% of the variance (R2 change = 0.099). The second stepwise analysis yielded a significant regression equation that accounted for 41% of the variance in VR-PASAT 2.0 s Trial scores (R2 = 0.408), F(2,47) = 16.16, p < 0.001. D-KEFS Stroop Inhibition was the first variable to enter the equation, accounting for approximately 28% of the variance in VR-PASAT 2.0 s Trial scores (R2 = 0.280), followed by Mathematical Processing, which accounted for an additional 13% of the variance (R2 change = 0.127). The assumptions of multiple regression were met, as indicated by inspection of standardized residual plots and by a Durbin–Watson test that indicated independence of errors.

3. Discussion

This study provides preliminary information on the VR-PASAT's convergent and divergent validity for a college-age population. Comparisons between scores for the traditional approach to PASAT assessment (PASAT 200) and the virtual reality instantiation (VR-PASAT) revealed moderate to large correlations for all comparisons. Further, self-report of "likability" revealed a unanimous preference for the virtual reality-based PASAT. In addition to the direct comparisons of the VR-PASAT with the traditional PASAT modality, convergent and discriminant validity were evaluated using neuropsychological tests chosen a priori, according to the multitrait–multimethod matrix approach. The VR-PASAT was significantly related to measures of attentional processing and executive functioning. Further, as expected, VR-PASAT scores did not correlate with measures of learning, memory, and visuospatial processing drawn from the traditional neuropsychological test battery. Together, these findings suggest that the VR-PASAT assesses a construct similar to those measured by the other attentional processing tests (e.g., Stroop tasks; Spikman et al., 2001; Sherman et al., 1997) in this study. Although the VR-PASAT shares common variance with other attentional processing tests, it appears also to measure a component of attentional processing not measured, or measured to a lesser extent, by more traditional tasks. Such a hypothesis is supported by the rather modest relationships observed between the VR-PASAT and the attentional processing tests utilized in the present study. When we used traditional attentional processing measures and education as predictors of the 3.0 s VR-PASAT Trial, we found that only the ANAM Procedural Reaction Time and Mathematical Processing scores were significant predictors.
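A forward-selection stepwise regression of the kind reported above can be sketched as follows. The data are toy values rather than the study's, and the selection rule (greatest R-squared gain above a fixed threshold) is a simplified stand-in for standard entry/removal F criteria:

```python
def solve(a, b):
    """Solve the square system a x = b by Gaussian elimination with partial pivoting."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

def r_squared(y, predictors):
    """R^2 of an OLS regression of y on the given predictor columns (with intercept)."""
    n = len(y)
    X = [[1.0] + [col[i] for col in predictors] for i in range(n)]
    k = len(X[0])
    xtx = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)] for a in range(k)]
    xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    beta = solve(xtx, xty)
    yhat = [sum(X[i][a] * beta[a] for a in range(k)) for i in range(n)]
    ybar = sum(y) / n
    sse = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    sst = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - sse / sst

def forward_stepwise(y, candidates, min_gain=0.01):
    """Repeatedly add the predictor that most improves R^2, stopping when the
    best available gain falls below min_gain. candidates maps name -> column."""
    chosen, cols, r2 = [], [], 0.0
    while True:
        remaining = [name for name in candidates if name not in chosen]
        if not remaining:
            break
        best = max(remaining, key=lambda nm: r_squared(y, cols + [candidates[nm]]))
        new_r2 = r_squared(y, cols + [candidates[best]])
        if new_r2 - r2 < min_gain:
            break
        chosen.append(best)
        cols.append(candidates[best])
        r2 = new_r2
    return chosen, r2

# Toy data: y depends on x1 strongly and x3 weakly; x2 is irrelevant.
x1 = [1, 2, 3, 4, 5, 6, 7, 8]
x2 = [3, 1, 4, 1, 5, 9, 2, 6]
x3 = [1, -1, 1, -1, 1, -1, 1, -1]
y = [2 * a + c for a, c in zip(x1, x3)]

chosen, r2 = forward_stepwise(y, {"x1": x1, "x2": x2, "x3": x3})
print(chosen)  # ['x1', 'x3']
```

As in the reported analyses, the incremental R-squared at each step shows how much variance the newly entered predictor adds beyond those already in the equation.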
On the more difficult 2.0 s VR-PASAT Trial, D-KEFS Stroop Inhibition was the most predictive, whereas Mathematical Processing accounted for a smaller portion of variance in VR-PASAT 2.0 s Trial scores not tapped by D-KEFS Stroop Inhibition. Some researchers consider the PASAT to be multifactorial because it requires the successful completion of multiple neurocognitive functions. For example, Gronwall and Sampson (1974) stated that the various speeded presentations of the PASAT reflect Broadbent's (1958) "channel capacity," which refers to the rate at which information transmission occurs in the nervous system. According to Cicerone (1997), the PASAT appears to have two components: (1) the attentional processing required to complete the PASAT tasks; and (2) speed of information processing. The different findings for the 3.0 s and 2.0 s VR-PASAT trials may reflect findings from studies that have characterized the PASAT as tapping into different types of cognitive processes. Performance on the VR-PASAT 3.0 s trial appears to reflect findings that performance on the PASAT is affected by (1) math ability (Chronicle and MacGregor, 1998; Gronwall and Wrightson, 1981; Hiscock et al., 1998; Royan et al., 2004; Sherman et al., 1997; Tombaugh et al., 2004) and (2) processing speed (Demaree et al., 1999; Madigan et al., 2000; Ponsford and Kinsella, 1992). Performance on the VR-PASAT 2.0 s trial seems to reflect attentional processing and executive functioning. This comports well with factor analytic studies that have found that the 2.0 s presentation of the PASAT loaded on attention/concentration factors variously referred to as "attention switching" (Bate et al., 2001). These findings are important to keep in mind when considering the VR-PASAT as an exclusive test for assessing attentional processing. In certain populations (e.g., multiple sclerosis patients), the 3.0 s ISI may be preferred over the 2.0 s ISI.
This affirms the choice to employ the 3.0 s and 2.0 s ISIs in the Brief Repeatable Battery of Neuropsychological Tests for multiple sclerosis (Rao, 1990) and the addition of a 3.0 s ISI after omitting the 1.2 s trial in studying the neuropsychological effects of HIV infection (Heaton et al., 1995). Further, the finding that Mathematical Processing was predictive in both the 3.0 s and 2.0 s VR-PASAT trials reflects a need to make judicious decisions relative to the participant's mathematical abilities (Strauss et al., 1994).

The use of multitrait–multimethod analyses allowed us to examine the extent of convergent and divergent validity. Convergent validity was supported in that the coefficients relating the VR-PASAT 3.0 s and 2.0 s trial scores to the traditional neuropsychological measures of attentional processing were larger than the correlations between the VR-PASAT and measures assessing domains other than attentional processing within the same array of measures. Evidence for discriminant validity was indicated in that correlations of different constructs assessed using different measures were lower than the convergent validity coefficients. The presence of significant relations between VR-PASAT scores and attentional processing measures, together with the lack of association between the VR-PASAT and the learning, memory, and visuospatial processing measures, offers further support for the purported sensitivity of the VR-PASAT as an attentional processing measure.

Recent research indicates that nonclinical, undergraduate research participants in a "neuropsychological experiment" may put forth suboptimal effort (An et al., 2012). For the VR-PASAT, we examined performance using scoring algorithms to screen for outliers and assess data integrity (Wu et al., 2013). It is important to note that the data are not free of potential confounds due to poor effort. Screening was done at the time of testing and also during data analysis to eliminate obvious poor effort, but the data may still contain some individuals who provided less than optimal effort. This situation is true for most normative neuropsychological data, and it is particularly true for computerized testing, given the reduced interaction with the examiner.

Our findings should be understood in the context of some limitations. These findings are based on a fairly small sample of college-aged participants. As a necessary next step, the reliability and validity of the test need to be established using a larger sample of participants across the lifespan to ensure that the current findings are not an anomaly due to sample size or age cohort. While self-report of "likability" revealed a unanimous preference for the virtual reality-based PASAT, future studies should incorporate a more fully developed questionnaire that taps into multiple aspects of participant experience and preference for paper-and-pencil measures, two-dimensional computer automations of traditional measures, and three-dimensional virtual gaming environment adaptations. Given that the study sample was made up of college-age students, the novelty of the high-tech atmosphere may account for the preference. Additionally, as indicated previously, the diagnostic utility of the VR-PASAT as an assessment tool must be determined. The ability of the VR-PASAT to accurately classify participants into attentional processing impaired and nonimpaired groups, based on carefully established critical values, must be evaluated. This will involve the generation of specific cut-off points for classifying a positive (attentional processing impairment likely) or negative (attentional processing impairment unlikely) finding.
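Classification against such a cut-off yields a 2x2 table from which the standard diagnostic indices are computed. A minimal sketch; the cut-off value, the score direction (lower score = impaired), and the data are hypothetical, for illustration only:

```python
def diagnostic_indices(scores, truly_impaired, cutoff):
    """Classify each examinee as positive (impairment likely) when the score
    falls below the cut-off, then tally agreement with actual group membership.
    Returns sensitivity, specificity, and the predictive values of a positive
    and a negative test."""
    tp = fp = tn = fn = 0
    for score, impaired in zip(scores, truly_impaired):
        positive = score < cutoff  # hypothetical direction: low score = impaired
        if positive and impaired:
            tp += 1
        elif positive and not impaired:
            fp += 1
        elif not positive and impaired:
            fn += 1
        else:
            tn += 1
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),   # predictive value of a positive test
        "npv": tn / (tn + fn),   # predictive value of a negative test
    }

# Illustrative values: six examinees, cut-off of 20.
indices = diagnostic_indices(
    scores=[10, 25, 30, 15, 8, 33],
    truly_impaired=[True, True, False, False, True, False],
    cutoff=20,
)
print(round(indices["sensitivity"], 2))  # 0.67
```

Sweeping the cut-off and recomputing these indices is the usual route to the "carefully established critical values" the validation step would require.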
The VR-PASAT's prediction of attentional processing impairment must be evaluated by the performance indices of sensitivity, specificity, predictive value of a positive test, and predictive value of a negative test. Even though reliability is considered to be a unique asset of testing in computer-generated VEs, issues of test–retest reliability must be addressed.

Our goal was to conduct an initial pilot study to validate the VR-PASAT through the use of a standard neuropsychological battery for the assessment of nonclinical college-age participants. We believe that this goal was met. We recognize, however, that the current findings are only a first step in the development of this tool. Many more steps are necessary to continue the process of test development and to fully establish the VR-PASAT as a measure that contributes to existing assessment procedures for the diagnosis of attentional processing decline across the lifespan. Although the VR-PASAT must still be fully validated, the current findings provide preliminary data regarding the convergent and divergent validity of the VE as an attentional processing measure. The VR-PASAT was correlated with widely accepted assessment tools. Nevertheless, the fairly small sample size in a college-age cohort requires that the reliability and validity of the VR-PASAT be established using a larger sample of well-matched participants across the lifespan. This will ensure that the current findings are not a sample size or cohort-related anomaly. Finally, the ability of the VR-PASAT to accurately classify participants not involved in the initial validation study must be examined for cross-validation purposes.

Appendix A. VR-PASAT

The Virtual Reality Paced Auditory Serial Addition Test (VR-PASAT) is a measure of cognitive function that specifically assesses auditory information processing speed and flexibility, as well as calculation ability. The VR-PASAT is presented in a Virtual City.
The participant follows a guide through 5 zones of the Virtual City. Navigation through the scenario uses a common USB Logitech gamepad. While the participant is following the guide, s/he will hear background chatter and a number presented at varying intervals. Single digits are presented either every 3.0 s (3 s PASAT) or every 2.0 s (2 s PASAT), and the participant must add each new digit to the one immediately prior to it. The test result is the number of correct sums given (out of 49 possible).

ADMINISTRATION

Examiners are to verify that they have the correct Record Form (Form A) before they start reading the instructions for the 3 s Practice Trial to the participant.

PASAT-3 Practice Trials

For Part 1 (stimuli every 3 s), the examiner states, "In the following scenario, as you follow the sergeant through the city, you are going to hear radio chatter and a series of single digit numbers that will be presented at the rate of one every 3 seconds. Listen for the first two numbers, add them up, and tell me your answer. When you hear the next number, add it to the one you heard on the radio right before it. Continue to add the next number to each preceding one. Remember, you are not being asked to give me a running total, but rather the sum of the last two numbers that were heard from the radio." The examiner then gives the following example: "For example, if the first two numbers were '5' and '7,' you would say '12.' If the next number were '3,' you would say '10.' Then if the next number were '2,' you would say '5.'" If the participant is having difficulty understanding these instructions, the examiner writes 5, 7, 3, and 2 on a sheet of paper and repeats the instructions, demonstrating how the task is done. The examiner then says, "This is a challenging task. If you lose your place, just jump right back in – listen for two numbers in a row and add them up and keep going. There are some practice items.
Let's try those first." The examiner then says the sample items, stopping after the last practice item. The examiner repeats the practice items, if necessary, until the participant understands the instructions (up to three times). The examiner always administers at least one practice trial before administering the actual test. If the participant begins to give a running total, the examiner stops the practice immediately and explains the task again, emphasizing that he/she is not to give a running total. The examiner then starts the practice items again from the beginning. If the participant begins adding each number to the number two previous to it, the examiner stops the practice immediately, explains the correct way to do the task, and starts the practice items from the beginning. If the participant merely makes a math error, the computer continues with the practice items. After two consecutive 'no responses,' the examiner prompts him/her to resume by saying, "Jump back in with the next two numbers you hear." The examiner administers the practice sequence a maximum of three times. The examiner records answers in the space provided on the back of the PASAT Record Form.

PASAT-3

Once it is clear that the subject possesses sufficient understanding of the task, the examiner begins Part 1. Before starting Part 1, the examiner reminds him/her: "Remember, if you get lost, just jump back in because I can't stop the test once it has begun." The examiner discourages talking and oral calculations during the test; only the participant's answers should be spoken out loud. The participant may need prompting to continue the test if she/he gets lost. After five consecutive 'no responses,' the examiner redirects the participant quickly by saying, "Jump back in," but does not stop the scenario.
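The serial-addition rule and scoring described in this appendix reduce to a simple computation: each correct response is the sum of the two most recently heard digits, so 50 digits yield 49 scorable sums. A minimal sketch, reusing the worked example (5, 7, 3, 2) from the instructions above:

```python
def expected_answers(digits):
    """Correct PASAT responses: the sum of each consecutive pair of digits."""
    return [a + b for a, b in zip(digits, digits[1:])]

def total_correct(digits, responses):
    """Count correct sums; None stands for a 'no response' (a dash on the form)."""
    return sum(1 for want, got in zip(expected_answers(digits), responses)
               if got == want)

# The worked example from the instructions: 5, 7, 3, 2 -> 12, 10, 5
print(expected_answers([5, 7, 3, 2]))  # [12, 10, 5]
```

Note that a list of 50 presented digits produces exactly 49 expected sums, matching the maximum score of 49 per section.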
PASAT-2

Before Part 2 (stimuli every 2 s), the examiner says, "Okay, there is a second part to this exercise, identical to the first, except that the numbers will come a little faster, one every 2 seconds." The examiner proceeds directly with the 2 s administration.

Completing the PASAT Record Form

The examiner places a check next to all correct answers and writes in any incorrect responses in the space provided. The examiner places a dash when no response was given. If the subject corrects him/herself after giving a response, the examiner counts the amended answer as the response. The amended response is the one used in determining the total correct, regardless of whether it was correct or incorrect. The examiner slashes through the old response and writes in 'SC' with a circle around it to indicate that the participant self-corrected. Each section of the VR-PASAT has a maximum of 49 correct answers (i.e., 50 digits are presented for each part). The examiner counts the total number correct (the number of checked answers) for the VR-PASAT-3 and records it on the VR-PASAT Record Form, then repeats the same scoring procedure for the VR-PASAT-2.

References

An KY, Zakzanis KK, Joordens S. Conducting research with non-clinical healthy undergraduates: does effort play a role in neuropsychological test performance? Archives of Clinical Neuropsychology 2012;27:849–57.
Armstrong C, Reger G, Edwards J, Rizzo A, Courtney C, Parsons TD. Validity of the Virtual Reality Stroop Task (VRST) in active duty military. Journal of Clinical and Experimental Neuropsychology 2013;35:113–23.
Astur RS, Tropp J, Sava S, Constable RT, Markus EJ. Sex differences and correlations in a virtual Morris water task, a virtual radial arm maze, and mental rotation. Behavioral Brain Research 2004;151:103–15.
Aupperle RL, Beatty WW, Shelton FdeNAP, Gontkovsky ST. Three screening batteries to detect cognitive impairment in multiple sclerosis. Multiple Sclerosis 2002;8:382–9.
Automated Neuropsychological Assessment Metrics (Version 4) [Computer software]. Norman, OK: C-SHOP; 2007.
Bate AJ, Mathias JL, Crawford JR. Performance on the Test of Everyday Attention and standard tests of attention following severe traumatic brain injury. The Clinical Neuropsychologist 2001;15:405–22.
Beck L, Wolter M, Mungard NF, Vohn R, Staedtgen M, Kuhlen T, et al. Evaluation of spatial processing in virtual reality using functional magnetic resonance imaging (fMRI). Cyberpsychology, Behavior, and Social Networking 2010;13:211–5.
Benedict RHB, Fischer JS, Archibald CJ, Arnett PA, Beatty WW, Bobholz JB, et al. Minimal neuropsychological assessment of MS patients: a consensus approach. The Clinical Neuropsychologist 2002;16:381–97.
Brittain JL, La Marche JA, Reeder KP, Roth DL, Boll TJ. Effects of age and IQ on Paced Auditory Serial Addition Task (PASAT) performance. The Clinical Neuropsychologist 1991;5:163–75.
Broadbent DE. Perception and communication. London: Pergamon Press; 1958.
Campbell DT, Fiske DW. Convergent and discriminant validation by the multitrait–multimethod matrix. Psychological Bulletin 1959;56:81–100.
Chronicle EP, MacGregor NA. Are PASAT scores related to mathematical ability? Neuropsychological Rehabilitation 1998;8:273–82.
Cicerone KD. Clinical sensitivity of four measures of attention to mild traumatic brain injury. The Clinical Neuropsychologist 1997;11:266–72.
Dandeneau SD, Baldwin MW, Baccus JR, Sakellaropoulo M, Pruessner JC. Cutting stress off at the pass: reducing vigilance and responsiveness to social threat by manipulating attention. Journal of Personality and Social Psychology 2007;93:651–66.
Deary IJ, Ebmeier KP, MacLeod KM, Dougall N, Hepburn DA, Frier BM, et al. PASAT performance and the pattern of uptake of 99mTc-exametazime in brain estimated with single photon emission tomography. Biological Psychology 1994;38:1–18.
Delis DC, Kaplan E, Kramer JH, Ober BA. Delis–Kaplan Executive Function Scale. San Antonio, TX: The Psychological Corporation; 1997.
Demaree HA, DeLuca J, Gaudino EA, Diamond BJ. Speed of information processing as a key deficit in multiple sclerosis: implication for rehabilitation. Journal of Neurology, Neurosurgery and Psychiatry 1999;67:661–3.
Diehr MC, Heaton RK, Miller W, Grant I. The Paced Auditory Serial Addition Task (PASAT): norms for age, education, and ethnicity. Assessment 1998;5:375–87.
Diehr MC, Cherner M, Wolfson TJ, Miller S, Grant I, Heaton RK. The 50- and 100-item short forms of the Paced Auditory Serial Addition Task (PASAT): demographically corrected norms and comparisons with the full PASAT in normal and clinical samples. Journal of Clinical and Experimental Neuropsychology 2003;25:571–85.
Dujardin K, Deneve C, Ronval M, Krystkowiak P, Humez C, Destee A, et al. Is the Paced Auditory Serial Addition Test (PASAT) a valid means of assessing executive function in Parkinson's disease? Cortex 2007;43:601–6.
Gamberini L, Barresi G, Maier A, Scarpetta F. A game a day keeps the doctor away: a short review of computer games in mental healthcare. Journal of Cybertherapy and Rehabilitation 2008;1:127–46.
Goodrich-Hunsaker NJ, Hopkins RO. Spatial memory deficits in a virtual radial arm maze in amnesic participants with hippocampal damage. Behavioral Neuroscience 2010;124:405–13.
Gronwall D, Sampson H. The psychological effects of concussion. Auckland, New Zealand: Auckland University Press; 1974.
Gronwall D. Paced Auditory Serial-Addition Task: a measure of recovery from concussion. Perceptual and Motor Skills 1977;44:367–73.
Gronwall D, Wrightson P. Memory and information processing capacity after closed head injury. Journal of Neurology, Neurosurgery, and Psychiatry 1981;44:889–95.
Heaton RK, Grant I, Butters N, White DA, Kirson D, Atkinson JH. Neuropsychology of HIV infection at different disease stages. Journal of the International Neuropsychological Society 1995;1:231–51.
Henry M, Joyal CC, Nolin P. Development and initial assessment of a new paradigm for assessing cognitive and motor inhibition: the bimodal virtual-reality Stroop. Journal of Neuroscience Methods 2012;210:125–31.
Hiscock M, Caroselli JS, Kimball LE. Paced serial addition: modality-specific and arithmetic-specific factors. Journal of Clinical and Experimental Neuropsychology 1998;20:463–72.
Holdwick DJ, Wingenfeld SA. The subjective experience of PASAT testing: does the PASAT induce negative mood? Archives of Clinical Neuropsychology 1999;14:273–84.
Hugenholtz H, Stuss DT, Stethem LL, Richard MT. How long does it take to recover from a mild concussion? Neurosurgery 1988;22:853–8.
Iverson GL, Lovell MR, Smith SS. Does brief loss of consciousness affect cognitive functioning after mild head injury? Archives of Clinical Neuropsychology 2000;15:643–8.
Johnson DA, Roethig-Johnston K, Middleton J. Development and evaluation of an attentional test for head injured children. Information processing capacity in a normal sample. Journal of Child Psychology and Psychiatry 1988;29:199–208.
Johnson SK, Lange G, DeLuca J, Korn LR, Natelson BH. The effects of fatigue on neuropsychological performance in patients with chronic fatigue syndrome, multiple sclerosis, and depression. Applied Neuropsychology 1997;4:145–53.
Jovanovski D, Zakzanis KK, Campbell Z, Erb S, Nussbaum D. Development of a novel, ecologically oriented virtual reality measure of executive function: the Multitasking in the City Test. Applied Neuropsychology 2012;19:171–82.
Kennedy RS, Fowlkes JE, Berbaum KS, Lilienthal MG. Use of a motion sickness history questionnaire for prediction of simulator sickness. Aviation, Space, and Environmental Medicine 1992;63:588–93.
Kinsella GJ. Assessment of attention following traumatic brain injury: a review. Neuropsychological Rehabilitation 1998;8:351–75.
Knight RG, Titov N. Use of virtual reality tasks to assess prospective memory: applicability and evidence. Brain Impairment 2009;10:3–13.
Kujala P, Portin R, Revonsuo A, Ruutiainen J. Attention-related performance in two cognitively divergent subgroups of patients with multiple sclerosis. Journal of Neurology, Neurosurgery, and Psychiatry 1995;59:77–82.
Law AS, Logie RH, Pearson DG. The impact of secondary tasks on multitasking in a virtual environment. Acta Psychologica 2006;122:27–44.
Lejuez CW, Kahler CW, Brown RA. A modified computer version of the Paced Auditory Serial Addition Task (PASAT) as a laboratory-based stressor. Behavior Therapist 2003;26:290–3.
Levin HS, Benton AL, Grossman RG. Neurobehavioral consequences of closed head injury. The Lancet 1982;2:605–9.
Madigan NK, DeLuca J, Diamond BJ, Tramontano G, Averill A. Speed of information processing in traumatic brain injury: modality specific factors. Journal of Head Trauma Rehabilitation 2000;15:943–56.
McGeorge P, Phillips L, Crawford JR, Garden SE, Della Sala S, Milne AB, et al. Using virtual environments in the assessment of executive dysfunction. Presence 2001;10:375–83.
Moffat SD. Aging and spatial navigation: what do we know and where do we go? Neuropsychology Review 2009;19:478–89.
O'Jile JR, Ryan LM, Betz B. Information processing following mild head injury. Archives of Clinical Neuropsychology 2006;21:293–6.
Parsons TD, Bowerly T, Buckwalter JG, Rizzo AA. A controlled clinical comparison of attention performance in children with ADHD in a virtual reality classroom compared to standard neuropsychological methods. Child Neuropsychology 2007;13:363–81.
Parsons TD, Rizzo AA. Affective outcomes of virtual reality exposure therapy for anxiety and specific phobias: a meta-analysis. Journal of Behavior Therapy and Experimental Psychiatry 2008a;39:250–61.
Parsons TD, Rizzo AA. Initial validation of a virtual environment for assessment of memory functioning: Virtual Reality Cognitive Performance Assessment Test. Cyberpsychology and Behavior 2008b;11:17–25.
Parsons TD, Courtney C, Rizzo AA, Edwards J, Reger G. Virtual Reality Paced Serial Addition Tests for neuropsychological assessment of a military cohort. Studies in Health Technology and Informatics 2012;173:331–7.
Plancher G, Tirard A, Gyselinck V, Nicolas S, Piolino P. Using virtual reality to characterize episodic memory profiles in amnestic mild cognitive impairment and Alzheimer's disease: influence of active/passive encoding. Neuropsychologia 2012;50:592–602.
Ponsford J, Kinsella G. Attentional deficits following closed-head injury. Journal of Clinical and Experimental Neuropsychology 1992;14:822–38.
Prevey ML, Delaney RC, Cramer JA, Mattson RH. Complex partial and secondarily generalized seizure patients: cognitive functioning prior to treatment with antiepileptic medication: VA Epilepsy Cooperative Study 264 Group. Epilepsy Research 1998;30:1–9.
Prins PJ, Dovis S, Ponsioen A, Ten-Brink E, Van der Oord S. Does computerized working memory training with game elements enhance motivation and training efficacy in children with ADHD? Cyberpsychology, Behavior, and Social Networking 2011;14:115–22.
Rao SM, Cognitive Function Study Group of the National Multiple Sclerosis Society. A manual for the Brief Repeatable Battery of Neuropsychological Tests in Multiple Sclerosis. New York: National Multiple Sclerosis Society; 1990.
Renison B, Ponsford J, Testa R, Richardson B, Brownfield K. The ecological and construct validity of a newly developed measure of executive function: the Virtual Library Task. Journal of the International Neuropsychological Society 2012;18:440–50.
Roman DD, Edwall GE, Buchanan RJ, Patton JH. Extended norms for the Paced Auditory Serial Addition Task. The Clinical Neuropsychologist 1991;5:33–40.
Royan J, Tombaugh TN, Rees L, Francis M. The Adjusting-Paced Serial Addition Test (Adjusting-PSAT): thresholds for speed of information processing as a function of stimulus modality and problem complexity. Archives of Clinical Neuropsychology 2004;19:131–43.
Rudick R, Antel J, Confavreux C, Cutter G, Ellison G, Fischer J. Recommendations from the National Multiple Sclerosis Society Clinical Outcomes Assessment Task Force. Annals of Neurology 1997;42:379–82.
Sampson H. Serial addition as a function of stimulus duration and pacing. Canadian Journal of Psychology 1958;12:179–83.
Sampson H, MacNeilage PF. Temporal integration and the serial addition paradigm. Australian Journal of Psychology 1960;12:70–88.
Schultheis MT, Himelstein J, Rizzo AA. Virtual reality and neuropsychology: upgrading the current tools. The Journal of Head Trauma Rehabilitation 2002;17:378–94.
Schweitzer JB, Faber TL, Grafton ST, Tune LE, Hoffman JM, Kilts CD. Alterations in the functional anatomy of working memory in adult attention deficit hyperactivity disorder. American Journal of Psychiatry 2000;157:278–80.
Sherman EMS, Strauss E, Spellacy F. Validity of the Paced Auditory Serial Addition Test (PASAT) in adults referred for neuropsychological assessment after head injury. The Clinical Neuropsychologist 1997;11:34–45.
Shucard JL, Parrish J, Shucard DW, McCabe DC, Benedict RHB, Ambrus J. Working memory and processing speed deficits in systemic lupus erythematosus as measured by the Paced Auditory Serial Addition Test. Journal of the International Neuropsychological Society 2004;10:35–45.
Spikman JM, Kiers HAL, Deelman BG, van Zomeren AH. Construct validity of concepts of attention in healthy controls and patients with CHI. Brain and Cognition 2001;47:446–60.
Strauss E, Spellacy F, Hunter M, Berry T. Assessing believable deficits on measures of attention and information processing capacity. Archives of Clinical Neuropsychology 1994;9:483–90.
Tombaugh TN. Administrative manual for The Adjusting-Paced Serial Addition Test (Adjusting-PSAT). Ottawa, Ontario: Carleton University; 1999.
Tombaugh TN, Rees L, Baird B, Kost J. The effects of list difficulty and modality of presentation on a computerized version of the Paced Serial Addition Test (PSAT). Journal of Clinical and Experimental Neuropsychology 2004;26:257–65.
Tombaugh TN. A comprehensive review of the Paced Auditory Serial Addition Test (PASAT). Archives of Clinical Neuropsychology 2006;21:53–76.
Vincent AS, Roebuck-Spencer T, Gilliland K, Schlegal R. Automated Neuropsychological Assessment Metrics (v4) traumatic brain injury battery: military normative data. Military Medicine 2012;177:256–69.
Wiens AN, Fuller KH, Crossen JR. Paced Auditory Serial Addition Test: adult norms and moderator variables. Journal of Clinical and Experimental Neuropsychology 1997;19:473–83.
Wingenfeld SA, Holdwick DJ, Davis JL, Hunter BB. Normative data on computerized Paced Auditory Serial Addition Task performance. The Clinical Neuropsychologist 1999;13:268–73.
Wu D, Courtney C, Lance B, Narayanan SS, Dawson M, Oie K, Parsons TD. Optimal arousal identification and classification for affective computing: virtual reality Stroop task. IEEE Transactions on Affective Computing 2010;1:109–18.
Wu D, Lance B, Parsons TD. Collaborative filtering for brain–computer interaction using transfer learning and active class selection. PLoS ONE 2013:1–18.