In this paper we describe a study conducted to evaluate the effectiveness of task division on collaboration between pairs of users in a synchronous groupware simulation. The study involved measuring and analyzing over 50 performance criteria on 16 team pairs. Statistically significant results show that task division increased team interaction and improved group performance in subsequent unrelated group activities when compared to a control version of the software. The simulation, CollabBillboard, was developed as part of Rensselaer Polytechnic Institute's technology-enhanced collaborative classroom.

Simulations involving multiple participants have become commonplace with the ubiquity of networked computers. Collaborative simulations have found uses in industry, including defense, aeronautics, and entertainment. The U.S. Department of Defense has been actively developing networked simulators over the past decade. The result of this effort is the Distributed Interactive Simulation (DIS), a set of protocols that allow network-connected simulators to participate in synchronous combat operations using a shared electronic terrain [1]. Advantages of DIS over single-user simulators include: group instead of individual training, support for user participation anywhere on earth, time-sensitive challenges that demand immediate responses from the users, creation of new tasks based on the actions of the users, and rich interaction possibilities due to the large number of entities (user and computer controlled) simultaneously supported [2].

In the entertainment arena, multiuser simulations enhance the recreational experience because they allow cooperation and competition with live users. Presumably a live user will offer more interesting challenges than a computer-generated opponent. A synchronous simulated automotive race is much more interesting if the car being challenged belongs to a friend down the hall (or in the next state!) [3].

Intellectual teamwork, in which student teams share knowledge and collaborate on projects and assignments, has emerged as a major instructional technique at many colleges and universities [17]. To foster intellectual teamwork, Rensselaer Polytechnic Institute has initiated "studio format" classes that stress technological support for instruction and student collaboration. In addition to several technology-enhanced classrooms with workstations and advanced projection systems, a unique, conference-style facility known as the "classroom-in-the-round" features a novel circular room design coupled with special furniture, hardware, software, and courseware intended to enhance collaboration [14,15,16].

CollabBillboard was developed as a component of Rensselaer's "classroom-in-the-round" next-generation learning platform. The goal of CollabBillboard is to facilitate intellectual teamwork using simulation software.
A major problem with multiuser gaming simulations is that they do not promote collaboration; on the contrary, they are (presumably unintentionally) designed to discourage it. Most of these games offer multiuser capability, but have a single-user design. A single user, without help from anyone, can achieve the game objective. Each user has a set of physical and on-screen controls that provide complete and self-sufficient control of an entity or group of entities. The game objectives promote individual over group success.

Typical performance measurements (even in team environments) include who has the most items, who has the best items, who defeated the most opponents, and who completes the level first. Communication is typically a text-based chat system. Since the simulation runs continuously, users are penalized for taking time out to communicate with others. The central theme running through these games' design is one of independence, self-sufficiency, and self-centeredness.
CollabBillboard is a synchronous, face-to-face, two-player simulation that attempts to address some shortcomings of previous multiuser simulations through explicit task division and group evaluation. Explicit task division requires each user to take on a specific role during the simulation. These roles are complementary, but non-overlapping. Both users must cooperate within their roles in order to achieve the simulation goal. Group evaluation uses performance criteria that are team, rather than individual, based.
A study was conducted to evaluate the effect
CollabBillboard had on collaboration between pairs of
users. The study used two versions of the program, one with and another without explicit task division. Time to completion, percent of time spent conversing, and accurate billboard placement were some performance criteria measured. Subjects were then given a paper and pencil collaborative exercise. The results of this exercise were compared against a solution key. Finally, the subjects were given a survey to complete that allowed them to express their subjective feelings about the simulation, and about collaborative experiences during the session.
Analysis of measurements taken during these activities provided statistically significant indication that subjects using the task division simulation experienced increased team interaction and improved group performance in subsequent unrelated group activities when compared to a control version of the software.
The rest of this paper is organized as follows: Architecture discusses the architecture (software, hardware, physical environment) of the CollabBillBoard application. Method outlines the experiments performed using explicit task division. Results presents the measurements taken during the experiment and their analysis. A final section completes the paper with a discussion of limitations, related research, and future work.
The CollabBillboard application was designed as a face-to-face, two-player game, with players sitting at separate network-connected computers. This section discusses the software and hardware architecture of the application as well as the design of the room used by the players.
2.1 Software
CollabBillBoard was designed for network-connected personal computers running Microsoft Windows.
Microsoft’s Visual C++ (VC++) was the development environment used. Important software libraries used included: Microsoft Foundation Classes (MFC) for GUI support, DirectX for high performance graphics, and
Winsock for communication.
Applications developed with VC++ and MFC have a structure oriented around the user interface. Each dialog is associated with a C++ class. Events generated by widgets in the dialog are converted to messages that invoke dialog class methods.
To enable multiuser capabilities, we developed a collaborative software design pattern that pairs a shadow socket class with each user interface dialog. The socket shadow contained methods for communication setup/takedown, sending special events, and receiving special events. Send event methods reported local events and data that were of interest to remote users. The receive event method converted remote user messages to a local event and data format.
This design pattern became the core of a scalable peer-to-peer collaborative framework we developed for enhancing CollabBillboard and building future collaborative simulations using Microsoft technology.
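A minimal sketch of the shadow socket pattern follows. The names (Event, SocketShadow) are hypothetical, not from the original code, and the Winsock connection is replaced by a simple callback so the sketch stays self-contained and runnable:

```cpp
#include <cassert>
#include <functional>
#include <sstream>
#include <string>

// Hypothetical sketch of the shadow socket pattern. In the original
// design a Winsock-based shadow was paired with each MFC dialog class;
// here the network "wire" is a callback so the example is self-contained.

struct Event {
    std::string name;   // e.g. "GRAB_PIECE"
    int x = 0, y = 0;   // event data (screen coordinates)
};

class SocketShadow {
public:
    using Wire = std::function<void(const std::string&)>;

    void connect(Wire wire) { wire_ = std::move(wire); }  // communication setup
    void disconnect() { wire_ = nullptr; }                // takedown

    // Send event method: report a local event of interest to the remote user.
    void sendEvent(const Event& e) {
        std::ostringstream os;
        os << e.name << ' ' << e.x << ' ' << e.y;
        if (wire_) wire_(os.str());
    }

    // Receive event method: convert a remote message to local event format.
    static Event receiveEvent(const std::string& msg) {
        std::istringstream is(msg);
        Event e;
        is >> e.name >> e.x >> e.y;
        return e;
    }

private:
    Wire wire_;
};
```

In the full application, each dialog would own one shadow: sendEvent would be called from the dialog's message handlers, and incoming socket data would be routed through receiveEvent back into the dialog's event mechanism.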
2.2 Hardware
CollabBillboard was designed to run on two network-connected personal computers running Microsoft Windows.
The application required one machine per user. The complete simulation state was replicated on each machine.
Although DirectX would have theoretically allowed the simulation to run on any system, some form of graphics accelerator was required for acceptable performance.
2.3 Room Design
CollabBillboard’s network connectivity allowed participants to be at different physical locations. However, the game was designed with activities that require high bandwidth communication between participants. For this reason, a face-to-face environment was used.
Figure 1: Sketch of Room Design
A long desk with monitors at opposite ends was set up in an office. Users sat on different sides of the desk near a monitor. The monitor faced the user, but was set up so that it could not be seen by the user on the opposite side of the table. The monitors were angled so that a three-foot gap in the table remained between the two users. This allowed line-of-sight viewing for non-verbal communication (see Figure 1).
This section discusses the experimental method used in the study and consists of five subsections. Participants describes the subjects in the study. Task Overview lists the activities performed by the participants. Team Performance covers how success was measured. Variables describes the personal information collected about each participant. Data Analysis and Statistical Power discusses how results were collected and prepared for analysis.
3.1 Participants
Research participants were 40 students at a medium-sized northeastern engineering university. Subjects were predominantly male graduate students. All were volunteers solicited via e-mail or a brief presentation in several advanced computer science courses. A detailed breakdown of participant characteristics can be found in Variables.
3.2 Task Overview
Research participants worked on one of two versions of the
CollabBillboard simulation. One version of the simulation used explicit task division, while the other (the control) did not. Participants were grouped into pairs, with each pair using one version of CollabBillboard. When the simulation was completed, participants worked through a classic paper-and-pencil collaborative exercise called Lost At Sea. At the end of the experiment, the pair was asked to complete a survey about their experiences.
3.2.1 CollabBillBoard Application
Pairs of participants were scheduled for a one hour session.
When they arrived, they were introduced to each other, the tasks to be performed were explained, and they were asked to sign a consent form. A tape recorder was started to record the audio exchange during the CollabBillBoard portion of the session. Participants started the
CollabBillBoard application on their respective machines.
When network communication was established, one of the users pressed the OK button on the initial dialog window, and both users were presented with a task menu.
The session moderator explained that the participants were part of a fictitious advertising company. The company wanted to place a billboard in the city of Boston. Two major tasks were needed to complete the simulation: select a site in the city to place the billboard, then assemble the scrambled pieces of the billboard on the site's billboard frame.
Figure 2: CollabBillBoard Site Selection Task

The first task, Site Selection, brought up a shared map of the city of Boston, Massachusetts (see Figure 2). Telepointers were used to indicate remote user focus on the map. As users moved over possible sites, an informational window appeared describing the site. When a site was selected, it was highlighted. These actions appeared on both participants' maps, with separate colors indicating a local or remote action. Once participants selected a site, they proceeded to the second task.

The second task, BillBoard Assembly, involved assembling randomly placed pieces of the billboard in the correct order and properly centering them on a billboard frame. At this point, the task division and control versions of the program diverged. The control version brought up a shared billboard frame to which users could add billboard pieces. Each new piece appeared simultaneously in the same random location on both participants' screens. Participants could grab and move any piece of the billboard at any time.

The frame contained a green box representing the local user's position in the frame. A red box represented the remote user's position. To move a billboard piece, a user placed the green box on a billboard piece, selected the grab button, then used the directional arrows. A zoom window was included for fine-grained piece movement (see Figure 3).

Figure 3: Control Billboard Assembly Dialog
Figure 4: Task Division View Billboard Dialog
The Task Division version of the program split the billboard piece assembly into separate tasks: View
Placement and Place Billboard. The View Placement task presented the user with a zoomed out view of the billboard
frame. This user could see all billboard pieces and a green box which represented the Place Billboard user’s view.
The View Placement user could add new pieces to the frame, and move the other user’s view. However, the View user could not move a billboard piece even if the Place user was currently grabbing one (see Figure 4).
The Place Billboard task presented the user with a zoomed in section of the billboard frame. The Place Billboard user could navigate around the billboard frame using the dialog’s arrow widget. The user could also grab, move, and drop billboard pieces (see Figure 5).
Figure 5: Task Division Place Billboard Dialog
Complications arise with task division because neither user can complete the simulation goal independently. The Place
Billboard subtask has a view that represents a small portion of the billboard frame (approximately ¼ of a billboard piece). This view can be very disorienting. The View
Placement task has a good view of the frame, but does not allow the user to move billboard pieces. As a result, both users must depend on each other to complete the billboard assembly.
At first, this dependence may appear awkward or contrived.
However, it was modeled after the real world collaboration that occurs when hanging a picture in a room. When hanging a picture one person manipulates the picture up close, while another stands back to view the picture placement in the context of the room.
Once the billboard had been assembled in either the control or task division version of the program, the Done task was selected and the team received a score. The score was based on four factors: choice of billboard site, properly assembled billboard, properly centered billboard, and time to completion. A brief discussion about the score with the moderator then ensued. At this point the tape recorder was turned off.
3.2.2 Lost at Sea Exercise
The second stage of the session involved a classic paper and pencil collaborative exercise called Lost At Sea [12].
Participants were told to read a brief scenario in which they imagined themselves on a sinking ship. They had to rank 15 items in the order in which the items would be taken, because the ship might sink at any moment. After the task was completed, the moderator discussed the US Merchant Marine's ranking of the same items. The purpose of this test was to look for differences in the quality of subsequent collaborative activity between the task division and control simulation groups.
3.2.3 Exit Survey
The final stage of the session was a survey. The survey covered three areas: subjective feelings about
CollabBillboard, subjective feelings about collaboration during the session, and personal information. When the survey was completed, the participants were debriefed by the moderator. Like the Lost At Sea exercise, the purpose of this survey was to look for differences between the task division and control groups.
3.3 Team Performance
Team performance was determined using a variety of measurements depending on the stage of the session. For the CollabBillboard stage five team measurements were used: choice of billboard site, properly assembled billboard, properly centered billboard, time to completion, and amount of conversation as a percentage of task completion time.
For the Lost at Sea stage, 17 team measurements were made. The first 15 were absolute values of the difference between the correct ranking for each item and the team’s ranking of the item. Next was a cumulative sum of these deltas. Finally, time to complete the Lost at Sea stage was measured.
For the Exit Survey stage, 31 questions were asked to subjectively assess CollabBillboard and collaborative experiences during the session. Most of these questions used a rating scale from one to five, with lower numbers representing a more positive feeling about the question and higher numbers indicating a negative feeling. A “no opinion” option was available for each question.
3.4 Variables
The survey included a personal information section that gathered data about each participant. This information was used to determine whether measurements should be discounted because of unique characteristics of a team or individual. Questions covered areas including computer and computer game usage, class in school, full-time/part-time status, and familiarity with partner.
3.5 Data Analysis and Statistical Power
Fifty-three measurements were taken of each team that participated in the study. The results were separated into two groups: task division and control. Team and individual means for each measurement were calculated for both groups.
Sample sizes varied by measurement for various reasons
(see Table 1). For example, early problems with audio
equipment reduced the number of conversations that could
be recorded. Some subjects were already familiar with the
Lost at Sea scenario and had to be given a different, but comparable exercise. Finally, some measurements applied to the entire team, while others applied to individuals.
Measurement        Task Division Sample Size (N_td)   Control Sample Size (N_c)
CollabBillboard                  10                               6
Conversation                      5                               4
Lost at Sea                       7                               6
Exit Survey                      20                              12

Table 1: Sample Sizes for Study
A t-test was performed on the sample means to determine if differences between the task division and control groups were statistically significant. The t-test is used when comparing the means of two population groups measured against the same characteristic, but with different sample sizes. The procedure is relatively straightforward. First, each group's sample mean and standard deviation, and the difference between the group means, are calculated. Then the standard deviation of the mean difference between groups is computed. This is used to calculate v, the effective number of degrees of freedom. v is a computed sample size that takes the sample sizes of both groups into account. Finally, a confidence interval is computed. If the interval spans zero, the mean difference between the groups is not considered statistically significant [4].
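Assuming the standard unpaired t-test for unequal sample sizes (Welch's approximation, as in [4]), the quantities described above take the following form, where x̄, s, and N denote each group's sample mean, standard deviation, and size:

```latex
s_{\bar{x}_{td}-\bar{x}_c} \;=\; \sqrt{\frac{s_{td}^{2}}{N_{td}} + \frac{s_{c}^{2}}{N_{c}}}

\nu \;=\; \frac{\left(\dfrac{s_{td}^{2}}{N_{td}} + \dfrac{s_{c}^{2}}{N_{c}}\right)^{2}}
               {\dfrac{1}{N_{td}-1}\left(\dfrac{s_{td}^{2}}{N_{td}}\right)^{2}
              + \dfrac{1}{N_{c}-1}\left(\dfrac{s_{c}^{2}}{N_{c}}\right)^{2}}

\left(\bar{x}_{td}-\bar{x}_{c}\right) \;\pm\; t_{[\,1-\alpha/2;\;\nu\,]}\; s_{\bar{x}_{td}-\bar{x}_c}
```

The last expression is the confidence interval reported in the tables below; if it spans zero, the difference is marked as not significant.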
Statistically significant mean differences were marked as potentially interesting. Other measurements whose mean differences were not significant were also marked if they seemed relevant to collaboration. An interpretation of the results was then performed.
This section analyzes significant results from the study.
The results are explained and analyzed in the order that they were collected during experiment sessions. A discussion at the end of the section summarizes the findings.
The tables presented in this section use a special notation. The Measurement column contains descriptions of measurements taken during the experiment. N_td and N_c indicate the task division and control group sample sizes for the values that appear in the table. x_td and s_td are the mean and standard deviation for the task division group; x_c and s_c are the same values for the control group. Conf. Int. is the t-test confidence interval, representing the level of confidence that the task division and control group means for the measurement are statistically different. A *** in this column indicates no significant difference.
4.1.1 CollabBillboard Application
The CollabBillboard Application stage of the experiment measured performance using five characteristics (see Team Performance). Pieces In Order (see Table 2) measured the quality of the billboard assembly. It represents the number of pixels each billboard piece was off from its neighbors. If each piece was next to its correct neighbor, the team received a score of zero. Higher scores indicated a lower quality assembly.
Time to Completion
(see Table 2) measured how much time
it took the team to complete the entire simulation. It took the task division group almost seven minutes longer to complete. The reasons for this time difference are straightforward. The control group was able to assemble the billboard in parallel, because both participants had access to the move piece function. In the task division group, only one participant was able to move pieces. The process of figuring out where pieces should be placed was also easier for the control group. Both participants could see the entire billboard frame, which allowed them to quickly decide where pieces belonged. With the task division group, only the participant with the View Placement dialog could view the entire billboard frame. This forced both participants into a verbal dialogue to move pieces to their proper place in the frame. Note that although it took longer for the task division group to complete the assembly task, the quality of the result was higher.
Measurement (N_td = 10, N_c = 6)    x_td    s_td    x_c      s_c     Conf. Int.
Pieces in Order (pixels)              0       0      23.3     36.7      80%
Time to Completion (seconds)        1567    297.8  1156.8    193.7      90%

Table 2: CollabBillboard Application Measurements
4.1.2 Conversation
While the team was using CollabBillboard, their conversation was recorded for later analysis. The analysis developed a ratio of time spent in conversation over total task time. This ratio indicated the amount of time, as a percentage, that the team spent in conversation while performing the task. Ratios were calculated for the simulation’s two major tasks: Site Selection and Billboard
Assembly (see Task Overview ).
The Site Selection conversation ratio (see Table 3) was just over 70% for both groups, with no significant difference between the means. No difference was expected, because the task was exactly the same for both groups and was the first activity performed during the session. The similarity of the ratios is evidence of the population homogeneity between groups.
The Billboard Assembly conversation ratio (see Table 3) showed a significant difference between groups. Task division teams were involved in conversation 85% of the time. The amount of conversation is not surprising, because the Place Billboard participant had to rely on the View Placement participant's verbal directions to place a piece properly on the frame. If no conversation took place, it would have been impossible to assemble the billboard correctly.
The control teams were involved in conversation only 44% of the time. Again, this was not surprising because neither participant needed the other in order to complete the assembly task. Both participants had a full view of the billboard frame, and a move piece capability.
It is clear that the task division simulation induced more conversation in teams than the control version. Although not necessarily a direct measurement of collaboration, it indicates more interaction between team members. Team interaction is a requirement for collaboration.
Measurement (N_td = 5, N_c = 4)                     x_td     s_td     x_c      s_c     Conf. Int.
Site Selection Conversation/Total Time (ratio)      0.7304   0.0745   0.7160   0.1603    ***
Billboard Assembly Conversation/Total Time (ratio)  0.8503   0.0624   0.4444   0.0989    90%

Table 3: Conversation Measurements
4.1.3 Lost at Sea Exercise
The Lost at Sea exercise measured performance using 17 characteristics (see Team Performance). Many of the characteristic means were statistically different between groups; however, the most meaningful measurements were the Mean Delta from Correct Ranking and Time to Completion.
The Mean Delta from Correct Ranking was used to calculate how well a team performed on the exercise. First, the absolute value of the difference between a team’s ranking for each item and the correct ranking of the item was determined. For example, if the team gave the Sextant a rank of 10, and the correct ranking was 15, then the item’s delta value would be 5. Next a mean delta was calculated for each team. Finally a mean delta was calculated for both groups. The smaller the delta value, the better the items were ranked. The task division group performed almost a full rank better on average than the control group in this exercise.
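The calculation above reduces to a mean of absolute rank differences. As a sketch (the function name and the sample ranks are hypothetical, not data from the study):

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

// Mean Delta from Correct Ranking: the average, over all items, of the
// absolute difference between the team's rank and the correct rank.
// Smaller values indicate a ranking closer to the solution key.
double meanDelta(const std::vector<int>& teamRanks,
                 const std::vector<int>& correctRanks) {
    double sum = 0.0;
    for (std::size_t i = 0; i < teamRanks.size(); ++i)
        sum += std::abs(teamRanks[i] - correctRanks[i]);
    return sum / static_cast<double>(teamRanks.size());
}
```

A group mean is then just the average of this per-team value over all teams in the group.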
Time to Completion measured the number of seconds it took for the team to complete the Lost at Sea exercise. The task division group completed the exercise a full five minutes faster on average than the control group.
Measurement (N_td = 7, N_c = 6)    x_td    s_td     x_c     s_c      Conf. Int.
Mean Delta from Correct Ranking    3.295   0.6959   4.089   0.830      90%
Time to Completion (seconds)       1118    148.19   1420    432.11     80%

Table 4: Lost at Sea Exercise Measurements
The results from this exercise indicate that groups using a task division simulation may have enhanced collaborative abilities on subsequent tasks.
4.1.4 Exit Survey
The exit survey asked each participant 26 subjective questions about the CollabBillboard application and the collaborative experience during the session. The response means for the majority of the questions showed no significant difference. However, several responses were statistically different (see Table 5).
Cooperative/Independent asked if the participant felt interaction in the CollabBillboard application was more cooperative or more independent. A value towards one indicated a feeling of cooperation. A value towards five indicated a feeling of independence. The task division group scored almost a full point (20%) closer to a feeling of cooperation.
Goal Directed Behavior asked if the participant felt that the team acted in an organized, productive manner when completing tasks during the session. A value towards one indicated more organization while a value towards five indicated less organization. The task division group scored a half point (10%) closer to a feeling of organized behavior.
Good Leadership asked whether, when someone took on a leadership role during the session, the leader was effective. A value towards one indicated good leadership, while a value towards five indicated poor leadership. The task division group scored three quarters of a point (15%) closer to a feeling of good leadership.
Easier by Self asked if the participant felt it would have been easier to complete the CollabBillboard simulation without a partner. A value towards one indicated a greater desire to work independently while a value towards five indicated a desire to work with a partner. The control group scored 1.7 points (33%) closer to a desire to work cooperatively than the task division group. It is interesting to note that although the task division teams found it more difficult to assemble the billboard, and it took significantly more time, they did a higher quality job.
Satisfactory Outcome asked if the participant felt that the work the team performed during the session was satisfactory. A value towards one indicated a high level of satisfaction, while a value towards five indicated a lower satisfaction. The task division group scored a half point
(10%) higher feeling of satisfaction with the team’s performance.
Try other Simulations solicited the participant’s desire to try other simulations like CollabBillboard. A value towards one indicated a greater willingness to try other simulations while a value towards five indicated less desire. The task division group scored a half point (10%) greater desire to try similar simulations.
Prefer CollabBillboard or Lost at Sea asked which collaborative exercise was preferred by the participant. A value towards one indicated a preference for CollabBillboard while a value towards five indicated a preference for Lost at Sea. Both groups seemed to be neutral in their feelings about the best exercise, with no significant difference between the means.

Measurement (N_td = 20, N_c = 12)        x_td    s_td     x_c     s_c      Conf. Int.
Cooperative/Independent                  1.263   0.4524   2.167   0.9374     90%
Easier by Self?                          1.684   1.376    3.333   0.9847     90%
Try other Simulations?                   1.737   0.6534   2.273   1.191      80%
Satisfactory Outcome                     1.684   0.5824   2.083   0.7930     80%
Goal Directed Behavior                   1.389   0.5016   1.833   0.9374     80%
Good Leadership                          2.267   0.4577   3.000   0.7560     90%
Prefer CollabBillboard or Lost at Sea?   2.737   1.147    2.917   1.621      ***

Table 5: Exit Survey Measurements
4.1.5 Variables
There were statistically significant differences between groups in almost all of the variables that measured personal
characteristics (see Table 6). Both groups used the
computer for work a significant number of hours during the week. However, the control group used the computer 10 hours a week more on average for work activities. The task division group used the computer almost 10 hours a week more for games and 10 hours a month more for multiuser games.
Class in School recorded the participant’s current class as an undergraduate (1=Freshman…4=Senior), or graduate
(5=Master’s (two years of graduate school), 6=Doctorate
(three or more years of graduate school)). The task division group was made up of predominantly Master’s students, while the control group was mostly Doctoral students.
Friend Level of Partner recorded the participant’s relationship to the other team member. A value of one indicated they had never met, while a value of five indicated they were best friends. In both groups, the team members tended to be acquaintances, rather than strangers or close friends.
Measurement (N_td = 20, N_c = 12)      x_td     s_td     x_c      s_c      Conf. Int.
Computer Work (Hrs/Week)               25.105   13.334   38.182   21.43      90%
Computer Recreation (Hrs/Week)         18.052   14.014   10.250    8.080     90%
Multiuser Computer Games (Hrs/Month)   12.368   11.427    1.5      3.605     90%
Class in School                         4.833    1.339    5.75     0.6216    90%
Friend Level of Partner                 2.263    1.368    2.833    1.528     ***

Table 6: Personal Information Measurements
4.1.6 Discussion
The results and analysis of the study seem to support the hypothesis that task division can improve collaboration both in the simulation and in subsequent group activities.
Although it took longer for the task division group to complete the simulation, they produced higher quality results indicating more effective collaboration.
Conversation, another measure of collaboration, occurred during 85% of the assembly task for task division and only
44% of the assembly task for the control. On the second collaborative activity, the task division group completed the work in less time with superior results. In every instance that the exit survey had statistically valid mean differences, the responses were more positive about collaboration in the task division group.
One anomaly, the Easier by Self question, showed that task division participants felt it would have been easier to complete the simulation without a partner. This would have involved allowing both the View Placement and Place Billboard functions to be executed by a single participant. This intuition is supported by the more cooperative feelings the control group expressed for this question: the control group had both functions integrated into a single dialog window accessible to both participants.
The variables measured in the study offered mixed conclusions. The control group used the computer 10 hours a week more for work, and were pursuing a more advanced degree. Presumably this might have given them an edge in the use of the simulation, and also caused them to perform better on the paper and pencil exercise. The task division group used the computer more for games, and were pursuing a less advanced degree. This might have given them an edge in the simulation, but probably not on the paper and pencil exercise. Better collaboration due to a previous relationship between members of a team can be ruled out in both groups because in general team members tended to be acquaintances, but not close friends.
5.1 Limitations
Several limitations of the study are addressed in this section. First, the sample size for the statistics gathered during the experiment should be increased to eliminate concerns about a small unbalanced sample.
The roles imposed by the task division version of the program sometimes created frustration in the team. This was particularly true for the Place Billboard partner. Recall that this subtask involved moving the billboard pieces under the other partner’s direction. The Place Billboard
participant sometimes felt like an extension of the View
Placement participant’s hand rather than an equal partner.
Positive feelings about cooperation might have been improved if a better balance between the two subtasks had been established.
Although CollabBillboard was a fully networked, distributed application, the ability to work in parallel was only utilized in the control version. In the control version, both participants were able to assemble billboard pieces at the same time. The task division version serialized the assembly process. The View Placement participant could only give verbal directions while a piece was being moved by the
Place Billboard participant. The task of assembling the billboard might have been accelerated if a more parallel version of the process had been developed for the task division group.
The Lost at Sea exercise has been predominantly used as a measurement of larger group collaboration (i.e. groups of four to six members). It is possible that this might not have been an effective tool for measuring collaboration in team pairs. Additionally, studies have shown that group characteristics have a strong influence on the outcome of this exercise [11]. The variables measured in this study did not take into account personality type.
Personal information recorded in the exit survey indicated statistically significant differences between the task division and control groups. Although these differences suggested the control group should have performed better on the exercises, the task division group actually had superior performance. Despite this finding, the results of the study would have had more validity if there had been no significant differences between the recorded variable means.
5.2 Related Work
We know of no other work that evaluates collaboration with task division in synchronous groupware simulations. However, we do know of several research efforts that have developed collaborative simulation systems. These include SharedARK,
MacCandy, and TurboTurtle.
SharedARK is a system for creating synchronous, shared microworlds [5]. A SharedARK microworld is an infinite, shared, two-dimensional “flatland” with a small portion visible on any one computer display. Users manipulate objects using a special mouse driven hand. The system can operate in face-to-face and distance environments. When users encounter each other in SharedARK they can set up audio/video links. A basic model of the physical world is built into the system. Users can experiment and create objects that have mass, density, and momentum. Several applications have been created, including the “Puckland” [6] simulator for elastic collisions and “ARKCola” [7], a simulation of a soft drink processing plant. Experiments with SharedARK systems have shown that students are more engaged and perform deeper evaluations of problem sets than with paper and pencil exercises [5].
Other microworlds include MacCandy [8] and TurboTurtle
[9]. MacCandy simulates a candy factory where candies are packed in rolls of ten and rolls are packed in boxes of ten. The system was designed to help second grade students learn about estimation, symbology, and addition/subtraction. The microworld is the focus of shared discussion when displayed on the instructor’s screen at the front of the classroom. TurboTurtle is a system for exploring Newtonian physics, similar to SharedARK. A distinguishing feature of the system is its sophisticated support for awareness of other users, including user lists, telepointers, and shared widget controls.
5.3 Future Work
There are several areas for further exploration of task division and collaboration. Feedback and observation during the study have generated ideas for improvement.
The flexibility of CollabBillboard’s architecture, and its network capability will allow new kinds of studies to be conducted. Finally, several advanced task division simulations are proposed.
5.3.1 Improving CollabBillboard
Several application design flaws and bugs became apparent during the study. Teams were almost always penalized for incorrectly centering the billboard on the billboard frame because they used the top of a set of lights that appeared on the frame, rather than the actual bottom of the frame, as the bottom of the frame. The lights should be removed. The control version of the simulation experienced local performance problems when the remote user flooded the local system with billboard piece move events. Coupling was loosened so that these events did not affect local user performance, but the local user then complained about the lack of immediate feedback on remote user piece movement.
There is a suspicion that this may be a Microsoft Windows problem. This needs to be investigated further.
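One plausible form for this loosened coupling is event coalescing: a burst of remote move events is collapsed to the most recent position per piece before the local display is updated. The sketch below is illustrative only; the `MoveEvent` and `coalesce` names are assumptions, not part of CollabBillboard's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class MoveEvent:
    """A remote 'billboard piece moved' event (hypothetical structure)."""
    piece_id: int
    x: float
    y: float

def coalesce(events):
    """Collapse a burst of move events to one (latest) event per piece,
    so the local display applies a few updates instead of hundreds."""
    latest = {}
    for ev in events:  # events are assumed to arrive in network order
        latest[ev.piece_id] = ev
    return list(latest.values())

# A burst of four events touching two pieces collapses to two updates.
burst = [MoveEvent(1, 0, 0), MoveEvent(1, 5, 2),
         MoveEvent(2, 1, 1), MoveEvent(1, 7, 3)]
updates = coalesce(burst)
```

Applying the coalesced batch on a fixed timer would bound local rendering cost at the price of less immediate feedback on remote movement, which matches the trade-off participants noticed.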
Occasionally the client and server processes would not be able to connect to each other across the network. If the machines were rebooted, the problem would usually go away. This needs to be investigated further.
A frequent complaint from the task division group was the feeling that the assembly process took too long, and that the
Place Billboard subtask didn’t seem to be on equal footing with View Placement. Discovering a way to parallelize the assembly process for the task division version of the simulation might reduce this complaint. One idea for parallelization is to give the View Placement subtask gross piece movement control, and allow the Place Billboard subtask to do fine piece adjustment with the ability to automatically jump from piece to piece.
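One way to sketch this division of movement control: gross moves snap a piece whole grid cells at a time, while fine adjustments are clamped to stay within a single cell, so the two roles operate at disjoint granularities and can act concurrently. The grid size and function names below are illustrative assumptions, not part of CollabBillboard.

```python
GRID = 10  # coarse grid cell size in pixels (assumed for illustration)

def gross_move(pos, dx_cells, dy_cells):
    """View Placement role: jump a piece whole grid cells at a time."""
    x, y = pos
    return (x + dx_cells * GRID, y + dy_cells * GRID)

def fine_adjust(pos, dx, dy):
    """Place Billboard role: nudge a piece, clamped within half a cell."""
    x, y = pos
    dx = max(-GRID // 2, min(GRID // 2, dx))
    dy = max(-GRID // 2, min(GRID // 2, dy))
    return (x + dx, y + dy)

# Gross placement followed by a clamped fine adjustment.
pos = gross_move((0, 0), 3, 1)   # jump to (30, 10)
pos = fine_adjust(pos, 2, -20)   # dy clamped to -5, yielding (32, 5)
```

Because each role's commands touch a different granularity, the two participants could adjust the same piece (or different pieces) without serializing the whole assembly process.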
To reduce the differences in the personal characteristics of the groups, a pretest should be performed. The pretest should also include a check for personality type based on the Myers-Briggs Type Indicator [13]. The results of the pretest would be used to assign participants to the task division and control groups in a balanced fashion.
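Such balanced assignment could be as simple as a serpentine (ABBA) split over participants ranked by a pretest score. A minimal sketch, assuming a single numeric score per participant; the `balanced_assign` name and score values are illustrative, not from the study.

```python
def balanced_assign(participants):
    """participants: list of (name, pretest_score) pairs.
    Rank by score and deal in ABBA order so group means stay close."""
    ordered = sorted(participants, key=lambda p: p[1], reverse=True)
    group_a, group_b = [], []
    for i, person in enumerate(ordered):
        (group_a if i % 4 in (0, 3) else group_b).append(person)
    return group_a, group_b

# Scores 40/30/20/10 split into two groups with equal score totals.
people = [("p1", 40), ("p2", 10), ("p3", 30), ("p4", 20)]
task_division, control = balanced_assign(people)
```

A real pretest with several covariates (computer use, degree program, personality type) would call for stratified or matched assignment, but the same principle applies.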
It would be interesting to get statistics on team pairs that only complete the Lost at Sea exercise to understand the effect any version of the simulation is having on collaboration. The amount of conversation that occurs during the exercise should also be recorded for all versions of the experiment as another measurement.
5.3.2 Additional Studies
CollabBillboard’s network capability would allow it to be used in a distance collaboration setting. Limited work has been done in the area of groupware collaboration using audio/video versus audio only communication [10]. It would be helpful to contribute to this literature with another study.
Competition between teams may be a useful way to improve performance within a team. A study could be conducted in which teams use CollabBillboard at the same time in the same room, with live updates of every team's assembly progress and performance in the simulation. The Lost at Sea exercise could also be digitized so that teams could see each other's performance on the activity. The results of the competition study would be compared against the study in this paper.
CollabBillboard was implemented as a team pair system; however, its peer-to-peer architecture allows it to be scaled to larger groups. It would be interesting to investigate patterns of group organization and collaboration when larger teams participate in the simulation. This investigation would include both the task division and control versions of the program.
5.3.3 Next Generation Task Division Simulations
We see task division simulations as an important component of next generation learning platforms and are considering several possibilities for future development.
These include CollabBot, CollabComp, and CollabSail.
CollabBot is a collaboration robot. This simulation allows a group of students to guide a robot through a maze. Each student is responsible for a separate robot system: navigation, vision, environment manipulation, and driving.
The goal of this simulation is to build and enhance collaborative skills in the team.
CollabComp is a collaborative computer. This scenario allows a group of students to participate in the simulation of a modern multitasking computer. Students divide up the computer’s subsystems and then operate these subsystems cooperatively to service requests from a typical user. The goal of this simulation is to learn about computer organization and operating systems.
CollabSail is a collaborative sailing ship. This scenario places a group of students onboard a simulated sailing ship.
Each student has a different responsibility on the ship: navigation, helm, sails, and artillery. The goal of the simulation is to learn about meteorology, celestial navigation, chart reading, and mechanical physics.
5.4 Summary
This paper described a study conducted to evaluate the effectiveness of task division on collaboration between pairs of users in a synchronous groupware simulation.
Statistically significant results show task division increased team interaction and improved group performance in subsequent unrelated group activities when compared to a control version of the software. The promising results of the study and the flexibility of CollabBillboard’s architecture have led to the proposal of several new studies that will investigate the effect of inter-team competition on intra-team collaboration, and organization and collaboration patterns in larger teams. The promise of task division has also led us to consider development of several advanced task division simulations that would help students learn about collaboration, computers, and sailing.
A copy of CollabBillboard, the Lost at Sea exercise, and the Exit Survey can be downloaded from the following web address: http://www.cs.rpi.edu/~dugan/collabbilboard.html
Acknowledgments
This research was supported in part by the National Science Foundation under award CDA-9634485. The authors would also like to thank Ed Rogers for his helpful suggestions during the development of CollabBillboard. Matt Champagne gave excellent advice on the construction and statistical analysis of the evaluation experiment. Most importantly, we wish to acknowledge the thirty-two students who gave their time as subjects in the study.
References
1. Mastaglio, T.W. and R. Callahan, A Large-Scale Complex Virtual Environment for Team Training. IEEE Computer, 1995. 28(8): p. 49-56.
2. Fitzsimmons, E.A. and J.D. Fletcher, Beyond DOD: Non-Defense Training and Education Applications of DIS. Proceedings of the IEEE, 1995. 83(8): p. 1179-1186.
3. Ryan, M., Multiplayer Gaming: Two or More Is War, in PC Magazine. 1996. p. 337-340.
4. Jain, R., The Art of Computer Systems Performance Analysis. 1991, New York: John Wiley and Sons. 685.
5. Smith, R.B., A Prototype Futuristic Technology for Distance Education, in Proceedings of the NATO Advanced Workshop on New Directions in Educational Technology, November 10-13, 1988, Cranfield, UK. p. 131-138.
6. Whitelock, D., et al., Investigating the Role of Socio-Cognitive Conflict in Computer Supported Learning about Elastic Collisions. 1992, Centre for Information Technology in Education, The Open University.
7. Gaver, W.W., R.B. Smith, and T. O'Shea, Effective Sounds in Complex Systems, in Human Factors in Computing Systems, CHI 1991. 1991: ACM Press.
8. Bowers, J., An Alternative Perspective for Developing a Mathematical Microworld, in Computer Support for Collaborative Learning. 1995. Indiana University.
9. Cockburn, A. and S. Greenberg, TurboTurtle: A Collaborative Microworld for Exploring Newtonian Physics, in Computer Support for Collaborative Learning. 1995. Indiana University.
10. Gale, S., Adding Audio and Video to an Office Environment, in First European Conference on Computer Supported Cooperative Work (EC-CSCW '89). 1989. Gatwick, London, UK: Computer Sciences House, Slough, UK.
11. Rogelberg, S.G., J.L. Barnes-Farrell, and C.A. Lowe, The Stepladder Technique: An Alternative Group Structure Facilitating Effective Group Decision-Making. Journal of Applied Psychology, 1992. 77(5): p. 730-737.
12. Nemiroff, P.M. and W. Pasmore, Lost at Sea: A Consensus-Seeking Task, in J.W. Pfeiffer and J. Jones (eds.), 1975 Handbook for Group Facilitators. 1975. San Diego, California: University Associates. p. 28-34.
13. Myers, I.B., Introduction to Type, 6th ed. Palo Alto, CA: Consulting Psychologists Press, 1998.
14. Dugan, Robert F., Eric A. Briemer, Darren Lim, Ephraim Glinert, Mark Goldberg, and Matt Champagne, Exploring Collaborative Learning in Rensselaer's Classroom in the Round, Technical Report 98-01, Rensselaer Polytechnic Institute, Troy, NY, April 1998.
15. Ferraro, A., E. Rogers, and C. Geisler, Team Learning Through Computer Supported Collaborative Design, in Proceedings of Computer Support for Collaborative Learning, CSCL '95, 1995: p. 120-124.
16. Rogers, E. and C. Geisler, A Collaborative Learning Environment for Intellectual Teamwork Across the Curriculum, in Proceedings of Frontiers in Education, 1997, Pittsburgh: CD-Paper 1491.
17. Hammonds, K.H., et al., The New U: A Tough Market is Reshaping Colleges, Business Week, p. 96-102, December 22, 1997.