27th Annual Conference on Distance Teaching & Learning
For more resources: http://www.uwex.edu/disted/conference

Practicing Mobile Tele-Presence at University: Physical Avatars in Education
M. N. Clark, P.I.
Adjunct Faculty
clarkmn@cmu.edu
David Root
Director of MSE Distance Education
droot@cs.cmu.edu
Institute for Software Research
Carnegie Mellon University
Summary
We describe several deployments of a teleoperated mobile platform [1][2] on campus and in a
distance course. Locations spanned several university departments and settings, chosen to explore the
platform's capabilities and social acceptance. The mobile platform's popularity quickly gave us a diverse
population of over a hundred participants, from professors and university staff to middle and high school
students. Our study ran 16 weeks. Overall, we collected surveys from 15 operators along with local
surveys from students. We placed the robot in classrooms with on-campus students studying contextual
software design methods [10].
Our results provide optimism for physical avatars in education, especially as their underlying core
technologies improve in streaming video, voice, mobile bandwidth, and software architecture.
Presenters' Bios
M. N. Clark is a distance instructor for courses in Software Architecture, Real-Time Systems, and
Contextual Software Design Methods. He has close associations with the University's Robotics Institute
and its Field Robotics Center.
David Root is the Director of Distance Education for the University's Master of Software Engineering
program and Director of its Studio Projects. Mr. Root is an instructor for all core courses on campus and
for courses offered from a distance.
Contact Information:
Carnegie Mellon University
5000 Forbes Avenue
Pittsburgh, PA 15213
Office: 412-268-5198
Fax: 412-268-5413
First, we define our avatar as a real-world mobile device that represents its human operator so that he/she
can see, hear, speak, and physically move within environments without being corporeally present.
Background: Before starting, we presented our objectives and processes to our department heads. Next we
selected a graduate course, "Software Methods" [3], to showcase the robot because it was offered both
on and off campus. We briefed its instructors and received permission to use a tele-presence robot in
the classrooms.
Copyright 2011 Board of Regents of the University of Wisconsin System

Our questions were:
1. Does the mobile tele-presence technology add learning value from the perspectives of:
a. distance student
b. off-campus instructor
c. on-campus instructor
d. distance group settings
e. on-campus group settings
f. administrators
2. How does mobile tele-presence compare to traditional synchronous learning communications?
a. to virtual conference rooms
b. to speaker phone
c. to computer chat sessions
3. What specific benefits can be achieved with mobile tele-presence? What is unique about it?
4. What are the lessons learned when introducing mobile tele-presence in academic settings?
We invited several people to participate and provided them with instructional videos. Individuals operated
the platform using a web interface. The principal investigator, a distance instructor who uses a
wheelchair, approached the project as a method to enhance his students' educational experiences,
specifically asking: would a physical presence in machine form add value to students?
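Because operators drove the platform over an ordinary web interface, a stale network link is a safety concern: the robot must not keep executing the last drive command after the operator's connection drops. The sketch below shows one common way to handle this, a dead-man watchdog that zeroes the drive output when no command has arrived recently. This is an illustrative assumption about how such a system could be built, not a description of the Anybots control path, which is proprietary; the class name, timeout, and velocity convention are all hypothetical.

```python
import time

class TeleopWatchdog:
    """Safety watchdog for a teleoperated platform: if no operator
    command arrives within `timeout` seconds, the drive output is
    forced to a stop. Hypothetical sketch, not the vendor's design."""

    def __init__(self, timeout=0.5):
        self.timeout = timeout
        self.last_cmd = (0.0, 0.0)   # (linear m/s, angular rad/s)
        self.last_time = None        # time of most recent command

    def on_operator_command(self, linear, angular, now=None):
        # Called whenever the web client delivers a drive command.
        self.last_cmd = (linear, angular)
        self.last_time = time.monotonic() if now is None else now

    def drive_output(self, now=None):
        # Velocity actually sent to the wheels; stops when the link is stale.
        now = time.monotonic() if now is None else now
        if self.last_time is None or now - self.last_time > self.timeout:
            return (0.0, 0.0)
        return self.last_cmd
```

The `now` parameter exists only to make the logic testable; in a live loop the monotonic clock is used directly.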
Our avatar drivers and observers completed surveys to capture two fundamental issues: A) was the machine
accepted? B) was the experience helpful? We addressed those two points through these roles:
1. Student, traditional classroom
2. Teacher, traditional classroom
3. Member of a semester Capstone project
4. Briefing Department
5. Giving staff introductions
6. Student consultant
7. Briefing visitors and groups
8. Facility tour guide
9. Business meeting attendee
10. Distance lecturer
11. Open-House greeter
12. Library assistant
13. Mentor for project reviews
14. Recruiter
Innovation: Our innovation is in immersing a surrogate that is three-dimensional, dynamic, and self-contained. Whereas collaboration technologies afford some synchronous interaction, most involve static
infrastructures. One example is virtual conference centers. These dedicated rooms of microphones,
cameras and projectors are valuable but place burdens on the distance student, staff, instructors, and the
local student community. Even with inexpensive mobile web-cameras, the success of distance interaction
relies on both parties to use technology. Instead, a person speaking to our avatar does not require
technology on their side to interact. The operator controls the robot from a distance to follow its host until
the campus environment is understood. This is exactly how a person welcomes a new colleague or guest.
Our avatar represents its operator in abstract, machine form, i.e. it sees, listens, and speaks for its human
operator.
Awareness Benefit: Instructors and colleagues are less likely to forget distant students in avatar form,
and thereby foster attentiveness. Likewise, our belief is that persons at a distance increase their learning
opportunities when they own their exploration space. This technology is unique among present static
devices in that side-bar conversations in hallways and student study-rooms are accessible to mobile
avatars. Additionally, our avatars can seek individuals or groups. In contrast, chat rooms, web cams, and
Skype methods rely on both parties to converge with equipment at a place and time.
Virtual Conference (VC) centers have high infrastructure costs and are typically few in number.
Furthermore, professors often leave the comfort of their nearby lecture rooms to accommodate distance
students in VCs. Office hours for one-on-one discussions with distance students use phone chats.
Whereas a mobile avatar can travel wherever there is sufficient wi-fi or cell-phone service available,
including the professor’s office. Today’s universities have pervasive wi-fi coverage so we were surprised
to discover gaps in wireless connectivity. We learned that traveling continuously across access points
require special infrastructure considerations [4]. However, our premise is that smart phone bandwidth and
coverage will continue to increase making it possible for distance students, professors, and staffers to use
avatars ubiquitously.
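Finding the connectivity gaps we mention required walking the avatar through the building while recording signal strength. The helper below sketches how such a survey could be reduced to a list of dead zones: it groups contiguous low-signal readings into gap runs. The function, its name, and the -75 dBm usability threshold are assumptions for illustration, not part of our deployed tooling.

```python
def find_coverage_gaps(samples, min_rssi=-75):
    """Given (position_label, rssi_dbm) samples from a walk-through,
    return contiguous runs of positions where the signal fell below
    the usable threshold. Hypothetical survey helper; the -75 dBm
    cutoff is an assumed value, not a measured requirement."""
    gaps, current = [], []
    for label, rssi in samples:
        if rssi < min_rssi:
            current.append(label)      # still inside a dead zone
        elif current:
            gaps.append(current)       # dead zone just ended
            current = []
    if current:
        gaps.append(current)           # walk ended inside a dead zone
    return gaps
```

A survey like `[("hall-1", -60), ("hall-2", -80), ("hall-3", -82), ("stairs", -65), ("lab", -90)]` would yield two gap runs, one spanning the two hallway points and one at the lab.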
Degree of Sensing: How much autonomy should one build into a physical, mobile avatar? This question
addresses a design trade-off. While an ideal platform would be capable of robust obstacle avoidance
and re-planning to reach a goal, this typically involves uploading local maps to the mobile robot, adding
sensors, and placing fiducials as navigation guides. The CoBot project [5][6][7] has had impressive
demonstrations in this regard.
Instead, our platform relied on its operators, in the role of a virtual visitor, to understand their
environment by following human hosts. As needed, the host introduced the avatar to classrooms,
hallways, staff offices, and obstacles in labs, essentially accepting the human operator's virtual presence.
If the visitor had prior knowledge of the area, the host's role was more relaxed, perhaps nonexistent.
Subsequent avatar visits from the same operator did not require such first-time "hand-holding".
New Relationships (HMH): One observation from our case-study is that there exists an emerging
relationship: human to machine to human (HMH). While a novelty at first encounter, the distant visitor
and resident person progress quickly to real work.
Another case-study observation concerns the expectation that the avatar work "out of the box". That is, when
the robot arrives from its shipper and is turned on, the receiver expects it to function with little
preparation. The distance user expects to log in and see, listen, and speak with clarity, along with having a
fine degree of mobility. People who interact with the avatar expect to be seen, heard, and, most
importantly, to feel non-threatened by the device. In no case do we expect either party in HMH to
understand its underlying technologies. Additionally, it should be reasonable to ship the machine to
other campuses and have a virtual presence ready within an hour of its arrival. Given this perspective,
careful consideration should be given before requiring that fiducials and local maps be uploaded before the
robot is ready to function. In this regard our mobile platform uses human-assist technology to augment the
operator's skills without adding front-end complexity.
Overview in Pictures: We direct the reader to Figure 1 and the narrative below for examples from our
case-study.
Insert "A" in the upper left is a snapshot of the beta platform shipped to Carnegie Mellon University from
Anybots Corporation on August 23, 2010. Note its single shipping box in the background. An adjustable
three-section telescoping mast allows the device to reach 5'10"; its height can be seen in Insert "D",
where the paper's principal investigator holds the avatar at the campus's student center. The platform
balances on two wheels and weighs about 30 pounds.
Insert “B” shows the avatar attending a weekly Friday meeting. The distance operator was a Carnegie
Mellon University professor residing in Florida. For more than 12 months, the professor attended a
weekly meeting by speaker phone. In this insert, the meeting's host asked the professor if he got the
meeting slides in last night’s email. He responded through his avatar: “yes, but they are in another room. I
will just move closer to your screen." Every chair was occupied, but the professor nimbly navigated his
avatar to the front where the screen was. After the meeting, the host, a world-renowned roboticist [8],
complimented the professor on his machine's ability to participate.
Insert “C”, with the backdrop sign “Pittsburgh Supercomputing Center” places the avatar on the first floor
of a shared-space building. Supercomputing takes place on floors three and four. Our classrooms were on
the second floor. Frequent wireless disconnects to the robot were initially associated with activities on the
floors above. We learned later that our access point was at fault: it had eight radios, each with its own
directional antenna. Tuning the array improved our mobility performance.
Inserts "E" and "G" give a glimpse of student Capstone presentations. These are joint, end-of-semester
programs with our colleagues in Portugal. The avatar entered several briefing rooms where students
presented their findings. Mentors and sponsors gave advice and asked questions as they traveled from
room to room. One
professor from Portugal logged into our avatar and participated as a mentor [9].
Insert “H” places the avatar on the left, with two lunar-rover prototypes in the frame. The location is the
Planetary Robotics high-bay within the Gates Center for Computer Science. The rovers are working
prototypes in the university’s international competition for Google’s Lunar X-Prize. Our mobile avatar
enabled distant researchers and sponsors to experience the project. We also gave tours to high-school and
middle-school students using the avatar. Our platform allowed remote scientists to observe other robots in
the high-bay, such as Zoe, the solar Atacama Desert crawler, and SCARAB, an extreme-incline drilling
platform.
Insert “I” shows how students accepted the avatar. In the weeks before Halloween, students decorated the
halls and classrooms with skeletons, pumpkins, and cobwebs. They felt comfortable dressing up the
avatar with a paper hatchet.
Insert “F” is a small snapshot during a demonstration at our Science and Technology library. Many
students would later say how inspired they were by the avatar. The senior librarian discussed
several practical ideas that we will share in our conference presentation.
References
[1] Anybots Corporation: http://anybots.com
[2] E. Guizzo; When My Avatar Went to Work, IEEE Spectrum, vol. 47, no. 9, p. 26
[3] Carnegie Mellon University ISR graduate course, J. Herbsleb author of - Methods: Deciding What
to Design (F10-17652-D)
[4] Aruba Networks; Solution Guide in Optimizing Aruba WLANS for Roaming Devices, version 3.3
[5] J. Biswas and M. Veloso; WiFi Localization and Navigation for Autonomous Indoor Mobile
Robots, in Robotics and Automation (ICRA), 2010 IEEE International Conference on, pages 4379-4384
[6] S. Rosenthal, J. Biswas, M. Veloso; An Effective Personal Mobile Robot Agent Through
Symbiotic Human-Robot Interaction, in AAMAS '10: Proceedings of the 9th International
Conference on Autonomous Agents and Multiagent Systems, Volume 1
[7] M. Veloso; CoBots – Companion Mobile Robots, at http://www.cs.cmu.edu/~mmv/; Description:
CoBot2, functional since early 2010, was developed in collaboration with Dean Pomerleau, from the
Intel Labs, Pittsburgh. CoBot2 has a stargazer that it uses to localize and navigate. It can move in
bounded and open spaces, robustly and safely avoiding obstacles. A compelling demonstration of
those algorithms was given at the Intel Open House on September 28, 2010, among large crowds of
visitors.
[8] William “Red” Whittaker, team leader for Astrobotic Technology and Field Robotic Center at
Carnegie Mellon University: http://lr.astrobotic.net/
[9] Paulo Rapino at University of Coimbra, Portugal: Carnegie Mellon and Portugal To Launch Major
Research and Education Collaboration in Press Release 26 October 2006
http://www.cmuportugal.org/tiercontent.aspx?id=38&ekmensel=568fab5c_70_0_38_1
[10] M. Shaw, J. Herbsleb, I. Ozkaya, D. Root; Deciding What to Design: Closing a Gap in Software
Education, in ICSE 2005 Education Track, LNCS 4309, pp. 28-58, 2006. © Springer-Verlag Berlin
Heidelberg 2006
Figure 1: Examples of Avatar Roles