Grading
• Describe the interface and context of use (5 points)
• Describe how you conducted the interview (5 points)
• Report the significant data from your interview (both positive and negative aspects of the interface) and the conclusions the data supports. Be especially careful to use your data to ground your conclusions (10 points)
• Describe design changes motivated by your data (10 points)
• Overall clarity of presentation and style (5 points)
When the assignment is returned, it will include your wiki posting scores.

Today: Interaction Techniques; SenseCam and memory aids

Exam II: Nov 18th (a week from Thursday)
• Textbook and lecture questions for Exam II are available on the wiki.
• Four questions will be selected randomly: two from the textbook and two from lecture.
• You need only bring a pen or pencil. We will provide beautifully colored paper. We don't want to see any notes or copies of the exam questions.
• Given that you will be writing on four questions, you should have enough time to edit and even rewrite your answers.

Microsoft SenseCam / Vicon Revue
SenseCam is a wearable digital camera designed to take photographs passively, without user intervention, while it is being worn. It takes pictures at VGA resolution (640x480 pixels) and stores them as compressed .jpg files on internal flash memory. It has 1 GB of flash memory, which can typically store over 30,000 images.
• Unlike a regular digital camera or a cameraphone, SenseCam does not have a viewfinder or a display that can be used to frame photos.
• It is fitted with a wide-angle (fish-eye) lens that maximizes its field of view. This ensures that nearly everything in the wearer's view is captured by the camera.
SenseCam also contains a number of electronic sensors.
• These include light-intensity and light-color sensors, a passive infrared (body heat) detector, a temperature sensor, and a tri-axis accelerometer.
• These sensors are monitored by the camera's microprocessor, and certain changes in sensor readings can automatically trigger a photograph. For example, a significant change in light level, or the detection of body heat in front of the camera, can cause the camera to take a picture. Alternatively, the user may elect to set SenseCam to operate on a timer, for example taking a picture every 30 seconds. (A sketch of this trigger logic appears after the memory results below.)

Mrs B: severe memory impairment following limbic encephalitis. In the control condition, a written diary was used to record and remind her of autobiographical events.
• After viewing SenseCam images, Mrs B was able to recall approximately 80% of recent, personally experienced events. Retention of events was maintained in the long term, 11 months afterwards, and without viewing the SenseCam images for three months.
• After using the written diary, Mrs B was able to remember around 49% of an event; after one month with no diary readings she had no recall of the same events.

Memory research suggests that as information passes from sensory memory to short-term memory and on to long-term memory, the amount of perceptual detail stored decreases. Within a few hundred milliseconds of perceiving an image, sensory memory confers what is like a photographic experience, enabling you to report any of the image's details. Seconds later, short-term memory enables you to report only sparse details from the image. Days later, you might be able to report only the gist of what you have seen.

A series of early landmark studies demonstrated that after viewing, say, 10,000 scenes for a few seconds each, people could determine which of a pair of two images had been seen with ~80-90% accuracy. This level of performance indicates the existence of a large storage capacity for images. But the thinking has been that this required only remembering the gist of the image, or perhaps only a few features. Typically these studies took images from magazines, and the foils were random images from the same collection.
Thus, foils could be quite different from the studied images. This makes it impossible to conclude whether the memories for each item consisted of only the gist or category of the image ("I saw an image of a wedding, not a beach"), or whether they contained specific details of the images. It was certainly thought that the memories for each item might have consisted of only the gist or category of the item.
• All the recent "change blindness" work, in which people fail to notice significant changes in visual scenes (e.g., a large background object disappears), is consistent with the view that the amount of information retained is quite low.
• The work of Loftus showing that the details of visual memories can be altered by experimenter suggestion is also consistent with visual memories being very sparse.
Thus, for a long time the consensus has been that few details are remembered.

About five years ago, Hollingworth showed that after viewing many hundreds of pictures of objects, subjects remained significantly above chance at remembering which exemplar of an object they had seen (e.g., "Did you see this power drill or that one?"). This suggests that memory can store detailed visual representations over periods longer than that of working memory. Still, this would require only a few details to make the correct decision.

In 2008 Brady, Konkle, Alvarez, and Oliva published a paper in the Proceedings of the National Academy of Sciences.
• Manipulated both the quantity and fidelity of the images.
• Used isolated objects rather than scenes.
• Used subtle visual discriminations to probe the fidelity of the visual representations.
• Presented people with several thousand pictures of objects.
Subjects were presented with 2,500 pictures for 3 seconds each, over 5.5 hours, with a repeat-detection task during presentation. Memory was then tested with a 2-alternative forced choice under three conditions:
• Novel: the foil was an object from a different category.
• Exemplar: the foil was a different exemplar of the same object category.
• State: the foil was the same object in a different state or pose.
Performance (2-AFC accuracy): Novel (92.5%, 1.6%), Exemplar (87.6%, 1.8%), State (87.2%, 1.8%)
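Returning to SenseCam's capture behaviour described earlier: a minimal sketch of the trigger logic, assuming illustrative thresholds and a 30-second timer default (these values and the function shape are assumptions for teaching purposes, not the device's actual firmware):

```python
# Illustrative values only -- not SenseCam's real thresholds or defaults.
LIGHT_CHANGE_FRACTION = 0.3   # relative change in light level that counts as "significant"
TIMER_INTERVAL_S = 30         # timer mode: one picture every 30 seconds (example from the notes)

def should_capture(prev_light, light, body_heat_detected,
                   seconds_since_last_photo, timer_mode=False):
    """Decide whether to take a photograph, mimicking the triggers described in the notes."""
    # Trigger 1: a significant change in ambient light level
    # (e.g., the wearer walks from indoors to outdoors).
    if prev_light > 0 and abs(light - prev_light) / prev_light >= LIGHT_CHANGE_FRACTION:
        return True
    # Trigger 2: the passive infrared sensor detects body heat in front of the camera
    # (i.e., a person has come into view).
    if body_heat_detected:
        return True
    # Trigger 3: optional timer mode, e.g., a picture every 30 seconds regardless of sensors.
    if timer_mode and seconds_since_last_photo >= TIMER_INTERVAL_S:
        return True
    return False

# Example: a big drop in light level, no person in view, timer off -> capture.
print(should_capture(prev_light=800, light=200, body_heat_detected=False,
                     seconds_since_last_photo=5))   # True
```

A loop on the device would poll the sensors, call a check like this, and fire the camera whenever it returns True; in timer mode the decision reduces to the elapsed-time test alone.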
Directions using LineDrive (www.mapblast.com)

Hinckley, K., & Sinclair, M. (1999). Touch-Sensing Input Devices. Proceedings of the ACM CHI '99 Conference on Human Factors in Computing Systems, 223-230.
Dasher

This week we will cover a number of interaction techniques. First we will look at some influential work from the past on the development of multi-touch interaction. Then we will cover current projects. It took a little more than 30 years from the time the mouse was invented (1963) until it was ubiquitous (1995, with Windows 95).

A Bill Buxton axiom: Everything is best for something and worst for something else. The trick is knowing for what, when, for whom, where, and most importantly, why. Buxton has argued that those who try to replace the mouse play a fool's game. The mouse is great for many things, just not everything. The challenge with new input is to find devices that work together, simultaneously with the mouse (such as in the other hand), or things that are strong where the mouse is weak, thereby complementing it. Bill Buxton is a key figure in the development of interaction techniques.

• 1963: Ivan Sutherland, Sketchpad: A Man-Machine Graphical Communication System. Also 1963: the mouse.
• 1968: Doug Engelbart, "A Research Center for Augmenting Human Intellect," presented at the 1968 Fall Joint Computer Conference; a demonstration that later became known as "The Mother of All Demos."

GRAphic Input Language (GRAIL): a method of programming by drawing flowcharts; an early exploration of gestures.

While it may seem a long way from multi-touch screens, the story of multi-touch starts with keyboards. Yes, they are mechanical devices, "hard" rather than "soft" machines, but they do involve multi-touch of a sort. First, most obviously, we see combinations, such as the SHIFT, Control, Fn, or ALT keys pressed together with others. These are cases where we want multi-touch. Second, there are the cases of unintentional, but inevitable, multiple simultaneous key presses which we want to make proper sense of. This is termed n-key rollover (where you push the next key before releasing the previous one).

PLATO (http://en.wikipedia.org/wiki/Plato_computer) helped establish key online concepts: forums, message boards, online testing, e-mail, chat rooms, picture languages, instant messaging, remote screen sharing, and multi-player games. Touch screens started to be developed in the second half of the 1960s. By 1971 a number of different techniques had been disclosed; all were single-touch and none were pressure-sensitive. One of the first to be generally known was the terminal for the PLATO IV computer-assisted education system, deployed in 1972. As well as its use of touch, it was remarkable for its use of real-time random-access audio playback and for the invention of the flat-panel plasma display. The touch technology used was a precursor to the infrared technology still available today. The initial implementation had a 16 x 16 array of touch-sensitive locations.

The first multi-touch system designed for human input to a computer system consisted of a frosted-glass panel whose local optical properties were such that, when viewed from behind with a camera, a black spot whose size depended on finger pressure appeared on an otherwise white background. This, with simple image processing, allowed multi-touch input, picture drawing, etc. Mehta, Nimish (1982). A Flexible Machine Interface. M.A.Sc. Thesis, Department of Electrical Engineering, University of Toronto, supervised by Professor K.C. Smith.

Myron Krueger's VIDEOPLACE: a vision-based system that tracked the hands and enabled multiple fingers, hands, and people to interact using a rich set of gestures. It was implemented in a number of configurations, including table and wall. Krueger essentially "wrote the book" on unencumbered (i.e., no gloves, mice, styli, etc.) rich gestural interaction. The work was more than a decade ahead of its time and hugely influential, yet not as acknowledged as it should be. His use of many of the hand gestures that are now starting to emerge can be clearly seen in the following 1988 video, including using the pinch gesture to scale and translate objects: http://youtube.com/watch?v=dmmxVA5xhuo. There are many other videos that demonstrate this system. Anyone in the field should view them, as well as read his books:
• Krueger, Myron W. (1983). Artificial Reality. Reading, MA: Addison-Wesley.
• Krueger, Myron W. (1991). Artificial Reality II. Reading, MA: Addison-Wesley.
• Krueger, Myron W., Gionfriddo, Thomas, & Hinrichsen, Katrin (1985). VIDEOPLACE - An Artificial Reality. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '85), 35-40.

Buxton's Input Research Group at the University of Toronto developed a touch tablet capable of sensing an arbitrary number of simultaneous touch inputs, reporting both location and degree of touch for each. To put things in historical perspective, this work was done in 1984, the same year the first Macintosh computer was introduced. It used capacitance rather than optical sensing, so it was thinner and much simpler than camera-based systems.
• A Multi-Touch Three Dimensional Touch-Sensitive Tablet (1985). Videos at: http://www.billbuxton.com/buxtonIRGVideos.html
• Issues and techniques in touch-sensitive tablet input (1985). Videos at: http://www.billbuxton.com/buxtonIRGVideos.html

McAvinney, P. (1986). The Sensor Frame - A Gesture-Based Device for the Manipulation of Graphic Objects. Carnegie-Mellon University. The device used optical sensors in the corners of the frame to detect fingers. At the time this was done, miniature cameras were essentially unavailable. It could sense up to three fingers at a time fairly reliably (but due to the optical technique used, there was potential for misreadings due to shadows). In a later prototype variation built with NASA funding, the Sensor Cube, the device could also detect the angle at which the finger approached the screen.

A classic paper in the literature on augmented reality: Wellner, P. (1991). The Digital Desk Calculator: Tactile manipulation on a desktop display. Proceedings of the Fourth Annual Symposium on User Interface Software and Technology (UIST '91), 27-33. An early front-projection tabletop system that used optical and acoustic techniques to sense both hands/fingers and certain objects, in particular paper-based controls and data. It clearly demonstrated multi-touch concepts such as two-finger scaling and translation of graphical objects, using either a pinching gesture or a finger from each hand, among other things.

Bill Buxton, Xerox PARC: a multi-touch pad integrated into the bottom of a keyboard. You flip the keyboard over to gain access to the multi-touch pad for rich gestural control of applications.

In 1992 Wacom introduced their digitizing tablets. These were special in that they had multi-device / multi-point sensing capability: they could sense the position of the stylus and its tip pressure, and simultaneously sense the position of a mouse-like puck. This enabled bimanual input. Working with Wacom, Buxton's lab at the University of Toronto developed a number of ways to exploit this technology for two-handed input.
See also: Leganchuk, A., Zhai, S., & Buxton, W. (1998). Manual and Cognitive Benefits of Two-Handed Input: An Experimental Study. ACM Transactions on Computer-Human Interaction, 5(4), 326-359.

Passive Real-World Interface Props for Neurosurgical Visualization. Ken Hinckley, Randy Pausch, John C. Goble, and Neal F. Kassell, 1994.

Hinckley, K., Pausch, R., Proffitt, D., & Kassell, N. (1998). Two-Handed Virtual Manipulation. ACM Transactions on Computer-Human Interaction (TOCHI), 5(3), September 1998, 260-302.
Xerox PARC: Bier, Fishkin, and Stone.

Fingerworks made a range of touch tablets with multi-touch sensing capabilities, including the iGesture Pad. They supported a fairly rich library of multi-point / multi-finger gestures. The company was founded by two University of Delaware academics, John Elias and Wayne Westerman, and its products were largely based on Westerman's thesis: Westerman, Wayne (1999). Hand Tracking, Finger Identification, and Chordic Manipulation on a Multi-Touch Surface. University of Delaware PhD Dissertation. The company was acquired in early 2005 by Apple Computer; Elias and Westerman moved to Apple, and Fingerworks ceased operations as an independent company.

DiamondTouch (MERL): capable of distinguishing which person's fingers/hands are which, as well as location and pressure.

SmartSkin (http://www.csl.sony.co.jp/person/rekimoto/smartskin/): an architecture for making interactive surfaces that are sensitive to human hand and finger gestures. The sensor recognizes multiple hand positions and their shapes, and calculates the distances between the hands and the surface, by using capacitive sensing and a mesh-shaped antenna. In contrast to camera-based gesture recognition systems, all sensing elements can be integrated within the surface, and this method does not suffer from lighting and occlusion problems. (A toy sketch of this kind of grid-based sensing appears at the end of these notes.) SmartSkin: An Infrastructure for Freehand Manipulation on Interactive Surfaces. Proceedings of ACM SIGCHI.

PlayAnywhere: A Compact Interactive Tabletop Projection-Vision System. Andrew D. Wilson, Microsoft Research. "We introduce PlayAnywhere, a front-projected computer vision-based interactive table system which uses a new commercially available projection technology to obtain a compact, self-contained form factor. PlayAnywhere's configuration addresses installation, calibration, and portability issues that are typical of most vision-based table systems, and thereby is particularly motivated in consumer applications. PlayAnywhere also makes a number of contributions related to image processing techniques for front-projected vision-based table systems, including a shadow-based touch detection algorithm, a fast, simple visual bar code scheme tailored to projection-vision table systems, the ability to continuously track sheets of paper, and an optical flow-based algorithm for the manipulation of onscreen objects that does not rely on fragile tracking algorithms." Wilson, A. D. (2005). PlayAnywhere: a compact interactive tabletop projection-vision system. In Proceedings of the 18th Annual ACM Symposium on User Interface Software and Technology (UIST '05), Seattle, WA, USA, October 23-26, 2005. ACM, New York, NY, 83-92.

Jeff Han: Multi-Touch Sensing through Frustrated Total Internal Reflection (2005). Video on website. A very elegant implementation of a number of techniques and applications on a table-format rear-projection surface. Han formed Perceptive Pixel in 2006 in order to further develop the technology in the private sector. See the more recent videos at the Perceptive Pixel site: http://www.perceptivepixel.com/

A commercial multi-touch sensor:
• Can sense finger and stylus simultaneously.
• Unlike most touch sensors that support a stylus, this one incorporates a specialized stylus sensor; the result is much higher quality digital ink from the stylus.
• Incorporated into some recent Tablet-PCs.
• The technology scales to larger formats, such as table-top size.
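Finally, the toy sketch promised in the SmartSkin discussion above: a loose illustration of what a mesh of capacitive readings makes possible. This is not Rekimoto's actual algorithm; the grid size, threshold, and centroid interpolation are assumptions made purely for illustration.

```python
# A toy 8x8 grid of capacitance readings (0 = nothing nearby, higher = hand closer).
readings = [[0.0] * 8 for _ in range(8)]
readings[3][4] = 0.9
readings[3][5] = 0.7
readings[4][4] = 0.6

def hand_estimate(grid, threshold=0.2):
    """Estimate a hand's (row, col) position as a weighted centroid of strong cells,
    using the peak reading as a crude proxy for closeness to the surface."""
    total = r_sum = c_sum = 0.0
    peak = 0.0
    for r, row in enumerate(grid):
        for c, v in enumerate(row):
            if v >= threshold:
                total += v
                r_sum += r * v
                c_sum += c * v
                peak = max(peak, v)
    if total == 0:
        return None                              # no hand detected
    position = (r_sum / total, c_sum / total)    # sub-cell position via interpolation
    closeness = peak                             # near 1.0 ~ touching, smaller ~ hovering higher
    return position, closeness

print(hand_estimate(readings))
```

A real system would segment several such blobs to track multiple hands and calibrate readings against physical distance; the point is simply that position and hover height both fall out of the same grid of values, with no camera and hence no lighting or occlusion problems.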