November 30, 2015

Welcome to the November 30, 2015 edition of ACM TechNews, providing timely information for IT
professionals three times a week.
Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets
and for iPhones and iPads.
HEADLINES AT A GLANCE
Teaching Machines How to Learn
Microsoft Is Teaching Computers to See Like People
Smile, Frown, Grimace, and Grin--Your Facial Expression Is the Next Frontier in Big Data
Why Ball Tracking Works for Tennis and Cricket but Not Soccer or Basketball
Computer Scientists Achieve Breakthrough in Pheromone-Based Swarm Communications in Robots
Computers Learn to Create Photos of Bedrooms and Faces on Demand
No Lens? No Problem for FlatCam
Email Security Improving, but Far From Perfect
Strategy Based on Human Reflexes May Keep Legged Robots, Prosthetic Legs From Tripping
UCLA Computer Science Class Integrates Virtual World Into Reality
Seeking Data Wisdom
Stanford Students Put Computer Science Skills to Social Good
Teaching Machines How to Learn
ETH Zurich (11/30/15)
ETH Zurich and the Max Planck Society are collaborating on the creation of a new center that will
study the theoretical principles of learning and apply them to robots and software. As robots and
autonomous systems begin to operate in areas likely to present them with novel challenges, it is
important that, like humans, they have the ability to learn how to cope with them. The new Max
Planck ETH Center for Learning Systems was conceived as a place in which
the next generation of scientists can be trained in the field of robotic learning and where collaborative
research in the field can be carried out using a shared infrastructure. "We want to achieve a
fundamental understanding of how people perceive, learn, and then react appropriately to situations,"
says Thomas Hofmann, a professor at ETH Zurich's Institute for Machine Learning and co-head of the
new center. "If we can develop a better understanding of how learned aspects can be transferred
between different tasks, we may be able to create artificial systems that learn in a similar way to
living beings," says Bernhard Schölkopf, director of the Max Planck Institute for Intelligent Systems in
Tübingen and co-head of the new center.
Microsoft Is Teaching Computers to See Like People
eWeek (11/28/15) Pedro Hernandez
Researchers from Microsoft and Carnegie Mellon University have combined computer vision, deep
learning, and language understanding into a system that can analyze images and answer questions in
the same manner as humans, according to Microsoft's Athima Chansanchai. The resulting model
"applies multi-step reasoning to answer questions about pictures," Chansanchai says. The
image-analysis system is based on earlier work by Microsoft on automatic photo-captioning technologies,
which "helps train the computer to understand the image the way a person would," says Chansanchai.
She notes the new system uses deep neural networks to absorb information as a human's eyes
and brain would, studying a scene's action and the relationships between multiple visual objects.
Chansanchai says researchers from Microsoft's Deep Learning Technology Center are imbuing the
system with attentional ability, and enabling it to concentrate on visual cues and infer answers
progressively to solve problems. Microsoft hopes the technology will lead to systems capable of
predicting human needs and providing real-time recommendations. The company also says the
development of systems that answer questions based on visual input is essential to creating artificial
intelligence tools.
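The attention mechanism described here can be illustrated with a short sketch. The following Python code is a hypothetical toy, not Microsoft's system (all dimensions and data are invented): it scores image-region features against a question vector, normalizes the scores into attention weights, pools the regions accordingly, and repeats the step so that a second "hop" of reasoning is informed by the first.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_hop(regions, query):
    """One attention step: weight image regions by relevance to the query."""
    scores = regions @ query      # dot-product relevance per region
    weights = softmax(scores)     # normalize scores to attention weights
    return weights @ regions      # weighted sum = attended feature

# Toy data: 5 image regions and a question, each a 16-d feature vector.
rng = np.random.default_rng(0)
regions = rng.normal(size=(5, 16))
question = rng.normal(size=16)

# Two reasoning "hops": the second query is refined by the first answer.
attended = attention_hop(regions, question)
answer_feature = attention_hop(regions, question + attended)

In a real system the region features would come from a convolutional network and the final feature would feed a classifier over candidate answers; the progressive narrowing of attention is what supports multi-step reasoning.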
Smile, Frown, Grimace, and Grin--Your Facial Expression Is the Next Frontier in Big Data
Smithsonian (12/15) Jerry Adler
Affectiva co-founder Rana el Kaliouby sees the use of computers to detect and interpret human facial
expressions as the next logical step in the progression from keyboard to mouse to touchscreen to
voice recognition. The field of "affective computing" seeks to close the communication gap between
human beings and machines by adding a new mode of interaction, including the nonverbal language of
smiles, smirks, and raised eyebrows, according to el Kaliouby. She notes emotions can guide or inform
our rational thinking, but they are missing from the digital experience. "Your smartphone knows who
you are and where you are, but it doesn't know how you feel," el Kaliouby says. She believes devices
could control a car or things in the home such as lighting, temperature, and music more effectively if
they know how humans feel. The core customers of Affectiva have been advertising, marketing, and
media companies, but el Kaliouby believes the company's technology will be a boon to healthcare
when it comes to getting feedback from patients on drug testing or treatment programs.
Why Ball Tracking Works for Tennis and Cricket but Not Soccer or Basketball
Technology Review (11/26/15)
Tracking balls in some sports--such as basketball, volleyball, and soccer--is significantly harder for
machine-vision algorithms than it is for other sports. Swiss Federal Institute of Technology in
Lausanne scientist Andrii Maksai and colleagues have outlined a new means for tracking balls that
improves on existing approaches. Existing trackers typically adopt one of two strategies. In the first,
ball movement is followed in three dimensions (3D) to predict likely future trajectories, which are
narrowed down as more data becomes available; this method tends to fail when the ball is hidden
or when players handle it in unforeseen ways. The second tracks the players, infers when each has
the ball, and assumes the ball moves with its possessor or passes between players; without
physics-based limits on ball movement, it can generate imprecise tracks. "We explicitly model the interaction
between the ball and the players as well as the physical constraints the ball obeys when far away from
the players," says Maksai's research team. They have assessed the algorithm on video sequences of
volleyball, soccer, and basketball games recorded on multiple cameras at different angles to produce a
3D model.
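The physics-based constraints mentioned in the quote can be made concrete. The Python sketch below illustrates the general idea rather than the team's actual algorithm: in free flight a ball's acceleration must equal gravity, so a candidate 3D track can be scored by how far its second finite differences deviate from a pure ballistic trajectory.

import numpy as np

G = np.array([0.0, 0.0, -9.81])   # gravity (m/s^2), z is up

def ballistic_residual(track, dt):
    """Mean deviation of a 3D track from free-flight physics.

    track: (N, 3) array of ball positions sampled every dt seconds.
    Under ballistic motion the second finite difference of position
    equals G * dt**2, so a true free flight has residual ~0.
    """
    accel = track[2:] - 2 * track[1:-1] + track[:-2]
    return np.linalg.norm(accel - G * dt**2, axis=1).mean()

dt = 0.04
t = np.arange(10) * dt
# A thrown ball: linear in x/y, parabolic in z. Residual is ~0.
good = np.stack([3 * t, 1 * t, 2 + 5 * t - 0.5 * 9.81 * t**2], axis=1)
print(ballistic_residual(good, dt))

# A track that jumps sideways mid-flight scores clearly worse.
bad = good.copy()
bad[5:] += np.array([1.0, 0.0, 0.0])
print(ballistic_residual(bad, dt))

A tracker can use such a score to discard candidate trajectories that violate physics whenever no player is close enough to be in possession.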
Computer Scientists Achieve Breakthrough in Pheromone-Based Swarm Communications in
Robots
University of Lincoln (11/26/15) Elizabeth Allen
University of Lincoln researchers have developed a system that can replicate in robots all aspects of
the pheromone-based communication found in insect swarms. Called COS-phi (Communication
System via Pheromone), the system includes a low-cost open hardware micro robot and an open
source localization system, which tracks the robots' trajectories and releases artificial pheromones.
The researchers say the system is reliable and accurate. When using the system, the team's micro
robots were able to follow the leader, or pheromone distributor, without any explicit direction or
communication. "The system means that we can produce precise and high-resolution trails, control the
diffusion, evaporation, and density of the pheromones, and encode individual pheromones using
different colors," says Ph.D. researcher Farshad Arvin. The team has made the system available to
robotics and artificial intelligence researchers. Research in swarm robotics has had applications in
vehicle-collision sensors, surveillance technology, and video-game programming. "Nature is one of the
best sources of inspiration for solutions to different problems in different domains, and this is why
swarm robotics has developed into such an important area of study," Arvin says.
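The pheromone mechanics Arvin lists (deposition, diffusion, evaporation, density) map naturally onto a discrete 2D field updated each time step. The Python class below is a generic sketch of those mechanics under that assumption, not the COS-phi implementation; the parameter values are invented.

import numpy as np

class PheromoneGrid:
    """Toy artificial-pheromone field with evaporation and diffusion."""

    def __init__(self, size=64, evaporation=0.02, diffusion=0.1):
        self.field = np.zeros((size, size))
        self.evaporation = evaporation
        self.diffusion = diffusion

    def deposit(self, x, y, amount=1.0):
        """A leader robot lays pheromone at its current cell."""
        self.field[y, x] += amount

    def step(self):
        f = self.field
        # Diffusion: blend each cell with the mean of its 4 neighbors
        # (the grid wraps at the edges for simplicity).
        neighbors = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                     np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0
        f = (1 - self.diffusion) * f + self.diffusion * neighbors
        # Evaporation: decay the whole field toward zero.
        self.field = (1 - self.evaporation) * f

    def best_move(self, x, y):
        """A follower robot greedily moves toward the strongest neighbor."""
        h, w = self.field.shape
        moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
        return max(moves, key=lambda m: self.field[(y + m[1]) % h,
                                                   (x + m[0]) % w])

Tuning the evaporation and diffusion rates changes how long and how wide trails persist, which is the kind of control the Lincoln system exposes.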
Computers Learn to Create Photos of Bedrooms and Faces on Demand
New Scientist (11/25/15) Jacob Aron
The capabilities of artificial neural networks are increasingly impressive, particularly when it comes to
learning how to correctly identify objects. However, due to the opaque way such systems operate,
researchers have a relatively weak understanding of how artificial neural networks do what they do.
Recent efforts to investigate the processes artificial neural networks use to identify objects have
yielded, among other things, Google's DeepDream project. Now researchers at Facebook and Boston-based machine learning firm indico are trying to determine how artificial neural networks work by
asking them to "imagine" pictures. The focus of their effort is a type of artificial neural network called
a generative adversarial network, in which one part of the system tries to create fake data that fools
the other part into believing it is real training data. The idea is that by pitting the network against itself, it will
learn to produce better images. The team asked the network to create images of bedrooms and faces
and, by tweaking their requests, they were able to see how the network was developing concepts
for different elements of a scene, such as a TV or a window, and how they relate to one another.
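The adversarial arrangement is compact enough to sketch in code. The PyTorch toy below is not the Facebook/indico model: a small generator G maps noise to samples, a discriminator D is trained to label real data 1 and generated data 0, and G is trained to make D output 1 on its fakes, which is what pushes G toward more convincing samples.

import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
loss = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

def real_batch(n=128):
    # Stand-in "training data": a Gaussian blob centered at (2, -1).
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])

for step in range(1000):
    # Discriminator step: real data labeled 1, generated data labeled 0.
    real = real_batch()
    fake = G(torch.randn(128, 16)).detach()
    d_loss = (loss(D(real), torch.ones(128, 1)) +
              loss(D(fake), torch.zeros(128, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make D believe the fakes are real.
    fake = G(torch.randn(128, 16))
    g_loss = loss(D(fake), torch.ones(128, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

Replacing the toy blob with image batches and the two small networks with convolutional ones yields the kind of generative model used to produce the bedroom and face images.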
No Lens? No Problem for FlatCam
Rice University (11/23/15) David Ruth; Mike Williams
Rice University researchers Richard Baraniuk and Ashok Veeraraghavan have developed FlatCam, a
thin sensor chip with a mask that replaces lenses in a traditional camera. FlatCam is equipped with
algorithms that process what the sensor detects and convert the measurements into images and
videos. Veeraraghavan says FlatCams can be fabricated like microchips, with the precision, speed,
and cost reductions that microchip fabrication affords. The design separates a camera's thickness
from its light-gathering sensor area: "Our design decouples the two parameters, providing the ability
to utilize the enhanced light-collection abilities of large sensors with a really thin device,"
Veeraraghavan notes. The researchers say FlatCams could be applied to security or disaster-relief
applications, as well as to flexible, foldable, wearable, and disposable cameras. The hand-built
prototypes use off-the-shelf sensors and produce 512-by-512-pixel images in seconds, but the researchers
think the resolution will improve as more advanced manufacturing techniques and reconstruction
algorithms are developed. "Smartphones already feature pretty powerful computers, so we can easily
imagine computing at least a low-resolution preview in real time," says Carnegie Mellon University
professor Aswin Sankaranarayanan.
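The reconstruction step can be illustrated with the standard linear model of mask-based lensless imaging. This is a simplified stand-in rather than Rice's algorithm (FlatCam's actual design exploits a separable mask for speed, which this sketch does not): each sensor reading is a known linear mixture of scene pixels, y = A x, so a small scene can be recovered by regularized least squares.

import numpy as np

rng = np.random.default_rng(1)
n = 16 * 16                                         # tiny 16x16 scene
A = rng.binomial(1, 0.5, (2 * n, n)).astype(float)  # calibrated mask mixing
x = rng.random(n)                                   # unknown scene
y = A @ x + 0.01 * rng.normal(size=2 * n)           # noisy sensor readings

# Ridge-regularized least-squares reconstruction.
lam = 1e-2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
print(np.corrcoef(x, x_hat)[0, 1])                  # correlation close to 1

In the real device the mixing matrix comes from calibration, and the mask's structure keeps the equivalent solve tractable at full sensor resolution.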
Email Security Improving, but Far From Perfect
University of Illinois at Urbana-Champaign (11/18/15) August Schiess
Email security has improved significantly in the past two years, but widespread issues remain,
according to a report from University of Illinois at Urbana-Champaign professor Michael Bailey in
collaboration with colleagues at the University of Michigan and Google. The report notes networking
protocols that underlie the Internet were not originally built to be secure, and security protocols were
"bolted on" to the existing systems years later. Such measures are available to address security
issues, but each individual server still chooses whether to adopt the protocols, Bailey and
colleagues found. The study also determined that companies such as Google now use these
protocols, which has helped boost email security in recent years, but that many other servers do not. The
researchers measured the adoption of email security protocols at scale and also highlighted some of
the implications of "bolted-on security." For example, the STARTTLS command is vulnerable to an
attack that would force email exchanges to continue without encryption, the researchers note. "We
found that there's a significant number of email exchanges in which there's an adversary between two
mail servers who's trying to intentionally downgrade the communication," Bailey says.
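The downgrade is possible because STARTTLS is negotiated in cleartext: an attacker in the path can strip the capability from the server's EHLO response, and a client that does not insist on encryption simply continues unprotected. The Python sketch below (the hostname is hypothetical) uses the standard smtplib module to check whether a server offers STARTTLS; a careful client should refuse to send mail when it does not.

import smtplib

def check_starttls(host, port=25, timeout=10):
    """Return True if the SMTP server offers and completes STARTTLS."""
    with smtplib.SMTP(host, port, timeout=timeout) as smtp:
        smtp.ehlo()
        if not smtp.has_extn("starttls"):
            # Either the server lacks TLS or an attacker stripped the
            # capability; in both cases continuing would be cleartext.
            return False
        smtp.starttls()   # upgrade the connection to TLS
        smtp.ehlo()       # re-issue EHLO over the encrypted channel
        return True

# Example (hypothetical host):
# print(check_starttls("mail.example.com"))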
Strategy Based on Human Reflexes May Keep Legged Robots, Prosthetic Legs From Tripping
Carnegie Mellon News (PA) (11/18/15) Byron Spice
Carnegie Mellon University (CMU) researchers have developed a robotic leg prosthesis that could help
users recover their balance by using techniques based on the way human legs are controlled. CMU
professor Hartmut Geyer says the control strategy was devised by studying human reflexes and other
neuromuscular control systems, and it shows promise in simulation and laboratory testing, producing
stable walking gaits over uneven terrain and better recovery from trips and shoves. The technology
will be further developed and tested over the next three years thanks to a $900,000 U.S. National
Science Foundation grant. "Our work is motivated by the idea that if we understand how humans
control their limbs, we can use those principles to control robotic limbs," Geyer says. He thinks the
research also could be applied to legged robots. The researchers evaluated the neuromuscular model
by using computer simulations and a cable-driven device about half the size of a human leg. They
found the neuromuscular control method can reproduce normal walking patterns and effectively
responds to disturbances as the leg begins to swing forward. "Robotic prosthetics is an emerging field
that provides an opportunity to address these problems with new prosthetic designs and control
strategies," Geyer says.
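For flavor, a toy version of a trip reflex can be written as a few rules. The Python function below illustrates the general reflex-control idea, not CMU's neuromuscular model (the thresholds and joint angles are invented): human studies show an early-swing trip triggers an elevating response that lifts the leg over the obstacle, while a late-swing trip triggers a lowering response that plants the foot quickly.

def stumble_reflex(swing_phase, contact_force, threshold=5.0):
    """Toy trip reflex for a swing leg.

    swing_phase: progress through the swing, 0.0 (start) to 1.0 (end).
    contact_force: unexpected force on the foot, in newtons.
    Returns extra joint-angle commands in radians.
    """
    if contact_force < threshold:
        return {"hip_flex": 0.0, "knee_flex": 0.0}   # no trip detected
    if swing_phase < 0.5:
        # Elevating strategy: flex hip and knee to clear the obstacle.
        return {"hip_flex": 0.3, "knee_flex": 0.5}
    # Lowering strategy: extend the leg and plant the foot early.
    return {"hip_flex": -0.2, "knee_flex": -0.1}

A real controller would drive muscle-tendon models rather than joint offsets, but the structure (sensed disturbance in, corrective motor command out) is the same.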
UCLA Computer Science Class Integrates Virtual World Into Reality
Daily Bruin (11/18/15) Nate Nickolai
Diana Ford, a lecturer in the University of California, Los Angeles (UCLA) computer science
department, wants to develop gaming as a subfield of graphics at UCLA through her courses on virtual
reality (VR) and artificial intelligence (AI). Her VR class currently is using gesture tracking to create its
own VR games, and Ford's spring class presented games at the ACM SIGGRAPH conference in August.
Ford says the interactions between players and AI gave rise to pre-patterns of coding, and developers
around the world can use the pre-patterns to build games on an already developed base of code.
Ford says her spring class used Oculus Rift headsets and Unreal Engine 4, a collection of game-development tools, to generate three-dimensional worlds within which the students coded original
games. The games focused on interactions between players and AI, which students
produced using original programming methods in a variety of games. The pre-patterns produced by
the spring class reduce coding time and enable game creators to focus on adding more aspects to VR
games. Ford also presented the pre-patterns at the SIGGRAPH conference.
Seeking Data Wisdom
Berkeley Research (11/17/15) Wallace Ravven
Data wisdom is needed to make discoveries and ensure the significance of the results of data-intensive
research, says Bin Yu, a statistician and data scientist at the University of California, Berkeley. She
describes the best of applied statistics as an essentially "soft lens" when working with large amounts
of data, similar to a powerful telescope or precision gene microarray. Yu participated in Berkeley's
"mind-reading" project in 2011, in which researchers used a type of magnetic resonance imaging
(MRI) to indirectly detect neuron firing at precise locations in the brain's visual processing area, and
then determined the rough outlines of what experimental subjects were seeing in movie clips. Yu's
team analyzed a torrent of functional MRI data to identify from thousands of movie clips the 100
frames that most likely matched a given voxel activity pattern, and then "averaged" these shapes to
yield the outline of what subjects were seeing. Yu says only a powerful interlocking of science,
computation, and statistics made this possible. "In computational neuroscience, it is important to
gauge how much variation there is in the signals--in this case, how much of this variation is due to the
movies and how much is due to 'noise,'" she says.
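The matching-and-averaging step Yu describes can be sketched with synthetic data. The Python code below illustrates the idea rather than reproducing the project's methods: score every candidate clip by the correlation between its predicted voxel pattern and the observed pattern, keep the 100 best matches, and average their frames into a reconstruction.

import numpy as np

rng = np.random.default_rng(2)
n_clips, n_voxels = 5000, 200
predicted = rng.normal(size=(n_clips, n_voxels))   # predicted voxel patterns
observed = predicted[42] + 0.5 * rng.normal(size=n_voxels)  # measured pattern

def zscore(a):
    return (a - a.mean(axis=-1, keepdims=True)) / a.std(axis=-1, keepdims=True)

# Pearson correlation of each clip's prediction with the observation.
corr = zscore(predicted) @ zscore(observed) / n_voxels
top100 = np.argsort(corr)[-100:]          # indices of best-matching clips

frames = rng.random((n_clips, 8, 8))      # stand-in for the clips' frames
reconstruction = frames[top100].mean(axis=0)

Gauging how much of the voxel signal varies with the movies and how much is noise, as Yu notes, is what determines whether such correlations are meaningful.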
Stanford Students Put Computer Science Skills to Social Good
Stanford Report (11/19/15) Bethany Augliere
When Stanford University computer science (CS) undergraduate Lawrence Lin Murata heard a lecturer
say last fall that no campus organizations existed that used CS to make a positive social impact, he
wondered if he could change that. A year later, Murata has created CS+Social Good with the help of
three other Stanford students and Keith Schwarz, their faculty sponsor. The group is focused on giving
students opportunities to explore and practice their CS skills in the context of doing social good. It
hosts speaking events and this semester organized a class that focuses on bringing students
together with nonprofit partners. Murata says the projects being worked on by the students in the
Using Web Technologies to Change the World class "will reach over 25 million people by the end of the
year." The projects include a group of students working with the government of Delhi, India, to create
a website to track the progress of government programs, and a partnership with nonprofit group
SIRUM to help connect institutions with surplus medication to safety-net clinics serving poor and
uninsured populations. CS+Social Good also plans to launch a new project this winter that will help
four student teams identify and develop technological solutions for problems in areas such as
healthcare and education.