http://www.wired.com/2015/08/ibms-rodent-brain-chip-make-phones-hyper-smart/

IBM’s ‘Rodent Brain’ Chip Could Make Our Phones Hyper-Smart

At a lab near San Jose, IBM has built the digital equivalent of a rodent brain, roughly speaking. It spans 48 of the company’s experimental TrueNorth chips, a new breed of processor that mimics the brain’s biological building blocks.

Dharmendra Modha walks me to the front of the room so I can see it up close. About the size of a bathroom medicine cabinet, it rests on a table against the wall, and thanks to the translucent plastic on the outside, I can see the computer chips and the circuit boards and the multi-colored lights on the inside. It looks like a prop from a ’70s sci-fi movie, but Modha describes it differently. “You’re looking at a small rodent,” he says.

He means the brain of a small rodent—or, at least, the digital equivalent. The chips on the inside are designed to behave like neurons—the basic building blocks of biological brains. Modha says the system in front of us spans 48 million of these artificial nerve cells, roughly the number of neurons packed into the head of a rodent.

Modha oversees the cognitive computing group at IBM, the company that created these “neuromorphic” chips. For the first time, he and his team are sharing their unusual creations with the outside world, running a three-week “boot camp” for academics and government researchers at an IBM R&D lab on the far side of Silicon Valley. *** We had several of our colleagues attend this boot camp and will provide an update in a future QuEST meeting. *** Plugging their laptops into the digital rodent brain at the front of the room, this eclectic group of computer scientists is exploring the particulars of IBM’s architecture and beginning to build software for the chip dubbed TrueNorth.

“We want to get as close to the brain as possible while maintaining flexibility,” says Modha.

Some researchers who got their hands on the chip at an engineering workshop in Colorado the previous month have already fashioned software that can identify images, recognize spoken words, and understand natural language. Basically, they’re using the chip to run “deep learning” algorithms, the same algorithms that drive the internet’s latest AI services, including the face recognition on Facebook and the instant language translation on Microsoft’s Skype. But the promise is that IBM’s chip can run these algorithms in smaller spaces with considerably less electrical power, letting us shoehorn more AI onto phones and other tiny devices, including hearing aids and, well, wristwatches.

“What does a neuro-synaptic architecture give us? It lets us do things like image classification at a very, very low power consumption,” says Brian Van Essen, a computer scientist at the Lawrence Livermore National Laboratory who’s exploring how deep learning could be applied to national security. “It lets us tackle new problems in new environments.”

The TrueNorth is part of a widespread movement to refine the hardware that drives deep learning and other AI services. Companies like Google and Facebook and Microsoft are now running their algorithms on machines backed with GPUs (chips originally built to render computer graphics), and they’re moving towards FPGAs (chips you can program for particular tasks).
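TrueNorth’s artificial neurons are spiking neurons: each one accumulates incoming signals over time and emits a pulse only when its potential crosses a threshold, so the hardware can stay largely idle between events, which is where the power savings come from. As a rough illustration (this is not IBM’s implementation, and every parameter value below is invented), the textbook leaky integrate-and-fire model that this class of hardware approximates can be sketched in a few lines of Python:

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Toy leaky integrate-and-fire neuron (illustrative parameters only).

    input_current: 1-D array of input drive at each time step.
    Returns the membrane-potential trace and the spike-time indices.
    """
    v = v_reset
    trace, spikes = [], []
    for t, drive in enumerate(input_current):
        # The potential leaks back toward rest while integrating input.
        v += dt * (-(v - v_reset) / tau + drive)
        if v >= v_thresh:       # threshold crossed: fire a spike...
            spikes.append(t)
            v = v_reset         # ...and reset
        trace.append(v)
    return np.array(trace), spikes

# A brief pulse of input drives the neuron to fire a few times.
current = np.zeros(100)
current[20:60] = 0.08
_, spike_times = lif_neuron(current)
print("spikes at steps:", spike_times)
```

The point of the sketch is the event-driven character: the output is a sparse train of spikes rather than a dense stream of numbers, and a chip built around millions of such units does work only when spikes occur.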
For Peter Diehl, a PhD student in the cortical computation group at ETH Zurich and the University of Zurich, TrueNorth outperforms GPUs and FPGAs in certain situations because it consumes so little power. …

http://www.technologyreview.com/news/540751/startup-aims-to-beat-google-to-market-with-selfdriving-golf-cart/

Startup Aims to Beat Google to Market with Self-Driving Golf Cart

The startup Auro says its self-driving golf cart will lead to autonomous shuttles for theme parks, vacation resorts, and retirement communities.

By Tom Simonite on August 20, 2015

Why It Matters: Even if limited to private roads, autonomous vehicles could significantly improve many people’s quality of life.

This golf cart has been modified with sensors and other equipment so that it can drive itself.

Spend enough time on the roads of Mountain View, California, and you might spot one of Google’s prototype self-driving cars. Visit the campus of Santa Clara University a few miles away, and you can see a self-driving golf cart that Nalin Gupta says will shake up everyday transport sooner.

Google and automakers pursuing autonomous vehicles are bent on seeing them take to public roads (see “Proceed With Caution Towards the Self-Driving Car”). Gupta’s company Auro Robotics is focused on the more modest goal of seeing slower, less showy autonomous vehicles ferry people around the private grounds of universities, retirement communities, and resorts. “We are closer to deploying our shuttles in the market,” says Gupta. “It’s technologically much easier.”

Like Google’s cars, Auro’s vehicles require a detailed 3-D map of the environment where they operate. Collecting that data for a private campus and keeping it up-to-date is easier, says Gupta. Such environments are also less physically complex, have lower speed limits, and present fewer complicated traffic situations, he says.

Organizations such as universities and theme parks are generally free to operate autonomous vehicles on their property without regulation. Although U.S. states including California and Nevada have passed laws that enable testing of autonomous cars on public roads, the legal and insurance frameworks needed for such vehicles to enter general circulation are lacking.

Auro’s current prototypes are golf carts modified with laser scanners, radar, cameras, GPS, computers, and other components needed to steer themselves. One is already being tested on the grounds of Santa Clara University. Gupta says he has signed agreements to begin similar tests at other universities, as well as at a retirement community and a resort in the Bay Area, later in the year. …
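Part of what makes the private-campus setting “technologically much easier” is that a detailed prior map does much of the work at run time. As a toy sketch (this is not Auro’s software; the grid, the fence, and the route-checking helper are all invented for illustration), validating a planned shuttle route against a prebuilt occupancy grid reduces to simple lookups:

```python
import numpy as np

# Toy prior map of a campus: 0 = free, 1 = obstacle.
# A real system would build and refresh such a map from
# laser-scanner, radar, and camera data.
campus_grid = np.zeros((100, 100), dtype=np.uint8)
campus_grid[40:60, 30] = 1  # a fence segment, for the example

def route_is_clear(grid, waypoints, margin=1):
    """Return True if every (row, col) waypoint, plus a small safety
    margin around it, is obstacle-free in the prior map."""
    for r, c in waypoints:
        window = grid[max(r - margin, 0): r + margin + 1,
                      max(c - margin, 0): c + margin + 1]
        if window.any():
            return False
    return True

route = [(50, 10), (50, 20), (50, 30), (50, 40)]
print(route_is_clear(campus_grid, route))  # False: the fence blocks it
```

On public roads the map alone is never enough, since the environment changes constantly; on a slow, bounded campus, keeping this prior accurate, and falling back to live sensors for pedestrians and other surprises, is a far smaller problem.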
http://www.technologyreview.com/news/540156/team-designs-robots-to-build-things-in-messyunpredictable-situations/

Team Designs Robots to Build Things in Messy, Unpredictable Situations

Researchers have developed simple robots that can build structures with malleable materials such as foam and sandbags.

By Julia Sklar on August 20, 2015

Why It Matters: Robots are often limited because they can’t handle malleable materials or work in unpredictable environments.

One of Nagpal and Napp’s robots has no top or bottom. It can keep working even after falling and flipping over.

Researchers at Harvard University and SUNY at Buffalo are designing robots to function outside of ideal, predictable environments such as warehouses or factories, and instead to work in places where there may be unexpected obstructions and where predictive algorithms can’t be used to plan several thousand steps ahead. The goal is for such “builder bots,” which are designed to handle inconsistent and malleable building materials, to be deployed as disaster-relief agents.

Radhika Nagpal, a professor of computer science at Harvard, and Nils Napp, an assistant professor of computer science at SUNY at Buffalo and a former postdoctoral fellow in Nagpal’s lab, have designed two robots: one that deposits expandable, self-hardening foam and another that drags and piles up sandbags.

Robots built for construction can usually handle only discrete materials, such as blocks or bricks. The materials these new robots build with are useful in a range of real-world environments, but they are highly unpredictable. The foam can stick to most surfaces and expand to fill holes, but it starts off as a liquid, so it’s impossible to know exactly how far it will run before it hardens; sandbags are frequently used in disaster relief as retaining walls, but the granules inside them tend to shift around when manipulated.

To combat this unpredictability, Nagpal and Napp’s robots are equipped with an infrared sensor that scans and assesses the environment between each deposit of building material. That scan is integral to making the bots so adaptable. …
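The scan-then-deposit cycle suggests a simple control structure: instead of planning thousands of steps ahead, the robot re-senses after every action and puts the next blob of material wherever the measured structure falls furthest short of the target. The sketch below is a cartoon of that idea, not Nagpal and Napp’s algorithm; the one-dimensional world, the random “slumping” of deposited material, and all parameters are invented:

```python
import numpy as np

def build_adaptively(target, scan, deposit, max_steps=500, tol=0.1):
    """Greedy sense-act building loop (illustrative sketch).

    target:  desired height profile, shape (n,)
    scan:    callable returning current measured heights, shape (n,)
    deposit: callable taking a cell index; adds material there
    """
    for _ in range(max_steps):
        heights = scan()                  # re-sense between every deposit
        deficit = target - heights
        if deficit.max() < tol:           # close enough everywhere: done
            return True
        deposit(int(np.argmax(deficit)))  # fill the largest gap next
    return False

# Toy world: each blob of "foam" slumps sideways by a random amount,
# standing in for the unpredictability of the real material.
rng = np.random.default_rng(0)
world = np.zeros(20)

def scan():
    return world.copy()

def deposit(i):
    spread = rng.uniform(0.0, 0.5)
    world[i] += 0.5
    if i > 0:
        world[i - 1] += spread
    if i < len(world) - 1:
        world[i + 1] += spread

target = np.linspace(0.0, 3.0, 20)        # build a ramp
print("reached target:", build_adaptively(target, scan, deposit))
```

Because the loop never trusts a prediction of where the material will end up, it still converges on the target shape even though every deposit lands differently, which is exactly the property that matters when building with foam or sand in a disaster zone.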