File - Wildcat Freshmen English

The New Eyes of Surveillance: Artificial Intelligence and
Humanizing Technology
Typically, when we hear the term “Artificial Intelligence,” images of aliens, spaceships landing on
Earth and Will Smith come to mind. While not exactly the extraterrestrial scene we may envision, Artificial
Intelligence, or AI, is bringing human intelligence to everyday technologies. We are now able to form a
relationship with our technology, teaching it about our behaviors and using it to improve how our businesses and
communities operate.
Consider all the ways AI makes our lives easier…
We are already accustomed to Amazon’s anticipatory shipping practices, where the company identifies items
we may want to buy before we even begin our search, and Netflix is aptly curating movie recommendations in
advance of any decisions we make. AI is transforming how we operate and rely on technology, enabling
humans to work more efficiently and effectively than ever before, making our jobs simpler, our efforts more
calculated and our outputs more accurate. Whether technology is simplifying our everyday experiences or
predicting what we will want next, it is bringing a deeply personal experience to us all.
But how is Artificial Intelligence impacting our personal security and the way we keep our organizations
safe? Enter the new wave of security, where AI meets traditional surveillance practices: intelligent video
analytics.
While some traditional security measures in place today do have a significant impact in decreasing
crime and preventing theft, video analytics now gives security officers a technological edge that no surveillance
camera alone can provide.
Surveillance systems that include video analytics analyze video footage in real-time and detect abnormal
activities that could pose a threat to an organization’s security. Essentially, video analytics technology helps
security software “learn” what is normal so it can identify unusual, and potentially harmful, behavior that a
human alone may miss.
It does this in two ways: first, by observing objects in a monitored environment and detecting when humans and
vehicles are present; and second, by taking operator feedback about the accuracy of various events and
incorporating this intelligence into the system itself, thus improving its functionality. This interaction between
operator and technology results in a “teachable” system: Artificial Intelligence at its best in the realm of security,
where ultimately human oversight takes a backseat to the fine-tuned capabilities of intelligent video analytics.
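To make that feedback loop concrete, the short Python sketch below shows one very simplified way such a “teachable” alerting threshold could behave; the class, method and parameter names are invented for illustration only and do not describe any vendor’s actual software.

# Purely illustrative sketch (hypothetical names, not any real product's code):
# detections above a confidence threshold are escalated to an operator, and the
# operator's feedback nudges that threshold up or down over time.

class TeachableAnalytics:
    def __init__(self, threshold=0.6, learning_rate=0.05):
        self.threshold = threshold          # minimum confidence needed to raise an alert
        self.learning_rate = learning_rate  # how strongly each piece of feedback adjusts it

    def should_alert(self, label, confidence):
        """Escalate only detections of people or vehicles above the current threshold."""
        return label in ("person", "vehicle") and confidence >= self.threshold

    def record_feedback(self, was_real_threat):
        """Operator marks an alert as genuine or a false alarm; adjust sensitivity."""
        if was_real_threat:
            # Genuine events suggest the system is being too strict: lower the bar.
            self.threshold = max(0.1, self.threshold - self.learning_rate)
        else:
            # False alarms suggest the system is too permissive: raise the bar.
            self.threshold = min(0.95, self.threshold + self.learning_rate)

# Example: a borderline sighting is ignored until operator feedback lowers the bar.
system = TeachableAnalytics()
print(system.should_alert("person", 0.55))   # False: below the initial 0.6 threshold
system.record_feedback(was_real_threat=True)
system.record_feedback(was_real_threat=True)
print(system.should_alert("person", 0.55))   # True: the threshold has adapted to 0.5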
Eliminating human error is a key driver behind bringing Artificial Intelligence to security through intelligent
video analytics. Studies have shown that humans engaged in mundane tasks have a directed attention capacity
of up to 20 minutes, after which attention begins to decrease. In addition, when humans are
faced with multiple items at one time, attention spans will decrease even more rapidly. Therefore, video
analytics are beginning to take the place of initial human judgment in an effort to increase operational
efficiency.
While a security officer might miss a person sneaking into a poorly lit facility, a camera backed with intelligent
video analytics is designed to catch a flash on the screen and recognize it as a potential threat. Or it will spot a
person loitering at the perimeter of a schoolyard and alert on-the-ground security officials to investigate and
take action if necessary, all without missing a beat while keeping close watch on the many cameras and locations.
Rather than depending solely on human monitoring, AI-powered systems notify security teams of potential
threats as they happen, helping businesses prevent break-ins or illegal activity, as well as increasing human
accuracy.
Artificial Intelligence helps people do their jobs better, thereby making our lives easier and our locations safer.
Whether securing our businesses, cities or homes, or providing more curated online shopping and entertainment
experiences, Artificial Intelligence is making technology more personal and purposeful than ever before.
Saptharishi, Mahesh. "The New Eyes of Surveillance: Artificial Intelligence and Humanizing Technology." Wired.com. Conde Nast Digital, 1 Aug. 2014. Web. 12 Mar. 2015.
Google develops computer program capable of learning tasks independently
‘Agent’ hailed as first step towards true AI as it becomes adept at playing 49 retro computer games and comes up
with its own winning strategies
Google scientists have developed the first computer program capable of learning a wide variety of tasks
independently, in what has been hailed as a significant step towards true artificial intelligence.
The same program, or “agent” as its creators call it, learnt to play 49 different retro computer games, and came
up with its own strategies for winning. In the future, the same approach could be used to power self-driving
cars and personal assistants in smartphones, or to conduct scientific research in fields from climate change to
cosmology.
The research was carried out by DeepMind, the British company bought by Google last year for £400m, whose
stated aim is to build “smart machines”.
Picture: The computer program, or “agent”, starts off playing random moves, but after 600 games works out the optimal strategy. Credit: Google DeepMind (with permission from Atari Interactive Inc.)
Demis Hassabis, the company’s founder, said: “This is the first
significant rung of the ladder towards proving a general
learning system can work. It can work on a challenging task
that even humans find difficult. It’s the very first baby step
towards that grander goal ... but an important one.”
The work is seen as a fundamental departure from previous
attempts to create AI, such as the program Deep Blue, which famously beat Garry Kasparov at chess in 1997, or
IBM’s Watson, which won the quiz show Jeopardy! in 2011.
In both these cases, computers were pre-programmed with the rules of the game and specific strategies and
overcame human performance through sheer number-crunching power.
“With Deep Blue, it was a team of programmers and grand masters that distilled the knowledge into a program,”
said Hassabis. “We’ve built algorithms that learn from the ground up.”
The DeepMind agent is simply given a raw input, in this case the pixels making up the display on Atari games,
and provided with a running score.
When the agent begins to play, it simply watches the frames of the game and makes random button presses to
see what happens. “A bit like a baby opening their eyes and seeing the world for the first time,” said Hassabis.
The agent uses a method called “deep learning” to turn the basic visual input into meaningful concepts,
mirroring the way the human brain takes raw sensory information and transforms it into a rich understanding of
the world. The agent is programmed to work out what is meaningful through “reinforcement learning”, the basic
notion that scoring points is good and losing them is bad.
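As a rough intuition for what “reinforcement learning” means here, the toy Python sketch below learns a tiny corridor “game” purely from trial and error and a running score. It is only an illustration of the general idea, not DeepMind’s deep-learning system, and every name and number in it is invented for the example.

# Toy illustration of learning by trial and error with a running score -- not
# DeepMind's method, just the basic reinforcement-learning idea it builds on:
# actions that lead to points gradually become more valuable than ones that don't.
import random

ACTIONS = ["left", "right"]
# One learned value per (position, action) pair in a 4-position corridor.
# Moving right off the end of the corridor scores a point and restarts at 0.
q_values = {(pos, a): 0.0 for pos in range(4) for a in ACTIONS}

def step(pos, action):
    if action == "right" and pos == 3:
        return 0, 1.0                       # scored a point: reset to the start
    new_pos = max(0, pos - 1) if action == "left" else pos + 1
    return new_pos, 0.0

pos = 0
for _ in range(10000):
    action = random.choice(ACTIONS)         # random "button presses" to see what happens
    new_pos, reward = step(pos, action)
    # Update: nudge the value toward the reward plus the discounted future value.
    best_next = max(q_values[(new_pos, a)] for a in ACTIONS)
    q_values[(pos, action)] += 0.1 * (reward + 0.9 * best_next - q_values[(pos, action)])
    pos = new_pos

# After training, the learned values prefer "right" at every position.
print({p: max(ACTIONS, key=lambda a: q_values[(p, a)]) for p in range(4)})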
Tim Behrens, a professor of cognitive neuroscience at University College London, said: “What they’ve done is
really impressive, there’s no question. They’ve got agents to learn concepts based on just rewards and
punishment. No one’s ever done that before.”
In videos provided by DeepMind, the agent is shown making random and largely unsuccessful movements at
the start, but after 600 rounds of training (two weeks of computer time) it has figured out what many of
the games are about.
In some cases, the agent came up with winning strategies that the researchers themselves had never considered,
such as tunnelling through the sides of the wall in Breakout or, in one submarine-based game, staying deeply
submerged at all times.
Vlad Mnih, one of the Google team behind the work, said: “It’s definitely fun to see computers discover things
you haven’t figured out yourself.”
Hassabis stops short of calling this a “creative step”, but said it proves computers can “figure things out for
themselves” in a way that is normally thought of as uniquely human. “One day machines will be capable of
some form of creativity, but we’re not there yet,” he said.
Behrens said that watching the agent learn leaves the impression that “there’s something human about it” –
probably because it is borrowing the concept of trial and error, one of the main methods by which humans learn.
The study, published in the journal Nature, showed that the agent performed at 75% of the level of a
professional games tester or better on half of the games tested, which ranged from side-scrolling shooters to
boxing to 3D car-racing. On some games, such as Space Invaders, Pong and Breakout, the algorithm
significantly outperformed humans, while on others it fared far worse.
The researchers said this was mostly because the algorithm, as yet, has no real memory, meaning that it is unable
to commit to long-term strategies that require planning. With some of the games, this meant the agent got stuck
in a rut, where it had learnt one basic way to score a few points, but never really grasped the game’s overall
objective. The team is now trying to build in a memory component to the system and apply it to more realistic
3D computer games.
Last year the American entrepreneur Elon Musk, one of DeepMind’s early investors, described AI as
humanity’s greatest existential threat. “Unless you have direct exposure to groups like DeepMind, you have no
idea how fast [AI] is growing,” he said. “The risk of something seriously dangerous happening is in the five
year timeframe. Ten years at most.”
However, the Google team played down the concerns. “We agree with him there are risks that need to be borne
in mind, but we’re decades away from any sort of technology that we need to worry about,” Hassabis said.
Devlin, Hannah. "Google Develops Computer Program Capable of Learning Tasks Independently." TheGuardian.com. Guardian News and Media Limited, 25 Feb. 2015. Web. 12 Mar. 2015.