Why Elon Musk is scared of artificial intelligence — and Terminators
By Justin Moyer November 18, 2014
Elon Musk — the futurist behind PayPal, Tesla and SpaceX — has been caught criticizing artificial intelligence again.
“The risk of something seriously dangerous happening is in the five year timeframe,” Musk wrote in a comment since
deleted from the Web site Edge.org, but confirmed to Re/Code by his representatives. “10 years at most.”
The very future of Earth, Musk said, was at risk.
“The leading AI companies have taken great steps to ensure safety,” he wrote. “They recognize the danger, but believe
that they can shape and control the digital super intelligences and prevent bad ones from escaping into the Internet.
That remains to be seen.”
Musk seemed to sense that these comments might sound a little weird coming from a Fortune 1000 chief executive officer.
“This is not a case of crying wolf about something I don’t understand,” he wrote. “I am not alone in thinking we should
be worried.”
Unfortunately, Musk didn’t explain how humanity might be compromised by “digital super intelligences,” “Terminator”-style. He never does. Yet Musk has been holding forth on and off about the apocalypse artificial intelligence might bring for much of the past year.
[Video: Elon Musk warns that artificial intelligence could be “our biggest existential threat.” Speaking at the MIT Aeronautics and Astronautics department’s Centennial Symposium in October 2014, the Tesla chief executive said there should be some regulatory oversight at the national and international level. (MIT Dept. of Aeronautics and Astronautics)]
In October: “With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the
pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.”
In August: “We need to be super careful with AI. Potentially more dangerous than nukes.”
In June: “In the movie ‘Terminator,’ they didn’t create AI to — they didn’t expect, you know, some sort of ‘Terminator’-like outcome. It is sort of like the ‘Monty Python’ thing: Nobody expects the Spanish Inquisition. It’s just — you know, but you have to be careful.”
Musk wouldn’t even condone a plan to move to another planet to escape AI. “The AI will chase us there pretty quickly,”
he said.
It gets weirder: Musk has invested in at least two artificial intelligence companies — one of which, DeepMind, he appeared to slight in his recently deleted post.
“Unless you have direct exposure to groups like Deepmind, you have no idea how fast — it is growing at a pace close to exponential,” Musk wrote.
DeepMind was acquired by Google in January. But it turns out Musk was just supporting AI companies to keep an eye on
them.
“It’s not from the standpoint of actually trying to make any investment return,” he said. “It’s purely I would just like to keep an eye on what’s going on with artificial intelligence.”
Musk, head of a company that is rarely, if ever, profitable, is a man with much invested in the future. This future includes some version of a self-driving car as well as private space travel. He has said he wants to “die on Mars.”
It bears asking: Why is this guy so scared of artificial intelligence? Isn’t that like Henry Ford being scared of the assembly
line?
Since Musk isn’t quite making his position clear, let’s articulate futurists’ core fear of artificial intelligence: that robots
will replace humans.
“Horses were initially complemented by carriages and ploughs, which greatly increased the horse’s productivity,” Oxford professor Nick Bostrom wrote in “Superintelligence: Paths, Dangers, Strategies,” a book released in September. “Later, horses were substituted for by automobiles and tractors. When horses became obsolete as a source of labor, many were sold off to meatpackers to be processed into dog food, bone meal, leather, and glue. In the United States, there were about 26 million horses in 1915. By the early 1950s, 2 million remained.”
If humans aren’t useful — or are only useful, as in “The Matrix,” as batteries — AI will have no
problem enslaving, imprisoning or liquidating us.
“If ethics can’t be built into a machine, then we’ll be creating super-intelligent psychopaths, creatures without moral
compasses, and we won’t be their masters for long,” James Barrat, author of “Our Final Invention: Artificial Intelligence
and the End of the Human Era,” said earlier this year.
One nightmare scenario: “gray goo,” the catchy name for what happens when machines designed to replicate themselves go haywire.
“Gray goo is what would happen if one of the auto-assemblers went haywire and the self-replication never stopped,” the New York Times explained in 2003. “… In just 10 hours an unchecked self-replicating auto-assembler would spawn 68 billion offspring; in less than two days the auto-assemblers would outweigh the earth.”
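Those figures are plain doubling arithmetic, and they check out if one assumes, as Eric Drexler’s oft-cited nanotechnology scenario does (the Times quote itself doesn’t say), that each assembler copies itself about every 1,000 seconds. Here is a minimal back-of-envelope check in Python; the doubling time and the per-assembler mass are both assumptions for illustration, not figures from the article:

    import math

    # Assumptions (not stated in the article): one self-copy per ~1,000
    # seconds, per Eric Drexler's "Engines of Creation" scenario, and an
    # assembler mass of ~1e-15 kg (roughly one bacterium).
    DOUBLING_TIME_S = 1_000
    ASSEMBLER_MASS_KG = 1e-15
    EARTH_MASS_KG = 5.97e24

    # "In just 10 hours ... 68 billion offspring":
    doublings_in_10_hours = 10 * 3600 / DOUBLING_TIME_S        # 36 doublings
    print(2 ** round(doublings_in_10_hours))                   # 68,719,476,736

    # "in less than two days the auto-assemblers would outweigh the earth":
    doublings_needed = math.log2(EARTH_MASS_KG / ASSEMBLER_MASS_KG)  # ~132
    print(doublings_needed * DOUBLING_TIME_S / 86_400)               # ~1.5 days

Under those assumptions the quoted numbers land almost exactly: 2^36 is about 68.7 billion offspring in 10 hours, and roughly 132 doublings, about a day and a half, would match the mass of the Earth.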
Does Musk fear any or all of the above — redundancy, liquidation or gray goo
— at the hands of AI? If so, he should just say so, preferably without
references to Arnold Schwarzenegger films.
As crazy as gray goo sounds, people might think him less eccentric if he just
talked about it.
http://www.washingtonpost.com/news/morning-mix/wp/2014/11/18/why-elon-musk-is-scared-of-killer-robots/