

https://www.theguardian.com/commentisfree/2019/sep/22/the-guardian-view-on-machine-learning-a-computer-cleverer-than-you
The Guardian view on machine learning: a
computer cleverer than you?
There are dangers in teaching computers to learn the things humans do best – not least
because the makers of such machines cannot explain the knowledge their creations have acquired
‘What happens if machines make worse decisions than humans, faster?’ Photograph: Jens Schlüter/EPA
Brad Smith, Microsoft’s president, last week told the Guardian that tech companies should
stop behaving as though everything that is not illegal is acceptable. Mr Smith made a good
argument that technology may be considered morally neutral but technologists can’t be. He is
correct that software engineers ought to take much more seriously the moral consequences of
their work.
This argument operates on two levels: conscious and unconscious. It is easy to see the ethical
issue that appeared to arise when, as a result of a series of its own confusing blog posts,
Microsoft appeared to be selling facial recognition technology to US Immigration and
Customs Enforcement while the Trump administration was separating children from parents
at the US’s southern border. This was, as Microsoft later confirmed, false. The moral stance
of more than 3,000 Google employees who protested about its Maven contract – where
machine learning was to be used for military purposes, starting with drone imaging – with the
US Department of Defense should be applauded. Google let the contract lapse. But people
with different ethical viewpoints can take different views. In the case of the Maven contract, a
rival with fewer qualms picked up the work. Much is contingent on public attitudes. Opinion
polls show that Americans are not in favour of developing artificial intelligence technology
for warfare, but this changes as soon as the country’s adversaries start to develop it. There
is an economic aspect to be considered too. Shoshana Zuboff’s insight, that the exploitation of
behavioural predictions covertly derived from the surveillance of users is capitalism’s latest
stage, is key. What is our moral state when AI researchers are paid $1m a year but the people
who label and classify the input data are paid $1.47 an hour?
However, the most difficult human skills to replicate are the unconscious ones, the product of
millennia of evolution. In AI this is known as Moravec’s paradox. “We are all prodigious
Olympians in perceptual and motor areas, so good that we make the difficult look easy,”
wrote the futurist Hans Moravec. It is these hidden but complex processes that our brains
excel at, and that machine learning attempts to replicate. This presents the maker of such
technology with a unique problem of accountability. If a building falls, the authorities can
investigate, identify the failure and attribute it to an engineering error. We absorb the lessons and hope to
learn from our mistakes.
But is this true for programmers? It is in the nature of AI that makers do not, and often cannot,
predict what their creations do. We know how to make machines learn. But programmers do
not understand completely the knowledge that intelligent computing acquires. If we did, we
wouldn’t need computers to learn to learn. We’d know what they did and program the
machines directly ourselves. They can recognise a face, a voice, be trained to judge people’s
motivations or beat a computer game. But we cannot say exactly how. This is the genius and
madness behind the technology. The promise of AI is that it will imbue machines with the
ability to spot patterns from data, and make decisions faster and better than humans do. What
happens if they make worse decisions faster? Governments need to pause and take stock of
the societal repercussions of allowing machines over a few decades to replicate human skills
that have been evolving for millions of years. But individuals and companies must take
responsibility too.
• This article was amended on 24 September 2019 to clarify that Microsoft had not sold facial
recognition technology to US Immigration and Customs Enforcement.
https://www.bbc.co.uk/news/uk-england-beds-bucks-herts-49801267
Security gadgets 'making people more
vulnerable' to hackers
Hacking experts have warned many gadgets intended to keep people safe are putting them at
risk of being spied on.
A Bedfordshire family allowed their devices to be infiltrated by Buckinghamshire cyber
security expert Ken Munro to demonstrate just how easily their privacy could be
compromised.
Brian Green, 68, bought a tracker designed for older people, but the team was able to listen to
his private conversations.
Mr Green said: "The last thing I would have thought of is how vulnerable it makes people,
when you actually are buying it to make people less vulnerable."
In another case, the family's "pet cam" was hacked, enabling Mr Munro and his team to spy
on them at home.
Mr Munro said: "Two of the cameras we looked at, the plastic moulds are identical, yet one's
fairly secure and one's really not very secure at all. How are you supposed to know?"
https://www.youtube.com/watch?v=-P0rSnt2HSU