Generative Adversarial Network

Generative Adversarial Network
Bahir Dar Institute of Technology
Faculty of Computing
Seminar Presentation
Introduction
A.I. is the study of how to make computers do things at which, at the moment, people are better.
Machine learning is a branch of AI focusing on systems that learn from their environment (i.e., data).
The goal is to generalize from this training so that the system acts intelligently in new environments.
Short history of AI
The first general-purpose digital computer (ENIAC) was powered by vacuum tubes.
It was built by the US Army and used by John von Neumann to develop the H-bomb.
Short history of AI
Arthur Samuel's checkers program is recognized as the first learning machine; it learned to play (and win at) checkers.
His algorithms used heuristic search and memory to learn from experience.
By the mid-1970s his program was beating capable human players.
Short history of AI
The Perceptron was the first artificial neural network.
It was developed by Frank Rosenblatt, with funding from the US Office of Naval Research, for visual recognition tasks.
Short history of AI
In the mid-1980s several researchers independently (re)discovered the backpropagation algorithm.
It allowed more powerful neural networks with hidden layers to be trained.
Many people became excited about neural nets as a model of the mind/brain (connectionism) and about their commercial applications.
Short history of AI
IBM's Deep Blue beats Chess Grandmaster Garry Kasparov (1997).
Deep learning on the way
The expression "deep learning" was first used in the context of Artificial Neural Networks (ANNs) by Igor Aizenberg and colleagues in or around 2000.
Deep learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain, called artificial neural networks.
Generative Adversarial Neural Networks
Generative adversarial networks (GANs) are deep neural net architectures composed of two nets, pitting one against the other (hence the "adversarial").
Generative Adversarial Neural Networks
GANs were introduced by Ian Goodfellow and other researchers at the University of Montreal, including Yoshua Bengio, in 2014.
Yann LeCun, AI researcher at Facebook and deep learning pioneer, called Goodfellow's idea "the most interesting idea in the last 10 years in ML."
What makes it different
Nowadays, powerful machine learning algorithms can be created that identify an image containing a "pedestrian crossing a road" among a million pictures.
They can even be made near-perfect by training the learner with quality data sets and lots of repetition.
One thing is still missing:
Imagination
What makes it different
Thus, GANs are robot artists.
GANs' potential is huge, because they can learn to mimic any distribution of data.
That is, GANs can be taught to create worlds eerily similar to our own in any domain: images, music, speech, prose.
Architecture of GAN
A GAN has two components: a generator and a discriminator.
These two components duel each other, and each becomes stronger in the process (see the code sketch after this slide).
Discriminator (a discriminative model)
Classifies input data based on its previous training.
Maps features to labels.
Learns the boundary between classes from the features.
Generator (a generative model)
Generates features from a class (label).
Models the distribution of the individual classes.
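As a minimal sketch of these two components, assuming PyTorch (the layer sizes, the 100-dimensional noise vector, and the flat 28x28 image shape are illustrative assumptions, not taken from the slides):

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a random noise vector to a fake data sample (here: a flat 28x28 image)."""
    def __init__(self, noise_dim=100, data_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 256),
            nn.ReLU(),
            nn.Linear(256, data_dim),
            nn.Tanh(),          # outputs scaled to [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Maps a data sample to a single probability of being real (authentic)."""
    def __init__(self, data_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(data_dim, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
            nn.Sigmoid(),       # probability between 0 and 1
        )

    def forward(self, x):
        return self.net(x)

# Quick check: noise in, image-shaped tensor out, probability out.
z = torch.randn(16, 100)              # batch of 16 random noise vectors
fake = Generator()(z)                 # 16 generated samples
score = Discriminator()(fake)         # 16 probabilities of being real
print(fake.shape, score.shape)        # torch.Size([16, 784]) torch.Size([16, 1])
```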
How GAN works
One neural network, called the generator, generates new data instances, while the other, the discriminator, evaluates them for authenticity.
That is, the discriminator decides whether each instance of data it reviews belongs to the actual training dataset or not.
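Concretely, the discriminator's verdict can be read as a single probability of authenticity and thresholded. A minimal sketch, again assuming PyTorch and flattened 28x28 images (the untrained net here only illustrates the interface):

```python
import torch
import torch.nn as nn

# Untrained stand-in discriminator: 784 pixel values in, one probability out.
discriminator = nn.Sequential(nn.Linear(28 * 28, 128), nn.LeakyReLU(0.2),
                              nn.Linear(128, 1), nn.Sigmoid())

sample = torch.rand(1, 28 * 28)          # one flattened image (random stand-in)
p_real = discriminator(sample).item()    # probability the sample is authentic
print(f"p(real) = {p_real:.2f} ->", "real" if p_real > 0.5 else "fake")
```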
How GAN works
Here are the steps a GAN takes:
The generator takes in random numbers and returns an image.
This generated image is fed into the discriminator alongside a stream of images taken from the actual dataset.
The discriminator takes in both real and fake images and returns probabilities, a number between 0 and 1, with 1 representing a prediction of authenticity and 0 representing fake.
So you have a double feedback loop:
The discriminator is in a feedback loop with the ground truth of the images, which we know.
The generator is in a feedback loop with the discriminator.
You can think of a GAN as the combination of a counterfeiter and a cop in a game of cat and mouse, where the counterfeiter is learning to pass false notes and the cop is learning to detect them (a rough code sketch of this loop follows).
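The double feedback loop can be written out as a single training step. This is a rough sketch assuming PyTorch; the tiny fully connected nets, the Adam settings, and the random tensor standing in for a batch of real images are illustrative assumptions.

```python
import torch
import torch.nn as nn

noise_dim, data_dim = 100, 28 * 28
# Illustrative stand-ins for the two networks.
generator = nn.Sequential(nn.Linear(noise_dim, 256), nn.ReLU(),
                          nn.Linear(256, data_dim), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                              nn.Linear(256, 1), nn.Sigmoid())

opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
bce = nn.BCELoss()  # compares predicted probabilities against the 0/1 labels

def train_step(real_batch):
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)   # 1 = authentic
    fake_labels = torch.zeros(batch_size, 1)  # 0 = fake

    # Discriminator step: feedback comes from the ground truth we know
    # (real images labelled 1, generated images labelled 0).
    fake_batch = generator(torch.randn(batch_size, noise_dim)).detach()
    d_loss = (bce(discriminator(real_batch), real_labels) +
              bce(discriminator(fake_batch), fake_labels))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: its only feedback is the discriminator's verdict,
    # so it tries to make the discriminator output 1 on its fakes.
    fake_batch = generator(torch.randn(batch_size, noise_dim))
    g_loss = bce(discriminator(fake_batch), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Example call with a random batch standing in for real images scaled to [-1, 1].
print(train_step(torch.rand(16, data_dim) * 2 - 1))
```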
Application areas of GAN
Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
Generative Adversarial Text to Image Synthesis
Predicting the next frame in a movie
Generating fake images for training
…
Research areas in GAN
Improving generated image quality.
Applying this technology to sound and video as well.
Then you can give a machine the ability to imagine and to learn by itself.
“Is artificial intelligence less than our
intelligence?”
—Spike Jonze
References
https://www.cs.toronto.edu/~duvenaud/courses/csc2541/slides/gan-applications.pdf
https://hackernoon.com/generative-adversarial-networks-a-deep-learning-architecture-4253b6d12347
http://blogs.teradata.com/data-points/building-machine-learning-infrastructure-2
https://www.quora.com/What-are-some-recent-and-potentially-upcoming-breakthroughs-in-deep-learning
https://deeplearning4j.org/generative-adversarial-network
History of Machine Learning, Bob Colner (SlideShare)
https://www.import.io/post/history-of-deep-learning/
https://machinelearningmastery.com/what-is-deep-learning/
https://arxiv.org/abs/1406.2661