
TensorFlow for Poets

1 hour
TensorFlow is an open source library for numerical computation, specializing
in machine learning applications. In this lab you will learn how to install and
run TensorFlow on a single machine, then train a simple classifier to classify
images of flowers.
This lab uses transfer learning to train your machine. In transfer learning,
when you build a new model to classify your original dataset, you reuse the
feature extraction part and re-train the classification part with your dataset.
This method uses less computational resources and training time. Deep
learning from scratch can take days, but transfer learning can be done in short order.
The model for this lab is trained on the ImageNet Large Visual Recognition
Challenge dataset. These models can differentiate between 1,000 different
classes, like "Zebra", "Dalmatian", and "Dishwasher". In a production
environment, you'll have a choice of model architectures, so you can
determine the right tradeoff between speed, size, and accuracy for your application.
At the end of the lab you'll have the opportunity to retrain the same model to
tell apart a small number of classes based on your own examples.
What you will learn:
How to use Python and TensorFlow to train an image classifier
How to classify images with your trained classifier
What you need:
A basic understanding of Linux commands
Your own images of flowers to upload into the lab
Set up
Before you click the Start Lab button
Read these instructions. Labs are timed and you cannot pause them. The
timer, which starts when you click Start Lab, shows how long Cloud resources
will be made available to you.
This Qwiklabs hands-on lab lets you do the lab activities yourself in a real
cloud environment, not in a simulation or demo environment. It does so by
giving you new, temporary credentials that you use to sign in and access the
Google Cloud Platform for the duration of the lab.
What you need
To complete this lab, you need:
Access to a standard internet browser (Chrome browser recommended).
Time to complete the lab.
Note: If you already have your own personal GCP account or project, do not
use it for this lab.
Note: If you are using a Pixelbook, please open an Incognito window to run this lab.
How to start your lab and sign in to the Console
1. Click the Start Lab button. If you need to pay for the lab, a pop-up opens
for you to select your payment method. On the left is a panel populated
with the temporary credentials that you must use for this lab.
2. Copy the username, and then click Open Google Console. The lab spins
up resources, and then opens another tab that shows the Choose an
account page.
Tip: Open the tabs in separate windows, side-by-side.
3. On the Choose an account page, click Use Another Account.
4. The Sign in page opens. Paste the username that you copied from the
Connection Details panel. Then copy and paste the password.
Important: You must use the credentials from the Connection Details
panel. Do not use your Qwiklabs credentials. If you have your own GCP
account, do not use it for this lab (this avoids incurring charges).
5. Click through the subsequent pages:
Accept the terms and conditions.
Do not add recovery options or two-factor authentication
(because this is a temporary account).
 Do not sign up for free trials.
After a few moments, the GCP console opens in this tab.
Note: You can view the menu with a list of GCP Products and Services by clicking the Navigation
menu at the top-left, next to “Google Cloud Platform”.
Create a development machine in
Compute Engine
In the Google Cloud Console, go to Compute Engine > VM Instances > Create.
Configure the instance this way:
Name: devhost
Region: us-central1
Zone: us-central1-a
Machine Type: 4 vCPUs (n1-highcpu-4) instance
Boot disk: Click Change, then select Ubuntu 16.04 LTS, then click Select
Identity and API Access: "Allow full access to all Cloud APIs"
Click Create. This will serve as your development bastion host.
SSH into devhost by clicking the SSH button on the Console.
Now you're ready to install TensorFlow onto this machine.
Test Completed Task
Click Check my progress to verify your performed task. If you have completed
the task successfully, you will be granted an assessment score.
Create a development machine in Compute Engine
Check my progress
Install TensorFlow
In the SSH window, run the following to install Python package manager:
sudo apt-get update
sudo apt-get install -y python-pip python-dev python3-pip python3-dev
Install and configure a virtual environment for using Python 3.5:
pip install virtualenv
virtualenv -p python3 venv
source venv/bin/activate
Now install TensorFlow with the following:
pip install --upgrade tensorflow
Start Python in the SSH window:
python
At the Python prompt ( >>> ), import TensorFlow:
import tensorflow as tf
Note: Ignore deprecation warnings.
Create a constant that contains a string:
hello = tf.constant('Hello, TensorFlow!')
Then create a TensorFlow session:
sess = tf.Session()
You can ignore the warnings that the TensorFlow library wasn't compiled to
use certain instructions.
Display the value of hello:
print(sess.run(hello))
If successful, the system outputs:
Hello, TensorFlow!
Stop the Python interactive shell:
exit()
Test Completed Task
Click Check my progress to verify your performed task. If you have completed
the task successfully, you will be granted an assessment score.
Install TensorFlow
Check my progress
Clone the git repository
All the code used in this lab is in this git repository. In the SSH window, run the
following command to clone the repository:
git clone https://github.com/googlecodelabs/tensorflow-for-poets-2
Now cd into it:
cd tensorflow-for-poets-2
Test Completed Task
Click Check my progress to verify your performed task. If you have completed
the task successfully, you will be granted an assessment score.
Clone the git repository
Check my progress
Download the training images
Before you start any training, you'll need a set of images to teach the model
about the new classes you want to recognize. We've created an archive of
creative-commons licensed flower photos to use initially. Download the
photos (218 MB) by invoking the following commands in the SSH window:
curl http://download.tensorflow.org/example_images/flower_photos.tgz \
| tar xz
You should now have a copy of the flower photos in your working directory.
Confirm the contents of your working directory by issuing the following command:
ls flower_photos
Example Output: You should see the following objects:
daisy  dandelion  LICENSE.txt  roses  sunflowers  tulips
Move the flower images directory into tf_files:
mv flower_photos tf_files
Test Completed Task
Click Check my progress to verify your performed task. If you have completed
the task successfully, you will be granted an assessment score.
Download the training images and move them to the tf_files directory
Check my progress
(Re)training the network
Configure your MobileNet
The retrain script can retrain either Inception V3 model or a MobileNet. In this
lab you will use a MobileNet. The principal difference is that Inception V3 is
optimized for accuracy, while the MobileNets are optimized to be small and
efficient, at the cost of some accuracy.
Inception V3 has a first-choice accuracy of 78% on ImageNet, but the model is
85MB and requires many times more processing than even the largest
MobileNet configuration, which achieves 70.5% accuracy with just a 19MB download.
The following configuration options are available:
Input image resolution: 128,160,192, or 224px. Unsurprisingly, feeding in a
higher resolution image takes more processing time, but results in better
classification accuracy. We recommend 224 as an initial setting.
The relative size of the model as a fraction of the largest MobileNet: 1.0, 0.75,
0.50, or 0.25. We recommend 0.5 as an initial setting. The smaller models run
significantly faster, at a cost of accuracy.
With the recommended settings, it typically takes only a couple of minutes to
retrain on a laptop.
For this lab you'll use the recommended settings. In the SSH window, pass the
settings inside Linux shell variables with the following commands:
export IMAGE_SIZE=224
export ARCHITECTURE="mobilenet_0.50_${IMAGE_SIZE}"
The graph below shows the first-choice accuracies of these configurations (y-axis)
vs. the number of calculations required (x-axis), and the size of the
model (circle area).
16 points are shown for MobileNet. For each of the 4 model sizes (circle area
in the figure) there is one point for each image resolution setting. The 128px
image size models are represented by the lower-left point in each set, while
the 224px models are in the upper right.
An extended version of this figure is available in slides 84-89 here.
Open a firewall for Tensorboard on port 6006
TensorBoard is a monitoring and inspection tool included with TensorFlow.
You will use it to monitor the training progress. In order for your VM to get to
TensorBoard, you'll create a firewall rule.
In the Console, click on devhost.
Scroll down and under Network Interfaces, click on default under
the Network heading.
Click on Firewall Rules on the left, then "Create Firewall Rule" at the top of the page.
Now set up your firewall:
Name: tensorboard
Targets: All instances in the network
Source IP ranges: 0.0.0.0/0
Protocol and ports: check tcp, and then type "6006"
Test Completed Task
Click Check my progress to verify your performed task. If you have completed
the task successfully, you will be granted an assessment score.
Open a firewall for Tensorboard on port 6006
Check my progress
Start TensorBoard
In the SSH window, launch tensorboard in the background.
tensorboard --logdir tf_files/training_summaries &
Note: This command will fail with the following error if you already have a
TensorBoard process running:ERROR:tensorflow:TensorBoard attempted
to bind to port 6006, but it was already in use
You can kill all existing TensorBoard instances with:
pkill -f "tensorboard"
This URL will be how you get to TensorBoard:
http://devhost:6006
Replace devhost with the external IP address for your VM and refresh the
browser. Now you're ready.
Note: You can find the external IP for devhost in the VM instances screen (Navigation
menu > Compute Engine > VM instances).
Investigate the retraining script
The retrain script is part of the TensorFlow repository, but it is not installed as part of
the pip package. For simplicity, it has been included in the lab repository.
You can run the script using the python command. Take a minute to skim its
"help". You might need to press Enter again to return to the command line in
the SSH window.
python -m scripts.retrain -h
Run the training
As noted in the introduction, ImageNet models are networks with millions of
parameters that can differentiate a large number of classes. We're only
training the final layer of that network, so training will end in a reasonable
amount of time.
In the SSH window, start your retraining with one big command (note the
--summaries_dir option, sending training progress reports to the directory that
TensorBoard is monitoring):
python -m scripts.retrain \
--bottleneck_dir=tf_files/bottlenecks \
--how_many_training_steps=500 \
--model_dir=tf_files/models/ \
--summaries_dir=tf_files/training_summaries/"${ARCHITECTURE}" \
--output_graph=tf_files/retrained_graph.pb \
--output_labels=tf_files/retrained_labels.txt \
--architecture="${ARCHITECTURE}" \
--image_dir=tf_files/flower_photos
This script downloads the pre-trained model, adds a new final layer, and trains
that layer on the flower photos you've downloaded to the lab.
ImageNet does not include any of the flower species you're training on, but the
kinds of information that make it possible for ImageNet to differentiate
among 1,000 classes are also useful for distinguishing other objects. By using this
pre-trained network, you're using that information as input to the final
classification layer that distinguishes your flower classes.
This will take several minutes to complete. Read through the next couple of
sections while the training is running.
More about Bottlenecks
This section and the next provide background on how this retraining process works.
The first phase analyzes all the images on disk and calculates the bottleneck
values for each of them. What's a bottleneck?
These ImageNet models are made up of many layers stacked on top of each
other. A simplified picture of Inception V3 from TensorBoard is shown above
(all the details are available in this paper, with a complete picture on page 6).
These layers are pre-trained and are already very valuable at finding and
summarizing information that will help classify most images. For this lab, you
are training only the last layer (final_training_ops in the figure below),
while all the previous layers retain their already-trained state.
In the above figure, the node labeled "softmax", on the left side, is the output
layer of the original model. All the nodes to the right of the "softmax" were
added by the retraining script.
A bottleneck is an informal term often used for the layer just before the final
output layer that actually does the classification. "Bottleneck" is not used to
imply that the layer is slowing down the network, but because near the output,
the representation is much more compact than in the main body of the network.
Every image is reused multiple times during training. Calculating the layers
behind the bottleneck for each image takes a significant amount of time.
Since these lower layers of the network are not being modified, their outputs
can be cached and reused.
So the script is running the constant part of the network, everything below the
node labeled Bottlene... above, and caching the results.
The command you ran saves these files to the bottlenecks/ directory. If you
rerun the script, they'll be reused, so you don't have to wait for this part again.
More about Training
Once the script finishes generating all the bottleneck files, the actual training
of the final layer of the network begins.
The training operates efficiently by feeding the cached value for each image
into the Bottleneck layer. The true label for each image is also fed into the
node labeled GroundTruth. Just these two inputs are enough to calculate the
classification probabilities, training updates, and the various performance metrics.
As it trains, you'll see a series of step outputs, each one showing training
accuracy, validation accuracy, and cross entropy:
The training accuracy shows the percentage of the images used in the current
training batch that were labeled with the correct class.
Validation accuracy is the precision (percentage of correctly-labelled images)
on a randomly-selected group of images from a different set.
Cross entropy is a loss function that gives a glimpse into how well the
learning process is progressing (lower numbers are better here).
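The cross-entropy figure reported at each step is just the negative log of the probability the model assigned to the correct class. A minimal, self-contained illustration (the probability vectors below are made up):

```python
import math

def cross_entropy(probs, true_index):
    """Loss for one example: -log(probability assigned to the true class)."""
    return -math.log(probs[true_index])

confident = [0.95, 0.03, 0.02]  # correct class gets 95% -> small loss
unsure = [0.40, 0.35, 0.25]     # correct class gets 40% -> larger loss

print(round(cross_entropy(confident, 0), 4))  # 0.0513
print(round(cross_entropy(unsure, 0), 4))     # 0.9163
```

This is why lower numbers are better: the loss shrinks toward zero as the model grows more confident in the correct answers.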
The figures below show an example of the progress of the model's accuracy
and cross entropy as it trains. When your model has finished generating the
bottleneck files you can check your model's progress by opening
TensorBoard, and clicking on the figure's name to show them. TensorBoard
may print out warnings to your command line. These can safely be ignored.
A true measure of the performance of the network is how it performs on a
data set that is not in the training data; this is measured by the validation
accuracy. If the training accuracy is high but the validation accuracy remains
low, the network is overfitting: it is memorizing particular features in the
training images that don't help it classify images more generally.
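That overfitting symptom can be expressed as a simple gap check; the 0.15 threshold here is an arbitrary illustration, not a value from the lab:

```python
def looks_overfit(train_acc, val_acc, gap_threshold=0.15):
    """Flag a suspiciously large gap between training and validation accuracy."""
    return train_acc - val_acc > gap_threshold

print(looks_overfit(0.99, 0.70))  # True: likely memorizing training images
print(looks_overfit(0.92, 0.88))  # False: generalizing reasonably well
```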
The training's objective is to make the cross entropy as small as possible, so
you can tell if the learning is working by keeping an eye on whether the loss
keeps trending downwards, ignoring the short-term noise.
By default, this script runs 4,000 training steps. Each step chooses 10 images
at random from the training set, finds their bottlenecks from the cache, and
feeds them into the final layer to get predictions. Those predictions are then
compared against the actual labels to update the final layer's weights through
a backpropagation process.
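The step loop just described can be sketched in plain Python; run_training_step is a hypothetical stand-in for the bottleneck lookup, prediction, and backpropagation update, and the file names are made up:

```python
import random

random.seed(0)  # for a repeatable sketch

training_set = [f"image_{i}.jpg" for i in range(100)]  # stand-in file names
steps = 500       # matches --how_many_training_steps in the command above
batch_size = 10   # images chosen at random per step

def run_training_step(batch):
    """Stand-in: look up cached bottlenecks, predict, compare with the
    true labels, and update the final layer's weights."""
    return len(batch)

for step in range(steps):
    batch = random.sample(training_set, batch_size)  # 10 random images
    run_training_step(batch)

print(steps * batch_size)  # 5000 image presentations in total
```

Because only the small final layer is updated at each step, each iteration is cheap even though the full network has millions of parameters.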
As the process continues, you should see the reported accuracy improve.
After all the training steps are complete, the script runs a final test accuracy
evaluation on a set of images that are kept separate from the training and
validation pictures. This test evaluation provides the best estimate of how the
trained model will perform on the classification task.
You should see an accuracy value of between 85% and 99%, though the exact
value will vary from run to run since there's randomness in the training
process. (If you are only training on two classes, you should expect higher
accuracy.) This value indicates the percentage of the images in the
test set that are given the correct label after the model is fully trained.
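That final figure is computed the obvious way: compare the model's predicted label with the true label for each held-out test image (the labels below are illustrative, not real output):

```python
# Sketch of a test-accuracy calculation on a tiny, made-up test set.
predictions = ["daisy", "roses", "tulips", "daisy", "sunflowers"]
truth = ["daisy", "roses", "tulips", "dandelion", "sunflowers"]

correct = sum(p == t for p, t in zip(predictions, truth))
accuracy = correct / len(truth)
print(accuracy)  # 0.8 -- four of the five test images labeled correctly
```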
Go back to the TensorBoard tab
When the command line returns, the training is complete. Look at the
TensorBoard tab again to see what your results look like.
Using the Retrained Model
The retraining script writes data to the following two files:
tf_files/retrained_graph.pb, which contains a version of the selected
network with a final layer retrained on your categories.
tf_files/retrained_labels.txt, which is a text file containing labels.
Classifying an image
Next you will run the script on this image of a daisy (Image CC-BY):
flower_photos/daisy/21652746_cc379e0eea_m.jpg
python -m scripts.label_image \
--graph=tf_files/retrained_graph.pb \
--image=tf_files/flower_photos/daisy/21652746_cc379e0eea_m.jpg
Each execution will print a list of flower labels, in most cases with the correct
flower on top (though each retrained model may be slightly different).
You should get results like this for the daisy photo:
daisy 0.99508375
dandelion 0.0028086917
sunflowers 0.002093148
roses 1.37025945e-05
tulips 6.3252025e-07
This indicates a high confidence (~99%) that the image is a daisy, and low
confidence for any other label.
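Those per-label confidences come from a softmax over the final layer's raw scores, which is why they always sum to 1. A minimal sketch (the scores below are made up, not taken from the model):

```python
import math

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    shifted = [x - max(logits) for x in logits]  # shift for numerical stability
    exps = [math.exp(x) for x in shifted]
    total = sum(exps)
    return [e / total for e in exps]

labels = ["daisy", "dandelion", "sunflowers", "roses", "tulips"]
scores = [9.2, 3.4, 3.1, -1.8, -4.9]  # hypothetical final-layer scores
probs = softmax(scores)

# Print the labels from most to least confident, like label_image does.
for label, p in sorted(zip(labels, probs), key=lambda pair: -pair[1]):
    print(label, round(p, 6))

print(round(sum(probs), 6))  # 1.0 -- the confidences always sum to one
```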
Try it again with a new image:
flower_photos/roses/2414954629_3708a1a04d.jpg Image CC-BY by Lori
python -m scripts.label_image \
--graph=tf_files/retrained_graph.pb \
--image=tf_files/flower_photos/roses/2414954629_3708a1a04d.jpg
How did this image do compared to the first one?
(Optional) Trying other hyperparameters if you have time
You can use label_image.py to classify any image file you choose, either
from your downloaded collection, or new ones. You just have to change the
--image file name argument to the script.
The retraining script has several other command line options you can use.
You can read about these options in the help for the retrain script:
python -m scripts.retrain -h
Try adjusting some of these options to see if you can increase the final
validation accuracy.
For example, the --learning_rate parameter controls the magnitude of the
updates to the final layer during training. So far we have left it out, so the
program has used the default learning_rate value of 0.01. If you specify a
small learning_rate, like 0.005, the training will take longer, but the overall
precision might increase. Higher values of learning_rate, like 1.0, could train
faster, but typically reduce precision, or even make training unstable. You
need to experiment carefully to see what works for your case.
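The tradeoff can be felt on a toy problem: gradient descent on f(w) = (w - 3)^2 with different step sizes. This is a sketch of the effect, not the retrain script's optimizer:

```python
def descend(lr, steps=50, w=0.0):
    """Gradient descent on f(w) = (w - 3)^2; the gradient is 2 * (w - 3)."""
    for _ in range(steps):
        w -= lr * 2 * (w - 3)
    return w

print(round(descend(0.01), 3))   # small rate: still short of the optimum at 3
print(round(descend(0.10), 3))   # moderate rate: converges to 3.0
print(abs(descend(1.10)) > 100)  # True: too-large rate makes descent diverge
```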
It is very helpful for this type of work if you give each experiment a unique
name, so they show up as separate entries in TensorBoard.
It's the --summaries_dir option that controls the name in TensorBoard. Earlier,
--summaries_dir=training_summaries/basic was used.
TensorBoard is monitoring the contents of the training_summaries directory,
so setting --summaries_dir to training_summaries or any subdirectory
of training_summaries will work.
You may want to set the following two options together, so your results are
clearly labeled:
--learning_rate=0.5 --summaries_dir=training_summaries/LR_0.5
(Optional) Training on your own
After you see the script working on the flower example images, you can start
looking at teaching the network to recognize different categories.
In theory, all you need to do is run the tool, specifying a particular set of
sub-folders. Each sub-folder is named after one of your categories and contains
only images from that category.
If you complete this step and pass the root folder of the sub-folders as the
argument for the --image_dir parameter, the script should train on the images
that you've provided, just like it did for the flowers.
The classification script uses the folder names as label names, and the
images inside each folder should be pictures that correspond to that label, as
you can see in the flower archive:
Collect as many pictures of each label as you can and try it out!
Test your knowledge
Test your knowledge on the Google Cloud Platform by taking our quiz.
In transfer learning, when you build a new model to classify your original dataset, you reuse the
feature extraction part and re-train the classification part with your dataset. This method uses
less computational resources and training time. Deep learning from scratch can take days, but
transfer learning can be done in short order.
You've taken your first steps into a larger world of deep learning!
Finish your Quest
This self-paced lab is part of the Qwiklabs Google Cloud Solutions II: Data and
Machine Learning and Intermediate ML: TensorFlow on GCP Quests. A Quest
is a series of related labs that form a learning path. Completing a Quest earns
you the badge above, to recognize your achievement. You can make your
badge (or badges) public and link to them in your online resume or social
media account. Enroll in a Quest above and get immediate completion credit
if you've taken this lab. See other available Qwiklabs Quests.
Take your next lab
Continue your Quest with Creating an Object Detection Application using
TensorFlow, or try one of these suggestions:
Creating Custom Interactive Dashboards with Bokeh and BigQuery
Running Distributed TensorFlow on Compute Engine
Next steps / learn more
You can see more about using TensorFlow at the TensorFlow website or the
TensorFlow Github project. There are lots of other resources available for
TensorFlow, including a discussion group and whitepaper.
If you make a trained model that you want to run in production, you should
also check out TensorFlow Serving, an open source project that makes it
easier to manage TensorFlow projects.
If you're interested in running TensorFlow on mobile devices, try the second
part of this tutorial, which will show you how to optimize your model to run on mobile.
Go have some fun in the TensorFlow Playground!
Sign up for the full Coursera Course on Machine Learning
Google Cloud Training & Certification
...helps you make the most of Google Cloud technologies. Our classes include
technical skills and best practices to help you get up to speed quickly and
continue your learning journey. We offer fundamental to advanced level
training, with on-demand, live, and virtual options to suit your busy
schedule. Certifications help you validate and prove your skill and expertise in
Google Cloud technologies.
Manual last updated July 29, 2019
Lab last tested June 6, 2019
Copyright 2020 Google LLC All rights reserved. Google and the Google logo
are trademarks of Google LLC. All other company and product names may be
trademarks of the respective companies with which they are associated.