TensorFlow Machine Learning on the Amazon Deep Learning AMI

TensorFlow is a popular framework used for machine learning. The Amazon Deep Learning AMI comes bundled with everything you need to start using TensorFlow from development through to production. In this post, you will develop, visualize, serve, and consume a TensorFlow machine learning model using the Amazon Deep Learning AMI.


Upon completion of this post you will be able to:

  • Create machine learning models in TensorFlow
  • Visualize TensorFlow graphs and the learning process in TensorBoard
  • Serve trained TensorFlow models with TensorFlow Serving
  • Create clients that consume served TensorFlow models, all with the Amazon Deep Learning AMI


You should be familiar with:

  • Working at the Linux command line
  • The Python programming language
  • Some linear algebra knowledge is beneficial (basic vector and matrix operations)
  • Basic understanding of neural networks is beneficial

Lab Environment

Before completing the Lab instructions, the environment will look as follows:

After completing the Lab instructions, the environment should look similar to:

First, sign in to the AWS Management Console.

Select the US West 2 region using the upper right drop-down menu on the AWS Management Console:


SSH tunnels allow you to connect to ports on a remote server through the encrypted SSH channel. This lets you securely reach ports on the remote server that would otherwise be blocked by a system firewall or security group rules. In this step, you will establish an SSH connection with a tunnel from port 8000 on your local system to port 8888 on the remote server. The tunnel will allow you to connect to a Jupyter Notebook server later in the Lab to interactively develop TensorFlow machine learning models after you learn the basics at the command line.

1. Navigate to the EC2 Management Console and copy the IPv4 Public IP address of the Lab instance.

Note: It may take a minute or two for the instance to appear in the list. Refresh the list every 15 seconds until it appears.

2. Proceed to the Connecting using Linux / macOS or Connecting using Windows instructions depending on your local operating system.

Connecting using Linux / macOS

Linux distributions and macOS include an SSH client that accepts standard PEM keys. Complete the following steps to connect using the included terminal applications:

a. Open your terminal application. If you need assistance finding the terminal application, search for terminal using your operating system’s application finder or search commands.

b. Enter the following command and press Enter:

ssh -i /Path/To/Your/KeyPair.pem ubuntu@YourIPv4Address -L 127.0.0.1:8000:localhost:8888

where the command details are:

ssh initiates the SSH connection.

-i specifies the identity file.

/Path/To/Your/KeyPair.pem specifies the location and name of your key pair. An example location might be /Home/YourUserName/Downloads/KeyPair.pem.

YourIPv4Address is the IPv4 address noted earlier in the instructions.

-L specifies that connections to port 8000 on your local machine are forwarded to port 8888 on the remote machine.

Note: Your SSH client may refuse to start the connection due to key permissions. If you receive a warning that the key pair file is unprotected, you must change the permissions. Enter the following command and try the connection command again:

chmod 600 /Path/To/Your/KeyPair.pem

c.  After successfully connecting to the virtual machine, you should reach a terminal prompt similar to the one shown in the image below.

Note: If you receive a warning that the host is unknown, enter y or yes to add the host and complete the connection.


Connecting using Windows

Windows does not include an SSH client. You must download an application that includes one. A free and useful utility is called PuTTY. PuTTY supports SSH connections as well as key generation and conversion. Download PuTTY at http://www.putty.org. Complete the following steps to use PuTTY to create an SSH connection.


a. Open PuTTY and insert the IPv4 public IP address in the Host Name (or IP address) field.


b. Navigate to the Connection > SSH > Auth section. Select the PPK key pair you downloaded earlier.

c. Select Tunnels under the SSH menu item, add a new forwarded port with the following values, click Add, and then click Open:

Source port: 8000

Destination: localhost:8888


d. After waiting a few seconds, enter ubuntu at the prompt for a username. 

The end result should look like this:

TensorFlow is a popular framework used for machine learning. It works by defining a dataflow graph. Tensors, or arrays of arbitrary dimension, flow through the graph, performing operations defined by the nodes in the graph. Machine learning algorithms can be modeled using this kind of dataflow graph.

When you write code using TensorFlow, there are two phases: graph definition and evaluation. You define the entire computation graph before executing it. With this strategy, TensorFlow can scan the graph and perform optimizations on it to reduce computation time and increase parallelism. These two phases are something to keep in mind when developing code with TensorFlow.

The following steps show how to write basic code with TensorFlow.

1. Activate your virtual environment:

source activate tensorflow_p27

2. Start the interactive Python interpreter by entering:

python

3. Enter the following import statements to import the print function and the TensorFlow module:

from __future__ import print_function
import tensorflow as tf

4. Define a dataflow graph with two constant tensors as input and use the tf.add operation to produce the output:

# Explicitly create a computation graph
graph = tf.Graph()
with graph.as_default():
    # Declare one-dimensional tensors (vectors)
    input1 = tf.constant([1.0, 2.0])
    input2 = tf.constant([3.0, 4.0])
    # Add the two tensors
    output = tf.add(input1, input2)

The graph keeps track of all the inputs and operations you define so the results can be computed when you run a TensorFlow session with the graph.

5. Print the graph's operations to see that it stored the inputs and operations:

print(graph.get_operations())

6. Evaluate the graph by creating a session with the graph and calling the output.eval() function:

# Evaluate the graph in a session
with tf.Session(graph = graph):
  result = output.eval()
  print("result: ", result)


The output displays an information message letting you know that the session will run on the instance’s graphics processing unit (GPU). The result is printed on the last line.

7. When you are only using a single graph in a session, you can use the default graph as shown in the following example that repeats the computation using the default graph:

# Evaluate using the default graph
with tf.Session():
    input1 = tf.constant([1.0, 2.0])
    input2 = tf.constant([3.0, 4.0])
    output = tf.add(input1, input2)
    # Evaluate the output tensor in the default graph
    result = output.eval()
    print("result: ", result)

In the above code, the default graph is implicitly passed to all TensorFlow API functions. It can be convenient to use the default graph, but you may need multiple graphs when you develop separate training and test graphs for machine learning algorithms.

8. Multiply a matrix by a vector with the following annotated example:

matmul_graph = tf.Graph()
with matmul_graph.as_default():
    # Declare a 2x2 matrix and a 2x1 vector
    matrix = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    vector = tf.constant([[1.0], [2.0]])
    # Matrix multiply (matmul) the two tensors
    output = tf.matmul(matrix, vector)

with tf.Session(graph=matmul_graph):
    result = output.eval()


You have now seen how to add vectors and multiply a matrix by a vector in TensorFlow. These two operations are the building blocks of several machine learning algorithms. The examples have only used constant inputs so far. TensorFlow supports using variables to allow tensors to be updated with different values as graph evaluation proceeds.
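To confirm the values these graphs should produce, you can check the same two operations with plain NumPy, which is preinstalled in the AMI's environments and follows the same shape rules:

```python
from __future__ import print_function
import numpy as np

# Vector addition, as in step 4: element-wise sum
print(np.array([1.0, 2.0]) + np.array([3.0, 4.0]))  # [4. 6.]

# Matrix-vector multiplication, as in step 8: a 2x2 matrix times a 2x1 vector
matrix = np.array([[1.0, 2.0], [3.0, 4.0]])
vector = np.array([[1.0], [2.0]])
print(np.matmul(matrix, vector))  # [[ 5.] [11.]]
```

The results match what `output.eval()` returns in the TensorFlow sessions above.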

9. Use variables to store the result of repeatedly multiplying a matrix by a vector as in the following annotated example:

# Evaluate a repeated matrix-vector multiplication
var_graph = tf.Graph()
with var_graph.as_default():
    # Declare a constant 2x2 matrix and a variable 2x1 vector
    matrix = tf.constant([[1.0, 1.0], [1.0, 1.0]])
    vector = tf.Variable([[1.0], [1.0]])
    # Multiply the matrix and vector 4 times
    for _ in range(4):
        # Repeatedly update vector with the multiplication result
        vector = tf.matmul(matrix, vector)

with tf.Session(graph=var_graph):
    # Initialize the variables we defined above
    tf.global_variables_initializer().run()
    result = vector.eval()


The output value of vector shows that the value has been updated. This is similar to what you would expect using a variable in Python. One catch is that variables must be initialized with an explicit call. tf.global_variables_initializer().run() initializes all variables in a global variable collection. By default, every variable is added to the global variable collection.
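The arithmetic in the variable example is easy to check outside TensorFlow: each multiplication by the all-ones 2x2 matrix doubles the vector, so four multiplications scale it by 16. A NumPy sketch:

```python
from __future__ import print_function
import numpy as np

matrix = np.array([[1.0, 1.0], [1.0, 1.0]])
vector = np.array([[1.0], [1.0]])
# Each step: both entries become the sum of the previous entries (2, 4, 8, 16)
for _ in range(4):
    vector = np.matmul(matrix, vector)
print(vector)  # [[16.] [16.]]
```

This is the value `vector.eval()` returns in the TensorFlow session above.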

10. Exit the Python interpreter by entering:

exit()

The remainder of the Lab will use Jupyter notebooks for developing code.

11. Deactivate the TensorFlow virtual environment:

source deactivate 

The (tensorflow_p27) prefix of the shell prompt is no longer displayed. The Jupyter notebook server needs to be started outside of any virtual environment to be able to discover the available virtual environments.

Starting a Jupyter Notebook Server

1. In your SSH shell, enter the following command to start the Jupyter notebook server in the background:

nohup jupyter notebook &

The nohup command stands for "no hangup" and allows the Jupyter notebook server to continue running even if your SSH connection is terminated. After a couple of seconds, a message is displayed about output for the process being written to the nohup.out file:


This will allow you to continue to enter commands at the shell prompt.

2. Press Enter to move to a clean command prompt, then tail the notebook's log file to watch for when the notebook is ready to accept connections:

tail -f nohup.out

The notebook is ready when you see The Jupyter Notebook is running at:


3. Press ctrl+c to stop tailing the log file.

4. Enter the following to get an authenticated URL for accessing the Jupyter notebook server:

jupyter notebook list

By default, Jupyter notebooks prevent access by anonymous users. After all, you can run arbitrary code through the notebook interface. The token parameter in the URL is one way to authenticate yourself when accessing the notebook server. The /home/ubuntu at the end of the output indicates the working directory of the server.
