TensorFlow dictionary

Post updated February 10, 2019 by Rokas Balsys

On this page I will list the most commonly used TensorFlow commands and explain what they do, because it's hard to build anything if you don't know what each command does.

tf.constant():

tf.constant() creates a constant tensor. What is a Tensor? TensorFlow programs use a data structure called a tensor to represent all the data. Any type of data you plan to use for your model can and should be stored in Tensors. Simply put, a Tensor is a multi-dimensional array (0-D tensor: scalar, 1-D tensor: vector, 2-D tensor: matrix, and so on). So here is the tf.constant() definition:

tf.constant(value, dtype=None, shape=None, name='Const', verify_shape=False)

value: A constant value (or list) of output type dtype.
dtype: The type of the elements of the resulting tensor.
shape: Optional dimensions of resulting tensor.
name: Optional name for the tensor.
verify_shape: Boolean that enables verification of a shape of values.

The argument value can be a constant value, or a list of values of type dtype. If value is a list, then the length of the list must be less than or equal to the number of elements implied by the shape argument. In the case where the list length is less than the number of elements specified by shape, the last element in the list will be used to fill the remaining entries.

# Constant Tensor populated with single value.
tensor = tf.constant(5) => 5

# Constant 1-D Tensor populated with value list.
tensor1 = tf.constant([1, 2, 3, 4, 5, 6, 7]) => [1 2 3 4 5 6 7]

# Constant 2-D tensor populated with scalar value -1.
tensor2 = tf.constant(-1.0, shape=[2, 3]) => [[-1. -1. -1.]
[-1. -1. -1.]]
# Constant 2-D tensor populated with a value list, reshaped to [2, 3].
tensor3 = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3]) => [[1 2 3]
[4 5 6]]
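The fill behaviour described above (the last list element pads the remaining entries) can be mimicked in plain Python. This is a sketch of the documented rule, not the actual TensorFlow implementation, and the helper name `constant_fill` is invented for illustration:

```python
# A pure-Python sketch of tf.constant's documented fill rule for the 2-D case.
def constant_fill(values, shape):
    rows, cols = shape
    n = rows * cols
    # pad with the last element, as tf.constant does when the list is shorter
    padded = list(values) + [values[-1]] * (n - len(values))
    # reshape the flat list into nested rows
    return [padded[i * cols:(i + 1) * cols] for i in range(rows)]

print(constant_fill([1, 2, 3], (2, 3)))  # [[1, 2, 3], [3, 3, 3]]
```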

Later we'll cover how to print these values.

tf.Session() and GRAPH:

When you start learning TensorFlow you may be confused when you hear about tf.Graph and tf.Session. But it’s simple:
• A TensorFlow graph defines the computation. It doesn’t compute anything and it doesn’t hold any values; it just defines the operations that you specified in your code.
• A TensorFlow session allows you to execute graphs, or parts of graphs. It allocates resources (on one or more machines) for that and holds the actual values of intermediate results and variables.

GRAPH:

The main idea of TensorFlow is that all the numerical computations are expressed as a computational graph. In other words, the backbone of any TensorFlow program is a Graph. Anything that happens in your model is represented by the computational graph. Here is a quote from the TensorFlow website: "A computational graph (or graph in short) is a series of TensorFlow operations arranged into a graph of nodes". Basically, it means a graph is just an arrangement of nodes that represent the operations in your model.

First, let's see what a node and an operation mean. The graph is composed of a series of nodes connected to each other by edges. Each node in the graph is called an op (short for operation). So we'll have one node for each operation, either for operations on tensors (like math operations) or for generating tensors (like variables and constants). Each node takes zero or more tensors as inputs and produces a tensor as an output. We'll do an example to visualise this.
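To make the node/edge idea concrete, here is a toy dependency graph sketched in plain Python. The names and structure are invented for illustration; this is not how TensorFlow stores graphs internally:

```python
# Each node stores an operation and its input nodes; nothing is computed
# when the graph is defined, only when evaluate() walks the edges.
graph = {
    "a": ("const", 2),
    "b": ("const", 3),
    "add": ("add", "a", "b"),
}

def evaluate(graph, node):
    op = graph[node]
    if op[0] == "const":
        return op[1]
    if op[0] == "add":
        # recursively evaluate the input nodes, then combine them
        return evaluate(graph, op[1]) + evaluate(graph, op[2])

print(evaluate(graph, "add"))  # 5
```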

Let's start with a basic arithmetic operation like addition to demonstrate a graph. The code adds two values, say a=2 and b=3, using TensorFlow. To do so, we need to call tf.add(). In future tutorials I will cover each method in more detail (documentation can be found at tensorflow.org, or you can just use Google to get to the required page in the documentation). tf.add() has three arguments 'x', 'y', and 'name', where x and y are the values to be added together and name is the operation name, i.e. the name associated with the addition node in the graph.

So here is a simple example where we create a graph without a session (sessions will be covered later):

import tensorflow as tf
# To clear the defined variables and operations of the previous cell
tf.reset_default_graph()
# create graph
a = tf.constant(2, name="a")
b = tf.constant(3, name="b")
c = tf.add(a, b, name="addition")
# creating the writer out of the session
writer = tf.summary.FileWriter('./graphs', tf.get_default_graph())

Note that nothing is printed yet: without a session, the graph is only defined, never executed. Our code created a new folder called "graphs" in the local directory where you are running your code. Open a command line in that directory and start TensorBoard by typing the following line:

tensorboard --logdir=graphs

Now TensorBoard should be running; go to your browser and open http://localhost:6006, and you should see the graph visualization. TensorBoard is a visualization tool for the graph and will be discussed in detail in the future.

Remember, earlier we talked about the two parts of a TensorFlow program: the first step is to create a graph, and to actually evaluate the nodes we must run the computational graph within a Session. In simple words, the written code only generates the graph, which only determines the expected sizes of Tensors and the operations to be executed on them. However, it doesn't assign a numeric value to any of the Tensors. TensorFlow does not execute the graph unless it is told to do so with a session. Hence, to assign these values and make them flow through the graph, we need to create and run a session.

Therefore a TensorFlow Graph is something like a function definition in Python. It WILL NOT do any computation for you (just like a function definition will not have any execution result). It ONLY defines computation operations.
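As a loose analogy in plain Python (no TensorFlow involved), compare a function definition with a function call:

```python
# Defining the function computes nothing, just like building a graph:
def add_two_and_three():
    return 2 + 3

# Only the call (analogous to running the graph in a session) yields a value:
result = add_two_and_three()
print(result)  # 5
```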

SESSION:

To compute anything, a graph must be launched in a session. Technically, a session places the graph onto hardware such as CPUs or GPUs and provides methods to execute it. For example, to run the graph and get the value of 'a', the following code will create a session and execute the graph by running 'a':

import tensorflow as tf

a = tf.constant(2, name="a")
sess = tf.Session()
print(sess.run(a))
sess.close()

The above code creates a Session object (assigned to sess), and then invokes its run method to run just enough of the computational graph to evaluate 'a'. This means that it only runs the part of the graph that is necessary to get the value of 'a' (remember the flexibility of using TensorFlow? In this simple example, it runs the whole graph). While using this method, remember to close the session at the end; that is done in the last line of the above code.

The following code does the same thing and is more commonly used. The only difference is that there is no need to close the session at the end as it gets closed automatically.

import tensorflow as tf

a = tf.constant(2, name="a")
with tf.Session() as sess:
    print(sess.run(a))

Now let's build exactly the same graph we built before, but this time create and save the graph writer inside the session:

import tensorflow as tf
# To clear the defined variables and operations of the previous cell
tf.reset_default_graph()
# create graph
a = tf.constant(2, name="a")
b = tf.constant(3, name="b")
c = tf.add(a, b, name="addition")
# Launch the graph in a session
with tf.Session() as sess:
    # Creating the writer inside the session
    writer = tf.summary.FileWriter('./graphs', sess.graph)
    print(sess.run(c))

I hope this short post has helped you to understand the concept of Graph and Session in TensorFlow. If I forgot to mention something useful please let me know.

tf.Print():

While using TensorFlow, printing is not as straightforward as in plain Python. The easiest way to evaluate the actual value of a Tensor object is to pass it to the Session.run() method, or to call Tensor.eval() when you have a default session (i.e. inside a "with tf.Session():" block). In general, you cannot print the value of a tensor without running some code in a session.

If you are experimenting with the programming model and want an easy way to evaluate tensors, tf.InteractiveSession() lets you open a session at the start of your program, and then use that session for all Tensor.eval() (and Operation.run()) calls. This can be easier in an interactive setting, such as the shell or a notebook.

So below is a simple example: to print the value of a tensor without returning it to your Python program, you can use the tf.Print() operator:

# Initialize session
import tensorflow as tf
sess = tf.InteractiveSession()

# Some constant tensor we want to print the value of
a = tf.constant(99)
b = tf.constant(77)

# Add print operation
a = tf.Print(a, [b], message="This is a constant value: ")

# Evaluate the tensor; this triggers the print operation
a.eval()

Here is the output of the above code (the message and the data list are printed to the console when the op is evaluated, and eval() returns the value of a):

This is a constant value: [77]
99

Now that we know what result we will get from the above code, let's explain it:

tf.Print(input_, data, message=None, first_n=None, summarize=None, name=None)

From the tf.Print() definition above you can see that the first two arguments must be given: 'input_' is a tensor passed through this operation, and 'data' is a list of tensors to print out when the op is evaluated. This is why tf.Print() gave us two results: the printed message with the data list, and the value of the passed-through tensor. Anyway, now we know how to use the tf.Print() function.
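In plain-Python terms, tf.Print behaves roughly like a pass-through function with a printing side effect. This is a sketch of the behaviour only, not the real implementation:

```python
def print_op(input_, data, message=""):
    # print the data list as a side effect, like tf.Print does on evaluation
    print(message, data)
    # pass the first argument through unchanged
    return input_

a = print_op(99, [77], message="This is a constant value: ")
print(a)  # 99
```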

With this method we can only print simple things, like tf.constant() values. If we try to print a tf.Variable() this way we'll get an error, because variables must be initialized first. So it's better to use the following print method with "with tf.Session():". You may also try writing some string instead of 99.

import tensorflow as tf

a = tf.constant(99)

with tf.Session() as session:
    print(session.run(a))

With the above code, we'll see the result printed in the Python shell as usual.
But if we would like to do some maths inside this function, things would be different. How to use the print function while doing some maths is what we will learn in the tf.Variable() tutorial.

tf.Variable()

TensorFlow is a way of representing computation without actually performing it until asked. In this sense, it is a form of lazy computing, and it allows for some great improvements to the running of code:
• Faster computation of complex variables.
• Distributed computation across multiple systems, including GPUs.
• Reduced redundancy in some computations.

For example, if we wanted to write a very basic Python maths script, it would look something like this:

x = 77
y = x + 33
print(y)

The above script basically just says "create a variable x with value 77, set the value of a new variable y to that plus 33 (which is currently 110), and print it out". The value 110 will print out when you run this program.

The same computation using TensorFlow would look like the following:

import tensorflow as tf

x = tf.constant(77, name='x')
y = tf.Variable(x + 33, name='y')

model = tf.global_variables_initializer()

with tf.Session() as session:
    session.run(model)
    print(session.run(y))

The difference is that y isn't given "the current value of x + 33" as in our previous example. Instead, it is effectively an equation that means "when this variable is computed, take the value of x (as it is then) and add 33 to it". The value of y is only computed when we call session.run(y); defining the variable alone computes nothing.

In the TensorFlow version we have removed the print(y) statement, and instead we have code that creates a session and actually computes the value of y. At first this may look a bit fuzzy, but it works like this:
• Import the tensorflow module and call it tf.
• Create a constant value called x, and give it the numerical value 77.
• Create a Variable called y, and define it as being the equation x + 33.
• Initialize the variables with tf.global_variables_initializer(). In this step, a TensorFlow graph is created of the dependencies between the variables. In this case, the variable y depends on the variable x, and that value is transformed by adding 33 to it. Keep in mind that this value isn’t computed until the last step; up until then, only equations and relations are defined.
• Create a session for computing the values.
• Run the initializer model created above.
• Run just the variable y and print out its current value.
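The lazy-evaluation idea in the steps above can be sketched in plain Python with a deferred computation. A zero-argument lambda stands in for the graph node here; this is an analogy, not TensorFlow code:

```python
x = 77
y = lambda: x + 33   # a recipe for the value; nothing is computed yet
print(y())           # 110 -- computed only on call, like session.run(y)
```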

Constants can also be arrays. Try to predict what this code will do, then run it to confirm.

import tensorflow as tf

x = tf.constant([11, 33, 66], name='x')
y = tf.Variable(x + 33, name='y')

model = tf.global_variables_initializer()

with tf.Session() as session:
    session.run(model)
    print(session.run(y))

Next we can do a more difficult exercise. This task will help you better understand tf.Variable(). The task is: create a constant array of 100 random numbers from 0 up to 1000. Next, create a variable storing the equation y = 5x^2 - 3x - 18. In the final step, print y to the shell and check what you get.
To make it easier for you, here is the way to generate a numpy array:

import numpy as np
data = np.random.randint(1000, size=10)

As a general rule, you should always use numpy for larger lists/arrays of numbers, as it is significantly more memory efficient and faster to compute on than lists. It also provides a significant number of functions that aren't normally available to lists.
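To illustrate that claim, here is a small comparison of a vectorized numpy operation against an explicit Python loop (numpy assumed installed; the array size is kept tiny for illustration):

```python
import numpy as np

data = np.arange(5)                       # array([0, 1, 2, 3, 4])
squared_np = data * data                  # one vectorized operation
squared_list = [v * v for v in range(5)]  # element-by-element Python loop

print(squared_np.tolist())  # [0, 1, 4, 9, 16]
print(squared_list)         # [0, 1, 4, 9, 16]
```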
So here is the code for the task, if you can't solve it by yourself:

import tensorflow as tf
import numpy as np

x = tf.constant(np.random.randint(1000, size=100), name='x')
y = tf.Variable(5*x*x - 3*x - 18, name='y')

model = tf.global_variables_initializer()

with tf.Session() as session:
    session.run(model)
    print(session.run(y))
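You can sanity-check the arithmetic of the exercise with plain numpy before running the TensorFlow version (the values of x below are chosen by hand for illustration):

```python
import numpy as np

# y = 5x^2 - 3x - 18, evaluated for a few hand-picked inputs:
x = np.array([0, 1, 10])
y = 5 * x * x - 3 * x - 18
print(y.tolist())  # [-18, -16, 452]
```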

You can also try updating variables in loops, which we may use later in harder tasks. Take a look at this code, and predict what it will do (then try it yourself):

import tensorflow as tf

x = tf.Variable(0, name='x')

model = tf.global_variables_initializer()

with tf.Session() as session:
    session.run(model)
    for i in range(10):
        x = x + 2
        print(session.run(x))
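For reference, the values the loop above prints can be reproduced in plain Python. Note that this mirrors the arithmetic only; in the TensorFlow version, each `x = x + 2` rebinds the Python name x to a new add node in the graph rather than updating the variable in place:

```python
# Plain-Python version of the loop: start at 0, add 2 ten times.
x = 0
results = []
for i in range(10):
    x = x + 2
    results.append(x)
print(results)  # [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
```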