19.11.2018

Pros And Cons Of Using TensorFlow In The Production Environment

As the hype around deep learning gains momentum, many frameworks and libraries are emerging around it. I've had the chance to use some of them in practice and want to share my opinions and observations on what works well and what you should be aware of when working with these frameworks. One step at a time, let's start with TensorFlow.

TensorFlow is one of many frameworks used for working with neural networks (and often referred to as the best of them). It is Google's open-source library for numerical computation, and it also offers a set of tools for designing, training and fine-tuning neural networks.

At NeuroSYS we used it for computer vision problems such as image classification and generating artificial images, for instance when creating a real-time object detection system and a bacterial classification system (you can learn more about this project at its early stage here).

But enough of broad descriptions, let’s jump straight to the point.

TensorFlow Pros and Cons


Pros:


#1 - Great data visualization tool - TensorBoard


TensorBoard is a suite of visualization tools in the TensorFlow library that makes it easier to understand, debug, and optimize neural networks. It lets you present neural network graphs, inputs, outputs, training progress and any additional information in a clean, readable way using only a few lines of code.


(TensorBoard graph visualization)

Here is how you can easily visualize data in TensorBoard:


import tensorflow as tf

# define which variables to generate summaries from
loss_summary = tf.summary.scalar('loss_1', loss1)
image_summary = tf.summary.image('generated_image', result)
# merge the whole summary into one instruction
summary = tf.summary.merge([loss_summary, image_summary])

with tf.Session() as sess:
    # define the summary writer
    summary_writer = tf.summary.FileWriter('path/to/summary/', graph=sess.graph)
    # run the summary alongside the computation
    network_result, summary_values = sess.run([network_output, summary],
                                              feed_dict={input: input_data})
    # write the summary to disk so it can be viewed in TensorBoard
    summary_writer.add_summary(summary_values)
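
Once the summaries are written to disk, pointing TensorBoard at the same directory (tensorboard --logdir path/to/summary/) is enough to browse them in your browser.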



#2 - Easy production-ready model sharing


With TensorFlow, you can easily share a trained model. This may sound like a must-have feature, but it's still not standard across frameworks.

Many frameworks require the full code of a model in order to load its weights into it. TensorFlow, on the other hand, requires only a checkpoint file and the names of the layers you need for inference (the input layer is the most important here, because without it we cannot run the computation graph).

This feature leads us to pro #3, as it makes a TensorFlow model useful for a broad spectrum of applications. It allows you to use the same model (without rewriting or recompiling it) in various projects, no matter what language they are written in.

It's still not a one-liner, but at least you don't need to define the whole model:


import tensorflow as tf

# imported graph to be used as default later
imported_graph = tf.Graph()
with imported_graph.as_default():
    # read the graph definition from file
    graph_def = tf.GraphDef()
    with tf.gfile.GFile('path/to/model', 'rb') as model:
        # parse it
        graph_def.ParseFromString(model.read())
    # and import it into TensorFlow
    tf.import_graph_def(graph_def, name="imported_model")

# run inference using only the tensor names (note the name scope prefix)
with tf.Session(graph=imported_graph) as sess:
    output = sess.run("imported_model/output:0",
                      feed_dict={"imported_model/input:0": our_input})



#3 - Multiple language support


TensorFlow is designed to support multiple client languages. It officially supports Python, C++, JavaScript, Go, Java and Swift, although only Python, the most commonly used of them, supports all available features.

Due to its high popularity, the TensorFlow community has created bindings for other languages, such as C# (which we have used, and I can say it worked pretty well) and Ruby.

This ensures portability and allows developers to use machine learning models for desktop, mobile and even web applications.

Cons:


#1 - Overly cluttered code


This is something the TensorFlow developers are working on and have already announced fixes for in the 2.0 release. Sadly, the current state of the framework is inconsistent:

Which one to use?

  • tf.nn.conv2d

  • tf.nn.convolution

  • tf.layers.conv2d

  • tf.layers.Conv2D

  • tf.contrib.layers.conv2d

  • tf.contrib.layers.convolution2d

Even typing "tf conv2d" into Google brings up three of these options, making it really frustrating when you just want to find out which operation to use.

#2 - Need for extra code


As you can see in the examples above, the amount of code needed to add functionality is not that big. Nevertheless, naming conventions can be inconsistent, and the complexity of the modules can be overwhelming.

Every computation needs to be called from a session handler, which makes using TensorFlow feel like using a language within another language. You can forget about writing clean, Pythonic code when even something as simple as a for loop needs to be expressed with a TensorFlow equivalent, as in the sketch below.
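
As a minimal sketch (assuming TensorFlow 1.x; the variable names are my own), here is what a trivial counting loop looks like when written with tf.while_loop:

import tensorflow as tf

# in plain Python this would simply be: total = sum(range(10))
i = tf.constant(0)
total = tf.constant(0)

def condition(i, total):
    return i < 10

def body(i, total):
    return [i + 1, total + i]

_, total = tf.while_loop(condition, body, [i, total])

with tf.Session() as sess:
    print(sess.run(total))  # 45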

Sometimes the documentation even "forgets" to tell you that you need to include additional instructions in your code to make things work. This happened to me when I was trying to write my own data loader using the TensorFlow pipeline, with multiple workers to parallelize computation.

However, what was not included in the documentation is that you need to launch these workers manually. Without this step, the whole script simply halts, waiting for data from workers that were never started, with no error or warning. A sketch of what this looks like follows.
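
Here is a minimal sketch of such a queue-based input pipeline, assuming the tf.train queue-runner API (file names and parameters are illustrative):

import tensorflow as tf

# queue-based input pipeline with several worker threads
filename_queue = tf.train.string_input_producer(['data_0.tfrecords', 'data_1.tfrecords'])
reader = tf.TFRecordReader()
_, serialized_example = reader.read(filename_queue)
batch = tf.train.shuffle_batch([serialized_example], batch_size=32,
                               capacity=1000, min_after_dequeue=100,
                               num_threads=4)

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    # without this call the workers are never started and sess.run(batch)
    # blocks forever, waiting for data that never arrives
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    data = sess.run(batch)
    coord.request_stop()
    coord.join(threads)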

This is an example of how the modules aren't always seamlessly connected with each other, which leads to a lack of communication between them and, eventually, to situations like the one described above.

#3 - Frequent releases


To some, this might sound like an advantage. But in reality, new releases every one or two months are better avoided in a production environment, especially when they tend to break backward compatibility.

We find this especially harmful when using bindings for other languages, such as TensorSharp for C#. My team once ran into a problem after new arguments were introduced in one of the commonly used functions, which broke compatibility with TensorSharp. The easiest solution we found was to re-export the whole model using an older version of TensorFlow.

We understand that some changes in a rapidly developed framework are inevitable, but perhaps the community would benefit more if releases were less frequent and more attention was paid to the consistency of the framework.

A few tips for easier work with TensorFlow


Tip #1


For better performance, avoid running a session to compute just one result at a time. Fetching many operations one by one takes more time than gathering them all in a single session run, as sketched below. This works because TensorFlow can parallelize the computation, and because it avoids the overhead of dispatching a session run multiple times.
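
A minimal sketch of the difference (loss, accuracy and predictions stand for tensors assumed to be already defined in your graph):

import tensorflow as tf

with tf.Session() as sess:
    # slower: three separate graph executions
    loss_value = sess.run(loss, feed_dict={input: input_data})
    acc_value = sess.run(accuracy, feed_dict={input: input_data})
    preds = sess.run(predictions, feed_dict={input: input_data})

    # faster: a single execution that fetches everything at once
    loss_value, acc_value, preds = sess.run(
        [loss, accuracy, predictions],
        feed_dict={input: input_data})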

Tip #2


Avoid clutter in your code! This is rather general advice, but it's really essential when working with TensorFlow. Moving the definition of a network to a separate file is a good idea, as sketched below. This way you can easily modify it later without searching through large files.
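
For example, a minimal sketch of keeping the network definition in its own module (file and function names here are my own):

# model.py
import tensorflow as tf

def build_network(inputs, num_classes=10):
    # the whole architecture lives here, away from the training script
    net = tf.layers.conv2d(inputs, filters=32, kernel_size=3, activation=tf.nn.relu)
    net = tf.layers.max_pooling2d(net, pool_size=2, strides=2)
    net = tf.layers.flatten(net)
    return tf.layers.dense(net, num_classes)

# train.py
# from model import build_network
# logits = build_network(input)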

To Sum Up


Despite all the cons, TensorFlow is one of the most widely used frameworks for deep learning projects, adopted by such giants as IBM, Twitter, 9GAG, Ocado... and (surprise, surprise!) Google. It stays at the top of my list as well, although I truly hope those flaws get fixed one day (and the sooner the better).

At the same time, I think TensorFlow can be a bit overwhelming for beginners, as it often provides too many implementation options. This can be confusing for those without experience or a basic understanding of the differences between the suggested implementations. If this scares you off, it's better to opt for simpler alternatives.

So that's it. I hope you found something useful in this blog post. Stay tuned for further updates on deep learning frameworks. And don't hesitate to drop me a line if you have any questions - t.bonus@neurosys.com.



About the author:



Tomasz Bonus
Deep Learning Researcher at NeuroSYS.
Passionate about artificial intelligence, data science
and the future of tech.

