TensorFlow is an open source software library for high-performance numerical computation that allows users to create sophisticated deep learning and machine learning applications. The flexible architecture of this library enables you to deploy computation to one or more CPUs or GPUs. TensorFlow also includes TensorBoard, a data visualization toolkit.

There are a number of methods that can be used to install TensorFlow, such as using pip to install the wheels available on PyPI. However, the recommended way is to install TensorFlow using the conda package and environment management system, which offers several benefits over pip. It is as easy as running "conda install tensorflow" from the command line interface, and this will install all the necessary and compatible dependencies for the package. For more information on TensorFlow in Anaconda, please read the following article.

Whether using CPUs or GPUs, you can install TensorFlow directly with conda. You will first need to set up your own Anaconda environment.
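If you would like to keep TensorFlow separate from your base installation, one option is to create a dedicated conda environment first. The commands below are a minimal sketch assuming a standard conda workflow; the environment name "tf-env" is just an example, and on newer conda versions you may need "conda activate" instead of "source activate".

          [login.cx1]$ module load anaconda3/personal

          [login.cx1]$ conda create -n tf-env python

          [login.cx1]$ source activate tf-env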

CPU

          [login.cx1]$ module load anaconda3/personal

          [login.cx1]$ conda install tensorflow

* Intel has added support for the Intel Math Kernel Library for Deep Neural Networks (Intel MKL-DNN) to TensorFlow. The TensorFlow with Intel MKL-DNN documentation contains details on the MKL optimizations, which include optimizations to TensorFlow for Intel Xeon and Intel Xeon Phi processors. If you wish to use this version instead of the above, use:

          [login.cx1]$ conda install tensorflow-mkl
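Whichever variant you install, a quick sanity check that the package imports correctly and reports its version can be run from the command line, for example:

          [login.cx1]$ python -c "import tensorflow as tf; print(tf.__version__)"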

Make sure that you set "ompthreads" to the same value as "ncpus" in the PBS resource selection so that all of the allocated CPUs are used:

          #PBS -l select=N:ncpus=X:mem=Y:ompthreads=X
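For illustration, a minimal CPU job script might look like the following. The resource numbers, walltime and the script name "my_model.py" are placeholders rather than recommendations:

          #PBS -l select=1:ncpus=8:mem=32gb:ompthreads=8
          #PBS -l walltime=02:00:00

          # load your personal Anaconda installation
          module load anaconda3/personal

          # run from the directory the job was submitted from
          cd $PBS_O_WORKDIR
          python my_model.py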

GPU

Many of the functions in TensorFlow can be accelerated using NVIDIA GPUs. The gain in acceleration can be especially large when running computationally demanding deep learning applications.

          [login.cx1]$ module load anaconda3/personal

          [login.cx1]$ conda install tensorflow-gpu

If using GPU-accelerated TensorFlow, please read more on GPU jobs.

Remember to load the corresponding CUDA library (available via "module load cuda") in your PBS job script.
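Putting these pieces together, a GPU job script could look roughly like the sketch below. The GPU resource selection shown ("ngpus=1") is an assumption, so check the GPU jobs documentation above for the exact syntax used on your queue; "my_model.py" is again a placeholder:

          #PBS -l select=1:ncpus=4:mem=24gb:ngpus=1
          #PBS -l walltime=02:00:00

          # load Anaconda and the matching CUDA library
          module load anaconda3/personal
          module load cuda

          cd $PBS_O_WORKDIR
          python my_model.py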

Finally, consider setting up multiple conda environments so that you can compare the performance of different TensorFlow versions.
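As a sketch of that approach, you could create one environment per variant or release and switch between them; the environment names and the pinned version below are examples only, and which versions are available depends on your conda channels:

          [login.cx1]$ conda create -n tf-default tensorflow

          [login.cx1]$ conda create -n tf-mkl tensorflow-mkl

          [login.cx1]$ conda create -n tf-1.13 tensorflow=1.13

          [login.cx1]$ source activate tf-mkl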