Building TensorFlow 2.2 with CUDA 11 support and TensorRT 7 on Ubuntu 18.04 LTS

autitya
Jul 28, 2020

Step 1. Download required files

Download CUDA Toolkit 11.0

#mkdir /home/localrepo
#cd /home/localrepo
#wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin
#sudo mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600
#wget http://developer.download.nvidia.com/compute/cuda/11.0.2/local_installers/cuda-repo-ubuntu1804-11-0-local_11.0.2-450.51.05-1_amd64.deb
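
Before installing anything, it can help to check whether an NVIDIA driver is already present; CUDA 11.0 bundles driver 450.51.05, so an older driver will be upgraded by the `cuda` package. This check is just a sketch and is safe to run on any machine:

```shell
# Report the current driver/GPU state, or a note if no driver is installed yet.
nvidia-smi 2>/dev/null || echo "no NVIDIA driver detected yet"
```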

Download TensorRT

  1. Go to: https://developer.nvidia.com/tensorrt.
  2. Click Download Now.
  3. Log in with your user ID and password.
  4. Select TensorRT 7.
  5. Select the check-box to agree to the license terms.
  6. Click TensorRT 7.1 GA.
  7. Click the package "TensorRT 7.1.3.4 for Ubuntu 1804 and CUDA 11.0 DEB local repo packages".
  8. Move the downloaded package to /home/localrepo.

Download TensorFlow 2.2

#cd /home/localrepo
#git clone https://github.com/tensorflow/tensorflow.git

Step 2. Install CUDA Toolkit 11.0 and TensorRT

#cd /home/localrepo
#sudo dpkg -i cuda-repo-ubuntu1804-11-0-local_11.0.2-450.51.05-1_amd64.deb
#sudo apt-key add /var/cuda-repo-ubuntu1804-11-0-local/7fa2af80.pub
#sudo dpkg -i nv-tensorrt-repo-${os}-${tag}_1-1_amd64.deb
#sudo apt-key add /var/nv-tensorrt-repo-${tag}/7fa2af80.pub
#sudo apt-get update
#sudo apt-get -y install cuda
#sudo apt-get install tensorrt cuda-nvrtc-11-0
#sudo apt-get install uff-converter-tf
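
The ${os} and ${tag} placeholders above must match the TensorRT package downloaded in Step 1. For the TensorRT 7.1.3.4 / CUDA 11.0 package, the values would look roughly like this (illustrative values, check them against the actual filename of the .deb you downloaded):

```shell
# Illustrative only: confirm these against your downloaded .deb filename.
os="ubuntu1804"
tag="cuda11.0-trt7.1.3.4-ga-20200617"
echo "nv-tensorrt-repo-${os}-${tag}_1-1_amd64.deb"
```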

If using Python 2.7:

#sudo apt-get install python-libnvinfer-dev

If using Python 3.x:

#sudo apt-get install python3-libnvinfer-dev
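
After the installs, it is worth verifying that the TensorRT packages actually landed, following the pattern in NVIDIA's install guide. A harmless check (it prints a fallback message on a machine without TensorRT):

```shell
# List installed TensorRT-related packages, or fall back to a message if none.
dpkg -l 2>/dev/null | grep -Ei 'tensorrt|nvinfer' || echo "no TensorRT packages found"
```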

Step 3. Building TensorFlow 2.2

#sudo apt install python3 python3-dev python python-dev python3-pip python-pip
#pip install six 'numpy<1.19.0' wheel setuptools mock 'future>=0.17.1'
#pip install keras_applications --no-deps
#pip install keras_preprocessing --no-deps

#cd /home/localrepo
#wget https://github.com/bazelbuild/bazel/releases/download/3.1.0/bazel-3.1.0-installer-linux-x86_64.sh
#sudo ./bazel-3.1.0-installer-linux-x86_64.sh
#cd tensorflow
#git checkout r2.2
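
Before running ./configure, it may be worth confirming that the intended bazel is first on PATH (the transcript below shows 3.1.0). A quick check that is safe anywhere:

```shell
# Print the first line of the bazel version report, or a note if bazel is missing.
(bazel version 2>/dev/null || echo "bazel not on PATH") | head -n1
```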

#./configure

You have bazel 3.1.0 installed.
Please specify the location of python. [Default is /usr/bin/python3]:

Found possible Python library paths:
/usr/lib/python3/dist-packages
/usr/local/lib/python3.6/dist-packages
Please input the desired Python library path to use. Default is [/usr/lib/python3/dist-packages]

Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: N
No OpenCL SYCL support will be enabled for TensorFlow.

Do you wish to build TensorFlow with ROCm support? [y/N]: N
No ROCm support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]: Y
CUDA support will be enabled for TensorFlow.

Do you wish to build TensorFlow with TensorRT support? [y/N]: Y
TensorRT support will be enabled for TensorFlow.

Found CUDA 11.0 in:
/usr/local/cuda-11.0/targets/x86_64-linux/lib
/usr/local/cuda-11.0/targets/x86_64-linux/include
Found cuDNN 8 in:
/usr/lib/x86_64-linux-gnu
/usr/include
Do you want to use clang as CUDA compiler? [y/N]: N
nvcc will be used as CUDA compiler.

Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native -Wno-sign-compare]:

Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: N
Not configuring the WORKSPACE for Android builds.

Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
--config=mkl # Build with MKL support.
--config=monolithic # Config for mostly static monolithic build.
--config=ngraph # Build with Intel nGraph support.
--config=numa # Build with NUMA support.
--config=dynamic_kernels # (Experimental) Build kernels into separate shared objects.
--config=v2 # Build TensorFlow 2.x instead of 1.x.
Preconfigured Bazel build configs to DISABLE default on features:
--config=noaws # Disable AWS S3 filesystem support.
--config=nogcp # Disable GCP support.
--config=nohdfs # Disable HDFS support.
--config=nonccl # Disable NVIDIA NCCL support.
Configuration finished

Build the pip package for TensorFlow 2.2

bazel build //tensorflow/tools/pip_package:build_pip_package
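
Note that the official TensorFlow build documentation passes --config=cuda explicitly for GPU builds, and on machines with limited RAM it helps to cap bazel's resource usage. A hedged variant of the build command (the resource values are illustrative, not prescriptive; the snippet only prints the command rather than running it):

```shell
# Variant of the build command: --config=cuda is what the official TF docs
# use for GPU builds; --local_ram_resources is an illustrative cap (in MB)
# for memory-constrained machines. Printed here, not executed.
build_cmd="bazel build --config=cuda --config=opt --local_ram_resources=4096 //tensorflow/tools/pip_package:build_pip_package"
echo "$build_cmd"
```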

If you get an error loading package @io_bazel_rules_docker when building TensorFlow, add this to the top of the WORKSPACE file inside the tensorflow folder:

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

# Download the rules_docker repository at release v0.14.4
http_archive(
    name = "io_bazel_rules_docker",
    sha256 = "4521794f0fba2e20f3bf15846ab5e01d5332e587e9ce81629c7f96c793bb7036",
    strip_prefix = "rules_docker-0.14.4",
    urls = ["https://github.com/bazelbuild/rules_docker/releases/download/v0.14.4/rules_docker-v0.14.4.tar.gz"],
)

If you do the same, you probably want to pick up the latest version from https://github.com/bazelbuild/rules_docker and copy the code snippet from its README.

The bazel build command creates an executable named build_pip_package; this is the program that builds the pip package. Run it as shown below to build a .whl package in the /tmp/tensorflow_pkg directory.

To build from a release branch:

./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg

To build from master, use --nightly_flag to get the right dependencies:

./bazel-bin/tensorflow/tools/pip_package/build_pip_package --nightly_flag /tmp/tensorflow_pkg

Install the package
The filename of the generated .whl file depends on the TensorFlow version and your platform. Use pip install to install the package, for example:

pip install /tmp/tensorflow_pkg/tensorflow-version-tags.whl
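
A quick smoke test after installing: run it from outside the tensorflow source directory, otherwise Python picks up the local source tree instead of the installed wheel. The fallback message is just so the check degrades gracefully on a machine without TensorFlow:

```shell
# Import the freshly installed wheel and report its version and visible GPUs.
cd /tmp
python3 -c "import tensorflow as tf; print(tf.__version__); print(tf.config.list_physical_devices('GPU'))" \
  2>/dev/null || echo "tensorflow not importable"
```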
