I’ve been looking around for a solid resource on how to get TensorFlow to run on the Jetson TK1. Most of what I found covered TF 0.8, which was the last TF version able to use cuDNN 6, the latest cuDNN available for the TK1.
The TK1 is an aging platform whose support has been discontinued, but it is still a cheap option for high-powered embedded compute. Unfortunately, it is so outdated that the latest and greatest DNN frameworks can no longer target its CUDA GPU, but we can certainly use the CPU!
So, a word of disclaimer: this compiled TF version will not use the GPU, only the CPU. However, it will let you run the most recent network architectures with the latest layer implementations.
Cross compilation for the TK1 solves the acute shortage of disk space on the device itself, as well as the slowness of compiling on it. On the other hand, it requires setting up a cross-compilation toolchain, which took a while to find.
I am going to assume an Ubuntu 16.04 x86_64 host machine, which is what I have; you could just as well do this in a VM or a Docker container, even on Windows.
I’ve learned a great deal from the following resources:
- https://lengerrong.blogspot.com/2017/09/cross-compile-tensorflow-for-armv7l.html (this was a godsend)
- https://github.com/bazelbuild/bazel/wiki/Building-with-a-custom-toolchain
- https://devtalk.nvidia.com/default/topic/813810/cmake-cross-compilation-for-jetson-tk1-makefile-for-nsight/
But I had my own recipe of course, otherwise I wouldn’t have posted this.
Preparation
Starting from the host, the following toolchain needs to be installed: http://releases.linaro.org/components/toolchain/binaries/4.9-2017.01/arm-linux-gnueabihf/gcc-linaro-4.9.4-2017.01-x86_64_arm-linux-gnueabihf.tar.xz
It’s just GCC 4.9, built against libc 2.19, which is what powers the vanilla TK1 install.
Initially I toyed with the gcc-5-arm-linux-gnueabihf package from Ubuntu’s apt sources – but don’t bother with it. It builds against libc 2.21, which is too recent for the TK1.
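To see why the libc version matters, here is a quick way to compare glibc versions with `sort -V`. The 2.19/2.21 numbers are from my setup; substitute whatever `ldd --version` reports on your device and whatever glibc your toolchain was built against.

```shell
# The TK1's stock rootfs ships glibc 2.19 (check yours with `ldd --version`).
# Binaries linked against a newer glibc fail on the device with errors like
# "version `GLIBC_2.21' not found".
tk1_glibc="2.19"
toolchain_glibc="2.19"   # the Linaro 4.9.4 toolchain; apt's gcc-5 build uses 2.21

# sort -V does a proper version-aware comparison
newest=$(printf '%s\n%s\n' "$toolchain_glibc" "$tk1_glibc" | sort -V | tail -n1)
if [ "$newest" = "$tk1_glibc" ]; then
    echo "toolchain glibc is compatible with the TK1"
else
    echo "toolchain glibc ($toolchain_glibc) is too new for the TK1"
fi
```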
Once you have that untarred in a location you know, get TensorFlow from GitHub and check out the r1.5 branch (although this will probably work with later versions):
$ git clone https://github.com/tensorflow/tensorflow
$ cd tensorflow/
$ git checkout r1.5
Next up, install Bazel.
Follow the instructions from the TF docs: https://www.tensorflow.org/install/install_sources, which direct you here: https://docs.bazel.build/versions/master/install.html
Install relevant python libs:
$ sudo apt-get install python-numpy python-dev python-pip python-wheel
Now, I followed the instructions from https://lengerrong.blogspot.com/2017/09/cross-compile-tensorflow-for-armv7l.html to modify and add the Bazel scripts for cross compilation.
It didn’t work right off the bat; I had to make some modifications. In essence, though, that post is correct.
You’ll need to add the following to the end of your $TF_ROOT/WORKSPACE file:
new_local_repository(
    name = 'toolchain_target_arm_linux_gnueabi',
    path = '$GCC_ROOT',
    build_file = 'arm-compiler/cross_toolchain_target_arm_linux_gnueabi_host_x86-64.BUILD',
)
(replace $GCC_ROOT with the actual path to the Linaro GCC toolchain)
Create a new directory:
$ mkdir $TF_ROOT/arm-compiler
Then you’ll need three files in it: the $TF_ROOT/arm-compiler/cross....BUILD file (the one named in the WORKSPACE entry above), the $TF_ROOT/arm-compiler/BUILD file, and the $TF_ROOT/arm-compiler/CROSSTOOL file.
They’re pretty standard and come from Bazel’s docs, modified for this purpose.
Remember to replace $GCC_ROOT in them with the path where you unpacked the gcc-4.9 ARM cross-compilation toolchain.
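For orientation, here is roughly the shape of the toolchain .BUILD file – a sketch adapted from Bazel’s custom-toolchain documentation, not the exact file from this build; the filegroup names and globs are illustrative and must match whatever your CROSSTOOL references:

```python
# Sketch of arm-compiler/cross_toolchain_target_arm_linux_gnueabi_host_x86-64.BUILD.
# Paths are relative to the root of the unpacked Linaro toolchain.
package(default_visibility = ['//visibility:public'])

filegroup(
    name = 'gcc',
    srcs = ['bin/arm-linux-gnueabihf-gcc'],
)

filegroup(
    name = 'ar',
    srcs = ['bin/arm-linux-gnueabihf-ar'],
)

filegroup(
    name = 'ld',
    srcs = ['bin/arm-linux-gnueabihf-ld'],
)

filegroup(
    name = 'compiler_pieces',
    srcs = glob([
        'arm-linux-gnueabihf/**',
        'libexec/**',
        'lib/gcc/arm-linux-gnueabihf/**',
        'include/**',
    ]),
)
```

The idea is simply to expose the cross compiler’s binaries and support files to Bazel as labeled filegroups that the CROSSTOOL and BUILD files can then point at.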
Build TF
At this point you’re ready to configure TensorFlow:
$ echo -e "\n\nn\nn\nn\nn\n\n\n\n\n\n\n-march=armv7-a -std=c++11\n\n" | ./configure
This answers No to most of the configuration questions and supplies the compiler flags for the ARMv7 architecture and C++11.
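If the piped-answers one-liner feels fragile (the prompt order changes between TF releases), the configure script in this era also reads answers from environment variables. The variable names below are the ones from TF 1.5’s configure.py – verify them against your checkout before relying on them:

```shell
# Non-interactive alternative to piping answers into ./configure.
export PYTHON_BIN_PATH="$(command -v python || command -v python3)"
export TF_NEED_CUDA=0      # CPU-only: the TK1's GPU is out of reach here
export TF_NEED_GCP=0
export TF_NEED_HDFS=0
export TF_NEED_S3=0
export TF_ENABLE_XLA=0
export CC_OPT_FLAGS="-march=armv7-a -std=c++11"
# then, from $TF_ROOT, run:  ./configure
```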
And now for the main event – building TF:
$ bazel build --crosstool_top=//arm-compiler:toolchain --cpu=armeabi-v7a --config=opt -s tensorflow/examples/label_image/...
We’re building not only TF itself but also the “label_image” sample.
This will take ~45 minutes to build.
Run the example on the TK1
At the end, if no errors arose, you should see a bazel-bin directory with the outputs. Running file on the label_image binary should confirm it’s a 32-bit ARM ELF executable.
Copy the directory to the TK1 and let it rip!
(to run label-image you’d need the Inception precompiled net and the sample image, follow these instructions from the TF team: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/label_image)
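Sketching out those instructions, fetching the model data and running the sample might look roughly like this on the TK1 – the URL and filenames are the ones given in the label_image README for this era, and --graph/--labels/--image are the sample’s standard flags, but do verify against the README for your checkout:

```shell
# Fetch the frozen Inception v3 graph and labels next to the binary
# (the tarball contains inception_v3_2016_08_28_frozen.pb and
#  imagenet_slim_labels.txt, per the label_image README).
curl -L "https://storage.googleapis.com/download.tensorflow.org/models/inception_v3_2016_08_28_frozen.pb.tar.gz" | tar -xz

# Point the sample at the graph, labels, and a test image of your own:
./label_image --graph=inception_v3_2016_08_28_frozen.pb \
              --labels=imagenet_slim_labels.txt \
              --image=grace_hopper.jpg
```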
ubuntu@tegra-ubuntu:~/Downloads/label_image$ LD_LIBRARY_PATH=label_image.runfiles/org_tensorflow/tensorflow/:/lib/arm-linux-gnueabihf/ ./label_image
2018-03-08 19:56:47.825285: I tensorflow/examples/label_image/main.cc:250] military uniform (653): 0.834305
2018-03-08 19:56:47.825620: I tensorflow/examples/label_image/main.cc:250] mortarboard (668): 0.0218693
2018-03-08 19:56:47.825826: I tensorflow/examples/label_image/main.cc:250] academic gown (401): 0.010358
2018-03-08 19:56:47.825996: I tensorflow/examples/label_image/main.cc:250] pickelhaube (716): 0.00800815
2018-03-08 19:56:47.826166: I tensorflow/examples/label_image/main.cc:250] bulletproof vest (466): 0.00535088
(note that LD_LIBRARY_PATH must be set properly for the executable to find the TF shared libraries)
Best of luck!
Roy