flanaras' adventures

Anything that I might come across :)

Build TensorFlow on an offline computer

Hello all,

I wanted to build TensorFlow from source on a computer that doesn’t have access to the Internet. After some research I didn’t find anyone who had successfully managed that, specifically with CUDA support. There is an offline helper, tensorflow-offline by @amutu, but it supports TensorFlow version 1.2.1. To be fair, I didn’t try whether it would work for the currently latest one (1.5.0), but I need CUDA support.

TL;DR Configure the application, then run bazel fetch and bazel build for a few seconds on an online computer using “bazel --output_user_root=`pwd`/../tf_tmp %REST_OF_COMMAND%”. Package and transfer ../tf_tmp/%HASH1%/external to the remote machine. Configure the application identically there, run the same build as online (it will fail), extract the package into ../tf_tmp/%HASH2%/external and then build again. Voila!

The suggested solution is general; it works for other Bazel-based applications as well.

Here it goes!

This will require setting up the development environment twice. For me this means more or less having CUDA installed (9.1 locally and 9.0 on the server), cuDNN (7.0 on both) and NCCL (2.1.4-1), using bazel release 0.10.1.

Set up both environments identically, as described in the build guide.

Online server

Run (check * Important at the end of the post):

 > bazel --output_user_root=`pwd`/../tf_tmp fetch --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package

You might need to run it multiple times, or delete the ../tf_tmp/install folder, until it fully succeeds. Then start building the application, because some packages might only be checked at that point:

 > bazel --output_user_root=`pwd`/../tf_tmp build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package

Exit after a few seconds, when you see it has started to compile.

Navigate to ../tf_tmp/%HASH1%/, where 22b7191bf872641ec533c3a935b4af91 is my hash, i.e.

 > cd ../tf_tmp/22b7191bf872641ec533c3a935b4af91/

Create an archive with the external folder

 > tar czf external.tar.gz external

Offline server:


 > bazel --output_user_root=`pwd`/../tf_tmp build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package

This will create the folder structure in tf_tmp so we can add the external folder.

Online server:

Move to the offline server:

 > scp external.tar.gz server:/path/on/remote

Offline server:

Extract external.tar.gz to /path/to/tf_tmp/%HASH2%/:

 > tar xzf external.tar.gz -C $HOME/tf_tmp/7d826155cbbfce6e938d5e7e034e981b/
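The pack/transfer/unpack round trip can be sketched end-to-end like this (a scratch directory stands in for the real bazel output root, using the two hash directory names from above; the real external folder is of course produced by bazel fetch, not by hand):

```shell
# Online side: pack the fetched external dependencies.
mkdir -p online_root/22b7191bf872641ec533c3a935b4af91/external
echo "cached dependency" > online_root/22b7191bf872641ec533c3a935b4af91/external/dep.txt
tar czf external.demo.tar.gz -C online_root/22b7191bf872641ec533c3a935b4af91 external

# Offline side: unpack into the (differently named) hash directory that
# the failed bazel build created there.
mkdir -p offline_root/7d826155cbbfce6e938d5e7e034e981b
tar xzf external.demo.tar.gz -C offline_root/7d826155cbbfce6e938d5e7e034e981b
cat offline_root/7d826155cbbfce6e938d5e7e034e981b/external/dep.txt
```

The key point is the -C flag: it keeps the archive rooted at external/, so it can be dropped into whatever hash directory the offline machine ends up with.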

Build TensorFlow and it will succeed.

If by any chance you change the location of the TensorFlow source path or rename the folder, you will need to extract external.tar.gz again.

* Important: this only adds the --output_user_root parameter, which should not be within the same folder as the source code. When the bazel command is

 > bazel build

this becomes

 > bazel --output_user_root=/some/other/path build

The solution is based on @truatpasteurdotfr’s idea.

Hope this helps someone

— flanaras

— update [2018/03/16]: Add credits and bazel version


Fix Spotify’s freeze on i3

Hello all,

I have been working with i3 over the last few months, and there is an issue with the Spotify client that I hadn’t found a solution for. When listening to a song and selecting another one (no matter how, i.e. pressing next or picking a specific song), the client would not respond for around 20-30 seconds, and only after that would the song(s) change. I found this very irritating. Fortunately, I found a solution today.

It seems the Linux Spotify client doesn’t abide by the “dbus rules”. I found, and quote, “… Deadlocks could be caused by scripts called by Awesome, which rely on buggy spotify dbus properties…” on the ArchLinux wiki page “Spotify”.

My specs

  • Spotify version
  • i3 version 4.14.1
  • openSUSE Tumbleweed 20180124

The solution for this is simple: go to



and add this line


Restart the Spotify client and it should work.


Enjoy your uninterrupted music.



Fix offline Linux installation

Hello all,

Recently I had a problem with a computer: while trying to install the nvidia drivers, I mistakenly removed the kernel from the system. After this, the computer wouldn’t boot.

The idea here is to take a live USB and use it to execute commands on an offline system. This is possible through chroot, which allows us to execute commands as if we were on the offline computer (more or less). We can use distribution tools like zypper, yum, apt-get, etc. and do all the recovery that is needed.

Download your favourite live CD/USB; I will be using the GNOME Live CD with openSUSE, since this is what I’m used to.

Burn it onto a USB. Use your favourite tool; I won’t go much into this, since I usually have problems with it. A few picks are Rufus, UNetbootin and SUSE Studio Image Writer, depending on your available OS.

Determine the partition that you want to mount. You can do this by opening “files” (nautilus), clicking on “+ Other Locations” and viewing the partition that interests you. If you mount the partition with nautilus, don’t forget to unmount it before continuing. Look at the image below for reference; you would be looking for something like /dev/sdXY.
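If you prefer the terminal over nautilus, lsblk (part of util-linux, so available on most live images) gives the same overview:

```shell
# List block devices with size, filesystem type and current mount point;
# the root partition of the broken install will typically be a large
# ext4/btrfs/xfs partition such as /dev/sda5.
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT
```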


After you have located the partition that you want to mount, in this case /dev/sda5, open a terminal and type (thanks to the Stack Overflow question “How do I run update-grub from a LiveCD?”):

> sudo mount /dev/sda5 /mnt
> sudo mount --bind /dev /mnt/dev
> sudo mount --bind /sys /mnt/sys
> sudo mount --bind /proc /mnt/proc
> sudo chroot /mnt

Congratulations, now you have access to the offline system.

You can do all nifty things as:

> sudo zypper ref
> sudo zypper up -y

or, on a Debian-based system:

> sudo apt-get update
> sudo apt-get upgrade

or anything else you might want to do. To exit, press ctrl + d.

When done, unmount the mounted partitions:

> sudo umount /mnt/dev
> sudo umount /mnt/sys
> sudo umount /mnt/proc
> sudo umount /mnt/

PS. If you try to ping a domain name and you get an error such as:

> ping
ping: Name or service not known

but you can ping IP addresses i.e.:

> ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=55 time=5.24 ms

You need to add a DNS server in /etc/resolv.conf as shown here:

> sudo bash -c "echo nameserver >> /etc/resolv.conf"

Enjoy till the next time

— flanaras

openSUSE Virtual box kernel error

Hello again, another problem I’ve been encountering is the following. I used to use VirtualBox on my openSUSE Tumbleweed box, but at some point it started complaining about missing kernel modules. I had all of the tools needed to compile the module, and it still didn’t work. At the same time, the message given by VirtualBox didn’t help at all. The error shown is “Kernel driver not installed (rc=-1908)”, with the hint to execute “/sbin/vboxconfig”, which doesn’t exist on my openSUSE box. The other error that’s also shown is “NS_ERROR_FAILURE (0x80004005)”, which is related to missing kernel modules. See below:

screenshot of virtual box showing rc=-1908 error

Assuming that you have already installed the VirtualBox kernel modules, you should just run the following as root every time you want to run VirtualBox (once per boot):

 > modprobe vboxdrv
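If running this once per boot gets tedious, systemd can load the module automatically at boot via a modules-load.d drop-in. A minimal sketch (a demo directory stands in below; the real file would be /etc/modules-load.d/vboxdrv.conf, written as root):

```shell
# Register vboxdrv with systemd-modules-load so it is loaded on every boot.
# A demo directory stands in for /etc/modules-load.d/ here.
mkdir -p demo-modules-load.d
echo vboxdrv > demo-modules-load.d/vboxdrv.conf
cat demo-modules-load.d/vboxdrv.conf
```

The file format is simply one module name per line; systemd-modules-load.service reads it at boot.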

Happy Virtual boxing.


Setup OpenCL on openSUSE [Intel Integrated GPU]

Hello all,

This time I will go over how to set up and run OpenCL applications on openSUSE using an Intel HD Graphics 4400 GPU. As long as you have an Intel integrated GPU that supports OpenCL, these steps should work for you as well.

You can check Intel_HD_and_Iris_Graphics for your hardware’s compatibility and what version of OpenCL you can work with.

[Note: 1] Important: This is more or less how to set up OpenCL on any computer. The only thing that would differ from this guide is how to deal with the “Missing OpenCL environment” section.

[Note: 2] I will be skipping the verification of the binaries.

My specifications:

  • openSUSE tumbleweed
  • Intel(R) Core(TM) i5-4300U CPU
  • Intel(R) HD Graphics 4400
  • g++ (SUSE Linux) 6.2.1 20161209 [gcc-6-branch revision 243481]

I point to clDeviceQuery.cpp as the test source code that we will be trying to run.

Now, let’s try to compile it. First of all, we need to link against the OpenCL library with -lOpenCL in order to build an OpenCL application.

To add the library to the system, install libOpenCL1.

 > zypper in libOpenCL1

// Missing headers

Let’s try to compile the source code:

 > gcc -lOpenCL clDeviceQuery.cpp
clDeviceQuery.cpp:8:19: fatal error: CL/cl.h: No such file or directory
#include <CL/cl.h>
compilation terminated.

We see that the OpenCL headers are absent. To install them we need the package called opencl-headers.

 > zypper in opencl-headers

// Link problem

Let’s try to compile it again.

 > g++ -lOpenCL clDeviceQuery.cpp
/usr/lib64/gcc/x86_64-suse-linux/6/../../../../x86_64-suse-linux/bin/ld: cannot find -lOpenCL
collect2: error: ld returned 1 exit status

Now we have a different problem: g++ can’t find libOpenCL. We need to add a symbolic link for /usr/lib64/ to point to /usr/lib64/

 > ln -s /usr/lib64/ /usr/lib64/

// Missing OpenCL environment

Compilation succeeds this time.

 > g++ -lOpenCL clDeviceQuery.cpp
 > ./a.out
clDeviceQuery Starting...

Error -1001 in clGetPlatformIDs Call!
System Info:

Local Time/Date = 13:52:14, 01/31/2017
CPU Name: Intel(R) Core(TM) i5-4300U CPU @ 1.90GHz
# of CPU processors: 4
Linux version 4.9.6-1-default (geeko@buildhost) (gcc version 6.2.1 20161209 [gcc-6-branch revision 243481] (SUSE Linux) ) #1 SMP PREEMPT Thu Jan 26 09:09:16 UTC 2017 (d1207ac)

Again we hit another problem: we are missing the OpenCL runtime environment/driver for the GPU. It is required to install the OpenCL runtime for the specific computer. This will work more or less with most (?) Intel integrated GPUs. We will need to download “OpenCL™ 2.0 GPU/CPU driver package for Linux (64-bit)” from Intel’s page containing “Driver and library (runtime) packages”.
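For context (and as a quick sanity check, assuming the usual ICD loader layout): the OpenCL loader discovers runtimes through .icd files registered under /etc/OpenCL/vendors/, so error -1001 (no platforms found) usually means that directory is empty or missing:

```shell
# List registered OpenCL ICDs; an empty or missing directory explains
# clGetPlatformIDs failing with -1001 (no platforms found).
ls /etc/OpenCL/vendors/ 2>/dev/null || echo "no OpenCL ICDs registered"
```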

After downloading and unpacking we have to install the file called intel-opencl-r4.0-59481.x86_64.rpm.

 > zypper in intel-opencl-r4.0-59481.x86_64.rpm

// All working

Trying again to compile and run will give us the following result.

 > g++ -lOpenCL clDeviceQuery.cpp
 > ./a.out
clDeviceQuery Starting...

1 OpenCL Platforms found

OpenCL Device Info:

1 devices found supporting OpenCL on: Intel(R) OpenCL

Device Intel(R) HD Graphics
CL_DEVICE_NAME: Intel(R) HD Graphics
CL_DEVICE_VENDOR: Intel(R) Corporation

clDeviceQuery, Platform Name = Intel(R) OpenCL, Platform Version = OpenCL 1.2 , NumDevs = 1, Device = Intel(R) HD Graphics

System Info:

Local Time/Date = 14:16:40, 01/31/2017
CPU Name: Intel(R) Core(TM) i5-4300U CPU @ 1.90GHz
# of CPU processors: 4
Linux version 4.9.6-1-default (geeko@buildhost) (gcc version 6.2.1 20161209 [gcc-6-branch revision 243481] (SUSE Linux) ) #1 SMP PREEMPT Thu Jan 26 09:09:16 UTC 2017 (d1207ac)

All tests have passed! This is the point that we wanted to reach. We can also see that I am running OpenCL 1.2.

Have lots of fun with your OpenCL programming.

Enjoy until next time

— flanaras

Transactional memory on Fedora [24]

Hello again,

This time I will be going through how to compile a program with g++ that has transactional memory support on Fedora 24.

The compilation command will look like:

g++ -std=c++11 -fgnu-tm transactions.cpp -o r

Running this on Fedora will result in an error:

/usr/bin/ld: cannot find -litm
collect2: error: ld returned 1 exit status

An easy way to overcome this is to install libitm and libitm-static:

 sudo yum install libitm libitm-static

After that if we try to compile again we get no errors and we can also run the program with no problems!

 > g++ -std=c++11 -fgnu-tm transactions.cpp -o r
 > ./r
[meaningful results]


Have fun

— flanaras

Fix DNS problem on openSUSE


Today, for no reason, DNS resolving stopped working on my openSUSE Tumbleweed. I tried to supply a DNS server from the connection’s settings, but it wouldn’t work, although directly pinging an IP did. After a quick look on Google, I found out that messing with /etc/resolv.conf and adding “nameserver DNS_IP” would do the trick, although it doesn’t look like the right way to do it.

[Check 07/07/2017 edit as well]

Originally my resolv.conf file looked like:

### /etc/resolv.conf file autogenerated by netconfig!

and with the fast patch would look like

### /etc/resolv.conf file autogenerated by netconfig!

As shown in an older thread on the openSUSE forum, removing the previously mentioned file and restarting the network services fixes the problem. Remove resolv.conf so it gets generated again:

 > su -c 'rm /etc/resolv.conf && systemctl restart NetworkManager.service'

— update [26/09/2016]: It seems that there is an other nifty solution to the same problem, check goinggnu’s post about the same problem!

— update [07/07/2017]: The same problem has resurfaced; the DNS information from DHCP does not get used properly by the connection manager and the DNS fields are not set. Check the information from dhclient and manually insert it into the network profile. This is especially necessary when connected to a limited network that filters remote DNS queries; otherwise you can use any other DNS server of your preference in the /etc/resolv.conf file or per profile.

 # view dhclient info
 > cat /var/lib/dhcp/dhclient.leases
 # restart network services
 > systemctl restart network
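Since the lease file is verbose, you can grep just the DNS option out of it. A self-contained sketch with a fabricated sample lease (the server addresses below are made up for illustration; the real file on openSUSE is /var/lib/dhcp/dhclient.leases):

```shell
# Fabricated sample lease file, for illustration only.
cat > dhclient.leases.sample <<'EOF'
lease {
  interface "eth0";
  option domain-name-servers,;
}
EOF
# Extract the DNS servers advertised by DHCP.
grep domain-name-servers dhclient.leases.sample
```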

In case you have set some DNS server in /etc/resolv.conf and you are in a filtered environment, it won’t work. To make it work, delete the file and then continue with the instructions in the update section.


best regards

— flanaras


Install okular on gnome

If you would like to install a good PDF viewer in GNOME, you might pick a KDE application. Sure, that’s great, but it might not look so good at first.

I like to use okular in GNOME, but if you just install okular like this:

> zypper in okular

it will get installed, but at the same time all of the icons will be missing from the menus, and even the application image won’t be there. Sure, it is fully functional, but it doesn’t look good.

A way to get these icons is to install a KDE icon package such as oxygen (if I start okular from the command line, it complains a lot about oxygen icons), but a more thorough way is to install the KDE base pattern. In my case, on Tumbleweed, I had to install the “kde” pattern (more info could be in [1], but it could also be outdated). I know this will install a lot of unnecessary things, but IMHO it’s safer and easier to do it this way.

> zypper install -t pattern kde

If you are not on openSUSE, you should check for a pattern (if your distribution has patterns) like “KDE Desktop environment”, or something containing the terms “KDE”, “base” or “environment”, or check your distribution’s forum for help installing the KDE environment.

If you are on openSUSE and you don’t find this pattern, go into YaST and search for something similar under patterns.

The result would be something like this.

screenshot of okular in gnome with working icons in place


Use git with GitHub’s Two-Factor Authentication in windows

Hello everyone,

This time I will be explaining how to use git with GitHub’s Two-Factor Authentication from a command prompt or any terminal emulator on Windows.

You probably have tried already to do something like:

 > git clone

entered your username and password, and were never asked for your 6-digit authentication code, which led to an authentication error similar to this one:

 > fatal: Authentication failed for ''

The solution to this is easy: use SSH! To use SSH, you need to create a private/public key pair. One way to do that is to download PuTTYgen, run it and select the number of bits that you desire for your key, in this case 4096.
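As an aside, if you have Git for Windows installed, its Git Bash ships ssh-keygen, which can generate an equivalent key pair from the command line. A sketch (a demo file name is used here so as not to overwrite an existing key):

```shell
# Generate a 4096-bit RSA key pair with ssh-keygen. This writes
# demo_id_rsa (private) and demo_id_rsa.pub (public) in the current
# directory; in practice use -f ~/.ssh/id_rsa and a real passphrase
# instead of the empty -N "" shown here.
ssh-keygen -t rsa -b 4096 -N "" -f ./demo_id_rsa
ls demo_id_rsa demo_id_rsa.pub
```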

selecting the number of bits for the key

Press generate and start moving your mouse around. Then you will have something like this.

screenshot of PuTTYgen after generating the key

Enter the passphrase you desire; this will be used when accessing your private key, so do not forget it. As pointed out by @Sakrifor, you need to export the private key via Conversions -> Export OpenSSH key. Create a folder named .ssh under your user’s folder and save it as “id_rsa”, for instance:


Copy your public key, go to GitHub Settings -> SSH and GPG keys, click on “New SSH Key”, give it a title so you know where you use this key, and paste the public key. After that, go to GitHub and get an SSH link to your repository via “Clone with SSH”. Continue on a command prompt and …

 > git clone
   Cloning into 'my-private-repository'...
   The authenticity of host ' (' can't be established.
   RSA key fingerprint is SHA256:nThbg6kXUpJWGl7E1IGOCspRomTxdCARLviKw6E5SY8.
   Are you sure you want to continue connecting (yes/no)? y
   Please type 'yes' or 'no': yes
   Warning: Permanently added ',' (RSA) to the list of known hosts.
   Enter passphrase for key '/c/Users/flanaras/.ssh/id_rsa':
   remote: Counting objects: 60, done.
   remote: Total 60 (delta 0), reused 0 (delta 0), pack-reused 60

   Receiving objects: 100% (60/60), 22.97 KiB | 0 bytes/s, done.
   Resolving deltas: 100% (25/25), done.
   Checking connectivity... done.

It works, well done! Now you will be asked to enter your passphrase and not your GitHub credentials.

Note that we previously checked the validity of the RSA fingerprint against GitHub’s published SSH key fingerprints.

See you around

