Let’s Talk Artificial Intelligence

There has been a lot of buzz around Artificial Intelligence as Bixby, Alexa, and Siri become household names. Today, self-driving cars share the roads with human drivers, hog farms keep an “artificial eye” on the vitals of individual animals in their pens, and soon enough we’ll be able to monitor heart health with a scan of the eyes as a substitute for being hooked up to an EKG machine.

I wanted to know how this is happening and learn about the brains behind all of these advancements, so I recently set out to explore the landscape of various Linux AI projects, including NuPIC, Caffe, and TensorFlow. This post covers some of the important things I learned about each of these platforms.

NuPIC

NuPIC stands for Numenta Platform for Intelligent Computing, and I found the theory behind it to be fascinating. It’s an implementation of Hierarchical Temporal Memory (HTM), a theory of intelligence that’s inspired by and based on the neuroscience of the neocortex, and it’s well suited for finding anomalies in live data streams. The algorithms and framework are implemented in Python, JavaScript, C++, Shell, and Java.
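To make the “finding anomalies in live data streams” workflow concrete, here is a toy sketch. This is not HTM or NuPIC code; it is a much simpler stand-in (a z-score over a sliding window) that only illustrates the shape of the streaming “score each new point as it arrives” pattern NuPIC targets:

```python
from collections import deque
import statistics

class StreamingAnomalyDetector:
    """Toy streaming anomaly scorer: z-score over a sliding window.
    NOT HTM -- just an illustration of scoring points as they arrive."""

    def __init__(self, window=20, threshold=3.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def score(self, value):
        # Score the new value against what the window has seen so far,
        # then add it to the window.
        if len(self.window) >= 2:
            mean = statistics.mean(self.window)
            stdev = statistics.stdev(self.window) or 1e-9
            z = abs(value - mean) / stdev
        else:
            z = 0.0  # not enough history yet
        self.window.append(value)
        return z

det = StreamingAnomalyDetector()
stream = [10, 11, 10, 12, 11, 10, 11, 100]  # spike at the end
scores = [det.score(v) for v in stream]
anomalies = [v for v, s in zip(stream, scores) if s > det.threshold]
```

An HTM model does far more than this (it learns temporal sequences, not just value ranges), but the calling pattern, one score per incoming value, is similar.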

I picked nupic and nupic-core (the C++ implementation of NuPIC’s core algorithms) to play with on x86_64 (Ubuntu 17.10) and odroid-xu4 (Ubuntu 16.04) platforms. I had good luck compiling nupic and nupic-core from sources on both platforms. I was able to run unit_tests on x86_64; however, on odroid-xu4 I couldn’t run them due to an older version of pip. NuPIC requires pip version 9.0.1, so I’ll have to come back another time to build pip 9.0.1 from sources.

Here is a short summary of the steps to install NuPIC and run tests.

git clone https://github.com/numenta/nupic.core.git

cd nupic.core/
export NUPIC_CORE=$PWD
mkdir -p $NUPIC_CORE/build/scripts
cd $NUPIC_CORE/build/scripts/
cmake $NUPIC_CORE -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_INSTALL_PREFIX=../release \
    -DPY_EXTENSIONS_DIR=$NUPIC_CORE/bindings/py/src/nupic/bindings

# While still in $NUPIC_CORE/build/scripts
make -j3

# While still in $NUPIC_CORE/build/scripts
make install

# Validate the install by running cpp_region_test and unit_tests
cd $NUPIC_CORE/build/release/bin
./cpp_region_test
./unit_tests

# Note: running ./unit_tests worked on x86_64, which has pip 9.0.1,
# but failed on odroid-xu4 due to the older pip 8.1.1.

git clone https://github.com/numenta/nupic
cd nupic

#Follow the detailed instructions in README.md to compile and install

# Install from local source code
sudo pip install -e .

sudo apt-get install python-pytest

# Run the unit tests
py.test tests/unit

# Note: running py.test tests/unit worked on x86_64, which has pip
# 9.0.1, but failed on odroid-xu4 due to the older pip 8.1.1.
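Since the pip version requirement bit me on both NuPIC builds, a quick sanity check before kicking off a long compile can save time. This is my own little helper, not part of NuPIC; it uses a Python 3 API (on the Python 2 setups above, pip.__version__ would serve the same purpose):

```python
from importlib.metadata import version  # Python 3.8+

def version_tuple(v):
    """Turn a version string like '9.0.1' into (9, 0, 1) for comparison."""
    return tuple(int(part) for part in v.split(".") if part.isdigit())

REQUIRED = (9, 0, 1)  # minimum pip version NuPIC needs
installed = version_tuple(version("pip"))
ok = installed >= REQUIRED
print("pip", version("pip"),
      "OK" if ok else "is too old; NuPIC needs >= 9.0.1")
```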

Caffe

Caffe is a deep learning framework from Berkeley AI Research (BAIR) and the Berkeley Vision and Learning Center (BVLC). Tutorials and installation guides can be found in the project’s README.

I compiled Caffe on the odroid-xu4 successfully and was able to run tests. I had to install several dependencies for “make all” to complete successfully, and in the end I was able to run “make runtest”, which ran 1110 tests from 152 test cases. The experience was relatively painless!

Here is a short summary of the steps for installing Caffe and running its tests.

git clone https://github.com/BVLC/caffe.git
cd caffe
# Create Makefile.config from Makefile.config.example
# Uncomment CPU_ONLY := 1 for the CPU-only version of Caffe

# Install dependencies
sudo apt-get install protobuf-compiler libboost-all-dev
sudo apt-get install libgoogle-glog-dev
sudo apt-get install libopenblas-dev
sudo apt-get install libopencv-highgui-dev
sudo apt-get install libhdf5-dev
export CPATH="/usr/include/hdf5/serial/"
sudo apt-get install libleveldb-dev
sudo apt-get install liblmdb-dev
sudo apt-get install libatlas-base-dev
sudo apt-get install libgflags-dev

make all
make test
make runtest
[==========] 1110 tests from 152 test cases ran. (197792 ms total)
[  PASSED  ] 1110 tests.

The Caffe command can be found under .build_release/tools/caffe.

caffe: command line brew
usage: caffe <command> <args>

commands:
  train           train or finetune a model
  test            score a model
  device_query    show GPU diagnostic information
  time            benchmark model execution time

  Flags from tools/caffe.cpp:
    -gpu (Optional; run in GPU mode on given device IDs separated by ','.Use
      '-gpu all' to run on all available GPUs. The effective training batch
      size is multiplied by the number of devices.) type: string default: ""
    -iterations (The number of iterations to run.) type: int32 default: 50
    -level (Optional; network level.) type: int32 default: 0
    -model (The model definition protocol buffer text file.) type: string
      default: ""
    -phase (Optional; network phase (TRAIN or TEST). Only used for 'time'.)
      type: string default: ""
    -sighup_effect (Optional; action to take when a SIGHUP signal is received:
      snapshot, stop or none.) type: string default: "snapshot"
    -sigint_effect (Optional; action to take when a SIGINT signal is received:
      snapshot, stop or none.) type: string default: "stop"
    -snapshot (Optional; the snapshot solver state to resume training.)
      type: string default: ""
    -solver (The solver definition protocol buffer text file.) type: string
      default: ""
    -stage (Optional; network stages (not to be confused with phase), separated
      by ','.) type: string default: ""
    -weights (Optional; the pretrained weights to initialize finetuning,
      separated by ','. Cannot be set simultaneously with snapshot.)
      type: string default: ""
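The -model and -solver flags above expect protocol buffer text files. As a hypothetical sketch (the net path, snapshot prefix, and hyperparameter values below are placeholders, not from a real project), a minimal CPU-only solver definition might look like:

```
# solver.prototxt (illustrative values only)
net: "train_val.prototxt"        # path to your network definition
base_lr: 0.01                    # starting learning rate
lr_policy: "step"                # drop the rate every `stepsize` iterations
gamma: 0.1
stepsize: 10000
max_iter: 45000
snapshot: 5000                   # save solver state every 5000 iterations
snapshot_prefix: "snapshots/mynet"
solver_mode: CPU                 # matches the CPU_ONLY build above
```

You would then pass it to the binary built earlier, e.g. .build_release/tools/caffe train -solver solver.prototxt.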

TensorFlow

Now, on to TensorFlow, a machine learning framework. It provides a C API defined in c_api.h, which is suitable for building bindings in other languages. I played with TensorFlow and its C API, compiling TensorFlow and Bazel (a prerequisite) from sources. Compiling from sources took a long time on the odroid-xu4 and ran into the same pip version issue I mentioned earlier. On the x86_64 platform, I managed to write a hello_tf.c using the C API and watch it run.

The TensorFlow website has a great guide that explains how to prepare your Linux environment. There are two choices for installing Bazel: you can either install the Bazel binaries or compile it from sources. Here are my setup notes for compiling Bazel from sources:

sudo apt-get install build-essential openjdk-8-jdk python zip
sudo apt-get install python-numpy python-dev python-pip python-wheel

mkdir bazel; cd bazel

wget https://github.com/bazelbuild/bazel/releases/download/0.10.0/bazel-0.10.0-dist.zip

unzip bazel-0.10.0-dist.zip

./compile.sh

# The above failed with "The system is out of resources."
# message. Workaround reference:
# https://github.com/bazelbuild/bazel/issues/1308

vi scripts/bootstrap/compile.sh

# Find the line:
#   run "${JAVAC}" -classpath "${classpath}" -sourcepath "${sourcepath}" ...
# and add -J-Xms256m -J-Xmx384m so it reads:
#   run "${JAVAC}" -J-Xms256m -J-Xmx384m -classpath "${classpath}" -sourcepath "${sourcepath}" ...

# Run compile again; it should succeed now.
./compile.sh

sudo pip install six numpy wheel

cd tensorflow
export PATH=$PATH:../linux_ai/bazel/output
./configure

# The bazel build command builds a script named build_pip_package. Running this script as follows will build a .whl file within the /tmp/tensorflow_pkg directory:

bazel build --jobs 1 --local_resources 2048,0.5,1.0 --verbose_failures --config=opt //tensorflow/tools/pip_package:build_pip_package

bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
sudo pip install /tmp/tensorflow_pkg/tensorflow-1.6.0rc0-cp27-cp27mu-linux_armv7l.whl

You can validate your TensorFlow installation by running python in a different directory and executing the following code:

python

Python 2.7.14 (default, Sep 23 2017, 22:06:14) 
[GCC 7.2.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
>>> print(sess.run(hello))

#If you see "Hello, TensorFlow!" as a result, you are good to start
#developing TensorFlow programs.

For added fun, I also installed the TensorFlow C API; here is my sample program:

#include <stdio.h>
#include <tensorflow/c/c_api.h>

int main()
{
	printf("Hello from TensorFlow C library version %s\n", TF_Version());
	return 0;
}
gcc -I/usr/local/include -L/usr/local/lib hello_tf.c -ltensorflow -o hello_tf
./hello_tf 
Hello from TensorFlow C library version 1.5.0

Summary

My experience has given me a few tips and takeaways I think are worth sharing:

  • Most AI frameworks depend on up-to-date toolchain versions, so it’s important to ensure you have the latest versions of tools like pip and Python.
  • Compiling TensorFlow can be very tedious.
  • Caffe installed easily on odroid-xu4.
  • Unlike TensorFlow Java and TensorFlow Go, TensorFlow for C appears to be covered by the TensorFlow API stability guarantees.

This blog should have provided you with a quick summary of the AI projects I experimented with, and I hope some of my notes will be useful to you. My work in this exciting area continues, and I will share new discoveries in the future. If you’re working on some interesting AI projects, we’d love to hear about them in the comments below.

A New AI, Medical Robots, and More in This Week’s Wrap Up

Open Source Wrap Up: November 7 -13, 2015

Google Releases Open Source AI Engine: TensorFlow

Google has released TensorFlow, the deep learning software used in many of the company’s products, as open source. The software uses a library for numerical computation on data flow graphs, which pass dynamically sized, multi-dimensional arrays, called tensors, between nodes. Nodes execute asynchronously, in parallel, once all of the tensors on their incoming edges become available. This design makes TensorFlow flexible and portable, as it is relatively simple to reconstruct data flow graphs and use high-level programming languages to develop new computational systems. Google has used this to connect artificial intelligence research with the product teams that build new products on top of the software. By releasing it as open source, the company hopes to bring more people into this convergence effort and standardize the set of tools people rely on for this work.
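The “a node fires once all of its input tensors are available” idea can be sketched in a few lines of plain Python. This toy is not TensorFlow; it only illustrates the notion of building a graph of operations first and evaluating it on demand:

```python
# Toy dataflow graph: a node computes its function once all of its
# input nodes have produced values (here, via recursive evaluation).
class Node:
    def __init__(self, fn, inputs=()):
        self.fn = fn
        self.inputs = inputs

    def run(self):
        # Evaluate every incoming edge first, then fire this node.
        return self.fn(*(node.run() for node in self.inputs))

def const(value):
    """A source node with no inputs that always emits `value`."""
    return Node(lambda: value)

a, b = const(2), const(3)
add = Node(lambda x, y: x + y, (a, b))           # fires when a and b are ready
mul = Node(lambda x, y: x * y, (add, const(4)))  # fires when add is ready
result = mul.run()  # (2 + 3) * 4 = 20
```

Real TensorFlow graphs carry multi-dimensional tensors rather than scalars and can schedule independent nodes in parallel, but the construct-then-execute structure is the same.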

To learn more, visit the TensorFlow website.

Linux Foundation Announces Launch of OpenHPC

The Linux Foundation has announced its intent to form the OpenHPC Collaborative Project, which is aimed at promoting and developing open source frameworks for high performance computing environments. The organization includes educational institutions, governmental organizations, and major corporations. The members of this group will work together to create a stable environment for testing and validation, reduce costs, provide a robust and diverse open source software stack, and develop a flexible framework for configuring high performance computing systems. They will do this through the development of provisioning tools, resource management, I/O clients, development tools, and a variety of scientific libraries.

For more information, visit the OpenHPC community page.

Vanderbilt’s Medical Capsule Robot Software Goes Open Source

Researchers at the Vanderbilt University School of Engineering have released a set of hardware and software designs for medical robots that are designed to be swallowed (once the technology is small enough). The team behind the project, named Pillforge, had built three different robots (one for the colon, one for the stomach, and one to stop internal bleeding) and realized they were reusing a lot of components. This led to the decision to release these building blocks to other researchers and product developers, reducing the barrier to entry and allowing others to improve the designs. The robots run TinyOS as their operating system and support a variety of modules, including some for wireless communication, sensing, actuating, and more.

To learn more, visit the Pillforge website.

Let’s Encrypt Public Beta Announced

Let’s Encrypt, a Linux Foundation Collaborative Project to encrypt the web, will go into public beta on December 3, 2015. Let’s Encrypt aims to encrypt a greater portion of the web by generating TLS/SSL certificates that can be installed on web servers with a simple application. On December 3, anyone who wants to request a certificate will be able to do so. The limited beta launched on September 12, 2015, and has issued over 11,000 certificates since; this experience has given the project enough confidence to open the system for public beta.
