More Webthings on IoT.js, MCUs, Tizen…

Connect webthings devices to gateway

So-called “webthings” (understand: servers) implement the WebThing API (which relates to the W3C Web of Things (WoT) Thing Description).
Various languages can be used for implementation; today, the “Things Framework” page lists NodeJS, Python, Java, Rust, and ESP8266 (+ESP32).

Today’s challenge is to try IoT.js (the Internet of Things framework using JavaScript) as an alternative runtime to NodeJS (which runs on V8), and thus gain performance.

The consequence for application developers is that, without adding complexity, they can now also target more constrained devices using the high-level JavaScript language.

Make webthings powered by IoT.js

My latest effort was to port webthing-node to IoT.js. This was done by removing a couple of features (Actions, Events, WebSockets, mDNS), then rewriting the JavaScript parts that used recent ECMAScript features not supported by JerryScript, the underlying JavaScript interpreter. To preserve the code structure, I also reimplemented parts of express.js on top of IoT.js’ HTTP module.
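
For a flavor of that last part, here is a minimal sketch (an illustration under my own naming, not the actual ported code) of an express-like router built on top of IoT.js’ Node-style http module:

// Minimal express-like routing sketch for IoT.js (illustration only;
// the actual shim lives in the webthing-node port).
var http = require('http');

function App() {
  this.routes = [];
}

App.prototype.route = function (method, path, handler) {
  this.routes.push({ method: method, path: path, handler: handler });
};

App.prototype.listen = function (port) {
  var self = this;
  http.createServer(function (req, res) {
    for (var i = 0; i < self.routes.length; i++) {
      var r = self.routes[i];
      if (r.method === req.method && r.path === req.url) {
        return r.handler(req, res);
      }
    }
    res.writeHead(404); // no matching route
    res.end();
  }).listen(port);
};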

This article will explain how to replicate this from scratch using an Edison board running the community Debian port, but it should be easy to adapt to other configurations. Any GNU/Linux based OS should work flawlessly too (there is nothing specific here other than a GPIO pin for the demo), while platforms supported by IoT.js should also be possible (Tizen:RT as supported by ARTIK05x, or NuttX as supported by STM32F4).

Setup Edison

Since IoT.js landed in Debian, I wanted to test it on the Edison board running the community-maintained Debian port: jubilinux v9.

Note: if you’re using the (outdated) Poky/Yocto reference OS, it’s easy to rebuild on the device too (just install libc6-staticdev).

Anyway, here is the procedure to flash the distro to the Edison:

cd jubilinux-stretch
# Unplug edison
time sudo bash flashall.sh
# Plug USB cables to edison
Using U-Boot target: edison-blankcdc
Now waiting for dfu device 8087:0a99
Please plug and reboot the board

Flashing IFWI
(...)

(While I am at it, here is my message in a bottle: do you know how to unbrick an unflashable Intel Edison?)

Then log in and configure system:

# Connect to terminal
picocom  -b 115200 /dev/ttyUSB0
U-Boot 2014.04 (Feb 09 2015 - 15:40:31)
(...)
Jubilinux Stretch (Debian 9) jubilinux ttyMFD2
jubilinux login:
# Log in as root:edison and change password
root@jubilinux:~# passwd
root@jubilinux:~# cat /etc/os-release 
PRETTY_NAME="Jubilinux Stretch (Debian 9)"
NAME="Jubilinux Stretch"
VERSION_ID="9+0.4"
VERSION="9+0.4 (stretch jubilinux)"
root@jubilinux:~# cat /proc/version 
Linux version 3.10.98-jubilinux-edison (robin@robin-i7) (gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.3) ) #3 SMP PREEMPT Sun Aug 13 04:22:45 EDT 2017

Set up WiFi, the quick and dirty way:

ssid='Private' # TODO: Update with your network credentials
password='password' # TODO:
sed -e "s|.*wpa-ssid.*|wpa-ssid \"$ssid\"|g" -b -i /etc/network/interfaces
sed -e "s|.*wpa-psk.*|wpa-psk \"$password\"|g" -b -i /etc/network/interfaces

Optionally, we can change the device’s hostname to a custom one (iotjs or any other name) to make it easier to find later:

sed -e 's|jubilinux|iotjs|g' -i /etc/hosts /etc/hostname

The system should be ready to use after rebooting:

reboot
apt-get update ; apt-get install screen ; screen

Then you should be ready to install the precompiled IoT.js 1.0 package using apt-pinning, as explained before:

IoT.js landed in Raspbian

This will work, but for demo purposes you can skip this and rebuild the latest snapshot with the GPIO module (not supported in the default profile of version 1.0).

Rebuild IoT.js snapshot package

Based on the Debian iotjs packaging files, you can rebuild a snapshot package from my development branch very easily:

apt-get update ; apt-get install git make time
branch=sandbox/rzr/devel/master
git clone https://github.com/tizenteam/iotjs -b "$branch" --recursive --depth 1
cd iotjs
time ./debian/rules && sudo debi

It took only 15 minutes to build on the device; if you’re short on resources, you can build the deb packages off-device, as explained for the Raspberry Pi 0:

How to Run IoT.js on the Raspberry PI 0

WebThing for IoT.js

Now we have an environment ready to try my development branch of “webthing-node” ported to IoT.js (until it is ready for iotjs-modules):

cd /usr/local/src/
git clone https://github.com/tizenteam/webthing-node -b sandbox/rzr/devel/iotjs/master --recursive --depth 1 

Then just start the “webthing” server:

cd webthing-node
iotjs example/simplest-thing.js
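
For reference, here is a hedged sketch of roughly what example/simplest-thing.js boils down to, modeled on the webthing-node API of that era (constructor signatures in the IoT.js port may differ slightly):

// Hedged sketch modeled on the webthing-node API; names and signatures
// in the IoT.js port may differ slightly.
var webthing = require('webthing');

var thing = new webthing.Thing('ActuatorExample', 'onOffSwitch',
                               'An actuator example that just log');

thing.addProperty(new webthing.Property(
  thing, 'on',
  new webthing.Value(false, function (value) {
    console.log('change: ' + value); // the real example drives a GPIO pin
  }),
  { type: 'boolean', description: 'Whether the output is changed' }
));

var server = new webthing.WebThingServer(new webthing.SingleThing(thing), 8888);
server.start();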

In another shell, check that the thing is alive:

curl -H "Content-Type: application/json"  http://localhost:8888/

{"name":"ActuatorExample","href":"/","type":"onOffSwitch","properties":{"on":{"type":"boolean","description":"Whether the output is changed","href":"/properties/on"}},"links":[{"rel":"properties","href":"/properties"}],"description":"An actuator example that just log"}

Then we can control our resource’s property (which is actually just an LED on a GPIO pin, but it could be a relay or any other actuator):

curl -X PUT -H "Content-Type: application/json" --data '{"on": true }' http://localhost:8888/properties/on
gpio: writing: true
curl http://localhost:8888/properties/on
{ "on": true }

Install webthing service

Systemd will help us start the webthing server on boot, listening on the default HTTP port:

unit="webthing-iotjs"
dir="/usr/local/src/$unit"
exedir="$dir/bin"
exe="$exedir/$unit.sh"
servicedir="$dir/lib/systemd/"
service="$servicedir/$unit.service"

mkdir -p "$exedir"
cat > "$exe" << 'EOF'
#!/bin/sh
set -x
set -e
PATH=$PATH:/usr/local/bin
cd /usr/local/src
cd webthing-node
# Update port and GPIO if needed
iotjs example/simplest-thing.js 80 45
EOF
chmod a+rx "$exe"


mkdir -p "$servicedir" && cat > "$service" << EOF
[Unit]
Description=$unit
After=network.target

[Service]
ExecStart=$exe
Restart=always

[Install]
WantedBy=multi-user.target
EOF

Enable service:

killall iotjs
systemctl enable $service
systemctl start $unit
systemctl status $unit
reboot

Connect to gateway

Set up the “Mozilla IoT gateway” as explained earlier; you can add some “virtual resources” to check it’s working too:

Connecting sensors to Mozilla’s IoT Gateway

Connecting our “ActuatorExample” webthing to the gateway requires you to fill in the explicit URL of the Edison device.

Using your browser, open the gateway location at mozilla-iot.org or http://gateway.local:

On the “/things” page, click the “+” button to add a thing, then “Add by URL…” and wait for the form:


Enter web thing URL:
iotjs.local

press “Submit” and it will display:


ActuatorExample
On/Off Switch from iotjs.local:80

then press “Save” and “Done”; it should appear on the Dashboard, and you can click the “ActuatorExample” icon to turn it off or on.

Let’s verify it’s working by connecting a (red) LED to the Intel Edison’s mini-breakout board (pinout), which delivers 1.8V:

  • GPIO45: on J20 (bottom row), use the 4th pin (from right to left)
  • GND: on J19 (just above J20), use the 3rd pin (from right to left)

Note for later: the reason the thing was not discovered automagically is that I removed the mDNS feature to ease the port; a native or pure-JS module could be reintroduced later to behave like the NodeJS webthing.

RGBLamp on Microcontrollers

Another platform to consider is the ESP8266 (or ESP32), a WiFi-enabled microcontroller.
ESPxx boards are officially supported by Mozilla with a partial implementation of WebThings, currently using native Arduino APIs. You can even try out my native implementation of RGBLamp.

If curious, you can also get (or make!) an open source hardware light controller from Tizen community’s friend Leon.

Then it would be worth comparing with a JavaScript version, since JerryScript supports the ESP8266.
Note that if IoT.js can be downsized to fit the ESP8266, maybe the ESP32 could be considered too (on FreeRTOS?).

Other devices supported by IoT.js can be considered too, such as the ARTIK05x (on TizenRT).

Standalone Tizen WebApp

Finally, as a bonus chapter: while hacking on Tizen TM1, I made a standalone app that can log in to the Mozilla gateway and browse resources.

The first time, it should retrieve an OAuth token; afterwards, the browser is able to list existing resources.
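
Under the hood this is plain REST; here is a hedged sketch of the listing step, using the gateway’s /things endpoint with a Bearer token (token retrieval omitted):

// Hedged sketch: list things from the gateway once an OAuth token is held.
var gateway = 'https://gateway.local';
var token = 'REPLACE_WITH_OAUTH_TOKEN';

fetch(gateway + '/things', {
  headers: {
    Accept: 'application/json',
    Authorization: 'Bearer ' + token
  }
}).then(function (res) {
  return res.json();
}).then(function (things) {
  things.forEach(function (thing) {
    console.log(thing.name);
  });
});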

If you don’t have a Tizen TM1 device, you can try using the SDK or even your desktop browser (with the CORS feature configured):

rm -rf tmp/chromium
mkdir -p tmp/chromium
chromium-browser --disable-web-security  --user-data-dir="tmp/chromium" https://rzr.github.io/webthings-webapp/

Feel free to contribute; for debugging purposes, this URL can also be used:
https://tizenteam.github.io/webthings-webapp/

More details on this experiment at https://github.com/rzr/webthings-webapp.

Further Reading

Connecting sensors to Mozilla’s IoT Gateway

Here is a first post about Mozilla’s IoT effort, and specifically the gateway project, which illustrates the “Web of Things” concept of creating a decentralized Internet of Things using Web technologies.

Today we will focus on the gateway, as it is the core component of the whole framework. Version 0.4.0 was just released, so you can try it on your own Raspberry Pi 3. The Raspberry Pi 3 is the reference platform, but it should be possible to port it to other single board computers (like ARTIK, etc.).

The post will explain how to get started and how to establish basic automation using I2C sensors and actuators on gateway’s device (without any cloud connectivity).

To get started, first install the gateway according to these straightforward instructions:

Prepare SD Card

You need to download the Raspbian-based gateway-0.4.0.img.zip (1GB archive) and dump it to an SD card (2.6GB minimum).

lsblk # Identify your sdcard adapter ie:
disk=/dev/disk/by-id/usb-Generic-TODO
url=https://github.com/mozilla-iot/gateway/releases/download/0.4.0/gateway-0.4.0.img.zip
wget -O- $url | funzip | sudo dd of=$disk bs=8M oflag=dsync

If you only want to use the gateway and not hack on it, you can skip this next part, which enables a developer shell through SSH. However, if you do want access to a developer shell, mount the 1st partition called “boot” (you may need to replug your SD card adapter) and add a file to enable SSH:

sudo touch /media/$USER/boot/ssh
sudo umount /media/$USER/*

First boot

Next, install the SD card in your Raspberry Pi 3 (older Pis could work too, particularly if you have a WiFi adapter).

When it has completed the first boot, you can check that the Avahi daemon is registering “gateway.local” using mDNS (multicast DNS):

ping gateway.local
ssh pi@gateway.local # Raspbian default password for pi user is "raspberry"

Let’s also track local changes to /etc by installing etckeeper, and change the default password:

sudo apt-get install etckeeper
sudo passwd pi

Logging in

You should now be able to access the web server, which is running on port 8080 (earlier versions used 80):

http://gateway.local:8080/

It will redirect you to a page to configure wifi:

URL: http://gateway.local:8080/
Welcome
Connect to a WiFi network?
FreeWifi_secure
FreeWifi
OpenBar
...
(skip)

We can skip it for now:

URL: http://gateway.local:8080/connecting
WiFi setup skipped
The gateway is now being started. Navigate to gateway.local in your web browser while connected to same network as the gateway to continue setup.
Skip

After a short delay, the user should be able to reconnect to the entry page:

http://gateway.local:8080/

The gateway can be registered on mozilla.org for remote management, but we can skip this for now.

The administrator is now welcome to register new users:

URL: http://gateway.local:8080/signup/
Mozilla IoT
Welcome
Create your first user account:
user: user
email: user@localhost
password: password
password: password
Next

And we’re ready to use it:

URL: http://gateway.local:8080/things
Mozilla IoT
No devices yet. Click + to scan for available devices.
Things
Rules
Floorplan
Settings
Log out

Filling dashboard

You can start filling your dashboard with virtual resources.

First hit the “burger menu” icon, go to the settings page, and then go to the addons page.

Here you can enable a “Virtual Things” adapter:

URL: http://gateway.local:8080/settings/addons/
virtual-things-adapter 0.1.4
Mozilla IoT Virtual Things Adapter
by Mozilla IoT

Once enabled, it should be listed alongside ThingURLAdapter on the adapters page:

URL: http://gateway.local:8080/settings/adapters
VirtualThingsAdapter
virtual-things
ThingURLAdapter
thing-url-adapter

You can then go back to the Things page (it’s the first entry in the menu).

We can start adding “things” by pressing the “+” button at the bottom.

URL: http://gateway.local:8080/things
Virtual On/Off Color Light
Color Light
Save

Then press “Done” at the bottom.

From this point, you can decide to control a virtual lamp from the UI, and even establish some basic rules (second entry in menu) with more virtual resources.

Sensing Reality

Because IoT is not about virtual worlds, let’s see how to deal with the physical world using sensors and actuators.

For sensors, there are many ways to connect them to computers, using analog or digital inputs on different buses. To make it easier for application developers, this can be abstracted using W3C’s generic sensors API.

While working on IoT.js modules, I made a “generic-sensors-lite” module that abstracts a couple of I2C drivers from the NPM repository. To verify the concept, I then made an adapter for Mozilla’s IoT Gateway (which runs Node.js), so I published the generic-sensors-lite NPM module first.
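
The module follows the W3C generic sensors shape, so reading a sensor from Node.js looks roughly like this (a hedged sketch: the exact export names of generic-sensors-lite are assumptions):

// Hedged sketch following the W3C Generic Sensor API shape that
// generic-sensors-lite mimics; exact export names are assumptions.
var sensors = require('generic-sensors-lite');

var sensor = new sensors.AmbientLightSensor();
sensor.onreading = function () {
  console.log('illuminance: ' + sensor.illuminance);
};
sensor.onerror = function (err) {
  console.log('sensor error: ' + err);
};
sensor.start();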

Before using the mozilla-iot-generic-sensors-adapter, you need to enable the I2C bus on the gateway (in version 0.4.0; master has I2C enabled by default).

sudo raspi-config
Raspberry Pi Software Configuration Tool (raspi-config)
5 Interfacing Options Configure connections to peripherals
P5 I2C Enable/Disable automatic loading of I2C kernel module
Would you like the ARM I2C interface to be enabled?
Yes
The ARM I2C interface is enabled
ls -l /dev/i2c-1
lsmod | grep i2c
i2c_dev 16384 0
i2c_bcm2835 16384 0

Of course, you’ll need at least one real sensor attached to the I2C pins of the board. Today only 2 modules are supported:

  • an ambient light sensor (at I2C address 0x23 in the scan below)
  • a temperature sensor (at address 0x77 below)

You can double-check that the addresses are present on the I2C bus:

sudo apt-get install i2c-tools
/usr/sbin/i2cdetect -y 1
0 1 2 3 4 5 6 7 8 9 a b c d e f
00: -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- 23 -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: -- -- -- -- -- -- -- 77

Install mozilla-iot-generic-sensors-adapter

Until the sensors adapter is officially supported by the Mozilla IoT gateway, you’ll need to install it on the device (and rebuild its dependencies on the target) using:

url=https://github.com/rzr/mozilla-iot-generic-sensors-adapter
dir=~/.mozilla-iot/addons/generic-sensors-adapter
git clone --depth 1 -b 0.0.1 $url $dir
cd $dir
npm install

# Restart gateway (or reboot)
sudo systemctl restart mozilla-iot-gateway.service
tail -F /home/pi/.mozilla-iot/log/run-app.log

Then the sensors addon can be enabled by pressing the “enable” button on the addons page:

URL: http://gateway.local:8080/settings/addons
generic-sensors-adapter 0.0.1
Generic Sensors for Mozilla IoT Gateway

It will appear on the adapters page too:

URL: https://gateway.local/settings/adapters
VirtualThingsAdapter
virtual-things
ThingURLAdapter
thing-url-adapter
GenericSensorsAdapter
generic-sensors-adapter

Now we can add those sensors as new things (using the “Save” and “Done” buttons):

URL: http://gateway.local:8080/things
Ambient Light Sensor
Unknown device type
Save
Temperature Sensor
Unknown device type
Save

Then they will appear as:

  • http://gateway.local:8080/things/0 (for Ambient Light Sensor)
  • http://gateway.local:8080/things/1 (for Temperature Sensor)

To get values updated in the UI, they need to be turned on first (try again if you find a bug, and file tickets; I will forward them to the drivers’ authors).

A GPIO adapter can be also used for actuators, as shown in this demo video.

If you have other sensors, check if the community has shared a JS driver, and please let me know about integrating new sensor drivers in generic-sensors-lite.

Let’s Talk Artificial Intelligence

There has been a lot of buzz around Artificial Intelligence as Bixby, Alexa, and Siri become household names. Today, self-driving cars share the roads with human drivers, hog farms keep an “artificial eye” on the vitals of individual animals in the pens, and soon enough we’ll be able to monitor heart health with a scan of the eyes as a substitute for being hooked up to an EKG machine.

I wanted to know how this is happening and learn about the brain behind all of these advancements, so I recently set out to understand and learn the landscape of various Linux AI projects including NuPIC, Caffe and TensorFlow.  This post will cover some of the important things I learned about each of these platforms.

NuPIC

NuPIC stands for Numenta Platform for Intelligent Computing, and I found the theory behind it to be fascinating. It’s an implementation of Hierarchical Temporal Memory (HTM), a theory of intelligence that’s inspired by and based on the neuroscience of the neocortex, and it’s well suited for finding anomalies in live data streams. The algorithms and framework are implemented in Python,  JavaScript, C++, Shell, and Java.

I picked nupic and nupic-core (the implementation of core NuPIC algorithms in C++) to play with on the x86_64 (Ubuntu 17.10) and odroid-xu4 (Ubuntu 16.04) platforms. I had good luck compiling nupic and nupic-core from sources on both platforms. I was able to run unit_tests on x86_64; however, on odroid-xu4 I couldn’t, due to an older version of pip. NuPIC requires pip version 9.0.1, so I’ll have to come back another time to build pip 9.0.1 from sources.

Here is a short summary of the steps to install NuPIC and run tests.

git clone https://github.com/numenta/nupic.core.git

cd nupic.core/
export NUPIC_CORE=$PWD
mkdir -p $NUPIC_CORE/build/scripts
cd $NUPIC_CORE/build/scripts
cmake $NUPIC_CORE -DCMAKE_BUILD_TYPE=Release \
   -DCMAKE_INSTALL_PREFIX=../release \
-DPY_EXTENSIONS_DIR=$NUPIC_CORE/bindings/py/src/nupic/bindings

# While still in $NUPIC_CORE/build/scripts
make -j3

# While still in $NUPIC_CORE/build/scripts
make install

# Validate install running cpp_region_test and unit_tests
cd $NUPIC_CORE/build/release/bin
./cpp_region_test
./unit_tests

# Note: running ./unit_tests worked on x86_64, which has pip 9.0.1,
# and failed on odroid-xu4 due to the older pip 8.1.1
git clone https://github.com/numenta/nupic
cd nupic

#Follow the detailed instructions in README.md to compile and install

# Install from local source code
sudo pip install -e .

sudo apt-get install python-pytest

Note:
Running py.test tests/unit worked on x86_64, which has pip 9.0.1, and
failed on odroid-xu4 due to the older pip 8.1.1.

Caffe

Caffe is a deep learning framework from Berkeley AI Research (BAIR) and the Berkeley Vision and Learning Center (BVLC). Tutorials and installation guides can be found in the project’s README.

I compiled Caffe on the odroid-xu4 successfully and was able to run tests. I had to install several dependencies for “make all” to complete successfully, and at the end I was able to run “make runtest”, which ran 1110 tests from 152 test cases. The experience was relatively painless!

Here is a short summary of the steps for installing and running the tests.

git clone https://github.com/BVLC/caffe.git
cd caffe
# Create Makefile.config from Makefile.config.example
# Uncomment CPU_ONLY=1 for the CPU-only version of Caffe

# Install dependencies
sudo apt-get install protobuf-compiler libboost-all-dev
sudo apt-get install libgoogle-glog-dev
sudo apt-get install libopenblas-dev
sudo apt-get install libopencv-highgui-dev
sudo apt-get install libhdf5-dev
export CPATH="/usr/include/hdf5/serial/"
sudo apt-get install libleveldb-dev
sudo apt-get install liblmdb-dev
sudo apt-get install libatlas-base-dev
sudo apt-get install libgflags-dev

make all
make test
make runtest
[==========] 1110 tests from 152 test cases ran. (197792 ms total)
[  PASSED  ] 1110 tests.

The caffe command can be found under .build_release/tools/caffe:

caffe: command line brew
usage: caffe <command> <args>

commands:
  train           train or finetune a model
  test            score a model
  device_query    show GPU diagnostic information
  time            benchmark model execution time

  Flags from tools/caffe.cpp:
    -gpu (Optional; run in GPU mode on given device IDs separated by ','.Use
      '-gpu all' to run on all available GPUs. The effective training batch
      size is multiplied by the number of devices.) type: string default: ""
    -iterations (The number of iterations to run.) type: int32 default: 50
    -level (Optional; network level.) type: int32 default: 0
    -model (The model definition protocol buffer text file.) type: string
      default: ""
    -phase (Optional; network phase (TRAIN or TEST). Only used for 'time'.)
      type: string default: ""
    -sighup_effect (Optional; action to take when a SIGHUP signal is received:
      snapshot, stop or none.) type: string default: "snapshot"
    -sigint_effect (Optional; action to take when a SIGINT signal is received:
      snapshot, stop or none.) type: string default: "stop"
    -snapshot (Optional; the snapshot solver state to resume training.)
      type: string default: ""
    -solver (The solver definition protocol buffer text file.) type: string
      default: ""
    -stage (Optional; network stages (not to be confused with phase), separated
      by ','.) type: string default: ""
    -weights (Optional; the pretrained weights to initialize finetuning,
      separated by ','. Cannot be set simultaneously with snapshot.)
      type: string default: ""

TensorFlow

Now, on to TensorFlow, a machine learning framework. It provides a C API defined in c_api.h, which is suitable for building bindings for other languages. I played with TensorFlow and the C API, and compiled TensorFlow, and Bazel (a prerequisite), from sources. Compiling from sources took a long time on the odroid-xu4 and ran into the same pip version issue I mentioned earlier. On the x86_64 platform, I managed to write a hello_tf.c using the C API and watch it run.

The TensorFlow website has a great guide that explains how to prepare your Linux environment. There are two choices for installing Bazel: you can either install the Bazel binaries or compile it from sources. Here are my setup notes for compiling Bazel from sources:

sudo apt-get install build-essential openjdk-8-jdk python zip
sudo apt-get install python-numpy python-dev python-pip python-wheel

mkdir bazel; cd bazel

wget https://github.com/bazelbuild/bazel/releases/download/0.10.0/bazel-0.10.0-dist.zip

unzip bazel-0.10.0-dist.zip

./compile.sh

# The above failed with "The system is out of resources."
# message. Workaround reference:
# https://github.com/bazelbuild/bazel/issues/1308

vi scripts/bootstrap/compile.sh

# Find the line:
#   run "${JAVAC}" -classpath "${classpath}" -sourcepath "${sourcepath}" ...
# and add -J-Xms256m -J-Xmx384m to the javac invocation, as in:
#   run "${JAVAC}" -J-Xms256m -J-Xmx384m -classpath "${classpath}" -sourcepath "${sourcepath}" ...

# Run compile again: Compile should work now.
./compile.sh

sudo pip install six numpy wheel

cd tensorflow
export PATH=$PATH:../linux_ai/bazel/output
./configure

# The bazel build command builds a script named build_pip_package. Running this script as follows will build a .whl file within the /tmp/tensorflow_pkg directory:

bazel build --jobs 1 --local_resources 2048,0.5,1.0 --verbose_failures --config=opt //tensorflow/tools/pip_package:build_pip_package

bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
sudo pip install /tmp/tensorflow_pkg/tensorflow-1.6.0rc0-cp27-cp27mu-linux_armv7l.whl

You can validate your TensorFlow installation by running python in a different directory and executing the following code:

python

Python 2.7.14 (default, Sep 23 2017, 22:06:14) 
[GCC 7.2.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
>>> print(sess.run(hello))

#If you see "Hello, TensorFlow!" as a result, you are good to start
#developing TensorFlow programs.

For added fun, I also installed the TensorFlow C API; here is the sample program:

#include <stdio.h>
#include <tensorflow/c/c_api.h>

int main()
{
	printf("Hello from TensorFlow C library version %s\n", TF_Version());
	return 0;
}

gcc -I/usr/local/include -L/usr/local/lib hello_tf.c -ltensorflow -o hello_tf
./hello_tf
Hello from TensorFlow C library version 1.5.0

Summary

My experience has given me a few tips and takeaways I think are worth sharing:

  • Most AI frameworks depend on new toolchain versions, so it’s important to ensure you have the latest versions of things like pip and python.
  • Compiling TensorFlow can be very tedious.
  • Caffe installed easily on odroid-xu4.
  • Unlike TensorFlow Java and TensorFlow Go, TensorFlow for C appears to be covered by the TensorFlow API stability guarantees.

This blog should have provided you with a quick summary of the AI projects I experimented with, and I hope some of my notes might be useful to you. My work in this exciting area continues, and I will share more new discoveries in the future. If you’re working on some interesting AI projects, we’d love to hear about them in the comments below.

How to Run IoT.js on the Raspberry PI 0

IoT.js is a lightweight JavaScript platform for building Internet of Things devices; this article will show you how to run it on a few dollars’ worth of hardware. The first version of it was released last year for various platforms including Linux, Tizen, and NuttX (the base of Tizen:RT). The Raspberry Pi 2 is one of the reference targets, but for demo purposes we also tried to build for the Raspberry Pi Zero, which is the most limited and cheapest device of the family. The main difference is the CPU architecture: ARMv6 (like the Pi 1), while the Pi 2 is ARMv7 and the Pi 3 is ARMv8 (aka ARM64).

IoT.js upstream uses a Python helper script to cross-build for supported devices, but instead of adding support for a new device I tried to build on the device itself using native tools, with cmake and the default compiler options; it simply worked! While working on this, I decided to package iotjs for Debian to see how well it supports other architectures (MIPS, PPC, etc.); we will see.

Unfortunately, Debian armel isn’t optimized for the ARMv6 core and FPU present on the Pi 1 and Pi 0, so the Raspbian project had to rebuild Debian for the ARMv6+VFP2 ARM variant to support all Raspberry Pi SBCs.

In this article, I’ll share hints for running IoT.js on Raspbian, the OS officially supported by the Raspberry Pi Foundation; the following instructions will work on any Pi device, since a portability strategy was preferred over optimization. I’ll demonstrate three separate ways to do this: from packages, by building on the device, and by building in a virtual machine. By the way, an alternative to consider is to rebuild Tizen Yocto for the Pi 0, but I’ll leave that as an exercise for the reader; you can accomplish this with a bitbake recipe, or ask for more hints in the comments section.


Install from Packages

iotjs landed in Debian’s sid, and until it reaches the testing branch (and subsequently Raspbian and Ubuntu), the fastest way is to install precompiled packages from my personal Raspbian repo:

url='https://dl.bintray.com/rzr/raspbian-9-armhf'
source="/etc/apt/sources.list.d/bintray-rzr-raspbian-9-armhf.list"
echo "deb $url raspbian main" | sudo tee "$source"
sudo apt-get update
apt-cache search iotjs
sudo apt-get install iotjs
/usr/bin/iotjs
Usage: iotjs [options] {script | script.js} [arguments]

Use it

Usage is pretty straightforward; start with a hello world source:

echo 'console.log("Hello IoT.js !");' > example.js
iotjs  example.js 
Hello IoT.js !

More details about the current environment can be inspected (this is for iotjs-1.0 with the default built-in modules):

echo 'console.log(JSON.stringify(process));' > example.js
iotjs  example.js 
{"env":{"HOME":"/home/user","IOTJS_PATH":"","IOTJS_ENV":""},"native_sources":{"assert":true,"buffer":true,"console":true,"constants":true,"dns":true,"events":true,"fs":true,"http":true,"http_client":true,"http_common":true,"http_incoming":true,"http_outgoing":true,"http_server":true,"iotjs":true,"module":true,"net":true,"stream":true,"stream_duplex":true,"stream_readable":true,"stream_writable":true,"testdriver":true,"timers":true,"util":true},"platform":"linux","arch":"arm","iotjs":{"board":"\"unknown\""},"argv":["iotjs","example.js"],"_events":{},"exitCode":0,"_exiting":false} null 2

From here, you can look into using other built-in modules like http, fs, net, timers, etc.
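
For instance, the timers API works out of the box (setInterval and clearInterval are globals, as in NodeJS):

// Tiny timers demo: print three ticks, one second apart, then stop.
var count = 0;
var timer = setInterval(function () {
  count = count + 1;
  console.log('tick: ' + count);
  if (count >= 3) {
    clearInterval(timer);
  }
}, 1000);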

Need More Features?

More modules can be enabled in the master branch, so I also built snapshot packages that can be installed to enable more key features like GPIO, I2C, and others. For your convenience, the snapshot package can be installed to replace the latest release:

root@raspberrypi:/home/user$ apt-get remove iotjs iotjs-dev iotjs-dbgsym iotjs-snapshot
root@raspberrypi:/home/user$ aptitude install iotjs-snapshot
The following NEW packages will be installed:
  iotjs-snapshot{b}
(...)
The following packages have unmet dependencies:
 iotjs-snapshot : Depends: iotjs (= 0.0~1.0+373+gda75913-0~rzr1) but it is not going to be installed
The following actions will resolve these dependencies:
     Keep the following packages at their current version:
1)     iotjs-snapshot [Not Installed]                     
Accept this solution? [Y/n/q/?] n
The following actions will resolve these dependencies:

     Install the following packages:                 
1)     iotjs [0.0~1.0+373+gda75913-0~rzr1 (raspbian)]
Accept this solution? [Y/n/q/?] y
The following NEW packages will be installed:
  iotjs{a} iotjs-snapshot 
(...)
Do you want to continue? [Y/n/?] y
(...)
  iotjs-snapshot https://dl.bintray.com/rzr/raspbian-9-armhf/iotjs-snapshot_0.0~1.0+373+gda75913-0~rzr1_armhf.deb
  iotjs https://dl.bintray.com/rzr/raspbian-9-armhf/iotjs_0.0~1.0+373+gda75913-0~rzr1_armhf.deb

Do you want to ignore this warning and proceed anyway?
To continue, enter "yes"; to abort, enter "no": yes
Get: 1 https://dl.bintray.com/rzr/raspbian-9-armhf raspbian/main armhf iotjs armhf 0.0~1.0+373+gda75913-0~rzr1 [199 kB]
Get: 2 https://dl.bintray.com/rzr/raspbian-9-armhf raspbian/main armhf iotjs-snapshot armhf 0.0~1.0+373+gda75913-0~rzr1 [4344 B]
(...)

If you then run console.log(process) again, you’ll see more interesting modules to use, like gpio, i2c, uart, and more; external modules can also be used. Check out the work in progress on sharing modules within the IoT.js community. Of course, this can be reverted to the latest release by simply installing the iotjs package, because it has higher priority than the snapshot version.

root@raspberrypi:/home/user$ apt-get install iotjs
(...)
The following packages will be REMOVED:
  iotjs-snapshot
The following packages will be upgraded:
  iotjs
(...)
Do you want to continue? [Y/n] y
(...)
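
With the snapshot in place, the extra modules can be exercised directly. For instance, here is a hedged sketch of reading two bytes from an I2C device at address 0x23 (open() option names assumed from the IoT.js docs):

// Hedged sketch using IoT.js' i2c module; device path and address
// are examples, and the open() options are assumed from the docs.
var i2c = require('i2c');

var wire = i2c.open({ device: '/dev/i2c-1', address: 0x23 }, function (err) {
  if (err) {
    throw err;
  }
  wire.read(2, function (err, bytes) {
    if (!err) {
      console.log('read: ' + bytes[0] + ', ' + bytes[1]);
    }
  });
});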

Build on the Device

It’s also possible to build the snapshot package from source, with extra packaging patches found in the community branch of IoT.js (which can be rebased on upstream anytime).

sudo apt-get install git time sudo
git clone https://github.com/tizenteam/iotjs
cd iotjs
./debian/rules
sudo debi

On the Pi 0, it took less than 30 minutes over NFS for this to finish. If you want to learn more, you can follow similar instructions for building IoTivity on ARTIK; NFS might be slower, but it will extend the life span of your SD cards.

Build on a Virtual Machine

A faster alternative that’s somewhere between building on the device and setting up a cross build environment (which always has a risk of inconsistencies) is to rebuild IoT.js with QEMU, Docker, and binfmt.

First install Docker (I used 17.05.0-ce and 1.13.1-0ubuntu6), then install the remaining tools:

sudo apt-get install qemu qemu-user-static binfmt-support time
sudo update-binfmts --enable qemu-arm

docker build 'http://github.com/tizenteam/iotjs.git'

It’s much faster this way: it took me less than five minutes. The files end up inside the container, so they need to be copied back to the host. I made a helper script to set this up and get the deb packages ready to be deployed on the device (sudo dpkg -i *.deb):

curl -sL https://rawgit.com/tizenteam/iotjs/master/run.sh | bash -x -
./tmp/out/iotjs/iotjs-dbgsym_0.0-0_armhf.deb
./tmp/out/iotjs/iotjs-dev_0.0-0_armhf.deb
./tmp/out/iotjs/iotjs_0.0-0_armhf.deb

I used the resin/rpi-raspbian Docker image (thanks again Resin!). Finally, I want to also thank my coworker Stefan Schmidt for the idea, after he set up a similar trick for EFL’s CI.

Further Reading

If you want to learn more, here are some additional resources to take your understanding further.

Raspberry Pi is a trademark of the Raspberry Pi Foundation

Common EFL Focus Pitfalls

I started patching the focus subsystem of the EFL widget toolkit quite some time ago. During this time, people have started to assign me everything that somehow looks like an issue with focus; sometimes it only takes the existence of the word “focus” somewhere in a backtrace for this to happen. I’ve discovered that most people mix up the different types of focus, so in this blog post I hope to provide some clarity about them.

How EFL Gets Focused

There are 3 different places where focus is handled in EFL:

  • ecore-evas – the window manager abstraction
  • evas – the canvas library
  • elementary – the widget toolkit.

First of all, I should point out what focus itself is. I think a good example is to consider your typical smartphone interaction: while interacting with your smartphone, your complete attention is given to its screen, and all interactions are with the interface of the device; you will also likely cease interactions with the outside environment entirely. In the same way, each abstraction in EFL has its own interaction partners:

  • The focused canvas object gets the attention from the keyboard
  • The focused widget is highlighted visually, so the user can see where their attention should go
  • And the window manager, in the end, focuses an application, which is probably an EFL application.

These differences are often the source of people’s confusion when it comes to focus in EFL. For example, losing the toolkit focus on a window object does not mean that the window lost the input from the user; instead, it means that another widget got the toolkit focus, while the window manager still has focus on this window.

For another example, consider a toolkit widget that’s built out of two objects: an image and a text field below it. The widget receives the toolkit’s focus, and the focus of the canvas moves to the image. Then the user presses some key binding to change the name of the image, and the canvas focus moves to the text field. In this case, the canvas focus moves, creating update events on the canvas focus. However, the widget’s focus stays the same: the user is still meant to have their attention on that widget, so there is no change to it.

Some Tips for Understanding EFL Focus

The focus property can only be true on one single object per entity; in practice this means:

  • One window focused per user window manager session
  • One object focused per user canvas
  • One widget focused per widget tree in the toolkit

Additionally:

  • Canvas focus is only used for injecting keyboard inputs from the input system of the display technology.
  • Widget focus is used for navigation; in the case of focus movement initiated by key bindings, this is the position from which the next up/down/right/left element is calculated.

If you ever use change events for some kind of focus property, here’s what you need to know about focus in EFL:

  • It’s window manager focus if the user has their attention on your application.
  • It’s canvas focus if you need to know where the keyboard inputs are coming from.
  • It’s a widget focus if the user has moved their attention to a subset of canvas objects that are bound together as a widget implementation.

If you have any questions about any of this content, head to the comments section!

How to Securely Encrypt A Linux Home Directory

These days our computers enable access to a lot of personal information that we don’t want random strangers to access; things like financial login information come to mind. The problem is that it’s hard to make sure this information isn’t leaked somewhere in your home directory, like the cache file of your web browser. Obviously, in the event your computer gets stolen, you want your data at rest to be secure; for this reason you should encrypt your hard drive. Sometimes this is not a good solution, as you may want to share your device with someone you might not want to give your encryption password to. In this case, you can encrypt only the home directory of your specific account.

Note: This article focuses on the security of data at rest, once that data is forever out of your reach; other threat models may require different strategies.

Improving Upon Linux Encryption

I have found the home directory encryption feature of most Linux distributions to be lacking in some way. My main concern is that I can’t change my password, as it’s used to directly unlock the encrypted directory. What I would like is a pass phrase that is used as an encryption key, but is itself encrypted with my password. This way, I can change my password by re-encrypting my pass phrase with a different password. I want it to be a pass phrase because that makes it possible to back it up in my global password manager, which protects me in case the file holding the key gets corrupted, and it allows me to share it with my IT admin, who can then put it in cold storage.

Generate a Pass Phrase

So, how do we do this? First, generate the key; for that, I have slightly modified a Bitcoin wallet pass phrase generator, which I use to generate the key as follows:

$ git clone https://github.com/Bluebugs/bitcoin-passphrase-generator.git
$ cd bitcoin-passphrase-generator
$ ./bitcoin-passphrase-generator.sh 32 2>/dev/null | openssl aes256 -md sha512 -out my-key

This will prompt you for the password to encrypt your pass phrase, and this will become your user password. Once this command has been called, it will generate a file that contains a pass phrase using 32 words out of your system dictionary. On Arch Linux, this dictionary contains 77,649 words, so there is one possibility in 77649^32; anything more than 2^256 should be secure enough for some time. To check the content of your pass phrase and back it up, run the following:

$ openssl aes256 -md sha512 -d -in my-key

Once you have entered your password, it will output your pass phrase: a line of words that you can now back up in your password manager.
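
As a quick sanity check of the earlier entropy claim (32 words drawn from a 77,649-word dictionary):

77649^32 = 2^(32 * log2(77649)) ≈ 2^(32 * 16.24) ≈ 2^520, comfortably above 2^256.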

Encrypt Your Home Directory

Now, let’s encrypt the home directory. I have chosen to use ext4 per-directory encryption, because it enables you to share the hard drive with other users in the future instead of requiring you to split it into different partitions. To initialize your home directory, first move your current one out of the way (this means stopping all processes that are running under your user ID and logging in under another ID). After this, load the key into the kernel:

$ openssl aes256 -md sha512 -d -in my-key | e4crypt add_key 2>&1

This will ask you for the password you used to protect your pass phrase; once entered, the key is added to your current user’s kernel keyring. With this, you can now create your home directory, but first get the key handle with the following command:

$ keyctl show | grep ext4
12345678 --alsw-v    0   0   \_ logon: ext4:abcdef1234567890

The key handle is the text that follows “ext4:” at the end of the 2nd line above; it can now be used to create your new encrypted home directory:

$ mkdir /home/user
$ e4crypt set_policy abcdef1234567890 /home/user

After this, you can copy all of the data from your old user directory to the new directory and it will be properly encrypted.

Setup PAM to Unlock and Lock Your Home Directory

Now, we need to play with the PAM session handling to get it to load the key and unlock the home directory. I will assume that you have stored my-key at /home/keys/user/my-key. We’ll use pam_mount to run a custom script that adds the key to the system at login and removes it at logout. The script for mounting and unmounting can be found on GitHub.

The interesting bit that I’ve left out of the mounting script is the possibility of getting the key from an external device, or maybe even your phone. Also, you’ll notice the need to be part of the wheel group to cleanly unmount the directory; otherwise, the files will still be in cache after logout, and it will be possible to navigate the directory and create files until they are removed. I think a clean way to handle this might be to have a script for all users that can flush the cache without asking for a sudo password; this will be an easy improvement to make. Finally, modify the PAM system-login file as explained in the Arch Linux documentation, with just one additional change to disable pam_keyinit.so.

In /etc/pam.d/system-login, paste the following:

#%PAM-1.0

auth      required  pam_tally.so onerr=succeed file=/var/log/faillog
auth      required  pam_shells.so
auth      requisite pam_nologin.so
auth      optional  pam_mount.so
auth      include   system-auth

account   required  pam_access.so
account   required  pam_nologin.so
account   include   system-auth

password  optional  pam_mount.so
password  include   system-auth

session   optional  pam_loginuid.so
#session  optional  pam_keyinit.so force revoke
session   [success=1 default=ignore] pam_succeed_if.so service = systemd-user quiet
session   optional  pam_mount.so
session   include   system-auth
session   optional  pam_motd.so motd=/etc/motd
session   optional  pam_mail.so dir=/var/spool/mail standard quiet
-session  optional  pam_systemd.so
session   required  pam_env.so

Now, if you want to change your password, you need to decrypt the key, change the password, and re-encrypt it. The following command will do just that:

$ openssl aes256 -md sha512 -d -in my-old-key | openssl aes256 -md sha512 -out my-new-key

Et voila !


EFL Multiple Output Support with Wayland

Supporting multiple outputs is something most of us take for granted in the world of X11, because the Xorg team has had years to implement and perfect that support. While we have all enjoyed the fruits of their labor, this has not been the case for Wayland. There are two primary types of multiple output configurations that are relevant: cloned and extended outputs.

Cloned Outputs

Cloned output mode is the one most people are familiar with: the contents of the primary output are duplicated on any additional outputs that are enabled. If you have ever hot-plugged an external monitor into your Windows laptop, then you have seen this mode in action.

An example of cloned output

Extended Outputs

Extended output mode is somewhat less common, yet still very important. In this mode, the desktop is extended to span across multiple outputs, while the primary output retains sole ownership of any panels, shelves, or gadgets that reside there. This enables the secondary monitor to act as an extended desktop space where other applications can be run.

An example of extended outputs

Multiple Output Support in Weston and EFL

Weston, the reference Wayland compositor, has had the ability to support multiple outputs in an extended configuration for quite some time now. This extended mode configuration allows each output to run with a completely separate framebuffer and repaint loop. There are patches currently being worked on to enable cloned output support, but these have not been pushed upstream yet.

While EFL has had support for multiple outputs when running under X11 for quite some years now, support for multiple outputs under Wayland has been missing. A major hurdle to implementing this support has been that Evas, the EFL rendering library, was not accounting for multiple outputs in its rendering update loop. With this commit, our very own Cedric Bail has removed that hurdle and enabled work to progress further on implementing this feature.

Complete Multiple Output Support is on the Way

While the current implementation of multiple output support in EFL Wayland only supports outputs in a cloned configuration, support for extended mode is in development and is expected to be available in upstream EFL soon. When these patches get published upstream, this will bring our EFL Wayland implementation much closer to parity with our X11 support. Thankfully, there are little to no changes needed to add this support to our Enlightenment Wayland compositor.

How to Prototype Insecure IoTivity Apps with the Secure Library

IoTivity 1.3.1 has been released, and with it come some important new changes. First, you can rebuild packages from sources, with or without my hotfix patches, as explained recently in this blog post. For ARM users (of ARTIK7), the fastest option is to download precompiled packages as .RPM for fedora-24 from my personal repository, or check the ongoing work for other OSes.

Copy and paste this snippet to install latest IoTivity from my personal repo:

distro="fedora-24-armhfp"
repo="bintray--rzr-${distro}"
url="https://bintray.com/rzr/${distro}/rpm"
rm -fv "/etc/yum.repos.d/$repo.repo"
wget -O- $url | sudo tee /etc/yum.repos.d/$repo.repo
grep $repo /etc/yum.repos.d/$repo.repo

dnf clean expire-cache
dnf repository-packages "$repo" check-update
dnf repository-packages "$repo" list --showduplicates # list all published versions
dnf repository-packages "$repo" upgrade # if previously installed
dnf repository-packages "$repo" install iotivity-test iotivity-devel # remaining ones

I also want to thank JFrog for providing the Bintray service to free and open source software developers.

Standalone Apps

In a previous blog post, I explained how to run the examples that are shipped with the release candidate. You can also try other existing examples (rpm -ql iotivity-test), but some don’t work properly; in those cases, try the 1.3-rel branch, and if you’re still having problems, please report a bug.

At this point, you should know enough to start making your own standalone app and use the library like any other, so feel free to get inspired by, or even fork, the demo code I wrote to test integration on various OSes (Tizen, Yocto, Debian, Ubuntu, Fedora, etc.).

Let’s clone the sources from the repo. Meanwhile, if you’re curious, read the description of the tutorial projects collection.

sudo dnf install screen psmisc git make
git clone http://git.s-osg.org/iotivity-example
# Flatten all subprojects to "src" folders
make -C iotivity-example
# Explore all subprojects
find iotivity-example/src -iname "README*"

 
Note that most of the examples were written for prototyping, and security was not enabled at that time (in 1.2-rel, security is not enabled by default except on Tizen).

Serve an Unsecured Resource

Let’s go directly to the clock example, which supports security mode; this example implements the OCF OneIoT Clock model and demonstrates the CoAP Observe verb.

cd iotivity-example
cd src/clock/master/
make
screen
./bin/server -v
log: Server:
Press Ctrl-C to quit....
Usage: server -v
(...)
log: { void IoTServer::setup()
log: { OCStackResult IoTServer::createResource(std::__cxx11::string, std::__cxx11::string, OC::EntityHandler, void*&)
log: { FILE* override_fopen(const char*, const char*)
(...)
log: } FILE* override_fopen(const char*, const char*)
log: Successfully created oic.r.clock resource
log: } OCStackResult IoTServer::createResource(std::__cxx11::string, std::__cxx11::string, OC::EntityHandler, void*&)
log: } void IoTServer::setup()
(...)
log: deadline: Fri Jan  1 00:00:00 2038
oic.r.clock: { 2017-12-19T19:05:02Z, 632224498}
(...)
oic.r.clock: { 2017-12-19T19:05:12Z, 632224488}
(...)

Then, start the observer in a different terminal or device (if using GNU screen, type Ctrl+a c to open a new window and Ctrl+a a to switch back to the server window):

./bin/observer -v
log: { IoTObserver::IoTObserver()
log: { void IoTObserver::init()
log: } void IoTObserver::init()
log: } IoTObserver::IoTObserver()
log: { void IoTObserver::start()
log: { FILE* override_fopen(const char*, const char*)
(...)
log: { void IoTObserver::onFind(std::shared_ptr<OC::OCResource>)
resourceUri=/example/ClockResUri
resourceEndpoint=coap://192.168.0.100:55236
(...)
log: { static void IoTObserver::onObserve(OC::HeaderOptions, const OC::OCRepresentation&, const int&, const int&)
(...)
oic.r.clock: { 2017-12-19T19:05:12Z, 632224488}
(...)

OK, now it should work and the date should be updated, but keep in mind it’s still *NOT* secured at all, as the resource is using a clear channel (a coap:// URI).

To learn about the necessary changes, let’s have a look at the commit history of the clock subproject. The server side’s persistent storage needed to be added to the configuration; we can see from the trace that the Secure Resource Manager (SRM) is loading credentials by overriding the fopen functions (these are designed to use hardware security features like ARM’s Secure Element / secure key storage). The client or observer also needs a persistent storage update.

git diff clock/1.2-rel src/server.cpp

(...)
 
+static FILE *override_fopen(const char *path, const char *mode)
+{
+    LOG();
+    static const char *CRED_FILE_NAME = "oic_svr_db_anon-clear.dat";
+    char const *const filename
+        = (0 == strcmp(path, OC_SECURITY_DB_DAT_FILE_NAME)) ? CRED_FILE_NAME : path;
+    return fopen(filename, mode);
+}
+
+
 void IoTServer::init()
 {
     LOG();
+    static OCPersistentStorage ps {override_fopen, fread, fwrite, fclose, unlink };
     m_platformConfig = make_shared<PlatformConfig>
                        (ServiceType::InProc, // different service ?
                         ModeType::Server, // other is Client or Both
                         "0.0.0.0", // default ip
                         0, // default random port
                         OC::QualityOfService::LowQos, // qos
+                        &ps // Security credentials
                        );
     OCPlatform::Configure(*m_platformConfig);
 }

This example uses generic credential files for the client and server, with maximum privileges, just like unsecured mode (or the default 1.2-rel configuration).

cat oic_svr_db_anon-clear.json
{
    "acl": {
       "aclist2": [
            {
                "aceid": 0,
                "subject": { "conntype": "anon-clear" },
                "resources": [{ "wc": "*" }],
                "permission": 31
            }
        ],
        "rowneruuid" : "32323232-3232-3232-3232-323232323232"
    }
}

These files will not be loaded directly, because they need to be compiled to the CBOR binary format using IoTivity’s json2cbor (which, despite the name, does more than convert JSON files to CBOR: it also updates some fields).

Usage is straightforward:

/usr/lib/iotivity/resource/csdk/security/tool/json2cbor oic_svr_db_client.json  oic_svr_db_client.dat

To recap, the minimal steps that are needed are:

  1. Configure the resource’s access control list to use the new format introduced in 1.3.
  2. Use a clear channel (“anon-clear”) for all resources (wildcard wc:*); this is the reason coap:// was shown in the log.
  3. Set verb (CRUD+N) permissions to the maximum, according to the OCF_Security_Specification_v1.3.0.pdf document.

One very important point: don’t do this in a production environment; you don’t want to be held responsible for neglecting security. But this is perfectly fine for getting started with IoTivity and prototyping with the latest version.

Still Unsecured, Then What?

With a faulty security configuration, you’ll face the OC_STACK_UNAUTHORIZED_REQ error (Response error: 46). I tried to sum up the minimal necessary changes, but keep in mind that you shouldn’t stop here: next, you need to set up ACLs to match the desired policy as specified by Open Connectivity, check the hints linked on IoTivity’s wiki, or use a provisioning tool. Stay tuned for more posts that cover these topics.


EFL: Enabling Wayland Output Rotation

The Enlightenment Foundation Libraries have long supported the ability to do output rotation when running under X11 utilizing the XRandR (resize and rotate) extension. This functionality is exposed to the user by way of the Ecore_X library which provides API function calls that can be used to rotate a given output.

Wayland

While this functionality has been inside EFL for a long time, the ability to rotate an output while running under the Enlightenment Wayland Compositor has not been possible. This is due, in part, to the fact that the Wayland protocol does not provide any form of RandR extension. Normally this would have proved a challenge when implementing output rotation inside the Enlightenment Wayland compositor; however, EFL already has the ability to do this.

Software-Based Rotation

EFL’s Ecore_Evas library, which is used as the base of the Enlightenment Compositor canvas, has the ability to perform software-based rotation. This means that when a user asks for the screen to be rotated via Enlightenment’s Screen Setup dialog, the Enlightenment Compositor will draw its canvas rotated to the desired degree using the internal Evas engine code. While this works for any given degree of rotation, it is not incredibly efficient, considering that modern video cards can do rotation themselves.

Hardware-Based Rotation

Many modern video cards support some form of hardware-based rotation. This allows the hardware to rotate the final scanout buffer before displaying to the screen and is much more efficient than software-based rotation. The main drawback is that many will only support rotating to either 0 degrees or 180 degrees. While this is a much more practical and desired approach to output rotation, the lack of other available degrees of rotation make it a less than ideal method.

Hybrid Rotation

In order to facilitate a more ideal user experience, we have decided to implement something I call hybrid rotation. Hybrid rotation simply means that when asked to rotate the compositor canvas, we will first check if the hardware is able to handle the desired rotation itself. In the event that the hardware is unable to accommodate the necessary degree of rotation, it will then fall back to the software implementation to handle it. With this patch we can now handle any degree of output rotation in the most efficient way available.

The major benefit for application developers is that they do not need to know anything about the capabilities of the underlying hardware. They can continue to use their existing application code to rotate the canvas:

/**
 * Ecore example illustrating the basics of ecore evas rotation usage.
 *
 * You'll need at least one Evas engine built for it (excluding the
 * buffer one). See stdout/stderr for output.
 *
 * @verbatim
 * gcc -o ecore_evas_basics_example ecore_evas_basics_example.c `pkg-config --libs --cflags ecore evas ecore-evas`
 * @endverbatim
 */

#include <Ecore.h>
#include <Ecore_Evas.h>
#include <unistd.h>

static Eina_Bool
_stdin_cb(void *data EINA_UNUSED, Ecore_Fd_Handler *handler EINA_UNUSED)
{
   Eina_List *l;
   Ecore_Evas *ee;
   char c;

   int ret = scanf("%c", &c);
   if (ret < 1) return ECORE_CALLBACK_RENEW;

   if (c == 'h')
     EINA_LIST_FOREACH(ecore_evas_ecore_evas_list_get(), l, ee)
       ecore_evas_hide(ee);
   else if (c == 's')
     EINA_LIST_FOREACH(ecore_evas_ecore_evas_list_get(), l, ee)
       ecore_evas_show(ee);
   else if (c == 'r')
     EINA_LIST_FOREACH(ecore_evas_ecore_evas_list_get(), l, ee)
       ecore_evas_rotation_set(ee, 90);

   return ECORE_CALLBACK_RENEW;
}

static void
_on_delete(Ecore_Evas *ee)
{
   free(ecore_evas_data_get(ee, "key"));
   ecore_main_loop_quit();
}

int
main(void)
{
   Ecore_Evas *ee;
   Evas *canvas;
   Evas_Object *bg;
   Eina_List *engines, *l;
   char *data;

   if (ecore_evas_init() <= 0)
     return 1;

   engines = ecore_evas_engines_get();
   printf("Available engines:\n");
   EINA_LIST_FOREACH(engines, l, data)
     printf("%s\n", data);
   ecore_evas_engines_free(engines);

   ee = ecore_evas_new(NULL, 0, 0, 200, 200, NULL);
   ecore_evas_title_set(ee, "Ecore Evas basics Example");
   ecore_evas_show(ee);

   data = malloc(sizeof(char) * 6);
   sprintf(data, "%s", "hello");
   ecore_evas_data_set(ee, "key", data);
   ecore_evas_callback_delete_request_set(ee, _on_delete);

   printf("Using %s engine!\n", ecore_evas_engine_name_get(ee));

   canvas = ecore_evas_get(ee);
   if (ecore_evas_ecore_evas_get(canvas) == ee)
     printf("Everything is sane!\n");

   bg = evas_object_rectangle_add(canvas);
   evas_object_color_set(bg, 0, 0, 255, 255);
   evas_object_resize(bg, 200, 200);
   evas_object_show(bg);
   ecore_evas_object_associate(ee, bg, ECORE_EVAS_OBJECT_ASSOCIATE_BASE);

   ecore_main_fd_handler_add(STDIN_FILENO, ECORE_FD_READ, _stdin_cb, NULL, NULL, NULL);

   ecore_main_loop_begin();

   ecore_evas_free(ee);
   ecore_evas_shutdown();

   return 0;
}


Summary

While not all hardware can support the various degrees of output rotation, EFL provides the ability to rotate an output via software. This gives the Enlightenment Wayland Compositor the ability to handle output rotation in the most efficient method available while being transparent to the user.

One Small Step to Harden USB Over IP on Linux

The USB over IP kernel driver allows a server system to export its USB devices to a client system over an IP network via the USB over IP protocol. Exportable USB devices include physical devices and software entities that are created on the server using the USB gadget subsystem. This article will cover a major bug related to USB over IP in the Linux kernel that was recently uncovered; it created some significant security issues, but was resolved with help from the kernel community.

The Basics of the USB Over IP Protocol

There are two USB over IP server kernel modules:

  • usbip-host (stub driver): A stub USB device driver that can be bound to physical USB devices to export them over the network.
  • usbip-vudc: A virtual USB Device Controller that exports a USB device created with the USB Gadget Subsystem.

There is one USB over IP client kernel module:

  • usbip-vhci: A virtual USB Host Controller that imports a USB device from a USB over IP capable server.

Finally, there is one USB over IP utility:

  • usbip-utils: A set of user-space tools used to handle connection and management; this is used on both the client and server sides.

The USB/IP protocol consists of a set of packets exchanged between the client and the server that query the exportable devices, request an attachment to one, access the device, and finally detach once finished. The server responds to these requests from the client, and this exchange is a series of TCP/IP packets whose payload carries the USB/IP packets over the network. I’m not going to discuss the protocol in detail in this blog; please refer to usbip_protocol.txt to learn more.

Identifying a Security Problem With USB Over IP in Linux

When a client accesses an imported USB device, usbip-vhci sends a USBIP_CMD_SUBMIT packet to usbip-host, which submits a USB Request Block (URB). An endpoint number, transfer_buffer_length, and number of ISO packets are among the URB fields that will be in a USBIP_CMD_SUBMIT packet. There are some potential vulnerabilities here: specifically, a malicious packet could be sent from a client with an invalid endpoint, or with a very large transfer_buffer_length and a large number of ISO packets. A bad endpoint value could force usbip-host to access memory out of bounds, unless usbip-host validates the endpoint to be within the valid range of 0-15. A large transfer_buffer_length could result in the kernel allocating large amounts of memory. Either of these malicious requests could adversely impact server system operation.
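
To make the missing checks concrete, here is a language-neutral illustration (deliberately not the kernel code) of the kind of validation the fixes add before a USBIP_CMD_SUBMIT packet is acted on:

// Illustration only: bounds-check untrusted fields from the client
// before using them (the real fixes are C patches in the usbip driver).
var MAX_ENDPOINT = 15;           // USB endpoints are numbered 0 to 15
var MAX_TRANSFER = 1024 * 1024;  // illustrative cap, not the kernel's value

function validateSubmit(cmd) {
  if (cmd.endpoint < 0 || cmd.endpoint > MAX_ENDPOINT) {
    return false; // would otherwise index out of bounds
  }
  if (cmd.transferBufferLength > MAX_TRANSFER) {
    return false; // would otherwise trigger a huge kernel allocation
  }
  return true;
}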

Jakub Jirasek from Secunia Research at Flexera reported these security vulnerabilities in the driver’s handling of malicious input. In addition, he reported an instance of a socket pointer address (a kernel memory address) being leaked in a world-readable sysfs USB/IP client-side file and in debug output. Unfortunately, the USB over IP driver has had these security vulnerabilities since the beginning. The good news is that they have now been found and fixed; I sent a series of 4 fixes to address all of the issues reported by Jakub Jirasek. In addition, I am continuing to look for other potential problems.

All of these problems are the result of a lack of validation checks on the input and incorrect handling of error conditions; my fixes add the missing checks and take proper action. These fixes will propagate into the stable releases within the next few weeks. One exception is the kernel address leak, which stemmed from an intentional design decision to provide a convenient way to find the IP address from socket addresses; that decision opened a security hole.

Where are these fixes?

The fixes are going into the 4.15 and stable releases. They can be found in the following two git branches:

Secunia Research has created the following CVEs for the fixes:

  • CVE-2017-16911 usbip: prevent vhci_hcd driver from leaking a socket pointer
    address
  • CVE-2017-16912 usbip: fix stub_rx: get_pipe() to validate endpoint number
  • CVE-2017-16913 usbip: fix stub_rx: harden CMD_SUBMIT path to handle
    malicious input
  • CVE-2017-16914 usbip: fix stub_send_ret_submit() vulnerability to null
    transfer_buffer

Do it Right the First Time

This is a great example of how open source software can benefit from having many eyes looking over it to identify problems so the software can be improved. However, after solving these issues, my takeaway is to be proactive in detecting security problems, but better yet, avoid introducing them entirely. Failing to validate input is an obvious coding error in any case, that can potentially allow users to send malicious packets with severe consequences. In addition, be mindful of exposing sensitive information such as the kernel pointer addresses. Fortunately, we were able to work together to solve this problem which should make USB over IP more secure in the Linux kernel.