More Webthings on IoT.js, MCUs, Tizen…

Connect webthings devices to gateway

So-called "webthings" (understand: servers) implement the WebThing API (which relates to the W3C Web of Things (WoT) Thing Description).
Various languages can be used for the implementation; today the "Things Framework" page lists Node.js, Python, Java, Rust, and ESP8266 (plus ESP32).

Today's challenge is to try IoT.js (the Internet of Things framework using JavaScript) as an alternative runtime to Node.js (built on V8) and thus gain performance.

The consequence for application developers is that, without adding complexity, they can now also target more constrained devices using a high-level language: JavaScript.

Make Webthings powered by IoT.js

My latest effort was to port webthing-node to IoT.js. This was done by removing a couple of features (Actions, Events, WebSockets, mDNS) and rewriting the JavaScript parts that used recent ECMAScript features not supported by JerryScript, the underlying JavaScript interpreter. To preserve the code structure, I also reimplemented parts of Express.js on top of IoT.js' HTTP module.
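To give an idea of what that last part means, here is a minimal sketch (not the actual code from the port) of an Express-like routing layer built on IoT.js' http module; route paths and handler signatures here are illustrative.

var http = require('http');

function Router() {
  this.routes = [];
}

Router.prototype.register = function (method, path, handler) {
  this.routes.push({ method: method, path: path, handler: handler });
};

Router.prototype.listen = function (port) {
  var self = this;
  http.createServer(function (request, response) {
    // Dispatch to the first route matching method and exact path
    for (var i = 0; i < self.routes.length; i++) {
      var route = self.routes[i];
      if (route.method === request.method && route.path === request.url) {
        return route.handler(request, response);
      }
    }
    response.writeHead(404);
    response.end();
  }).listen(port);
};

// Usage, mimicking the express style used by webthing-node:
var app = new Router();
app.register('GET', '/', function (request, response) {
  response.writeHead(200, { 'Content-Type': 'application/json' });
  response.end(JSON.stringify({ name: 'ActuatorExample' }));
});
app.listen(8888);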

This article will explain how to replicate this from scratch using an Edison board running the community Debian port, but it should be easy to adapt to other configurations. Any GNU/Linux based OS should work flawlessly too (nothing here is board-specific except the GPIO pin used for the demo), and other platforms supported by IoT.js should be possible as well (Tizen:RT as supported by ARTIK05x, or NuttX as supported by STM32F4).

Setup Edison

Since IoT.js landed in Debian, I wanted to test it on Edison board running community maintained Debian port: jubilinux v9.

Note: if you're using the (outdated) Poky/Yocto reference OS, it's easy to rebuild on the device too (just install libc6-staticdev).

Anyway, here is the procedure to flash the distro to the Edison:

cd jubilinux-stretch
# Unplug edison
time sudo bash flashall.sh
# Plug USB cables to edison
Using U-Boot target: edison-blankcdc
Now waiting for dfu device 8087:0a99
Please plug and reboot the board

Flashing IFWI
(...)

(While I am on this, here is my bottle in the ocean: do you know how to unbrick an unflashable Intel Edison?)

Then log in and configure system:

# Connect to terminal
picocom  -b 115200 /dev/ttyUSB0
U-Boot 2014.04 (Feb 09 2015 - 15:40:31)
(...)
Jubilinux Stretch (Debian 9) jubilinux ttyMFD2
rootlinux login: 
# Log in as root:edison and change password
root@jubilinux:~# passwd
root@jubilinux:~# cat /etc/os-release 
PRETTY_NAME="Jubilinux Stretch (Debian 9)"
NAME="Jubilinux Stretch"
VERSION_ID="9+0.4"
VERSION="9+0.4 (stretch jubilinux)"
root@jubilinux:~# cat /proc/version 
Linux version 3.10.98-jubilinux-edison (robin@robin-i7) (gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.3) ) #3 SMP PREEMPT Sun Aug 13 04:22:45 EDT 2017

Set up WiFi, the quick and dirty way:

ssid='Private' # TODO: Update with your network credentials
password='password' # TODO:
sed -e "s|.*wpa-ssid.*|wpa-ssid \"$ssid\"|g" -b -i /etc/network/interfaces
sed -e "s|.*wpa-psk.*|wpa-psk \"$password\"|g" -b -i /etc/network/interfaces

Optionally, we can change the device's hostname to a custom one (iotjs or any other name), which will make it easier to find later:

sed -e 's|jubilinux|iotjs|g' -i /etc/hosts /etc/hostname

The system should be ready to use after rebooting:

reboot
apt-get update ; apt-get install screen ; screen

Then you should be ready to install the precompiled IoT.js 1.0 package using APT pinning, as explained before:

IoT.js landed in Raspbian

This will work, but for demo purposes you can skip this and rebuild the latest snapshot with the GPIO module (which is not supported in the default profile of version 1.0).

Rebuild IoT.js snapshot package

Based on the Debian iotjs packaging files, you can rebuild a snapshot package from my development branch very easily:

apt-get update ; apt-get install git make time
branch=sandbox/rzr/devel/master
git clone https://github.com/tizenteam/iotjs -b "$branch" --recursive --depth 1
cd iotjs
time ./debian/rules && sudo debi

It took only 15 minutes to build on the device; if you are short on resources, you can instead build the deb packages off the device, as explained for the Raspberry Pi 0:

How to Run IoT.js on the Raspberry PI 0

WebThing for IoT.js

Now we have an environment ready to try my development branch of "webthing-node" ported to IoT.js (until it is ready for iotjs-modules):

cd /usr/local/src/
git clone https://github.com/tizenteam/webthing-node -b sandbox/rzr/devel/iotjs/master --recursive --depth 1 

Then just start the "webthing" server:

cd webthing-node
iotjs example/simplest-thing.js

In another shell, check that the thing is alive:

curl -H "Content-Type: application/json"  http://localhost:8888/

{"name":"ActuatorExample","href":"/","type":"onOffSwitch","properties":{"on":{"type":"boolean","description":"Whether the output is changed","href":"/properties/on"}},"links":[{"rel":"properties","href":"/properties"}],"description":"An actuator example that just log"}

Then we can control our resource's property (which is actually just an LED on a GPIO pin, but it could be a relay or any other actuator):

curl -X PUT -H "Content-Type: application/json" --data '{"on": true }' http://localhost:8888/properties/on
gpio: writing: true
curl http://localhost:8888/properties/on
{ "on": true }
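For reference, here is a rough sketch of what example/simplest-thing.js does, written against the webthing-node style API (Thing, Property, Value, WebThingServer); constructor arguments and module layout may differ slightly in the IoT.js port, so treat this as an outline rather than the exact source.

var webthing = require('webthing');

var thing = new webthing.Thing('ActuatorExample', 'onOffSwitch',
                               'An actuator example that just log');

thing.addProperty(new webthing.Property(
  thing, 'on',
  new webthing.Value(false, function (value) {
    // The real example drives a GPIO pin here (45 by default)
    console.log('gpio: writing: ' + value);
  }),
  { type: 'boolean', description: 'Whether the output is changed' }
));

var server = new webthing.WebThingServer(new webthing.SingleThing(thing), 8888);
server.start();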

Install webthing service

Systemd will help us start the webthing server at boot, listening on the default HTTP port:

unit="webthing-iotjs"
dir="/usr/local/src/$unit"
exedir="$dir/bin"
exe="$exedir/$unit.sh"
servicedir="$dir/lib/systemd/"
service="$servicedir/$unit.service"

mkdir -p "$exedir"
cat << EOF > "$exe" && chmod a+rx "$exe"
#!/bin/sh
set -x
set -e
PATH=$PATH:/usr/local/bin
cd /usr/local/src
cd webthing-node
# Update port and GPIO if needed
iotjs example/simplest-thing.js 80 45
EOF


mkdir -p "$servicedir" && cat << EOF > "$service"
[Unit]
Description=$unit
After=network.target

[Service]
ExecStart=$exe
Restart=always

[Install]
WantedBy=multi-user.target
EOF

Enable service:

killall iotjs
systemctl enable $service
systemctl start $unit
systemctl status $unit
reboot

Connect to gateway

Set up the "Mozilla IoT gateway" as explained earlier; you can also add some "virtual resources" to check that it's working:

Connecting sensors to Mozilla’s IoT Gateway

Connecting our "ActuatorExample" webthing to the gateway requires you to fill in the explicit URL of the Edison device.

Using your browser, open gateway location at mozilla-iot.org or http://gateway.local :

On the "/things" page, click the "+" button to add a thing, then "Add by URL…" and wait for the form:


Enter web thing URL:
iotjs.local

Press "Submit" and it will display:


ActuatorExample
On/Off Switch from iotjs.local:80

Then press "Save" and "Done"; it should appear on the Dashboard, and you can click on the "Actuator Example" icon to turn it off or on.

Let's verify it is working by connecting a red LED to the Intel Edison's mini-breakout board (pinout), which delivers 1.8V:

  • GPIO45: on J20 (bottom row), use the 4th pin (from right to left)
  • GND: on J19 (just above J20), use the 3rd pin (from right to left)

Note for later: the reason why it was not scanned automagically is that I removed the mDNS feature to ease the port; a native or pure JS module could be reintroduced later to behave like the Node.js webthing.

RGBLamp on Microcontrollers

Another platform to consider is the ESP8266 (or ESP32), a WiFi enabled microcontroller.
ESPxx boards are officially supported by Mozilla with a partial implementation of WebThings that currently uses native Arduino APIs. You can even try out my native implementation of RGBLamp.

If curious, you can also get (or make!) an OpenSourceHardware light controller from the Tizen community's friend Leon.

Then it would be worth comparing with a JavaScript version, since JerryScript supports the ESP8266.
Note that if IoT.js were downsized to fit the ESP8266, maybe the ESP32 could be considered too (on FreeRTOS?).

Other devices supported by IoT.js can be considered too, such as the ARTIK05x (on TizenRT).

Standalone Tizen WebApp

Finally, as a bonus chapter: while hacking on Tizen TM1, I made a standalone app that can log in to the Mozilla gateway and browse resources.

The first time it runs it should retrieve an OAuth token, and then the browser is able to list the existing resources.
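As a hedged sketch of what happens after login: the app queries the gateway's /things endpoint with the OAuth token it obtained (the URL and token below are placeholders; the real webapp drives the full OAuth flow first).

var gateway = 'https://gateway.local';
var token = 'REPLACE_WITH_OAUTH_TOKEN';

fetch(gateway + '/things', {
  headers: {
    Accept: 'application/json',
    Authorization: 'Bearer ' + token
  }
}).then(function (response) {
  return response.json();
}).then(function (things) {
  // Log the name and href of each thing known to the gateway
  things.forEach(function (thing) {
    console.log(thing.name + ': ' + thing.href);
  });
});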

If you don't have a Tizen TM1 device, you can try using the SDK or even your desktop browser (with the CORS feature configured):

rm -rf tmp/chromium
mkdir -p tmp/chromium
chromium-browser --disable-web-security  --user-data-dir="tmp/chromium" https://rzr.github.io/webthings-webapp/

Feel free to contribute. For debugging purposes, this URL can also be used:
https://tizenteam.github.io/webthings-webapp/

More details on this experiment at https://github.com/rzr/webthings-webapp.

Further Reading

Connecting sensors to Mozilla’s IoT Gateway

Here is a first post about Mozilla's IoT effort, and specifically the gateway project, which illustrates the "Web of Things" concept of creating a decentralized Internet of Things using Web technologies.

Today we will focus on the gateway, as it is the core component of the whole framework. Version 0.4.0 was just released, so you can try it on your own Raspberry Pi 3. The Raspberry Pi 3 is the reference platform, but it should be possible to port it to other single board computers (like ARTIK, etc.).

The post will explain how to get started and how to establish basic automation using I2C sensors and actuators on the gateway's device (without any cloud connectivity).

To get started, first install the gateway according to these straightforward instructions:

Prepare SD Card

You need to download the Raspbian based gateway-0.4.0.img.zip (a 1GB archive) and dump it to an SD card (2.6GB minimum).

lsblk # Identify your sdcard adapter ie:
disk=/dev/disk/by-id/usb-Generic-TODO
url=https://github.com/mozilla-iot/gateway/releases/download/0.4.0/gateway-0.4.0.img.zip
wget -O- $url | funzip | sudo dd of=$disk bs=8M oflag=dsync

If you only want to use the gateway and not hack on it, you can skip this next part, which enables a developer shell through SSH. However, if you do want access to a developer shell, mount the 1st partition called "boot" (you may need to replug your SD card adapter) and add a file to enable SSH:

sudo touch /media/$USER/boot/ssh
sudo umount /media/$USER/*

First boot

Next, install the SD card in your Raspberry Pi 3 (older Pis could work too, particularly if you have a WiFi adapter).

When it has completed the first boot, you can check that the Avahi daemon is registering "gateway.local" using mDNS (multicast DNS):

ping gateway.local
ssh pi@gateway.local # Raspbian default password for pi user is "raspberry"

Let’s also track local changes to /etc by installing etckeeper, and change the default password.

sudo apt-get install etckeeper
sudo passwd pi

Logging in

You should now be able to access the web server, which is running on port 8080 (earlier versions used port 80):

http://gateway.local:8080/

It will redirect you to a page to configure wifi:

URL: http://gateway.local:8080/
Welcome
Connect to a WiFi network?
FreeWifi_secure
FreeWifi
OpenBar
...
(skip)

We can skip it for now:

URL: http://gateway.local:8080/connecting
WiFi setup skipped
The gateway is now being started. Navigate to gateway.local in your web browser while connected to same network as the gateway to continue setup.
Skip

After a short delay, the user should be able to reconnect to the entry page:

http://gateway.local:8080/

The gateway can be registered on mozilla.org for remote management, but we can skip this for now.

The administrator is now welcome to register new users:

URL: http://gateway.local:8080/signup/
Mozilla IoT
Welcome
Create your first user account:
user: user
email: user@localhost
password: password
password: password
Next

And we’re ready to use it:

URL: http://gateway.local:8080/things
Mozilla IoT
No devices yet. Click + to scan for available devices.
Things
Rules
Floorplan
Settings
Log out

Filling dashboard

You can start filling your dashboard with virtual resources.

First hit the "burger menu" icon, go to the settings page, and then go to the addons page.

Here you can enable a “Virtual Things” adapter:

URL: http://gateway.local:8080/settings/addons/
virtual-things-adapter 0.1.4
Mozilla IoT Virtual Things Adapter
by Mozilla IoT

Once enabled, it should be listed alongside ThingURLAdapter on the adapters page:

URL: http://gateway.local:8080/settings/adapters
VirtualThingsAdapter
virtual-things
ThingURLAdapter
thing-url-adapter

You can then go back to the 1st Things page (it’s the first entry in the menu):

We can start adding "things" by pressing the "+" button in the bottom menu.

URL: http://gateway.local:8080/things
Virtual On/Off Color Light
Color Light
Save

Then press “Done” at bottom.

From this point, you can decide to control a virtual lamp from the UI, and even establish some basic rules (second entry in menu) with more virtual resources.

Sensing Reality

Because IoT is not about virtual worlds, let’s see how to deal with the physical world using sensors and actuators.

For sensors, there are many ways to connect them to computers, using analog or digital inputs on different buses. To make it easier for application developers, this can be abstracted using the W3C generic sensors API.

While working on IoT.js modules, I made a "generic-sensors-lite" module that abstracts a couple of I2C drivers from the NPM repository. To verify the concept, I made an adapter for Mozilla's IoT Gateway (which runs Node.js), so I published the generic-sensors-lite NPM module first.
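As an illustration, the module exposes an interface inspired by the W3C Generic Sensor API; the class and option names below are approximations rather than a verbatim copy of generic-sensors-lite's interface.

var sensors = require('generic-sensors-lite');

// Poll an I2C ambient light sensor once per second
var light = new sensors.AmbientLightSensor({ frequency: 1 });
light.onreading = function () {
  console.log('Illuminance: ' + light.illuminance + ' lux');
};
light.onerror = function (err) {
  console.error('Sensor error: ' + err);
};
light.start();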

Before using the mozilla-iot-generic-sensors-adapter, you need to enable the I2C bus on the gateway (version 0.4.0, master has I2C enabled by default).

sudo raspi-config
Raspberry Pi Software Configuration Tool (raspi-config)
5 Interfacing Options Configure connections to peripherals
P5 I2C Enable/Disable automatic loading of I2C kernel module
Would you like the ARM I2C interface to be enabled?
Yes
The ARM I2C interface is enabled
ls -l /dev/i2c-1
lsmod | grep i2c
i2c_dev 16384 0
i2c_bcm2835 16384 0

Of course, you'll need at least one real sensor attached to the I2C pins of the board. Today only two modules are supported:

You can double check that the addresses are present on the I2C bus:

sudo apt-get install i2c-tools
/usr/sbin/i2cdetect -y 1
0 1 2 3 4 5 6 7 8 9 a b c d e f
00: -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- 23 -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: -- -- -- -- -- -- -- 77

Install mozilla-iot-generic-sensors-adapter

Until the sensors adapter is officially supported by the Mozilla IoT gateway, you'll need to install it on the device (and rebuild dependencies on the target) using:

url=https://github.com/rzr/mozilla-iot-generic-sensors-adapter
dir=~/.mozilla-iot/addons/generic-sensors-adapter
git clone --depth 1 -b 0.0.1 $url $dir
cd $dir
npm install

Restart the gateway (or reboot):
sudo systemctl restart mozilla-iot-gateway.service
tail -F /home/pi/.mozilla-iot/log/run-app.log

Then the sensors addon can be enabled by pressing the “enable” button on the addons page:

URL: http://gateway.local:8080/settings/addons
generic-sensors-adapter 0.0.1
Generic Sensors for Mozilla IoT Gateway

It will appear on the adapters page too:

URL: https://gateway.local/settings/adapters
VirtualThingsAdapter
virtual-things
ThingURLAdapter
thing-url-adapter
GenericSensorsAdapter
generic-sensors-adapter

Now we can add those sensors as new things (using the Save and Done buttons):

URL: http://gateway.local:8080/things
Ambient Light Sensor
Unknown device type
Save
Temperature Sensor
Unknown device type
Save

Then they will appear as:

  • http://gateway.local:8080/things/0 (for Ambient Light Sensor)
  • http://gateway.local:8080/things/1 (for Temperature Sensor)

To get the values updated in the UI, the sensors need to be turned on first (try again if you find a bug, and file tickets that I will forward to the drivers' authors).

A GPIO adapter can be also used for actuators, as shown in this demo video.

If you have other sensors, check if the community has shared a JS driver, and please let me know about integrating new sensor drivers in generic-sensors-lite.

IoT.js landed in Raspbian

Following previous efforts to deploy IoT.js on the Raspberry Pi 0, I am happy to announce that IoT.js 1.0 landed in Debian, and was synced to Raspbian (armhf) and Ubuntu as well.

While the package is targeting the next distro release, it can be easily installed on current versions by adding a couple of config files for “APT pinning”.

If you haven't set up Raspbian 9 yet, just dump the current Raspbian image to an SD card (for the record, I used version 2018-03-13-raspbian-stretch-lite).

Boot your Pi.  To keep track of changes in /etc/, let’s install etckeeper:

sudo apt-get update
sudo apt-get install etckeeper

Upgrade current packages:

sudo apt-get upgrade
sudo apt-get dist-upgrade

Declare the current release as default source:

cat<<EOT | sudo tee /etc/apt/apt.conf.d/50raspi
APT::Default-Release "stretch";
EOT

Then add a repo file for the next release:

cat /etc/apt/sources.list | sed 's/stretch/buster/g' | sudo tee /etc/apt/sources.list.d/raspi-buster.list

Unless you want to test the upcoming release, it may be safer to avoid upgrading all packages yet. In other words, we prefer that only iotjs be available from this "not yet supported" repo.

cat<<EOT | sudo tee /etc/apt/preferences.d/raspi-buster.pref
Package: *
Pin: release n=buster
Pin-Priority: -10
EOT

cat<<EOT | sudo tee /etc/apt/preferences.d/iotjs.pref
Package: iotjs
Pin: release n=buster
Pin-Priority: 1
EOT

Now iotjs 1.0-1 should appear as available for installation:

sudo apt-get update ; apt-cache search iotjs
iotjs - Javascript Framework for Internet of Things

apt-cache policy iotjs
iotjs:
  Installed: (none)
  Candidate: 1.0-1
  Version table:
     1.0-1 1
        -10 http://raspbian.raspberrypi.org/raspbian buster/main armhf Packages

Let’s install it:

sudo apt-get install iotjs
man iotjs

Even if version 1.0 is limited compared to the development branch, you can start by using the http module, which is enabled by default (https is not).

To illustrate this: when I investigated "air quality monitoring" for a TizenRT + IoT.js demo, I found out that OpenWeatherMap is collecting and publishing "Carbon Monoxide" data, so let's try their REST API.

Create a file, example.js for example, that contains:

var http = require('http');

var location = '48,-1';
var datetime = 'current';

//TODO: replace with your openweathermap.org personal key
var api_key = 'fb3924bbb699b17137ab177df77c220c';

var options = {
  hostname: 'api.openweathermap.org',
  port: 80,
  path: '/pollution/v1/co/' + location + '/' + datetime + '.json?appid=' + api_key,
};

// workaround bug
options.headers = {
  host: options.hostname
}

http.request(options, function (res) {
  receive(res, function (data) {
    console.log(data);
  });
}).end();

function receive(incoming, callback) {
  var data = '';

  incoming.on('data', function (chunk) {
    data += chunk;
  });

  incoming.on('end', function () {
    callback ? callback(data) : '';
  });
}

And just run it:

iotjs example.js
{"time":"2018-03-27T02:24:33Z","location":{"latitude":47.3509,"longitude":-0.9081},"data":[{"precision":-4.999999987376214e-07,"pressure":1000,"value":1.5543508880000445e-07}
(...)

You can then use this to do things such as update a map or raise an alert on anything useful, or try to rebuild the master branch.
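For example, a small follow-up function could parse the JSON payload and warn when the first carbon monoxide sample exceeds a threshold; the threshold below is made up for the demo, and the function would be called from the receive() callback above instead of console.log.

function checkCarbonMonoxide(data) {
  var report = JSON.parse(data);
  var threshold = 2e-7; // arbitrary volume mixing ratio, demo only
  if (report.data && report.data.length > 0) {
    var sample = report.data[0];
    if (sample.value > threshold) {
      console.log('CO alert at ' + report.time + ': ' + sample.value);
    } else {
      console.log('CO level OK: ' + sample.value);
    }
  }
}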

How to Build A Simple Connected Light with IoT.JS and Python Django

Many web developers I meet are interested in working with embedded systems and IoT, but they always seem to have reservations about just how to make the whole system (i.e. a server, a 'thing,' and a client) work! The amount of information online is extensive, but it's often hard to know where to start! This blog post will provide a very simple example of how to get a basic LED light to work in a local network, with a web client that provides a way to identify the light with no prior knowledge from the user and no required installation on the client device.

To do this, we are going to use the very popular Python Django web framework and Samsung's IoT.js framework. This post will provide an overview, basic code snippets, and links to more information on the GitHub repo. I'll also provide exact links to the hardware I used for anyone who wants to tinker.

The Problem

There's an IoT device sitting on the local network, silently exposing functionality to the public. To save money on the bill of materials, there are no screens or buttons, and configuration is all done via a web UI. How does a developer get access to the system and use it? There are several ways to handle this discovery with physical interactions from the user, such as Bluetooth, RFID, QR codes, or even a URL printed on the device. There are also discovery protocols like Bonjour, although even that will not work on many WiFi LAN networks because UDP is commonly blocked at the WiFi hub (as is the case with our Samsung WiFi).

In this example, our device will be registered on the local WiFi network and have a local IP address. What we want is for a developer to be able to find this IP address and get access to the UI of the device. OK, let's start…

The Hardware

For this guide, I’m using the following items:

How to Build A Simple Connected Light with IoT.JS and Python Django - default-light-1-small

In this example, the Raspberry Pi Zero was placed in the 240V AC power-in 'cavity,' as seen in the picture below. Since our new light will not use 240V AC, it provides a convenient place for the Raspberry Pi to sit.

How to Build A Simple Connected Light with IoT.JS and Python Django - light-cavity-small

The LED light strip replaces the LED matrix of the original light. In this example, I'm only using the plastic water-resistant housing; all the electronics and control wires have been removed. The fixture was mounted on a piece of plywood to demonstrate the device.

How to Build A Simple Connected Light with IoT.JS and Python Django - light-internall-small

For instructions on how to physically connect your LED light to the Raspberry Pi, go here. The code to control the light is in my personal GitHub repo; the server is implemented in the file server_html.js. You can see line 63 calling the object's method lightcontrol.showRainbowLight(), which is exported from the lightcontrol.js file that controls the light. For now I will leave the details for a future blog post; this one is all about how we access and control the light and the server. How we control the hardware pins of the LED and make that work is for blog 2 in the series.
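As a hypothetical sketch (the real files in the repo may be organized differently), lightcontrol.js could expose its animation like this, with server_html.js simply requiring it:

// lightcontrol.js
var lightcontrol = {};

lightcontrol.showRainbowLight = function () {
  // Drive the LED strip through a rainbow animation;
  // the pin-level details are the topic of blog 2 in the series.
  console.log('rainbow animation started');
};

module.exports = lightcontrol;

// server_html.js
// var lightcontrol = require('./lightcontrol');
// lightcontrol.showRainbowLight();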

The Server

The basic functions of the server are to register and update the IP of our IoT Light and to route the user to the correct local IP address.

Register And Update The Light

The server holds the details of the light, and again, there are a number of protocols and standards to pick from! To keep things simple, we created a REST endpoint that allows a light to register; it does this with PUT, POST, and DELETE. The key to the server is to use the MAC address as a unique ID and hold the local IP address. In this example, the light will only work locally for the developer, and it's a valid use case for certain applications to be accessible only from within the local network. The basic sequence diagram looks like this:

How to Build A Simple Connected Light with IoT.JS and Python Django - LightRegistration

To ease the writing of REST endpoints, I used the very popular Django REST Framework. The light needs to register itself and update its local IP address. It uses the MAC address as a unique value; however, you are free to invent your own solution here. I created a very rich object model in my server in an attempt to future-proof the DB object as best as possible, but the controller for registering the light is surprisingly simple and compact, and most of the lines are documentation.

# Imports needed by this view (the serializer import path is project-specific):
from django.http import JsonResponse
from rest_framework.parsers import JSONParser
from .serializers import IoTMachineSerializer


def iot_machines_register(request):                     # TODO Add some type of authentication for iot devices
    """
    :param request: JSON {
                    device_id:      mac_adress e.g. 98:83:89:3a:96:a5
                    device_name:    "any text string"
                    local_ip:       IPV4 or IPV6 e.g. 192.168.0.1
                    }

    :return: HttpResponseForbidden, HttpResponse, HttpResponseRedirect

    Register an IoT Machine based on its MAC address.
    If this is a POST we check it's unique, verify the JSON package, and register the machine.

    If this is a PUT we check it exists, verify the JSON package, and update the machine.

    TODO - Authentication & Authorization!

    """

    if request.method == 'POST':
        data = JSONParser().parse(request)
        print("We got the following data in the request: {}".format(data))
        serializer = IoTMachineSerializer(data=data)
        if serializer.is_valid():
            serializer.save()
            return JsonResponse(serializer.data, status=201)
        return JsonResponse(serializer.errors, status=400)

The function checks the method with request.method == 'POST' and creates a serializer with serializer = IoTMachineSerializer(data=data) to parse the data from the POST message. Validation is done in the model, which makes things far simpler to code. If validation succeeds, the ORM saves the new object in the SQL database with serializer.save() and returns an HTTP 201 with return JsonResponse(serializer.data, status=201). If it fails, the model generates the correct error codes and the framework responds with the appropriate error message with return JsonResponse(serializer.errors, status=400). Details of this are in the GitHub repo.
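For completeness, here is a hedged sketch of the other side of this exchange: the light registering itself over HTTP from its JavaScript runtime. The path /iot/machines/register/ is an assumption for illustration (check urls.py in the repo); the field names mirror the docstring above.

var http = require('http');

var payload = JSON.stringify({
  device_id: '98:83:89:3a:96:a5',   // the light's MAC address
  device_name: 'Rainbow light',
  local_ip: '192.168.110.99'
});

var options = {
  hostname: 'www.noisyatom.tech',    // the central Django server
  port: 80,
  method: 'POST',
  path: '/iot/machines/register/',   // assumed URL for the register endpoint
  headers: {
    'Content-Type': 'application/json',
    'Content-Length': payload.length
  }
};

var request = http.request(options, function (response) {
  console.log('Registration status: ' + response.statusCode);
});
request.end(payload);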

Routing The User To The Local IP

The clever part of this system must allow the user to find the IP of the light. After doing this, all communications will be handled locally between the device on the local WiFi network and the IoT Light.

How to Build A Simple Connected Light with IoT.JS and Python Django - LightRouting

The sequence diagram above shows the flow of a user hitting the server that contains the local IP and sending a redirect. After this happens, the server is no longer part of the data flow for this user. All the clever control and data flow is between the local user and the IoT Device.

# Imports needed by this view (the model import path is project-specific):
import ipaddress
from django.http import (Http404, HttpResponse, HttpResponseForbidden,
                         HttpResponseRedirect)
from .models import IoTMachine


def iot_machine_find(request, mac_address):
    """
    Find an IoT Machine based on it's MAC address.
    If it has a public local IP address then redirect to this IP.
    If our embedded system does not have a valid IP address then don't redirect and show the user a
    friendly page saying the machine is not active at the moment.

    """
    print("Trying to find an IoT Machine and redirect locally with mac_address of: {}".format(mac_address))

    # Make sure we can find the machine via it's mac address
    try:
        iot_device = IoTMachine.objects.get(device_id=mac_address)
    except IoTMachine.DoesNotExist:
        return HttpResponse(status=404)

    # Make sure the IP address we pass back is still valid
    try:
        ipaddress.IPv4Address(iot_device.local_ip)
    except ValueError:
        # TODO Paint a nice screen and show the user the device exists but routing is broken
        print("There was an issue with the stored IP address for the device: {} - handle it and carry on".format(iot_device.mac_address))
        raise Http404  # raise (not return) so Django renders a 404 response

    # Happy path - direct the user to the device
    if request.method == 'GET':

        scheme = 'http'                 # TODO make this a dynamic variable from settings or DB table
        path = 'machine'                # TODO make this a dynamic variable from settings or DB table
        remote_url = "{0}://{1}/{2}".format(scheme,iot_device.local_ip,path)
        print("We found the device returning the IOT Local address: {}".format(remote_url))
        return HttpResponseRedirect(remote_url)

    # If we get this far - we don't support other methods at the minute so reply with a forbidden.
    return HttpResponseForbidden()

The first and second try/except clauses simply check that the MAC address exists, and if it does, that there is a valid IP address to send back. The happy path then checks the request method, and a GET returns the local IP address in a redirect with return HttpResponseRedirect(remote_url). The URL is hard-coded in this example; production systems would dynamically take these values from either configuration files or from data the IoT device provides.

All other HTTP methods are caught by the final return HttpResponseForbidden(). Again, much of the heavy lifting is done by the Django ORM and REST framework.

It's important to note how flat the structure is; Pythonic code tends to move away from multiple layers of abstraction, opting for structures that are as flat as possible. This only shows the logic the server is exercising, and the framework comes, as they say, with 'batteries included'. To get to the Django administration interface as a registered admin user, you just log in and hit the admin path. It will then be possible to view the active DB, registered IoT machines, and even manipulate any values of those machines. No additional code is required for this.

How to Build A Simple Connected Light with IoT.JS and Python Django - django-screen1-small

By selecting a single IoT device, the framework will pull all relevant data from the SQL DB and format and display it in the Django admin form. All of this is generated automatically:

How to Build A Simple Connected Light with IoT.JS and Python Django - django-screen2-small

I haven’t gone into detail about how the Django ORM works in this article. If you want to learn more, it’s best to visit the Python Django experts.

How Did The Client Find The Server?

At this point you might be thinking: wait a minute, not only do I still need to know about the public server, but I also need to know what the light ID is! Remember, the light has no physical buttons or screen! In this example we used a QR code that sits next to our light. The user scans the code with their phone, which routes them to the server using the light's own MAC address as the ID. You can try this yourself with the Samsung browser on your phone: scan the QR code below.

How to Build A Simple Connected Light with IoT.JS and Python Django - qrcode_b827eb01d83f-small

You will be routed to my test server at www.noisyatom.tech/iot/machine/b8:27:3b:01:d8:3f, which will forward you to the UI the light exposes. (Un)fortunately, it only works if you are here on the same WiFi network. :-) Your browser will try to connect you to http://192.168.110.99/light.

Incidentally you can try this from your Samsung browser by selecting the ‘Scan QR code’ from the top right selection menu, which should show you a screen like below:

How to Build A Simple Connected Light with IoT.JS and Python Django - Screenshot_qr-small

Further Exploration

There is a lot of information here, and I’ve not really explained how the model is created or how to tell the server what parameters of the model are important! I will follow up on this in future blog posts where I’ll look at the details of the code running on the light. The light acts like both a server and client: it’s a server to the device that wants access to functionality, and a client of the central server that holds details about the device. Once the light is activated, it goes through a very nice looking rainbow dance; you could imagine this being used as a mood light or as a device that interacts with other systems. Finally, I will revisit the server side code later and explain how we make the model persist to a DB and have only the server check and verify components of that model with the REST interface in a future blog.

Check out the light in action!

How to Build A Simple Connected Light with IoT.JS and Python Django - light-rainbow-small


How to Run IoT.js on the Raspberry PI 0

IoT.js is a lightweight JavaScript platform for building Internet of Things devices; this article will show you how to run it on a few dollars worth of hardware. The first version of it was released last year for various platforms including Linux, Tizen, and NuttX (the base of Tizen:RT). The Raspberry Pi 2 is one of the reference targets, but for demo purposes we also tried to build for the Raspberry Pi Zero, which is the most limited and cheapest device of the family. The main difference is the CPU architecture, which is ARMv6 (like the Pi 1), while the Pi 2 is ARMv7 and the Pi 3 is ARMv8 (aka ARM64).

IoT.js upstream uses a Python helper script to cross-build for supported devices, but instead of adding support for a new device I tried to build on the device using native tools with CMake and the default compiler options; it simply worked! While working on this, I decided to package iotjs for Debian to see how well it supports other architectures (MIPS, PPC, etc.); we will see.

Unfortunately, Debian armel isn't optimized for ARMv6 with FPU, which is what the Pi 1 and Pi 0 have, so the Raspbian project had to rebuild Debian for the ARMv6+VFP2 ARM variant to support all Raspberry Pi SBCs.

In this article, I'll share hints for running IoT.js on Raspbian, the OS officially supported by the Raspberry Pi Foundation; the following instructions will work on any Pi device since a portability strategy was preferred over optimization. I'll demonstrate three separate ways to do this: from packages, by building on the device, and by building in a virtual machine. By the way, an alternative to consider is to rebuild Tizen Yocto for the Pi 0, but I'll leave that as an exercise for the reader; you can accomplish this with a bitbake recipe, or you can ask for more hints in the comments section.

How to Run IoT.js on the Raspberry PI 0 - tizen-pizero

Install from Packages

iotjs landed in Debian's sid, and until it reaches the testing branch (and subsequently Raspbian and Ubuntu), the fastest way to get it is via precompiled packages from my personal Raspbian repo:

url='https://dl.bintray.com/rzr/raspbian-9-armhf'
source="/etc/apt/sources.list.d/bintray-rzr-raspbian-9-armhf.list"
echo "deb $url raspbian main" | sudo tee "$source"
sudo apt-get update
apt-cache search iotjs
sudo apt-get install iotjs
/usr/bin/iotjs
Usage: iotjs [options] {script | script.js} [arguments]

Use it

Usage is pretty straightforward; start with a hello world source:

echo 'console.log("Hello IoT.js !");' > example.js
iotjs  example.js 
Hello IoT.js !

More details about the current environment can be displayed (this is for iotjs 1.0 with the default built-in modules):

echo 'console.log(JSON.stringify(process));' > example.js
iotjs  example.js 
{"env":{"HOME":"/home/user","IOTJS_PATH":"","IOTJS_ENV":""},"native_sources":{"assert":true,"buffer":true,"console":true,"constants":true,"dns":true,"events":true,"fs":true,"http":true,"http_client":true,"http_common":true,"http_incoming":true,"http_outgoing":true,"http_server":true,"iotjs":true,"module":true,"net":true,"stream":true,"stream_duplex":true,"stream_readable":true,"stream_writable":true,"testdriver":true,"timers":true,"util":true},"platform":"linux","arch":"arm","iotjs":{"board":"\"unknown\""},"argv":["iotjs","example.js"],"_events":{},"exitCode":0,"_exiting":false} null 2

From here, you can look to use other built-in modules like http, fs, net, timer, etc.
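For instance, a quick check combining the fs and timer modules (both in the default profile) could look like this:

var fs = require('fs');

// Print the system load average every five seconds
setInterval(function () {
  fs.readFile('/proc/loadavg', function (err, data) {
    if (err) {
      console.error('read failed: ' + err);
      return;
    }
    console.log('loadavg: ' + data.toString());
  });
}, 5000);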

Need More Features?

More modules can be enabled in the master branch, so I also built snapshot packages that can be installed to enable more key features like GPIO, I2C, and more. For your convenience, the snapshot package can be installed to replace the latest release:

root@raspberrypi:/home/user$ apt-get remove iotjs iotjs-dev iotjs-dbgsym iotjs-snapshot
root@raspberrypi:/home/user$ aptitude install iotjs-snapshot
The following NEW packages will be installed:
  iotjs-snapshot{b}
(...)
The following packages have unmet dependencies:
 iotjs-snapshot : Depends: iotjs (= 0.0~1.0+373+gda75913-0~rzr1) but it is not going to be installed
The following actions will resolve these dependencies:
     Keep the following packages at their current version:
1)     iotjs-snapshot [Not Installed]                     
Accept this solution? [Y/n/q/?] n
The following actions will resolve these dependencies:

     Install the following packages:                 
1)     iotjs [0.0~1.0+373+gda75913-0~rzr1 (raspbian)]
Accept this solution? [Y/n/q/?] y
The following NEW packages will be installed:
  iotjs{a} iotjs-snapshot 
(...)
Do you want to continue? [Y/n/?] y
(...)
  iotjs-snapshot https://dl.bintray.com/rzr/raspbian-9-armhf/iotjs-snapshot_0.0~1.0+373+gda75913-0~rzr1_armhf.deb
  iotjs https://dl.bintray.com/rzr/raspbian-9-armhf/iotjs_0.0~1.0+373+gda75913-0~rzr1_armhf.deb

Do you want to ignore this warning and proceed anyway?
To continue, enter "yes"; to abort, enter "no": yes
Get: 1 https://dl.bintray.com/rzr/raspbian-9-armhf raspbian/main armhf iotjs armhf 0.0~1.0+373+gda75913-0~rzr1 [199 kB]
Get: 2 https://dl.bintray.com/rzr/raspbian-9-armhf raspbian/main armhf iotjs-snapshot armhf 0.0~1.0+373+gda75913-0~rzr1 [4344 B]
(...)

If you then run console.log(process) again, you'll see more interesting modules to use, like gpio, i2c, uart, and more; external modules can also be used, so check on the work in progress for sharing modules with the IoT.js community.
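As a sketch of the kind of code this enables (option and constant names follow the IoT.js GPIO documentation, but double-check them against your iotjs build and board before wiring anything):

var gpio = require('gpio');

var led = gpio.open({
  pin: 45,                        // pin number is board-specific
  direction: gpio.DIRECTION.OUT
}, function (err) {
  if (err) {
    console.error('gpio open failed: ' + err);
    return;
  }
  led.write(true, function () {
    console.log('gpio: writing: true');
  });
});

Of course, this can be reverted to the latest release by simply installing the iotjs package, because it has a higher priority than the snapshot version: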

root@raspberrypi:/home/user$ apt-get install iotjs
(...)
The following packages will be REMOVED:
  iotjs-snapshot
The following packages will be upgraded:
  iotjs
(...)
Do you want to continue? [Y/n] y
(...)

Build on the Device

It’s also possible to build the snapshot package from source with extra packaging patches, found in the community branch of IoT.js (which can be rebased on upstream anytime).

sudo apt-get install git time sudo
git clone https://github.com/tizenteam/iotjs
cd iotjs
./debian/rules
sudo debi

On the Pi 0, it took less than 30 minutes for this to finish when building over NFS. If you want to learn more, you can follow similar instructions for building IoTivity on ARTIK; building over NFS might be slower, but it will extend the life span of your SD cards.

Build on a Virtual Machine

A faster alternative that’s somewhere between building on the device and setting up a cross build environment (which always has a risk of inconsistencies) is to rebuild IoT.js with QEMU, Docker, and binfmt.

First install docker (I used 17.05.0-ce and 1.13.1-0ubuntu6), then install the remaining tools:

sudo apt-get install qemu qemu-user-static binfmt-support time
sudo update-binfmts --enable qemu-arm

docker build 'http://github.com/tizenteam/iotjs.git'

It's much faster this way; it took me less than five minutes. The files are inside the container, so they need to be copied back to the host. I made a helper script for the setup and to get the deb packages ready to be deployed on the device (sudo dpkg -i *.deb):

curl -sL https://rawgit.com/tizenteam/iotjs/master/run.sh | bash -x -
./tmp/out/iotjs/iotjs-dbgsym_0.0-0_armhf.deb
./tmp/out/iotjs/iotjs-dev_0.0-0_armhf.deb
./tmp/out/iotjs/iotjs_0.0-0_armhf.deb

I used the resin/rpi-raspbian Docker image (thanks again Resin!). Finally, I want to also thank my coworker Stefan Schmidt for the idea after he set up a similar trick for EFL's CI.

Further Reading

If you want to learn more, here are some additional resources to take your understanding further.

Raspberry Pi is a trademark of the Raspberry Pi Foundation.

How to Prototype Insecure IoTivity Apps with the Secure Library

IoTivity 1.3.1 has been released, and with it come some important new changes. First, you can rebuild packages from sources, with or without my hotfix patches, as explained recently in this blog post. For ARM users (of the ARTIK7), the fastest option is to download precompiled packages as .RPM for fedora-24 from my personal repository, or check the ongoing work for other OSes.

Copy and paste this snippet to install latest IoTivity from my personal repo:

distro="fedora-24-armhfp"
repo="bintray--rzr-${distro}"
url="https://bintray.com/rzr/${distro}/rpm"
rm -fv "/etc/yum.repos.d/$repo.repo"
wget -O- $url | sudo tee /etc/yum.repos.d/$repo.repo
grep $repo /etc/yum.repos.d/$repo.repo

dnf clean expire-cache
dnf repository-packages "$repo" check-update
dnf repository-packages "$repo" list --showduplicates # list all published versions
dnf repository-packages "$repo" upgrade # if previously installed
dnf repository-packages "$repo" install iotivity-test iotivity-devel # remaining ones

I also want to thank JFrog for offering the Bintray service to free and open source software developers.

Standalone Apps

In a previous blog post, I explained how to run examples that are shipped with the release candidate. You can also try with other existing examples (rpm -ql iotivity-test), but some don’t work properly. In those cases, try the 1.3-rel branch, and if you’re still having problems please report a bug.

At this point, you should know enough to start making your own standalone app and use the library like any other, so feel free to get inspired by, or even fork, the demo code I wrote to test integration on various OSes (Tizen, Yocto, Debian, Ubuntu, Fedora, etc.).

Let's clone the sources from the repo. Meanwhile, if you're curious, read the description of the tutorial projects collection.

sudo dnf install screen psmisc git make
git clone http://git.s-osg.org/iotivity-example
# Flatten all subprojects to "src" folders
make -C iotivity-example
# Explore all subprojects
find iotivity-example/src -iname "README*"


Note that most of the examples were written for prototyping, and security was not enabled at that time (on 1.2-rel security is not enabled by default except on Tizen).

Serve an Unsecured Resource

Let's go directly to the clock example, which supports security mode; this example implements the OCF OneIoT Clock model and demonstrates the CoAP Observe verb.

cd iotivity-example
cd src/clock/master/
make
screen
./bin/server -v
log: Server:
Press Ctrl-C to quit....
Usage: server -v
(...)
log: { void IoTServer::setup()
log: { OCStackResult IoTServer::createResource(std::__cxx11::string, std::__cxx11::string, OC::EntityHandler, void*&)
log: { FILE* override_fopen(const char*, const char*)
(...)
log: } FILE* override_fopen(const char*, const char*)
log: Successfully created oic.r.clock resource
log: } OCStackResult IoTServer::createResource(std::__cxx11::string, std::__cxx11::string, OC::EntityHandler, void*&)
log: } void IoTServer::setup()
(...)
log: deadline: Fri Jan  1 00:00:00 2038
oic.r.clock: { 2017-12-19T19:05:02Z, 632224498}
(...)
oic.r.clock: { 2017-12-19T19:05:12Z, 632224488}
(...)

Then, start the observer in a different terminal or on a different device (if using GNU screen, type Ctrl+a c to open a new window and Ctrl+a a to switch back to the server screen):

./bin/observer -v
log: { IoTObserver::IoTObserver()
log: { void IoTObserver::init()
log: } void IoTObserver::init()
log: } IoTObserver::IoTObserver()
log: { void IoTObserver::start()
log: { FILE* override_fopen(const char*, const char*)
(...)
log: { void IoTObserver::onFind(std::shared_ptr)
resourceUri=/example/ClockResUri
resourceEndpoint=coap://192.168.0.100:55236
(...)
log: { static void IoTObserver::onObserve(OC::HeaderOptions, const OC::OCRepresentation&, const int&, const int&)
(...)
oic.r.clock: { 2017-12-19T19:05:12Z, 632224488}
(...)

OK, now it should work and the date should be updated, but keep in mind it’s still *NOT* secured at all, as the resource is using a clear channel (coap:// URI).

To learn about the necessary changes, let's have a look at the commit history of the clock subproject. Persistent storage needed to be added to the server side's configuration; we can see from the trace that the Secure Resource Manager (SRM) loads credentials by overriding the fopen functions (those are designed to use hardware security features like ARM's Secure Element / secure key storage). The client or observer also needs a persistent storage update.

git diff clock/1.2-rel src/server.cpp

(...)
 
+static FILE *override_fopen(const char *path, const char *mode)
+{
+    LOG();
+    static const char *CRED_FILE_NAME = "oic_svr_db_anon-clear.dat";
+    char const *const filename
+        = (0 == strcmp(path, OC_SECURITY_DB_DAT_FILE_NAME)) ? CRED_FILE_NAME : path;
+    return fopen(filename, mode);
+}
+
+
 void IoTServer::init()
 {
     LOG();
+    static OCPersistentStorage ps {override_fopen, fread, fwrite, fclose, unlink };
     m_platformConfig = make_shared<PlatformConfig>
                        (ServiceType::InProc, // different service ?
                         ModeType::Server, // other is Client or Both
                         "0.0.0.0", // default ip
                         0, // default random port
                         OC::QualityOfService::LowQos, // qos
+                        &ps // Security credentials
                        );
     OCPlatform::Configure(*m_platformConfig);
 }

This example uses generic credential files for the client and server with maximum privileges, just like unsecured mode (or default 1.2-rel configuration).

cat oic_svr_db_anon-clear.json
{
    "acl": {
       "aclist2": [
            {
                "aceid": 0,
                "subject": { "conntype": "anon-clear" },
                "resources": [{ "wc": "*" }],
                "permission": 31
            }
        ],
        "rowneruuid" : "32323232-3232-3232-3232-323232323232"
    }
}

These files will not be loaded directly because they need to be compiled to the CBOR binary format using IoTivity's json2cbor tool (which, despite the name, does more than convert JSON files to CBOR: it also updates some fields).

Usage is straightforward:

/usr/lib/iotivity/resource/csdk/security/tool/json2cbor oic_svr_db_client.json  oic_svr_db_client.dat

To recap, the minimal steps that are needed are:

  1. Configure the resource's access control list to use the new format introduced in 1.3.
  2. Use a clear channel ("anon-clear") for all resources (wildcard wc: *); this is the reason coap:// was shown in the log.
  3. Set the verbs' (CRUD+N) permissions to the maximum, according to the OCF_Security_Specification_v1.3.0.pdf document.

One very important point: don't do this in a production environment; you don't want to be held responsible for neglecting security. However, this is perfectly fine for getting started with IoTivity and prototyping with the latest version.


Still Unsecured, Then What?

With a faulty security configuration you'll face the OC_STACK_UNAUTHORIZED_REQ error (Response error: 46). I tried to sum up the minimal necessary changes, but keep in mind that you shouldn't stop here. Next, you need to set up ACLs to match the desired policy as specified by Open Connectivity, check the hints linked on IoTivity's wiki, or use a provisioning tool. Stay tuned for more posts that cover these topics.


Improving Debug Code Performance in EFL

I work on EFL, a cross-platform graphical toolkit written in C. I recently decided to improve one aspect of the experience for developers using the API (otherwise known as users) by making EFL provide more information and stricter sanity checks when developing and debugging applications. A key requirement was ease of use. With these requirements, the solution was obvious; unfortunately, obvious solutions don't always work as well as you expect.

The Obvious Solution: an Environment Variable

Using an environment variable sounds like a good idea, but it comes with one major, unacceptable flaw: a significant performance impact. Unfortunately, one of the places we wanted to collect debug information was Eo, a hot-path in EFL. Believe it or not, adding a simple check to see if debug-mode is enabled was enough to degrade performance. To illustrate, consider the following code:

Note: thanks to Krister Walfridsson for pointing out a mistake in the following example.

#include <stdio.h>
#include <stdlib.h>

#define N 100000000 // Adjust for your machine

int main(int argc, char *argv[])
{
   int debug_on = !!getenv("DEBUG");
   unsigned long sum = 0;
   unsigned int i;

   for (i = 0 ; i < N ; i++)
     {
        unsigned int j;
        for (j = 0 ; j < N ; j++)
          {
#ifdef WITH_DEBUG
             if (debug_on)
                printf("Debug print.\n");
#endif

             sum += i;
          }
     }
   printf("Sum: %lu\n", sum);

   return 0;
}

Which when executed would result in:

$ # Debug disabled
$ gcc -O3 sum.c -o sum
$ time ./sum
Sum: 996882102603448320
./sum  0.11s user 0.00s system 95% cpu 0.118 total

$ # Debug enabled
$ gcc -O3 -DWITH_DEBUG=1 sum.c -o sum
$ time ./sum
Sum: 996882102603448320
./sum 0.17s user 0.00s system 98% cpu 0.176 total

This creates a 50% performance penalty just to have it compiled in; this doesn't even include enabling it! While this is an extreme example, code paths with a similar impact do exist in our codebase.

This performance penalty was unacceptable, so I went back to the drawing board.

An Alternative Solution: a Compilation Option when Compiling the Library

The next idea was to add an option to compile the library in debug mode, so it will always collect the data whenever it’s used. In EFL’s source code we would have something like the following code:

EAPI Eo *efl_ref(Eo *obj)
{
   // snip ...
#ifdef EFL_DEBUG
   collect_debug_info();
#endif
   // snip ...
}

API users will have to recompile the library with debugging on and use it instead of the version included with the distro. This introduces a few issues, some of which are solvable, some are not. The primary issue was that users had to download and compile the library: a big enough hurdle to render this solution almost useless.

This had been the implementation for quite some time because it was good enough for most of us, but I recently decided to implement a better, user-friendly solution I had in mind.

A Better Solution: Automatically Provide Both Versions

Eo is small, so compiling it twice wouldn’t have a significant impact on either the build time or the deliverable size. Therefore, I modified our build-system to build Eo twice, once as libeo.so and once as libeo_dbg.so: the first version compiled as normal, and the second with EFL_DEBUG set.

This was already a better solution because it meant users of the API could simply link against the new library when compiling their applications to use the debug version, and when they are ready to release, link against the non-debug one. This, while much better, still requires some advanced know-how from the users that I wanted to eliminate.

An Even Better Solution: Use DLL Injection to Simplify Usage

I wanted to create an easy way for users to toggle between the two at runtime, without having to relink their applications. I realized I could use LD_PRELOAD (more info) in order to force the debug version of the library to be loaded first; problem solved.

I wrote a small script to ease usage:

#!/bin/sh
# The variables surrounded by @ are replaced by autotools
prefix="@prefix@"
exec_prefix="@exec_prefix@"
if [ $# -lt 1 ]
then
   echo "Usage: $0 <executable> [executable parameters]"
else
   LD_PRELOAD="@libdir@/libeo_dbg.so" "$@"
fi

Using this is now as simple as wrapping execution with the previous script:

$ # Debug off
$ ./myapp param1 param2

$ # Debug on
$ eo_debug ./myapp param1 param2

Mission accomplished!

Note: The script and libeo_dbg.so are automatically built and installed with the rest of EFL. This means they will be available for everyone who installs the efl package, or in some distributions the efl-dev package.

Conclusion

We now have an easy-to-use tool for API users to debug and check their applications. As a side effect, we also earned a useful tool for end users (users who use applications written with EFL) for providing better information when reporting bugs. All of this without any impact on normal execution.

Please let me know if you spotted any mistakes or have any suggestions for improvement.

This article was originally posted on Tom’s personal blog, and has been approved to be posted here.

Get Started with IoTivity Interactions on the ARTIK10 and Tizen

A curious mind recently asked me to share materials about the OCF SmartHome demo, or perhaps I should call it the "Minimalist Smart Switch" instead. The demo was displayed at the Embedded Linux Conference in Berlin, and featured IoTivity running on an ARTIK10 SoC connected to a Gear S2 smartwatch; both run Tizen OS.

Get Started with IoTivity Interactions on the ARTIK10 and Tizen - ocf

You will find more technical details in the following slide deck.

Install Tizen and IoTivity

If you want to run this demo, you can download the system image and uncompress the archive directly to the SD card using QEMU tools.

lsblk # will list all your disks
disk=/dev/sdTODO # update with your disk id
file=tizen-common-artik10-20160801rzr.qcow2
sudo qemu-img convert -p "${file}" "${disk}"
sync 

Once this is completed, insert the SD card into the ARTIK10 and turn it on; it will boot Tizen and launch the IoTivity server. For more information about this, check out the previous blog posts about booting Tizen on ARTIK and building software for Tizen OS. These will show you how to rebuild the latest version of IoTivity from scratch.

Of course, this can be adapted to other hardware like the ARTIK5, and maybe the ARTIK7 too; it would be a good idea to bookmark this page as a reference for Tizen on ARTIK. If you run into issues, feel free to ask about them in the ARTIK forums.

Note: you can also do this on Yocto using meta-artik, but using Tizen will save you the time and resources needed to rebuild everything.

Setup the Tizen Clients

For IoTivity clients on mobile devices, like the Samsung Z1, or smartwatches, such as the Samsung Gear S2, the clients need a little preparation. Most of it is documented on the Tizen page of the IoTivity wiki. I will also be personally available to assist developers at the next Tizen community meeting (2016-11-28 09h UTC), which has more details on the wiki's Meeting page.

What’s Next for Tizen and IoTivity?

Microcontrollers have recently been a focus of some of my work. I shared some hints in the presentation "From Arduino Microcontrollers to Tizen using IoTivity", which I gave at IoT With the Best 2016.

Others at the Samsung Open Source Group have worked on more constrained devices by bridging different technologies together, including JerryScript, 6LoWPAN, and more. This was demonstrated at the recent Samsung Open Source Conference.

One likely challenge for ARTIK0 owners will be running iotivity-constrained on the operating systems IoTivity supports, such as RIOT, Zephyr, and Tizen RT. If you're one of these people, please keep us updated in the Tizen IRC channel or on the mailing lists!

Get Started with IoTivity Interactions on the ARTIK10 and Tizen - lfelc

Thanks to Luis De Bethencourt for helping me with the pictures of the Gear S2 on my wrist :-)

Video Decoding with the Exynos Multi-Format Codec & GStreamer

Exynos SoCs have an IP block known as the Multi-Format Codec (MFC) that allows them to do hardware accelerated video encoding/decoding, and the mainline kernel has an s5p-mfc Video4Linux2 (V4L2) driver that supports the MFC. The s5p-mfc driver is a Memory-to-Memory (M2M) V4L2 driver; it's called M2M because the kernel moves video buffers from an output queue to a capture queue. The user-space enqueues buffers into the output queue, then the kernel passes these buffers to the MFC, where they are converted and placed in the capture queue so the user-space can dequeue them.

The GStreamer (gst) multimedia framework supports V4L2 M2M devices, but only decoders, through the v4l2videodec element. Randy Li is working to also support M2M encoders in GStreamer (v4l2videoenc), but this hasn't landed in upstream GStreamer yet.

This post will explain how to use GStreamer and the Linux mainline kernel to do hardware video decoding on an Exynos based machine.

Install a Mainline Kernel on an Exynos Machine

The first step is to install a mainline kernel on the Exynos board. We have been working lately to make the s5p-mfc driver more stable and to have everything enabled by default, so the Linux kernel version used should be at least v4.8-rc1 and the defconfig used should be exynos_defconfig. I've written previous blog posts that explain how to cross compile and install a mainline kernel on different Exynos-based machines, including Exynos5 Odroid boards and Exynos5 Chromebooks, so refer to these to learn how to do this.

GStreamer Uninstalled Setup

The GStreamer package in most Linux distributions doesn’t come with support for the v4l2videodec gst element enabled, so GStreamer should be built from source to do hardware accelerated video decoding. Fortunately, GStreamer has the gst-uninstalled script to setup a development environment that allows the use of gst elements that were built from source and not installed in the system. There are many guides on how to use gst-uninstalled, including this great tutorial from Arun Raghavan.

When building the gst-plugins-good module, the --enable-v4l2-probe configure option must be enabled to allow GStreamer to probe v4l2 devices. This is needed because v4l2 device drivers can be probed in different orders, meaning the video device nodes can change. This option needs to be enabled explicitly before building gst-plugins-good:

$ cd gst-plugins-good
$ ./configure --enable-v4l2-probe
$ make

Find the Video Device Node for MFC Decoding

At this point, the s5p-mfc driver can be used to do hardware accelerated video decoding. You first need to know which video device node belongs to the s5p-mfc video decoder; GStreamer registers a gst element named v4l2videoNdec for it, where N is the video device node number.

First, get the video device nodes the s5p-mfc driver has registered; the v4l2-ctl command from v4l-utils can be used for this:

$ v4l2-ctl --list-devices
s5p-mfc-dec (platform:11000000.codec):
	/dev/video5
	/dev/video6

The driver has registered two device nodes; v4l2-ctl can also be used to learn more about them:

$ v4l2-ctl --info -d /dev/video5 
Driver Info (not using libv4l2):
	Driver name   : s5p-mfc
	Card type     : s5p-mfc-dec
	Bus info      : platform:11000000.codec
	Driver version: 4.8.0
	Capabilities  : 0x84204000
		Video Memory-to-Memory Multiplanar
		Streaming
		Extended Pix Format
		Device Capabilities
	Device Caps   : 0x04204000
		Video Memory-to-Memory Multiplanar
		Streaming
		Extended Pix Format
$ v4l2-ctl --info -d /dev/video6
Driver Info (not using libv4l2):
	Driver name   : s5p-mfc
	Card type     : s5p-mfc-enc
	Bus info      : platform:11000000.codec
	Driver version: 4.8.0
	Capabilities  : 0x84204000
		Video Memory-to-Memory Multiplanar
		Streaming
		Extended Pix Format
		Device Capabilities
	Device Caps   : 0x04204000
		Video Memory-to-Memory Multiplanar
		Streaming
		Extended Pix Format

The video device node for the decoder is /dev/video5 in this case, so there should be a v4l2video5dec gst element available:

$ gst-inspect-1.0 v4l2video5dec
Factory Details:
  Rank                     primary + 1 (257)
  Long-name                V4L2 Video Decoder
  Klass                    Codec/Decoder/Video
  Description              Decode video streams via V4L2 API
  Author                   Nicolas Dufresne 

Plugin Details:
  Name                     video4linux2
  Description              elements for Video 4 Linux
  ...

Video Decoding

MFC supports a variety of video formats for decoding and produces decoded NV12/NV21 YUV frames. The v4l2-ctl tool can be used with the --list-formats-out option to list all supported formats.

$ v4l2-ctl -d /dev/video5 --list-formats-out
ioctl: VIDIOC_ENUM_FMT
	Index       : 0
	Type        : Video Output Multiplanar
	Pixel Format: 'H264' (compressed)
	Name        : H.264

	Index       : 1
	Type        : Video Output Multiplanar
	Pixel Format: 'M264' (compressed)
	Name        : H.264 MVC

	Index       : 2
	Type        : Video Output Multiplanar
	Pixel Format: 'H263' (compressed)
	Name        : H.263
	...

Now that the video device node for the decoder and its corresponding v4l2videodec gst element are known, a supported video (e.g. H.264) can be decoded using GStreamer with an example pipeline such as this:

$ gst-launch-1.0 filesrc location=test.mov ! qtdemux ! h264parse ! v4l2video5dec ! videoconvert ! kmssink

Happy hacking!

How Enlightenment Gadgets Handle Sizing

This tutorial will provide further detail about aspects of Enlightenment’s new gadget system. Specifically, it will explore how sizing works in different contexts and how simple sizing policies can be leveraged to provide the best view of a gadget. Let’s start with the basics: what is sizing and why does it matter?

Gadgets work a bit differently from typical application widgets, where one would simply pack them into a layout or use WEIGHT and ALIGN hints to fill portions of available regions. A gadget site uses an automatic sizing algorithm to fit itself into its given location. This ensures that gadgets are always the size the user has specified, while also maintaining the best sizes for the gadgets so they look the way the author intended. Finally, it also greatly simplifies the work of gadget authors, since they no longer need to calculate exact sizes and continually adjust based on the available size.

Example of Gadget Sizing

Looking at the relevant code from the “Start” gadget, we can see it uses the following line for its sizing:

evas_object_size_hint_aspect_set(o, EVAS_ASPECT_CONTROL_BOTH, 1, 1);

This informs the gadget site that the gadget should always maintain a 1:1 aspect for width:height, ensuring that the correct amount of space is allocated for the gadget regardless of the size at which it is displayed in a given container. The resizing can be seen in action here.

Taking a slightly more complicated case, let’s look at the clock gadget from the same video. The following function controls its sizing:

static void
_eval_instance_size(Instance *inst)
{
   Evas_Coord mw, mh;
   int sw = 0, sh = 0;
   Evas_Object *ed = elm_layout_edje_get(inst->o_clock);

   edje_object_size_min_get(ed, &mw, &mh);

   if ((mw < 1) || (mh < 1))
     {
        if (edje_object_part_exists(ed, "e.sizer"))
          {
             edje_object_part_geometry_get(ed, "e.sizer", NULL, NULL, &mw, &mh);
          }
        else
          {
             Evas_Object *owner;

             owner = e_gadget_site_get(inst->o_clock);
             switch (e_gadget_site_orient_get(owner))
               {
                case E_GADGET_SITE_ORIENT_HORIZONTAL:
                  evas_object_geometry_get(owner, NULL, NULL, NULL, &sh);
                  break;

                case E_GADGET_SITE_ORIENT_VERTICAL:
                  evas_object_geometry_get(owner, NULL, NULL, &sw, NULL);
                  break;

                default: break;
               }

             evas_object_resize(inst->o_clock, sw, sh);
             edje_object_message_signal_process(ed);

             edje_object_parts_extends_calc(ed, NULL, NULL, &mw, &mh);
          }
     }

   if (mw < 4) mw = 4;
   if (mh < 4) mh = 4;

   if (mw < sw) mw = sw;
   if (mh < sh) mh = sh;

   evas_object_size_hint_aspect_set(inst->o_clock, EVAS_ASPECT_CONTROL_BOTH, mw, mh);
}

Breaking the Code Down

The normal path for this code to take is to get the size of the layout using edje_object_size_min_get(), and then set that as the aspect for the gadget. This will trigger when the digital clock changes its face, ensuring that all the digits receive adequate space. In the case where this size has not been calculated yet, two alternate paths are provided for creating an aspect for the gadget.

In the first path, a check for the “e.sizer” part in the layout is performed and the size of this part is used if the part exists. This is a theme-defined part that may or may not exist, so checking to see which case is currently in use is important before attempting to use this size.

If this check fails, the gadget retrieves the size of the gadget site on its non-extending axis, then resizes itself to this size and forces a recalculation to make an attempt at accurate sizing. Note: in this case it is usually guaranteed that another recalculation will be triggered soon after, so this is likely to only be a temporary size.

Regardless of the method used, an aspect hint is set for the gadget that the gadget site can then use to handle any resizing that may occur. Since the gadget receives events any time a gadget site resize occurs, it’s possible for a gadget to change its aspect hints at any time to re-aspect itself and take up more or less space depending on various scenarios.
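
As a hypothetical illustration of that last point, a gadget could watch for resizes with a plain Evas callback and update its aspect hint from there. The callback name, the setup helper, and the 2:1 ratio below are made up for this example and are not taken from the clock gadget:

#include "e.h" /* Enlightenment module header, pulls in the Evas/Edje APIs */

static void
_my_gadget_resize_cb(void *data EINA_UNUSED, Evas *e EINA_UNUSED, Evas_Object *obj, void *event_info EINA_UNUSED)
{
   /* Hypothetical policy: switch to a 2:1 (width:height) aspect; the gadget
    * site picks up the new hint and re-aspects the gadget accordingly. */
   evas_object_size_hint_aspect_set(obj, EVAS_ASPECT_CONTROL_BOTH, 2, 1);
}

static void
_my_gadget_setup(Evas_Object *o)
{
   /* Re-evaluate the aspect hint every time the gadget object is resized. */
   evas_object_event_callback_add(o, EVAS_CALLBACK_RESIZE, _my_gadget_resize_cb, NULL);
}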

This wraps up the mechanics of gadget sizing, but stay tuned for even more gadget-related tutorials in the future!