Connecting sensors to Mozilla’s IoT Gateway

This is the first post about Mozilla’s IoT effort, and specifically the gateway project, which illustrates the “Web of Things” concept of creating a decentralized Internet of Things using web technologies.

Today we will focus on the gateway, as it is the core component of the whole framework. Version 0.4.0 was just released, so you can try it yourself on a Raspberry Pi 3. The Raspberry Pi 3 is the reference platform, but it should be possible to port it to other single board computers (like ARTIK, etc.).

This post will explain how to get started and how to establish basic automation using I2C sensors and actuators on the gateway device (without any cloud connectivity).

To get started, first install the gateway according to these straightforward instructions:

Prepare SD Card

You need to download the Raspbian-based gateway-0.4.0.img.zip (a 1 GB archive) and dump it to an SD card (2.6 GB minimum).

lsblk # Identify your sdcard adapter ie:
disk=/dev/disk/by-id/usb-Generic-TODO
url=https://github.com/mozilla-iot/gateway/releases/download/0.4.0/gateway-0.4.0.img.zip
wget -O- $url | funzip | sudo dd of=$disk bs=8M oflag=dsync

If you only want to use the gateway and not hack on it, you can skip this next part, which enables a developer shell through SSH. However, if you do want access to a developer shell, mount the 1st partition called “boot” (you may need to replug your SD card adapter) and add a file to enable SSH:

sudo touch /media/$USER/boot/ssh
sudo umount /media/$USER/*

First boot

Next, install the SD card in your Raspberry Pi 3 (older RPis could work too, particularly if you have a WiFi adapter).

When it has completed its first boot, you can check that the Avahi daemon is registering “gateway.local” using mDNS (multicast DNS):

ping gateway.local
ssh pi@gateway.local # Raspbian default password for pi user is "raspberry"

Let’s also track local changes to /etc by installing etckeeper, and change the default password.

sudo apt-get install etckeeper
sudo passwd pi

Logging in

You should now be able to access the web server, which is running on port 8080 (earlier versions used port 80):

http://gateway.local:8080/

It will redirect you to a page to configure WiFi:

URL: http://gateway.local:8080/
Welcome
Connect to a WiFi network?
FreeWifi_secure
FreeWifi
OpenBar
...
(skip)

We can skip it for now:

URL: http://gateway.local:8080/connecting
WiFi setup skipped
The gateway is now being started. Navigate to gateway.local in your web browser while connected to same network as the gateway to continue setup.
Skip

After a short delay, you should be able to reconnect to the entry page:

http://gateway.local:8080/

The gateway can be registered on mozilla.org for remote management, but we can skip this for now.

The administrator is now welcome to register new users:

URL: http://gateway.local:8080/signup/
Mozilla IoT
Welcome
Create your first user account:
user: user
email: user@localhost
password: password
password: password
Next

And we’re ready to use it:

URL: http://gateway.local:8080/things
Mozilla IoT
No devices yet. Click + to scan for available devices.
Things
Rules
Floorplan
Settings
Log out

Filling dashboard

You can start filling your dashboard with virtual resources.

First hit the “burger menu” icon, go to the Settings page, and then to the Add-ons page.

Here you can enable a “Virtual Things” adapter:

URL: http://gateway.local:8080/settings/addons/
virtual-things-adapter 0.1.4
Mozilla IoT Virtual Things Adapter
by Mozilla IoT

Once enabled, it should be listed alongside ThingURLAdapter on the adapters page:

URL: http://gateway.local:8080/settings/adapters
VirtualThingsAdapter
virtual-things
ThingURLAdapter
thing-url-adapter

You can then go back to the first Things page (it’s the first entry in the menu).

We can start adding “things” by pressing the + button at the bottom of the page.

URL: http://gateway.local:8080/things
Virtual On/Off Color Light
Color Light
Save

Then press “Done” at bottom.

From this point, you can decide to control a virtual lamp from the UI, and even establish some basic rules (second entry in menu) with more virtual resources.

Sensing Reality

Because IoT is not about virtual worlds, let’s see how to deal with the physical world using sensors and actuators.

For sensors, there are many ways to connect them to computers, using analog or digital inputs on different buses. To make things easier for application developers, this can be abstracted using W3C’s generic sensors API.

While working on IoT.js modules, I made a “generic-sensors-lite” module that abstracts a couple of I2C drivers from the NPM repository. To verify the concept, I published the generic-sensors-lite NPM module first, and then made an adapter for Mozilla’s IoT Gateway (which runs Node.js).
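To give an idea of the programming model, here is a minimal sketch in the style of the W3C generic sensors API that generic-sensors-lite mimics; the require path and constructor name are assumptions on my part, so check the module’s README for the exact exports.

// Sketch of W3C-style generic sensor usage on Node.js/IoT.js.
// The module path and constructor name below are assumptions.
var sensors = require('generic-sensors-lite');

var lightSensor = new sensors.AmbientLightSensor();
lightSensor.onreading = function () {
  // illuminance is reported in lux, as in the W3C spec
  console.log('Illuminance: ' + lightSensor.illuminance + ' lx');
};
lightSensor.onerror = function (err) {
  console.error('Sensor error: ' + err);
};
lightSensor.start();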

Before using the mozilla-iot-generic-sensors-adapter, you need to enable the I2C bus on the gateway (on version 0.4.0; master has I2C enabled by default).

sudo raspi-config
Raspberry Pi Software Configuration Tool (raspi-config)
5 Interfacing Options Configure connections to peripherals
P5 I2C Enable/Disable automatic loading of I2C kernel module
Would you like the ARM I2C interface to be enabled?
Yes
The ARM I2C interface is enabled
ls -l /dev/i2c-1
lsmod | grep i2c
i2c_dev 16384 0
i2c_bcm2835 16384 0

Of course, you’ll need at least one real sensor attached to the I2C pins of the board. Today only two modules are supported: an ambient light sensor and a temperature sensor (their addresses show up in the bus scan below).

You can double check that the addresses are present on the I2C bus:

sudo apt-get install i2c-tools
/usr/sbin/i2cdetect -y 1
0 1 2 3 4 5 6 7 8 9 a b c d e f
00: -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- 23 -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: -- -- -- -- -- -- -- 77

Install mozilla-iot-generic-sensors-adapter

Until the sensors adapter is officially supported by the Mozilla IoT Gateway, you’ll need to install it on the device (and rebuild dependencies on the target) using:

url=https://github.com/rzr/mozilla-iot-generic-sensors-adapter
dir=~/.mozilla-iot/addons/generic-sensors-adapter
git clone --depth 1 -b 0.0.1 $url $dir
cd $dir
npm install

Restart the gateway (or reboot):
sudo systemctl restart mozilla-iot-gateway.service
tail -F /home/pi/.mozilla-iot/log/run-app.log

Then the sensors addon can be enabled by pressing the “enable” button on the addons page:

URL: http://gateway.local:8080/settings/addons
generic-sensors-adapter 0.0.1
Generic Sensors for Mozilla IoT Gateway

It will appear on the adapters page too:

URL: https://gateway.local/settings/adapters
VirtualThingsAdapter
virtual-things
ThingURLAdapter
thing-url-adapter
GenericSensorsAdapter
generic-sensors-adapter

Now we can add those sensors as new things (using the Save and Done buttons):

URL: http://gateway.local:8080/things
Ambient Light Sensor
Unknown device type
Save
Temperature Sensor
Unknown device type
Save

Then they will appear as:

  • http://gateway.local:8080/things/0 (for Ambient Light Sensor)
  • http://gateway.local:8080/things/1 (for Temperature Sensor)

To get values updated in the UI, the sensors need to be turned on first (try again if you find a bug, and file tickets that I will forward to the drivers’ authors).

A GPIO adapter can also be used for actuators, as shown in this demo video.

If you have other sensors, check if the community has shared a JS driver, and please let me know about integrating new sensor drivers into generic-sensors-lite.

IoT.js landed in Raspbian

Following previous efforts to deploy iotjs on the Raspberry Pi 0, I am happy to announce that IoT.js 1.0 landed in Debian, and was synced to Raspbian (armhf) and Ubuntu as well.

While the package is targeting the next distro release, it can be easily installed on current versions by adding a couple of config files for “APT pinning”.

If you haven’t set up Raspbian 9 yet, just dump the current Raspbian image to an SD card (for the record, I used version 2018-03-13-raspbian-stretch-lite).

Boot your Pi.  To keep track of changes in /etc/, let’s install etckeeper:

sudo apt-get update
sudo apt-get install etckeeper

Upgrade current packages:

sudo apt-get upgrade
sudo apt-get dist-upgrade

Declare the current release as default source:

cat<<EOT | sudo tee /etc/apt/apt.conf.d/50raspi
APT::Default-Release "stretch";
EOT

Then add a repo file for the next release:

cat /etc/apt/sources.list | sed 's/stretch/buster/g' | sudo tee /etc/apt/sources.list.d/raspi-buster.list

Unless you want to test the upcoming release, it may be safer to avoid upgrading all packages yet. In other words, we prefer that only iotjs be available from this “not yet supported” repo.

cat<<EOT | sudo tee /etc/apt/preferences.d/raspi-buster.pref
Package: *
Pin: release n=buster
Pin-Priority: -10
EOT

cat<<EOT | sudo tee /etc/apt/preferences.d/iotjs.pref
Package: iotjs
Pin: release n=buster
Pin-Priority: 1
EOT

Now iotjs 1.0-1 should appear as available for installation:

sudo apt-get update ; apt-cache search iotjs
iotjs - Javascript Framework for Internet of Things

apt-cache policy iotjs
iotjs:
  Installed: (none)
  Candidate: 1.0-1
  Version table:
     1.0-1 1
        -10 http://raspbian.raspberrypi.org/raspbian buster/main armhf Packages

Let’s install it:

sudo apt-get install iotjs
man iotjs

Even if version 1.0 is limited compared to the development branch, you can start by using the http module, which is enabled by default (https is not).

To illustrate this: when I investigated “air quality monitoring” for a TizenRT+IoT.js demo, I found out that OpenWeatherMap is collecting and publishing carbon monoxide data, so let’s try their REST API.

Create a file, example.js for example, that contains:

var http = require('http');

var location = '48,-1';
var datetime = 'current';

//TODO: replace with your openweathermap.org personal key
var api_key = 'fb3924bbb699b17137ab177df77c220c';

var options = {
  hostname: 'api.openweathermap.org',
  port: 80,
  path: '/pollution/v1/co/' + location + '/' + datetime + '.json?appid=' + api_key,
};

// workaround a bug by setting the Host header explicitly
options.headers = {
  host: options.hostname
};

http.request(options, function (res) {
  receive(res, function (data) {
    console.log(data);
  });
}).end();

function receive(incoming, callback) {
  var data = '';

  incoming.on('data', function (chunk) {
    data += chunk;
  });

  incoming.on('end', function () {
    callback ? callback(data) : '';
  });
}

And just run it:

iotjs example.js
{"time":"2018-03-27T02:24:33Z","location":{"latitude":47.3509,"longitude":-0.9081},"data":[{"precision":-4.999999987376214e-07,"pressure":1000,"value":1.5543508880000445e-07}
(...)

You can then use this to do things such as update a map or raise an alert on anything useful, or try rebuilding the master branch.
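As a minimal example of the “raise an alert” idea, the console.log callback above could be swapped for a function like the following; the threshold value is purely illustrative and the data layout follows the sample output shown above.

// Hypothetical alert: scan the returned samples and warn above a threshold.
function alertOnCo(data) {
  var result = JSON.parse(data);
  result.data.forEach(function (sample) {
    if (sample.value > 2e-7) { // illustrative threshold, not a health guideline
      console.log('High CO value ' + sample.value +
                  ' (pressure level ' + sample.pressure + ')');
    }
  });
}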

How to Build A Simple Connected Light with IoT.JS and Python Django

Many web developers I meet are interested in working with embedded systems and IoT, but they always seem to have reservations about just how to make the whole system (i.e. a server, a ‘thing,’ and a client) work! The amount of information online is extensive, but it’s often hard to know where to start! This blog post will provide a very simple example of how to get a basic LED light to work in a local network, with a web client that provides a way to identify the light with no prior knowledge from the user and no required installation on the client device.

To do this we are going to use the very popular Python Django web framework and Samsung’s IoT.js framework. This post will provide an overview, basic code snippets, and links to more information on the GitHub repo. I’ll also provide exact links to the hardware I used for anyone who wants to tinker.

The Problem

There’s an IoT device sitting on the local network, silently exposing functionality to the public. To save money on the bill of materials, there are no screens or buttons, and configuration is all done via a web UI. How does a developer get access to the system and use it? There are several ways to handle this discovery with physical interactions from the user, such as Bluetooth, RFID, QR codes, or even a URL printed on the device. There are also discovery protocols like Bonjour, although even that will not work on many WiFi LAN networks because UDP is commonly blocked at the WiFi hub (as it is with our Samsung WiFi).

In this example, our device will be registered on the local WiFi network and have a local IP address. What we want is for a developer to be able to find this IP address and get access to the UI of the device. OK, let’s start…

The Hardware

For this guide, I’m using the following items:

[Image: default-light-1]

In this example, the Raspberry Pi Zero was placed in the 240 V AC power-in ‘cavity’, as seen in the picture below. Since our new light will not use 240 V AC, it provides a convenient place for the Raspberry Pi to sit.

[Image: the power-in cavity holding the Raspberry Pi Zero]

The LED light strip replaces the LED matrix of the original light. In this example, I’m only using the plastic water-resistant housing; all the electronics and control wires have been removed. The fixture was mounted on a piece of plywood to demonstrate the device.

[Image: light internals]

For instructions on how to physically connect your LED light to the Raspberry Pi, go here. The code to control the light is on my personal GitHub repo; the server is implemented in the file server_html.js. You can see line 63 calling the object’s method lightcontrol.showRainbowLight(). This method is exported in the lightcontrol.js file, which controls the light. For now I will leave the details for a future blog; this post is all about how we access and control the light and server. How we control the hardware pins of the LED and make that work is for blog 2 in the series.

The Server

The basic functions of the server are to register and update the IP of our IoT Light and to route the user to the correct local IP address.

Register And Update The Light

The server holds the details of the light, and again, there are a number of protocols and standards to pick from! To keep things simple, we created a REST endpoint that allows a light to register; it does this with PUT, POST, and DELETE. The key to the server is to use the MAC address as a unique ID and hold the local IP address. In this example, the light will only work locally for the developer, and it’s a valid use case for certain applications to only be accessible to someone within the local network. The basic sequence diagram looks like this:

[Image: light registration sequence diagram]

To ease the writing of REST endpoints, I used the very popular Django REST Framework. The light needs to register itself and update its local IP address; it uses the MAC address as a unique value, but you are free to invent your own solution here. I created a very rich object model in my server in an attempt to future-proof the DB object as best as possible, but the controller for registering the light is surprisingly simple and compact, and most of the lines are documentation.

# Likely imports for this view (the serializer import path depends on the
# app layout in the repo, so adjust it if needed):
from django.http import JsonResponse
from rest_framework.parsers import JSONParser
from .serializers import IoTMachineSerializer


def iot_machines_register(request):                     # TODO Add some type of authentication for iot devices
    """
    :param request: JSON {
                    device_id:      mac_address e.g. 98:83:89:3a:96:a5
                    device_name:    "any text string"
                    local_ip:       IPV4 or IPV6 e.g. 192.168.0.1
                    }

    :return: HttpResponseForbidden, HttpResponse, HttpResponseRedirect

    Register an IoT Machine based on its MAC address.
    If this is a POST we check it's unique, verify the JSON package, and register the machine.

    If this is a PUT we check it exists, verify the JSON package, and update the machine.

    TODO - Authentication & Authorization!

    """

    if request.method == 'POST':
        data = JSONParser().parse(request)
        print("We got the following data in the request: {}".format(data))
        serializer = IoTMachineSerializer(data=data)
        if serializer.is_valid():
            serializer.save()
            return JsonResponse(serializer.data, status=201)
        return JsonResponse(serializer.errors, status=400)

The function checks the method with request.method == ‘POST’ and creates a serializer with serializer = IoTMachineSerializer(data=data) to parse the data from the POST message. Validation is done in the model, which makes things far simpler to code. If validation succeeds, the ORM saves the new object in the SQL database with serializer.save() and returns an HTTP 201 with return JsonResponse(serializer.data, status=201). If it fails, the model generates the correct error codes and the framework responds with the appropriate error message with return JsonResponse(serializer.errors, status=400). Details of this are on the GitHub repo.
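For context, the device side of this exchange could look roughly like the following IoT.js/Node-style sketch, reusing the http.request pattern shown earlier in this series; the hostname, path, and field values are placeholders for your own server and light, not the actual endpoints of the repo.

var http = require('http');

// Placeholder values: point these at your own server and light.
var payload = JSON.stringify({
  device_id: '98:83:89:3a:96:a5',   // the light's MAC address
  device_name: 'Rainbow light',
  local_ip: '192.168.0.42'
});

var options = {
  hostname: 'www.example.com',      // hypothetical server
  port: 80,
  method: 'POST',
  path: '/iot/machines/register/',  // hypothetical route to the view above
  headers: {
    'Content-Type': 'application/json',
    'Content-Length': payload.length
  }
};

var req = http.request(options, function (res) {
  console.log('Registration returned HTTP ' + res.statusCode);
});
req.write(payload);
req.end();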

Routing The User To The Local IP

The clever part of this system is allowing the user to find the IP of the light. After that, all communications are handled locally between the device on the local WiFi network and the IoT light.

[Image: light routing sequence diagram]

The sequence diagram above shows the flow of a user hitting the server that contains the local IP and sending a redirect. After this happens, the server is no longer part of the data flow for this user. All the clever control and data flow is between the local user and the IoT Device.

def iot_machine_find(request, mac_address):
    """
    Find an IoT Machine based on its MAC address.
    If it has a public local IP address then redirect to this IP.
    If our embedded system does not have a valid IP address then don't redirect and show the user a
    friendly page saying the machine is not active at the moment.

    """
    print("Trying to find an IoT Machine and redirect locally with mac_address of: {}".format(mac_address))

    # Make sure we can find the machine via its mac address
    try:
        iot_device = IoTMachine.objects.get(device_id=mac_address)
    except IoTMachine.DoesNotExist:
        return HttpResponse(status=404)

    # Make sure the IP address we pass back is still valid
    try:
        ipaddress.IPv4Address(iot_device.local_ip)
    except ValueError:
        # TODO Paint a nice screen and show the user the device exists but routing is broken
        print("There was an issue with the stored IP address for the device: {} - handle it an carry on".format(iot_device.mac_address))
        return Http404

    # Happy path - direct the user to the device
    if request.method == 'GET':

        scheme = 'http'                 # TODO make this a dynamic variable from settings or DB table
        path = 'machine'                # TODO make this a dynamic variable from settings or DB table
        remote_url = "{0}://{1}/{2}".format(scheme,iot_device.local_ip,path)
        print("We found the device returning the IOT Local address: {}".format(remote_url))
        return HttpResponseRedirect(remote_url)

    # If we get this far - we don't support other methods at the minute so reply with a forbidden.
    return HttpResponseForbidden()

The first and second try/except clauses simply check that the MAC address exists, and if it does, that there is a valid IP address to send back. The happy path checks for the GET method and returns the local IP address in a redirect with return HttpResponseRedirect(remote_url). The URL scheme and path are hard coded in this example; however, production systems would dynamically take these values from either configuration files or from data the IoT device provides.

All other HTTP methods are rejected with HttpResponseForbidden(). Again, much of the heavy lifting is done by the Django ORM and REST framework.

It’s important to note how flat the structure is; Pythonic code tends to move away from multiple layers of abstraction, opting for structures that are as flat as possible. This only shows the logic the server is exercising, and the framework comes, as they say, with ‘batteries included’. To get to the Django administration interface as a registered admin user, you just log in and hit the admin path. It will then be possible to view the active DB, the registered IoT machines, and even manipulate any values of those machines. No additional code is required for this.

[Image: Django admin view of registered IoT machines]

By selecting a single IoT device, the framework will pull all relevant data from the SQL DB and format and display it in the Django admin form. All of this is generated automatically:

[Image: Django admin form for a single IoT device]

I haven’t gone into detail about how the Django ORM works in this article. If you want to learn more, it’s best to visit the Python Django experts.

How Did The Client Find The Server?

At this point you might be thinking: wait a minute, not only do I still need to know about the public server, but I also need to know what the light ID is! Remember, the light has no physical buttons or screen! In this example we used a QR code that sits next to our light. The user scans the code with their phone, which routes them to the server using the light’s MAC address as the ID. You can try this yourself with the Samsung browser on your phone – scan the QR code below.

[Image: QR code for the test light]

You will be routed to my test server at www.noisyatom.tech/iot/machine/b8:27:3b:01:d8:3f, which will forward you to the UI the light exposes. (Un)fortunately, it only works if you are here on the same WiFi network as the light. :-) Your browser will try to connect you to http://192.168.110.99/light.

Incidentally, you can try this from your Samsung browser by selecting ‘Scan QR code’ from the top-right menu, which should show you a screen like the one below:

[Image: Samsung browser QR code scanner]

Further Exploration

There is a lot of information here, and I’ve not really explained how the model is created or how to tell the server which parameters of the model are important! I will follow up on this in future blog posts, where I’ll look at the details of the code running on the light. The light acts as both a server and a client: it’s a server to the device that wants access to its functionality, and a client of the central server that holds details about the device. Once the light is activated, it goes through a very nice looking rainbow dance; you could imagine this being used as a mood light or as a device that interacts with other systems. Finally, in a future blog I will revisit the server-side code and explain how we make the model persist to a DB and have the server check and verify components of that model with the REST interface.

Check out the light in action!

[Image: the light showing its rainbow effect]

How to Run IoT.js on the Raspberry PI 0

IoT.js is a lightweight JavaScript platform for building Internet of Things devices; this article will show you how to run it on a few dollars worth of hardware. The first version was released last year for various platforms including Linux, Tizen, and NuttX (the base of Tizen:RT). The Raspberry Pi 2 is one of the reference targets, but for demo purposes we also tried to build for the Raspberry Pi Zero, which is the most limited and cheapest device of the family. The main difference is the CPU architecture, which is ARMv6 (like the Pi 1), while the Pi 2 is ARMv7 and the Pi 3 is ARMv8 (aka ARM64).

IoT.js upstream uses a Python helper script to crossbuild for supported devices, but instead of adding support for a new device, I tried to build on the device itself using native tools with cmake and the default compiler options; it simply worked! While working on this, I decided to package iotjs for Debian to see how well it supports other architectures (MIPS, PPC, etc.); we will see.

Unfortunately, Debian armel isn’t optimized for ARMv6 and its FPU, both of which are present on the Pi 1 and Pi 0, so the Raspbian project had to rebuild Debian for the ARMv6+VFP2 ARM variant to support all Raspberry Pi SBCs.

In this article, I’ll share hints for running IoT.js on Raspbian, the OS officially supported by the Raspberry Pi Foundation; the following instructions will work on any Pi device since a portability strategy was preferred over optimization. I’ll demonstrate three separate ways to do this: from packages, by building on the device, and by building in a virtual machine. By the way, an alternative to consider is to rebuild Tizen Yocto for the Pi 0, but I’ll leave that as an exercise for the reader; you can accomplish this with a bitbake recipe, or you can ask for more hints in the comments section.

[Image: Tizen running on the Raspberry Pi Zero]

Install from Packages

iotjs landed in Debian’s sid, and until it reaches the testing branch (and subsequently Raspbian and Ubuntu), the fastest way to get it is via precompiled packages from my personal Raspbian repo:

url='https://dl.bintray.com/rzr/raspbian-9-armhf'
source="/etc/apt/sources.list.d/bintray-rzr-raspbian-9-armhf.list"
echo "deb $url raspbian main" | sudo tee "$source"
sudo apt-get update
apt-cache search iotjs
sudo apt-get install iotjs
/usr/bin/iotjs
Usage: iotjs [options] {script | script.js} [arguments]

Use it

Usage is pretty straightforward, start with a hello world source:

echo 'console.log("Hello IoT.js !");' > example.js
iotjs  example.js 
Hello IoT.js !

More details about the current environment can be displayed (this is for iotjs 1.0 with the default built-in modules):

echo 'console.log(JSON.stringify(process));' > example.js
iotjs  example.js 
{"env":{"HOME":"/home/user","IOTJS_PATH":"","IOTJS_ENV":""},"native_sources":{"assert":true,"buffer":true,"console":true,"constants":true,"dns":true,"events":true,"fs":true,"http":true,"http_client":true,"http_common":true,"http_incoming":true,"http_outgoing":true,"http_server":true,"iotjs":true,"module":true,"net":true,"stream":true,"stream_duplex":true,"stream_readable":true,"stream_writable":true,"testdriver":true,"timers":true,"util":true},"platform":"linux","arch":"arm","iotjs":{"board":"\"unknown\""},"argv":["iotjs","example.js"],"_events":{},"exitCode":0,"_exiting":false} null 2

From here, you can look to use other built-in modules like http, fs, net, timer, etc.
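For instance, here is a small sketch combining the fs module and a timer; the file path is only an example.

var fs = require('fs');

// Print the system uptime counters every 5 seconds.
setInterval(function () {
  fs.readFile('/proc/uptime', function (err, data) {
    if (err) {
      console.error('read failed: ' + err);
      return;
    }
    console.log('uptime: ' + data.toString());
  });
}, 5000);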

Need More Features?

More modules can be enabled in the master branch, so I also built snapshot packages that can be installed to enable more key features like GPIO, I2C, and more. For your convenience, the snapshot package can be installed to replace the latest release:

root@raspberrypi:/home/user$ apt-get remove iotjs iotjs-dev iotjs-dbgsym iotjs-snapshot
root@raspberrypi:/home/user$ aptitude install iotjs-snapshot
The following NEW packages will be installed:
  iotjs-snapshot{b}
(...)
The following packages have unmet dependencies:
 iotjs-snapshot : Depends: iotjs (= 0.0~1.0+373+gda75913-0~rzr1) but it is not going to be installed
The following actions will resolve these dependencies:
     Keep the following packages at their current version:
1)     iotjs-snapshot [Not Installed]                     
Accept this solution? [Y/n/q/?] n
The following actions will resolve these dependencies:

     Install the following packages:                 
1)     iotjs [0.0~1.0+373+gda75913-0~rzr1 (raspbian)]
Accept this solution? [Y/n/q/?] y
The following NEW packages will be installed:
  iotjs{a} iotjs-snapshot 
(...)
Do you want to continue? [Y/n/?] y
(...)
  iotjs-snapshot https://dl.bintray.com/rzr/raspbian-9-armhf/iotjs-snapshot_0.0~1.0+373+gda75913-0~rzr1_armhf.deb
  iotjs https://dl.bintray.com/rzr/raspbian-9-armhf/iotjs_0.0~1.0+373+gda75913-0~rzr1_armhf.deb

Do you want to ignore this warning and proceed anyway?
To continue, enter "yes"; to abort, enter "no": yes
Get: 1 https://dl.bintray.com/rzr/raspbian-9-armhf raspbian/main armhf iotjs armhf 0.0~1.0+373+gda75913-0~rzr1 [199 kB]
Get: 2 https://dl.bintray.com/rzr/raspbian-9-armhf raspbian/main armhf iotjs-snapshot armhf 0.0~1.0+373+gda75913-0~rzr1 [4344 B]
(...)

If you then run console.log(process) again, you’ll see more interesting modules to use, like gpio, i2c, uart and more (a small GPIO sketch follows the example below), and external modules can also be used; check the work in progress on sharing modules with the IoT.js community. Of course, this can be reverted to the latest release by simply installing the iotjs package, because it has higher priority than the snapshot version.

root@raspberrypi:/home/user$ apt-get install iotjs
(...)
The following packages will be REMOVED:
  iotjs-snapshot
The following packages will be upgraded:
  iotjs
(...)
Do you want to continue? [Y/n] y
(...)
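As promised above, here is a small GPIO blink sketch that becomes possible with the snapshot package; the pin number and the exact option names are assumptions based on the IoT.js GPIO documentation, so double-check them against the docs shipped with your build.

// Blink an LED on a GPIO pin; pin number and option names are assumptions.
var gpio = require('gpio');

var led = gpio.open({
  pin: 20,
  direction: gpio.DIRECTION.OUT
}, function (err) {
  if (err) {
    console.error('gpio open failed: ' + err);
    return;
  }
  var on = false;
  setInterval(function () {
    on = !on;        // toggle the pin state
    led.write(on);
  }, 500);
});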

Build on the Device

It’s also possible to build the snapshot package from source with extra packaging patches, found in the community branch of IoT.js (which can be rebased on upstream anytime).

sudo apt-get install git time sudo
git clone https://github.com/tizenteam/iotjs
cd iotjs
./debian/rules
sudo debi

On the Pi 0, it took less than 30 minutes over NFS for this to finish. If you want to learn more, you can follow similar instructions for building IoTivity on ARTIK; using NFS might be slower, but it will extend the life span of your SD cards.

Build on a Virtual Machine

A faster alternative that’s somewhere between building on the device and setting up a cross build environment (which always has a risk of inconsistencies) is to rebuild IoT.js with QEMU, Docker, and binfmt.

First install docker (I used 17.05.0-ce and 1.13.1-0ubuntu6), then install the remaining tools:

sudo apt-get install qemu qemu-user-static binfmt-support time
sudo update-binfmts --enable qemu-arm

docker build 'http://github.com/tizenteam/iotjs.git'

It’s much faster this way; it took me less than five minutes. The files are inside the container, so they need to be copied back to the host. I made a helper script to set this up and get the deb packages ready to be deployed on the device (sudo dpkg -i *.deb):

curl -sL https://rawgit.com/tizenteam/iotjs/master/run.sh | bash -x -
./tmp/out/iotjs/iotjs-dbgsym_0.0-0_armhf.deb
./tmp/out/iotjs/iotjs-dev_0.0-0_armhf.deb
./tmp/out/iotjs/iotjs_0.0-0_armhf.deb

I used the Resin/rpi-raspbian docker image (thanks again Resin!). Finally, I want to also thank my coworker Stefan Schmidt for the idea after he setup a similar trick for EFL’s CI.

Further Reading

If you want to learn more, here are some additional resources to take your understanding further.

Raspberry Pi is a trademark of the Raspberry Pi Foundation.

Building IoTivity for ARM on ARTIK Devices

There are several options to build IoTivity for ARM targets or any non-x86 hardware, but first you have to decide which operating system you want to use. In this article, I won’t compare OSes or devices; instead, I’ll give a couple of hints that apply to ARTIK 5, 7, and 10 devices (not the ARTIK 0 family, which runs TizenRT). These steps can also be applied to other single board computers like the Raspberry Pi.

Build for Tizen with GBS

The first and easiest way to build IoTivity is for Tizen, using GBS. This process was explained in a previous article.

For your knowledge, GBS was inspired by Debian’s git-buildpackage and uses an ARM toolchain that runs in a chrooted ARM environment using QEMU. Both the ARTIK boards and the Raspberry Pi are used as Tizen reference platforms.

Build for Yocto with Bitbake

The second option is to crossbuild with the Yocto toolchain. Theoretically, it should be the fastest way, but in practice it might be the opposite, because external packages will be rebuilt from scratch if the project doesn’t provide a Yocto SDK; this can be a long, resource-consuming process.

Anyway, this is what is used in some OSS automotive projects (like AGL or GENIVI); the following slides provide a tutorial to get familiar with bitbake and OE layers (like meta-oic for IoTivity).

This can then be deployed to ARTIK 5 or 10 using meta-artik (ARTIK7 might need extra patches).

Cross Building Manually

Another option is to setup your toolchain with the scons environment, but I would not recommend it because you’ll probably lack tracking or control (output binaries will not be packaged) and risk consistency issues. If you choose this method, refer to IoTivity’s wiki page about crossbuilding.

Build on the Device

The last and most obvious option is to build on the device itself; this takes a bit longer, but is totally possible assuming your device has the resources (it worked for me on the RPI3 and ARTIK10). For this to work cleanly, I suggest you build a system package if your distro supports it (.rpm, .deb, etc.). This way you can track the different build configurations and avoid mistakes caused by manual operations.

Build on ARTIK10

The ARTIK 10 supports building with Fedora 22; RPM packages can be built from Tizen’s spec file plus a handful of patches (I needed to use scons install and avoid duplication of packaging efforts). This work is still under review in the master branch, and will be backported to the 1.3-rel branch once 1.3.1 is released. Meanwhile, you can clone my sandbox branch.

mkdir -p ${HOME}/rpmbuild/SOURCES/
git clone http://github.com/tizenteam/iotivity -b sandbox/pcoval/on/next/fedora
cd iotivity
git archive HEAD | gzip - > ${HOME}/rpmbuild/SOURCES/iotivity-1.3.1.tar.gz
rpmbuild -ba tools/tizen/iotivity.spec -D "_smp_mflags -j1" 

This is easy because the ARTIK10 has a lot of eMMC and RAM. The 8-core CPU is nice, but it won’t help much because parallelizing too much will eat all of the device’s RAM. To ensure reproducibility, use -j1, but if you’re in a hurry you can try -j2 or more.

Build on ARTIK7

The next challenge is the ARTIK 710, which is less powerful than the ARTIK 10 (RAM=1GB, eMMC=4GB) and uses Fedora 24. This is a slightly less favorable case because there are some extra steps due to the RAM and disk space being more limited. We’ll use extra storage to add a memory swap and mock: a nice Fedora tool to build RPMs from git sources (it’s very similar to pbuilder for Debian). Extra storage can be connected to the board via the SD or USB bus, but I prefer to use NFS; it’s not optimal, but it works.

Setup NFS

First, set up NFS on your development host and share a directory on the LAN; mine is Debian based:

sudo apt-get install nfs-kernel-server
mkdir -p /tmp/srv/nfs
ifconfig # Note interface IP to be used later

cat /etc/exports
# TODO: LAN's IP might be adjusted here:
/tmp/srv/nfs 192.0.0.0/255.0.0.0(rw,sync,no_root_squash,no_subtree_check)

Use NFS

Now, login to the target and install NFS and the other tools we’ll use:

dnf remove iotivity # Uninstall previous version if present
dnf install nfs-utils mock-scm mock rpm-build screen

Mount a directory for the device:

mnt=/tmp/mnt/nfs/host/$(hostname)/
host=192.168.0.2 # TODO adjust with IP of your host
mkdir -p $mnt ; mount $host:/tmp/srv/nfs $mnt

Attach a swap file:

file="$mnt/swap.tmp"
dd if=/dev/zero of=$file bs=1k count=2097152 # 2GB

losetup /dev/loop0 "$file"
mkswap /dev/loop0
swapon /dev/loop0

Because the eMMC is also limited, the build will be done on remote storage too. This won’t use an NFS shared folder directly because mock doesn’t support it, so let’s cheat by mounting an ext4 partition file over NFS, the same way we did above:

file="$mnt/ext4.tmp"
dd if=/dev/zero of=$file bs=1k count=2097152 # 2GB

losetup /dev/loop1 "$file"
mkfs.ext4 /dev/loop1

src=/dev/loop1
dst=/var/lib/mock/
mount $src $dst

Build IoTivity

Create a user to run mock on the remote filesystem:

user=abuild # TODO change if needed
adduser $user
home="/home/$user"
mnt=/tmp/mnt/nfs/host/$(hostname)/
mkdir -p "$home" "$mnt$home"
chown -Rv $user "$home" "$mnt$home"
mount --bind "$mnt$home" "$home"

su -l $user

Mock is pretty straightforward, but here is an example of the upcoming 1.3.1 release with the patches I mentioned above. If necessary, you can rebase your own private repo on it.

package=iotivity
url="https://github.com/TizenTeam/iotivity.git"
branch="sandbox/pcoval/on/next/fedora"
conf="fedora-24-armhfp"
spec=./tools/tizen/iotivity.spec
  
time mock -r "$conf" \
    --scm-enable \
    --scm-option method=git \
    --scm-option package="${package}" \
    --scm-option git_get=set \
    --scm-option write_tar=True \
    --scm-option branch="${branch}" \
    --scm-option git_get="git clone $url" \
    --scm-option spec="${spec}" \
    --resultdir=${HOME}/mock \
    --define "_smp_mflags -j1" \
    #eol

Now wait and check the log trace:

You are attempting to run "mock" which requires administrative
privileges, but more information is needed in order to do so.
Authenticating as "root"
Password: 
INFO: mock.py version 1.3.4 starting (python version = 3.5.3)...
(...)
Start: run
(... time to clone and build iotivity repo ...)
Finish: rpmbuild iotivity-1.3.1-0.src.rpm
Finish: build phase for iotivity-1.3.1-0.src.rpm
INFO: Done(/home/abuild/mock/iotivity-1.3.1-0.src.rpm) Config(fedora-24-armhfp) 116 minutes 1 seconds
INFO: Results and/or logs in: /home/abuild/mock
INFO: Cleaning up build root ('cleanup_on_success=True')
Start: clean chroot
Finish: clean chroot
Finish: run

Depending on network bandwidth, RPMs will be produced in a reasonable time (less than a night for sure).

You can now validate reference examples:

su root -c "dnf remove -y iotivity"
su root -c "dnf install -y --allowerasing ${HOME}/mock/iotivity*.arm*.rpm"

rpm -ql iotivity-test

cd /usr/lib/iotivity/resource/examples ; ./simpleserver 2 
# 2 is needed to enable security node.

In other session:

cd /usr/lib/iotivity/resource/examples ; ./simpleclient
GET request was successful
Resource URI: /a/light
Server format in GET response:10000
Server version in GET response:2048

Bringing it all Together

There has been one major relevant change since 1.2.x: the default build configuration now uses secured mode (Tizen has had it enabled for longer). For developers, this means that if your application is not configured to support the security ACL, it won’t work, and you can expect this OC_STACK_UNAUTHORIZED_REQ error:

onGET Response error: 46

The following presentation provides some insight on the IoTivity security features.

The fallback option is to rebuild IoTivity without SECURITY enabled (using --define "SECURED 0"), but this won’t be certified as OCF compliant. Finally, these build steps can be replicated for other projects using IoTivity.

How to use V4L2 Cameras on the Raspberry Pi 3 with an Upstream Kernel

A V4L2 staging driver for the Raspberry Pi (RPi) was recently merged into the Linux kernel 4.11. While this driver is currently under development, I wanted to test it and to provide help with V4L2-related issues. So, I took some time to build an upstream kernel for the Raspberry Pi 3 with V4L2 enabled. This isn’t a complex process, but it requires some tricks for it to work; this article describes the process.

Prepare an Upstream Kernel

The first step is to prepare an upstream kernel by cloning a git tree from the kernel repositories. Since the Broadcom 2835 camera driver (bcm2835-v4l2) is currently under staging, it’s best to clone the staging tree because it contains the staging/vc04_services directory with both ALSA and V4L2 drivers:

$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging.git
$ cd staging
$ git checkout staging-next

There’s an extra patch that is required for DT to work with the bcm2835-v4l2 driver:

[PATCH] ARM: bcm2835: Add VCHIQ to the DT.

Signed-off-by: Eric Anholt <eric@anholt.net>

commit e89ae78395394148da6c0e586e902e6489e04ed1
Author: Eric Anholt <eric@anholt.net>
Date:   Mon Oct 3 11:23:34 2016 -0700

    ARM: bcm2835: Add VCHIQ to the DT.
    
    Signed-off-by: Eric Anholt <eric@anholt.net>

diff --git a/arch/arm/boot/dts/bcm2835-rpi.dtsi b/arch/arm/boot/dts/bcm2835-rpi.dtsi
index 38e6050035bc..1f42190e8558 100644
--- a/arch/arm/boot/dts/bcm2835-rpi.dtsi
+++ b/arch/arm/boot/dts/bcm2835-rpi.dtsi
@@ -27,6 +27,14 @@
 			firmware = <&firmware>;
 			#power-domain-cells = <1>;
 		};
+
+		vchiq {
+			compatible = "brcm,bcm2835-vchiq";
+			reg = <0x7e00b840 0xf>;
+			interrupts = <0 2>;
+			cache-line-size = <32>;
+			firmware = <&firmware>;
+		};
 	};
 };

You need to apply this to the git tree in order for the vchiq driver to work.

Prepare a Cross-Compile Make Script

While it’s possible to build the kernel directly on any RPi, building it with a cross-compiler is significantly faster. For such builds, I use a helper script rather than plain “make” to set the needed configuration environment.

Before being able to cross-compile, you need to install a cross compiler on your local machine; several distributions come with cross-compiler packages, or you can download an arm cross-compiler from kernel.org. In my case, I installed the toolchain at /opt/gcc-4.8.1-nolibc/arm-linux/.

I named my make script rpi_make; it sets the cross-compiler and defines a place to install the final output files. It has the following content:

#!/bin/bash

# Custom those vars to your needs
ROOTDIR=/devel/arm_rootdir/
CROSS_CC_PATH=/opt/gcc-4.8.1-nolibc/arm-linux/bin
CROSS_CC=arm-linux-
HOST=raspberrypi

# Handle arguments
INSTALL="no"
BOOT="no"
ARG=""
while [ "$1" != "" ]; do
	case $1 in
	rpi_install|install)
		INSTALL=yes
		;;
	boot|reboot)
		INSTALL=yes
		BOOT=yes
		;;
	*)
		ARG="$ARG $1"
	esac
	shift
done

# Add cross-cc at PATH
PATH=$CROSS_CC_PATH:$PATH

# Handle Kernel makefile rules
if [ "$ARG" == "" ]; then
	rm -rf $ROOTDIR/

	set -eu

	make CROSS_COMPILE=$CROSS_CC ARCH=arm \
	    INSTALL_PATH=$ROOTDIR \
	    INSTALL_MOD_PATH=$ROOTDIR \
	    INSTALL_FW_PATH=$ROOTDIR \
	    INSTALL_HDR_PATH=$ROOTDIR \
	    zImage modules dtbs

	make CROSS_COMPILE=$CROSS_CC ARCH=arm \
	    INSTALL_PATH=$ROOTDIR \
	    INSTALL_MOD_PATH=$ROOTDIR \
	    INSTALL_FW_PATH=$ROOTDIR \
	    INSTALL_HDR_PATH=$ROOTDIR \
	    modules_install

	# Create a tarball with the drivers
	mkdir -p $ROOTDIR/install
	(cd $ROOTDIR/lib; tar cfz $ROOTDIR/install/drivers.tgz modules/)

	# Copy Kernel and DTB at the $ROOTDIR/install/ dir
	VER=$(ls $ROOTDIR/lib/modules)
	cp ./arch/arm/boot/dts/bcm283*.dtb $ROOTDIR/install
	cp ./arch/arm/boot/zImage $ROOTDIR/install/vmlinuz-$VER
	cp .config $ROOTDIR/install/config-$VER
	cp System.map $ROOTDIR/install/System.map-$VER
else
	make CROSS_COMPILE=$CROSS_CC ARCH=arm \
	    INSTALL_PATH=$ROOTDIR \
	    INSTALL_MOD_PATH=$ROOTDIR \
	    INSTALL_FW_PATH=$ROOTDIR \
	    INSTALL_HDR_PATH=$ROOTDIR \
	    $ARG
fi

echo "Build finished."
echo "  Install: $INSTALL"
echo "  Reboot : $BOOT"

# Install at $HOST
if [ "$INSTALL" == "yes" ]; then
	DIR=$(git rev-parse --abbrev-ref HEAD)
	if [ "$(echo $DIR|grep rpi)" == "" ]; then
		DIR=upstream
	fi

	echo "Installing new drivers at $HOST:/boot/$DIR"
	scp $ROOTDIR/install/drivers.tgz $HOST:/tmp
	ssh root@$HOST "(cd /lib; tar xf /tmp/drivers.tgz)"
	ssh root@$HOST "mkdir -p /boot/$DIR"
	scp $ROOTDIR/install/*dtb $ROOTDIR/install/*-$VER root@$HOST:/boot/$DIR
fi

# Reboots $HOST
if [ "$BOOT" == "yes" ]; then
	ssh root@$HOST "reboot"
fi

Please note, you need to change the CROSS_CC_PATH var to point to the directory where you installed the cross-compiler. You may also need to change the CROSS_CC variable in this script to match the name of the cross-compiler. In my case, the cross-compiler is called /opt/gcc-4.8.1-nolibc/arm-linux/bin/arm-linux-gcc. The ROOTDIR contains the path where drivers, firmware, headers, and documentation will be installed. In this script it’s set to /devel/arm_rootdir.

Prepare a Build Script

There are a number of drivers that need to be enabled in .config to build the kernel, and new configuration data may be needed as kernel development progresses. Also, although the RPi3 supports 64-bit kernels, the userspace provided with the NOOBS distribution is 32 bits. There’s also a TODO for the bcm2835 mentioning that it should be ported to work on arm64. So, I opted to build a 32-bit kernel; this required a hack because the device tree files for the CPU used in the RPi3 (Broadcom 2837) exist only under the arch/arm64 directory. So, my build procedure had to change the kernel build system to build them for 32 bits as well.

Instead of manually handling the required steps, I opted to use a build script, called build, to set the configuration using the scripts/config script. The advantage of this approach is that it makes it easier to maintain as newer kernel releases are added.

#!/bin/bash

# arm32 multi defconfig
rpi_make multi_v7_defconfig

# Modules and generic options
enable="MODULES MODULE_UNLOAD DYNAMIC_DEBUG"
disable="LOCALVERSION_AUTO KPROBES STRICT_MODULE_RWX MODULE_FORCE_UNLOAD MODVERSIONS MODULE_SRCVERSION_ALL MODULE_SIG MODULE_COMPRESS ARM_MODULE_PLTS BPF_JIT TEST_ASYNC_DRIVER_PROBE I2C_STUB SPI_LOOPBACK_TEST INTERVAL_TREE_TEST PERCPU_TEST TEST_LKM TEST_USER_COPY TEST_BPF TEST_STATIC_KEYS CRYPTO_TEST MODULE_FORCE_LOAD DRM_TEGRA_STAGING PRISM2_USB COMEDI RTL8192U RTLLIB R8712U R8188EU RTS5208 VT6655 VT6656 ADIS16201 ADIS16203 ADIS16209 ADIS16240 AD7606 AD7780 AD7816 AD7192 AD7280 ADT7316 AD7150 AD7152 AD7746 AD9832 AD9834 ADIS16060 AD5933 SENSORS_ISL29028 ADE7753 ADE7754 ADE7758 ADE7759 ADE7854 AD2S90 AD2S1200 AD2S1210 FB_SM750 FB_XGI USB_EMXX SPEAKUP MFD_NVEC STAGING_MEDIA STAGING_BOARD LTE_GDM724X MTD_SPINAND_MT29F LNET DGNC GS_FPGABOOT COMMON_CLK_XLNX_CLKWZRD FB_TFT FSL_MC_BUS WILC1000_SDIO WILC1000_SPI MOST KS7010 GREYBUS"

# Bluetooth
enable="$enable BNEP_MC_FILTER BNEP_PROTO_FILTER BT_HCIUART_H4 BT_HCIUART_BCSP BT_HCIUART_LL BT_HCIUART_3WIRE BT_BNEP_MC_FILTER BT_BNEP_PROTO_FILTER BT_HCIUART_BCM"
module="$module BT_BNEP BT_HCIUART BT_BCM"
disable="$disable BT_HCIUART_ATH3K BT_HCIUART_INTEL BT_HCIUART_QCA BT_HCIUART_AG6XX BT_HCIUART_MRVL BT_HCIBPA10X"

# Raspberry Pi 3 drivers
enable="$enable ARCH_BCM2835 I2C_BCM2835 BCM2835_WDT SPI_BCM2835 SPI_BCM2835AUX SND_BCM2835_SOC_I2S USB_DWC2_HOST DMA_BCM2835 BCM2835_MBOX RASPBERRYPI_POWER RASPBERRYPI_FIRMWARE STAGING BCM_VIDEOCORE"
module="$module BCM2835_VCHIQ SND_BCM2835 UIO UIO_PDRV_GENIRQ FUSE_FS CUSE"
disable="$disable BCM2835_VCHIQ_SUPPORT_MEMDUMP UIO_CIF UIO_DMEM_GENIRQ UIO_AEC UIO_SERCOS3 UIO_PCI_GENERIC UIO_NETX UIO_PRUSS UIO_MF624"

# Raspberry Pi 3 serial console
enable="$enable SERIAL_8250_EXTENDED SERIAL_8250_SHARE_IRQ SERIAL_8250_BCM2835AUX SERIAL_8250_DETECT_IRQ"
disable="$disable SERIAL_8250_MANY_PORTS SERIAL_8250_RSA"

# logitech HCI
enable="$enable HID_PID USB_HIDDEV LOGITECH_FF LOGIWHEELS_FF HIDRAW"
module="$module HID USB_G_HID I2C_HID HID_LOGITECH HID_LOGITECH_DJ HID_LOGITECH_HIDPP"
disable="$disable HID_ASUS LOGIRUMBLEPAD2_FF LOGIG940_FF"

# This is currently needed for bcm2835-v4l2 driver to work
#enable="$enable VIDEO_BCM2835"
module="$module VIDEO_BCM2835"
disable="$disable DRM_VC4"

# Settings related to the media subsystem
enable="$enable MEDIA_ANALOG_TV_SUPPORT MEDIA_DIGITAL_TV_SUPPORT MEDIA_RADIO_SUPPORT MEDIA_SDR_SUPPORT MEDIA_RC_SUPPORT MEDIA_CEC_SUPPORT DVB_NET DVB_DYNAMIC_MINORS RC_DECODERS LIRC VIDEO_VIVID_CEC MEDIA_SUBDRV_AUTOSELECT VIDEO_AU0828_V4L2 VIDEO_AU0828_RC VIDEO_CX231XX_RC SMS_SIANO_RC"
module="$module RC_MAP IR_NEC_DECODER IR_RC5_DECODER IR_RC6_DECODER IR_JVC_DECODER IR_SONY_DECODER IR_SANYO_DECODER IR_SHARP_DECODER IR_MCE_KBD_DECODER IR_XMP_DECODER VIDEO_AU0828 VIDEO_CX231XX DVB_USB DVB_USB_A800 DVB_USB_DIBUSB_MB DVB_USB_DIBUSB_MC DVB_USB_DIB0700 DVB_USB_V2 DVB_USB_AF9015 DVB_USB_AF9035 DVB_USB_AZ6007 DVB_USB_MXL111SF DVB_USB_RTL28XXU DVB_USB_DVBSKY SMS_USB_DRV IR_LIRC_CODEC VIDEO_CX231XX_ALSA VIDEO_CX231XX_DVB"
disable="$disable MEDIA_CEC_DEBUG MEDIA_CONTROLLER_DVB DVB_DEMUX_SECTION_LOSS_LOG RC_DEVICES VIDEO_PVRUSB2 VIDEO_HDPVR VIDEO_USBVISION VIDEO_STK1160_COMMON VIDEO_GO7007 VIDEO_TM6000 DVB_USB_DEBUG DVB_USB_UMT_010 DVB_USB_CXUSB DVB_USB_M920X DVB_USB_DIGITV DVB_USB_VP7045 DVB_USB_VP702X DVB_USB_GP8PSK DVB_USB_NOVA_T_USB2 DVB_USB_TTUSB2 DVB_USB_DTT200U DVB_USB_OPERA1 DVB_USB_AF9005 DVB_USB_PCTV452E DVB_USB_DW2102 DVB_USB_CINERGY_T2 DVB_USB_DTV5100 DVB_USB_FRIIO DVB_USB_AZ6027 DVB_USB_TECHNISAT_USB2 DVB_USB_ANYSEE DVB_USB_AU6610 DVB_USB_CE6230 DVB_USB_EC168 DVB_USB_GL861 DVB_USB_LME2510 DVB_USB_ZD1301 DVB_TTUSB_BUDGET DVB_TTUSB_DEC DVB_B2C2_FLEXCOP_USB DVB_AS102 USB_AIRSPY USB_HACKRF USB_MSI2500 DVB_PLATFORM_DRIVERS SMS_SDIO_DRV RADIO_ADAPTERS VIDEO_IR_I2C DVB_USB_DIBUSB_MB_FAULTY"

# GSPCA driver
module="$module USB_GSPCA USB_GSPCA_BENQ USB_GSPCA_CONEX USB_GSPCA_CPIA1 USB_GSPCA_DTCS033 USB_GSPCA_ETOMS USB_GSPCA_FINEPIX USB_GSPCA_JEILINJ USB_GSPCA_JL2005BCD USB_GSPCA_KINECT USB_GSPCA_KONICA USB_GSPCA_MARS USB_GSPCA_MR97310A USB_GSPCA_NW80X USB_GSPCA_OV519 USB_GSPCA_OV534 USB_GSPCA_OV534_9 USB_GSPCA_PAC207 USB_GSPCA_PAC7302 USB_GSPCA_PAC7311 USB_GSPCA_SE401 USB_GSPCA_SN9C2028 USB_GSPCA_SN9C20X USB_GSPCA_SONIXB USB_GSPCA_SONIXJ USB_GSPCA_SPCA500 USB_GSPCA_SPCA501 USB_GSPCA_SPCA505 USB_GSPCA_SPCA506 USB_GSPCA_SPCA508 USB_GSPCA_SPCA561 USB_GSPCA_SPCA1528 USB_GSPCA_SQ905 USB_GSPCA_SQ905C USB_GSPCA_SQ930X USB_GSPCA_STK014 USB_GSPCA_STK1135 USB_GSPCA_STV0680 USB_GSPCA_SUNPLUS USB_GSPCA_T613 USB_GSPCA_TOPRO USB_GSPCA_TOUPTEK USB_GSPCA_TV8532 USB_GSPCA_VC032X USB_GSPCA_VICAM USB_GSPCA_XIRLINK_CIT USB_GSPCA_ZC3XX"

# Hack needed to build Device Tree for RPi3 on arm 32-bits
ln -sf ../../../arm64/boot/dts/broadcom/bcm2837-rpi-3-b.dts arch/arm/boot/dts/bcm2837-rpi-3-b.dts
ln -sf ../../../arm64/boot/dts/broadcom/bcm2837.dtsi arch/arm/boot/dts/bcm2837.dtsi
git checkout arch/arm/boot/dts/Makefile
sed -i "s,bcm2835-rpi-zero.dtb,bcm2835-rpi-zero.dtb bcm2837-rpi-3-b.dtb," arch/arm/boot/dts/Makefile

# Sets enable/modules/disable configuration
for i in $enable; do ./scripts/config --enable $i; done
for i in $module; do ./scripts/config --module $i; done
for i in $disable; do ./scripts/config --disable $i; done

# Sets the max number of DVB adapters
./scripts/config --set-val DVB_MAX_ADAPTERS 16

# Use rpi_make script to build the Kernel
rpi_make

Please note, the script had to disable the VC4 DRM driver and use the simplefb driver instead. This is because the firmware is currently not compatible with both the bcm2835-v4l2 driver and the vc4 driver at the same time. Also, as I wanted the ability to test a bunch of V4L2 and DVB hardware on it, I enabled several media drivers.

In order to build the kernel, I simply call:

    ./build 

Install the New Kernel on the Raspberry Pi3

I used an RPi3 with a NOOBS distribution pre-installed on its micro-SD card. I used the normal procedure to install Raspbian on it, but any other distribution should do the job. To make it easy to copy the kernel to it, I also connected it to my local network via WiFi or Ethernet. Please note, I’ve not yet been able to get the WiFi driver to work with the upstream kernel, so if you want remote access after running the upstream kernel, you should use the Ethernet port.

Once the RPi3 is booted, set it up to run an SSH server; this can be done by clicking on the Raspberry Pi icon in the top menu, and selecting the Raspberry Pi Configuration application from the Preferences menu.  Switch to the Interfaces tab, select SSH, and click the button.

Then, from the directory where you compiled the kernel on your local machine, you should run:

$ export ROOTDIR=/devel/arm_rootdir/
$ scp -r $ROOTDIR/install pi@raspberrypi:
pi@raspberrypi's password: [type the pi password here - default is "pi"]
System.map-4.11.0-rc1+                        100% 3605KB  10.9MB/s   00:00    
vmlinuz-4.11.0-rc1+                           100% 7102KB  11.0MB/s   00:00    
bcm2835-rpi-b-plus.dtb                        100%   11KB   3.6MB/s   00:00    
config-4.11.0-rc1+                            100%  174KB   9.7MB/s   00:00    
bcm2835-rpi-b-rev2.dtb                        100%   10KB   3.9MB/s   00:00    
bcm2837-rpi-3-b.dtb                           100%   10KB   3.7MB/s   00:00    
bcm2835-rpi-a.dtb                             100%   10KB   3.8MB/s   00:00    
bcm2836-rpi-2-b.dtb                           100%   11KB   3.8MB/s   00:00    
drivers.tgz                                   100% 5339KB  11.0MB/s   00:00    
bcm2835-rpi-zero.dtb                          100%   10KB   3.6MB/s   00:00    
bcm2835-rpi-b.dtb                             100%   10KB   3.7MB/s   00:00    
bcm2835-rpi-a-plus.dtb                        100%   10KB   3.7MB/s   00:00    
$ ssh pi@raspberrypi
pi@raspberrypi's password: [type the pi password here - default is "pi"]

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Wed Mar 15 14:47:13 2017

pi@raspberrypi:~ $ sudo su -
root@raspberrypi:~# mkdir /boot/upstream
root@raspberrypi:~# cd /boot/upstream/
root@raspberrypi:/boot/upstream# cp -r ~pi/install/* .
root@raspberrypi:/boot/upstream# cd /lib
root@raspberrypi:/lib# tar xf /boot/upstream/drivers.tgz 

Another alternative is to set up the SSH server to accept root logins and copy the ssh keys to it, with:

ssh-copy-id root@raspberrypi

If you opt for this method, you could call this small script after building the kernel:

	echo "Installing new drivers"
	scp install/drivers.tgz raspberrypi:/tmp
	ssh root@raspberrypi "(cd /lib; tar xf /tmp/drivers.tgz)"
	scp install/*dtb install/*-$VER root@raspberrypi:/boot/upstream
	ssh root@raspberrypi "reboot"

This logic is included in the rpi_make script. So, you can call:

    rpi_make rpi_install

Adjust the RPi Config to Boot the New Kernel

Changing the RPi to boot the new kernel is simple. All you need to do is to edit the /boot/config.txt file and add the following lines:

# Upstream Kernel
kernel=upstream/vmlinuz-4.11.0-rc1+
device_tree=upstream/bcm2837-rpi-3-b.dtb

# Settings for the bcm2835-v4l2 driver to work
start_x=1
gpu_mem=128

# Set to use Serial console
enable_uart=1

Use Serial Console on RPi3

Raspberry Pi3 has a serial console via its GPIO connector. If you want to use it, you’ll need to have a serial to USB converter capable of working with 3.3V. For it to work, you need to wire the following pins:

Pin 1 – 3.3V power reference
Pin 6 – Ground
Pin 8 – UART TX
Pin 10 – UART RX

This image shows the RPi pinouts. Once wired up, it will look like the following image.

[Image: Raspberry Pi 3 with camera module v2]

You should also change the Kernel command line to use ttyS0, e. g. setting /boot/cmdline.txt to:

dwc_otg.lpm_enable=0 console=ttyS0,115200 console=tty1 root=/dev/mmcblk0p7 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait

Use the bcm2835-v4l2 Driver

The bcm2835-v4l2 driver should be compiled as a module that’s not loaded automatically. So, to use it run the following command.

modprobe bcm2835-v4l2

You can then use your favorite V4L2 application. I tested it with qv4l2, camorama and cheese.

Current Issues

I found some problems with upstream drivers on the Raspberry Pi 3 with kernel 4.11-rc1 + staging (as of March 16):

  • The WiFi driver didn’t work, despite the brcmfmac driver being compiled.
  • The UVC driver doesn’t work properly: after a few milliseconds it disconnects from the hardware.

That’s it. Enjoy!

The OSG Gears Up for Korea

Considering that our primary headquarters is in South Korea, it only makes sense that open source conferences in Seoul are a bit of a big deal to us. Next week we have two major conferences there: the Korea Linux Forum and the Samsung Open Source Conference (SOSCON). We are pulling out all the stops for these conferences and are sending most of our team for three days of technical discussions and networking. If you are going to be at either of these events next week, keep an eye out for our team.

We have quite a few people who will be giving presentations on both technical and non technical subjects, so here’s a preview of what we’ll be talking about.

Korea Linux Forum

You can find the full event schedule here.

Why is Open Source R&D Important and What are We Doing About it? – Ibrahim Haddad (opening keynote)

Ibrahim Haddad, the head of this group, will cover why collaborative, open source research and development are important to modern tech companies. He will use the work of the Samsung Open Source Group as an example of how to adapt to new methods of collaboration that are faster and more agile.

Chromium Contributing Explained – Adenilson Cavalcanti

Making your first contribution to a project as complex and huge as Chromium can be daunting for a beginner. This talk will explain the contribution workflow of Chromium and its web engine, Blink. It will also cover the principles of good contributions and will provide tips and best practices for interacting with the Chromium community. It will address many subjects like cultural differences, writing commits logs, and more.

Media Resource Sharing Through the Media Controller API – Shuah Khan

In this talk, Shuah will discuss the Managed Media Controller API and how ALSA and au0828 use it to share media resources. In addition, she will show how the media-ctl tool can be used to generate media graphs for a media device.

More Bang For Your Buck: How to Work with an Open Source Foundation – Brian Warner

One of the more encouraging trends in the software industry is the growing use of open source foundations, where a group of like-minded organizations work together to collaboratively build software. This talk will offer insight into what your organization could be doing to ensure a happy and healthy relationship with an open source foundation.

ARM64 KVM Weather Report – Mario Smarduch

ARM-KVM is maturing rapidly; over the past year, many new features have been implemented for ARM64’s virtualization extensions. This presentation will bring you up to date on many of the most important recent improvements.

A Survivor’s Guide to Contributing to the Linux Kernel – Javier Martinez Canillas

The Linux Kernel is the largest collaborative software development project in the world with a new patch being merged every few minutes. This presentation will discuss lessons learned from contributing to different subsystems that could improve interactions between individual contributors and the community.

SOSCON

The full schedule for SOSCON can be found here.

GStreamer and DRM – Thiago Sousa Santos & Luis De Bethencourt

GStreamer is an open source library for building multimedia applications, and it’s based around a pipeline model where each element processes the data returned by the previous one. In order to protect copyrighted content and support DRM (Digital Rights Management) the pipeline needs to handle encrypted content and be able to output the encrypted content to both encrypted and decrypted destinations in the pipeline. This talk will cover how this modular framework can get around the limitation of needing to render and protect encrypted content.

FFmpeg: A Retrospective – Reynaldo Verdejo

The FFmpeg project, one of the enablers of what we now know as FOSS multimedia and used by countless high-level playback and processing applications, is as complex as the community that drives it. In this talk, both aspects are presented from the point of view of a long-time contributor: how its major parts fit together, what is available, what is missing, how its community works together, and what major conflicts and achievements have shaped it. The talk goes through these topics using a relaxed yet historical approach, geared towards anyone interested in FOSS multimedia, regardless of experience.

Wayland, Is it Ready Yet? – Derek Foreman

Wayland has been championed by some as a replacement for X, but only recently is it finally starting to make inroads on desktops. This talk explains what Wayland is, why it’s a worthy successor to X, and some barriers to its widespread adoption. Missing functionality will be covered as well as areas where Wayland based compositors can already exceed X’s capabilities. Current upstream developments in Weston (the Wayland reference compositor) such as atomic mode setting and dmabuf support, will also be presented.

EFL: Designing a Vector Rendering API for User Interfaces that Scale – Cedric Bail

The Enlightenment Foundation Library (EFL) is a set of libraries designed to build modern Linux UI’s. Historically EFL has focused on raster graphics to provide a fast and efficient rendering pipeline, but the project recently added a Vector API to meet the demands of both developers and users. This talk will cover the existing tools and formats that build the EFL stack to provide an understanding of what is available today and the design decisions that led to the creation of this Vector API. From this understanding, the talk will individually explain the highest layers of the EFL stack in depth from the internal layer to the theme layer. This talk will be valuable to anyone who designs UIs that require vector graphics including application and toolkit developers.

6LoWPAN: An Open IoT Networking Protocol – Stefan Schmidt

With the increasing importance of the Internet of Things (IoT), suitable networking protocols are finally getting their needed attention. For some IoT scenarios a normal TCP/IP networking stack might be perfectly fine, but for small, battery powered devices with limited wireless functionality it might be too much overhead. IPv6 over Low power Wireless Personal Area Networks (6LoWPAN, RFC 4944) was specified to fill this gap, defining an IPv6 adaptation layer and various compression techniques that allow IPv6 networking even on tiny IoT devices. While 6LoWPAN started out as an adaptation layer for IEEE 802.15.4 based networks, it is now also used in Bluetooth LE, and work is ongoing to adopt it for other technologies like NFC, DECT/ULE, power-line, etc. This talk will cover the 6LoWPAN protocol as described in the IETF RFCs, as well as its current implementation status inside the linux-wpan project and interoperability with operating systems like Contiki and RIOT.

The Internet of Smaller Things with IoTivity – Jon Cruz

The IoTivity project seeks to improve the Internet of Things by providing a portable, scalable, open source code base for developers and manufacturers to use. The current implementation targets multiple OSes including Linux, Tizen, OS X, Android, and even Arduino, and a common implementation based on open standards will be a big win for many.
This talk will cover what it took to get the project running well on constrained devices, including the Raspberry Pi 2 and Arduino. It will also cover some hints on setting up and getting started, along with details about many of the architectural and implementation decisions needed for the project to be successful.