Web, Internet of Things and GDPR

On 25 May 2018, the European General Data Protection Regulation (GDPR) came into force across the European Union (EU) and the European Economic Area (EEA). This new European regulation on privacy has far-reaching consequences for how personal data is collected, stored and used in the Internet of Things (IoT) world and across the web. For IoT and web developers, it has a significant impact on how we think about the technologies we deploy and the services we build. One promise of the open web platform is enhanced privacy in comparison to other technology stacks. Can some of the inherent privacy issues in currently deployed IoT architectures be mitigated by the use of web technology? We think they can.

In this blog post we will look at the Web of Things (WoT) technologies we have been involved in. By walking through the options, from local device discovery to a gateway framework that enables remote access, we hope to give you a picture of a loosely coupled IoT solution built on Web of Things technologies. At each stage, we will discuss the GDPR compliance of these technologies.

GDPR and the Internet of Things

The 88-page GDPR document contains 99 articles on the rights of individuals and the obligations placed on organizations. These new requirements represent a significant change in how data is handled across businesses and industries. When it comes to the Internet of Things, GDPR compliance is particularly challenging. Concerns about the risks the Internet of Things poses to data protection and personal privacy have been raised for several years. A study from the ICO in 2016 stated that “Six in ten Internet of Things devices don’t properly tell customers how their personal information is being used”. The good news is that industry and governments are aware of the problem and are taking action to change this.

With the Internet of Things growing bigger and smarter, many new devices and solutions will be introduced in the coming years. When looking into technical solutions and innovation, we need to keep “GDPR awareness” in mind, especially the “data protection by design” that GDPR advocates. “Data protection by design”, previously known as “privacy by design”, encourages businesses to implement technical and organizational measures that safeguard privacy and data protection principles throughout the full life cycle, right from the start.

The Web of Things

Interoperability is one of the major challenges in the Internet of Things. The Web of Things addresses this challenge by providing a universal application layer protocol for Things to talk to each other, regardless of the physical and transport layer protocols used. Rather than reinventing the wheel, the Web of Things reuses existing and well-known web standards and technologies. It addresses Things via URLs and follows standard APIs. Using the web as the framework makes Things discoverable and linkable, and gives web developers an opportunity to let people interact via a wide range of interfaces: screens, voice, gesture, augmented reality… even without an Internet connection!

The Web of Things has been referred to as the application layer of the Internet of Things on many occasions. Bear in mind, though, that the scope of IoT applications is broader and includes systems that are not necessarily accessible through the web. We prefer to think of the Web of Things as an optional application layer added on top of the network layer of the traditional Internet of Things architecture.

The Web of Things is growing fast in both standardization and implementation. The W3C has launched the Web of Things Working Group to take a leading role in addressing the fragmentation of the IoT through standards. Early this year Mozilla announced the “Project Things” framework – an open framework of software and services that bridges the gap between devices via the web. “Project Things” is a good reflection of Mozilla’s involvement in creating an open standard with the W3C around the Web of Things, along with practical implementations that provide open interfaces to devices from different vendors.

For developers, the fast growth of the Web of Things has opened up a lot of opportunities when searching for Internet of Things solutions. Let’s walk through an example to see what this means.

Imagine that you are a homeowner who has just bought a new washing machine. To integrate the new device into your smart home and be able to control or access it, here are some of the solutions that the Web of Things can offer:

  • Discover, pair and connect new devices around you via the Physical Web and Web Bluetooth.
  • Control devices via a Progressive Web Application.
  • Communicate with your device via an end-to-end encrypted channel provided by the Mozilla Web of Things Gateway, locally and remotely.

So what are these technologies about? How do they address privacy and security concerns? Let’s walk through them in a bit more detail.

Physical Web and Web Bluetooth

The Physical Web is a discovery service based on Bluetooth Low Energy (BLE) technology. In the Physical Web concept, any object or location can be a source of content, addressed using a URL. The idea is that smart objects broadcast relevant URLs to nearby devices by way of a BLE beacon.
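To give a feel for what a beacon actually broadcasts, here is a simplified sketch of Eddystone-URL frame encoding, the compact format the Physical Web uses to squeeze a URL into a single BLE advertisement. It only handles the scheme prefixes and the “.com” expansion code from the spec, and the tx power value is arbitrary; a real encoder implements the full expansion table.

```javascript
// Simplified Eddystone-URL frame encoder (scheme prefixes + ".com" only).
// Order matters: the "www." variants must be matched first.
const SCHEMES = [
  ['https://www.', 0x01],
  ['http://www.', 0x00],
  ['https://', 0x03],
  ['http://', 0x02],
];

function encodeEddystoneUrl(url, txPower = 0xeb) {
  const frame = [0x10, txPower]; // 0x10 = Eddystone-URL frame type
  let rest = url;
  for (const [prefix, code] of SCHEMES) {
    if (rest.startsWith(prefix)) {
      frame.push(code); // the whole scheme collapses to one byte
      rest = rest.slice(prefix.length);
      break;
    }
  }
  // ".com/" and ".com" have single-byte expansion codes in the spec
  rest = rest.replace('.com/', '\x00').replace('.com', '\x07');
  for (const ch of rest) {
    frame.push(ch.charCodeAt(0));
  }
  if (frame.length > 20) {
    throw new Error('Encoded URL too long for a single BLE advertisement');
  }
  return frame;
}

// frame type, tx power, scheme code 0x03, "example", expansion 0x07
console.log(encodeEddystoneUrl('https://example.com'));
```

The tight 20-byte budget is exactly why the Physical Web encourages short URLs (or URL shorteners) for beacons.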

The Physical Web has been brought into Samsung’s web browser, Samsung Internet, as an extension called CloseBy. When the browser handles the URL information received on the phone, “no personal information is included and our servers do not collect any information about users at all,” stated my colleague Peter O’Shaughnessy in his article on “Bringing the real world to your browser with CloseBy”.

Web Bluetooth is another technology based on Bluetooth Low Energy. Alongside efforts like the Physical Web, it provides a set of APIs to connect to and control BLE devices directly from the web. With this web API, developers are able to build one solution that works across all platforms. Although the API is still under development, there are already a few very cool projects and applications around. My colleague Peter has produced a Web Bluetooth Parrot Drone demo to give you a sense of controlling a physical device using a web browser. And I promise you that playing Jo’s Hedgehog Curling game is simply light and fun!

The Web Bluetooth Community Group aims to provide APIs that allow websites to communicate with devices in a secure and privacy-preserving way. Although some security features are already in place (for example, the site must be served over a secure connection (HTTPS), and discovering Bluetooth devices must be triggered by a user gesture), there are still lots of security implications to consider, as described in the Web Bluetooth security model by Jeffrey Yasskin.

Samsung Internet has had Web Bluetooth enabled for developers by default since the v6.4 stable release. With the Physical Web and Web Bluetooth available, it is possible to have device on-boarding “just a tap away”.

Progressive Web Application

Progressive Web Apps (PWAs) are websites that deliver native app-like user experiences. They address issues in native mobile applications and websites with new design concepts and new Web APIs. Some key features of PWAs include:

  • “Add to Home Screen” prompts
  • Offline functionality
  • Fast loading from cache
  • (Optionally) web push notifications

These features are achieved by deploying a collection of technologies. Service workers are the core of Progressive Web Apps; they work in the background to power offline functionality, push notifications and other striking features. With the app shell concept, PWAs achieve fast loading times by caching the basic shell of the application UI while still loading fresh content when possible. The native install banner installs the mobile website to the home screen with Web App Manifest support.
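As an example of that last point, a minimal Web App Manifest is just a small JSON file telling the browser how the site should look and behave once installed to the home screen; all values below are made up for illustration:

```json
{
  "name": "Smart Home Dashboard",
  "short_name": "SmartHome",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#2196f3",
  "icons": [
    { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}
```

The browser reads this file (linked from the page with `<link rel="manifest">`) to decide when to show the install prompt and what to display on the splash screen.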

In PWAs, you can only register service workers on pages served over HTTPS. Since service workers process all network requests from the web app in the background, this is essential to prevent man-in-the-middle attacks. PWAs can also work offline, as mentioned earlier. From a privacy perspective, this potentially offers us the possibility to:

  • Minimize the collection, storage and use of user data as much as possible.
  • Know where the data resides.

Since the term “Progressive Web Apps”  was first coined by Google in 2015,  we have seen big brands such as FT, Forbes and Twitter switching to PWAs in the last few years. Our Samsung Internet Developer Advocacy team has been actively contributing to and promoting this new technology [1] [2] [3] [4] [5], and even designed a community-approved Progressive Web Apps logo! Want to have hands-on experiences of PWAs? Check our demos on Podle, Snapwat, Airhorn VR, PWAcman and Biscuit Tin-der.

Mozilla “Project Things”

Mozilla “Project Things” aims at “building a decentralized ‘Internet of Things’ that is focused on security, privacy, and interoperability”, as stated by the company. The structure of the framework is shown below –

  • Things Cloud. It provides a collection of IoT cloud services, including support for setup, backup, updates, third-party application and service integration, and remote encrypted tunneling.
  • Things Gateway. Generally speaking, the Things Gateway is the IoT connectivity hub for your IoT network.
  • Things Controllers. Smart devices, such as smart speakers, tablets/phones, AR headsets, etc., used to control the connected Things.
  • Things Device Framework. It consists of a collection of sensors and actuators, or “Things” in the context of the Web of Things.

“Project Things” has introduced an add-on system, loosely modeled after the add-on system in Firefox, to allow for the addition of new features or devices, such as adapters, to the Things Gateway. My colleague Phil Coval has posted a blog explaining how to get started and establish basic automation using I2C sensors and actuators on a gateway’s device.

The framework has provided security and privacy solutions such as:

  • Secure remote access is achieved using HTTPS via encrypted tunneling. Basically, “Project Things” provides a TLS tunneling service via its registration server to allow people to easily set up a secure subdomain during first-time setup. An SSL certificate is generated via Let’s Encrypt and a secure tunnel from a Mozilla cloud server to the gateway is set up using PageKite.
  • The Things Gateway provides a system for safely authorizing third-party applications using the de facto authorization standard OAuth 2.0. When a third-party application needs to access or control another person’s Things, it always requires consent from the Things’ owner. The owner can decide the scope of the access token granted to the third-party application, and also has options to delete or revoke the tokens assigned to it. Details on this are discussed in our recent blog post “An End-to-End Web IoT Demo Using Mozilla Gateway” and talk “The Complex IoT Equation”.
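To make that flow a little more concrete, here is a hedged sketch of how a third-party app might build the authorization request it sends the gateway. The `/oauth/authorize` path and the `/things:read` scope string follow the Things Gateway convention; the client id, redirect URI and state value are invented for illustration.

```javascript
// Sketch: constructing an OAuth 2.0 authorization-code request URL
// for a Things Gateway. All concrete values below are placeholders.
function buildAuthorizeUrl(gatewayBase, { clientId, redirectUri, scope, state }) {
  const params = new URLSearchParams({
    response_type: 'code',   // authorization-code grant
    client_id: clientId,
    redirect_uri: redirectUri,
    scope,                   // e.g. '/things:read' or '/things:readwrite'
    state,                   // anti-CSRF token, echoed back on redirect
  });
  return `${gatewayBase}/oauth/authorize?${params.toString()}`;
}

const authorizeUrl = buildAuthorizeUrl('https://sosg.mozilla-iot.org', {
  clientId: 'holiday-app',
  redirectUri: 'http://localhost:3000/callback',
  scope: '/things:read',
  state: 'xyz123',
});
console.log(authorizeUrl);
```

The gateway shows this request to the owner, who can approve it with a narrower scope than the one requested.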

Future work

GDPR challenges us all to re-prioritize digital privacy and to reconsider how and when we need to collect, manage and store people’s data. This is an opportunity for the Web of Things. As the technology develops, some of the security and privacy issues have been or are being addressed. Still, building the Web of Things has various challenges ahead. For developers, using the right technologies is a way forward to make the Internet of Things a better place. Join us on this exciting journey!

 

An End-to-End Web IoT Demo Using Mozilla Gateway

Imagine that you are on your way to a holiday home you have booked. The weather is changing and you might start wondering about the temperature settings in the holiday home. A question might pop up in your mind: “Can I change the holiday home settings to my own preference before arriving?”

Today we are going to show you an end-to-end demo we have created that allows a holiday maker to remotely control sensor devices, or Things in the context of the Internet of Things, at a holiday home. The private smart holiday home is built with the exciting Mozilla Things Gateway, an open gateway that anybody can now create with a Raspberry Pi to control Internet of Things devices. For holiday makers, we provide a simple Node.js holiday application to access Things via the Mozilla Things Gateway. Privacy is addressed by introducing the concepts of Things ownership and Things usership, together with the authorization workflow.

The Private Smart Holiday Home

The private smart holiday home is the home for Gateway and Things –

Things Gateway

One of the major challenges in the Internet of Things is interoperability. Getting different devices to work nicely with each other can be painful in a smart home environment. The Mozilla Things Gateway addresses this challenge and provides a platform that bridges existing off-the-shelf smart home devices to the web by providing them with web URLs and a standardized data model and API [1]. Implementations of the Things Gateway follow the proposed Web of Things standard and are open sourced under the Mozilla Public License 2.0.
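As an illustration of that data model, a Thing describes itself to the web with a JSON document roughly like the one below. The shape follows the proposed Web Thing Description format; the names and paths are invented for this example.

```json
{
  "name": "Holiday Home Lamp",
  "type": "onOffLight",
  "description": "A web connected lamp in the holiday home",
  "properties": {
    "on": {
      "type": "boolean",
      "description": "Whether the lamp is switched on",
      "href": "/things/lamp/properties/on"
    }
  }
}
```

Because every property has an `href`, controlling the lamp is just an HTTP request to that URL, which is what makes Things linkable and scriptable like any other web resource.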

In this demo, we chose the Raspberry Pi 3 as the physical board for the Gateway. The Raspberry Pi 3 is well supported by the Gateway community and has been a brilliant initial choice for experimenting with the platform. It is worth mentioning that Mozilla Project Things is not tied to the Raspberry Pi; the team is looking at supporting a wide range of hardware.

The setup of the Gateway is pretty straightforward. We chose to use the tunneling service provided by Mozilla, creating a sub-domain of mozilla-iot.org: sosg.mozilla-iot.org. To try it yourself, we recommend going through the README file at the Gateway GitHub repository. Also, a great step-by-step guide has been created by Ben Francis on “How to build a private smart home with a Raspberry Pi and Things Gateway”.

Things Add-ons

The Mozilla Things Gateway has introduced an add-on system, loosely modeled after the add-on system in Firefox, to allow for the addition of new features or devices, such as adapters, to the Things Gateway. The tutorial from James Hobin and Michael Stegeman on “Creating an Add-on for Project Things Gateway” is a good place to grasp the concepts of add-ons, adapters and devices, and to start creating your own add-ons. In our demo, we have introduced fan, lamp and thermostat add-ons, shown below, to support our own hardware devices.

Phil Coval has posted a blog explaining how to get started and how to establish basic automation using I2C sensors and actuators on the gateway’s device. It is the base for our add-ons.

Holiday Application

The holiday application is a small Node.js program combining the functionality of a client web server, an OAuth client and a browser user interface.

The application consists of two parts. The first is for the holiday maker to get authorization from the Gateway to access Things at the holiday home. Once authorized, the application moves to the second part: Things access and control.



The OAuth client implementation is based on simple-oauth2, a Node.js client library for OAuth 2.0. The library is open sourced under the Apache License, Version 2.0 and is available on GitHub.

The application code can be accessed here. The README file provides instructions for setting up the application.

Ownership and Usership of Things

So here we have it: the relationships among the Things owner, the Things user, the third-party application, the Gateway and the Things.


  • The holiday home owner is the Things Gateway user and has full control of the Things Gateway and the Things.
  • The holiday maker is a temporary user of the Things and has no access to the Gateway.
  • The holiday home owner uses the Gateway to authorize the holiday maker to access the Things, with scopes, via the gateway.
  • The holiday application accesses the Things through the Gateway.

User Authorization

The Things Gateway provides a system for safely authorizing third-party applications using the de-facto authorization standard OAuth 2.0. The work flow for our demo use case is shown in the diagram below –

The third party application user, the Holiday App User in this case, requests authorization to access the Gateway’s Web Thing API. The Gateway presents the request list to the Gateway User, the holiday home owner, as below –

With the holiday home owner’s approval, the Gateway responds with an authorization code. The holiday application then requests to exchange the authorization code for a JSON Web Token (JWT). The token has a scope that indicates what access was actually granted by the holiday home owner. Note that the granted token scope can only be a subset of the requested scope. With the JWT granted, the holiday application can access the Things the user has been granted access to.
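Once the token arrives, the application can inspect the granted scope, since a JWT payload is just base64url-encoded JSON. This is illustrative only: real code must verify the token’s signature before trusting any claim, and the token below is fabricated for the example.

```javascript
// Decode (NOT verify!) a JWT payload to inspect the granted scope.
function decodeJwtPayload(token) {
  // normalize base64url to base64, then parse the JSON payload
  const b64 = token.split('.')[1].replace(/-/g, '+').replace(/_/g, '/');
  return JSON.parse(Buffer.from(b64, 'base64').toString('utf8'));
}

// Build a fake token whose payload claims a read-only scope
const header = Buffer.from(JSON.stringify({ alg: 'ES256', typ: 'JWT' })).toString('base64');
const payload = Buffer.from(JSON.stringify({ scope: '/things:read' })).toString('base64');
const token = header + '.' + payload + '.fake-signature';

console.log(decodeJwtPayload(token).scope); // → /things:read
```

A third-party app can use this to discover, for example, that it was granted read-only access even though it requested read/write.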

Demo Video

We also created a demo video that ties together the different parts discussed above; it is available at https://vimeo.com/271272094.

What’s Next?

The Mozilla Gateway is a work in progress and has not yet reached the production-use stage. There are a lot of exciting developments happening. Why not get involved?

Connecting sensors to Mozilla’s IoT Gateway

Here is a first post about Mozilla’s IoT effort, and specifically the gateway project, which illustrates the “Web of Things” concept to create a decentralized Internet of Things using web technologies.

Today we will focus on the gateway, as it is the core component of the whole framework. Version 0.4.0 was just released, so you can try it on your own Raspberry Pi 3. The Raspberry Pi 3 is the reference platform, but it should be possible to port the gateway to other single board computers (like ARTIK, etc.).

The post will explain how to get started and how to establish basic automation using I2C sensors and actuators on gateway’s device (without any cloud connectivity).

To get started, first install the gateway according to these straightforward instructions:

Prepare SD Card

You need to download the Raspbian-based gateway-0.4.0.img.zip (1GB archive) and dump it to an SD card (2.6GB minimum).

lsblk # Identify your sdcard adapter ie:
disk=/dev/disk/by-id/usb-Generic-TODO
url=https://github.com/mozilla-iot/gateway/releases/download/0.4.0/gateway-0.4.0.img.zip
wget -O- $url | funzip | sudo dd of=$disk bs=8M oflag=dsync

If you only want to use the gateway and not hack on it, you can skip this next part, which enables a developer shell through SSH. However, if you do want access to a developer shell, mount the 1st partition called “boot” (you may need to replug your SD card adapter) and add a file to enable SSH:

sudo touch /media/$USER/boot/ssh
sudo umount /media/$USER/*

First boot

Next, install the SD card in your Raspberry Pi 3 (older RPis could work too, particularly if you have a WiFi adapter).

When it has completed the first boot, you can check that the Avahi daemon is registering “gateway.local” using mDNS (multicast DNS):

ping gateway.local
ssh pi@gateway.local # Raspbian default password for pi user is "raspberry"

Let’s also track local changes to /etc by installing etckeeper, and change the default password.

sudo apt-get install etckeeper
sudo passwd pi

Logging in

You should now be able to access the web server, which is running on port 8080 (earlier versions used port 80):

http://gateway.local:8080/

It will redirect you to a page to configure wifi:

URL: http://gateway.local:8080/
Welcome
Connect to a WiFi network?
FreeWifi_secure
FreeWifi
OpenBar
...
(skip)

We can skip it for now:

URL: http://gateway.local:8080/connecting
WiFi setup skipped
The gateway is now being started. Navigate to gateway.local in your web browser while connected to same network as the gateway to continue setup.
Skip

After a short delay, the user should be able to reconnect to the entry page:

http://gateway.local:8080/

The gateway can be registered on mozilla.org for remote management, but we can skip this for now.

The administrator is now welcome to register new users:

URL: http://gateway.local:8080/signup/
Mozilla IoT
Welcome
Create your first user account:
user: user
email: user@localhost
password: password
password: password
Next

And we’re ready to use it:

URL: http://gateway.local:8080/things
Mozilla IoT
No devices yet. Click + to scan for available devices.
Things
Rules
Floorplan
Settings
Log out

Filling dashboard

You can start filling your dashboard with virtual resources.

First hit the “burger menu” icon, go to settings page, and then go to the addons page.

Here you can enable a “Virtual Things” adapter:

URL: http://gateway.local:8080/settings/addons/
virtual-things-adapter 0.1.4
Mozilla IoT Virtual Things Adapter
by Mozilla IoT

Once enabled, it should be listed alongside ThingURLAdapter on the adapters page:

URL: http://gateway.local:8080/settings/adapters
VirtualThingsAdapter
virtual-things
ThingURLAdapter
thing-url-adapter

You can then go back to the 1st Things page (it’s the first entry in the menu):

We can start adding “things” by pressing the bottom menu.

URL: http://gateway.local:8080/things
Virtual On/Off Color Light
Color Light
Save

Then press “Done” at bottom.

From this point, you can decide to control a virtual lamp from the UI, and even establish some basic rules (second entry in menu) with more virtual resources.

Sensing Reality

Because IoT is not about virtual worlds, let’s see how to deal with the physical world using sensors and actuators.

For sensors, there are many ways to connect them to computers, using analog or digital inputs on different buses. To make it easier for application developers, this can be abstracted using the W3C Generic Sensor API.
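To give a feel for that abstraction, here is a sketch of the Generic Sensor API shape (a constructor taking options, a start() method, a reading callback and an illuminance attribute) wrapped around a fake driver. The class name and the lux value are invented; a real implementation would poll an I2C driver instead of emitting a canned reading.

```javascript
// Sketch of the Generic Sensor API surface over a fake driver.
class FakeAmbientLightSensor {
  constructor({ frequency = 1 } = {}) {
    this.frequency = frequency; // readings per second, as in the spec
    this.illuminance = null;    // last reading, in lux
    this.onreading = null;      // callback fired on each new reading
  }

  start() {
    // A real driver would poll the I2C bus at `frequency` Hz;
    // here we just emit a single made-up reading.
    this.illuminance = 320;
    if (this.onreading) this.onreading();
  }
}

const sensor = new FakeAmbientLightSensor({ frequency: 2 });
sensor.onreading = () => console.log(sensor.illuminance + ' lux');
sensor.start(); // → 320 lux
```

The application code stays the same whichever bus or driver sits underneath, which is exactly the point of abstracting sensors this way.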

While working on IoT.js modules, I made a “generic-sensors-lite” module that abstracts a couple of I2C drivers from the NPM repository. To verify the concept, I have made an adapter for Mozilla’s IoT Gateway (which runs Node.js), so I published the generic-sensors-lite NPM module first.

Before using the mozilla-iot-generic-sensors-adapter, you need to enable the I2C bus on the gateway (version 0.4.0, master has I2C enabled by default).

sudo raspi-config
Raspberry Pi Software Configuration Tool (raspi-config)
5 Interfacing Options Configure connections to peripherals
P5 I2C Enable/Disable automatic loading of I2C kernel module
Would you like the ARM I2C interface to be enabled?
Yes
The ARM I2C interface is enabled
ls -l /dev/i2c-1
lsmod | grep i2c
i2c_dev 16384 0
i2c_bcm2835 16384 0

Of course you’ll need at least one real sensor attached to the I2C pin of the board.  Today only 2 modules are supported:

You can double check that the addresses are present on the I2C bus:

sudo apt-get install i2c-tools
/usr/sbin/i2cdetect -y 1
0 1 2 3 4 5 6 7 8 9 a b c d e f
00: -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- 23 -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: -- -- -- -- -- -- -- 77

Install mozilla-iot-generic-sensors-adapter

Until the sensors adapter is officially supported by the Mozilla IoT gateway, you’ll need to install it on the device (and rebuild dependencies on the target) using:

url=https://github.com/rzr/mozilla-iot-generic-sensors-adapter
dir=~/.mozilla-iot/addons/generic-sensors-adapter
git clone --depth 1 -b 0.0.1 $url $dir
cd $dir
npm install

Restart gateway (or reboot)
sudo systemctl restart mozilla-iot-gateway.service
tail -F /home/pi/.mozilla-iot/log/run-app.log

Then the sensors addon can be enabled by pressing the “enable” button on the addons page:

URL: http://gateway.local:8080/settings/addons
generic-sensors-adapter 0.0.1
Generic Sensors for Mozilla IoT Gateway

It will appear on the adapters page too:

URL: https://gateway.local/settings/adapters
VirtualThingsAdapter
virtual-things
ThingURLAdapter
thing-url-adapter
GenericSensorsAdapter
generic-sensors-adapter

Now we can add those sensors as new things (Save and done buttons):

URL: http://gateway.local:8080/things
Ambient Light Sensor
Unknown device type
Save
Temperature Sensor
Unknown device type
Save

Then they will appear as:

  • http://gateway.local:8080/things/0 (for Ambient Light Sensor)
  • http://gateway.local:8080/things/1 (for Temperature Sensor)

To get values updated in the UI, the sensors need to be turned on first (try again if you find a bug, and file tickets; I will forward them to the drivers’ authors).

A GPIO adapter can be also used for actuators, as shown in this demo video.

If you have other sensors, check if the community has shared a JS driver, and please let me know about integrating new sensors drivers in generic-sensors-lite

An Introduction to IoT.js Architecture

IoT.js is a lightweight JavaScript platform for the Internet of Things. The platform keeps interoperable services at the forefront, and is designed to bring the success of Node.js to IoT devices like micro-controllers and other devices that are constrained to limited storage and only a few kilobytes of RAM. IoT.js is built on top of JerryScript: a lightweight JavaScript interpreter, and libtuv: an event driven (non-blocking I/O model) library. The project is open source under the Apache 2.0 license.

This article will introduce you to the architecture of IoT.js and the fundamentals of writing applications for it.

IoT.js Architecture

An Introduction to IoT.js Architecture - iotjs_arch

JerryScript – ECMAScript binding

JerryScript is the kernel of IoT.js on the ECMAScript binding side: an ultra-lightweight JavaScript engine written from scratch at Samsung. The name “Jerry” comes from the popular character in Tom and Jerry, who’s small, smart, and fast! Since the engine is an interpreter only, it might be more precise to say JerryScript is a JavaScript interpreter.

Optimizations for memory footprint and performance have always been the top priorities of the project. The tiny engine has a base RAM footprint of less than 64KB, and the binary fits in less than 200KB of ROM. Amazingly, it implements the full ECMAScript 5.1 standard, and work has been ongoing recently to introduce new ES6 features such as Promise, TypedArray, and more.

Like IoT.js, JerryScript was also released under the Apache 2.0 license. The community has experienced very rapid growth, especially in the last couple of years, and in 2016 the project was transferred to the JS Foundation. A recent JS Foundation press release mentioned how JerryScript was adopted in Fitbit’s latest product: Ionic.

JerryScript provides a good set of embedding APIs to compile and execute JavaScript programs, access JavaScript objects and their values, handle errors, manage the lifecycles of objects, and more. IoT.js uses these APIs to create the builtin modules and native handlers in the IoT.js native core.

libtuv – I/O Event binding

Asynchronous I/O and threading in IoT.js are handled with libtuv, a multi-platform tiny event library refactored from libuv (the asynchronous I/O library primarily developed for use with Node.js) to better serve IoT.js and embedded systems. Samsung launched this open source project under the Apache 2.0 license.

Libtuv’s features include loop, timer, poll, TCP & UDP, fs events, thread, worker, and more. The platforms this library supports include i686-linux, arm-linux, arm-nuttx and arm-mbed.

IoT Binding

In the IoT.js community, there have been discussions about binding to an existing IoT platform or specification, such as IoTivity, the Open Connectivity Foundation’s open source project. If this were to happen, it would certainly add another dimension to supporting interoperability with other platforms.

IoT.js C Core

The IoT.js core layer is located above the binding layer, and it provides upper layer functionality to interact with the JavaScript engine, including running main event loops, managing I/O resources, etc. It also provides a set of builtin modules and native handlers.

Builtin modules are the basic and extended modules included in the IoT.js binary. Basically, these builtin modules are JavaScript objects implemented in C using the embedding API JerryScript provides, in JavaScript, or in both languages. The native components of builtin modules are implemented as native handlers to access underlying systems via event handling, C libraries, or system calls.

The life cycle of IoT.js is shown below:
 
An Introduction to IoT.js Architecture - iotjs_lifecycle-1

IoT.js ECMAScript API and JavaScript Modules

Like Node.js, IoT.js is a module-based system. Each module in IoT.js has its own context and provides a set of API’s associated with the module’s functionality.

IoT.js offers basic API modules and extended API modules. The basic API modules are based on Node.js and follow the same form for compatibility reasons. They include File System, Net, HTTP, Process, etc. Pretty much all application code that calls these APIs can run in a Node.js environment without any modification.

The extended modules, on the other hand, are more IoT.js-specific, and they are currently mostly hardware-related (e.g. GPIO, Bluetooth Low Energy (BLE), I2C, SPI, UART, PWM, etc.). Many contributors are interested in adding new extended API modules to support their own specific hardware, so to maintain consistent usability, the IoT.js community has set guidelines and rules for introducing extended APIs.

Enabling JavaScript on the Internet of Things

The overall architecture of IoT.js is very friendly to Node.js: with its asynchronous I/O and threading library and its subset of Node.js-compatible modules, it reflects the design philosophy of providing a lightweight version of Node.js along with an interoperable service platform. IoT.js has opened up a great opportunity for JavaScript developers to build applications for the Internet of Things, so it’s definitely an IoT platform to watch!

Join Us for the Tizen Developer Conference in San Francisco

The Tizen Developer Conference (TDC) is just around the corner; it will be held May 16 – 17 at the Hilton Union Square Hotel in San Francisco, CA. Our team contributes a ton of code to some of the critical open source software that makes up Tizen, so of course we’ll be spending some time there to network with app developers and device makers who work with Tizen.

What’s Happening with Tizen?

There have been quite a few exciting developments for Tizen over the last year; for starters, Samsung joined forces with Microsoft to bring .NET to Tizen, allowing developers to build applications for Tizen using C# and Visual Studio. Additionally, Tizen has continued to show up on a growing number of consumer devices including the Gear S3, Z2, Gear 360, AR9500M air conditioner, POWERbot VR7000, multiple smart TVs, and more. Finally, Tizen RT was released last year, making it easier than ever to run Tizen on low-end devices with constrained memory requirements. All of these changes create a great environment to see some interesting new things at this conference.

These developments have made Tizen more accessible than ever for app developers and device manufacturers; now is a great time to get more deeply involved in this ecosystem. Take a look at the conference sponsors page to get an idea of what companies will be at TDC to show off their latest work.

What to Expect at a Tizen Event

TDC is a technical conference for Tizen developers, app developers, open source enthusiasts, platform designers, IoT device manufacturers, hardware and software vendors, ISVs, operators, OEMs, and anyone engaged in the Tizen ecosystem. It offers an excellent opportunity to learn from leading Tizen experts and expand your understanding of mobile, IoT, wearable tech, and IVI systems engineering and application development; events like this are the best way to network with the top companies working within the Tizen ecosystem.

Not to mention, Tizen events often include the unveiling of new devices, so visitors get to be some of the first people in the world to see the next greatest things for Tizen. The real cherry on top is the giveaways that accompany some of these announcements. Winning a new device more than makes up for the cost of admission :-)

Last but not least, the most valuable component of these events is the technical sessions that provide practical knowledge to help build better devices and apps. There are four session tracks to choose from:

  • Device Ecosystem – Learn how to create new devices with Tizen.
  • Application Ecosystem – Learn how to build apps for Tizen with .NET, Xamarin.Forms, Visual Studio, and more.
  • Platform Core – Learn about some of the platform components that make Tizen great for various use cases.
  • Product Experience – Learn about some of the interesting Tizen features that are being used to make products better.

TDC will conclude with an IoT hands-on lab that will provide training for using Tizen RT and ARTIK devices to build IoT products. This requires separate registration, so make sure to bring your laptop and coding skills so you can take part in this wonderful opportunity to learn from experts.

Check out the entire conference schedule here.

What are You Waiting For?

Time is running out to plan your trip to this conference. Students can register for free (don’t forget your student ID when you come), and for everyone else, you can use the coupon code TIZSDP to get 50% off your registration fee. If you’ll be in the San Francisco area on May 16 and 17, don’t miss out on this great opportunity. Check out the registration page to learn more about attending this conference.

We hope to see you there!

Playback Synchronization & Video Walls with GStreamer

Hello again, and I hope you’re having a pleasant end of the year (if you are, you might want to consider avoiding the news until next year).

In a previous post, I wrote about synchronized playback with GStreamer, and work on this has continued apace. Since I last wrote about it, a bunch of work has been completed:

  • Added support for sending a playlist to clients (instead of a single URI),
  • Added the ability to start/stop playback,
  • Cleaned up the API considerably to improve the potential for it to be included upstream,
  • Turned the control protocol implementation into an interface to remove the necessity to use the built-in TCP server (different use-cases might want different transports),
  • Improved overall robustness of code and documentation,
  • Introduced an API for clients to send the server information about themselves, and finally
  • Added an API for the server to send video transformations for each specific client to apply before rendering.

While the other bits are exciting in their own right, in this post I’m going to talk about the last two items.

Video Walls

For those of you who aren’t familiar with the term, a video wall is an array of displays that are aligned to make a larger display; these are often used in public installations. One way to set up a video wall is to have each display connected to a small computer (such as the Raspberry Pi), and have them play a part of a video that’s cropped and scaled for the display it’s connected to.
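To make the cropping concrete, here is a small sketch of how the crop values for one tile of a wall might be computed. The 2x1 layout matches the two-monitor demo later in this post, but the arithmetic and the commented gst-launch pipeline are illustrative only, not the gst-sync-server API:

```shell
# Compute videocrop parameters for one tile of a COLS x ROWS video wall
# (bezel compensation omitted for simplicity).
WIDTH=1920; HEIGHT=800   # source video size
COLS=2; ROWS=1           # wall layout
col=1; row=0             # this client's tile (the right half)
TILE_W=$((WIDTH / COLS)); TILE_H=$((HEIGHT / ROWS))
CROP_LEFT=$((col * TILE_W))
CROP_RIGHT=$((WIDTH - (col + 1) * TILE_W))
CROP_TOP=$((row * TILE_H))
CROP_BOTTOM=$((HEIGHT - (row + 1) * TILE_H))
echo "left=$CROP_LEFT right=$CROP_RIGHT top=$CROP_TOP bottom=$CROP_BOTTOM"
# A standalone GStreamer pipeline using these values might look like:
# gst-launch-1.0 uridecodebin uri=file:///path/to/video.mp4 ! \
#   videocrop left=$CROP_LEFT right=$CROP_RIGHT top=$CROP_TOP bottom=$CROP_BOTTOM ! \
#   videoscale ! kmssink
```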

[Image: a video wall]

The tricky part, of course, is synchronization; this is where gst-sync-server comes in. Since it's already possible to play a given stream in sync across devices on a network, the only missing piece was the ability to distribute a set of per-client transformations that the clients could apply; support for this is now complete.

In order to keep things clean from an API perspective, I took the following approach:

  • Clients now have the ability to send a client ID and configuration (which is just a dictionary) when they first connect to the server.
  • The server API emits a signal with the client ID and configuration; this makes it possible to know when a client connects, what kind of display it's running, and where it's positioned.
  • The server now has additional fields to send a map of client IDs to a set of video transformations.

This allows us to do fancy things, like having each client manage its own information with the server while dynamically adapting the set of transformations based on what’s connected. Of course, the simpler case of having a static configuration on the server also works.

Demo

Since seeing is believing, here’s a demo of the synchronised playback in action:

The setup is my laptop, which has an Intel GPU, and my desktop, which has an NVidia GPU. These are connected to two monitors.

The video resolution is 1920×800, and I’ve adjusted the crop parameters to account for the bezels, so the video looks continuous. I’ve uploaded the text configuration if you’re curious about what that looks like.

As I mention in the video, the synchronization is not as tight as I would like it to be. This is most likely because of the differing device configurations. I've been working with Nicolas Dufresne to address this shortcoming by using some timing extensions the Wayland protocol allows; more news on this as it breaks. More generally, I've also worked to quantify the degree of sync, but I'm going to leave that for another day.

P.S. The reason I used kmssink in the demo is that it was the quickest way I know of to get a full-screen video going. I'm happy to hear about alternatives, though!

Future Work

Make it Real

I implemented my demo quite quickly by having the example server code use a static configuration. What I would like is a proper application that people can easily package and deploy on the embedded systems used in real video walls. If you're interested in taking this up, I'd be happy to help out. Bonus points if we can dynamically calculate transformations based on client configuration (position, display size, bezel size, etc.).

Hardware Acceleration

One thing that’s bothering me is that the video transformations are applied using GStreamer software elements. This works fine(ish) for the hardware I’m developing on, but in real life OpenGL(ES) or platform specific elements should be used to have hardware-accelerated trasnformations. My initial thoughts are for this to be either an API for playbin or a GstBin that takes a set of transformations as parameters and sets up the best method to do this internally based on whatever sink is available downstream (some sinks provide cropping and other transformations).

Why not Audio?

I’ve only written about video transformations here, but we can do the same with audio transformations too. For example, multi-room audio systems allow you to configure the locations of wireless speakers — so you can set which one is left, right, center, etc. — and the speaker will automatically play the appropriate channel. It should be quite easy to implement this with the infrastructure that’s currently in place.

Here’s to the New Year!

I hope you enjoyed reading this post; I've had great responses from a lot of people about how they might be able to use this work. If there's something you'd like to see, leave a comment or file an issue.

Happy end of the year, and all the best for 2017!

How to Boot Tizen on ARTIK

The fact that Tizen can run on ARTIK is not the latest breaking news, considering it was demonstrated live at the 2015 Tizen Conference, where the ARTIK project was also explained as an IoT platform of choice. Since then, ARTIK has become a new Tizen reference device, so here are a couple of hints that will help you test the upcoming Tizen release on this hardware. First, let me point out that Tizen's wiki has a special ARTIK category where you'll find ongoing documentation efforts; you'll want to bookmark this page.

In this article, I will provide a deeper explanation of how to use the bleeding-edge version of Tizen:3.0:Common on the ARTIK10, and how to start working on this platform. As explained in my previous Yocto/meta-artik article, I suggest you avoid using the eMMC for development purposes; for this article I will boot the ARTIK from an SD card. In the long term, I think the community could assemble Tizen infrastructure that automatically builds bootable SD card Tizen images to save time, but I'll detail the whole process here in the meantime.

The Short and Easy Path

For your convenience, you can download an image of the build I'll be describing here (md5=0d21e2716f67a16086fdd2724e7e11f1). It's a binary image of the latest Tizen weekly development version (plus the latest IoTivity build) prepared for the ARTIK10. All you have to do is flash it to the SD card, switch SW2/1 to on and SW2/2 to off, and set up the USB serial link to change a couple of U-Boot variables (set rootdev 1 ; set rootpart 3 ; boot) as explained later in this tutorial.

# lsblk
disk=/dev/sdTODO
file=tizen-common-artik_20160627.3_common-wayland-3parts-armv7l-artik10.qcow2
# time qemu-img convert -p "${file}" "${disk}"

If you want to take the long road, or you want to do this for the ARTIK5, follow the rest of this guide; it will explain how to build Tizen and IoTivity to run on the ARTIK.

Download Tizen

Development for the Tizen project is a continuous process; the latest version can be downloaded here.

In the images/arm-wayland/ sub-folder you'll need to download 2 archives. The first is the boot image, which is different depending on whether you're using the ARTIK5 or ARTIK10; it includes the kernel, its modules, and the bootloader. The main difference between the ARTIK5 and ARTIK10 images is that the U-Boot parameters are written at different media offsets.

I’m using the ARTIK10 for this guide, so I downloaded the tar.gz file found here. If you want to save bandwidth and time, it’s possible to grab a headless image archive tizen-common-artik_YYYYMMDD.*_common-headless-3parts-armv7l-artik.tar.gz.

If you own an ARTIK5, download the image from here; the ARTIK1 is totally different, and it's unlikely the same Tizen 3 codebase will ever be supported.

Prepare the SD Card

To write data to the SD card, I'll use the helpful sdboot script to handle the partitioning, formatting, and copying tasks that would otherwise be done manually using dd.

git clone https://github.com/tizenteam/sdboot -b master 

Insert the SD card into the host computer and identify the device node:

lsblk
disk=/dev/sdX # replace X with matching letter

Next, partition the SD card (8GB is the minimal requirement) and format no less than 6 partitions.

sudo bash -x ./mk_sdboot.sh -f "$disk"
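For orientation, the layout the helper creates looks roughly like this. This is my annotation based on how the partitions are used in the rest of this guide; the exact sizes and numbering may differ, so treat it as a sketch rather than authoritative:

```shell
# Rough SD card layout after mk_sdboot.sh -f (an assumption, not authoritative):
# (raw area)  bl1.bin, bl2.bin, tzsw.bin, u-boot.bin, params.bin  (fixed offsets)
# p1          boot     (zImage, dtb, uInitrd)
# p2          modules  (kernel modules)
# p3          rootfs   (Tizen root filesystem)
# p4+         user, systemd-data, ...
```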

In the repo that was cloned earlier there is tizen-sd-bootloader-artik10.tar.gz; this contains the early stages of the signed bootloader (bl1.bin, bl2.bin, and the ARM TrustZone software tzsw.bin). Note: don't be confused by the “tz” in “tzsw.bin”; it means ARM TrustZone software, not Tizen. Its general purpose is to establish a chain of trust to ensure software integrity. u-boot.bin and its parameters file params.bin will be overridden later, and uInitrd, while not strictly mandatory, will be helpful to set up systemd and mount the modules partition.

Use the mk_sdboot.sh helper script to write these files at their specific offsets (a seek of 1231 sectors for the ARTIK10's U-Boot params file):

sudo bash -x ./mk_sdboot.sh -w "$disk" tizen-sd-bootloader-artik10.tar.gz
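For the curious, those seek values are sector counts; multiplying by the 512-byte sector size gives the raw byte offsets where the params file lands. The mk_sdboot.sh helper does this for you, so the block below is only a sanity-check of the arithmetic:

```shell
# Byte offsets of the U-Boot params area on the SD card (sector size 512):
SECTOR=512
ARTIK5_SEEK=1031    # ARTIK5:  1031 * 512 = 0x00080e00
ARTIK10_SEEK=1231   # ARTIK10: 1231 * 512 = 0x00099e00
echo "artik5=$((ARTIK5_SEEK * SECTOR)) artik10=$((ARTIK10_SEEK * SECTOR))"
```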

Now, the SD card should be able to launch U-Boot's shell, so the next step is to prepare the operating system. Copy the Linux kernel to the 1st partition and its modules to the 2nd, and override the previously copied u-boot.bin and params.bin just before the 1st partition:

sudo bash -x ./mk_sdboot.sh -w "$disk" tizen-*-boot-armv7l-artik10.tar.gz

Now, you can try to boot the kernel if you want, but let’s also dump Tizen’s rootfs to our SD card’s 3rd partition. Along with this, 2 other partitions are copied too: user and systemd-data.

sudo bash -x ./mk_sdboot.sh -w "$disk" tizen-*-3parts-armv7l-artik.tar.gz 

The device should now be capable of booting into Tizen.

Boot Tizen

For the purpose of this guide, I’m going to assume you’ve already booted your ARTIK with an existing OS (Fedora) or others (Ubuntu, Yocto) and know how to setup your debug link.

Power up the ARTIK10.

screen /dev/ttyUSB0 115200

U-Boot 2012.07-g801ab1503-TIZEN.org (Jun 20 2016 - 15:27:02) for ARTIK10

CPU: Exynos5422 Rev0.1 [Samsung SOC on SMP Platform Base on ARM CortexA7]
APLL = 800MHz, KPLL = 800MHz
MPLL = 532MHz, BPLL = 825MHz

Board: ARTIK10
DRAM:  2 GiB
WARNING: Caches not enabled

TrustZone Enabled BSP
BL1 version: /
VDD_KFC: 0x44
LDO19: 0x28

Checking Boot Mode ... SDMMC
MMC:   S5P_MSHC2: 0, S5P_MSHC0: 1
MMC Device 0: 7.4 GiB
MMC Device 1: 14.6 GiB
MMC Device 2: MMC Device 2 not found
In:    serial
Out:   serial
Err:   serial
rst_stat : 0x100
Net:   No ethernet found.
Hit any key to stop autoboot:  0 

Hit any key to stop autoboot and get a shell; if you don't, U-Boot will try to boot the OS from the eMMC. Once you are in U-Boot's shell, change some variables temporarily:

ARTIK10 # version
U-Boot 2012.07-g801ab1503-TIZEN.org (Jun 20 2016 - 15:27:02) for ARTIK10
gcc (Tizen/Linaro GCC 4.9.2 2015.02) 4.9.2
GNU ld (GNU Binutils) 2.25.0 Linaro 2015_01-2

ARTIK10 # env default -f ;
ARTIK10 # set rootdev 1 ; set rootpart 3 ; boot 
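As a sketch of what these variables select: rootdev picks the MMC device and rootpart the partition, which is why the kernel later mounts /dev/mmcblk1p3 as the root filesystem. Reading device 1 as the SD card is my interpretation of the boot log above (MMC Device 0 is the 7.4 GiB eMMC area, Device 1 the 14.6 GiB card):

```shell
# Map U-Boot's rootdev/rootpart to the Linux device node they select
# (naming convention assumed from the boot log and kernel cmdline):
rootdev=1   # MMC device 1: the SD card
rootpart=3  # partition 3: the rootfs
root_node="/dev/mmcblk${rootdev}p${rootpart}"
echo "$root_node"
```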

In the next step of the boot process, the kernel, device tree and rootfs are loaded and executed:

reading zImage
5339682 bytes read in 67299 ms (77.1 KiB/s)
reading exynos5422-artik10.dtb
69754 bytes read in 117983 ms (0 Bytes/s)
reading uInitrd
1353683 bytes read in 29687 ms (43.9 KiB/s)
## Loading init Ramdisk from Legacy Image at 43000000 ...
   Image Name:   uInitrd
   Image Type:   ARM Linux RAMDisk Image (uncompressed)
   Data Size:    1353619 Bytes = 1.3 MiB
   Load Address: 00000000
   Entry Point:  00000000

Starting kernel ...

[    0.092805] [c0] /cpus/cpu@0 missing clock-frequency property
(...)
[    0.093269] [c0] exynos-snapshot: exynos_ss_init failed
[    0.335618] [c5] Exynos5422 ASV : invalid IDS value
(...)
Welcome to Tizen 3.0.0 (Tizen3/Common)!
(...)

Login to Tizen

The first boot will take a bit longer than usual, but eventually a prompt will appear that will allow you to log in as root with “tizen” as the password:

Welcome to Tizen 3.0.0 (Tizen3/Common)!
(...)
localhost login: 
Password: 

# cat /etc/os-release 
NAME=Tizen
VERSION="3.0.0 (Tizen3/Common)"
ID=tizen
VERSION_ID=3.0.0
PRETTY_NAME="Tizen 3.0.0 (Tizen3/Common)"
ANSI_COLOR="0;36"
CPE_NAME="cpe:/o:tizen:tizen:3.0.0"
BUILD_ID=tizen-common-artik_20160627.3_common-wayland-3parts-armv7l-artik

# cat /proc/cmdline 
console=ttySAC3,115200n8 root=/dev/mmcblk1p3 rw rootfstype=ext4 loglevel=4 asix.macaddr=d2:40:??:??:??:?? bd_addr=C0:97:??:??:??:??

# cat /proc/version 
Linux version 3.10.93-3.1-arm-artik10 (abuild@w17) (gcc version 4.9.2 (Tizen/Linaro GCC 4.9.2 2015.02) ) #1 SMP PREEMPT Mon Jun 27 16:54:57 UTC 2016

# df -h /
Filesystem            Size  Used Avail Use% Mounted on
/dev/mmcblk1p3        2.0G  726M  1.2G  38% /

Hotfix

I noticed a critical bug that can be worked around for now. Some daemons damage the root filesystem after a short period (less than 5 minutes); I suspect they generate too much output and fill the limited disk space, but this still needs to be investigated. As a temporary solution, they should be stopped as soon as possible:

# systemctl stop deviced ; systemctl stop resourced ;
# systemctl disable deviced ; systemctl disable resourced

To make sure they won’t be re installed / restarted, they need to also be renamed:

# mv /usr/bin/deviced /usr/bin/deviced.orig
# mv /usr/bin/resourced /usr/bin/resourced.orig
# systemctl status deviced
Active: inactive (dead) since Mon 2016-07-25 10:05:09 PDT; 25s ago

Then, make sure the modules partition is mounted and matches the kernel version:

ls -l  /lib/modules/$(uname -r)/modules.dep
-rw-r--r-- 1 root root 21574 Jun 27 09:56 /lib/modules/3.10.93-3.1-arm-artik10/modules.dep

If not you aren’t using uInitrd, you’ll have to tweak fstab.

Connect to the Network

Check to make sure the network is working and the device has an IP address assigned to it:

ifconfig -a
eth0: flags=-28605  mtu 1500
        inet 192.168.0.42  netmask 255.255.255.0  broadcast 192.168.0.255

If it doesn’t, it can be set it up manually after loading the AX8817X USB NIC’s driver:

# modprobe -v asix

insmod /lib/modules/3.10.93-3.1-arm-artik10/kernel/drivers/net/usb/usbnet.ko 
insmod /lib/modules/3.10.93-3.1-arm-artik10/kernel/drivers/net/usb/asix.ko macaddr=d2:40:??:??:??:??

The LAN’s DHCP server should then assign an IP address to the device.

Install Packages

The headless image should already have zypper installed; it will be used for this section. If for some reason it's not part of the image, it's possible to use sdb to install it. This is outside the scope of this article, but if you have questions, feel free to post them in the comments section.

First, add the remote repos:

# zypper lr
Warning: No repositories defined.
Use the 'zypper addrepo' command to add one or more repositories.

# zypper addrepo http://download.tizen.org/live/Tizen:/Base/arm/Tizen:Base.repo
# zypper ar http://download.tizen.org/live/Tizen:/Common:/3.0b:/artik/arm-wayland/Tizen:Common:3.0b:artik.repo

Now it’s possible to install packages and upgrade the distro to latest snapshot.

# zypper in screen openssh
# zypper up

Take IoTivity for a Spin

What’s next? Many people have expressed interest in using IoTivity on new platforms like Tizen, so let’s take a look at some of the IoTivity apps.

# zypper in iotivity-test iotcon-test 
# downgrade if needed
# rpm -ql iotivity-test

The binary image I shared at the beginning of this article contains the recently-released iotivity-1.1.1. I built it locally from source using GBS on the 1.1.1 git tag with security enabled. The example apps must be run from the directory where the *.dat files are stored. Here are some instructions to launch a sample app I described in my previous article on IoTivity on ARTIK:

ls /usr/bin/*.dat
/usr/bin/oic_svr_db_client.dat
/usr/bin/oic_svr_db_server.dat
cd /usr/bin/ ; /usr/bin/simpleserver 2 
# in another terminal:
cd /usr/bin/ ; /usr/bin/simpleclient 1 

What’s Next?

I’m going to continue hacking away at Tizen on my ARTIK10, stay tuned for more articles about developing on these platforms. Also check upcoming Tizen Community Online Meeting, a live chat about ARTIK is planned this fall, see you there.

How to Run IoTivity on ARTIK with Yocto

Samsung ARTIK is described by its developers as an end-to-end, integrated IoT platform that transforms the process of building, launching, and managing IoT products. I first saw one a year ago at the Samsung VIPEvent 2015 in Paris, but now there is an ARTIK10 on my desk and I would like to share some of my experiences with it.

In this post, I will show how to build a whole GNU/Linux system using Yocto, a project that provides great flexibility in mixing and matching components and customizing an environment to support new hardware or interesting software like IoTivity. If you’re looking for Tizen support, it’s already here (check at bottom of this article), but this post will focus on a generic Linux build.

Many of the board’s features I will be covering in this article are briefly introduced in the following video:

There are 3 ARTIK models on the market: the ARTIK 1 is a MIPS-based system with 1MB of RAM, while the ARTIK 5 and 10 are powerful ARM systems based on an Exynos System on Module. This article covers the ARTIK 10; it can easily be adapted to the ARTIK 5, but not the ARTIK 1.

According to the specifications, the ARTIK 10 has an 8 Core CPU (4 Cortex-A7 and 4 Cortex-A15), 2GB of RAM, and a GPU (Mali T628) for multimedia needs. Since the ARTIK family is targeted for Internet of Things use cases, there are also a lot of connectivity controllers (Wired Ethernet, Wi-Fi : IEEE802.11a/b/g/n/ac, Bluetooth 4.1 + LE, Zigbee/Thread) as well as security elements (ARM TrustZone, TEE) which are mandatory for a safe IoT.

Unboxing and Booting the ARTIK 10

In the box you’ll find 4 antennas, a micro USB cable, and a power supply (5V/5A with a center-positive plug). Warning: some devices like the Intel NUC use the same 5.5 x 2.1mm power supply barrel, but at 12V. It’s a good idea to label power supply connectors if you have a bunch lying around, because plugging in 12V will probably brick your ARTIK.

Setup the Debug Port

Once you plug the USB cable into the ARTIK, 2 blue lights will turn on, and on the host side a serial port device will appear (even if the device is powered off):

$ lsusb
Bus 002 Device 012: ID 0403:6001 Future Technology Devices International, Ltd FT232 Serial (UART) IC

$ dmesg | tail
usb 2-1.5.2: new full-speed USB device number 12 using ehci-pci
usb 2-1.5.2: New USB device found, idVendor=0403, idProduct=6001
usb 2-1.5.2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
usb 2-1.5.2: Product: FT232R USB UART
usb 2-1.5.2: Manufacturer: FTDI
usb 2-1.5.2: SerialNumber: ????????
ftdi_sio 2-1.5.2:1.0: FTDI USB Serial Device converter detected
usb 2-1.5.2: Detected FT232RL
usb 2-1.5.2: FTDI USB Serial Device converter now attached to ttyUSB0

$ ls -l /dev/ttyUSB*
crw-rw---- 1 root dialout 188, 0 Jun  2 15:11 /dev/ttyUSB0

As you can see, the device is owned by root and belongs to the dialout group, so check that your user is in this group too (groups | grep dialout). Then you can use your favorite serial terminal (minicom, picocom) or GNU screen.

screen /dev/ttyUSB0 115200

That’s it! If you’re on Windows, enjoy this Cybercode twins’ video to learn a bit about how to get set up.

Ready For Your First Boot

Turn off the device by pressing the left side of the two-state “POWER SW” button. Then, connect the power supply, wait about 10 seconds, and hold the SW3 POWER switch for a couple of seconds. After this, U-Boot’s log should be printed in the terminal.

Mine came loaded with the reference system based on Fedora 22. To use this, you can log in as root:root, and you’ll have at least 13G free (of 16G) to work with. Here is a reference file from the flashed OS on the eMMC:

# cat /etc/artik_release

RELEASE_VERSION=1020GC0F-3AF-01Q0
RELEASE_DATE=20160308.225306
RELEASE_UBOOT=U-Boot 2012.07
RELEASE_KERNEL=3.10.93

Since we’ll be setting up a different operating system, I prefer to avoid touching this supported system; I’ll keep it as a reference to compare hardware support against. The plan is to work on a removable microSD card and not change the eMMC at all.

Boot From an SD Card

Like many other SoC boards, such as the ODROID devices, the ARTIK supports booting from a microSD card, but it requires a bit of manual intervention. You first need to configure this using the “SW2” micro switches on the left, located between the debug and USB3 ports.

The switch block may be covered with a tiny piece of plastic tape; remove it carefully with a toothpick and move both switches from the left (OFF) position to the right, where “ON” is printed.

To try my experimental, compressed image, download demo-image-artik10-20160606081039.rootfs.artik-sdimg.qcow2 (md5:12c64d6631482f90d45a0d015bea5980) from our file server and copy it to your microSD card using qemu-utils:

# lsblk
disk=/dev/sdTODO
file=demo-image-artik10-20160606081039.rootfs.artik-sdimg.qcow2
# time qemu-img convert -p "${file}" "${disk}"

As you did earlier, power the device off, connect the USB cable, insert the microSD card, switch it on, wait for a few seconds, and finally hold the micro switch for a couple of seconds. U-Boot should run through its configuration and some verbose output should be printed on the serial console:

U-Boot 2012.07 (Jun 06 2016 - 10:42:48) for ARTIK10

CPU: Exynos5422 Rev0.1 [Samsung SOC on SMP Platform Base on ARM CortexA7]
APLL = 800MHz, KPLL = 800MHz
MPLL = 532MHz, BPLL = 825MHz

Board: ARTIK10
DRAM:  2 GiB
WARNING: Caches not enabled

TrustZone Enabled BSP
BL1 version: ?/???
VDD_KFC: 0x44
LDO19: 0x28

Checking Boot Mode ... SDMMC
MMC:   S5P_MSHC2: 0, S5P_MSHC0: 1
MMC Device 0: 7.4 GiB
MMC Device 1: 14.6 GiB
MMC Device 2: MMC Device 2 not found
In:    serial   
Out:   serial   
Err:   serial   
rst_stat : 0x100
Net:   No ethernet found.
Hit any key to stop autoboot:  0
[Fusing Image from SD Card.]
(...)
reading zImage
5030144 bytes read in 32570 ms (150.4 KiB/s)
reading exynos5422-artik10.dtb
71014 bytes read in 31067 ms (2 KiB/s)
## Flattened Device Tree blob at 40800000
   Booting using the fdt blob at 0x40800000
   Loading Device Tree to 4ffeb000, end 4ffff565 ... OK

Starting kernel ..
(...)

And in the end you’ll get a login prompt for root, without a password:

Poky (Yocto Project Reference Distro) 2.0.2 artik10 /dev/ttySAC3

artik10 login: root

root@artik10:~# cat /proc/version 
Linux version 3.10.93 (philippe@WSF-1127) (gcc version 5.2.0 (GCC) ) #1 SMP PREEMPT Mon Jun 6 10:39:42 CEST 2016

root@artik10:~# cat /proc/cmdline 
console=ttySAC3,115200n8 root=/dev/mmcblk1p2 rw rootfstype=ext4 loglevel=4 rootdelay=3

Now you are ready for the section on building your own image, but if you really want to make sure everything is working, Canonical supports ARTIK, so you can use an Ubuntu Snappy image. For reference, I tested artik10-snappy-20160317.img.tar.xz and it reported the following for its kernel:

Linux version 3.10.9-00008-gb745981-dirty (u@u-ThinkPad-T450) (gcc version 4.8.2 (Ubuntu/Linaro 4.8.2-16ubuntu4) ) #7 SMP PREEMPT Thu Mar 10 10:06:16 CST 2016

Build Your Own Image with meta-artik

First, we start with Poky, the Yocto reference distro, because it’s the quickest way to reach today’s goal. As with all Yocto targets, you first need to identify a Board Support Package (BSP) layer that collects all the special software needed for this family of hardware. As far as I know, there is no official Yocto layer for ARTIK.

I’ve known about meta-exynos, but I also noticed the meta-artik community layer from Resin.io; they spoke about the ARTIK BSP at the latest Samsung Developer Conference (SDC2016):

What really matters in this set are the u-boot-artik and linux-exynos recipes used to fetch sources. We’re going to make use of my small contribution to meta-artik’s master branch, which produces standalone bootable SD card images.
The patch adds a new class for SD card output; the black magic is that the U-Boot params must be written to different offsets (0x00080e00 (1031*512) for the ARTIK5 and 0x00099e00 (1231*512) for the ARTIK10), otherwise the board won’t boot. I’ll try to adapt this for Tizen too.

Let’s clone Poky, add the layer to your conf file, and build a regular image.

$ git clone -b jethro http://git.yoctoproject.org/git/poky
$ cd poky
$ git clone -b master https://github.com/resin-os/meta-artik
$ . ./oe-init-build-env

With the recipe in place, you need to tell the environment to look for it. To do this, edit the bblayers.conf file from the base setup (this file will be in build/conf/ under your top-level directory) and add a line to the BBLAYERS stanza (there’s no need to add anything to BBLAYERS_NON_REMOVABLE; that doesn’t apply to this build). The result would look like this:

$ cat conf/bblayers.conf

RELATIVE_DIR := "${@os.path.abspath(os.path.dirname(d.getVar('FILE', True)) + '/../../..')}"
# LAYER_CONF_VERSION is increased each time build/conf/bblayers.conf
# changes incompatibly
LCONF_VERSION = "6"

BBPATH = "${TOPDIR}"
BBFILES ?= ""

BBLAYERS ?= " \
  ${RELATIVE_DIR}/poky/meta \
  ${RELATIVE_DIR}/poky/meta-yocto \
  ${RELATIVE_DIR}/poky/meta-yocto-bsp \
  "
BBLAYERS_NON_REMOVABLE ?= " \
  ${RELATIVE_DIR}/poky/meta \
  ${RELATIVE_DIR}/poky/meta-yocto \
  "

BBLAYERS += "${RELATIVE_DIR}/poky/meta-artik"

Then, build the image:

$ MACHINE=artik10 bitbake core-image-minimal

When this finishes, write the image to the SD card:

lsblk # list disk devices
disk=/dev/sdTODO # update device for sdcard
dd if=tmp/deploy/images/artik10/core-image-minimal-artik10.artik-sdimg bs=32M oflag=sync of=$disk

Adding IoTivity

IoTivity is an open source framework that enables effective device-to-device connectivity for the Internet of Things. It implements the emerging standards from the Open Connectivity Foundation (OCF was formerly known as OIC, and the specified protocols still use the oic name, as does the layer described below).

The layered approach of Yocto makes it easy to add components, as long as descriptions exist for them. Otherwise you need to write your own descriptions, which is not too hard. To add IoTivity, add the meta-oic layer:

$ cd ../../poky
$ git clone -b 1.1.0 http://git.yoctoproject.org/cgit/cgit.cgi/meta-oic

Next, add it to the previous file (./build/conf/bblayers.conf):

BBLAYERS += "${RELATIVE_DIR}/poky/meta-oic"

We also want to add some of the IoTivity packages into the built image; edit build/conf/local.conf to add these:

# IoTivity
IMAGE_INSTALL_append = " iotivity iotivity-resource iotivity-service"
# To get the example apps, include these as well:
IMAGE_INSTALL_append = " iotivity-resource-samples iotivity-service-samples"

Finally, build it:

MACHINE=artik10 bitbake core-image-minimal

Like before, copy this image to the SD card and boot it up the normal way.

Trying out IoTivity

Once you have installed the example packages, you can find apps to try things out. This is probably more interesting with two machines running IoTivity, so you can have distinct client and server devices, but it is possible to run both on the same machine.

In /opt/iotivity/examples/resource/cpp/ you will see these files, which make up client/server pairs – the names are descriptive:

ls /opt/iotivity/examples/resource/cpp/

OICMiddle              groupclient            simpleclient
devicediscoveryclient  groupserver            simpleclientHQ
devicediscoveryserver  lightserver            simpleclientserver
fridgeclient           presenceclient         simpleserver
fridgeserver           presenceserver         simpleserverHQ
garageclient           roomclient             threadingsample
garageserver           roomserver

Start the simple server, which will emulate an OCF/OIC device, and in this case, a light:

$ cd /opt/iotivity/examples/resource/cpp/ ; ./simpleserver

Usage : simpleserver 
    Default - Non-secure resource and notify all observers
    1 - Non-secure resource and notify list of observers

    2 - Secure resource and notify all observers
    3 - Secure resource and notify list of observers

    4 - Non-secure resource, GET slow response, notify all observers
Created resource.
Added Interface and Type
Waiting

Now start the simple client in another shell session:

$ cd /opt/iotivity/examples/resource/cpp/ ; ./simpleclient

---------------------------------------------------------------------
Usage : simpleclient 
   ObserveType : 1 - Observe
   ObserveType : 2 - ObserveAll
---------------------------------------------------------------------



Finding Resource...
Finding Resource for second time...
In foundResource
In foundResource
In foundResource
Found resource 4126ec5c-16ce-4b9b-84e6-15ad6e268774/oic/d for the first time on server with ID: 4126ec5c-16ce-4b9b-84e6-15ad6e268774
DISCOVERED Resource:
        URI of the resource: /oic/d
        Host address of the resource: coap://[fe80::b82b:e6ff:fe30:8efd]:44475
        List of resource types:
                oic.wk.d
        List of resource interfaces:
                oic.if.baseline
                oic.if.r
Found resource 4126ec5c-16ce-4b9b-84e6-15ad6e268774/oic/p for the first time on server with ID: 4126ec5c-16ce-4b9b-84e6-15ad6e268774
DISCOVERED Resource:
        URI of the resource: /oic/p
        Host address of the resource: coap://[fe80::b82b:e6ff:fe30:8efd]:44475
        List of resource types:
                oic.wk.p
        List of resource interfaces:
                oic.if.baseline
                oic.if.r
Found resource 4126ec5c-16ce-4b9b-84e6-15ad6e268774/a/light for the first time on server with ID: 4126ec5c-16ce-4b9b-84e6-15ad6e268774
DISCOVERED Resource:
        URI of the resource: /a/light
        Host address of the resource: coap://[fe80::b82b:e6ff:fe30:8efd]:44475
        List of resource types:
                core.light
                core.brightlight
        List of resource interfaces:
                oic.if.baseline
                oic.if.ll
Getting Light Representation...

The OIC specifications describe a RESTful model of communications, where everything is modeled as resources and web-style commands such as GET, POST, and PUT are used to operate on representations of these resources. The communication model is called CRUDN (Create, Retrieve, Update, Delete, Notify). In this example, the IoTivity server sets up resources to describe the (emulated) OIC device. Once the client starts, it tries to discover interesting resources on the network using a multicast GET; the server responds to this with the resources it is hosting. In our example, there is a light resource with the core.light and core.brightlight resource types (a light whose brightness can be controlled). At this point, the two can speak to each other, with the client driving the conversation.

We can see the client getting a representation of the state of the emulated light:

GET request was successful
Resource URI: /a/light
        state: false
        power: 0
        name: John's light

It then sends a PUT request to update the light's state on the server with a power value of 15.

Putting light representation...
PUT request was successful
        state: true
        power: 15
        name: John's light

Next, the client sends POST requests; the first creates a new resource at /a/light1, and the second updates the light's power to a value of 55:

Posting light representation...
POST request was successful
        Uri of the created resource: /a/light1
Posting light representation...
POST request was successful
        state: true
        power: 55
        name: John's light

After this, the client sets up an observe on the server so it will be notified of state changes.

Observe is used.

Observe registration action is successful

The light power is updated in steps of 10, with the client printing the new state every time it receives a notification. A typical event looks like:

OBSERVE RESULT:
        SequenceNumber: 0
        state: true
        power: 55
        name: John's light

Meanwhile, we can watch the server report the requests it receives and the actions it takes. Once the light power reaches a certain level and the client sees that the last change has been observed, the connection is shut down.

This is just a basic sample of IoTivity in operation; have fun experimenting with this brand new technology for the Internet of Things! There is more to be done from this state, specifically in the connectivity domain, such as adding Bluetooth, BTLE, or LPWAN support, since these are what matter most for IoTivity.

What’s Next?

We can expect Tizen support to improve soon, as it's becoming one of the reference boards.

IoT applications have an incredibly wide range of uses; here is an example of work we’ve done within the automotive context:

Feedback is also welcome; if you'd like us to explain anything in more depth, such as Tizen support, Tizen OBS/GBS images, or Tizen Yocto, feel free to keep in touch.

[wp_biographia user="pcoval"]

[wp_biographia user="mats"]

ARTIK is a registered trademark of Samsung.

ARM KVM Maturing Quickly: Headed for Industrial IoT, Routers, & Servers

There are many companies shipping products based on ARMv8 today, including AMD, Samsung, NVIDIA, Cavium, Apple, Broadcom, Qualcomm, Huawei, and others.  What makes ARM unique is the differentiation between vendors; while a majority of features are based on the ARM reference processor and bus architectures, additional custom features may be introduced or reference features may be omitted at the vendor’s discretion. This results in various profiles that are tailored to specific products, making it extremely difficult for another architecture to compete against such a wide selection of choices. As ARMv8 matures, it’s moving out of mobile and embedded devices into Industrial IoT (Internet of Things), network routers, wireless infrastructure, and eventually, the cloud.

For those of you not familiar with KVM, it stands for Kernel-based Virtual Machine, a hypervisor module in the Linux kernel. KVM is the primary open source hypervisor, and is the default for OpenStack, a popular cloud computing software platform that's used to deploy a wide variety of services.

The Future Uses of ARM

ARM added hardware virtualization extensions first in the ARMv7 architecture, and also included them in ARMv8. For the most part, extensions for both architectures are compatible to the extent that ARMv7 and v8 share a lot of common ARM KVM code. Given that ARM is used extensively in the embedded world, this adds a whole new dimension to virtualization. One key application is Industrial IoT, where hardware is developed with future expansion in mind and typically only requires new software upgrades to add new features.

Virtualization is a perfect solution for such situations, and cloud provisioning of such devices would help with remote servicing, debugging, legacy support through emulation, and many other things. In particular, virtualization and ARM KVM are an ideal match given that the majority of embedded devices run on Linux and ARM processors; additionally, with virtualization extensions, ARM KVM works well in embedded environments. This concept makes so much sense that Intel, Freescale, and Wind River have all presented their own visions of IoT device virtualization.

ARM has also continued making progress in the network hardware market. For example, Cavium produces processors based on ARMv8 for large, Software-Defined Networking (SDN) capable routers, and Broadcom is headed in that direction as well after unveiling a new ARMv8 processor. Currently, several features such as Unified Extensible Firmware Interface (UEFI) and Advanced Configuration and Power Interface (ACPI) support are in development for the server market. Prior to ARM KVM, only proprietary micro-kernel virtualization vendors were available on ARM, and these were often complicated solutions with high levels of vendor lock-in. Today, Linux runs on wide-ranging products from embedded devices to servers, and ARM KVM is now reaching full maturity. A device with as little as 32MB of flash memory is enough to run several ARM KVM guests.

What Makes ARM Virtualization so Great?

ARM hardware virtualization extensions span all processor features, and the primary focus is on limiting Guest exits, or at least minimizing exit intervals. The following list summarizes a few of the ARM virtualization extensions:

  • Second stage tables – Also known as nested page tables, these relieve the hypervisor from managing the guest shadow page tables in the software.
  • Interrupt Virtualization – Provides efficient delivery of interrupts to the virtual CPU (vCPU).
  • Clock source and timer virtualization – Allows the guest to schedule its own timers and directly access a clock source. The OS uses this to measure the passage of time or to schedule timer events. This is one of several areas where ARM has improved over x86.
  • Event Trapping – Programs the CPU events that should be trapped to the hypervisor. For example, access control to privileged registers can be used to prevent the guest from reprogramming the host registers.
  • Exit reason and instruction decoding on Guest exit – The former accelerates resolving of all exits, and the latter is typically used for accelerating Guest IO accesses.

ARM has taken a completely different approach to managing the Guest and Host context. As shown below, on the x86 architecture, a Virtual Machine Control Structure (VMCS) is implemented which, among other important fields, contains the Guest and Host state. On entry to a Guest, the Host state is saved and the Guest is restored; on an exit, the opposite occurs.

ARM, on the other hand, has decided to add a HYP mode and exception level 2 (EL2) for ARMv7 and v8 respectively. This extra mode is more privileged than the KVM Host's SVC or EL1 mode, but less privileged than the secure monitor mode. With HYP mode, transitions to and from the Guest and Host occur through this extra mode. For a world switch (a guest to hypervisor context switch), the approaches of x86 and ARM differ considerably:

ARM (left) has improved how guest exits are handled considerably over other approaches like x86 VMCS.
  • ARM performs a VM enter world switch in software, saves Host context on HYP stack and restores Guest from KVM vCPU structure, and the reverse on a VM exit. This allows future software optimization based on the features a processor supports.
  • It’s quite possible that some exits can be handled in HYP mode; this is an ultra-light, inexpensive exit. In fact, some benchmarks on the KVM-ARM mailing list show millions of such exits with no apparent performance degradation. The key is that in HYP mode, only a portion of the state is saved to handle the exit and resume the Guest.  An example is discussed in this article, which shows this behavior while running massive compilations.

ARM Virtualization Comes of Age

Over the past several months the open source community has made great progress in enabling ARM KVM for various markets and closing the gap with x86. The following are some key features that have been pushed upstream:

  • Device Pass-through – Required for nearly any product, ranging from IoT device sensor pass-through for monitoring and control, to server Network Interface Card (NIC) access, and Network Functions Virtualization (NFV) device access.
  • Virtual I/O (virtio) – The most widely used Input/Output (I/O) mechanism in cloud environments. ARM supports virtio block, network, and memory balloon devices based on the virtio-mmio transport (as opposed to virtio-pci). Virtio-pci is the next step once a reference PCI controller that supports PCI cycles has been developed.
  • Live, cold migration support, and image save/restore – These features are essential for zero downtime maintenance, high availability, and load balancing. This applies to all environments including embedded devices, cloud, and specific NFV elements.
  • Transparent huge pages – This is essential for near native performance. While it's been supported for some time now, a new feature dissolves huge pages during live migration and coalesces them after the migration completes. This enables rapid migration of memory-heavy workloads that previously was not possible.
  • GDB debug – This is for debugging the guest.

Additionally, the community is hard at work to support UEFI, ACPI, and GUID Partition Table (GPT) partitioned image boot. There is also ongoing work on libvirt to support the ARM KVM machine model, as well as additional OpenStack integration. Finally, future work includes hardware profiling support in the Guest and Host simultaneously, as well as Asynchronous Page Fault support, which allows the Guest to execute another thread during a second stage page fault.

We’ll continue to follow these developments, so stay tuned for future articles that focus on our community activities and ARM KVM adoption as the technology continues to mature.