How to Run IoT.js on the Raspberry Pi Zero

IoT.js is a lightweight JavaScript platform for building Internet of Things devices; this article will show you how to run it on a few dollars' worth of hardware. The first version was released last year for various platforms including Linux, Tizen, and NuttX (the base of Tizen:RT). The Raspberry Pi 2 is one of the reference targets, but for demo purposes we also tried to build for the Raspberry Pi Zero, the cheapest and most limited device of the family. The main difference is the CPU architecture: the Pi Zero is ARMv6 (like the Pi 1), while the Pi 2 is ARMv7 and the Pi 3 is ARMv8 (aka ARM64).

IoT.js upstream uses a Python helper script to cross-build for supported devices, but instead of adding support for a new device, I tried building on the device itself using native tools, CMake, and the default compiler options; it simply worked! While working on this, I decided to package iotjs for Debian to see how well it supports other architectures (MIPS, PPC, etc.); we will see.

Unfortunately, Debian's armel port isn't optimized for ARMv6 or its FPU, both of which are present on the Pi 1 and Pi Zero, so the Raspbian project had to rebuild Debian for the ARMv6+VFP2 ARM variant to support all Raspberry Pi SBCs.

In this article, I’ll share hints for running IoT.js on Raspbian, the OS officially supported by the Raspberry Pi Foundation; the following instructions will work on any Pi device, since a portability strategy was preferred over optimization. I’ll demonstrate three separate ways to do this: from packages, by building on the device, and by building in a virtual machine. By the way, an alternative to consider is to rebuild Tizen Yocto for the Pi Zero, but I’ll leave that as an exercise for the reader; you can accomplish this with a BitBake recipe, or ask for more hints in the comments section.


Install from Packages

iotjs landed in Debian’s sid, and until it reaches the testing branch (and subsequently Raspbian and Ubuntu), the fastest way to get it is to download precompiled packages from my personal Raspbian repo:

url='https://dl.bintray.com/rzr/raspbian-9-armhf'
source="/etc/apt/sources.list.d/bintray-rzr-raspbian-9-armhf.list"
echo "deb $url raspbian main" | sudo tee "$source"
sudo apt-get update
apt-cache search iotjs
sudo apt-get install iotjs
/usr/bin/iotjs
Usage: iotjs [options] {script | script.js} [arguments]

Use it

Usage is pretty straightforward, start with a hello world source:

echo 'console.log("Hello IoT.js !");' > example.js
iotjs  example.js 
Hello IoT.js !

More details about the current environment can be displayed (this output is for iotjs 1.0 with the default built-in modules):

echo 'console.log(JSON.stringify(process));' > example.js
iotjs  example.js 
{"env":{"HOME":"/home/user","IOTJS_PATH":"","IOTJS_ENV":""},"native_sources":{"assert":true,"buffer":true,"console":true,"constants":true,"dns":true,"events":true,"fs":true,"http":true,"http_client":true,"http_common":true,"http_incoming":true,"http_outgoing":true,"http_server":true,"iotjs":true,"module":true,"net":true,"stream":true,"stream_duplex":true,"stream_readable":true,"stream_writable":true,"testdriver":true,"timers":true,"util":true},"platform":"linux","arch":"arm","iotjs":{"board":"\"unknown\""},"argv":["iotjs","example.js"],"_events":{},"exitCode":0,"_exiting":false} null 2

From here, you can look to use other built-in modules like http, fs, net, timer, etc.

Need More Features?

More modules are enabled in the master branch, so I also built snapshot packages that can be installed to enable additional key features like GPIO, I2C, and more. For your convenience, the snapshot package can be installed to replace the latest release:

root@raspberrypi:/home/user$ apt-get remove iotjs iotjs-dev iotjs-dbgsym iotjs-snapshot
root@raspberrypi:/home/user$ aptitude install iotjs-snapshot
The following NEW packages will be installed:
  iotjs-snapshot{b}
(...)
The following packages have unmet dependencies:
 iotjs-snapshot : Depends: iotjs (= 0.0~1.0+373+gda75913-0~rzr1) but it is not going to be installed
The following actions will resolve these dependencies:
     Keep the following packages at their current version:
1)     iotjs-snapshot [Not Installed]                     
Accept this solution? [Y/n/q/?] n
The following actions will resolve these dependencies:

     Install the following packages:                 
1)     iotjs [0.0~1.0+373+gda75913-0~rzr1 (raspbian)]
Accept this solution? [Y/n/q/?] y
The following NEW packages will be installed:
  iotjs{a} iotjs-snapshot 
(...)
Do you want to continue? [Y/n/?] y
(...)
  iotjs-snapshot https://dl.bintray.com/rzr/raspbian-9-armhf/iotjs-snapshot_0.0~1.0+373+gda75913-0~rzr1_armhf.deb
  iotjs https://dl.bintray.com/rzr/raspbian-9-armhf/iotjs_0.0~1.0+373+gda75913-0~rzr1_armhf.deb

Do you want to ignore this warning and proceed anyway?
To continue, enter "yes"; to abort, enter "no": yes
Get: 1 https://dl.bintray.com/rzr/raspbian-9-armhf raspbian/main armhf iotjs armhf 0.0~1.0+373+gda75913-0~rzr1 [199 kB]
Get: 2 https://dl.bintray.com/rzr/raspbian-9-armhf raspbian/main armhf iotjs-snapshot armhf 0.0~1.0+373+gda75913-0~rzr1 [4344 B]
(...)

If you then run console.log(process) again, you’ll see more interesting modules to use, like gpio, i2c, uart, and more; external modules can also be used, so check on the work in progress for sharing modules with the IoT.js community. Of course, this can be reverted to the latest release by simply installing the iotjs package, because it has higher priority than the snapshot version.

root@raspberrypi:/home/user$ apt-get install iotjs
(...)
The following packages will be REMOVED:
  iotjs-snapshot
The following packages will be upgraded:
  iotjs
(...)
Do you want to continue? [Y/n] y
(...)

Build on the Device

It’s also possible to build the snapshot package from source with extra packaging patches, found in the community branch of IoT.js (which can be rebased on upstream anytime).

sudo apt-get install git time sudo
git clone https://github.com/tizenteam/iotjs
cd iotjs
./debian/rules
sudo debi

On the Pi Zero, this took less than 30 minutes over NFS. If you want to learn more, you can follow similar instructions for building IoTivity on ARTIK; building over NFS might be slower than on local storage, but it will extend the life span of your SD cards.

Build on a Virtual Machine

A faster alternative that’s somewhere between building on the device and setting up a cross build environment (which always has a risk of inconsistencies) is to rebuild IoT.js with QEMU, Docker, and binfmt.

First install docker (I used 17.05.0-ce and 1.13.1-0ubuntu6), then install the remaining tools:

sudo apt-get install qemu qemu-user-static binfmt-support time
sudo update-binfmts --enable qemu-arm

docker build 'http://github.com/tizenteam/iotjs.git'

It’s much faster this way and took me less than five minutes. The files are produced inside the container, so they need to be copied back to the host. I made a helper script that handles setup and gets the deb packages ready to be deployed on the device (sudo dpkg -i *.deb):

curl -sL https://rawgit.com/tizenteam/iotjs/master/run.sh | bash -x -
./tmp/out/iotjs/iotjs-dbgsym_0.0-0_armhf.deb
./tmp/out/iotjs/iotjs-dev_0.0-0_armhf.deb
./tmp/out/iotjs/iotjs_0.0-0_armhf.deb

I used the Resin/rpi-raspbian Docker image (thanks again, Resin!). Finally, I also want to thank my coworker Stefan Schmidt for the idea after he set up a similar trick for EFL’s CI.

Further Reading

If you want to learn more, here are some additional resources to take your understanding further.

Raspberry Pi is a trademark of the Raspberry Pi Foundation.

Common EFL Focus Pitfalls

I started patching the focus subsystem of the EFL widget toolkit quite some time ago. Since then, people have started to assign me everything that somehow looks like an issue with focus; sometimes the mere presence of the word “focus” in a backtrace is enough for this to happen. I’ve discovered that most people mix up the different types of focus in EFL, so in this blog post I hope to provide some clarity about them.

How EFL Gets Focused

Focus happens in three different places in EFL:

  • ecore-evas – the window manager abstraction
  • evas – the canvas library
  • elementary – the widget toolkit.

First of all, I should point out what focus itself is; I think a good example is your typical smartphone interaction. While interacting with your smartphone, your complete attention is given to its screen and all interactions are with the interface of the device; you will also likely cease interacting with the outside environment entirely. In the same way, each abstraction in EFL has its own interaction partners:

  • The focused canvas object gets the attention from the keyboard
  • The focused widget is highlighted visually, so the user can see where their attention should go
  • And the window manager in the end focuses an application, which is probably an EFL Application.

These differences are often the source of people’s confusion when it comes to focus in EFL. For example, losing the toolkit focus on a window object does not mean that the window lost the input from the user; instead, it means that another widget got the toolkit focus while the window manager still has focus on this window.

For another example, consider a toolkit widget that’s built out of two objects: an image and a text field below the image. In this example, the widget receives the toolkit’s focus, and the focus of the canvas moves to the image. Then, the user presses some key binding to change the name of the image, and the canvas focus moves to the text field. In this case, the canvas focus moves, creating update events on the canvas focus. However, the widget focus stays the same: the user is still meant to have their attention on that widget, so there is no change to it.
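The sequence above can be modeled with a tiny state sketch (plain JavaScript, purely illustrative; EFL itself is written in C, and the names here are made up for the example):

```javascript
// Widget focus and canvas focus are tracked independently.
var focus = { widget: null, canvas: null };

function focusWidget(name, part) { focus.widget = name; focus.canvas = part; }
function moveCanvasFocus(part)   { focus.canvas = part; } // widget focus unchanged

focusWidget('image-entry', 'image');  // widget gets toolkit focus, image gets canvas focus
moveCanvasFocus('text-field');        // key binding moves only the canvas focus

console.log(focus.widget); // → image-entry (unchanged)
console.log(focus.canvas); // → text-field
```

Only the canvas focus produced a change event; the widget focus never moved.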

Some Tips for Understanding EFL Focus

The focus property can only be true on a single object per entity; in practice this means:

  • One window focused per user window manager session
  • One object focused per user canvas
  • One widget focused per widget tree in the toolkit

Additionally:

  • Canvas focus is only used for injecting keyboard inputs from the input system of the display technology.
  • Widget focus is used for navigation; in the case of focus movement initiated by key bindings, this is the position from which the next up/down/left/right element is calculated.

If you ever subscribe to change events for some kind of focus property, here’s how to tell which focus is which in EFL:

  • It’s window manager focus if the user has their attention on your application.
  • It’s canvas focus if you need to know where the keyboard inputs are coming from.
  • It’s a widget focus if the user has moved their attention to a subset of canvas objects that are bound together as a widget implementation.

If you have any questions about any of this content, head to the comments section!

How to Securely Encrypt A Linux Home Directory

These days our computers enable access to a lot of personal information that we don’t want random strangers to access; things like financial login information come to mind. The problem is that it’s hard to make sure this information isn’t leaked somewhere in your home directory, like the cache files of your web browser. Obviously, in the event your computer gets stolen, you want your data at rest to be secure; for this reason you should encrypt your hard drive. Sometimes this is not a good solution, as you may want to share your device with someone you might not want to give your encryption password to. In this case, you can encrypt only the home directory of your specific account.

Note: This article focuses on the security of data at rest once the device is forever out of your reach; other threat models may require different strategies.

Improving Upon Linux Encryption

I have found the home directory encryption feature of most Linux distributions to be lacking in some way. My main concern is that I can’t change my password, as it’s used to directly unlock the encrypted directory. What I would like is a pass phrase that is used as the encryption key but is itself encrypted with my password. This way, I can change my password by re-encrypting my pass phrase with a different password. I want it to be a pass phrase because that makes it possible to back it up in my global password manager, which protects me in case the file holding the key gets corrupted, and it allows me to share it with my IT admin, who can then put it in cold storage.

Generate a Pass Phrase

So, how do we do this? First, generate the key; for that, I slightly modified a Bitcoin wallet pass phrase generator and use it as follows:

$ git clone https://github.com/Bluebugs/bitcoin-passphrase-generator.git
$ cd bitcoin-passphrase-generator
$ ./bitcoin-passphrase-generator.sh 32 2>/dev/null | openssl aes256 -md sha512 -out my-key

This will prompt you for the password used to encrypt your pass phrase, and this will become your user password. Once this command has been called, it generates a file that contains a pass phrase made of 32 words from your system dictionary. On Arch Linux, this dictionary contains 77,649 words, so that’s one possibility in 77649^32. Anything more than 2^256 should be secure enough for some time. To check the content of your pass phrase and back it up, run the following:

$ openssl aes256 -md sha512 -d -in my-key

Once you have entered your password it will output your pass phrase: a line of words that you can now backup in your password manager.
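The security margin mentioned above is easy to verify with a quick back-of-the-envelope calculation (JavaScript here, but any calculator works):

```javascript
// 32 words drawn uniformly from a 77,649-word dictionary:
// entropy in bits = 32 * log2(77649).
var bits = 32 * Math.log2(77649);
console.log(bits.toFixed(1) + ' bits'); // → 519.8 bits, comfortably above 256
```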

Encrypt Your Home Directory

Now, let’s encrypt the home directory. I have chosen to use ext4 per-directory encryption because it enables you to share the hard drive with other users in the future instead of requiring you to split it into different partitions. To initialize your home directory, first move your current one out of the way (this means stopping all processes that are running under your user ID and logging in under another ID). After this, load the key into the kernel:

$ openssl aes256 -md sha512 -d -in my-key | e4crypt add_key 2>1

This will ask you for the password you used to protect your pass phrase; once entered, it adds the key to your current user kernel keyring. With this, you can now create your home directory, but first get the key handle with the following command:

$ keyctl show | grep ext4
12345678 --alsw-v    0   0   \_ logon: ext4:abcdef1234567890

The key handle is the text that follows “ext4:” at the end of the second line above; it can now be used to create your new encrypted home directory:

$ mkdir /home/user
$ e4crypt set_policy abcdef1234567890 /home/user

After this, you can copy all of the data from your old user directory to the new directory and it will be properly encrypted.

Setup PAM to Unlock and Lock Your Home Directory

Now, we need to play with the PAM session handle to get it to load the key and unlock the home directory. I will assume that you have stored my-key at /home/keys/user/my-key. We’ll use pam_mount to run a custom script that will add the key in the system at login and remove it at log out. The script for mounting and unmounting can be found on GitHub.

The interesting bit that I’ve left out of the mounting script is the possibility of getting the key from an external device or maybe even your phone. Also, you’ll notice the need to be part of the wheel group to cleanly unmount the directory; otherwise the files will still be in the cache after logout, and it will be possible to navigate the directory and create files until they are flushed. I think a clean way to handle this might be to have a script for all users that can flush the cache without asking for a sudo password; this would be an easy improvement to make. Finally, modify the PAM system-login file as explained in the Arch Linux documentation, with just one additional change to disable pam_keyinit.so.

In /etc/pam.d/system-login, paste the following:

#%PAM-1.0

auth required pam_tally.so onerr=succeed file=/var/log/faillog
auth required pam_shells.so
auth requisite pam_nologin.so
auth optional pam_mount.so
auth include system-auth

account required pam_access.so
account required pam_nologin.so
account include system-auth

password optional pam_mount.so
password include system-auth

session optional pam_loginuid.so
#session optional pam_keyinit.so force revoke
session [success=1 default=ignore] pam_succeed_if.so service = systemd-user quiet
session optional pam_mount.so
session include system-auth
session optional pam_motd.so motd=/etc/motd
session optional pam_mail.so dir=/var/spool/mail standard quiet
-session optional pam_systemd.so
session required pam_env.so

Now, if you want to change your password, you need to decrypt the key, change the password, and re-encrypt it. The following command will do just that:

$ openssl aes256 -md sha512 -d -in my-old-key | openssl aes256 -md sha512 -out my-new-key

Et voilà !

EFL Multiple Output Support with Wayland

Supporting multiple outputs is something most of us take for granted in the world of X11 because the Xorg team has had years to implement and perfect support for multiple outputs. While we have all enjoyed the fruits of their labor, this has not been the case for Wayland. There are two primary types of multiple output configurations that are relevant: cloned and extended outputs.

Cloned Outputs

Cloned output mode is the one that most people are familiar with. This means that the contents of the primary output will be duplicated on any additional outputs that are enabled. If you have ever hot plugged an external monitor into your Windows laptop, then you have seen this mode in action.

An example of cloned output

Extended Outputs

Extended output mode is somewhat less common, yet still very important. In this mode, the desktop is extended to span across multiple outputs, while the primary output retains sole ownership of any panels, shelves, or gadgets that reside there. This enables the secondary monitor to act as an extended desktop space where other applications can be run.

An example of extended outputs

Multiple Output Support in Weston and EFL

Weston, the reference Wayland compositor, has had the ability to support multiple outputs in an extended configuration for quite some time now. This extended mode configuration allows each output to run with completely separate framebuffer and repaint loops. There are patches currently being worked on to enable cloned output support, but these have not been pushed upstream yet.

While EFL has had support for multiple outputs when running under X11 for quite a few years now, support for multiple outputs under Wayland has been missing. A major hurdle to implementing it has been that Evas, the EFL rendering library, was not accounting for multiple outputs in its rendering update loop. With this commit, our very own Cedric Bail has removed that hurdle and enabled work to progress further on implementing this feature.

Complete Multiple Output Support is on the Way

While the current implementation of multiple output support in EFL Wayland only supports outputs in a cloned configuration, support for extended mode is in development and is expected to be available in upstream EFL soon. When these patches are published upstream, they will bring our EFL Wayland implementation much closer to parity with our X11 support. Thankfully, little to no changes are needed to add this support to our Enlightenment Wayland compositor.

How to Prototype Insecure IoTivity Apps with the Secure Library

IoTivity 1.3.1 has been released, and with it come some important new changes. First, you can rebuild packages from sources, with or without my hotfix patches, as explained recently in this blog post. For ARM users (of the ARTIK7), the fastest option is to download precompiled .RPM packages for fedora-24 from my personal repository, or check the ongoing work for other operating systems.

Copy and paste this snippet to install the latest IoTivity from my personal repo:

distro="fedora-24-armhfp"
repo="bintray--rzr-${distro}"
url="https://bintray.com/rzr/${distro}/rpm"
rm -fv "/etc/yum.repos.d/$repo.repo"
wget -O- $url | sudo tee /etc/yum.repos.d/$repo.repo
grep $repo /etc/yum.repos.d/$repo.repo

dnf clean expire-cache
dnf repository-packages "$repo" check-update
dnf repository-packages "$repo" list --showduplicates # list all published versions
dnf repository-packages "$repo" upgrade # if previously installed
dnf repository-packages "$repo" install iotivity-test iotivity-devel # remaining ones

I also want to thank JFrog for providing the Bintray service to free and open source software developers.

Standalone Apps

In a previous blog post, I explained how to run examples that are shipped with the release candidate. You can also try with other existing examples (rpm -ql iotivity-test), but some don’t work properly. In those cases, try the 1.3-rel branch, and if you’re still having problems please report a bug.

At this point, you should know enough to start making your own standalone app and use the library like any other, so feel free to get inspired by, or even fork, the demo code I wrote to test integration on various OSes (Tizen, Yocto, Debian, Ubuntu, Fedora, etc.).

Let’s clone the sources from the repo; meanwhile, if you’re curious, read the description of the tutorial projects collection.

sudo dnf install screen psmisc git make
git clone http://git.s-osg.org/iotivity-example
# Flatten all subprojects to "src" folders
make -C iotivity-example
# Explore all subprojects
find iotivity-example/src -iname "README*"

Note that most of the examples were written for prototyping, and security was not enabled at that time (on 1.2-rel security is not enabled by default except on Tizen).

Serve an Unsecured Resource

Let’s go directly to the clock example, which supports security mode; this example implements the OCF OneIoT Clock model and demonstrates the CoAP Observe verb.

cd iotivity-example
cd src/clock/master/
make
screen
./bin/server -v
log: Server:
Press Ctrl-C to quit....
Usage: server -v
(...)
log: { void IoTServer::setup()
log: { OCStackResult IoTServer::createResource(std::__cxx11::string, std::__cxx11::string, OC::EntityHandler, void*&)
log: { FILE* override_fopen(const char*, const char*)
(...)
log: } FILE* override_fopen(const char*, const char*)
log: Successfully created oic.r.clock resource
log: } OCStackResult IoTServer::createResource(std::__cxx11::string, std::__cxx11::string, OC::EntityHandler, void*&)
log: } void IoTServer::setup()
(...)
log: deadline: Fri Jan  1 00:00:00 2038
oic.r.clock: { 2017-12-19T19:05:02Z, 632224498}
(...)
oic.r.clock: { 2017-12-19T19:05:12Z, 632224488}
(...)

Then, start the observer in a different terminal or device (if using GNU screen, type Ctrl+a c to open a new window and Ctrl+a Ctrl+a to switch back to the server window):

./bin/observer -v
log: { IoTObserver::IoTObserver()
log: { void IoTObserver::init()
log: } void IoTObserver::init()
log: } IoTObserver::IoTObserver()
log: { void IoTObserver::start()
log: { FILE* override_fopen(const char*, const char*)
(...)
log: { void IoTObserver::onFind(std::shared_ptr)
resourceUri=/example/ClockResUri
resourceEndpoint=coap://192.168.0.100:55236
(...)
log: { static void IoTObserver::onObserve(OC::HeaderOptions, const OC::OCRepresentation&, const int&, const int&)
(...)
oic.r.clock: { 2017-12-19T19:05:12Z, 632224488}
(...)

OK, now it should work and the date should be updated, but keep in mind it’s still *NOT* secured at all, as the resource is using a clear channel (a coap:// URI).

To learn about the necessary changes, let’s have a look at the commit history of the clock subproject. On the server side, persistent storage needed to be added to the configuration; we can see from the trace that the Secure Resource Manager (SRM) loads credentials through overridden fopen functions (these are designed to use hardware security features like ARM’s Secure Element / secure key storage). The client (observer) also needs a persistent storage update.

git diff clock/1.2-rel src/server.cpp

(...)
 
+static FILE *override_fopen(const char *path, const char *mode)
+{
+    LOG();
+    static const char *CRED_FILE_NAME = "oic_svr_db_anon-clear.dat";
+    char const *const filename
+        = (0 == strcmp(path, OC_SECURITY_DB_DAT_FILE_NAME)) ? CRED_FILE_NAME : path;
+    return fopen(filename, mode);
+}
+
+
 void IoTServer::init()
 {
     LOG();
+    static OCPersistentStorage ps {override_fopen, fread, fwrite, fclose, unlink };
     m_platformConfig = make_shared
                        (ServiceType::InProc, // different service ?
                         ModeType::Server, // other is Client or Both
                         "0.0.0.0", // default ip
                         0, // default random port
                         OC::QualityOfService::LowQos, // qos
+                        &ps // Security credentials
                        );
     OCPlatform::Configure(*m_platformConfig);
 }

This example uses generic credential files for the client and server with maximum privileges, just like unsecured mode (or default 1.2-rel configuration).

cat oic_svr_db_anon-clear.json
{
    "acl": {
       "aclist2": [
            {
                "aceid": 0,
                "subject": { "conntype": "anon-clear" },
                "resources": [{ "wc": "*" }],
                "permission": 31
            }
        ],
        "rowneruuid" : "32323232-3232-3232-3232-323232323232"
    }
}

These files are not loaded directly because they need to be compiled to the CBOR binary format using IoTivity’s json2cbor (which, despite its name, does more than convert JSON files to CBOR; it also updates some fields).

Usage is straightforward:

/usr/lib/iotivity/resource/csdk/security/tool/json2cbor oic_svr_db_client.json  oic_svr_db_client.dat

To recap, the minimal steps that are needed are:

  1. Configure the resource’s access control list to use the new format introduced in 1.3.
  2. Use a clear channel (“anon-clear”) for all resources (wildcard “wc”: “*”); this is the reason coap:// was shown in the log.
  3. Set verb (CRUD+N) permissions to the maximum according to the OCF_Security_Specification_v1.3.0.pdf document.

One very important point: don’t do this in a production environment; you don’t want to be held responsible for neglecting security. However, this is perfectly fine for getting started with IoTivity and prototyping with the latest version.

Still Unsecured, Then What?

With faulty security configurations, you’ll face the OC_STACK_UNAUTHORIZED_REQ error (Response error: 46). I tried to sum up the minimal necessary changes, but keep in mind that you shouldn’t stop here. Next, you need to set up ACLs to match the desired policy as specified by Open Connectivity; check the hints linked on IoTivity’s wiki, or use a provisioning tool. Stay tuned for more posts that cover these topics.

EFL: Enabling Wayland Output Rotation

The Enlightenment Foundation Libraries have long supported the ability to do output rotation when running under X11 utilizing the XRandR (resize and rotate) extension. This functionality is exposed to the user by way of the Ecore_X library which provides API function calls that can be used to rotate a given output.

Wayland

While this functionality has been inside EFL for a long time, the ability to rotate an output while running under the Enlightenment Wayland Compositor has not been possible. This is due, in part, to the fact that the Wayland protocol does not provide any form of RandR extension. Normally this would have proved a challenge when implementing output rotation inside the Enlightenment Wayland compositor; however, EFL already has the ability to do this.

Software-Based Rotation

EFL’s Ecore_Evas library, which is used as the base of the Enlightenment Compositor canvas, has the ability to perform software-based rotation. This means that when a user asks for the screen to be rotated via Enlightenment’s Screen Setup dialog, the Enlightenment Compositor will draw its canvas rotated to the desired degree using the internal Evas engine code. While this works for any given degree of rotation, it is not incredibly efficient considering that modern video cards can do rotation themselves.

Hardware-Based Rotation

Many modern video cards support some form of hardware-based rotation. This allows the hardware to rotate the final scanout buffer before displaying to the screen and is much more efficient than software-based rotation. The main drawback is that many will only support rotating to either 0 degrees or 180 degrees. While this is a much more practical and desired approach to output rotation, the lack of other available degrees of rotation make it a less than ideal method.

Hybrid Rotation

In order to facilitate a more ideal user experience, we have decided to implement something I call hybrid rotation. Hybrid rotation simply means that when asked to rotate the compositor canvas, we will first check if the hardware is able to handle the desired rotation itself. In the event that the hardware is unable to accommodate the necessary degree of rotation, it will then fall back to the software implementation to handle it. With this patch we can now handle any degree of output rotation in the most efficient way available.
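The decision logic just described can be sketched as follows (illustrative JavaScript, not EFL's actual C code; the supported hardware angles of 0 and 180 are assumed from the limitation described above):

```javascript
// Hybrid rotation: prefer the hardware path when the scanout engine
// supports the requested angle, otherwise fall back to software.
function pickRotationPath(hardwareAngles, degrees) {
  return hardwareAngles.indexOf(degrees) !== -1 ? 'hardware' : 'software';
}

console.log(pickRotationPath([0, 180], 180)); // → hardware
console.log(pickRotationPath([0, 180], 90));  // → software
```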

The major benefit for application developers is that they do not need to know anything about the capabilities of the underlying hardware. They can continue to use their existing application code to rotate the canvas:

/**
 * Ecore example illustrating the basics of ecore evas rotation usage.
 *
 * You'll need at least one Evas engine built for it (excluding the
 * buffer one). See stdout/stderr for output.
 *
 * @verbatim
 * gcc -o ecore_evas_basics_example ecore_evas_basics_example.c `pkg-config --libs --cflags ecore evas ecore-evas`
 * @endverbatim
 */

#include <Ecore.h>
#include <Ecore_Evas.h>
#include <unistd.h>

static Eina_Bool
_stdin_cb(void *data EINA_UNUSED, Ecore_Fd_Handler *handler EINA_UNUSED)
{
   Eina_List *l;
   Ecore_Evas *ee;
   char c;

   int ret = scanf("%c", &c);
   if (ret < 1) return ECORE_CALLBACK_RENEW;

   if (c == 'h')
     EINA_LIST_FOREACH(ecore_evas_ecore_evas_list_get(), l, ee)
       ecore_evas_hide(ee);
   else if (c == 's')
     EINA_LIST_FOREACH(ecore_evas_ecore_evas_list_get(), l, ee)
       ecore_evas_show(ee);
   else if (c == 'r')
     EINA_LIST_FOREACH(ecore_evas_ecore_evas_list_get(), l, ee)
       ecore_evas_rotation_set(ee, 90);

   return ECORE_CALLBACK_RENEW;
}

static void
_on_delete(Ecore_Evas *ee)
{
   free(ecore_evas_data_get(ee, "key"));
   ecore_main_loop_quit();
}

int
main(void)
{
   Ecore_Evas *ee;
   Evas *canvas;
   Evas_Object *bg;
   Eina_List *engines, *l;
   char *data;

   if (ecore_evas_init() <= 0)
     return 1;

   engines = ecore_evas_engines_get();
   printf("Available engines:\n");
   EINA_LIST_FOREACH(engines, l, data)
     printf("%s\n", data);
   ecore_evas_engines_free(engines);

   ee = ecore_evas_new(NULL, 0, 0, 200, 200, NULL);
   ecore_evas_title_set(ee, "Ecore Evas basics Example");
   ecore_evas_show(ee);

   data = malloc(sizeof(char) * 6);
   sprintf(data, "%s", "hello");
   ecore_evas_data_set(ee, "key", data);
   ecore_evas_callback_delete_request_set(ee, _on_delete);

   printf("Using %s engine!\n", ecore_evas_engine_name_get(ee));

   canvas = ecore_evas_get(ee);
   if (ecore_evas_ecore_evas_get(canvas) == ee)
     printf("Everything is sane!\n");

   bg = evas_object_rectangle_add(canvas);
   evas_object_color_set(bg, 0, 0, 255, 255);
   evas_object_resize(bg, 200, 200);
   evas_object_show(bg);
   ecore_evas_object_associate(ee, bg, ECORE_EVAS_OBJECT_ASSOCIATE_BASE);

   ecore_main_fd_handler_add(STDIN_FILENO, ECORE_FD_READ, _stdin_cb, NULL, NULL, NULL);

   ecore_main_loop_begin();

   ecore_evas_free(ee);
   ecore_evas_shutdown();

   return 0;
}

 

Summary

While not all hardware can support the various degrees of output rotation, EFL provides the ability to rotate an output via software. This gives the Enlightenment Wayland Compositor the ability to handle output rotation in the most efficient manner available while remaining transparent to the user.

The Wayland Zombie Apocalypse is Near

Quite some time ago I received a report of a nasty Wayland bug: under certain circumstances a Wayland event was being delivered with an incorrect file descriptor. The reporter dug deeper and determined the root cause of this; it wasn’t good.

When a client deletes a Wayland object, there might still be protocol events coming from the compositor destined for it (as a contrived example, I delete my keyboard object because I’m done processing keys for the day, but the user is still typing…). Once the compositor receives the delete request it knows it can’t send any more events, but due to the asynchronous nature of Wayland, there could still be some unprocessed events in the buffer destined for the recently deleted object.

The Zombie Scourge

This is handled by making the deleted object into a zombie, rather, THE zombie, as there is a singleton zombie object in the client library that gets assigned to receive and eat (like a yummy bowl of brains) events destined for deleted objects. Once the compositor receives the delete request it will respond with a delete event, and at that time the client object ceases to be a zombie, and ceases to exist at all. Any number of objects may have been replaced by the zombie; a pointer in the object map simply points to the zombie instead of a live object.

When an event is received, it undergoes a process called “demarshalling” where it’s decoded: a tiny message header in the buffer indicates its length, its destination object, and its “op code.” The type of the destination object and the op code are used to look up the signature for that event, which is a list of its parameters (integers, file descriptors, arrays, etc). Even though file descriptors are integer values, for the purposes of the Wayland protocol, integer is distinct from file descriptor. This is because when a Wayland event contains a file descriptor, that file descriptor is sent in a sort of out-of-band buffer along side the data stream (called an ancillary control message), instead of in the stream like an integer.

The demarshalling process consumes the main data stream and the ancillary buffer as it parses the message signature. Once a complete message is demarshalled, it is dispatched (client callbacks for that object plus op code are passed as parameters, and the client program gets to do its thing). When an event is destined for the zombie object, this demarshalling process is skipped. The length of data from the header is simply used to determine how much data to discard, and we proceed to the next event in the queue.

Here Lies the Problem

The file descriptors aren’t in the main data stream, so simply consuming that many bytes does not remove them from the ancillary buffer. The signature for the object is required to know how many file descriptors must be removed from the ancillary buffer, and the singleton zombie doesn’t (and can’t) have any specific event signatures at all.

So, if an event carrying a file descriptor is destined for a zombie object:

  • At best, the file descriptor is leaked in the client, is unclosable, and counts towards the client’s maximum number of open file descriptors forever.
  • At worst, a different problem occurs: since the file descriptors are pulled from the ancillary buffer in the order they're inserted, if a following event carries a file descriptor for a live object, that object will receive the file descriptor the zombie didn't eat. The client will have no idea this has occurred, and no idea what the file descriptor it received actually refers to. Bad things happen.

Not the Fix

We can’t change the wire protocol (to indicate the number of fds in the message header) because this would break existing software. We also can’t simply keep the old object alive and mark it as dead: the object interface that contains the signatures is in client code, possibly in some sort of plug-in system, and the client is allowed to dispose of all copies of it after deleting the object.

The Fix? More Zombies!

I recently sent a new edition of a patch series to fix this (and other file descriptor leaks) to the Wayland mailing list. The singleton zombie is permanently put to rest and is now replaced by an army of bespoke zombies, one for each object that can receive file descriptors in events, created at the time the object is instantiated (you can’t create one at destruction time, because that would require memory allocation and would allow deletion to fail…).

When the object is no longer viable, its zombie will live on, consuming its unhandled file descriptors until such time as the compositor informs the client it will no longer send any.

Improving EFL Graphics With Wayland Application Redraws

One of the most heinous visual artifacts modern displays are capable of is tearing. On the desktop, there are two major potential sources of it:

  1. A compositor redraw during screen refresh – this leads to global artifacts such as application windows breaking up while being repositioned.
  2. Applications redrawing while the compositor is reading their content – the contents of a single window will be displayed in an indeterminate state.

Under X, application redraws are tricky to do without tearing because content can be updated at any chosen time with no clear feedback as to when the compositor will read it. EFL uses some clever tricks to this end (check out the state of the art X redraw timing for yourself), but it’s difficult to get right in all cases. For a lot of people this just works, or they’re not sensitive to the issue when it doesn’t.

Wayland Does Application Redraws Better

For the rest of us, Wayland has a different approach: prepare the buffer, attach it to a surface, and commit it for display. If you want the compositor to help with timing, you can request a frame callback before committing the surface; the compositor will display the frame on the next vertical retrace and will send a callback event at some point in the future. This callback informs the program that if it submits a buffer now, it has a reasonable chance to hit the upcoming vertical retrace.

Pro-tip: You can request a frame callback without submitting a buffer as well, and the compositor will still send a callback (which may be immediate) when it thinks you should commit a buffer for display.

The client shouldn’t expect the frame callback to be generated while it is obscured or otherwise invisible. That is, the frame callback only tells you to draw when it’s useful for you to draw. This means the frame callback is not useful as a general-purpose timing mechanism; it’s only for display.

Once the buffer is committed to the compositor, the compositor owns it. The Wayland compositor will send the buffer back when it’s done with it via a release event; until this point you’re simply not allowed to draw into it again. The result is undefined, and can cause your program to terminate as punishment for being poorly written.

Because of this commit/release paradigm, it’s very difficult to cause tearing artifacts under Wayland (it requires out of spec behavior that can lead to application termination).

Some people (notably gamers) will complain that synchronizing display with monitor refresh like this introduces input lag, and while I’m not going to wander into that firefight, I’d be remiss if I didn’t mention there’s no obligation to use frame callbacks for timing. If your goal is to render your frames as temporally proximal as possible to the screen retrace, then standard triple buffering is easily accomplished without using frame callbacks.

The client can render and commit frames as quickly as the CPU/GPU allows, and the compositor will release many of them undisplayed as soon as they’re replaced with a newer frame. Only the most recently-submitted frame at the time the compositor redraws the screen will be used. When an application doesn’t need twitchy response times though, using frame callbacks results in smooth animation with no wasted rendering.

Bringing This Improvement to EFL

New for the upcoming EFL 1.21 release (TBA soon, we promise), EFL applications now (finally!) drive animations directly from these frame callbacks. This has been a long time coming, as it required major changes to our core timing code. Previously, we used a separate timer source that ran continuously in a thread, sending frame times through a pipe to the main thread. These “ticks” were then gated by a clever hack that prevented a redraw from occurring between buffer submission time and frame callback time. Prior to that, we simply triple buffered and threw away a few frames.

The most obvious immediate benefit of doing this properly has been that when the compositor blanks the screen, many clients stop consuming CPU cycles entirely. This is a departure from our previous timing implementation which burned many CPU cycles sending timestamps nothing cared about. It’s also a radical change from the behavior under X where animations continue despite the black screen, while X, enlightenment, and the client all continue to work needlessly.

So, here we have yet another case where Wayland does less than X.

One Small Step to Harden USB Over IP on Linux

The USB over IP kernel driver allows a server system to export its USB devices to a client system over an IP network via the USB over IP protocol. Exportable USB devices include physical devices and software entities created on the server using the USB gadget subsystem. This article will cover a major bug related to USB over IP in the Linux kernel that was recently uncovered; it created some significant security issues but was resolved with help from the kernel community.

The Basics of the USB Over IP Protocol

There are two USB over IP server kernel modules:

  • usbip-host (stub driver): A stub USB device driver that can be bound to physical USB devices to export them over the network.
  • usbip-vudc: A virtual USB Device Controller that exports a USB device created with the USB Gadget Subsystem.

There is one USB over IP client kernel module:

  • usbip-vhci: A virtual USB Host Controller that imports a USB device from a USB over IP capable server.

Finally, there is one USB over IP utility:

  • usbip-utils: A set of user-space tools used to handle connection and management; it is used on both the client and server sides.

The USB/IP protocol consists of a set of packets exchanged between the client and server to query the exportable devices, request an attachment to one, access the device, and finally detach once finished. The server responds to these requests from the client, and this exchange is a series of TCP/IP packets with a TCP/IP payload that carries the USB/IP packets over the network. I’m not going to discuss the protocol in detail in this blog; please refer to usbip_protocol.txt to learn more.

Identifying a Security Problem With USB Over IP in Linux

When a client accesses an imported USB device, usbip-vhci sends a USBIP_CMD_SUBMIT to the usbip-host, which submits a USB Request Block (URB). An endpoint number, transfer_buffer_length, and number of ISO packets are among the valid URB fields that will be in a USBIP_CMD_SUBMIT packet. This opens some potential vulnerabilities: specifically, a malicious client could send a packet with an invalid endpoint, or with a very large transfer_buffer_length and a large number of ISO packets. A bad endpoint value could force usbip-host to access memory out of bounds unless it validates that the endpoint is within the valid range of 0-15. A large transfer_buffer_length could cause the kernel to allocate large amounts of memory. Either of these malicious requests could adversely impact the server system’s operation.

Jakub Jirasek from Secunia Research at Flexera reported these vulnerabilities in the driver’s handling of malicious input. In addition, he reported an instance of a socket pointer address (a kernel memory address) leaked in a world-readable sysfs file on the USB/IP client side and in debug output. Unfortunately, the USB over IP driver has had these vulnerabilities since the beginning. The good news is that they have now been found and fixed; I sent a series of four fixes to address all of the issues reported by Jakub Jirasek, and I am continuing to look for other potential problems.

All of these problems result from a lack of validation checks on the input and incorrect handling of error conditions; my fixes add the missing checks and take the proper action. These fixes will propagate into the stable releases within the next few weeks. One exception is the kernel address leak, which stemmed from an intentional design decision to provide a convenient way to find the IP address from a socket address; unfortunately, it opened a security hole.

Where are these fixes?

The fixes are going into the 4.15 and stable releases. The fixes can be found in the following two git branches:

Secunia Research has created the following CVEs for the fixes:

  • CVE-2017-16911 usbip: prevent vhci_hcd driver from leaking a socket pointer address
  • CVE-2017-16912 usbip: fix stub_rx: get_pipe() to validate endpoint number
  • CVE-2017-16913 usbip: fix stub_rx: harden CMD_SUBMIT path to handle malicious input
  • CVE-2017-16914 usbip: fix stub_send_ret_submit() vulnerability to null transfer_buffer

Do it Right the First Time

This is a great example of how open source software can benefit from having many eyes looking over it to identify problems so the software can be improved. However, after solving these issues, my takeaway is not just to be proactive in detecting security problems, but better yet, to avoid introducing them entirely. Failing to validate input is an obvious coding error that can allow users to send malicious packets with severe consequences. In addition, be mindful of exposing sensitive information such as kernel pointer addresses. Fortunately, we were able to work together to solve these problems, which should make USB over IP more secure in the Linux kernel.

Introduction to Projective Transformation

The Italian city of Florence is home to the Uffizi Gallery, one of the most famous art museums in the world, with a particular emphasis on Renaissance art, including my favorite event in all of art history: the discovery of perspective. Today we’re so surrounded by artwork that uses perspective that we hardly notice it. In fact, it’s the non-perspective art that looks weird to us today, but prior to the 1400s it simply didn’t exist.

Drawing and painting differ from other art forms like sculpture, architecture, or theater, in that they represent life and the world via a flat two-dimensional surface. With sculpture, artists essentially make a 3D copy of a physical object measured in three dimensions; with a drawing or painting, you’re challenged with flattening reality down to just two.

Indeed, the earliest artists resorted to just showing front or profile views of their subject. Sometimes depth is shown by overlapping objects in the scene, but there is no foreshortening; making something larger or smaller in the drawing is used to indicate the importance of the subject, not its depth. Egyptian art shows an interesting workaround by drawing a figure both in front view (the body), and in profile (the head and arms).

Isometric Projection

My own drawing background has nothing to do with art. Through high school and college I took year after year of drafting classes, yet I never learned perspective drawing (perhaps this academic deficiency led to my interest in the topic). Technical drawing seeks to represent 3D objects on a 2D surface in such a way that the builder or mechanic can make the 3D object accurately. Objects are drawn face-on, top-down, and side-view. Often, the object is also shown in an isometric projection.

Isometric projection shows the object rotated by 45° about the vertical axis and 35.264° about the horizontal. Importantly, lines drawn along any axis will be equivalent in length; stated another way, lines drawn for the front of an object will be proportional to the actual lengths of lines drawn for the rear of the object. This “parallel perspective” characteristic, while valuable for engineers (and video games!), makes this type of projection unsuitable for art. If you’re drawing a scene with people in the foreground and in the background, they’ll all be drawn at the same size!

Finding Perspective

As I stroll through the rooms of the Uffizi with the oldest art, it strikes me how primitive and simple looking much of it is. Much of the art displayed from this period was commissioned by churches in the form of triptychs: paintings on wooden panels used for ecclesiastical purposes such as altar pieces. The rich gold leafing, careful attention to beautifully rendered facial expressions, and meaningful compositions prove these to be notable works of art, yet technically I can’t help but notice how reminiscent they are of Egyptian wall paintings from thousands of years earlier: figures standing in a row, mostly frontal views, with heads turned to give the illusion of three-dimensionality, and differences in size used to indicate importance, rather than depth.

If you look carefully though, you can start to see experiments with perspective by masters like Cimabue and Giotto: parallel perspectives used to show buildings and background elements, and foreshortened sizes or diagonals of buildings, tiled floors, thrones, and objects in the background, even while the people themselves are still drawn the same size in foreground and background. When I turned the corner into the Quattrocento rooms, a sudden change struck me. The figures themselves didn’t look much different, but they were now in realistic-looking places. The change was the invention of perspective.

Art history traditionally credits the discovery not to a painter, but a sculptor and architect: the famous Filippo Brunelleschi, who demonstrated the geometrical concepts to his contemporaries in 1413 using a mirror to quickly alternate between the view of a building and a drawn picture of it. The eye could then quickly ascertain that the various lines of the drawing matched to what was seen by the eye directly.

Other artists began experimenting with the geometries of the optics, and working out ways to accurately abstract these into simpler rules of thumb that any artist could follow. Leon Battista Alberti’s treatise De pictura was the first writing to lay out the new painting theories with the worked-out geometrical analysis. Artists started incorporating these ideas and techniques into their works, first for architecture and background scenery, then later for compositions and people. Masaccio’s The Tribute Money, painted in the 1420’s, is one of the first of the age to show a single vanishing point. The sizes of trees, clouds, and Peter fishing in the background give the scene a sense of depth not seen in earlier works. With this illusion of space finally solved, the 15th century saw an explosion of explorations of this new perspective driven art.

The Perspective of an Engineer

One of the interesting things about perspective, to an artistic luddite like myself, is that while it seems a high art, there is actually a very strict mathematics to it. Brunelleschi and his compatriots likely figured out the geometric operations from the start, and these were thence taught to and studied by future artists, architects, and opticians.

Today, computers use linear algebra instead of geometry to calculate the transformation of an object or scene into a perspective view. Linear algebra represents a set of dimensional equations as a grid of numbers, called a matrix. The computer runs computations using this matrix against each point in the scene to calculate where that point will lie in the two-dimensional view. In my next post I’ll dig into the technicalities of how this perspective transformation functionality can be added to the Cairo graphics drawing library. Stay tuned!