An End-to-End Web IoT Demo Using Mozilla Gateway

Imagine that you are on your way to a holiday home you have booked. The weather is changing and you might start wondering about the temperature settings in the holiday home. A question might pop up in your mind: “Can I change the holiday home settings to my own preference before getting there?”

Today we are going to show you an end-to-end demo we have created that allows a holiday maker to remotely control sensor devices, or Things in the context of the Internet of Things, at a holiday home. The private smart holiday home is built with the exciting Mozilla Things Gateway, an open gateway that anybody can now build with a Raspberry Pi to control Internet of Things devices. For holiday makers, we provide a simple Node.js holiday application to access Things via the Mozilla Things Gateway. Privacy is addressed by introducing the concepts of Things ownership and Things usership, followed by the authorization workflow.

The Private Smart Holiday Home

The private smart holiday home is the home for Gateway and Things –

Things Gateway

One of the major challenges in the Internet of Things is interoperability. Getting different devices to work nicely with each other can be painful in a smart home environment. The Mozilla Things Gateway addresses this challenge and provides a platform that bridges existing off-the-shelf smart home devices to the web by providing them with web URLs and a standardized data model and API [1]. Implementations of the Things Gateway follow the proposed Web of Things standard and are open sourced under the Mozilla Public License 2.0.

In this demo, we chose the Raspberry Pi 3 as the physical board for the Gateway. The Raspberry Pi 3 is well-supported by the Gateway community and has been a brilliant initial choice for experimenting with the platform. It is worth mentioning that Mozilla Project Things is not tied only to the Raspberry Pi; the project is looking at supporting a wide range of hardware.

The setup of the Gateway is pretty straightforward. We chose to use the tunneling service provided by Mozilla by creating a sub-domain of mozilla-iot.org: sosg.mozilla-iot.org. To try it yourself, we recommend going through the README file in the Gateway GitHub repository. A great step-by-step guide has also been created by Ben Francis on “How to build a private smart home with a Raspberry Pi and Things Gateway”.

Things Add-ons

The Mozilla Things Gateway has introduced an Add-on system, loosely modeled after the add-on system in Firefox, to allow new features or devices, such as adapters, to be added to the Things Gateway. The tutorial from James Hobin and Michael Stegeman on “Creating an Add-on for Project Things Gateway” is a good place to grasp the concepts of Add-on, Adapter, and Device, and to start creating your own Add-ons. In our demo, we have introduced fan, lamp, and thermostat Add-ons, as shown below, to support our own hardware devices.

Phil Coval has posted a blog explaining how to get started and how to establish basic automation using I2C sensors and actuators on the gateway device. It is the basis for our Add-ons.

Holiday Application

The holiday application is a small Node.js program that combines a client web server, an OAuth client, and a browser user interface.

The application consists of two parts. First is for the holiday maker to get authorization from the Gateway for accessing Things at the holiday home. Once authorized, it moves to the second part, Things access and control.



The OAuth client implementation is based on simple-oauth2, a Node.js client library for OAuth 2.0. The library is open sourced under the Apache License, Version 2.0 and is available on GitHub.

The application code can be accessed here. The README file provides instructions for setting up the application.

Ownership and Usership of Things

So here we have it, the relationships among Things owner, Things user, third party application, Gateway and Things.


  • The holiday home owner is the Things Gateway user and has full control of the Things Gateway and Things.
  • The holiday maker is a temporary user of Things and has no access to the Gateway.
  • The holiday home owner uses the Gateway to authorize the holiday maker to access the Things, with specific scopes.
  • The holiday application accesses the Things through the Gateway.

User Authorization

The Things Gateway provides a system for safely authorizing third-party applications using the de-facto authorization standard OAuth 2.0. The workflow for our demo use case is shown in the diagram below –

The third party application user, the Holiday App User in this case, requests authorization to access the Gateway’s Web Thing API. The Gateway presents the request list to the Gateway User, the holiday home owner, as below –

With the holiday home owner’s consent, the Gateway responds with an authorization code. The holiday application then requests to exchange the authorization code for a JSON Web Token (JWT). The token has a scope that indicates what access was actually granted by the holiday home owner. Note that the granted token scope can only be a subset of the requested scope. With the JWT granted, the holiday application can access the Things the user has been granted access to.
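As a rough sketch of that exchange, assuming the Gateway exposes a standard OAuth 2.0 token endpoint at /oauth/token (the endpoint path, client credentials, and redirect URI below are illustrative, not taken from the demo code):

# Hypothetical token exchange against the Gateway; adjust names to your setup
$ curl -X POST https://sosg.mozilla-iot.org/oauth/token \
    -u "holiday-app-client-id:holiday-app-client-secret" \
    -d grant_type=authorization_code \
    -d code="$AUTH_CODE" \
    -d redirect_uri="https://holiday-app.example.com/callback"
# The JSON response carries the JWT (access_token) and the granted scope,
# which may be narrower than the scope originally requested.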

Demo Video

We also created a demo video that ties together the different parts discussed above; it is available at https://vimeo.com/271272094.

What’s Next?

The Mozilla Gateway is a work in progress and has not yet reached the production stage. There are a lot of exciting developments happening. Why not get involved?

How to Securely Encrypt A Linux Home Directory

These days, our computers enable access to a lot of personal information that we don’t want random strangers to see; things like financial login information come to mind. The problem is that it’s hard to make sure this information isn’t leaked somewhere in your home directory, like the cache files of your web browser. Obviously, in the event your computer gets stolen you want your data at rest to be secure; for this reason you should encrypt your hard drive. Sometimes this is not a good solution, as you may want to share your device with someone you might not want to give your encryption password to. In this case, you can encrypt only the home directory for your specific account.

Note: This article focuses on the security of data at rest once the device is out of your reach; other threat models may require different strategies.

Improving Upon Linux Encryption

I have found the home directory encryption feature of most Linux distributions to be lacking in some way. My main concern is that I can’t change my password, as it’s used to directly unlock the encrypted directory. So, what I would like is a pass phrase that is used as the encryption key, but is itself encrypted with my password. This way, I can change my password by re-encrypting my pass phrase with a different password. I want it to be a pass phrase because that makes it possible to back it up in my global password manager, which protects me in case the file holding the key gets corrupted, and it allows me to share it with my IT admin who can then put it in cold storage.

Generate a Pass Phrase

So, how do we do this? First, generate the key; for that, I have slightly modified a Bitcoin wallet pass phrase generator, which I use as follows:

$ git clone https://github.com/Bluebugs/bitcoin-passphrase-generator.git
$ cd bitcoin-passphrase-generator
$ ./bitcoin-passphrase-generator.sh 32 2>/dev/null | openssl aes256 -md sha512 -out my-key

This will prompt you for the password to encrypt your pass phrase, and this will become your user password. Once this command has been called, it will generate a file that contains a pass phrase of 32 words drawn from your system dictionary. On Arch Linux, this dictionary contains 77,649 words, so there are 77649^32 possibilities; anything more than 2^256 should be secure enough for some time. To check the content of your pass phrase and back it up, run the following:

$ openssl aes256 -md sha512 -d -in my-key

Once you have entered your password it will output your pass phrase: a line of words that you can now backup in your password manager.

Encrypt Your Home Directory

Now, let’s encrypt the home directory. I have chosen to use ext4 per-directory encryption because it enables you to share the hard drive with other users in the future, instead of requiring you to split it into different partitions. To initialize your home directory, first move your current one out of the way (this means stopping all processes that are running under your user ID and logging in under another ID). After this, load the key into the kernel:

$ openssl aes256 -md sha512 -d -in my-key | e4crypt add_key 2>&1

This will ask you for the password you used to protect your pass phrase; once entered, it adds the pass phrase to your current user kernel keyring. With this, you can now create your home directory, but first get the key handle with the following command:

$ keyctl show | grep ext4
12345678 --alsw-v    0   0   \_ logon: ext4:abcdef1234567890

The key handle is the text that follows “ext4:” at the end of the second line above; it can now be used to create your new encrypted home directory:

$ mkdir /home/user
$ e4crypt set_policy abcdef1234567890 /home/user

After this, you can copy all of the data from your old user directory to the new directory and it will be properly encrypted.
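A minimal sketch of that copy, assuming the old directory was moved to /home/user.old (the path is an assumption):

$ sudo rsync -aAXH /home/user.old/ /home/user/  # preserve permissions, ACLs, xattrs, and hard links
$ sudo chown -R user:user /home/user            # make sure ownership is correct in the new tree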

Setup PAM to Unlock and Lock Your Home Directory

Now, we need to hook into PAM session handling to get it to load the key and unlock the home directory. I will assume that you have stored my-key at /home/keys/user/my-key. We’ll use pam_mount to run a custom script that will add the key to the system at login and remove it at logout. The script for mounting and unmounting can be found on GitHub.

The interesting bit that I’ve left out of the mounting script is the possibility of getting the key from an external device, or maybe even your phone. Also, you’ll notice the need to be part of the wheel group to cleanly unmount the directory; otherwise the files will still be in the cache after logout, and it will be possible to navigate the directory and create files until they are removed. A clean way to handle this might be a script, available to all users, that can flush the cache without asking for a sudo password; this would be an easy improvement to make (see the sketch after the PAM configuration below). Finally, modify the PAM system-login file as explained in the Arch Linux documentation, with just one additional change to disable pam_keyinit.so.

In /etc/pam.d/system-login, paste the following:

#%PAM-1.0

auth       required   pam_tally.so onerr=succeed file=/var/log/faillog
auth       required   pam_shells.so
auth       requisite  pam_nologin.so
auth       optional   pam_mount.so
auth       include    system-auth

account    required   pam_access.so
account    required   pam_nologin.so
account    include    system-auth

password   optional   pam_mount.so
password   include    system-auth

session    optional   pam_loginuid.so
#session   optional   pam_keyinit.so force revoke
session    [success=1 default=ignore] pam_succeed_if.so service = systemd-user quiet
session    optional   pam_mount.so
session    include    system-auth
session    optional   pam_motd.so motd=/etc/motd
session    optional   pam_mail.so dir=/var/spool/mail standard quiet
-session   optional   pam_systemd.so
session    required   pam_env.so
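As a sketch of the cache-flushing helper mentioned above (the key handle is the example one from earlier, and the combination of keyctl purge with drop_caches is an assumption to adapt to your setup, not a tested recipe):

#!/bin/bash
# Drop the ext4 encryption key from the session keyring, then flush cached
# dentries and inodes so decrypted names and contents become unreachable.
keyctl purge logon ext4:abcdef1234567890
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches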

Now, if you want to change your password, you need to decrypt the key, change the password, and re-encrypt it. The following command will do just that:

$ openssl aes256 -md sha512 -d -in my-old-key | openssl aes256 -md sha512 -out my-new-key

Et voilà!


How to Prototype Insecure IoTivity Apps with the Secure Library

IoTivity 1.3.1 has been released, and with it come some important new changes. First, you can rebuild packages from sources, with or without my hotfix patches, as explained recently in this blog post. For ARM users (of the ARTIK7), the fastest option is to download precompiled .RPM packages for fedora-24 from my personal repository, or check ongoing work for other OSes.

Copy and paste this snippet to install the latest IoTivity from my personal repo:

distro="fedora-24-armhfp"
repo="bintray--rzr-${distro}"
url="https://bintray.com/rzr/${distro}/rpm"
rm -fv "/etc/yum.repos.d/$repo.repo"
wget -O- $url | sudo tee /etc/yum.repos.d/$repo.repo
grep $repo /etc/yum.repos.d/$repo.repo

dnf clean expire-cache
dnf repository-packages "$repo" check-update
dnf repository-packages "$repo" list --showduplicates # list all published versions
dnf repository-packages "$repo" upgrade # if previously installed
dnf repository-packages "$repo" install iotivity-test iotivity-devel # remaining ones

I also want to thank JFrog for providing the Bintray service to free and open source software developers.

Standalone Apps

In a previous blog post, I explained how to run examples that are shipped with the release candidate. You can also try with other existing examples (rpm -ql iotivity-test), but some don’t work properly. In those cases, try the 1.3-rel branch, and if you’re still having problems please report a bug.

At this point, you should know enough to start making your own standalone app and use the library like any other, so feel free to get inspired by, or even fork, the demo code I wrote to test integration on various OSes (Tizen, Yocto, Debian, Ubuntu, Fedora, etc.).

Let’s clone the sources from the repo. Meanwhile, if you’re curious, read the description of the tutorial projects collection.

sudo dnf install screen psmisc git make
git clone http://git.s-osg.org/iotivity-example
# Flatten all subprojects to "src" folders
make -C iotivity-example
# Explore all subprojects
find iotivity-example/src -iname "README*"


Note that most of the examples were written for prototyping, and security was not enabled at that time (on 1.2-rel security is not enabled by default except on Tizen).

Serve an Unsecured Resource

Let’s go directly to the clock example, which supports security mode; it implements the OCF OneIoT Clock model and demonstrates the CoAP Observe verb.

cd iotivity-example
cd src/clock/master/
make
screen
./bin/server -v
log: Server:
Press Ctrl-C to quit....
Usage: server -v
(...)
log: { void IoTServer::setup()
log: { OCStackResult IoTServer::createResource(std::__cxx11::string, std::__cxx11::string, OC::EntityHandler, void*&)
log: { FILE* override_fopen(const char*, const char*)
(...)
log: } FILE* override_fopen(const char*, const char*)
log: Successfully created oic.r.clock resource
log: } OCStackResult IoTServer::createResource(std::__cxx11::string, std::__cxx11::string, OC::EntityHandler, void*&)
log: } void IoTServer::setup()
(...)
log: deadline: Fri Jan  1 00:00:00 2038
oic.r.clock: { 2017-12-19T19:05:02Z, 632224498}
(...)
oic.r.clock: { 2017-12-19T19:05:12Z, 632224488}
(...)

Then, start the observer in a different terminal or device (if using GNU screen, type Ctrl+a c to create a new window and Ctrl+a Ctrl+a to toggle back to the server window):

./bin/observer -v
log: { IoTObserver::IoTObserver()
log: { void IoTObserver::init()
log: } void IoTObserver::init()
log: } IoTObserver::IoTObserver()
log: { void IoTObserver::start()
log: { FILE* override_fopen(const char*, const char*)
(...)
log: { void IoTObserver::onFind(std::shared_ptr)
resourceUri=/example/ClockResUri
resourceEndpoint=coap://192.168.0.100:55236
(...)
log: { static void IoTObserver::onObserve(OC::HeaderOptions, const OC::OCRepresentation&, const int&, const int&)
(...)
oic.r.clock: { 2017-12-19T19:05:12Z, 632224488}
(...)

OK, now it should work and the date should be updated, but keep in mind it’s still *NOT* secured at all, as the resource is using a clear channel (coap:// URI).

To learn about the necessary changes, let’s have a look at the commit history of the clock sub-project. On the server’s side, persistent storage needed to be added to the configuration; we can see from the trace that the Secure Resource Manager (SRM) is loading credentials by overriding the fopen functions (these are designed to use hardware security features like ARM’s Secure Element / secure key storage). The client or observer also needs a persistent storage update.

git diff clock/1.2-rel src/server.cpp

(...)
 
+static FILE *override_fopen(const char *path, const char *mode)
+{
+    LOG();
+    static const char *CRED_FILE_NAME = "oic_svr_db_anon-clear.dat";
+    char const *const filename
+        = (0 == strcmp(path, OC_SECURITY_DB_DAT_FILE_NAME)) ? CRED_FILE_NAME : path;
+    return fopen(filename, mode);
+}
+
+
 void IoTServer::init()
 {
     LOG();
+    static OCPersistentStorage ps {override_fopen, fread, fwrite, fclose, unlink };
     m_platformConfig = make_shared<PlatformConfig>
                        (ServiceType::InProc, // different service ?
                         ModeType::Server, // other is Client or Both
                         "0.0.0.0", // default ip
                         0, // default random port
                         OC::QualityOfService::LowQos, // qos
+                        &ps // Security credentials
                        );
     OCPlatform::Configure(*m_platformConfig);
 }

This example uses generic credential files for the client and server with maximum privileges, just like unsecured mode (or default 1.2-rel configuration).

cat oic_svr_db_anon-clear.json
{
    "acl": {
       "aclist2": [
            {
                "aceid": 0,
                "subject": { "conntype": "anon-clear" },
                "resources": [{ "wc": "*" }],
                "permission": 31
            }
        ],
        "rowneruuid" : "32323232-3232-3232-3232-323232323232"
    }
}

These files will not be loaded directly because they need to be compiled into the CBOR binary format using IoTivity’s json2cbor (which, despite the name, does more than convert JSON files to CBOR; it also updates some fields).

Usage is straightforward:

/usr/lib/iotivity/resource/csdk/security/tool/json2cbor oic_svr_db_client.json  oic_svr_db_client.dat
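For the clock server above, a plausible matching invocation would convert the anonymous ACL file shown earlier; the output name is what override_fopen expects to find:

/usr/lib/iotivity/resource/csdk/security/tool/json2cbor oic_svr_db_anon-clear.json oic_svr_db_anon-clear.dat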

To recap, the minimal steps that are needed are:

  1. Configure the resource’s access control list to use the new format introduced in 1.3.
  2. Use a clear channel (“anon-clear”) for all resources (wildcard “wc”: “*”); this is the reason coap:// was shown in the log.
  3. Set the verbs’ (CRUD+N) permissions to the maximum, according to the OCF_Security_Specification_v1.3.0.pdf document.

One very important point: don’t do this in a production environment; you don’t want to be held responsible for neglecting security, but this is perfectly fine for getting started with IoTivity and to prototype with the latest version.


Still Unsecured, Then What?

With faulty security configurations you’ll face the OC_STACK_UNAUTHORIZED_REQ error (Response error: 46). I tried to sum up the minimal necessary changes, but keep in mind that you shouldn’t stop here. Next, you need to set up ACLs to match the desired policy as specified by Open Connectivity, check the hints linked on IoTivity’s wiki, or use a provisioning tool. Stay tuned for more posts that cover these topics.


Linux Kernel License Practices Revisited with SPDX®

The licensing text in the Linux kernel source files is inconsistent in verbiage and format. Typically, each of its ~100k files carries a license text that describes which license applies to that specific file. While all licenses are GPLv2-compatible, properly identifying the licenses that apply to a specific file is very hard. To address this problem, a group of developers recently embarked on a mission to use SPDX® to research and map these inconsistencies in the licensing text. As a result of this 10-month-long effort, the Linux 4.14 release includes changes to make the licensing text consistent across the kernel source files and modules.

Linux Kernel License

As stated in its COPYING file, the Linux kernel’s default license is GPLv2, with an exception that grants additional rights to the kernel users:

  • NOTE! This copyright does *not* cover user programs that use kernel
    services by normal system calls - this is merely considered normal use
    of the kernel, and does *not* fall under the heading of "derived work".
    Also note that the GPL below is copyrighted by the Free Software
    Foundation, but the instance of code that it refers to (the Linux
    kernel) is copyrighted by me and others who actually wrote it.
    
    Also note that the only valid version of the GPL as far as the kernel
    is concerned is _this_ particular version of the license (ie v2, not
    v2.2 or v3.x or whatever), unless explicitly otherwise stated.
    

The kernel’s COPYING file produces two practical effects:

  1. User-space applications can use non-GPL licenses, thanks to the above-mentioned exception.
  2. It allows different licenses to be used in the kernel’s source files, when explicitly defined as such.

The Current Kernel License Model

A common practice is to add a comment at the beginning of each file with some sort of text like:

  • /*
     * Copyright (c) 2017 by foo <foo@bar>
     *
     * This program is free software; you can redistribute it and/or
     * modify it under the terms of the GNU General Public License
     * as published by the Free Software Foundation; either version 2
     * of the License, or (at your option) any later version.
     *
     * This program is distributed in the hope that it will be useful,
     * but WITHOUT ANY WARRANTY; without even the implied warranty of
     * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
     * GNU General Public License for more details.
     */
    

However, Philippe Ombredanne’s research showed that:

  • there are 64,742 distinct license statements
    • … in 114,597 blocks of text
    • … in 42,602 files
  • license statements represent 480,455 lines of text;
  • licenses are worded in 1,015 different ways;
  • there are about 85 distinct licenses, the bulk being the GPL.

Also, before kernel 4.14, there were about 11,000 files without any license at all; due to the COPYING file, however, they default to GPLv2. This inconsistency makes it complex to determine which licenses apply to a particular kernel version.

Software Package Data Exchange® (SPDX®)

The Linux Foundation has sponsored the SPDX® project to solve the license identification challenges in open source software. The goal of the project is to provide license identifiers inside the source code that can be easily parsed by machines, making it easier to check the license compliance of an open source project.

In practice, supporting SPDX® inside source code is as simple as adding an SPDX® tag (SPDX-License-Identifier) with the license that applies (usually, GPL-2.0). If you’re the copyright holder, you may also consider removing the now redundant licensing text.

Here is an example commit that adds SPDX® tags to the USB over IP driver, allowing the redundant license text to be cleaned up later:

commit 5fd54ace4721fc5ce2bb5aef6318fcf17f421460
Author: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Date:   Fri Nov 3 11:28:30 2017 +0100

    USB: add SPDX identifiers to all remaining files in drivers/usb/
    
    It's good to have SPDX identifiers in all files to make it easier to
    audit the kernel tree for correct licenses.
    
    Update the drivers/usb/ and include/linux/usb* files with the correct
    SPDX license identifier based on the license text in the file itself.
    The SPDX identifier is a legally binding shorthand, which can be used
    instead of the full boiler plate text.
    
    This work is based on a script and data from Thomas Gleixner, Philippe
    Ombredanne, and Kate Stewart.
    
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Kate Stewart <kstewart@linuxfoundation.org>
    Cc: Philippe Ombredanne <pombredanne@nexb.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Acked-by: Felipe Balbi <felipe.balbi@linux.intel.com>
    Acked-by: Johan Hovold <johan@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

diff --git a/drivers/usb/usbip/vhci_hcd.c b/drivers/usb/usbip/vhci_hcd.c
index 11b9a22799cc..0e16a7dcfb38 100644
--- a/drivers/usb/usbip/vhci_hcd.c
+++ b/drivers/usb/usbip/vhci_hcd.c
@@ -1,3 +1,4 @@
+// SPDX-License-Identifier: GPL-2.0+

...

Depending on the type of the source file, the tag takes one of the forms below:

SPDX® license identifier tags in source code:

  C source:  // SPDX-License-Identifier:
  C header:  /* SPDX-License-Identifier: */
  ASM:       /* SPDX-License-Identifier: */
  scripts:   # SPDX-License-Identifier:
  .rst:      .. SPDX-License-Identifier:
  .dts{i}:   // SPDX-License-Identifier:

Replacing the license text inside each source file with a single SPDX license identifier will likely reduce the kernel sources by ~400k lines, which is a nice cleanup, and solve the issues caused by having ~64k different license statements.
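To get a feel for the conversion’s progress, a quick, illustrative check against any kernel tree counts the tagged files and the most common identifiers:

$ git grep -l 'SPDX-License-Identifier:' | wc -l                             # files already carrying a tag
$ git grep -h 'SPDX-License-Identifier:' | sort | uniq -c | sort -rn | head  # most common identifiers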

Future Work

Besides the license on each kernel source file, all Linux modules use the MODULE_LICENSE() macro to specify their individual license. Right now, the following values of the macro are valid for a module within the official kernel sources:

Types of licenses for the MODULE_LICENSE() macro:

  Macro argument                 License
  “GPL”                          GNU Public License v2 or later
  “GPL v2”                       GNU Public License v2 only
  “GPL and additional rights”    GNU Public License v2 rights and more
  “Dual BSD/GPL”                 GNU Public License v2 or BSD license choice
  “Dual MIT/GPL”                 GNU Public License v2 or MIT license choice
  “Dual MPL/GPL”                 GNU Public License v2 or MPL license choice

Any other value “taints” the kernel when such a module is loaded, in order to inform the user that a proprietary module was loaded.
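You can inspect the license string a built module declares with modinfo (the module name below is just an example):

$ modinfo -F license e1000e   # prints the string passed to MODULE_LICENSE()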

With the addition of SPDX® to the kernel, the next step will be to use SPDX tags for the module license. The current plan seems to be to replace the MODULE_LICENSE() macro with MODULE_LICENSE_SPDX(), which would take the same SPDX identifier used in the source code and convert it to the values above to keep backward compatibility. The method to achieve this is still under discussion on the Linux Kernel Mailing List.

Additional References:

LWN.net has an interesting article covering other aspects: SPDX identifiers in the kernel. The Free Software Foundation Europe has a set of best practices for license identification in source code, called REUSE.

The SPDX Trademark is owned by the Linux Foundation.

Who Made that Change and When: Using cregit for Debugging

When debugging kernel problems that aren’t obvious, it’s necessary to understand the history of changes to the source files. For example, a race condition that results in a lockdep warning might have tentacles into multiple code paths. This requires us to examine and understand not only the changes made, but also why they were made. Individual patch commit logs are the best source of the information on why a change was made.

So how do we find this information? My go-to tool set for such endeavors has been a combination of git gui and git log; recently I started using cregit. I will go over these options in this blog post.

git log

Running git log on a source file will show all the commits for that file; you can then find the corresponding code change by generating the patch. Using git log can be tedious, but it is useful for a targeted commit search.
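For example, to walk the history of the driver discussed below (the flags are standard git options):

$ git log --oneline drivers/media/platform/s5p-mfc/s5p_mfc.c     # list commits touching the file
$ git log -p --follow drivers/media/platform/s5p-mfc/s5p_mfc.c   # show each commit with its patch, following renames
$ git show <SHA1>                                                # inspect a single commit and its change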

git gui

Running git gui in the Linux Kernel source directory and then examining the repository/branch will show the individual commits and the corresponding changes. It can also be used to look at the branch history in a nice graph.

In the image below, you will see the git gui main window on the far left, the selected repository directory/file structure in the middle window, and the selected file ../drivers/media/platform/s5p-mfc/s5p_mfc.c in the window on the right. The SHA1 ID for each commit is shown on the left hand side bar in blue.

Who Made that Change and When Using cregit for Debugging - git-gui

Scroll through and find the commit that you want to highlight and look at its commit log. Once you find the commit, click on it and it will expand as shown in the image below. The “this commit” tag indicates the selected commit; the changes associated with it are highlighted in green, and you can find all the changes by scrolling through the file. The bottom window displays the commit log for the selected commit:

Who Made that Change and When Using cregit for Debugging - this_commit

You can select the Visualize all Branch History option to see the history of commits for the repository:

Who Made that Change and When Using cregit for Debugging - visual_branch

cregit-linux

I started using cregit-linux (Contributors to Linux) for doing research on the changes for debugging problems. Cregit was created by Daniel German from the University of Victoria in Canada, Alexandre Courouble and Bram Adams from the Polytechnique of Montreal in Canada, and Kate Stewart from the Linux Foundation.

The cregit-linux site maintains Linux kernel source code repositories that have been transformed with cregit for each of the releases of the past year. It currently has Linux kernel releases 4.7 through 4.13. Please note that the “Information contained on the cregit-linux website is for historical information purposes only and does not indicate or represent copyright ownership.”

It is easy to browse the source files to examine individual commit information, including the SHA1 ID, the author, and the corresponding code change, all in a single window. Each page also includes the complete list of contributors to the source file, as well as to the individual functions in it; see, for example, cregit’s page for s5p_mfc.c in Linux 4.13.

Now let’s find s5p_mfc_open() and its contributors, as shown in the pictures below:

Who Made that Change and When Using cregit for Debugging - s5p_mfc_open

Who Made that Change and When Using cregit for Debugging - s5p_mfc_open_contributors

Finding out who added a line, along with the associated commit information, is as simple as hovering over a line of the source code. The lines of code each contributor changed are displayed in the same color used for that contributor’s name. Hover over the line of code you are interested in and click to get an option to examine the corresponding commit. It will pop up in a window like the one below:

Who Made that Change and When Using cregit for Debugging - hover_commit

Now, you can select the commit to view the actual commit on Linus Torvalds’ Linux mirror on GitHub.

Who Made that Change and When Using cregit for Debugging - cregit_commit

It’s super easy! One downside is that the cregit-linux site doesn’t include the mainline kernel, so if you’re debugging a mainline problem you’ll have to use git gui for those commits. Even so, the cregit-linux site makes it a lot easier to find out who made a specific change and when, and it provides a lot more information on the contributors and the history of the changes.

Happy browsing and debugging!

How We Simplified X.org Foundation Administration with SPI

The X.org Foundation is a non-profit governance entity charged with overseeing core components of the open source graphics community. X.org had been structured as a legal (non-profit) corporate entity registered in the state of Delaware for some years, which provided tax deduction on donations and other such benefits.

Unfortunately, being a non-profit is not cheap and entails various administrative tasks – filing annual reports, maintaining a bank account, dealing with donations and expenses, and so on – so the overhead of being an independent non-profit was deemed not worth the benefits, and in 2016 the members voted to join Software in the Public Interest (SPI). Joining SPI made a lot of sense; primarily, it would relieve X.org of administrative burdens while preserving the benefits of non-profit status. The costs of being in SPI are offset by the savings of not having to pay the various fees required to upkeep the corporate entity ourselves.

In that 2016 vote, I was elected to a position on the board to serve in the role of treasurer. I took on the tasks of establishing our financial services with SPI and closing down our Delaware registration. These seemingly straightforward administrative tasks turned into a much more involved ordeal.

The Challenges of Dissolving a Non-Profit

Initiating things with SPI was complicated by the fact that I was new to the board, as well as by the usual response delay endemic to volunteer-driven efforts. With some email back-and-forthing, we were added to SPI’s website, established an X.org “Click&Pledge” bucket for credit card donations, and later established Paypal donations as well.

The X.org Foundation’s cash assets were held in a Bank of America (BofA) account registered under the name of a previous treasurer. Unfortunately, BofA’s security policies meant that only the old treasurer could close out the account; fortunately, he was still available and able to do this. A lesson learned for other non-profits would be to ensure multiple people are registered for the organization’s bank accounts.

Once the assets were transferred, it was time to close down the foundation itself, or “dissolution” in corporate terms. Delaware had a byzantine array of forms, fees, and procedures, and with all of this being completely foreign to my experience, mistakes were inevitable. They kindly provide PDFs with fillable fields; however, while some forms can be filed online, the PDFs have to be printed out and sent by mail. I was confused on this point and sent the filled-in PDF to them online; unfortunately, their PDF processing system doesn’t support fillable fields and strips them out, resulting in invalid submissions.

Another lesson I learned is that the Division of Corporations does not notify you in case of errors, so I did not discern the failure for months and it took a number of phone calls to identify the problems and correct them. What brought the failure to my attention was when we attempted to terminate services with our registered agent. I’m still not exactly sure what a registered agent is, but corporate registration in Delaware requires it. X.org had a contract with National Registered Agents, Inc., who (once I’d squared away the dissolution paperwork) were pleasant to work with to close out our account with them.

It’s hard not to think of this transition as the end of one era and the beginning of another, and I’m very optimistic about the new capacity that the X.org Foundation has via SPI for receiving and processing donations. We’re now well positioned for organizing fund-raising activities to support X.org’s events and student sponsorships, and hopefully the information provided in this article will be beneficial to other open source communities facing similar problems.

Improve System Entropy to Speed Up Secure Internet Connections

After my previous blog post, you should now be using SSH and Tor all the more often, but things are probably slow when you are trying to set up a secure connection with this method. This may well be due to your computer lacking a proper source of entropy to create secure cryptographic keys. You can check the entropy of your system with the following command:

 $ cat /proc/sys/kernel/random/entropy_avail

This will return a number; hopefully it’s above 3,000, because that’s what is likely needed to keep up with your needs. So what do you do if it’s not high enough? This article will cover two tips to improve your computer’s entropy. All examples in this guide are for Linux distributions that use systemd.

rngd

rngd is a tool designed to feed the system with more entropy from various sources. It is part of the rng-tools package. After installing it, the rngd service needs to be started and enabled; the following commands will do so:

$ systemctl enable rngd.service
$ systemctl start rngd.service

tpm

The Trusted Platform Module (TPM) has a hardware random number generator that can also be used to improve system entropy. If your system has a TPM, it will be available for rngd to use. Most modern computers come with a TPM these days; you can check yours with the following command:

 $ lsmod |grep tpm

If this returns a result, you can enable rngd to use the TPM by loading the following module:

 $ modprobe tpm-rng

For a more permanent solution, do the following:

 $ echo "tpm-rng" > /etc/modules-load.d/tpm.conf

Once this is done, find the location of the rngd configuration file:

$ cat /etc/systemd/system/multi-user.target.wants/rngd.service
[Unit]
Description=Hardware RNG Entropy Gatherer Daemon

[Service]
EnvironmentFile=/etc/conf.d/rngd
ExecStart=/usr/bin/rngd -f $RNGD_OPTS

[Install]
WantedBy=multi-user.target

With this information, you can now modify /etc/conf.d/rngd with the following content:

RNGD_OPTS="-o /dev/random -r /dev/hwrng"

Restart rngd.service and check the entropy on your system again. This should make setting up cryptographic keys slightly faster.
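To verify:

$ sudo systemctl restart rngd.service
$ cat /proc/sys/kernel/random/entropy_avail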

How to use V4L2 Cameras on the Raspberry Pi 3 with an Upstream Kernel

A V4L2 staging driver for the Raspberry Pi (RPi) was recently merged into the Linux kernel 4.11. While this driver is currently under development, I wanted to test it and to provide help with V4L2-related issues. So, I took some time to build an upstream kernel for the Raspberry Pi 3 with V4L2 enabled. This isn’t a complex process, but it requires some tricks for it to work; this article describes the process.

Prepare an Upstream Kernel

The first step is to prepare an upstream kernel by cloning a git tree from the kernel repositories. Since the Broadcom 2835 camera driver (bcm2835-v4l2) is currently under staging, it’s best to clone the staging tree because it contains the staging/vc04_services directory with both ALSA and V4L2 drivers:

$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging.git
$ cd staging
$ git checkout staging-next

There’s an extra patch that is required for the device tree (DT) to work with the bcm2835-v4l2 driver:

commit e89ae78395394148da6c0e586e902e6489e04ed1
Author: Eric Anholt <eric@anholt.net>
Date:   Mon Oct 3 11:23:34 2016 -0700

    ARM: bcm2835: Add VCHIQ to the DT.
    
    Signed-off-by: Eric Anholt <eric@anholt.net>

diff --git a/arch/arm/boot/dts/bcm2835-rpi.dtsi b/arch/arm/boot/dts/bcm2835-rpi.dtsi
index 38e6050035bc..1f42190e8558 100644
--- a/arch/arm/boot/dts/bcm2835-rpi.dtsi
+++ b/arch/arm/boot/dts/bcm2835-rpi.dtsi
@@ -27,6 +27,14 @@
 			firmware = <&firmware>;
 			#power-domain-cells = <1>;
 		};
+
+		vchiq {
+			compatible = "brcm,bcm2835-vchiq";
+			reg = <0x7e00b840 0xf>;
+			interrupts = <0 2>;
+			cache-line-size = <32>;
+			firmware = <&firmware>;
+		};
 	};
 };

You need to apply this patch to the git tree in order for the vchiq driver to work.

Prepare a Cross-Compile Make Script

While it’s possible to build the kernel directly on any RPi, building it with a cross-compiler is significantly faster. For such builds, I use a helper script, rather than plain “make,” to set the needed configuration environment.

Before being able to cross-compile, you need to install a cross-compiler on your local machine; several distributions come with cross-compiler packages, or you can download an arm cross-compiler from kernel.org. In my case, I installed the toolchain at /opt/gcc-4.8.1-nolibc/arm-linux/.

I named my make script rpi_make; it sets the cross-compiler and defines a place to install the final output files. It has the following content:

#!/bin/bash

# Customize these vars to your needs
ROOTDIR=/devel/arm_rootdir/
CROSS_CC_PATH=/opt/gcc-4.8.1-nolibc/arm-linux/bin
CROSS_CC=arm-linux-
HOST=raspberrypi

# Handle arguments
INSTALL="no"
BOOT="no"
ARG=""
while [ "$1" != "" ]; do
	case $1 in
	rpi_install|install)
		INSTALL=yes
		;;
	boot|reboot)
		INSTALL=yes
		BOOT=yes
		;;
	*)
		ARG="$ARG $1"
	esac
	shift
done

# Add cross-cc at PATH
PATH=$CROSS_CC_PATH:$PATH

# Handle Kernel makefile rules
if [ "$ARG" == "" ]; then
	rm -rf $ROOTDIR/

	set -eu

	make CROSS_COMPILE=$CROSS_CC ARCH=arm \
	    INSTALL_PATH=$ROOTDIR \
	    INSTALL_MOD_PATH=$ROOTDIR \
	    INSTALL_FW_PATH=$ROOTDIR \
	    INSTALL_HDR_PATH=$ROOTDIR \
	    zImage modules dtbs

	make CROSS_COMPILE=$CROSS_CC ARCH=arm \
	    INSTALL_PATH=$ROOTDIR \
	    INSTALL_MOD_PATH=$ROOTDIR \
	    INSTALL_FW_PATH=$ROOTDIR \
	    INSTALL_HDR_PATH=$ROOTDIR \
	    modules_install

	# Create a tarball with the drivers
	mkdir -p $ROOTDIR/install
	(cd $ROOTDIR/lib; tar cfz $ROOTDIR/install/drivers.tgz modules/)

	# Copy Kernel and DTB at the $ROOTDIR/install/ dir
	VER=$(ls $ROOTDIR/lib/modules)
	cp ./arch/arm/boot/dts/bcm283*.dtb $ROOTDIR/install
	cp ./arch/arm/boot/zImage $ROOTDIR/install/vmlinuz-$VER
	cp .config $ROOTDIR/install/config-$VER
	cp System.map $ROOTDIR/install/System.map-$VER
else
	make CROSS_COMPILE=$CROSS_CC ARCH=arm \
	    INSTALL_PATH=$ROOTDIR \
	    INSTALL_MOD_PATH=$ROOTDIR \
	    INSTALL_FW_PATH=$ROOTDIR \
	    INSTALL_HDR_PATH=$ROOTDIR \
	    $ARG
fi

echo "Build finished."
echo "  Install: $INSTALL"
echo "  Reboot : $BOOT"

# Install at $HOST
if [ "$INSTALL" == "yes" ]; then
	DIR=$(git rev-parse --abbrev-ref HEAD)
	if [ "$(echo $DIR|grep rpi)" == "" ]; then
		DIR=upstream
	fi

	echo "Installing new drivers at $HOST:/boot/$DIR"
	VER=$(ls $ROOTDIR/lib/modules)	# recompute in case only the install step is run
	scp $ROOTDIR/install/drivers.tgz $HOST:/tmp
	ssh root@$HOST "(cd /lib; tar xf /tmp/drivers.tgz)"
	ssh root@$HOST "mkdir -p /boot/$DIR"
	scp $ROOTDIR/install/*dtb $ROOTDIR/install/*-$VER root@$HOST:/boot/$DIR
fi

# Reboots $HOST
if [ "$BOOT" == "yes" ]; then
	ssh root@$HOST "reboot"
fi

Please note, you need to change the CROSS_CC_PATH variable to point to the directory where you installed the cross-compiler. You may also need to change the CROSS_CC variable in this script to match the name of the cross-compiler; in my case, the cross-compiler is called /opt/gcc-4.8.1-nolibc/arm-linux/bin/arm-linux-gcc. ROOTDIR points to the path where drivers, firmware, headers, and documentation will be installed; in this script, it’s set to /devel/arm_rootdir.

Prepare a Build Script

There are a number of drivers that need to be enabled in .config to build the kernel, and new configuration data may be needed as kernel development progresses. Also, although the RPi3 supports 64-bit kernels, the userspace provided with the NOOBS distribution is 32-bit. There’s also a TODO for the bcm2835 mentioning that it should be ported to work on arm64. So, I opted to build a 32-bit kernel; this required a hack, because the device tree files for the CPU used in the RPi3 (Broadcom 2837) exist only under the arch/arm64 directory. My build procedure therefore had to change the kernel build system to build them for 32 bits as well.

Instead of manually handling the required steps, I opted to use a build script, called build, to set the configuration using the scripts/config script. The advantage of this approach is that it makes it easier to maintain as newer kernel releases are added.

#!/bin/bash

# arm32 multi defconfig
rpi_make multi_v7_defconfig

# Modules and generic options
enable="MODULES MODULE_UNLOAD DYNAMIC_DEBUG"
disable="LOCALVERSION_AUTO KPROBES STRICT_MODULE_RWX MODULE_FORCE_UNLOAD MODVERSIONS MODULE_SRCVERSION_ALL MODULE_SIG MODULE_COMPRESS ARM_MODULE_PLTS BPF_JIT TEST_ASYNC_DRIVER_PROBE I2C_STUB SPI_LOOPBACK_TEST INTERVAL_TREE_TEST PERCPU_TEST TEST_LKM TEST_USER_COPY TEST_BPF TEST_STATIC_KEYS CRYPTO_TEST MODULE_FORCE_LOAD DRM_TEGRA_STAGING PRISM2_USB COMEDI RTL8192U RTLLIB R8712U R8188EU RTS5208 VT6655 VT6656 ADIS16201 ADIS16203 ADIS16209 ADIS16240 AD7606 AD7780 AD7816 AD7192 AD7280 ADT7316 AD7150 AD7152 AD7746 AD9832 AD9834 ADIS16060 AD5933 SENSORS_ISL29028 ADE7753 ADE7754 ADE7758 ADE7759 ADE7854 AD2S90 AD2S1200 AD2S1210 FB_SM750 FB_XGI USB_EMXX SPEAKUP MFD_NVEC STAGING_MEDIA STAGING_BOARD LTE_GDM724X MTD_SPINAND_MT29F LNET DGNC GS_FPGABOOT COMMON_CLK_XLNX_CLKWZRD FB_TFT FSL_MC_BUS WILC1000_SDIO WILC1000_SPI MOST KS7010 GREYBUS"

# Bluetooth
enable="$enable BNEP_MC_FILTER BNEP_PROTO_FILTER BT_HCIUART_H4 BT_HCIUART_BCSP BT_HCIUART_LL BT_HCIUART_3WIRE BT_BNEP_MC_FILTER BT_BNEP_PROTO_FILTER BT_HCIUART_BCM"
module="$module BT_BNEP BT_HCIUART BT_BCM"
disable="$disable BT_HCIUART_ATH3K BT_HCIUART_INTEL BT_HCIUART_QCA BT_HCIUART_AG6XX BT_HCIUART_MRVL BT_HCIBPA10X"

# Raspberry Pi 3 drivers
enable="$enable ARCH_BCM2835 I2C_BCM2835 BCM2835_WDT SPI_BCM2835 SPI_BCM2835AUX SND_BCM2835_SOC_I2S USB_DWC2_HOST DMA_BCM2835 BCM2835_MBOX RASPBERRYPI_POWER RASPBERRYPI_FIRMWARE STAGING BCM_VIDEOCORE"
module="$module BCM2835_VCHIQ SND_BCM2835 UIO UIO_PDRV_GENIRQ FUSE_FS CUSE"
disable="$disable BCM2835_VCHIQ_SUPPORT_MEMDUMP UIO_CIF UIO_DMEM_GENIRQ UIO_AEC UIO_SERCOS3 UIO_PCI_GENERIC UIO_NETX UIO_PRUSS UIO_MF624"

# Raspberry Pi 3 serial console
enable="$enable SERIAL_8250_EXTENDED SERIAL_8250_SHARE_IRQ SERIAL_8250_BCM2835AUX SERIAL_8250_DETECT_IRQ"
disable="$disable SERIAL_8250_MANY_PORTS SERIAL_8250_RSA"

# logitech HCI
enable="$enable HID_PID USB_HIDDEV LOGITECH_FF LOGIWHEELS_FF HIDRAW"
module="$module HID USB_G_HID I2C_HID HID_LOGITECH HID_LOGITECH_DJ HID_LOGITECH_HIDPP"
disable="$disable HID_ASUS LOGIRUMBLEPAD2_FF LOGIG940_FF"

# This is currently needed for bcm2835-v4l2 driver to work
#enable="$enable VIDEO_BCM2835"
module="$module VIDEO_BCM2835"
disable="$disable DRM_VC4"

# Settings related to the media subsystem
enable="$enable MEDIA_ANALOG_TV_SUPPORT MEDIA_DIGITAL_TV_SUPPORT MEDIA_RADIO_SUPPORT MEDIA_SDR_SUPPORT MEDIA_RC_SUPPORT MEDIA_CEC_SUPPORT DVB_NET DVB_DYNAMIC_MINORS RC_DECODERS LIRC VIDEO_VIVID_CEC MEDIA_SUBDRV_AUTOSELECT VIDEO_AU0828_V4L2 VIDEO_AU0828_RC VIDEO_CX231XX_RC SMS_SIANO_RC"
module="$module RC_MAP IR_NEC_DECODER IR_RC5_DECODER IR_RC6_DECODER IR_JVC_DECODER IR_SONY_DECODER IR_SANYO_DECODER IR_SHARP_DECODER IR_MCE_KBD_DECODER IR_XMP_DECODER VIDEO_AU0828 VIDEO_CX231XX DVB_USB DVB_USB_A800 DVB_USB_DIBUSB_MB DVB_USB_DIBUSB_MC DVB_USB_DIB0700 DVB_USB_V2 DVB_USB_AF9015 DVB_USB_AF9035 DVB_USB_AZ6007 DVB_USB_MXL111SF DVB_USB_RTL28XXU DVB_USB_DVBSKY SMS_USB_DRV IR_LIRC_CODEC VIDEO_CX231XX_ALSA VIDEO_CX231XX_DVB"
disable="$disable MEDIA_CEC_DEBUG MEDIA_CONTROLLER_DVB DVB_DEMUX_SECTION_LOSS_LOG RC_DEVICES VIDEO_PVRUSB2 VIDEO_HDPVR VIDEO_USBVISION VIDEO_STK1160_COMMON VIDEO_GO7007 VIDEO_TM6000 DVB_USB_DEBUG DVB_USB_UMT_010 DVB_USB_CXUSB DVB_USB_M920X DVB_USB_DIGITV DVB_USB_VP7045 DVB_USB_VP702X DVB_USB_GP8PSK DVB_USB_NOVA_T_USB2 DVB_USB_TTUSB2 DVB_USB_DTT200U DVB_USB_OPERA1 DVB_USB_AF9005 DVB_USB_PCTV452E DVB_USB_DW2102 DVB_USB_CINERGY_T2 DVB_USB_DTV5100 DVB_USB_FRIIO DVB_USB_AZ6027 DVB_USB_TECHNISAT_USB2 DVB_USB_ANYSEE DVB_USB_AU6610 DVB_USB_CE6230 DVB_USB_EC168 DVB_USB_GL861 DVB_USB_LME2510 DVB_USB_ZD1301 DVB_TTUSB_BUDGET DVB_TTUSB_DEC DVB_B2C2_FLEXCOP_USB DVB_AS102 USB_AIRSPY USB_HACKRF USB_MSI2500 DVB_PLATFORM_DRIVERS SMS_SDIO_DRV RADIO_ADAPTERS VIDEO_IR_I2C DVB_USB_DIBUSB_MB_FAULTY"

# GSPCA driver
module="$module USB_GSPCA USB_GSPCA_BENQ USB_GSPCA_CONEX USB_GSPCA_CPIA1 USB_GSPCA_DTCS033 USB_GSPCA_ETOMS USB_GSPCA_FINEPIX USB_GSPCA_JEILINJ USB_GSPCA_JL2005BCD USB_GSPCA_KINECT USB_GSPCA_KONICA USB_GSPCA_MARS USB_GSPCA_MR97310A USB_GSPCA_NW80X USB_GSPCA_OV519 USB_GSPCA_OV534 USB_GSPCA_OV534_9 USB_GSPCA_PAC207 USB_GSPCA_PAC7302 USB_GSPCA_PAC7311 USB_GSPCA_SE401 USB_GSPCA_SN9C2028 USB_GSPCA_SN9C20X USB_GSPCA_SONIXB USB_GSPCA_SONIXJ USB_GSPCA_SPCA500 USB_GSPCA_SPCA501 USB_GSPCA_SPCA505 USB_GSPCA_SPCA506 USB_GSPCA_SPCA508 USB_GSPCA_SPCA561 USB_GSPCA_SPCA1528 USB_GSPCA_SQ905 USB_GSPCA_SQ905C USB_GSPCA_SQ930X USB_GSPCA_STK014 USB_GSPCA_STK1135 USB_GSPCA_STV0680 USB_GSPCA_SUNPLUS USB_GSPCA_T613 USB_GSPCA_TOPRO USB_GSPCA_TOUPTEK USB_GSPCA_TV8532 USB_GSPCA_VC032X USB_GSPCA_VICAM USB_GSPCA_XIRLINK_CIT USB_GSPCA_ZC3XX"

# Hack needed to build Device Tree for RPi3 on arm 32-bits
ln -sf ../../../arm64/boot/dts/broadcom/bcm2837-rpi-3-b.dts arch/arm/boot/dts/bcm2837-rpi-3-b.dts
ln -sf ../../../arm64/boot/dts/broadcom/bcm2837.dtsi arch/arm/boot/dts/bcm2837.dtsi
git checkout arch/arm/boot/dts/Makefile
sed -i "s,bcm2835-rpi-zero.dtb,bcm2835-rpi-zero.dtb bcm2837-rpi-3-b.dtb," arch/arm/boot/dts/Makefile

# Sets enable/modules/disable configuration
for i in $enable; do ./scripts/config --enable $i; done
for i in $module; do ./scripts/config --module $i; done
for i in $disable; do ./scripts/config --disable $i; done

# Sets the max number of DVB adapters
./scripts/config --set-val DVB_MAX_ADAPTERS 16

# Use rpi_make script to build the Kernel
rpi_make

Please note, the script had to disable the VC4 DRM driver and use the simplefb driver instead. This is because the firmware is currently not compatible with both the bcm2835-v4l2 driver and the vc4 driver at the same time. Also, as I wanted the ability to test a bunch of V4L2 and DVB hardware on it, the script enables several media drivers.

In order to build the kernel, I simply call:

    ./build 

Install the New Kernel on the Raspberry Pi3

I used an RPi3 with a NOOBS distribution pre-installed on its micro-SD card. I used the normal procedure to install Raspbian on it, but any other distribution should do the job. In order to make it easy to copy the kernel to it, I also connected it to my local network via WiFi or Ethernet. Please note, I’ve not yet been able to get the WiFi driver to work with the upstream kernel. So, if you want to have remote access after running the upstream kernel you should opt to use the Ethernet port.

Once the RPi3 is booted, set it up to run an SSH server; this can be done by clicking on the Raspberry Pi icon in the top menu, and selecting the Raspberry Pi Configuration application from the Preferences menu.  Switch to the Interfaces tab, select SSH, and click the button.

Then, from the directory where you compiled the kernel on your local machine, you should run:

$ export ROOTDIR=/devel/arm_rootdir/
$ scp -r $ROOTDIR/install pi@raspberrypi:
pi@raspberrypi's password: [type the pi password here - default is "pi"]
System.map-4.11.0-rc1+                        100% 3605KB  10.9MB/s   00:00    
vmlinuz-4.11.0-rc1+                           100% 7102KB  11.0MB/s   00:00    
bcm2835-rpi-b-plus.dtb                        100%   11KB   3.6MB/s   00:00    
config-4.11.0-rc1+                            100%  174KB   9.7MB/s   00:00    
bcm2835-rpi-b-rev2.dtb                        100%   10KB   3.9MB/s   00:00    
bcm2837-rpi-3-b.dtb                           100%   10KB   3.7MB/s   00:00    
bcm2835-rpi-a.dtb                             100%   10KB   3.8MB/s   00:00    
bcm2836-rpi-2-b.dtb                           100%   11KB   3.8MB/s   00:00    
drivers.tgz                                   100% 5339KB  11.0MB/s   00:00    
bcm2835-rpi-zero.dtb                          100%   10KB   3.6MB/s   00:00    
bcm2835-rpi-b.dtb                             100%   10KB   3.7MB/s   00:00    
bcm2835-rpi-a-plus.dtb                        100%   10KB   3.7MB/s   00:00    
$ ssh pi@raspberrypi
pi@raspberrypi's password: [type the pi password here - default is "pi"]

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Wed Mar 15 14:47:13 2017

pi@raspberrypi:~ $ sudo su -
root@raspberrypi:~# mkdir /boot/upstream
root@raspberrypi:~# cd /boot/upstream/
root@raspberrypi:/boot/upstream# cp -r ~pi/install/* .
root@raspberrypi:/boot/upstream# cd /lib
root@raspberrypi:/lib# tar xf /boot/upstream/drivers.tgz 

Another alternative is to set up the SSH server to accept root logins and copy the SSH keys to it with:

ssh-copy-id root@raspberrypi

If you opt for this method, you could call this small script after building the kernel:

	# VER must match the kernel version built above, e.g. 4.11.0-rc1+
	echo "Installing new drivers"
	scp install/drivers.tgz raspberrypi:/tmp
	ssh root@raspberrypi "(cd /lib; tar xf /tmp/drivers.tgz)"
	scp install/*dtb install/*-$VER root@raspberrypi:/boot/upstream
	ssh root@raspberrypi "reboot"

This logic is included in the rpi_make script. So, you can call:

    rpi_make rpi_install

Adjust the RPi Config to Boot the New Kernel

Changing the RPi to boot the new kernel is simple. All you need to do is to edit the /boot/config.txt file and add the following lines:

# Upstream Kernel
kernel=upstream/vmlinuz-4.11.0-rc1+
device_tree=upstream/bcm2837-rpi-3-b.dtb

# Settings needed for the bcm2835-v4l2 driver to work
start_x=1
gpu_mem=128

# Set to use Serial console
enable_uart=1

Use Serial Console on RPi3

Raspberry Pi3 has a serial console via its GPIO connector. If you want to use it, you’ll need to have a serial to USB converter capable of working with 3.3V. For it to work, you need to wire the following pins:

Pin 1 – 3.3V power reference
Pin 6 – Ground
Pin 8 – UART TX
Pin 10 – UART RX

This image shows the RPi pin outs. Once wired up, it will look like the following image.

How to use V4L2 Cameras on the Raspberry Pi 3 with an Upstream Kernel - raspberry_pi3
Raspberry Pi 3 with camera module v2

You should also change the kernel command line to use ttyS0, e.g., setting /boot/cmdline.txt to:

dwc_otg.lpm_enable=0 console=ttyS0,115200 console=tty1 root=/dev/mmcblk0p7 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait

Use the bcm2835-v4l2 Driver

The bcm2835-v4l2 driver should be compiled as a module, and it is not loaded automatically. So, to use it, run the following command:

modprobe bcm2835-v4l2

You can then use your favorite V4L2 application. I tested it with qv4l2, camorama, and cheese.
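For a quick check from the command line, v4l2-ctl from the v4l-utils package can query and stream from the new device (the device node, format, and frame count below are examples):

$ v4l2-ctl -d /dev/video0 --list-formats-ext        # list the formats the camera exposes
$ v4l2-ctl -d /dev/video0 --set-fmt-video=width=640,height=480,pixelformat=MJPG  # pick one of the listed formats
$ v4l2-ctl -d /dev/video0 --stream-mmap --stream-count=10 --stream-to=frames.raw # capture a few frames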

Current Issues

I found some problems with upstream drivers on the Raspberry Pi 3 with kernel 4.11-rc1 + staging (as of March 16):

  • The WiFi driver didn’t work, despite the brcmfmac driver being compiled.
  • The UVC driver doesn’t work properly: after a few milliseconds it disconnects from the hardware.

That’s it. Enjoy!

Improving the Security of Your SSH Configuration

Most developers make use of SSH servers on a regular basis, and it’s quite common to be a bit lazy when it comes to the administration of some of them. However, this can create significant problems, because SSH is usually served over a port that’s remotely accessible. I always spend time securing my own SSH servers according to some best practices, and you should review those steps yourself. This blog post will expand upon those best practices by offering some improvements.

Setup SSH Server Configuration

The first step is to make the SSH service accessible only via the local network and Tor. Tor brings a few benefits for an SSH server:

  • Nobody knows where users are connecting to the SSH server from.
  • Remote scans need to know the hidden service address Tor uses, which reduces the risk of automated scans probing for known logins/passwords and bugs in the SSH server.
  • It’s always accessible remotely, even if the user’s IP address changes; there’s no need to register IP addresses or track changes.

To do so, you’ll need to run a Tor leaf node (if you don’t know how, the Arch Linux wiki has a good article on the subject). Then, add the following lines to the end of the torrc configuration file:

HiddenServiceDir /var/lib/tor/hidden_service/ssh
HiddenServicePort 22 127.0.0.1:22

Restart Tor, then edit the sshd_config file according to the best practices linked to above. It should include something similar to the following configuration:

ListenAddress 127.0.0.1:22

Protocol 2

HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ed25519_key

AuthorizedKeysFile .ssh/authorized_keys

IgnoreRhosts yes
AuthenticationMethods publickey,keyboard-interactive
PasswordAuthentication no
ChallengeResponseAuthentication yes
UsePAM yes
AllowGroups ssh-users

UsePrivilegeSeparation sandbox

Subsystem sftp /usr/lib/ssh/sftp-server

KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-ripemd160-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,hmac-ripemd160,umac-128@openssh.com

You will notice a few changes compared to the direction given in the first link. The first change is the addition of UsePrivilegeSeparation sandbox. This further restricts and isolates the process that parses the incoming connection. You might have a different implementation depending on the distro you use, but it’s an improvement anyway.

The second set of changes includes adding keyboard-interactive to the AuthenticationMethods and setting ChallengeResponseAuthentication to yes. This adds the use of a One Time Password (OTP) in addition to a public key. One time passwords are a nice addition because they require additional knowledge that changes over time in order to log in. Many websites do this as an extended means to protect login information, including GitHub, Facebook, Gmail, and Amazon. If you haven’t turned on OTP or 2FA on these sites, you really should do so. Now we’ll configure the SSH server to use this.

In an OTP setup, each user has their own secret key, and the key is different for every service. Once the package is installed, do the following for each user you want to enable OTP for:

$ head -10 /dev/urandom | sha512sum | cut -b 1-30
a0423afcb323d675ba9bbc36d18253

Then add the following line to /etc/users.oath for each user:

HOTP/T30/6 user - a0423afcb323d675ba9bbc36d18253

The hexadecimal key should be unique, and you shouldn’t copy the one in this example! The HOTP/T30/6 prefix tells pam_oath to expect time-based codes with a 30-second step and 6 digits. Once you have added all users, make sure nobody but root can access this file:

 # chmod 600 /etc/users.oath
 # chown root /etc/users.oath

Now, let’s require the OTP code for all SSH logins by adding the following line at the beginning of /etc/pam.d/sshd (as a sufficient rule at the top of the stack, a valid OTP satisfies the keyboard-interactive step on its own):

 auth	  sufficient pam_oath.so usersfile=/etc/users.oath window=30 digits=6

With all that done, you now need to add all the users to the ssh-users group; this group needs to be created first with sudo groupadd ssh-users. Then, run the following command for each user:

sudo usermod -a -G ssh-users user
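
Before restarting sshd, it’s worth validating the new configuration; sshd has a test mode that checks the file and prints any errors without touching the running service:

# Exits silently with status 0 when the configuration is valid
sudo sshd -t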

Now, you can restart your SSH server, and it will require you to input your OTP code. You can get that code by typing the following command:

 oathtool --totp -v -d6 a0423afcb323d675ba9bbc36d18253

Note the --totp flag: the HOTP/T30/6 entry above configures time-based codes, so oathtool must be run in TOTP mode to produce a matching code. This will output something like the following:

Hex secret: a0423afcb323d675ba9bbc36d18253
Base32 secret: UBBDV7FTEPLHLOU3XQ3NDAST
Digits: 6
Window size: 0
Step size (seconds): 30
Start time: 1970-01-01 00:00:00 UTC (0)

621877

The last line is the password you’ll need, and it will change over time. Obviously, it isn’t very convenient to run this command every time you log in to your server. Today, most people have a smartphone with a decent level of security on it: most manufacturers provide full disk encryption and, sometimes, even containers like Samsung Knox. So let’s use the phone for this purpose. I use FreeOTP on Android inside Samsung Knox; it should provide ample security for the task. The easiest way to upload a key to the phone is a QR code. Install qrencode and proceed with the following line:

qrencode -o user.png 'otpauth://totp/user@machine?secret=UBBDV7FTEPLHLOU3XQ3NDAST'

This should generate a png file of a QR code you can use to quickly and easily create a key for each user. Obviously, this png should be handled carefully as it contains the secret key for your OTP configuration!
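
Once the secret has been scanned into FreeOTP, it’s a good idea to remove the image rather than leaving it on disk; for example:

# Overwrite and delete the QR code image once it has been scanned
shred -u user.png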


Setting Up the SSH Client Configuration

Now it’s time to configure the SSH client you will use to connect to the server. If you also followed the best practices linked above, you should already have started on a secure configuration for your SSH client. If so, you should have something like the following, either at the system level in /etc/ssh/ssh_config, or in your user config ~/.ssh/config.

Host github.com
    KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1

Host *
    KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
    PubkeyAuthentication yes
    HostKeyAlgorithms ssh-ed25519-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ssh-ed25519,ssh-rsa
    Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
    MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-ripemd160-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,hmac-ripemd160,umac-128@openssh.com
    ServerAliveInterval 300
    ServerAliveCountMax 2
    TCPKeepAlive yes
    Compression yes
    CompressionLevel 9
    UseRoaming no

Host *.onion
    ProxyCommand socat - SOCKS4A:localhost:%h:%p,socksport=9050
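
With that last rule in place, any .onion destination is tunneled through the local Tor SOCKS proxy automatically (port 9050 is assumed above; the address below is illustrative):

# socat relays the connection through Tor; no extra flags needed
ssh user@random4name.onion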

If you have configured your server to answer only on Tor as described above, you probably want an easy way to reach it from the various devices you use. Once Tor has been installed on them, it’s possible to contact the server at the address given in /var/lib/tor/hidden_service/ssh/hostname. Remembering that random sequence of letters and numbers is not trivial; this is where one last trick in the user config helps. Append the following lines to the client configuration file:

Host nice.onion
    Hostname random4name.onion

Where nice is the easy-to-remember alias you’ll use to reach the SSH server and random4name.onion is the real name of the hidden service. If you have not yet generated a public key for your client, it is time to do so with the following commands:

ssh-keygen -t ed25519 -o -a 100
ssh-keygen -t rsa -b 4096 -o -a 100

Each of these commands generates a key, and you only need to execute the first line: ed25519 key generation is more efficient than a 4,096-bit RSA key while being just as secure. If you have any keys that weren’t generated by one of these commands, you should regenerate them now and update the server’s authorized keys as soon as possible. Don’t forget to protect them with a long passphrase, and use ssh-copy-id to deploy them to the server, as shown below.
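
Deploying the new key through the alias defined earlier is then a one-liner (the user name is illustrative):

# Append the new public key to ~/.ssh/authorized_keys on the server
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@nice.onion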

You should now be able to trust exposing the server to the public Internet. One final note: never use ssh -A to connect to a server, because it forwards your authentication agent, allowing any admin on that server to authenticate with your keys without needing your passphrase. If you need to go through an intermediate host, use ProxyCommand in your client configuration instead.
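
As a sketch of that last point, a jump through a bastion host can be declared in the client configuration instead of forwarding the agent (host names are illustrative; ProxyJump requires OpenSSH 7.3 or later, older versions can use the ProxyCommand form):

# ~/.ssh/config: reach 'inner' through 'bastion' without exposing your agent
Host inner
    ProxyJump bastion
# Equivalent for OpenSSH older than 7.3:
# Host inner
#     ProxyCommand ssh -W %h:%p bastion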

Improving Linux Kernel Development Process Documentation

This article is part of a series on improvements being made to Linux kernel documentation.

This article will cover how the Linux kernel community handled the conversion of documentation related to the kernel development process.

Introduction

It’s not an easy task to properly describe the Linux development process. The kernel community moves at a very fast pace and produces about six versions per year. Thousands of people, distributed worldwide, contribute to this collective work; the development process is a living thing that constantly adjusts to what best fits the people involved. Additionally, since kernel development is managed per subsystem, each maintainer has their own criteria for what works best for the subsystem they take care of. To address this, the documentation provides a common ground of best practices that all kernel developers should follow.

The Documentation/Development-Process Book

There are several files inside the kernel tree that describe the development process. In 2008, the Linux Foundation sponsored work by Jonathan Corbet to create a set of files that group the best development practices together into a development-process book. This book received additional updates in 2011 to describe the changes that had happened since the original version and to reflect the practices followed for kernel 2.6.38. The kernel development practices for the 4.x kernels are approximately the same as they were during the late 2.6.x kernels, so this document is not too far from current practice.

The Essential Documentation for Kernel Developers

All kernel developers use a set of well-known documents that were written before the development-process files. These contain, among other things, the procedures to submit patches and drivers, the logic used to apply patches, how to solve conflicts, and the description of the kernel coding style. All kernel developers should be familiar with these documents, yet they sit in the middle of several other documents, including driver descriptions.

In order to get some sanity out of the chaos, I selected a group of documents that, in my opinion, contain the essence of the knowledge that all kernel developers should have, and added them to the development-process book:

  • Documentation/HOWTO is a sort of index for the other files in this set; it provides an introduction to Linux kernel development,
  • Documentation/adding-syscalls.txt describes the steps involved in adding a new system call to the Linux kernel,
  • Documentation/applying-patches.txt describes how to apply a patch to the Linux kernel using the patch tool, and also describes the patch files stored at ftp.kernel.org,
  • Documentation/Changes, despite the name, describes the minimal requirements to build and run the Linux kernel,
  • Documentation/CodeOfConflict describes the process to be used if someone feels personally abused, threatened, or otherwise uncomfortable with the development process,
  • Documentation/CodingStyle describes the preferred coding style for the C files inside the Linux kernel tree,
  • Documentation/email-clients.txt recommends that developers use git send-email to send patches whenever possible and, when that isn’t possible, explains how to set up e-mail clients so they don’t mangle patches,
  • Documentation/kernel-docs.txt contains a list of external references and books related to the kernel development,
  • Documentation/magic-number.txt provides the set of magic numbers used inside the Linux kernel to help protect kernel data structures,
  • Documentation/ManagementStyle provides a general guideline about the preferred management style for the Linux kernel,
  • Documentation/stable_api_nonsense.txt explains why the kernel internal API/ABI doesn’t have a stable kernel interface, and the advantages of such an approach,
  • Documentation/stable_kernel_rules.txt explains the process related to the kernel stable releases and the related development process,
  • Documentation/SubmitChecklist contains a list of actions that should be followed before submitting a new patch,
  • Documentation/SubmittingDrivers describes the process of submitting a new driver to the kernel,
  • Documentation/SubmittingPatches describes the process of submitting a new patch to the kernel, and
  • Documentation/volatile-considered-harmful.txt explains the proper procedure to work with data that could be modified by some other CPU or task (“volatile”).

There are other files that might be considered part of the development-process book in the future.

Converting Kernel Development Files to Sphinx

As part of the effort to convert Linux documentation to Sphinx, we converted the files listed above. Most of the work will be merged for kernel 4.9, but some conversions won’t be available until kernel 4.10.

Converting the Documentation/development-process files to a Sphinx book was trivial. First, paragraph markup was added to the chapters and sections. Then, special characters that Sphinx reserves were converted; in particular, emphasis required the correct markup. Finally, cross-references to other documents that had already been converted to Sphinx were added.
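
After a conversion like this, the result can be checked by building the documentation. Since kernel 4.8 the Sphinx toolchain is wired into the kernel build system (Sphinx itself must be installed):

# From the kernel source root; HTML output lands under Documentation/output/
make htmldocs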

The conversion of the other files was somewhat tricky. They’re all part of the development process, so ideally they should be located in the same directory: Documentation/development-process. However, moving several files is something that should be carefully considered, as some of them (like CodingStyle, SubmittingDrivers, and SubmittingPatches) are well known within the developer community. Moving them to any other directory would break a wide range of links that point to them, not only in the kernel tree itself, but also hyperlinks all over the web. Worse than that, it would also break people’s mental references to those important documents. So, the approach should avoid physically moving them as much as possible. This leaves a few choices:

  1. copy the files to the new directory, replacing the original contents with a link to the new place,
  2. create the files in the new location and use the Sphinx include directive, or
  3. use soft links at the new location.

Option 1 is the best long-term solution because it reduces the number of “orphan” files under Documentation/, making it easier to identify the files that haven’t been converted to Sphinx yet. Option 2 would require creating one file for every file that’s merged. Alternatively, the Sphinx include statements could live in a single RST file; however, in that case a large set of documentation files would end up in the same HTML file, since Sphinx generates only one output HTML file per RST source. Option 3 is easy to do and doesn’t prevent adopting another option in the future, so we opted to follow this direction. However, it makes it harder to figure out which files under Documentation/ have been neither converted to Sphinx nor merged into a book, so we may need to review this decision in the future.
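
As a minimal sketch (actual file names and layout in the kernel tree may differ), option 3 amounts to something like:

# Leave Documentation/CodingStyle where it is and link to it from the book's directory
cd Documentation/development-process
ln -s ../CodingStyle CodingStyle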

The final result of these efforts can be seen here.