Web, Internet of Things and GDPR

On the 25th of May 2018, the European General Data Protection Regulation, also known as GDPR, came into force across the European Union (EU) and the European Economic Area (EEA). This new European regulation on privacy has far-reaching consequences for how personal data is collected, stored and used in the Internet of Things (IoT) world and across the web. For IoT and web developers, it has a significant impact on how we think about the technologies we deploy and the services we build. One promise of the open web platform is enhanced privacy in comparison to other technology stacks. Can some of the inherent privacy issues in currently deployed IoT architectures be mitigated by the use of web technology? We think they can.

In this blog post we will look at the Web of Things (WoT) technologies we have been involved in. By walking through the options, from local device discovery to a gateway framework that enables remote access, we hope to give you a picture of a loosely coupled IoT solution built on Web of Things technologies. At each stage, the GDPR compliance of these technologies will be discussed.

GDPR and the Internet of Things

The 88-page GDPR document contains 99 articles on the rights of individuals and the obligations placed on organizations. These new requirements represent a significant change in how data will be handled across businesses and industries. When it comes to the Internet of Things, GDPR compliance is particularly challenging. Concerns about the risks the Internet of Things poses to data protection and personal privacy have been raised for a few years. A 2016 study from the ICO, the UK's Information Commissioner's Office, stated that “Six in ten Internet of Things devices don’t properly tell customers how their personal information is being used”. The good news is that industry and governments are aware of the problem and are taking action to change this.

With the Internet of Things growing faster and smarter, a lot of new devices and solutions will be introduced in the coming years. When looking into technical solutions and innovation, we need to keep “GDPR awareness” in mind, especially the “data protection by design” principle that GDPR advocates. “Data protection by design“, previously known as “privacy by design”, encourages businesses to implement technical and organizational measures that safeguard privacy and data protection principles throughout the full life cycle, right from the start.

The Web of Things

Interoperability is one of the major challenges in the Internet of Things. The Web of Things addresses this challenge by providing a universal application layer protocol for Things to talk to each other, regardless of the physical and transport layer protocols used. Rather than reinventing the wheel, the Web of Things reuses existing and well-known web standards and technologies. It addresses Things via URLs and standard APIs. Using the web as the framework makes Things discoverable and linkable, and gives web developers an opportunity to let people interact via a wide range of interfaces: screens, voice, gesture, augmented reality… even without an Internet connection!
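As a purely hypothetical sketch of what “Things addressed via URLs and standard APIs” can look like in practice (the gateway address, thing name and property path below are made up for illustration), interacting with a connected lamp could be a pair of plain HTTP requests:

# Read the lamp's current on/off state (hypothetical gateway and thing name)
curl https://gateway.example.local/things/lamp/properties/on

# Turn the lamp on by writing the same property
curl -X PUT -H "Content-Type: application/json" \
     -d '{"on": true}' \
     https://gateway.example.local/things/lamp/properties/on

Anything that speaks HTTP, from a browser to a shell script, can then act as a Web of Things client.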

The Web of Things has been referred to as the application layer of the Internet of Things on many occasions. Bear in mind, though, that the scope of IoT applications is broader and includes systems that are not necessarily accessible through the web. We prefer to think of the Web of Things as an optional application layer added over the network layer of the traditional Internet of Things architecture.

The Web of Things is growing fast in both standardization and implementations. The W3C has launched the Web of Things Working Group to take a leading role in addressing the fragmentation of the IoT through standards. Early this year Mozilla announced the “Project Things” framework – an open framework of software and services that bridges the gap between devices via the web. “Project Things” is a good reflection of Mozilla’s involvement in creating an open standard with the W3C around the Web of Things, and of practical implementations that provide open interfaces to devices from different vendors.

For developers, the fast growth of the Web of Things has opened up a lot of opportunities when searching for Internet of Things solutions. Let’s walk through an example to see what this means.

Imagine that you are a homeowner and have just bought a new washing machine. To integrate the new device into your smart home and be able to control or access it, here are some of the options the Web of Things can offer:

  • Discover, pair and connect new devices around you via Physical Web and Web Bluetooth.
  • Control devices via a Progressive Web Application.
  • Communicate with your device via an end-to-end encrypted channel provided by Mozilla Web of Things Gateway, locally and remotely.

So what are these technologies about? How do they address privacy and security concerns? Let’s walk through them in a bit more detail.

Physical Web and Web Bluetooth

The Physical Web is a discovery service based on Bluetooth Low Energy (BLE) technology. In the Physical Web concept, any object or location can be a source of content and addressed using a URL. The idea is that smart objects broadcast relevant URLs to nearby devices by way of a BLE beacon.

The Physical Web has been brought to Samsung’s web browser, Samsung Internet, as an extension called CloseBy. When the browser handles the URL information received on the phone, “no personal information is included and our servers do not collect any information about users at all,” stated my colleague Peter O’Shaughnessy in his article “Bringing the real world to your browser with CloseBy”.

Web Bluetooth is another technology based on Bluetooth Low Energy. Alongside efforts like the Physical Web, it provides a set of APIs to connect to and control BLE devices directly from the web. With this web API, developers are able to build one solution that works across all platforms. Although the API is still under development, there are already a few very cool projects and applications around. My colleague Peter has produced a Web Bluetooth Parrot Drone demo to give you a sense of controlling a physical device from a web browser. And I promise you that playing Jo’s Hedgehog Curling game is simply light and fun!

The Web Bluetooth Community Group aims to provide APIs that allow websites to communicate with devices in a secure and privacy-preserving way. Although some security features are already in place (for example, the site must be served over a secure connection (HTTPS), and discovering Bluetooth devices must be triggered by a user gesture), there are still lots of security implications to consider, as described in the Web Bluetooth security model by Jeffrey Yasskin.

Samsung Internet has had Web Bluetooth enabled for developers since the v6.4 stable release. With the Physical Web and Web Bluetooth available, device on-boarding can be “just a tap away”.

Progressive Web Application

Progressive Web Apps (PWAs) are websites that deliver native app-like user experiences. They address issues in native mobile applications and websites with new design concepts and new Web APIs. Some key features of PWAs include:

  • “Add to Home Screen” prompts
  • Offline functionality
  • Fast loading from cache
  • (Optionally) web push notifications

These features are achieved by deploying a collection of technologies. Service Workers are the core of Progressive Web Apps; they work in the background to power the offline functionality, push notifications and other striking features. With the app shell concept, PWAs achieve fast loading times by caching the basic shell of the application UI while still loading fresh content when possible. The native install banner installs the website to the home screen with Web App Manifest support.

In PWAs, you can only register service workers on pages served over HTTPS. Since service workers process all network requests from the web app in the background, this is essential to prevent man-in-the-middle attacks. PWAs can also work offline, as mentioned earlier. From a privacy perspective, this potentially offers us possibilities to:

  • Minimize the collection, storage, and use of user data.
  • Know where the data resides.

Since the term “Progressive Web Apps” was first coined by Google in 2015, we have seen big brands such as the FT, Forbes and Twitter switch to PWAs. Our Samsung Internet Developer Advocacy team has been actively contributing to and promoting this new technology [1] [2] [3] [4] [5], and even designed a community-approved Progressive Web Apps logo! Want hands-on experience of PWAs? Check out our demos: Podle, Snapwat, Airhorn VR, PWAcman and Biscuit Tin-der.

Mozilla “Project Things”

Mozilla “Project Things” aims at “building a decentralized ‘Internet of Things’ that is focused on security, privacy, and interoperability”, as stated by the company. The framework is structured as follows:

  • Things Cloud. A collection of IoT cloud services, including support for setup, backup, updates, third-party application and service integration, and remote encrypted tunneling.
  • Things Gateway. Generally speaking, the IoT connectivity hub for your IoT network.
  • Things Controllers. Smart devices such as smart speakers, tablets/phones, AR headsets etc., used to control the connected Things.
  • Things Device Framework. A collection of sensors and actuators, or “Things” in the context of the Web of Things.

“Project Things” has introduced an add-on system, loosely modeled after the add-on system in Firefox, to allow new features or devices, such as adapters, to be added to the Things Gateway. My colleague Phil Coval has posted a blog explaining how to get started and establish basic automation using I2C sensors and actuators on the gateway.

The framework has provided security and privacy solutions such as:

  • Secure remote access is achieved using HTTPS via encrypted tunneling. Basically, “Project Things” provides a TLS tunnelling service via its registration server to allow people to easily set up a secure subdomain during first-time setup. An SSL certificate is generated via Let’s Encrypt, and a secure tunnel from a Mozilla cloud server to the gateway is set up using PageKite.
  • The Things Gateway provides a system for safely authorizing third-party applications using the de-facto authorization standard OAuth 2.0. When a third-party application needs to access or control another person’s Things, it always requires consent from the Things’ owner. The owner can decide the scope of the access token granted to the third-party application, and can also delete or revoke tokens assigned to it (a minimal example follows this list). Details have been discussed in our recent blog post “An End-to-End Web IoT Demo Using Mozilla Gateway” and the talk “The Complex IoT Equation”.
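As a sketch of what this looks like from a third-party application's point of view, a granted access token is presented as a standard OAuth 2.0 bearer token on every request. The subdomain and token below are placeholders; the /things listing endpoint shown is part of the gateway's Web Thing API:

# List the Things exposed by the gateway, using a previously granted JWT
curl -H "Authorization: Bearer $JWT" \
     -H "Accept: application/json" \
     https://example.mozilla-iot.org/things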

Future work

GDPR challenges us all to re-prioritize digital privacy and to reconsider how and when we need to collect, manage and store people’s data. This is an opportunity for the Web of Things. As the technology develops, some of the security and privacy issues have been, or are being, addressed. Still, building the Web of Things faces various challenges ahead. For developers, using the right technologies is a way forward to make the Internet of Things a better place. Join us on this exciting journey!

 

An End-to-End Web IoT Demo Using Mozilla Gateway

Imagine that you are on your way to a holiday home you have booked. The weather is changing and you might start wondering about the temperature settings in the holiday home. A question might pop up in your mind: “Can I change the holiday home settings to my own preference before I get there?”

Today we are going to show you an end-to-end demo we have created that allows a holiday maker to remotely control sensor devices, or Things in the context of the Internet of Things, at a holiday home. The private smart holiday home is built with the exciting Mozilla Things Gateway – an open gateway that anybody can now create with a Raspberry Pi to control Internet of Things devices. For holiday makers, we provide a simple Node.js holiday application to access Things via the Mozilla Things Gateway. Privacy is addressed by introducing the concepts of Things ownership and Things usership, followed by the authorization workflow.

The Private Smart Holiday Home

The private smart holiday home is home to the Gateway and the Things.

Things Gateway

One of the major challenges in the Internet of Things is interoperability. Getting different devices to work nicely with each other can be painful in a smart home environment. The Mozilla Things Gateway addresses this challenge by providing a platform that bridges existing off-the-shelf smart home devices to the web, giving them web URLs and a standardized data model and API [1]. The implementation of the Things Gateway follows the proposed Web of Things standard and is open sourced under the Mozilla Public License 2.0.

In this demo, we chose the Raspberry Pi 3 as the physical board for the Gateway. The Raspberry Pi 3 is well supported by the Gateway community and has been a brilliant initial choice for experimenting with the platform. It is worth mentioning that Mozilla Project Things is not tied to the Raspberry Pi; they are looking at supporting a wide range of hardware.

The setup of the Gateway is pretty straightforward. We chose to use the tunneling service provided by Mozilla by creating a sub-domain of mozilla-iot.org: sosg.mozilla-iot.org. To try it yourself, we recommend going through the README file in the Gateway GitHub repository. Also, a great step-by-step guide has been written by Ben Francis: “How to build a private smart home with a Raspberry Pi and Things Gateway”.

Things Add-ons

The Mozilla Things Gateway has introduced an add-on system, loosely modeled after the add-on system in Firefox, to allow new features or devices, such as adapters, to be added to the Things Gateway. The tutorial from James Hobin and Michael Stegeman on “Creating an Add-on for Project Things Gateway” is a good place to grasp the concepts of Add-on, Adapter and Device, and to start creating your own add-ons. In our demo, we have introduced fan, lamp and thermostat add-ons, shown below, to support our own hardware devices.

Phil Coval has posted a blog explaining how to get started and how to establish basic automation using I2C sensors and actuators on the gateway. It is the basis for our add-ons.

Holiday Application

The holiday application is a small Node.js program that combines the functions of a client web server, an OAuth client and a browser user interface.

The application consists of two parts. The first is for the holiday maker to get authorization from the Gateway to access Things at the holiday home. Once authorized, the application moves to the second part: Things access and control.



The OAuth client implementation is based on simple-oauth2, a Node.js client library for OAuth 2.0. The library is open sourced under the Apache License, Version 2.0, and is available on GitHub.

The application code can be accessed here. The README file provides instructions for setting up the application.

Ownership and Usership of Things

So here we have it: the relationships among the Things owner, the Things user, the third-party application, the Gateway and the Things.


  • The holiday home owner is the Things Gateway user and has full control of the Things Gateway and the Things.
  • The holiday maker is a temporary user of the Things and has no access to the Gateway.
  • The holiday home owner uses the Gateway to authorize the holiday maker’s access to the Things, with defined scopes.
  • The holiday application accesses the Things through the Gateway.

User Authorization

The Things Gateway provides a system for safely authorizing third-party applications using the de-facto authorization standard OAuth 2.0. The workflow for our demo use case is shown in the diagram below.

The third-party application user, the Holiday App User in this case, requests authorization to access the Gateway’s Web Thing API. The Gateway presents the request list to the Gateway User, the holiday home owner, as shown below.

With the holiday home owner’s input, the Gateway responds with an authorization code. The holiday application then requests to exchange the authorization code for a JSON Web Token (JWT). The token has a scope that indicates what access was actually granted by the holiday home owner; note that the granted token scope can only be a subset of the requested scope. With the JWT granted, the holiday application can access the Things the owner granted access to.
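As a rough sketch of that exchange, assuming the gateway exposes a standard OAuth 2.0 token endpoint (the /oauth/token path, subdomain and credentials below are placeholders):

# Exchange the authorization code for a JWT access token
curl -X POST https://example.mozilla-iot.org/oauth/token \
     -u "$CLIENT_ID:$CLIENT_SECRET" \
     -d grant_type=authorization_code \
     -d code="$AUTH_CODE" \
     -d redirect_uri="https://holiday-app.example/callback"

The JSON response carries the JWT in its access_token field, along with the granted scope.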

Demo Video

We also created a demo video that ties together the different parts discussed above; it is available at https://vimeo.com/271272094.

What’s Next?

The Mozilla Gateway is a work in progress and has not yet reached the production stage. There are a lot of exciting developments happening. Why not get involved?

How to Securely Encrypt A Linux Home Directory

These days our computers enable access to a lot of personal information that we don’t want random strangers to access; things like financial login information come to mind. The problem is that it’s hard to make sure this information isn’t leaked somewhere in your home directory, like the cache files of your web browser. Obviously, in the event your computer gets stolen you want your data at rest to be secure; for this reason you should encrypt your hard drive. Sometimes this is not a good solution, as you may want to share your device with someone you might not want to give your encryption password to. In this case, you can encrypt only the home directory of your specific account.

Note: This article focuses on security for data at rest, that is, once the device is out of your reach; other threat models may require different strategies.

Improving Upon Linux Encryption

I have found the home directory encryption feature of most Linux distributions to be lacking in some way. My main concern is that I can’t change my password, as it’s used to directly unlock the encrypted directory. What I would like is a pass phrase that is used as the encryption key, but is itself encrypted with my password. This way, I can change my password by re-encrypting the pass phrase with a different password. I want it to be a pass phrase because that makes it possible to back it up in my global password manager, which protects me in case the file holding the key gets corrupted, and it allows me to share it with my IT admin, who can then put it in cold storage.

Generate a Pass Phrase

So, how do we do this? First, generate the key; for that, I have slightly modified a Bitcoin wallet pass phrase generator, which I use as follows:

$ git clone https://github.com/Bluebugs/bitcoin-passphrase-generator.git
$ cd bitcoin-passphrase-generator
$ ./bitcoin-passphrase-generator.sh 32 2>/dev/null | openssl aes256 -md sha512 -out my-key

This will prompt you for the password to encrypt your pass phrase with, and this will become your user password. Once this command has been run, it will generate a file that contains a pass phrase made of 32 words from your system dictionary. On Arch Linux, this dictionary contains 77,649 words, so there are 77,649^32 possibilities, which is well above 2^256 and should be secure enough for some time. To check the content of your pass phrase and back it up, run the following:

$ openssl aes256 -md sha512 -d -in my-key

Once you have entered your password, it will output your pass phrase: a line of words that you can now back up in your password manager.

Encrypt Your Home Directory

Now, let’s encrypt the home directory. I have chosen to use ext4 per-directory encryption because it lets you share the hard drive with other users in the future instead of requiring you to split it into different partitions. To initialize your home directory, first move your current one out of the way (this means stopping all processes running under your user ID and logging in under another ID). After this, load the key into the kernel:

$ openssl aes256 -md sha512 -d -in my-key | e4crypt add_key 2>/dev/null

This will ask you for the password you used to protect your pass phrase; once entered, it adds the pass phrase to your current user kernel key ring. With this, you can now create your home directory, but first get the key handle with the following command:

$ keyctl show | grep ext4
12345678 --alsw-v    0   0   \_ logon: ext4:abcdef1234567890

The key handle is the text that follows “ext4:” at the end of the second line above; it can now be used to create your new encrypted home directory:

$ mkdir /home/user
$ e4crypt set_policy abcdef1234567890 /home/user

After this, you can copy all of the data from your old user directory to the new directory and it will be properly encrypted.

Setup PAM to Unlock and Lock Your Home Directory

Now, we need to hook into PAM session handling to get it to load the key and unlock the home directory. I will assume that you have stored my-key at /home/keys/user/my-key. We’ll use pam_mount to run a custom script that adds the key to the system at login and removes it at logout. The script for mounting and unmounting can be found on GitHub.

The interesting bit that I’ve left out of the mounting script is the possibility of getting the key from an external device, or maybe even your phone. Also, you’ll notice the need to be part of the wheel group to cleanly unmount the directory; otherwise the files will still be in the cache after logout, and it will be possible to navigate the directory and create files until they are removed. I think a clean way to handle this might be to have a script for all users that can flush the cache without asking for a sudo password; this would be an easy improvement to make. Finally, modify the PAM system-login file as explained in the Arch Linux documentation, with one additional change: disabling pam_keyinit.so.

In /etc/pam.d/system-login, paste the following:

#%PAM-1.0

auth required pam_tally.so onerr=succeed file=/var/log/faillog
auth required pam_shells.so
auth requisite pam_nologin.so
auth optional pam_mount.so
auth include system-auth

account required pam_access.so
account required pam_nologin.so
account include system-auth

password optional pam_mount.so
password include system-auth

session optional pam_loginuid.so
#session optional pam_keyinit.so force revoke
session [success=1 default=ignore] pam_succeed_if.so service = systemd-user quiet
session optional pam_mount.so
session include system-auth
session optional pam_motd.so motd=/etc/motd
session optional pam_mail.so dir=/var/spool/mail standard quiet
-session optional pam_systemd.so
session required pam_env.so

Now, if you want to change your password, you need to decrypt the key, change the password, and re-encrypt it. The following command will do just that:

$ openssl aes256 -md sha512 -d -in my-old-key | openssl aes256 -md sha512 -out my-new-key

Et voilà !

 

How to Prototype Insecure IoTivity Apps with the Secure Library

IoTivity 1.3.1 has been released, and with it come some important new changes. First, you can rebuild packages from sources, with or without my hotfix patches, as explained recently in this blog post. For ARM users (of the ARTIK7), the fastest option is to download precompiled packages as RPMs for Fedora 24 from my personal repository, or check the ongoing work for other OSes.

Copy and paste this snippet to install the latest IoTivity from my personal repo:

distro="fedora-24-armhfp"
repo="bintray--rzr-${distro}"
url="https://bintray.com/rzr/${distro}/rpm"
rm -fv "/etc/yum.repos.d/$repo.repo"
wget -O- $url | sudo tee /etc/yum.repos.d/$repo.repo
grep $repo /etc/yum.repos.d/$repo.repo

dnf clean expire-cache
dnf repository-packages "$repo" check-update
dnf repository-packages "$repo" list --showduplicates # list all published versions
dnf repository-packages "$repo" upgrade # if previously installed
dnf repository-packages "$repo" install iotivity-test iotivity-devel # remaining ones

I also want to thank JFrog for offering the Bintray service to free and open source software developers.

Standalone Apps

In a previous blog post, I explained how to run the examples that ship with the release candidate. You can also try other existing examples (rpm -ql iotivity-test), but some don’t work properly. In those cases, try the 1.3-rel branch, and if you’re still having problems please report a bug.

At this point, you should know enough to start making your own standalone app and use the library like any other, so feel free to get inspired by, or even fork, the demo code I wrote to test integration on various OSes (Tizen, Yocto, Debian, Ubuntu, Fedora, etc.).

Let’s clone the sources from the repo. Meanwhile, if you’re curious, read the description of the tutorial projects collection.

sudo dnf install screen psmisc git make
git clone http://git.s-osg.org/iotivity-example
# Flatten all subprojects to "src" folders
make -C iotivity-example
# Explore all subprojects
find iotivity-example/src -iname "README*"

 

 

Note that most of the examples were written for prototyping, and security was not enabled at the time they were written (on 1.2-rel, security is not enabled by default except on Tizen).

Serve an Unsecured Resource

Let’s go directly to the clock example, which supports security mode; this example implements the OCF OneIoT Clock model and demonstrates the CoAP Observe verb.

cd iotivity-example
cd src/clock/master/
make
screen
./bin/server -v
log: Server:
Press Ctrl-C to quit....
Usage: server -v
(...)
log: { void IoTServer::setup()
log: { OCStackResult IoTServer::createResource(std::__cxx11::string, std::__cxx11::string, OC::EntityHandler, void*&)
log: { FILE* override_fopen(const char*, const char*)
(...)
log: } FILE* override_fopen(const char*, const char*)
log: Successfully created oic.r.clock resource
log: } OCStackResult IoTServer::createResource(std::__cxx11::string, std::__cxx11::string, OC::EntityHandler, void*&)
log: } void IoTServer::setup()
(...)
log: deadline: Fri Jan  1 00:00:00 2038
oic.r.clock: { 2017-12-19T19:05:02Z, 632224498}
(...)
oic.r.clock: { 2017-12-19T19:05:12Z, 632224488}
(...)

Then, start the observer in a different terminal or on a different device (if using GNU screen, type Ctrl+a c to create a new window and Ctrl+a a to switch back to the server screen):

./bin/observer -v
log: { IoTObserver::IoTObserver()
log: { void IoTObserver::init()
log: } void IoTObserver::init()
log: } IoTObserver::IoTObserver()
log: { void IoTObserver::start()
log: { FILE* override_fopen(const char*, const char*)
(...)
log: { void IoTObserver::onFind(std::shared_ptr<OC::OCResource>)
resourceUri=/example/ClockResUri
resourceEndpoint=coap://192.168.0.100:55236
(...)
log: { static void IoTObserver::onObserve(OC::HeaderOptions, const OC::OCRepresentation&, const int&, const int&)
(...)
oic.r.clock: { 2017-12-19T19:05:12Z, 632224488}
(...)

OK, now it should work and the date should update, but keep in mind it’s still *NOT* secured at all, as the resource is using a clear channel (a coap:// URI).

To learn about the necessary changes, let’s have a look at the commit history of the clock sub-project. Server-side persistent storage needed to be added to the configuration; we can see from the trace that the Secure Resource Manager (SRM) loads credentials by overriding the fopen functions (these are designed to use hardware security features like ARM’s Secure Element / secure key storage). The client or observer also needs a persistent storage update.

git diff clock/1.2-rel src/server.cpp

(...)
 
+static FILE *override_fopen(const char *path, const char *mode)
+{
+    LOG();
+    static const char *CRED_FILE_NAME = "oic_svr_db_anon-clear.dat";
+    char const *const filename
+        = (0 == strcmp(path, OC_SECURITY_DB_DAT_FILE_NAME)) ? CRED_FILE_NAME : path;
+    return fopen(filename, mode);
+}
+
+
 void IoTServer::init()
 {
     LOG();
+    static OCPersistentStorage ps {override_fopen, fread, fwrite, fclose, unlink };
     m_platformConfig = make_shared<PlatformConfig>
                        (ServiceType::InProc, // different service ?
                         ModeType::Server, // other is Client or Both
                         "0.0.0.0", // default ip
                         0, // default random port
                         OC::QualityOfService::LowQos, // qos
+                        &ps // Security credentials
                        );
     OCPlatform::Configure(*m_platformConfig);
 }

This example uses generic credential files for the client and server with maximum privileges, just like unsecured mode (or the default 1.2-rel configuration).

cat oic_svr_db_anon-clear.json
{
    "acl": {
       "aclist2": [
            {
                "aceid": 0,
                "subject": { "conntype": "anon-clear" },
                "resources": [{ "wc": "*" }],
                "permission": 31
            }
        ],
        "rowneruuid" : "32323232-3232-3232-3232-323232323232"
    }
}

These files will not be loaded directly because they need to be compiled to the CBOR binary format using IoTivity’s json2cbor (which, despite the name, does more than convert JSON files to CBOR; it also updates some fields).

Usage is straightforward:

/usr/lib/iotivity/resource/csdk/security/tool/json2cbor oic_svr_db_client.json  oic_svr_db_client.dat

To recap, the minimal steps that are needed are:

  1. Configure the resource’s access control list to use the new format introduced in 1.3.
  2. Use a clear channel (“anon-clear”) for all resources (wildcard “wc”: “*”); this is the reason coap:// was shown in the log.
  3. Set the verb (CRUD+N) permissions to maximum, according to the OCF_Security_Specification_v1.3.0.pdf document.

One very important point: don’t do this in a production environment; you don’t want to be held responsible for neglecting security. It is, however, perfectly fine for getting started with IoTivity and prototyping with the latest version.

 

Still Unsecured, Then What?

With a faulty security configuration you’ll face the OC_STACK_UNAUTHORIZED_REQ error (Response error: 46). I have tried to sum up the minimal necessary changes, but keep in mind that you shouldn’t stop here. Next, you need to set up ACLs to match the desired policy as specified by Open Connectivity, check the hints linked on IoTivity’s wiki, or use a provisioning tool. Stay tuned for more posts that cover these topics.

 

 

 

One Small Step to Harden USB Over IP on Linux

The USB over IP kernel driver allows a server system to export its USB devices to a client system over an IP network via the USB over IP protocol. Exportable USB devices include physical devices and software entities created on the server using the USB gadget subsystem. This article will cover a major USB over IP bug in the Linux kernel that was recently uncovered; it created some significant security issues, but these were resolved with help from the kernel community.

The Basics of the USB Over IP Protocol

There are two USB over IP server kernel modules:

  • usbip-host (stub driver): A stub USB device driver that can be bound to physical USB devices to export them over the network.
  • usbip-vudc: A virtual USB Device Controller that exports a USB device created with the USB Gadget Subsystem.

There is one USB over IP client kernel module:

  • usbip-vhci: A virtual USB Host Controller that imports a USB device from a USB over IP capable server.

Finally, there is one USB over IP utility:

  • usbip-utils: A set of user-space tools used to handle connection and management; it is used on both the client and server sides.

The USB/IP protocol consists of a set of packets exchanged between the client and server that query the exportable devices, request an attachment to one, access the device, and finally detach once finished. The server responds to these requests from the client, and the exchange is a series of TCP/IP packets whose payload carries the USB/IP packets over the network. I’m not going to discuss the protocol in detail in this blog post; please refer to usbip_protocol.txt to learn more.
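To make this exchange more concrete, here is what a typical session with the usbip-utils tools looks like; this is a sketch, and the bus ID, port number and server address are examples that will differ on your systems:

# On the server: start the daemon, list local devices, bind one to usbip-host
sudo usbipd -D
usbip list -l
sudo usbip bind -b 1-1

# On the client: query the server's exportable devices and attach to one
usbip list -r 192.168.0.10
sudo usbip attach -r 192.168.0.10 -b 1-1

# On the client, once finished: detach the imported device (port 0 here)
sudo usbip detach -p 0

Each of these client-side actions maps onto the USB/IP request packets described above.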

Identifying a Security Problem With USB Over IP in Linux

When a client accesses an imported USB device, usbip-vhci sends a USBIP_CMD_SUBMIT to usbip-host, which submits a USB Request Block (URB). An endpoint number, transfer_buffer_length, and number of ISO packets are among the URB fields carried in a USBIP_CMD_SUBMIT packet. There are some potential vulnerabilities here: a malicious client could send a packet with an invalid endpoint, or a very large transfer_buffer_length with a large number of ISO packets. A bad endpoint value could force usbip-host to access memory out of bounds unless usbip-host validates the endpoint to be within the valid range of 0-15. A large transfer_buffer_length could result in the kernel allocating large amounts of memory. Either of these malicious requests could adversely impact the server system’s operation.

Jakub Jirasek from Secunia Research at Flexera reported these vulnerabilities in the driver’s handling of malicious input. In addition, he reported an instance of a socket pointer address (a kernel memory address) leaking in a world-readable sysfs USB/IP client-side file and in debug output. Unfortunately, the USB over IP driver has had these vulnerabilities since the beginning. The good news is that they have now been found and fixed; I sent a series of four fixes to address all of the issues reported by Jakub Jirasek. In addition, I am continuing to look for other potential problems.

All of these problems are the result of missing validation checks on the input and incorrect handling of error conditions; my fixes add the missing checks and take proper action. These fixes will propagate into the stable releases within the next few weeks. One exception is the kernel address leak, which stemmed from an intentional design decision to provide a convenient way to find the IP address from socket addresses; that decision opened a security hole.

Where are these fixes?

The fixes are going into the 4.15 and stable releases. The fixes can be found in the following two git branches:

Secunia Research has created the following CVEs for the fixes:

  • CVE-2017-16911 usbip: prevent vhci_hcd driver from leaking a socket pointer address
  • CVE-2017-16912 usbip: fix stub_rx: get_pipe() to validate endpoint number
  • CVE-2017-16913 usbip: fix stub_rx: harden CMD_SUBMIT path to handle malicious input
  • CVE-2017-16914 usbip: fix stub_send_ret_submit() vulnerability to null transfer_buffer

Do it Right the First Time

This is a great example of how open source software can benefit from having many eyes looking over it to identify problems so the software can be improved. However, after solving these issues, my takeaway is to be proactive in detecting security problems, and better yet, to avoid introducing them entirely. Failing to validate input is an obvious coding error that can allow users to send malicious packets with severe consequences. In addition, be mindful of exposing sensitive information such as kernel pointer addresses. Fortunately, we were able to work together to solve these problems, which should make USB over IP more secure in the Linux kernel.

Improve System Entropy to Speed Up Secure Internet Connections

After my previous blog post, you should now be using SSH and Tor all the more often, but things are probably slow when you are trying to set up a secure connection this way. This may well be because your computer lacks a proper source of entropy to create secure cryptographic keys. You can check your system's entropy with the following command:

 $ cat /proc/sys/kernel/random/entropy_avail

This will return a number, hopefully above 3,000, because that’s roughly what is needed to keep up with typical demands. So what do you do if it’s not high enough? This article will cover two tips to improve your computer’s entropy. All examples in this guide are for Linux distributions that use systemd.
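If you want to watch how the entropy pool behaves over time, for instance while generating keys, one simple way is the standard watch utility:

$ watch -n 1 cat /proc/sys/kernel/random/entropy_avail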

rngd

rngd is a tool designed to feed the system with more entropy from various sources. It is part of the rng-tools package. After installing it, the rngd service needs to be enabled and started; the following commands will do so:

$ systemctl enable rngd.service
$ systemctl start rngd.service

tpm

The Trusted Platform Module (TPM) has a hardware random number generator that can also be used to improve system entropy. If your system has a TPM, it will be available for rngd to use. Most modern computers come with a TPM these days; you can check whether yours does with the following command:

 $ lsmod |grep tpm

If this returns a result, you can enable rngd to use the TPM with the following:

 $ modprobe tpm-rng

For a more permanent solution, do the following:

 $ echo "tpm-rng" > /etc/modules-load.d/tpm.conf

Once this is done, find the location of the configuration file with the following:

$ cat /etc/systemd/system/multi-user.target.wants/rngd.service
[Unit]
Description=Hardware RNG Entropy Gatherer Daemon

[Service]
EnvironmentFile=/etc/conf.d/rngd
ExecStart=/usr/bin/rngd -f $RNGD_OPTS

[Install]
WantedBy=multi-user.target

With this information, you can now modify /etc/conf.d/rngd with the following:

RNGD_OPTS="-o /dev/random -r /dev/hwrng"

Restart rngd.service and check the entropy on your system again. This should make setting up cryptographic keys slightly faster.

Improving the Security of Your SSH Configuration

Most developers make use of SSH servers on a regular basis, and it’s quite common to be a bit lazy when it comes to administering some of them. However, this can create significant problems because SSH is usually served over a port that’s remotely accessible. I always spend time securing my own SSH servers according to some best practices, and you should review the steps in this article yourself. This blog post will expand upon those best practices by offering some improvements.

Setup SSH Server Configuration

The first step is to make the SSH service accessible only via the local network and Tor. Tor brings a few benefits for an SSH server:

  • Nobody knows where users are connecting to the SSH server from.
  • Remote scans need to know the hidden service address Tor uses, which reduces the risk of automated scans probing for known logins/passwords and bugs in the SSH server.
  • It’s always accessible remotely, even if the user’s IP address changes; there’s no need to register IP addresses or track changes.

To do so, you’ll need to run a Tor leaf node (if you don’t know how, the Arch Linux wiki has a good article on the subject). Then, add the following lines to the end of the torrc configuration file:

HiddenServiceDir /var/lib/tor/hidden_service/ssh
HiddenServicePort 22 127.0.0.1:22

Restart Tor, then edit the sshd_config file according to the best practices linked above. It should include something similar to the following configuration:

ListenAddress 127.0.0.1:22

Protocol 2

HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ed25519_key

AuthorizedKeysFile .ssh/authorized_keys

IgnoreRhosts yes
AuthenticationMethods publickey,keyboard-interactive
PasswordAuthentication no
ChallengeResponseAuthentication yes
UsePAM yes
AllowGroups ssh-users

UsePrivilegeSeparation sandbox

Subsystem sftp /usr/lib/ssh/sftp-server

KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-ripemd160-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,hmac-ripemd160,umac-128@openssh.com

You will notice a few changes compared to the directions given in the first link. The first change is the addition of UsePrivilegeSeparation sandbox. This further restricts and isolates the process that parses the incoming connection. You might have a different implementation depending on the distro you use, but it’s an improvement anyway.

The second set of changes includes adding keyboard-interactive to AuthenticationMethods and setting ChallengeResponseAuthentication to yes. This adds the use of a One Time Password (OTP) in addition to a public key. One time passwords are a nice addition because they require additional knowledge that changes over time in order to log in. Many websites, including GitHub, Facebook, Gmail, and Amazon, do this as an extended means of protecting login information. If you haven’t turned on OTP or 2FA on these sites, you really should do so. Now we’ll configure the SSH server to use this.

In an OTP setup, each user has their own secret key, and this key is different for every service. The following setup uses pam_oath from the oath-toolkit package; please install it before continuing. For each user you want to enable OTP for, do the following:

$ head -10 /dev/urandom | sha512sum | cut -b 1-30
a0423afcb323d675ba9bbc36d18253

Add the following line to /etc/users.oath for each user:

HOTP/T30/6 user - a0423afcb323d675ba9bbc36d18253

The hexadecimal key should be unique and you shouldn’t copy the one in this example! Once you have added all users, make sure nobody can access this file:

 # chmod 600 /etc/users.oath
 # chown root /etc/users.oath

Now, let’s enforce the OTP code for all SSH logins by adding the following line at the beginning of /etc/pam.d/sshd:

 auth	  sufficient pam_oath.so usersfile=/etc/users.oath window=30 digits=6

With all that done, you now need to add all the users to the ssh-users group; this group needs to be created first with sudo groupadd ssh-users. Then, run the following command for each user:

sudo usermod -a -G ssh-users user

Now, you can restart your SSH server, and it will require you to input your OTP code. You can get that code by typing the following command:

 oathtool -v -d6 a0423afcb323d675ba9bbc36d18253

This will output the following:

Hex secret: a0423afcb323d675ba9bbc36d18253
Base32 secret: UBBDV7FTEPLHLOU3XQ3NDAST
Digits: 6
Window size: 0
Start counter: 0x0 (0)

621877

The last line is the password you’ll need, and it will change over time. Obviously, this isn’t very nice to type every time you log in to your server. Today, most people have a smart phone with a decent level of security on it. Most manufacturers provide full disk encryption and, sometimes, even containers like Samsung Knox. So let’s use the phone for this purpose. I use FreeOTP on Android inside Samsung Knox; it should provide ample security for the task. The easiest method to upload a key to the phone is to use a QR code. Install qrencode and proceed with the following line:

qrencode -o user.png 'otpauth://totp/user@machine?secret=UBBDV7FTEPLHLOU3XQ3NDAST'

This should generate a PNG file of a QR code you can use to quickly and easily create a key for each user. Obviously, this PNG should be handled carefully, as it contains the secret key for your OTP configuration!


Setup SSH Client Configuration

Now it’s time to configure the SSH client you will use to connect to the server; if you followed the instructions in the article linked above, you should already have started on a secure configuration for your SSH client. If so, you should have something like the following, either at the system level in /etc/ssh/ssh_config or in your user config ~/.ssh/config.

Host github.com
    KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1

Host *
    KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
    PubkeyAuthentication yes
    HostKeyAlgorithms ssh-ed25519-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ssh-ed25519,ssh-rsa
    Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
    MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-ripemd160-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,hmac-ripemd160,umac-128@openssh.com
    ServerAliveInterval 300
    ServerAliveCountMax 2
    TCPKeepAlive yes
    Compression yes
    CompressionLevel 9
    UseRoaming no

Host *.onion
    ProxyCommand socat - SOCKS4A:localhost:%h:%p,socksport=9050

If you have configured your server to only answer on Tor as described in my previous blog post, you probably want to reach it easily from the various devices you use. Once Tor has been installed on them, it’s possible to contact the server at the address given in /var/lib/tor/hidden_service/ssh/hostname. Remembering that random sequence of numbers and letters is not trivial; this is where one last trick in the user config helps. Append the following lines to the client configuration file:

Host nice.onion
    Hostname random4name.onion

Here, nice is the name you will remember to reach the SSH server, and random4name.onion is the real name of the hidden service. If you have not generated a public key for your client, it is time to do so with the following commands:

ssh-keygen -t ed25519 -o -a 100
ssh-keygen -t rsa -b 4096 -o -a 100

These commands generate two keys, but you really only need to execute the first line; ed25519 key generation is more efficient than a 4,096-bit RSA key while being just as secure. If you have any keys that weren’t generated by one of these commands, you should regenerate them now and update the authorized keys on the server as soon as possible. Don’t forget to protect them with a long pass phrase and use ssh-copy-id to deploy them to the server.

You should now be able to fully trust connecting the server to the public Internet. One final note: never use ssh -A to connect to a server, because it exposes your authentication agent (and thus the use of your private key) to any admin on the server, without password protection. If you need to go through an intermediate host, you should instead use ProxyCommand in your client configuration.
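For example, a minimal sketch of such a configuration, with placeholder host names, would look like this in ~/.ssh/config:

# Reach an internal host via a bastion without forwarding your agent
Host internal.example
    ProxyCommand ssh -W %h:%p bastion.example

Here, ssh -W forwards standard input and output to the destination host and port, so your keys never leave your own machine.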

Running letsencrypt as an Unprivileged User

Running letsencrypt as an unprivileged user (non-root) is surprisingly easy, and even more surprisingly, undocumented. There is no mention of it in the official documentation, nor was I able to find anything online. There are alternative clients designed to run unprivileged, but they are not as beginner-friendly as the official one. Personally, I’ve switched to acme-tiny (and created an AUR package for it). It’s much smaller and lets me have an even more secure setup.

Why would you want to bother with this? One word: security. You should always strive to run every process with the lowest privileges possible because this reduces the chances of data loss as a result of a bug. More importantly, this reduces the chances of your server being compromised and thus improves overall security.

Summary

In this tutorial we will set up letsencrypt to run as an unprivileged user using the webroot plugin. This tutorial uses basic letsencrypt commands for simplicity; refer to the official documentation for more advanced usage.

Definitions and assumptions:

  • The domain: example.com
  • The web server’s web root: /srv/http/example.com
  • Commands preceded by # should be run as root.
  • Commands preceded by $ should be run as the letsencrypt user.

Prepare the Environment

First we need to create an unprivileged user for letsencrypt; I chose letsencrypt. The following command will create a new system user without a login shell or a home directory.

# useradd --shell /bin/false --system letsencrypt

Now we will create the needed directory for the webroot plugin, and set the correct permissions.

# mkdir -p /srv/http/example.com/.well-known/acme-challenge
# chown -R letsencrypt: /srv/http/example.com/.well-known

Optional: verify the web server can serve files created by our user:

$ echo "works!" > /srv/http/example.com/.well-known/acme-challenge/test.txt
$ curl example.com/.well-known/acme-challenge/test.txt

If the last command printed “works!”, everything works. Otherwise, something is wrong with your configuration; that is unfortunately out of scope for this tutorial, but feel free to contact me, I might be able to help.

Setup letsencrypt

There are two options for this step. The first option is easier, and is best if you already have a working letsencrypt setup. The second option is more correct and is preferred if you are starting fresh.

Option 1: Update the Permissions of the Default Paths

By default letsencrypt (at least on Arch Linux) uses these three paths:

  • logs-dir: /var/log/letsencrypt
  • config-dir: /etc/letsencrypt
  • work-dir: /var/lib/letsencrypt

We need to change these directories to be owned by our user and group:

# chown -R letsencrypt: /var/log/letsencrypt /etc/letsencrypt /var/lib/letsencrypt

Now we need to run letsencrypt so it creates the initial configuration and our first certificate.

$ letsencrypt certonly --webroot -w /srv/http/example.com -d example.com -d www.example.com

At this stage letsencrypt will complain about not running as root; that is fine, ignore it. Just follow the steps and answer the questions.

Option 2: Create New Directories for letsencrypt

Letsencrypt supports a few undocumented flags that let you change the running environment.

First we need to create the relevant directory structure; for simplicity, I chose /home/letsencrypt as the base directory and the rest as subdirectories:

# mkdir /home/letsencrypt
# chown letsencrypt: /home/letsencrypt

And as the user:

$ cd /home/letsencrypt
$ mkdir logs config work

Now we can run letsencrypt as we normally do, just with the addition of the --logs-dir, --config-dir and the --work-dir flags.

$ letsencrypt certonly --config-dir /home/letsencrypt/config --work-dir /home/letsencrypt/work \
 --logs-dir /home/letsencrypt/logs/ --webroot -w /srv/http/example.com -d example.com -d www.example.com

At this stage letsencrypt will complain about not running as root; that is fine, ignore it. Just follow the steps and answer the questions.

Verify Functionality

If you got here, you should already have your certificate issued.

You can verify this by running:

Verify Option 1:

$ ls /etc/letsencrypt/live/example.com

Verify Option 2:

$ ls /home/letsencrypt/config/live/example.com

This should output: cert.pem, chain.pem, fullchain.pem, and privkey.pem.

Certificate Renewal

Certificates need to be renewed before they expire, or users will receive ominous warnings when visiting your site. You should run letsencrypt in a cron job so the certificate is renewed before it expires (at the time of writing, letsencrypt certificates are valid for 3 months). I have a cron job running once a month.
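As a sketch, the letsencrypt user's crontab for a monthly renewal using the Option 2 paths could look like the following; adjust the schedule and paths to your setup:

# Renew at 03:00 on the 1st of every month (crontab -e as the letsencrypt user)
0 3 1 * * letsencrypt certonly --agree-tos --renew-by-default --config-dir /home/letsencrypt/config --work-dir /home/letsencrypt/work --logs-dir /home/letsencrypt/logs/ --webroot -w /srv/http/example.com -d example.com -d www.example.com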

When renewing, you should run:

Renew Option 1:

$ letsencrypt certonly --agree-tos --renew-by-default --webroot -w /srv/http/example.com -d example.com \
 -d www.example.com

Renew Option 2:

$ letsencrypt certonly --agree-tos --renew-by-default --config-dir /home/letsencrypt/config \
 --work-dir /home/letsencrypt/work --logs-dir /home/letsencrypt/logs/ --webroot -w /srv/http/example.com \
 -d example.com -d www.example.com

Important: do not forget to make the web server reload the certificates after they are renewed. Nginx, for example, does not do this automatically.
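On a systemd-based distribution, assuming nginx runs as a systemd service, reloading boils down to one root command, which you can also append to the renewal cron job:

# systemctl reload nginx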

Some Final Notes

For more information about letsencrypt, please refer to the official documentation.

This tutorial does not cover setting up your web server to use the new certificates. This is very simple and covered at length elsewhere.

However, here is an example for nginx:

server {
     server_name  example.com;
     ...

     ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
     ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

     ...
}

Letsencrypt is an incredibly important tool for providing better security on the web, so if you have a site that doesn’t currently offer HTTPS encryption, I highly encourage you to follow this guide. Please let me know if you encounter any issues or have any suggestions, or feel free to leave a comment on this article.

As originally posted on my blog.

An Introduction to Installing Your First Let’s Encrypt HTTPS Certificate

The usage of https has so far been somewhat restricted on open source projects because of the cost of acquiring and maintaining certificates. As a result of this, and the need to improve Internet security, several projects are working on providing free valid certificates. Among those projects, Let’s Encrypt launched a public beta last week, on December 3, 2015.

The Let’s Encrypt Approach

Let’s Encrypt is a Linux Foundation Collaborative Project that was started to fulfill the Electronic Frontier Foundation’s (EFF) long-term mission to encrypt the web. According to the EFF, the “aim is to switch hypertext from insecure HTTP to secure HTTPS. That protection is essential in order to defend Internet users against surveillance of the content of their communications; cookie theft, account hijacking and other web security flaws; cookie and ad injection; and some forms of Internet censorship.”

With that goal in mind, the Let’s Encrypt project provides free certificates, valid for 90 days. Certificate renewals are also free, and the enrollment procedure is meant to be simple and scriptable. They have proposed an RFC to the Internet Engineering Task Force (IETF) for an automatic protocol to manage https certificates, called the Automatic Certificate Management Environment (ACME) protocol.

There are several clients that support the ACME protocol; we chose to use letsencrypt. As we had just upgraded the LinuxTV server the week before, I decided to pioneer the installation of the Let’s Encrypt certificates.

How to Use Letsencrypt to Get an https Certificate

The process is actually really simple.

The first step is to clone the letsencrypt script from https://github.com/letsencrypt/letsencrypt with:

$ git clone https://github.com/letsencrypt/letsencrypt

The first time it runs, it will install Python dependencies. The script is smart enough to identify the distribution and do the right thing in most cases. I tested it on both Fedora 23 and Debian with similar results, but some distributions like SUSE might require more work:

# cd letsencrypt
# ./letsencrypt-auto
Bootstrapping dependencies for Debian-based OSes...

After installing the packages, the script checks for any missing dependencies and installs them:

Creating virtual environment...
Updating letsencrypt and virtual environment dependencies.......
Running with virtualenv: sudo /home/mchehab/.local/share/letsencrypt/bin/letsencrypt

It will then proceed to the next step of asking for the admin’s e-mail address.

It then asks you to agree to the license terms; everything seemed fine to me, so I accepted it.

If Let’s Encrypt successfully detects the domains on your server, it will present you with a set of checkboxes to select the domains you want to serve over https.


If the script can’t detect the domains on the server, it will ask you to type them in, separated by spaces.


NOTE: The script needs either root or sudo access in order to install the needed dependencies and set up the Apache server. It also needs to run on the server where the certificates will be installed. Trying to run it on a different machine would cause an error:
Failed authorization procedure.
www.linuxtv.org (http-01): urn:acme:error:unauthorized ::
The client lacks sufficient authorization ::
Invalid response from http://www.linuxtv.org/.well-known/acme-challenge/03ocs4YOeW32134wH3Oo911sv-aJ_SK0B1R_YVCGk [130.149.80.248]: 404, git.linuxtv.org (http-01):
urn:acme:error:unauthorized ::
The client lacks sufficient authorization ::
Invalid response from http://git.linuxtv.org/.well-known/acme-challenge/oPcUtwer423oc2dVqElgVc0HxTjJfuVv1cwk1A-F0 [130.149.80.248]: 404, linuxtv.org (http-01):
urn:acme:error:unauthorized :: The client lacks sufficient authorization ::
Invalid response from http://linuxtv.org/.well-known/acme-challenge/ZuPCq4geW36d6GxcIK_GhIfaH35l1mCNOS9X67HU4 [130.149.80.248]:
404, patchwork.linuxtv.org (tls-sni-01): urn:acme:error:unauthorized ::
The client lacks sufficient authorization :: Correct zName not found for TLS SNI challenge. Found

It then asked me whether I wanted to allow both http and https, or just https. I chose to allow both, but if your site communicates sensitive information like passwords or personal data, you should consider forcing all connections to use https.


After that, it created the certificate and updated the /etc/apache2 configurations for all the sites that were enabled.

Starting to Use the New Certificates

That’s the most exciting part of the letsencrypt tool: the script adjusted all the configurations on my apache2 server and auto-reloaded it, so there’s no need to do anything else to start using it! Ubuntu, Debian, CentOS 7, and Fedora are currently the only Linux distros that support automatic configuration; other distributions will likely require manual configuration.

After running the script, my Apache server was running with the new certs with no downtime! Visitors to LinuxTV can now use https to access the site securely. We are currently working on implementing Let’s Encrypt on our blog and other internal resources here at the OSG. Here’s to a safer and more secure web!

The Essentials of IoTivity Connectivity

This article is part of a series on how IoTivity handles security for the connected IoT world.

IoTivity is a Linux Foundation Collaborative Project that implements the Open Interconnect Consortium (OIC) standard. OIC is a consortium of over 100 companies working together to develop a standard for interoperability between IoT devices. It includes a certification program to check interoperability between devices from different manufacturers.

The OIC has various task groups that each address different areas of the IoT domain. The primary group is the core group, which defines the base layer and lays the foundation for the other task groups. Other prominent task groups include security and remote connectivity.

The security task group defines the base security layer that is expected in each device; this allows devices to establish trust and provides an access control policy for other devices in a house. Remote connectivity defines how an OIC device communicates remotely. There are other groups which handle verticals such as home, industrial, and health; these groups build on top of the core group and address details specific to their domain. Based on the vertical a device belongs to, OIC will certify the device as interoperable between different vendors.

The IoTivity Client – Server Model

OIC is based on a RESTful interface concept where devices communicate with each other over well-known interfaces (resources). Each resource can have multiple attributes, including a type, an interface, the operations it’s capable of performing, and access control. The device that hosts the resources is the server, and any device that queries resources is a client. The server makes its resources discoverable to clients.


The client must establish communication with the server to access resources, and the first resource must be discovered over the network. This discovery process occurs by sending a multicast packet via CoAP (Constrained Application Protocol). CoAP is a REST-based protocol that is a trimmed-down version of HTTP; the main difference is that CoAP uses smaller headers because it is targeted primarily at constrained devices.

Multicast packets are sent to the resource /oic/res, and the device that hosts the resource being looked for responds with a unicast communication to the querying device. This response provides the address and information the querying device needs to connect to the device holding the resource and perform control operations. Control operations are typically writes, updates, or deletes of the resource attributes.
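You can observe this discovery step with a generic CoAP client; the sketch below uses coap-client from libcoap and the IPv4 “All CoAP Nodes” multicast address (224.0.1.187, default port 5683), so unsecured OIC servers on the local network answer with their resource listings:

# Multicast a GET to the well-known OIC discovery resource
coap-client -m get coap://224.0.1.187:5683/oic/res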

Discovery and access control of resources are based on the security settings established between the devices. The first step is to establish a device trust relationship, which is done via onboarding. Next, the resources are provisioned, and the devices are able to establish a secure connection with each other. This uses DTLS (Datagram Transport Layer Security) to connect the devices with the key that was created during the onboarding process; the same key is also used to encrypt and decrypt network communications. Finally, access control policies control which device has access to which resources.

Remote connectivity of devices in OIC network is handled over XMPP, and devices in the home network that need to communicate remotely must have an XMPP client and they must login to the XMPP server in order to communicate with each other. The XMPP connection establishes an in-band bytestream that uses the same security mechanism as the local area network to establish device trust relationships.