What Is IOMMU Event Tracing?

The IOMMU event tracing feature reports IOMMU events in the Linux Kernel as they happen during boot-time and run-time. IOMMU event tracing provides insight into the IOMMU device topology in the Linux Kernel. This information helps you understand which IOMMU group a device belongs to, as well as run-time device assignment changes as devices are moved from hosts to guests and back by the Kernel. The Linux Kernel moves a device from host to guest when a user requests such a change.

In addition, IOMMU event tracing helps debug BIOS and firmware problems related to the IOMMU hardware and firmware implementation, IOMMU drivers, and device assignment. For example, traces are generated when a device is detached from the host and assigned to a virtual machine, and again when the device is moved from the host domain to the VM domain, which makes it possible to debug each of these steps. The primary purpose of IOMMU event tracing is to help detect and solve performance issues.

Enabling IOMMU event tracing provides useful information about devices that use the IOMMU, as well as changes that occur in device assignments. In this article, I’ll discuss the IOMMU event tracing feature and the various classes of IOMMU events. In part two of this series, I’ll discuss how to enable the feature and use it to trace events during boot-time and run-time, and how to use IOMMU tracing to get insight into what’s happening in virtualized environments as devices get assigned from hosts to virtual machines and back. This feature helps debug IOMMU problems during development, maintenance, and support.

What is an IOMMU?

IOMMU is short for I/O Memory Management Unit. An IOMMU is hardware that translates device (I/O) addresses to the physical (machine) address space; it can be viewed as an MMU for devices. Just as the MMU maps virtual addresses to physical addresses, the IOMMU maps device addresses to physical addresses. The following picture shows a comparative depiction of the IOMMU and the MMU.

What Is IOMMU Event Tracing - iommu
A Comparison of IOMMU and MMU Address Mapping

In addition to basic mapping, the IOMMU provides device isolation via access permissions. Mapping requests are allowed or disallowed based on whether or not the device has the proper permissions to access a certain memory region. Another key feature the IOMMU brings to the table is I/O virtualization: DMA remapping hardware that adds support for isolating device accesses to memory, in addition to the translation functionality. In other words, devices present I/O addresses to the IOMMU, which translates them into machine addresses, thereby bridging the gap between device addressing capability and the system memory range.

What Is IOMMU Event Tracing - iommu_access

What Does an IOMMU Do?

IOMMU hardware provides several key features that enhance I/O performance on a system.

  • On systems that support IOMMU, one single contiguous virtual memory region can be mapped to multiple non-contiguous physical memory regions. IOMMU can make a non-contiguous memory region appear contiguous to a device
    (scatter/gather).
  • Scatter/gather optimizes streaming DMA performance for the I/O device.
  • Memory isolation and protection limit a device’s access to the memory regions that are mapped for it. As a result, faulty and/or malicious devices can’t corrupt system memory.
  • Memory isolation allows safe device assignment to virtual machines without compromising host and other guest operating systems. Similar to the faulty and/or malicious device case, devices are given access to memory regions which are mapped specifically for them. As a result, devices assigned to virtual machines will not have access to the host or another virtual machine’s memory regions.
  • The IOMMU helps address discrepancies between I/O device and system memory addressing capabilities. For example, the IOMMU enables 32-bit DMA-capable, non-DAC devices to access memory regions above 4GB.
  • The IOMMU supports hardware interrupt remapping. This feature expands limited hardware interrupts to extendable software interrupts, thereby increasing the number of interrupts that can be supported. The primary uses of interrupt remapping are interrupt isolation and the ability to translate between interrupt domains, e.g. IOAPIC vs. x2APIC on x86.

As we all know, there is no free lunch! The IOMMU introduces latency due to translation overhead in the dynamic DMA mapping path. However, most server platforms include an I/O Translation Lookaside Buffer (IOTLB) to reduce the translation overhead.
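
To make that overhead concrete, the sketch below shows the streaming DMA mapping path a driver goes through for each I/O buffer; on IOMMU-backed systems, each map and unmap sets up and tears down a translation. The function and comments are hypothetical illustrations, not code from any particular driver, and exact kernel prototypes vary between versions.

#include <linux/dma-mapping.h>

/*
 * Hypothetical driver helper: map one buffer, let the device DMA it, unmap.
 * A device limited to 32-bit DMA (dma_set_mask(dev, DMA_BIT_MASK(32)) at
 * probe time) can still reach buffers above 4GB, because the IOMMU remaps
 * the bus address on its behalf.
 */
static int send_buffer(struct device *dev, void *buf, size_t len)
{
    dma_addr_t handle;

    /* Creating the mapping may program an IOMMU translation entry. */
    handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
    if (dma_mapping_error(dev, handle))
        return -ENOMEM;

    /* ... hand 'handle' to the device and start the transfer ... */

    /* Tearing the mapping down invalidates the translation and IOTLB entry. */
    dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);
    return 0;
}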

IOMMU Groups and Device Isolation

Devices are isolated in IOMMU groups. Ideally each group contains a single device, but single-device isolation is not always possible, for a variety of reasons. Devices behind a bridge can communicate over peer-to-peer channels without their traffic ever reaching the IOMMU. Unless the I/O hardware/firmware provides a way to disable peer-to-peer communication, the IOMMU can’t ensure single-device isolation and is forced to place all the devices behind the bridge into a single IOMMU group.

Multi-function cards don’t always support the PCI Access Control Services (ACS) required to describe isolation between functions. In such cases, all functions on a multi-function card are placed in a single IOMMU group. Devices in a group can’t be separated for assignment: all devices in the group must be assigned together, even when a virtual machine needs only one of them. For example, the IOMMU might be forced to group all four ports of a multi-port card, because device isolation at port granularity isn’t possible on all hardware.

What Is IOMMU Event Tracing - multiport_device1
Network hardware with device isolation at the port level can separate its ports into distinct IOMMU groups.
What Is IOMMU Event Tracing - multiport_device2
Without port isolation, the network hardware must assign all ports to a single group.

IOMMU Domains and Protection

IOMMU domains are intended to protect against one virtual machine corrupting another virtual machine’s memory. Devices are moved from one domain to another as they are moved between VMs, or from a host to a VM. Any device in a domain is given access to the memory regions mapped for the specific domain it belongs to. When a device is assigned to a VM, it is first detached from the host and removed from the host domain, then moved to the VM domain and attached to the VM, as shown below:

What Is IOMMU Event Tracing - device_host1
Step 1: A guest OS needs access to hardware that’s currently mapped to the host.
What Is IOMMU Event Tracing - device_detached
Step 2: The hardware must first be detached from the host and transferred from the host domain to the VM domain.
What Is IOMMU Event Tracing - device_vm
Step 3: Once the hardware has been moved to the VM domain, it can be attached to the guest OS.

A Brief Overview of IOMMU Boot and Run-Time Operations

The IOMMU driver creates IOMMU groups and domains during initialization. Devices are placed in IOMMU groups based on their device isolation capabilities. iommu_group_add_device() is called when a device is added to a group, and iommu_group_remove_device() is called when a device is removed from a group.

All devices are initially attached to the host; when a user requests that a device be assigned to a VM, the device gets detached from the host and then attached to the VM. iommu_attach_device() is called to attach a device and iommu_detach_device() is called to detach it. The iommu_map() and iommu_unmap() interfaces create and delete mappings between the device address space and the system address space.
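
As a rough illustration of how these interfaces fit together, the sketch below walks a single device through the attach/map/unmap/detach life cycle. It is not code from any particular driver, and the exact prototypes differ between kernel versions:

#include <linux/iommu.h>

/* Hypothetical walk through the assignment life cycle of one device. */
static int demo_assign_device(struct device *dev, unsigned long iova,
                              phys_addr_t paddr, size_t size)
{
    struct iommu_domain *domain;
    int ret;

    domain = iommu_domain_alloc(dev->bus);       /* domain for the guest */
    if (!domain)
        return -ENOMEM;

    ret = iommu_attach_device(domain, dev);      /* traced as an attach */
    if (ret)
        goto out_free;

    ret = iommu_map(domain, iova, paddr, size,   /* traced as a map */
                    IOMMU_READ | IOMMU_WRITE);
    if (!ret) {
        /* ... the guest now drives DMA through this mapping ... */
        iommu_unmap(domain, iova, size);         /* traced as an unmap */
    }

    iommu_detach_device(domain, dev);            /* traced as a detach */
out_free:
    iommu_domain_free(domain);
    return ret;
}

These calls are where the trace events described in the rest of this article get emitted.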

A series of device additions occurs during boot. During run-time, after a device is attached, a series of maps and unmaps occurs until the device is detached.

What Is IOMMU Event Tracing - iommu_ops
IOMMU event tracing provides insight into what is occurring during all of these processes.

The ability to have visibility into device additions, deletions, attaches, detaches, maps, and unmaps is valuable in debugging IOMMU problems. As you can see below, this is exactly what IOMMU events are designed to provide. In fact, the idea for this tracing work came out of debugging several IOMMU problems without good insight into what was happening. Let’s take a look at the trace events.

IOMMU Trace Event Classes

IOMMU events are classified into group, device, map/unmap, and error classes to trace activity in each of these areas. Group class events are generated whenever a device gets added to or removed from a group. Device class events are intended for tracing device attach and detach activity. Map and unmap events trace map/unmap activity. Finally, in addition to these normal-path events, error class events trace IOMMU faults that might occur during boot-time and/or run-time.

What Is IOMMU Event Tracing - iommu_events
IOMMU Trace Event Classes

IOMMU Group Class Events

IOMMU group class events are triggered during boot. Traces are generated when devices get added to or removed from an IOMMU group. These traces provide insight into IOMMU device topology and how the devices are grouped for isolation.

  • Add device to a group – Triggered when the IOMMU driver adds a device to a group. Format: IOMMU: groupID=%d device=%s
  • Remove device from a group – Triggered when the IOMMU driver removes a device from a group. Format: IOMMU: groupID=%d device=%s
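
For the curious, the two group events above share a common event class declared with the kernel’s tracepoint infrastructure. The sketch below is written from memory rather than copied from include/trace/events/iommu.h, so field names may differ slightly, but it shows the general shape (header guards and the TRACE_SYSTEM boilerplate are omitted):

DECLARE_EVENT_CLASS(iommu_group_event,

    TP_PROTO(int group_id, struct device *dev),

    TP_ARGS(group_id, dev),

    TP_STRUCT__entry(
        __field(int, gid)
        __string(device, dev_name(dev))
    ),

    TP_fast_assign(
        __entry->gid = group_id;
        __assign_str(device, dev_name(dev));
    ),

    TP_printk("IOMMU: groupID=%d device=%s",
              __entry->gid, __get_str(device))
);

/* Both events reuse the class: one fires on add, the other on remove. */
DEFINE_EVENT(iommu_group_event, add_device_to_group,
    TP_PROTO(int group_id, struct device *dev),
    TP_ARGS(group_id, dev)
);

DEFINE_EVENT(iommu_group_event, remove_device_from_group,
    TP_PROTO(int group_id, struct device *dev),
    TP_ARGS(group_id, dev)
);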

IOMMU Device Class Events

Events in this group are triggered during run-time, whenever devices are attached to or detached from domains, for example when a device is detached from the host and attached to a guest. This information provides insight into device assignment changes during run-time.

  • Attach (add) device to a domain –  Triggered when a device gets attached (added) to a domain. Format: IOMMU: device=%s
  • Detach (remove) device from a domain – Triggered when a device gets detached (removed) from a domain. Format: IOMMU: device=%s

IOMMU Map and Unmap Events

Events in this group are triggered during run-time whenever device drivers make IOMMU map and unmap requests. This information provides insight into map and unmap requests and helps debug performance and other problems.

  • IOMMU map event –  Triggered when IOMMU driver services a map request. Format: IOMMU: iova=0x%016llx paddr=0x%016llx size=%zu
  • IOMMU unmap event – Triggered when IOMMU driver services an unmap request. Format: IOMMU: iova=0x%016llx size=%zu unmapped_size=%zu

IOMMU Error Class Events

Events in this group are triggered during run-time when an IOMMU fault occurs. This information provides insight into IOMMU faults, and is useful for logging the fault and taking measures to restart the faulting device. The information in the flags field is especially useful in debugging BIOS and firmware problems related to IOMMU hardware and firmware implementation, as well as problems resulting from incompatibilities between the OS, BIOS, and firmware in spec compliance.

  • IO Page Fault (AMD-Vi): Triggered when an IO page fault occurs on an AMD-Vi system. Format: IOMMU:%s %s iova=0x%016llx flags=0x%04x

Error class events are implemented in the common IOMMU driver code as well as in the Intel and ARM IOMMU drivers.

How Can IOMMU Event Tracing Help You?

This article is part one of a two part series on IOMMU event tracing. This introduction will help set the knowledge foundation for the second article which will cover how to use this feature to benefit you the most. Stay tuned to this blog to learn more about IOMMU event tracing!

GearVRf: The Journey from Proprietary to Open Source

The Open Source Group recently provided technical and strategic consulting to a Samsung team that has developed an exciting new Virtual Reality Framework. GearVRf is a rendering library to help speed application development on VR-supported Android devices. The team had a desire to launch an open source project around this code, and in this post we’ll share the process we went through to help them make this happen.

We believe sharing this experience is important for two main reasons. First, readers with less experience in this area will gain a sense of what’s required to take internal code, make it available under an open source license and then drive its adoption by growing a developer community. Second, readers with more experience will hopefully give us feedback on how we can do this better the next time.

What Does it Take to Launch a Successful Open Source Project?

Our process started by deciding if we were making this open source for the right reasons. We began by establishing that there were no similar projects we could join efforts with, meaning we would need to support the project ourselves. We ensured we had enough internal commitment to maintain the project, continue new development, and build a community around it, and once this was done, we moved through our checklist of necessary steps to prepare the project to be released as open source.

The next step in the process was to run legal, technical, business and marketing reviews in parallel.

  • The legal review ensured compliance with other licenses for all code used with the library. We decided to use the Apache license for the project, as well as a variant of the Linux Kernel developer certificate of origin to help protect against potential code provenance issues later.
  • The technical review ensured there were no dependencies on any internal or undeclared third-party software, provided better documentation, license, and copyright notices for the source code, and ensured the code could be compiled and built using open source tools.
  • The business review ensured internal buy-in and availability of sponsorship to sustain the open source development. The early discussion of open sourcing the code covered the benefits of doing so versus keeping it internal, so there was no reason for us to revisit that conversation during the business review.
  • Finally, the marketing review focused on designing the project’s logo and website, and on registering the domain and various social media accounts.

After all of the review processes were complete, the next major task was to define the project’s governance and processes. The governance for the GearVRf project is simple and is modeled after many other successful open source projects. Various concepts such as decision making, project roles, and community guidelines were defined to make it easy and efficient for anyone to join and participate. The governance and processes are all documented on the project’s web site.

The last building block was to set up the project infrastructure, including a Wiki to help inform others about the project and make it easier to understand how to contribute to and use the framework, as well as a mailing list server to facilitate communications for the project. Lastly, the team decided to continue hosting the source code within our Samsung GitHub account.

We Need Your Help

Like any other open source project, the success of GearVRf will need to be driven by the community that’s built around it. We hope users and developers will take advantage of this open source release to help us do something great. If you have any ideas, code, or other contributions you’d like to make, feel free to join our community and start hacking away.  If you just want to keep up with project developments, join the announcements mailing list to get updates as they happen. This project is just the beginning of what we hope are many more open source projects from Samsung, and we want to get as many people involved as possible. We welcome both contributions to the source code as well as new GearVRf apps.

Check out the project at http://www.gearvrf.org.


Open Source Weekly Wrap-Up: April 19 – 24, 2015

Samsung Launches GearVRf: an Open Source Virtual Reality Framework

GearVRf is a new open source rendering library for application development on VR-supported Android devices. The GearVRf API provides simplified access to the Oculus SDK functionality via the Java Native Interface and the GearVRf native library. This project was launched by Samsung as a collaborative effort that involved the Open Source Group. This blog will feature an article next week that covers the work we put into launching this project.

Learn more, or get involved through the Gear VRf Wiki.

Daily Wayland & Weston Builds For Ubuntu

While Canonical is focusing on Mir for Ubuntu’s display server, other developers are still working on alternatives, such as our very own Bryce Harrington. Bryce has established a daily build archive of Wayland and the reference Weston compositor. This makes it easier than ever before to try out the newest Wayland/Weston code on Ubuntu.

Read more at Phoronix.

Open Interconnect Consortium Hosts First Plugfest

The Open Interconnect Consortium hosted its first interoperability Plugfest April 22-24 in Fremont, California. This three-day, members-only event offered a valuable opportunity to test interoperability and compliance of implementations of the draft OIC specification. This plugfest is the first of several the OIC plans to hold to test implementations, refine the specification, and inform test-case development. It is a major milestone in the development of the OIC specification, indicating that member companies see value in what is being developed and are investing in the OIC specification and in products based on it.

Read more at the OIC blog.

Linux Media Summit Wrap Up

The Linux Media Summit was held in San Jose, California on March 26 in conjunction with the Embedded Linux Conference. A number of technical discussions took place to help guide the next six months of development in this area of the Kernel. In particular, new contributions to DVB pipelines, media tokens, Project Ara dynamic reconfiguration, and Android Camera v3 were some of the major topics that were discussed. Mauro Carvalho Chehab and Shuah Khan represented the Open Source Group in these discussions, and have contributed a large amount of code to this area.

Read more on the Linux Kernel media mailing list.

FFmpeg in Google Summer of Code

FFmpeg is currently dealing with a large amount of mentor allocation and student/intern ranking work after securing spots in both Google Summer of Code and the SFC’s Outreach program. Being selected for the two programs came as a surprise for this 15-year-old project, which couldn’t secure a spot in GSoC last year even though its existence is considered vital to the FOSS multimedia landscape. FFmpeg’s Outreach participation is being sponsored by the Samsung Open Source Group, which is also sponsoring GStreamer participation in this round of the program.

Read More on the FFmpeg Wiki.

 

The Tangled Terms of Testing

Overview

When it comes to testing, most developers admit they probably don’t do as much as they should. Developers, testers and even end users get blocked for various reasons, but one of the initial reasons is becoming overwhelmed by the various terms and approaches.

Buzzwords abound, including phrases like: black box, white box, unit, incremental integration, integration, functional, system, end-to-end, sanity, regression, acceptance, load, stress, performance, usability, install/uninstall, security, compatibility, comparison, alpha and beta.

Misunderstanding testing-related terms can even influence developers to do the opposite of what they need to do. Getting a handle on what exactly “unit testing” means can help break through the quagmire and move on to being more productive. The two subjects covered here are “levels” of testing and automated, build-time testing.

Levels of Testing

First, it helps to group a few related terms and lay out their basics. If you group testing by levels, it can be thought of as a progression from unit testing through integration testing, system testing, and then finally to acceptance testing. The process starts with unit tests, which cover individual parts (functions, classes, etc.), and ends with acceptance tests to verify the software is doing what was promised. Ideally, developers should be writing tests and performing unit testing while they are writing the initial code.

On the other hand, portions of system and most acceptance testing would normally fall to QA departments. Of course, with smaller projects end users or developers sometimes play the role of QA. However, it is important for them to keep in mind that they are fulfilling a different role at the time, and act accordingly.

The Tangled Terms of Testing - testing-chain

The different levels/types are usually not completely separate as there is some ambiguity, and trying to pin down the fluid reality to rigid definitions can easily waste time and give poor results. With that said, there are some general descriptions of these levels that can help get a feel for what a test may or may not count as. More importantly, keeping the different levels in mind can help to write better, more efficient tests.

Unit tests are generally the starting point and should be done by developers writing any code. They should be small, focused, and require minimal setup. They also shouldn’t perturb the system they are being run on. If one has to execute a test as root or an administrator, then it is probably not a unit test.
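
As a toy illustration of that spirit, a unit test can be as small as a plain C program built around assert(): no installation, no root, no elaborate setup. The function and its tests below are made up for this post:

#include <assert.h>

/* Hypothetical unit under test: clamp a value into the range [lo, hi]. */
static int clamp(int value, int lo, int hi)
{
    if (value < lo)
        return lo;
    if (value > hi)
        return hi;
    return value;
}

int main(void)
{
    assert(clamp(5, 0, 10) == 5);    /* already in range */
    assert(clamp(-3, 0, 10) == 0);   /* clamped to the lower bound */
    assert(clamp(42, 0, 10) == 10);  /* clamped to the upper bound */
    return 0;                        /* exit code 0 means the test passed */
}

A test like this builds and runs in a fraction of a second, which is exactly what makes it practical to run on every build.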

Integration tests combine different parts together; however, they normally don’t require an entire product to be assembled. They may or may not affect the system they are being run on, and thus start to blur the line between what might fall to build-time tests or developer-written tests.

System tests are usually a little easier to spot. The main thing that distinguishes a system test is that it tests an entire product as a whole and normally requires installation of the software that is being tested.

Acceptance tests check for end-to-end functionality and user experiences. The distinction between a system test and an acceptance test might be hard to spot, or might not really exist at all. However, in a more formalized development environment, acceptance tests are tied to meeting exit criteria or declaring releases. Frequently, engineering groups will do their own acceptance testing before declaring sufficient quality to pass on to QA, who will then begin their own set of acceptance tests.

Automated Unit Testing

The concept of automated unit testing during builds is one that’s probably the most important for faster and better quality development time and again. The general idea is to have a set of tests that are run automatically each time the software is built. Thus, if a developer changes something in one place that causes problems in other places, the automated build tests will catch things when they are first checked in, or more often before they even have a chance to be committed. The cost to fix is much lower in both time and effort (and money too) if caught early, and gets more expensive the longer problems go unresolved.

This is the area that normally sits at the unit test level, but it can extend somewhat into the realm of integration tests. The key is that such automated unit tests should be quick to build and execute. These tests run every time a developer builds the system, as well as on any framework or build machines, such as those running a continuous integration solution. If certain tests start to take too long to build or run, I often suggest running a secondary build to cover extended testing. This falls somewhere between basic build-time unit testing and full acceptance testing, and could perhaps be done once a day as opposed to during every build. From my own experience, a rule that works well is to ensure such testing is executed at least once a week.

Summary

I was surprised the first time I heard of technical directors and software architects instructing developers not to write unit tests since they “should be written by QA engineers, not developers.” However, such cases are usually a result of misunderstandings, like not realizing unit tests are not synonymous with acceptance tests. Clarifying terminology ends up saving a lot of time while simultaneously boosting software quality. Understanding what unit tests are and how they should be used is something all developers and testers should know.

Now that I’ve defined these terms in order to establish a common understanding, the next installment in this blog series will go into details about structuring and writing better unit tests.

Servo: Building a High-Performance, Safe Web Browser

Servo is a new web rendering engine that was launched by Mozilla in 2012 and is now receiving significant contributions from both Samsung and independent community members. Our goal is to produce an embeddable engine that can be used in both browsers and applications to make the web platform faster and safer, and bring it to more devices.

We started this project to address fundamental limitations of current browser engines. First, the C family of programming languages doesn’t ensure safe use of memory, which leads to the majority of all zero-day browser security bugs. Second, current engines were originally designed for use on a PC, and are challenging to scale down to low memory and low power devices. Finally, as the web platform has evolved, the tightly-coupled design of current browser engines has made it difficult to provide performance guarantees, such as 60 fps screen updates.

Memory Safety

Investigations have shown that most high priority, security critical bugs in the browser engine are related to use-after-free or buffer overruns. In Servo, we are using the Rust programming language to prevent many of these issues statically, at compile-time. In Rust, all allocated memory is owned by a single variable, and the compiler ensures that the memory cannot be referenced by multiple owners.

For example:

fn foo() {
    let x = create_thing();
    store_thing(x);
    use_thing(x); // Error - the value in 'x' has been moved!
}

This ownership model ensures that pointers to allocated memory do not outlive the release of that memory, and that concurrent threads cannot access the same piece of memory simultaneously. The Rust compiler automatically inserts all calls to allocate and free data, and because these ownership properties are statically checked, at runtime the data may be accessed freely without any extra overhead, unlike traditional approaches such as garbage collection or reference counting.

Low Memory and Power Devices

Modern browsers have achieved incredible levels of performance, even running full video games in JavaScript. While Firefox runs well on devices with only 256 MB of RAM, with Servo we are working to make it possible to bring the web platform to wearables and appliances with less than a quarter of that amount.

Parallelism has been exploited primarily to provide speedups, by using a machine with four cores in the hope that the program would run four times as fast as it would on a single core. However, in Servo we are using parallelism to provide better power usage, and are experimenting with running four cores at a more power-efficient clock speed, resulting in the same execution time but significantly lower power usage. For example, the following charts show that when running Servo on a large benchmark page at a lower clock frequency, the four cores are as fast as a single core at the higher frequency, but use 40% less power.

Servo Building a High-Performance Safe Web Browser - servo-perf
Running 4 threads at a lower frequency is as fast as a single core running at a higher frequency.

Servo Building a High-Performance Safe Web Browser - servo-power

Performance

Concurrency is the ability to perform multiple tasks in an interleaved fashion. As web sites have become more complex, the lack of concurrency in modern browsers has resulted in ‘jank’, or cases where the page fails to update because a non-graphics task is being performed on the same thread where the graphics are being rendered. In Servo, our graphics task is completely decoupled from JavaScript evaluation and webpage layout, allowing the graphics to be updated at a consistent 60fps.

Additionally, we have added concurrency within web pages. If a page has multiple iframe elements, each of which is running JavaScript, in today’s web browsers all of the iframes will pause when any one of them is executing that script. Some modern browsers plan to solve this by spawning a process per iframe, but it is unclear how that will work for sites with tens to hundreds of iframes. Such sites already exist, and will only become more prevalent as ad platforms shift from Flash to HTML5.

In Servo, we avoid this problem by using Rust’s lightweight task spawning mechanism. For each iframe in a page, the layout, graphics rendering, and script processing tasks run concurrently, and the same is true across iframes. So, without consuming a huge number of native OS threads or processes, we can already provide concurrency.

This design also has the advantage of forcing a clean architectural separation between the various pieces of our engine, which makes the addition of new web platform features both easier to write and less likely to affect unrelated pieces of the browser engine.

Current Status

Servo is intended to support the full web platform eventually, and it passes the ACID1 and ACID2 tests. Rather than focus on benchmark suites and further classic web conformance tests, we are now working towards full specification compliance through the Web Platform Tests initiative.

We are currently expanding on Servo’s functionality and are targeting an alpha-quality browser in 2015. We are always looking for new contributors, and actively mentor newcomers by curating bugs appropriate for them! Because Servo is so new, it is easy to make a very large contribution with a small amount of effort.

Please come join us!

Guest Author: Lars Bergstrom

Lars Bergstrom is a researcher at Mozilla, currently working on the Servo parallel web browser project. He received his Ph.D. from the University of Chicago‘s Computer Science department, studying under Dr. John Reppy. His Master’s paper was on the implementation of analysis and optimization passes in the Manticore parallel compiler. His Ph.D. research was on how to add mutation safely and efficiently into a functional parallel programming language. He has also done work on a runtime, a garbage collector, and most recently some extensions to control-flow analysis. Before that, he was a manager and a developer at Microsoft in the Visual Studio organization, working on next-generation software development tools technology at the Redmond, WA offices.

Samsung Open Source Meets Europe

A warm welcome from Europe! As Guy mentioned in his post, the Samsung Open Source Group (OSG) started in early 2013. Since then, the group has grown significantly and now has teams spread out over the whole world. In this post, I’ll focus on the OSG European office and leave the others (USA, Korea and India) for another time.

Our European team has approximately 20 members, with some of them located at the Samsung Research UK office in Staines-upon-Thames, within the M25 belt west of London. The rest of the team works either from their home offices or as part of a Samsung Open Source Group Lab at Szeged University in Hungary. The remote setup is a good example of how the OSG is changing Samsung’s internal culture, in addition to our external open source contributions.

The goals of the OSG teams are the same in all branches. We aim to promote open source and its value inside of the company as well as work hard to make Samsung a good citizen in various open source communities. We spend much of our time working directly on upstream projects that are important to Samsung. This process involves improving the upstream projects directly while simultaneously understanding the requirements of product teams to help them grow together. Proposing/implementing changes to upstream projects to fit specific needs, improving performance, ensuring better software quality, and teaching internal teams how to make the best use of the available software are all part of this process.

Open Source at the Individual Level

In our daily work life, the team has different areas of expertise which mostly focus around specific open source projects. Each individual has several areas of expertise, but to keep it short I will only highlight a few of the most significant.

  • LLVM: Tilmann Scheller is working on various areas of LLVM with a primary focus on ARM/AArch64 performance tuning.
  • Gstreamer: Luis De Bethencourt Guimera is involved in GStreamer pretty much all over the place, which includes general bug fixing, newer features such as an audiovisualizer base class and some release work.
  • Webkit / Blink: Habib Virji and Ziran Sun have been contributing to both Webkit and Blink in various areas with some focus on autofill and RTL.
  • Open Interconnect Consortium: Habib Virji is (besides his work on Webkit / Blink) also involved in creating specifications for the OIC to drive the IoT interests of Samsung forward.
  • Enlightenment Foundation Libraries: With Tom Hacochen, Daniel Kolesa and Stefan Schmidt, a fairly large part of the team works on the EFL. Besides leading some of the upcoming core changes (such as the design of the new API of the EFL as well as the integration of Lua as a first class scripting language), the team also plays a major role in code quality and test coverage improvements.

Cultivating Open Source

While this is what individual team members do on a daily basis, from a global viewpoint the OSG helps guide Samsung in effective usage, collaboration and creation of open source. We have to do work on both sides of the fence: a role that is both challenging and interesting. Our developers have great influence in their respective communities, and subsequently, a lot of responsibility to both Samsung and the communities they work in. Over the coming months we will publish posts from the individual team members about the work they are doing within their respective open source project areas.

There is one final thing that must be mentioned before ending this post: we are looking to fill the leadership role within our European team. If you have a strong background in open source, are interested in working for a major electronics company, and want to invest your energy towards turning it into a better place for open source developers, please send an e-mail to jobs@osg.samsung.com. Along with the email, please share some background about yourself to help us start the conversation.

Bringing Tizen to a Raspberry PI 2 Near You…

This article is part of a series on Tizen on the Raspberry Pi 2.

The Raspberry Pi is the most popular single-board computer, with more than 5 million sold. While there are numerous Linux distributions that run on the RPi, including Raspbian, Pidora, Ubuntu, OSMC, and OpenElec, the Tizen OS does not currently run on it. Since Tizen is being positioned as an important player within the growing Internet of Things (IoT) ecosystem, providing a Raspberry Pi 2 port can help developers gain more Tizen experience. For this reason, the Samsung Open Source Group decided to work on such a port. This article will go over how to build a bootable Tizen image for the Raspberry Pi 2 from a computer running Linux.

The Raspberry Pi 2 has several advantages over the first version. Among them:

  • It has a quad-core CPU
  • It runs at 900 MHz
  • It uses an ARM Cortex-A7 CPU

The ARM Cortex-A7 CPU is very nice, since most distributions are compiled for the ARM instruction set found on ARMv7 processors.

Initial Tests

Before doing a native Tizen build, we needed to determine whether the new Raspberry Pi 2 was capable of running Tizen. To do this, we used a small trick: we downloaded a pre-built Tizen image from tizen.org and borrowed a Debian boot image. Since the Tizen root image was built for ARMv7, the image ran properly on the Raspberry Pi 2, except for a few issues related to graphics. That gave us enough confidence to go to step two: building the boot image and root image targeted for the Raspberry Pi 2.

Building Images on Tizen

Currently, as described on the Tizen Wiki, there are two ways used to build a Tizen image:

  • Via GBS: The “traditional” way, which requires setting up a GBS server to compile several packages
  • Via Yocto: The “new” way, which uses OpenEmbedded bitbake recipes to generate the image. This can easily be done on a user’s machine, provided a good Internet link is available.

GBS would require a lot of time, and we would need to allocate a dedicated build server; because of this, we decided to use Yocto to produce the images.

Creating a Tizen image for Raspberry PI2

BEFORE STARTING: As the Yocto build will download lots of packages, you should ensure that your firewall policy won’t block the ftp, http, https, and git protocols; otherwise the build may fail.

1) Create a Local Copy of tizen-distro

The first step is to clone the Tizen distro tree. We actually use an alternate version of the already existing tizen.org tizen-distro tree.

Our variant has some patches on top of the tree that allow building Tizen for ARM CPUs. It also disables the usage of the open source mesa/gallium 3D driver, since the Broadcom GPU used on the Raspberry Pi 2 is not currently supported by this open source driver. The plan is to rework these patches and submit them to Tizen upstream, without breaking anything for x86.

To create the clone, do the following from any terminal window:

git clone git://git.s-osg.org/tizen-distro.git

2) Add Raspberry PI 2 BSP Meta Repository

Yocto works with several layers of software. The BSP (Board Support Package) layer provides support for the board, and writing a BSP can consume a lot of time. Fortunately, there’s already a Yocto BSP for the Raspberry Pi 2, and the only extra requirement is to adjust the BSP to work with Tizen. Again, we opted to create a fork of the tree to avoid interfering with other distros supported by this BSP, but the plan is to rework these patches in order to submit them to Yocto upstream in a way that would not affect builds for other distros.

The steps to create the clone are:

cd tizen-distro
git clone git://git.s-osg.org/meta-raspberrypi.git

3) Initialize the Environment to Prepare for Build

Now that we have a copy of both the Tizen and BSP Yocto bits, we need to set up the build environment in order to use bitbake. It should be noted that some packages may need to be installed, depending on the distribution you’re using on the build machine. The Yocto builder (bitbake) requires Python 2.7.3 or greater, so we don’t recommend using an LTS distro for the build, as it may have packages that are too old. Here, we used Fedora 21, as it provides recent packages while being stable enough for desktop needs.

The command to initialize the build environment is:

source ./tizen-common-init-build-env build

4) Modify Config Files to Point to Build for Raspberry PI2

The Tizen build is controlled by configuration files. Assuming Tizen was installed at ~/tizen-distro, the previous steps will have changed the working directory to the new build directory. So, the current directory should be ~/tizen-distro/build, and the configuration files are in the ./conf directory.

From the build directory, you’ll need to edit conf/local.conf with your favorite editor. You should comment out any existing line that starts with “MACHINE”, and add the line below:

MACHINE ??= "raspberrypi2"

This will tell bitbake that it should compile for Raspberry PI 2 board.

Now, we need to add the BSP meta-raspberrypi layer to the conf/bblayers.conf file, at both BBLAYERS and BBLAYERS_NON_REMOVABLE. Again, use your favorite editor.

After the changes, the file contents should look like the one below, with your home directory instead of /home/mchehab:

# LAYER_CONF_VERSION is increased each time build/conf/bblayers.conf
# changes incompatibly
LCONF_VERSION = "5"

BBPATH = "${TOPDIR}"
BBFILES ?= ""

BBLAYERS ?= " \
  /home/mchehab/tizen-distro/meta \
  /home/mchehab/tizen-distro/meta-raspberrypi \
  /home/mchehab/tizen-distro/meta-openembedded/meta-oe \
  /home/mchehab/tizen-distro/meta-openembedded/meta-multimedia \
  /home/mchehab/tizen-distro/meta-openembedded/meta-ruby \
  /home/mchehab/tizen-distro/meta-openembedded/meta-systemd \
  /home/mchehab/tizen-distro/meta-openembedded/meta-gnome \
  /home/mchehab/tizen-distro/meta-tizen/meta-tizen-adaptation/meta \
  /home/mchehab/tizen-distro/meta-tizen/meta-tizen-adaptation/meta-oe \
  /home/mchehab/tizen-distro/meta-tizen/meta-tizen-common-base \
  /home/mchehab/tizen-distro/meta-tizen/meta-tizen-common-share \
  /home/mchehab/tizen-distro/meta-tizen/meta-tizen-common-devtools \
  /home/mchehab/tizen-distro/meta-tizen/meta-tizen-common-demo \
"

BBLAYERS_NON_REMOVABLE ?= " \
  /home/mchehab/tizen-distro/meta \
  /home/mchehab/tizen-distro/meta-raspberrypi \
  /home/mchehab/tizen-distro/meta-openembedded/meta-oe \
  /home/mchehab/tizen-distro/meta-openembedded/meta-multimedia \
  /home/mchehab/tizen-distro/meta-openembedded/meta-ruby \
  /home/mchehab/tizen-distro/meta-openembedded/meta-systemd \
  /home/mchehab/tizen-distro/meta-openembedded/meta-gnome \
  /home/mchehab/tizen-distro/meta-tizen/meta-tizen-adaptation-oe-core \
  /home/mchehab/tizen-distro/meta-tizen/meta-tizen-adaptation-meta-oe \
  /home/mchehab/tizen-distro/meta-tizen/meta-tizen-common-base \
"

Notice the new entry for “~/tizen-distro/meta-raspberrypi \” in each of the sections.

5) Start building

Now that everything is set, it is time to start the build. This is the step that will take a considerable amount of time, and will require a good Internet connection because it will download thousands of packages and/or clone upstream git trees for several packages.

Do this by running the following command:

bitbake rpi-hwup-image

NOTE: On some distros, this step will cause an error (this has been confirmed on multiple Debian-based distros):

ERROR: Task 400 (/tizen-distro/meta-tizen/meta-tizen-common-base/recipes-extended/pam/pam_git.bb, do_compile) failed with exit code '1'

If such an error happens, just pull an extra patch, and re-run bitbake with the following commands:

git pull . origin/tizen-debianhost
bitbake rpi-hwup-image

If everything goes well, the images will be located in the tmp-glibc/deploy/images/raspberrypi2 directory, and will look like the ones below:

$ ls tmp-glibc/deploy/images/raspberrypi2/
bcm2835-bootfiles
Image
Image--3.18.5+gita6cf3c99bc89e2c010c2f78fbf9e3ed478ccfd46-r0-bcm2708-rpi-b-20150409151425.dtb
Image--3.18.5+gita6cf3c99bc89e2c010c2f78fbf9e3ed478ccfd46-r0-bcm2708-rpi-b-plus-20150409151425.dtb
Image--3.18.5+gita6cf3c99bc89e2c010c2f78fbf9e3ed478ccfd46-r0-bcm2709-rpi-2-b-20150409151425.dtb
Image--3.18.5+gita6cf3c99bc89e2c010c2f78fbf9e3ed478ccfd46-r0-ds1307-rtc-overlay-20150409151425.dtb
Image--3.18.5+gita6cf3c99bc89e2c010c2f78fbf9e3ed478ccfd46-r0-hifiberry-amp-overlay-20150409151425.dtb
Image--3.18.5+gita6cf3c99bc89e2c010c2f78fbf9e3ed478ccfd46-r0-hifiberry-dac-overlay-20150409151425.dtb
Image--3.18.5+gita6cf3c99bc89e2c010c2f78fbf9e3ed478ccfd46-r0-hifiberry-dacplus-overlay-20150409151425.dtb
Image--3.18.5+gita6cf3c99bc89e2c010c2f78fbf9e3ed478ccfd46-r0-hifiberry-digi-overlay-20150409151425.dtb
Image--3.18.5+gita6cf3c99bc89e2c010c2f78fbf9e3ed478ccfd46-r0-iqaudio-dac-overlay-20150409151425.dtb
Image--3.18.5+gita6cf3c99bc89e2c010c2f78fbf9e3ed478ccfd46-r0-iqaudio-dacplus-overlay-20150409151425.dtb
Image--3.18.5+gita6cf3c99bc89e2c010c2f78fbf9e3ed478ccfd46-r0-lirc-rpi-overlay-20150409151425.dtb
Image--3.18.5+gita6cf3c99bc89e2c010c2f78fbf9e3ed478ccfd46-r0-pcf8523-rtc-overlay-20150409151425.dtb
Image--3.18.5+gita6cf3c99bc89e2c010c2f78fbf9e3ed478ccfd46-r0-pps-gpio-overlay-20150409151425.dtb
Image--3.18.5+gita6cf3c99bc89e2c010c2f78fbf9e3ed478ccfd46-r0-raspberrypi2-20150409151425.bin
Image--3.18.5+gita6cf3c99bc89e2c010c2f78fbf9e3ed478ccfd46-r0-w1-gpio-overlay-20150409151425.dtb
Image--3.18.5+gita6cf3c99bc89e2c010c2f78fbf9e3ed478ccfd46-r0-w1-gpio-pullup-overlay-20150409151425.dtb
Image-bcm2708-rpi-b.dtb
Image-bcm2708-rpi-b-plus.dtb
Image-bcm2709-rpi-2-b.dtb
Image-ds1307-rtc-overlay.dtb
Image-hifiberry-amp-overlay.dtb
Image-hifiberry-dac-overlay.dtb
Image-hifiberry-dacplus-overlay.dtb
Image-hifiberry-digi-overlay.dtb
Image-iqaudio-dac-overlay.dtb
Image-iqaudio-dacplus-overlay.dtb
Image-lirc-rpi-overlay.dtb
Image-pcf8523-rtc-overlay.dtb
Image-pps-gpio-overlay.dtb
Image-raspberrypi2.bin
Image-w1-gpio-overlay.dtb
Image-w1-gpio-pullup-overlay.dtb
modules--3.18.5+gita6cf3c99bc89e2c010c2f78fbf9e3ed478ccfd46-r0-raspberrypi2-20150409151425.tgz
modules-raspberrypi2.tgz
README_-_DO_NOT_DELETE_FILES_IN_THIS_DIRECTORY.txt
rpi-hwup-image-raspberrypi2-20150409151425.rootfs.ext3
rpi-hwup-image-raspberrypi2-20150409151425.rootfs.manifest
rpi-hwup-image-raspberrypi2-20150409151425.rootfs.rpi-sdimg
rpi-hwup-image-raspberrypi2-20150409151425.rootfs.tar.bz2
rpi-hwup-image-raspberrypi2.ext3
rpi-hwup-image-raspberrypi2.manifest
rpi-hwup-image-raspberrypi2.rpi-sdimg
rpi-hwup-image-raspberrypi2.tar.bz2

Use the Image

Be careful at this point, because the steps below should be run as root and will rewrite a disk image. If you do something wrong at this point, you may end up damaging what’s stored on your disks!

Let’s assume that the SD card will be mapped as /dev/sdc. Replace /dev/sdc with the device where your SD card is stored. To find out where your device is mapped, you can run the command:

$ df -aTh

WARNING: In the next section, be sure to change any instance of /dev/sdc to the specific device where your SD card is mapped. Otherwise you may damage your system!

1) Fill the SD Card With the Image

Insert an SD card on your computer using an SD->USB adapter and check if it was mapped at /dev/sdc.

The current image size is about 620 MB. Be sure to have an SD card big enough to fit the size of the generated image.

Now, write the image with the following command:

# dd if=tmp-glibc/deploy/images/raspberrypi2/rpi-hwup-image-raspberrypi2.rpi-sdimg of=/dev/sdc bs=128M && sync
4+1 records in
4+1 records out
616562688 bytes (617 MB) copied, 91.798 s, 6.7 MB/s

NOTE: the actual image size may vary, as we add more packages to the build image.

The image is now created and you can use it to boot into Tizen.

NOTE: The root password used on this image will be “root”.

2) Optional: Resize the Root Image to the Disk Size

By default, the image doesn’t fill the entire SD card. If you want to extend it to use all of your SD card’s space, first check the first sector of the second partition of the disk.

Assuming that the first sector is 49152, you should delete the second partition and re-create it, using the maximum SD size as the last sector, as shown below:

# fdisk /dev/sdc
Welcome to fdisk (util-linux 2.25.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): p
Disk /dev/sdc: 7.4 GiB, 7969177600 bytes, 15564800 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x000ceb8d

Device Boot Start End Sectors Size Id Type
/dev/sdc1 * 8192 49151 40960 20M c W95 FAT32 (LBA)
/dev/sdc2 49152 1204223 1155072 564M 83 Linux

Command (m for help): d
Partition number (1,2, default 2):

Partition 2 has been deleted.

Command (m for help): n
Partition type
p primary (1 primary, 0 extended, 3 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (2-4, default 2): 2
First sector (2048-15564799, default 2048): 49152
Last sector, +sectors or +size{K,M,G,T,P} (49152-15564799, default 15564799):

Created a new partition 2 of type 'Linux' and of size 7.4 GiB.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

Now, we’ll use resize2fs to extend the filesystem to fill the entire partition. Before that, the filesystem should be checked with e2fsck.

Those steps are shown below:

# e2fsck -f /dev/sdc2
e2fsck 1.42.12 (29-Aug-2014)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Setting filetype for entry 'uncompress' in /bin (12) to 1.
Setting filetype for entry 'mke2fs' in /sbin (317) to 1.
Setting filetype for entry 'mkfs.ext4dev' in /sbin (317) to 1.
Setting filetype for entry 'fsck.ext3' in /sbin (317) to 1.
Setting filetype for entry 'mkfs.ext4' in /sbin (317) to 1.
Setting filetype for entry 'e2fsck' in /sbin (317) to 1.
Setting filetype for entry 'fsck.ext4' in /sbin (317) to 1.
Setting filetype for entry 'mkfs.ext2' in /sbin (317) to 1.
Setting filetype for entry 'fsck.ext4dev' in /sbin (317) to 1.
Setting filetype for entry 'arm-oe-linux-gnueabi-ld' in /usr/bin (416) to 1.
Setting filetype for entry 'gawk' in /usr/bin (416) to 1.
Pass 3: Checking directory connectivity
Pass 3A: Optimizing directories
Pass 4: Checking reference counts
Pass 5: Checking group summary information

/dev/sdc2: ***** FILE SYSTEM WAS MODIFIED *****
/dev/sdc2: 3647/60480 files (0.8% non-contiguous), 157146/241664 blocks

# sudo resize2fs /dev/sdc2
resize2fs 1.42.12 (29-Aug-2014)
Resizing the filesystem on /dev/sdc2 to 7757824 (1k) blocks.
The filesystem on /dev/sdc2 is now 7757824 (1k) blocks long.

Get Involved

Now, you’re ready to use Tizen on your Raspberry Pi 2. If you want to learn more or get involved, you can join us in the Tizen IRC chatroom, or subscribe to the Tizen Community mailing list. Lastly, you can also stay tuned to this blog to catch more articles about building things with Tizen and the Raspberry Pi. Have fun!

Raspberry Pi is a trademark of the Raspberry Pi Foundation

Creating an Open Source & Transparent Culture @ Samsung

Welcome everyone, to this, the new Samsung Open Source Group (OSG) blog!

We’re glad you’re here, and we’re excited to show you what our team has been up to in the first two years of our existence, as well as what we’re working on going forward…

But first, a little background…

In 2012, Samsung leadership realized their consumption of Open Source software to help develop their products was increasing at a rapid pace.  Also, since most of the company’s developers were focused on product development, there was a lack of sufficient upstream contributions to give the company enough technical equity to influence the strategic direction of these key open source projects.

Enter Ibrahim Haddad, formerly of the Linux Foundation, who was hired in early 2013 to start the Samsung Open Source Group, and charged with hiring strong open source talent in several key technology areas (system, web, multimedia, graphics, and virtualization).  The goal of the team is to contribute to strategic open source projects on Samsung’s behalf, and also to begin influencing the internal company culture, slowly but surely, to reflect a more transparent and open source stance.

Promoting the business case for Open Source

To be clear, while our team consists of individuals who believe strongly in the value of open source, this effort is not led by ideology. There are solid business reasons why our team exists, much like comparable teams found within other large companies.  Our focus is not on things like making Samsung devices ‘rootable,’ or making Samsung products more hackable via open source software or tools. Instead, our goal is to contribute sustained and valuable code, ideas, documentation, etc. into key projects, including the Linux Kernel, Blink, Webkit, Gstreamer, Wayland, KVM, EFL, and others.  We want to help grow these communities in addition to having a say in their strategic project directions, and our hope is to be as valuable to them as we try to be to Samsung.

Our team is excited to begin posting here on topics that we’re passionate about – be it technical areas, or lessons learned in helping move Samsung forward in our open source journey.  We strongly believe in the power of collaborative development for all companies involved in software development, including Samsung.

As author Dana Blankenhorn said all the way back in 2005:

The same point is clear in software as in business and in politics. Transparency wins.

Enjoy and happy reading!

Custom Compose Keys on Ubuntu

The Compose key is awesome, and I think Linux distributions should include this in all keyboard layouts by default. You’re probably thinking, “Wait, ‘Compose’?? There’s no key on my keyboard labeled ‘Compose’, what the heck is this guy talking about? And why would I need it, anyway?”

I’m a mono-lingual USian. Now, I had a few years of German in high school but ach, nein, it really didn’t take. However, with today’s multicultural, globalized Internet, I’ve gained friends and colleagues from all over the globe with all sorts of odd foreign letters in their names, and I want to refer to them properly.

Not too many years ago, this was a hard thing to do. The Internet communicated in so-called “plain text”, which consisted of just the basic English letters, numbers and symbols. This Internet alphabet was originally created and standardized by the Teletype industry, who named it the American Standard Code for Information Interchange (ASCII). When the World Wide Web first started, it relied on ASCII for its web pages. So, back in the day, everything online was plain ASCII text, and anything that wasn’t showed up as weird empty squares unless you did a lot of messing around with fonts, encoding rules, and the like.

But today, I can use all the strange letters and symbols I want thanks to something called ‘Unicode’. Unicode is basically a super-set of all font symbols – French, Thai, Algonquin, you name it. It’s like the uber-alphabet. You’ll still need a Unicode-supporting font, but most standard fonts include character symbols for a sizable proportion of the Unicode set. On Ubuntu, for example, they install their own ‘Ubuntu’ font which supports an impressive range of Unicode symbols.

Unicode also supports a huge variety of useful and fun symbols beyond just funny foreign characters. Bullets and asterisks, smiley faces and snowmen, Greek letters and roman numerals, arrows and musical notes, and other such wingdings galore.

So, how do you put them into your documents? The lazy way is to just cut and paste. But hey, cut and paste works and is easy. Try copying this: ☠. Now open your favorite word processor and paste it in. Arr, ye be enlarging the font to 48. Arr.

A slightly less lazy method is to launch the Character Map under Accessories (in Ubuntu) and drag and drop the symbol to your editor. Give it a shot, and look through all the wild and wonderful common characters. I took tons of math in college and surface integrals were my bane, but I managed to avoid volume integrals (∰ ) yikes!

Every Unicode character has a code. Like U+2620 for that thar skull’n’bones. If you know the code you can enter it (in Ubuntu) by holding Ctrl+Shift+u, then the code, and then Enter. But who’s going to remember all those hex strings? This leads us to understanding the Compose key’s awesomeness. Instead of memorizing hex codes, you can use more memorable characters. You tap the Compose key, followed by 2 or more keys to produce the given Unicode character. For example, ” t m” prints ™, ” ` a” prints à, and ” = L” prints ₤.

Try it Yourself

You probably don’t actually have a literal “Compose” key on your keyboard. Digital Equipment Corporation and Sun used to manufacture keyboards with actual physical keys labeled “Compose” on them. Since Microsoft Windows and OSX never included compose support by default, keyboards for PC hardware didn’t bother with them. Thus, using Linux on PC hardware means we have to map the Compose key behavior to some other infrequently used key.

I like to turn my right Alt key into Compose. I can still use the left Alt for hotkey combos and other things. Here’s how to enable it on Ubuntu 14.04:

  1. Open System Settings, and navigate to the Keyboard Configuration
  2. Select the Shortcuts tab, then Typing
  3. Click Compose Key, and change “Disabled” to “Right Alt” (or whatever you prefer from the list)
  4. Exit out of Keyboard Configuration

Let’s give it a test! Open your favorite text application and then hit and release the right Alt key, followed by the letter o and then – (minus). You should see ō. As far as I know there isn’t an official standard for these Compose sequences, but people have cleverly thought up a bunch of good ones. These come by default when enabling the Compose key on Ubuntu. The defaults add keys needed for typing the special letters of various languages, along with a scattering of symbols.

There’s a particularly good collection of Compose sequences available via the kragen/xcompose project on GitHub. This includes tons of arrows, stars, math symbols, small-caps letters, funky Latin letters, the entire Greek alphabet (upper and lower) along with Greek punctuation, fractions, Roman numerals, dingbats, a rich set of planetary and astrological symbols, and both a full deck of playing cards and a full chess set. You know you want ’em!
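To try it out, one approach is to clone the project and pull its definitions into the ~/.XCompose file described below with an include line. The clone path here is just an example, and the file name is from memory, so check the repository for the real one:

git clone https://github.com/kragen/xcompose ~/src/xcompose

Then, in ~/.XCompose:

include "%H/src/xcompose/dotXCompose"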

But wait, there’s more! You can go beyond these and provide your own custom Compose sequences in your ~/.XCompose file. The format of this file is:

<Multi_key> <key> [<key> ...] : "<character(s)>" [UNICODE] # Comment

The comment is optional, of course. The Unicode code point also isn’t strictly necessary, but it’s handy to list for your own reference.

So for example, let’s add a couple bullets to this file:

<Multi_key> <asterisk> <O> : "•" U2022 # BULLET

<Multi_key> <asterisk> <o> : "◦" U25E6 # WHITE BULLET
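One caveat: once you create your own ~/.XCompose, XIM reads it instead of the system-wide table, so the stock sequences can disappear unless you pull them back in. Adding this line at the top of the file keeps the defaults ("%L" expands to your locale’s standard Compose file):

include "%L"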

Let’s test whether your system picks up ~/.XCompose in applications. Launch xterm, then hit and release the right Alt key (our new Compose key), followed by an asterisk (Shift+8 on my US keyboard) and the lowercase letter ‘o’. You should see a small empty circle.

If it didn’t work, the first thing to try is setting your input method to XIM by adding this line to the top of your ~/.gnomerc (or perhaps ~/.profile), then logging out and back in again:

export GTK_IM_MODULE="xim"
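If you also use Qt applications and want them to behave the same way, the equivalent setting on the Qt side is its own input-module variable (same log-out-and-back-in caveat applies):

export QT_IM_MODULE="xim"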

Ubuntu 14.04’s GTK applications default to GTK’s built-in input method handling, which doesn’t pick up user-defined Compose sequences. The setting above switches them back to the X input method (XIM), which reads ~/.XCompose.

Once you’ve verified your ~/.XCompose is getting parsed and used, you’re all set to have some fun. Pull up the table of Unicode symbols and look through it for stuff you want to use.

The check mark can be useful for your todo lists:

<Multi_key> <at> <slash> : "✓" U2713 # CHECK MARK

Personally I think the square root symbol makes for a more distinctive check mark:

<Multi_key> <v> <slash> : "√"

You’re not limited to single characters either. For instance, you can add shortcut sequences for commonly typed strings:

<Multi_key> <m> <e> : "John Hancock"

This can be a huge time saver if you find yourself typing the same thing over and over again, like doing patch review:

<Multi_key> <r> <b> : "Reviewed-by: John Hancock "

The Compose key’s functionality is rather odd, but it’s not unique; it’s one of a set of functions called “dead keys”. A dead key appears to do nothing when you first press it, but it modifies the output of the next key you press. Compose is one type of dead key, but there are others that add specific decorations to letters. For instance, the ‘dead acute’ key places an acute accent on top of letters: áéíóú.

Various languages also use graves, macrons, cedillas, hooks, breves, and more. Keyboards for these languages may have dedicated dead keys for adding these marks to letters. Even on boring US-style keyboards, you can select alternative keyboard layouts that map dead-key functionality onto lesser-used keys or key combinations. Impress your professors!

As you can see, Compose key sequences are a convenient way to access the wealth of Unicode symbols, but getting your system configured to use them is a bit involved. Hopefully, future Linux distributions will enable the Compose key by default, which would make the basic international letters far more accessible to all users.

 

We Run on Open Source

Open source developers can create an immense amount of value for any company that relies on open source software by giving the company the ability to direct and influence aspects of the open source community. This allows the company to shape the tools it relies on, making them better fit its needs: a phenomenon otherwise known as “scratching your own itch.”

While an open source developer’s primary skill is writing good code, their value extends far beyond technical skills. Adopting open source practices requires participation in diverse communities that have a number of stakeholders, each with their own itches to scratch. Open source developers find themselves in a complex position that requires them to be experts not only in their technical field, but also in communication and collaboration. Open source development is a collaborative process that happens all over the world, and our group is no different, with developers across four continents. We have many people in our group who are deeply familiar with the open source process, and it’s been our job to figure out how to make this process work within Samsung. A big part of this has been the IT infrastructure we’ve put in place.

The Tools of the Trade

The Open Source Group has a fairly unusual set of needs compared to Samsung at large, and we’ve been given the freedom to deploy infrastructure that best suits our open source methodology. Naturally, we’ve chosen open source tools as the foundation for our infrastructure, and we’ve done our best to let our group work internally with the same tools that power the most successful open source communities.

Real-time Communication – IRC

An IRC server allows our developers to handle short, quick communications with each other. It can also provide an informal location for water-cooler chatter that helps build a sense of community in our distributed environment.

Asynchronous Communications – Mailing Lists

Mailing lists are great for asking questions or discussing anything related to our open source work. Our group has numerous collaborations going on at any given time with internal and external groups, communities, and organizations. We use Mailman for our mailing lists, which makes it incredibly easy for everyone to stay in the loop or catch up on projects through the archives.

Reporting and Documentation – Wiki

Wikis are the benchmark for providing information in a form that can be manipulated easily and is also easy for a human to read. In most open source communities, the wiki is the primary resource for learning the inner workings of a project. A good wiki is functional, extensible, and capable of being automated in some manner. We use TWiki for all of our internal reporting as well as for the recently launched Gear VRf project, but there are dozens of great open source wikis to choose from. We’ve also made great use of Python’s Mechanize library to automate weekly patch and status reporting through our wiki.

Public Web Presence – WordPress, Drupal

A public website is the best platform for delivering the most important information about an open source project. This type of outlet is ideal for sharing release announcements, community news, and high-level project developments. The site you are currently looking at runs on WordPress, which is a great tool for beginners and experts alike.

Collaborative Development – Git

We use Git when we need to collaborate both with internal teams at Samsung and with people in the open source community at large. Git is a simple yet powerful tool used to develop software all over the world, and we use it to share projects like our brand-new port of Tizen for the Raspberry Pi 2.

Linux

No list of open source IT infrastructure is complete without mentioning Linux. Most of our team does their day-to-day work using various desktop distributions, and all of our servers run Linux.

Establishing Open Source at Samsung, One Tool at a Time

We’ve realized that one of the most powerful tools any team can have is flexibility, because open source is constantly changing and always offers new solutions to all types of problems. Rarely does a tool fit the job perfectly, which means the ability to customize applications by modifying their code and to automate them with custom scripting is invaluable. This need for customization and automation is the primary reason open source has proven so useful for us from an infrastructure perspective. We are fully committed to open source because of how valuable it is to the development of modern software and technology, and we are valuable to Samsung not only through our contributions to open source, but also in how we leverage its benefits in our everyday work. The business case for open source IT infrastructure is strong; feel free to share your insights about it in the comments below.