When is a Keyboard Not a Keyboard?

Much of my day-to-day work revolves around Wayland and Weston. Wayland is a protocol for communication between display servers and their clients, and Weston is a reference implementation of a Wayland display server. Simplistically (and I really can’t stress enough that this is an oversimplification), Wayland is similar to the X protocol and Weston is much like a combination of an X server and a window manager.

Lately I’ve been working on Weston’s text input implementation – specifically how Weston deals with on-screen keyboards. Currently, applications directly control whether the virtual keyboard should be displayed or hidden, and I’ve been adding a way for applications to request that Weston auto-hide the virtual keyboard based on whether or not a real keyboard is present.

Keyboards, Keyboards Everywhere…

Wayland’s input model is based on the concept of a “seat” – a collection of input devices that would sit in front of a single user at an office desk. A seat can have one keyboard, one pointer, and one touch interface. If you have two mice plugged in and assigned to the same seat, they’ll move the same pointer. Likewise, two keyboards in the same seat send keystrokes to applications over the same keyboard interface, and removing one of those keyboards won’t necessarily trigger any notification at all, since the seat still contains a keyboard.
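As an aside, grouping devices into seats is done with udev properties. A rule along these lines – the file path and vendor/product IDs are placeholder examples, and the exact property names honored can vary by compositor version – would assign a specific USB input device to a secondary seat:

```
# e.g. /etc/udev/rules.d/61-secondary-seat.rules (path and IDs are examples)
# ID_SEAT is the standard multi-seat property; WL_SEAT additionally
# names the seat within Weston.
SUBSYSTEM=="input", ATTRS{idVendor}=="1234", ATTRS{idProduct}=="abcd", ENV{ID_SEAT}="seat1", ENV{WL_SEAT}="seat1"
```

Devices without such a rule fall into the default seat (“seat0”), which is why a typical desktop piles everything into one seat.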

While I was testing my virtual keyboard auto-hide code, it failed mysteriously for the primary seat. I’d unplug the keyboard and nothing would happen. Everything worked as expected for a second keyboard and mouse I have configured as a secondary seat. This puzzled me for a while, until I took a step back and checked the logs to see what input devices were attached.

[12:04:38.741] input device 'Power Button', /dev/input/event1 is tagged by udev as: Keyboard
[12:04:38.741] input device 'Power Button', /dev/input/event1 is a keyboard
[12:04:38.743] input device 'Video Bus', /dev/input/event3 is tagged by udev as: Keyboard
[12:04:38.743] input device 'Video Bus', /dev/input/event3 is a keyboard
[12:04:38.744] input device 'Power Button', /dev/input/event0 is tagged by udev as: Keyboard
[12:04:38.744] input device 'Power Button', /dev/input/event0 is a keyboard
[12:04:38.745] input device 'Topre Corporation Realforce 87', /dev/input/event7 is tagged by udev as: Keyboard
[12:04:38.745] input device 'Topre Corporation Realforce 87', /dev/input/event7 is a keyboard
[12:04:38.746] input device 'Razer Razer Abyssus', /dev/input/event8 is tagged by udev as: Mouse
[12:04:38.746] input device 'Razer Razer Abyssus', /dev/input/event8 is a pointer caps

Everything seems to be in order here, except for the fact that I appear to have four “keyboards.” When I unplug the real keyboard there are still three keyboards attached, preventing the virtual keyboard from appearing. These keyboards all get piled into the primary seat, which is the only seat configured on most setups, so the virtual keyboard auto-hide never triggers.

Four keyboards… Seems reasonable.

Taking a Deeper Look

What are these extra keyboards? It turns out the keyboard interface is very useful for devices you can’t actually type on, so I may not need to get a bigger desk just yet. This leads us to the answer to the whimsical title of this post: when is a keyboard not a keyboard? Usually. The power button (two of them, in fact) is enumerated as a keyboard device, as is an ACPI “Video Bus” driver that can send backlight control events.

This shouldn’t be a problem, however, since we can just query udev for some information about these so-called keyboards and better decide what to do with them.

$ udevadm info /dev/input/event0
P: /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0/event0
N: input/event0
E: DEVNAME=/dev/input/event0
E: DEVPATH=/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0/event0
E: ID_PATH=acpi-PNP0C0C:00
E: ID_PATH_TAG=acpi-PNP0C0C_00
E: TAGS=:power-switch:

More good news! My power button is a US-layout PC105 keyboard. Unfortunately, other than Morse code, there aren’t really a lot of ways to use the power button for alphanumeric entry. So, even though it’s reported as a fully functional keyboard, we’re still likely to want to pop up the virtual keyboard when a text entry widget is focused.

I don’t think my power button actually provides the same functionality as this keyboard.

A Solution Begins to Come Together

So what’s the solution here? I’m not stuck yet: the Linux kernel provides the ability to query which keycodes an input device can generate, and the recently released libinput 0.15 has a new API to expose this functionality:

int libinput_device_keyboard_has_key(struct libinput_device *device, uint32_t code);

Now I can tell which of the available keyboards have a complete set of alphabetic keys and flag those as text-input capable. I’ve recently posted a patch series to the Wayland mailing list to do just that. Not far behind is another patch set that uses this information to implement the virtual keyboard auto-hide.
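To make the idea concrete, here is a self-contained sketch of that check – not Weston’s actual code. The `has_key` callback stands in for `libinput_device_keyboard_has_key()`, the keycode values are copied from `linux/input-event-codes.h`, and `fake_device` is a toy stand-in for a real input device:

```c
#include <stddef.h>
#include <stdint.h>

/* Keycodes for the alphabetic keys Q..P, A..L, Z..M, as defined in
 * <linux/input-event-codes.h> (duplicated here so the sketch is
 * self-contained). */
static const uint32_t alpha_keys[] = {
	16, 17, 18, 19, 20, 21, 22, 23, 24, 25,	/* Q..P */
	30, 31, 32, 33, 34, 35, 36, 37, 38,	/* A..L */
	44, 45, 46, 47, 48, 49, 50,		/* Z..M */
};

/* Stand-in for libinput_device_keyboard_has_key(): returns non-zero
 * if the device can generate the given keycode. */
typedef int (*has_key_fn)(void *device, uint32_t code);

/* A device counts as text-input capable only if it has every
 * alphabetic key -- a power button or ACPI video bus fails this. */
static int
device_is_text_capable(void *device, has_key_fn has_key)
{
	size_t i;

	for (i = 0; i < sizeof(alpha_keys) / sizeof(alpha_keys[0]); i++)
		if (!has_key(device, alpha_keys[i]))
			return 0;
	return 1;
}

/* Toy "device": a bitmap of supported keycodes, mimicking what the
 * kernel reports via EVIOCGBIT(EV_KEY, ...). */
struct fake_device {
	uint8_t keybits[32];	/* room for keycodes 0..255 */
};

static int
fake_has_key(void *device, uint32_t code)
{
	struct fake_device *dev = device;

	return (dev->keybits[code / 8] >> (code % 8)) & 1;
}
```

With a predicate like this, a seat counts only text-capable keyboards when deciding whether to display the virtual keyboard, instead of any device that merely claims the keyboard capability.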

This fix means that Weston’s virtual keyboard will no longer be displayed unconditionally when running an application that uses text input (like the Terminology terminal emulator) – it’ll only pop up when no real keyboard is plugged in. Desktop users won’t be persistently annoyed with a virtual keyboard they don’t need, and dockable/hybrid tablets will be able to pop up the keyboard only when a physical one isn’t available. This should go a long way towards improving the user experience in Weston environments.

When I first approached this task, I never would have expected the power button to have any effect on the operation of virtual keyboards. Sometimes even seemingly minor tasks, like detecting whether a physical keyboard is present, require out-of-the-box thinking to solve. Lastly, always remember to check the logs!

Author: Derek Foreman

Derek has a long history of writing graphics software and has spent much of his career working in open source.

4 thoughts on “When is a Keyboard Not a Keyboard?”

  1. This is good information. Thanks for writing this up.

    Many more players have used Linux (and other Free Software stacks) to build similar solutions: Android, Tizen, Firefox OS, Maemo, etc. I am just wondering if there is anything we’ve reused / leveraged from these projects.

    Or is it that these projects have all used crude means of doing things, something which cannot be used as a generic framework?

  2. Sort of a difficult question to answer – Android, Tizen, etc. are all operating systems, while Wayland/Weston are just one part of such a stack. Some of them (like Android) have implemented their own display server, some (Maemo, some versions of Tizen) use Xorg, and some (other versions of Tizen) use Wayland.

    There are large differences between X, SurfaceFlinger and Wayland, to the point where not much code re-use is possible.

    That said, a lot of the input code in Weston was split out a while back to become libinput, and that’s now being developed as a separate library. It’s being used by both weston and Xorg.

  3. I’m wrestling with a similar problem. I have a setup where the weston-editor demo client launches weston-keyboard, but my own gtk+ application does not show the virtual keyboard.
    Do you have any more information on what you mean by the sentence: “… applications directly control whether the virtual keyboard should be displayed or hidden …”?

    At this time I’m not so familiar with the wayland/weston environment.

    1. We spoke briefly on IRC about this, and I forgot to reply here…

      What’s happening here is that weston-editor uses weston’s text-input protocol while gtk+ does not. Since gtk+ doesn’t use the text-input protocol, weston has no idea that it should pop up the virtual keyboard.

      EFL also uses weston’s text-input protocol, so apps like terminology will pop up weston-keyboard for you.

      The text-input protocol is actually just a weston thing right now, and it’s not yet clear whether it will be endorsed as part of wayland in the future, so it may never be used by gtk+.

      What I meant by “applications directly control whether…” is that an app has to call wl_text_input_show_input_panel(struct text_input *) and wl_text_input_hide_input_panel(struct text_input *) in order to control virtual keyboard visibility.

      Perhaps you can add weston text-input support to your own application? Check out protocol/text.xml and clients/editor.c in the weston repository – if you have access to a wl_surface for focus tracking, that might be all you need.
