Linux Media Summit Report 2016 – San Diego
The first 2016 Linux Media Summit was recently held on April 7 in San Diego, California, in conjunction with the Embedded Linux Conference. This post covers some of the major developments that took place during the summit.
CEC Framework Status Update
The first version of the Consumer Electronics Control (CEC) framework is nearing completion and will likely be submitted for either kernel 4.7 or 4.8. This framework will enable many useful end-user features, including better menu, playback, recording, and tuner control, in addition to remote control pass-through. It will also enable better device information discovery and routing.
Finally, plans to implement ARC/CDC hotplug support were revealed, as well as plans to implement high-level protocol constraints (resend, timeout, rate limiting of messages). Whether those constraints can be implemented in the kernel remains to be analyzed, as the kernel-to-userspace API might need some work.
Demo of the New qdisp Utility
The qdisp utility is a simpler alternative to qv4l2 that handles video capture and shows only the captured video. Currently, qdisp requires OpenGL, but OpenGL ES support is planned. The color space and format conversion code is based on GPU shaders. It will be split into a library shared with qv4l2; a CPU-based alternative would be feasible but isn’t planned at the moment.
Media Request API Status update
Currently, the Media Request API provides the ability to chain multiple existing IOCTLs into a request, which will either be applied atomically or not at all. Some new ideas were proposed for this API:
- A new IOCTL to start a commit
- An APPLY operation that applies the changes in a request immediately, useful for changing multiple controls at the same time
- A QUEUE operation that queues the changes with a buffer, to be applied when the buffer is processed
Alternatively, another proposal was to create one new IOCTL that takes all state and applies it atomically (like DRM atomic mode setting). One question remains: how can atomic operations be performed across subsystems like V4L2, DRM, ALSA, and MC? This is something that still needs to be figured out.
CSI-2 Stream Multiplexing
CSI-2 supports up to 4 virtual channels (2 bits) and 64 data types (6 bits). Virtual channels do not have to be in sync with one another, so different virtual channels can carry different frame rates. Within a virtual channel, each line is tagged with a data type as well, so one virtual channel can carry both metadata and video.
This has a handful of requirements, including pipeline validation, format validation, bandwidth/QoS, and routing (related to muxing/demuxing). The proposed solution is to introduce the concept of virtual channels that are routed on top of the physical links. A virtual channel has a route that goes through multiple physical entities, with routing information at each entity that explains how the data is forwarded. Laurent Pinchart (Ideas on Board) will dig up the old router entity code he posted in the past as a reference.
DT Bindings for Flash & Lens Controllers
Some drivers create their MC topology from device tree information. This works great for entities that transport data, but how can entities that don’t transport data, such as flash devices and focusers, be detected, and how can they be deduced from the device tree? The proposed solution is to add a handle to the focus controller in the sensor DT node, along with generic V4L2 binding properties to reference such devices.
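A device tree fragment along the lines of the proposal might look as follows. The `compatible` strings are placeholders, and the property names are only a sketch of the direction discussed at the summit, not a finalized binding:

```dts
&i2c1 {
	camera_flash: led-controller@30 {
		compatible = "vendor,flash-chip";   /* placeholder */
		reg = <0x30>;
	};

	camera_focus: vcm@33 {
		compatible = "vendor,vcm-chip";     /* placeholder */
		reg = <0x33>;
	};

	camera_sensor: camera@36 {
		compatible = "vendor,sensor-chip";  /* placeholder */
		reg = <0x36>;

		/* Proposed generic V4L2 properties pointing at the
		 * helper devices that transport no data themselves. */
		flash-leds = <&camera_flash>;
		lens-focus = <&camera_focus>;
	};
};
```

With such phandles in place, the sensor driver can discover its flash and focuser and register them in the same media graph.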
Improving the Linux Media Patch and Review Process
The Linux media community has been having problems with consistent sub-maintainership recently, and we currently lack sub-maintainers for DVB and RC. We also lack developers to review code, something Shuah Khan has agreed to help out with for the time being.
We’ve decided to post an RFC to the linux-media mailing list asking for volunteers to act as sub-maintainer for DVB and Remote Controllers.
Media Device Allocator API – Fix broken media_device alloc/remove
The media module (media.ko) needs to own the media devnode.cdev, rather than the first driver that registers it. With this change, __media_device_register() no longer needs the struct module passed in, so this can be cleaned up.
All drivers that use the media controller should use the Media Device Allocator API. Dynamic changes in the graph (which happen when two drivers bind to the MC) don’t produce an event to userspace, so an application would need to poll to detect changes to the topology; this is not ideal and requires more work.
Media Controller Connection Entities
The Media Controller lacks a way to expose how external sources and outputs (RF, S-Video, composite, etc.) are connected. There are a number of userspace needs that the current APIs don’t cover and that could benefit from the MC API. Essentially, one of the goals for the MC is to show the related device nodes that should be used together to capture analog TV, digital TV, and ALSA, and to prevent related drivers from streaming in unexpected ways: when an analog TV connection is in use, the DVB API can’t be enabled, and vice versa. The device nodes issue is unrelated to connectors, but supporting connectors is necessary to prevent two incompatible paths from being used at the same time. In other words, analog and digital paths are usually exclusive, and without the MC and MC connectors, there’s no way for userspace to know what the constraints are.
Physical vs. Logical Representation
While media hardware has physical connectors, the chips managed by the kernel driver provide a “logical” connection, in the sense that each input/output is mapped via a pin set with a corresponding register setup. Because of this, the V4L2 API (VIDIOC_ENUM_INPUT, VIDIOC_G_INPUT, VIDIOC_S_INPUT) represents the logical connection.
Another desired feature for MC connections is to present information about connectors to the user that makes it easier to know where to plug in cables. A connection-based representation in the MC would require properties to map connections to physical connectors, and a connector-based representation would require properties to map connectors to logical connections. Provided that both mappings are possible, the MC representation could use either logical connections or physical connectors.
This problem is similar to virtual channels over a logical link (like CSI-2, see “Stream multiplexing”), in the sense that logical connectors can be thought of as a specific routing of one or more signals from the physical connector, through some fabric (e.g. switch, crossbar), to the demodulator.
The physical and logical representations are the same for RF and Composite because these connectors carry a single analog signal. S-Video, however, carries two signals (Y+C): when an S-Video signal is sent to the connector, the physical and logical representations match. But some devices allow the S-Video connector to carry a composite signal via a Composite-to-S-Video cable, which requires a different setup at the chipset. In that case, the physical and logical representations differ.
There’s also one case that is not covered yet: how should one logical connection that is mapped via several different connectors be handled? This is common with ALSA, where a stereo input/output can be either a single jack or two RCA connectors.
Proposal for complex connections
It was proposed to model physical connectors and to support a routing ioctl for the entities they are connected to. For existing drivers that use S_INPUT, we can either not show logical connectors at all, or show them in the absence of knowledge about the actual physical connectors. What this routing ioctl should look like is still unknown, but it might be done in the same way as the routing discussed in the context of CSI-2.
For now, we’ve decided to map via the MC only the cases where the physical and logical representations are the same, e.g. RF, Composite, and S-Video signals over an S-Video connector. We’ll postpone the other cases until we have a routing ioctl.
Pad Identification and Naming
Currently, a pad is identified by its index and by whether it is an input or an output pad. If the index changes between kernel versions, the userspace ABI breaks. In situations where the properties API is unavailable, an enum could be used to give the pad a type. However, we don’t want to expose this to userspace yet, as it requires more discussion, as well as the creation of the properties API.
Exposing pad names to userspace can be useful. However, there’s a risk of userspace relying on specific string names to identify pads. So, to avoid repeating the mistake we made with subdev and entity names, the API should define the contents of the names and how they are constructed. The API also needs to define what userspace can expect from a name, including what information can be extracted from it. For now, we’ll implement only the kernel-internal API support.
Yet Another Productive Media Summit
As you can see, this was a very productive Media Summit, and we managed to get a lot of important work done to improve the Linux kernel Media Controller. We will hold another Media Summit this year, with the date and location still to be determined. We look forward to seeing everyone there, and maybe even some new faces!