Support layers
The idea behind layers is to pass dmabufs from GStreamer straight through to the compositor, without having to render them into the GTK framebuffer.
We define in advance how many layers a frame needs, and render nodes can then declare that they want to be rendered into a separate layer.
For Wayland, we would allocate two subsurfaces for each layer. While rendering the nodes, when we hit a node that requests a separate layer, we would start collecting content to attach to the next subsurface (which might just be a dmabuf that we can pass through). When that node ends, we would put the remaining sibling nodes into the second subsurface of the layer; this handles cases like a button rendered on top of a video.
There will be situations where layering does not work, e.g. when the render node requesting it is clipped or transformed. In such cases, we would just render the content as normal. This should cover cases such as a video player animating a transition from the video to a settings page, or something similar.
Backends without subsurface support would always use the fallback rendering path.
Some open questions here:
- Can we identify situations where layering will not work (e.g. clipping, transforms)?
- Do we need a new render node type to handle the 'just attach this dmabuf' case?
- We will also need colorspace information attached to that node, to cover YUV formats and the like
- How do we size the subsurfaces? One idea would be to make the subsurfaces always the same size as the surface, and position the buffer using a viewporter