Dynamic frame clock dispatch max render time for reduced input latency
This is the first part of !1620 (closed); see that MR for more explanation, discussion of approaches, and some history of the branch.
I built a latency tester similar to the one described in this post: https://thume.ca/2020/05/20/making-a-latency-tester/. I measured the latency of two terminals running cat and my nvim setup, of three editors, and of one Wayland-native SDL2 game, both painting on frame callbacks and painting constantly. The latency tester repeatedly types and erases the letter a, measuring the time between sending the a key press and the monitor brightness starting to change.
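The per-group statistics quoted below (median, max minus min) can be sketched as a small helper. This is a hypothetical illustration, not the actual tester code: `latency_stats` and the sample timestamps are made up, and the real tester records timestamps from a hardware light sensor as described in the linked post.

```python
import statistics

def latency_stats(key_send_ts, brightness_change_ts):
    """Summarize per-keypress latencies (all values in milliseconds).

    key_send_ts / brightness_change_ts: paired timestamps, one pair per
    'a' key press, as a latency tester like the one above would record.
    """
    latencies = [b - k for k, b in zip(key_send_ts, brightness_change_ts)]
    return {
        "median": statistics.median(latencies),
        # max - min within one group, compared against the frame period below
        "spread": max(latencies) - min(latencies),
    }

# Example with made-up numbers: three key presses
stats = latency_stats([0.0, 100.0, 200.0], [25.0, 130.0, 218.0])
# latencies are 25, 30 and 18 ms -> median 25.0, spread 12.0
```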
Here are results on my laptop running F34 Silverblue with c2968c89 and with this MR's Mutter.
Dots represent individual measurements. Each application has 100 separate measurements (100 a key presses). I removed a few obvious outliers (e.g. frame drops) by hand. "idle" means the monitor is idling between key presses, while "updating" means that the monitor is constantly repainted (I used
The difference between the maximum and minimum measurements within the same group is usually around ~17.2 ms, which is close to the expected 1000 / 60 = 16.67 ms if application latency were always exactly the same. The difference between the median latency of c2968c89 and of this MR is ~10 ms for most applications.
I'm not sure why "updating" is always about the same as "idle" for everything except alacritty. I'd expect "updating" to be slightly worse: as far as I understand, it should always repaint ~14.67 ms before the presentation, while "idle" should repaint up to ~8.3 ms before the presentation, resulting in a ~6.3 ms difference. Alacritty is the only application to show this difference (~6 ms), and I'm not sure why it's the only one.
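The expected ~6.3 ms gap is just the difference of the two repaint-to-presentation times above. A minimal sketch of that arithmetic (the two constants are my reading of the timings described in this MR, not measured values):

```python
FRAME_MS = 1000 / 60  # ~16.67 ms frame period at 60 Hz

# "updating": as described above, the client should always repaint
# ~14.67 ms before the presentation.
updating_before_present = 14.67

# "idle": the repaint should happen up to ~8.3 ms before the presentation.
idle_before_present = 8.3

expected_gap = updating_before_present - idle_before_present
print(round(expected_gap, 2))  # 6.37, roughly the ~6 ms alacritty shows
```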
"Quaver (unlimited)" doesn't show much of a change; I suspect that's because it saturates the integrated GPU and struggles to render at a full 60 FPS in that configuration.
As always, here's a COPR. Note that it contains some other stuff (Shell 40 for F33, which is kinda required since this is a new Mutter).