The values used here were set based on developers' x86_64 workstations and GLib's x86_64 CI autobuilders, but not every CPU architecture is as fast as a modern x86_64.
This is a trade-off: the longer the timeouts, the more likely the tests are to pass on non-mainstream or embedded machines, but the longer a developer has to wait for feedback if a test hangs. I think we should choose a value that is large enough for modern examples of relatively mainstream architectures like the ARM family, but not so high that we penalize developers too much. Particularly slow architectures like the MIPS and SPARC families, or a machine emulated with qemu, would still require `meson test --timeout-multiplier=...`.
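For reference, Meson's test runner takes a multiplier that scales every test's timeout uniformly, so slow machines don't need per-test tweaks. A sketch of how a porter might invoke it (the build directory name and the value 10 are illustrative, not taken from any particular packaging setup):

```sh
# Scale every test timeout by 10x, e.g. on a slow MIPS board.
# "_build" is a placeholder for the actual Meson build directory.
meson test -C _build --timeout-multiplier=10
```

Meson also accepts the short form `-t 10` for the same option.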
What I'm initially proposing here is doubling the arbitrary timeouts, which I hope shouldn't be too annoying. I need to test this on Debian's many architectures before landing it, to check that the new values are actually large enough to be useful. What I'm testing there is x50 on qemu emulating riscv64, x10 on the mips* and sparc* families, and x3 on everything else; I can then use the logs to check whether x2 would have been sufficient.
If this change is merged, then I hope that in the long term, the Debian packaging can use x5 (or less) on mips* and sparc* (which I accept are probably too slow to be taken into account upstream), and the upstream default on everything else (arm*, powerpc* etc.).