Enhancement proposal: add performance metering to the test suite.
To keep an eye on performance while changing or improving things, I'd like to have some qualified metering. How do the experienced devs handle this?
I tried timing the whole suite ( 'time make check TESTS' ), but that suffers from two problems: runs with more (and earlier) failures look faster than they really are, giving false positive performance, and tests that choose random input files make the results imprecise and hard to reproduce. I'd propose:
- select a standard, fixed set of input files, at least as an alternative to the random selection,
- add timing for each individual test and print the result in the logs, as well as a total for the whole suite,
- keep the logs from the previous run for comparison: rename them instead of deleting,
IMHO this would help spot performance regressions quickly.
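To illustrate, here is a minimal sketch of what the proposal could look like as a driver script. All names (the tests/ directory, the log file, the dummy test) are hypothetical placeholders, not part of any existing test harness:

```shell
#!/bin/sh
# Sketch only: time each test individually, log per-test and total times,
# and keep the previous run's log for comparison (rename, don't delete).

mkdir -p tests
printf 'exit 0\n' > tests/dummy.sh   # stand-in for a real test case

LOG=test-times.log
if [ -f "$LOG" ]; then
    mv -f "$LOG" "$LOG.prev"         # previous run stays available for diffing
fi

total=0
for t in tests/*.sh; do
    start=$(date +%s)
    if sh "$t" >/dev/null 2>&1; then status=ok; else status=FAIL; fi
    end=$(date +%s)
    dur=$((end - start))
    total=$((total + dur))
    printf '%s: %ds (%s)\n' "$t" "$dur" "$status" >> "$LOG"
done
printf 'total: %ds\n' "$total" >> "$LOG"

cat "$LOG"
```

Comparing two runs is then a simple 'diff test-times.log.prev test-times.log'. A real integration would hook into the Makefile's check target instead, and a finer-grained clock than whole seconds would be needed for short tests.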