Define and test processing limits for tracker-miners
We need to quantify the expected performance of Tracker, so that we can identify cases where resource usage is higher than a user should expect.
I imagine a text file we maintain in Git with:
- Expected baseline hardware (specs of a 5 year old laptop, for example)
- "Normal" amount of resources, e.g.
- 100,000 files processed by extractor
- 10,000 text files processed for full-text search
- Expected performance:
- Response time and memory usage during search
- Total time to complete initial indexing.
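As a sketch, such a file might look like the following. All field names and numbers here are illustrative placeholders, not agreed targets:

```
# performance-targets.txt (illustrative values only)
baseline-hardware:    dual-core laptop, 4 GB RAM, HDD (~5 years old)
corpus-files:         100000    # files processed by extractor
corpus-text-files:    10000     # text files for full-text search
search-response-time: 1s
search-memory-usage:  100MB
initial-index-time:   30min
```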
We could then provide a measurement script which users could run to check whether Tracker meets the expected performance targets on their system.
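A minimal sketch of what such a measurement script could look like, in Python. The target values and the idea of timing a search command are assumptions for illustration; the real command would be the Tracker CLI (e.g. a `tracker3` search invocation, depending on the installed version), which is substituted here with a harmless no-op so the sketch is runnable anywhere:

```python
import subprocess
import sys
import time

# Hypothetical targets, mirroring the proposed text file in Git.
# These numbers are illustrative, not agreed limits.
TARGETS = {
    "search_response_seconds": 1.0,
}

def time_command(argv):
    """Run a command and return its wall-clock duration in seconds."""
    start = time.monotonic()
    subprocess.run(argv, check=True, stdout=subprocess.DEVNULL)
    return time.monotonic() - start

if __name__ == "__main__":
    # On a real system this would be a Tracker search invocation;
    # a no-op Python process stands in for it here.
    elapsed = time_command([sys.executable, "-c", "pass"])
    ok = elapsed <= TARGETS["search_response_seconds"]
    print(f"search took {elapsed:.3f}s -> {'PASS' if ok else 'FAIL'}")
```

The script would grow one check per line of the targets file (indexing time, memory usage during search), reporting PASS/FAIL against the baseline.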