(on behalf of CLR perf team)
We are looking for best practices for running performance tests in open source projects and across platforms.
We have a bunch of home-grown reusable practices, but they depend heavily on monitoring on certified dedicated machines in our perf lab (i.e. no interference from other processes or disk I/O).
Any pointers on how other open source projects handle perf test suites in their CI systems?
What we do today in-house:
There’s a set of tests. Many of them are microbenchmarks, which are the easiest: each microbenchmark is wrapped in a runner that warms up the scenario, then runs it 5 times and measures the time. It prints the result with some basic statistics to the output/log. A tool then parses all the logs and displays them (through a DB) in an HTML-based UI with history (trend graphs, etc.).
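The runner described above could be sketched roughly like this (a hypothetical illustration in Python, not our actual in-house harness; the warmup count, run count, and output format are assumptions):

```python
import statistics
import time

def run_benchmark(name, fn, warmup_iters=3, measured_runs=5):
    """Warm up a scenario, then time a fixed number of runs and
    report basic statistics. Sketch only; the real runner and its
    log format differ."""
    for _ in range(warmup_iters):
        fn()  # warm-up: populate caches, trigger JIT, etc.
    samples = []
    for _ in range(measured_runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    result = {
        "name": name,
        "min": min(samples),
        "max": max(samples),
        "mean": statistics.mean(samples),
        "stdev": statistics.stdev(samples),
    }
    # Print one parseable line so a downstream tool can scrape the log.
    print(f"{name}: mean={result['mean']:.6f}s stdev={result['stdev']:.6f}s "
          f"min={result['min']:.6f}s max={result['max']:.6f}s")
    return result

# Example usage:
run_benchmark("list-build", lambda: [i for i in range(10_000)])
```

Printing min/max alongside the mean makes noisy runs visible in the log even before the results reach the DB/UI.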
Currently we run everything on the same dedicated machines, so results are comparable and one can reason about changes over time.
What we could do:
Keep running tests on dedicated machines and publish the results somewhere. Is that the best practice?
To support the local dev scenario, we could generalize the infrastructure to run on any machine and provide a tool that compares 2 runs (baseline vs. PR), so any dev can check the performance impact prior to submitting a PR when there’s perf risk.
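The comparison tool could look something like this (a minimal sketch, assuming results are reduced to a name-to-mean-seconds mapping; the 5% threshold and the result format are assumptions, and real tooling would also account for variance between runs):

```python
def compare_runs(baseline, candidate, threshold=0.05):
    """Compare two benchmark result sets (name -> mean seconds).
    Flags tests whose mean moved by more than `threshold` in either
    direction. Hypothetical format, not the actual tool."""
    report = {}
    for name, base_mean in baseline.items():
        if name not in candidate:
            continue  # test absent from the PR run; skip it
        ratio = candidate[name] / base_mean
        if ratio > 1 + threshold:
            status = "REGRESSION"
        elif ratio < 1 - threshold:
            status = "IMPROVEMENT"
        else:
            status = "OK"
        report[name] = (ratio, status)
    return report

# Example usage with made-up numbers:
baseline = {"json-parse": 0.120, "dict-lookup": 0.030}
pr_run   = {"json-parse": 0.150, "dict-lookup": 0.029}
for name, (ratio, status) in compare_runs(baseline, pr_run).items():
    print(f"{name}: {ratio:.2f}x baseline ({status})")
```

Comparing ratios rather than absolute times is what makes the same tool usable on any dev box: both runs share the machine's noise, so only relative movement matters.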