Too often we give up on hard problems, but one of the development teams I have worked with put this concept into practice. In addition to the challenges listed in the blog post, they struggled with other issues:
- Presenting the results for maximal impact and clarity. This team worked hard to represent the results graphically (see below).
- The delta between test runs: while the idea is to perform the tests continuously, in practice the tests ran only periodically alongside development, leaving gaps between runs. (You might also read Tim Hinds' next post, "How to Make Your Load & Performance Testing More Continuous".)
- Dealing with “White Noise”: allowing for changes in network performance, other activity on the systems under test, and so on. This team chose to mark a performance test run as “Passing” if its results fell within plus/minus 15% of the previous run's.
- This approach created two corollary problems (illustrated in the sketch after this list):
- the tests could “pass” on every run while getting 14.9% worse on every run, hardly a desirable definition of “pass”
- deciding when to re-baseline the performance expectation (the reference point from which the plus/minus 15% is measured)
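
A minimal sketch of the tolerance check described above, assuming a single response-time metric in milliseconds. The names (`TOLERANCE`, `passes_vs_previous`, `passes_vs_baseline`) and values are hypothetical, not the team's actual tooling; the point is to contrast comparing against the previous run, which allows compounding drift, with comparing against a fixed baseline:

```python
TOLERANCE = 0.15  # plus/minus 15%, per the team's pass criterion

def passes_vs_previous(current_ms: float, previous_ms: float) -> bool:
    """Pass if the current run is within +/-15% of the previous run."""
    return abs(current_ms - previous_ms) <= TOLERANCE * previous_ms

def passes_vs_baseline(current_ms: float, baseline_ms: float) -> bool:
    """Pass if the current run is within +/-15% of a fixed baseline,
    which blocks slow compounding drift."""
    return abs(current_ms - baseline_ms) <= TOLERANCE * baseline_ms

if __name__ == "__main__":
    baseline = 100.0  # response time of the first accepted run, in ms
    previous = baseline
    for run in range(1, 6):
        current = previous * 1.149  # each run is 14.9% slower than the last
        print(f"run {run}: {current:6.1f} ms  "
              f"vs previous: {'PASS' if passes_vs_previous(current, previous) else 'FAIL'}  "
              f"vs baseline: {'PASS' if passes_vs_baseline(current, baseline) else 'FAIL'}")
        previous = current
```

Running this shows every run “passing” against its predecessor while the baseline check fails by run 2. Of course, a fixed baseline trades the drift problem for the re-baselining question above: the baseline must be consciously reset whenever an intentional performance change lands.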