Critics say that DevOps is just a buzzword. And yes, it is a buzzword! But luckily it is not just a buzzword. DevOps has changed the way performance testing is done. Teams work closely together to continuously deliver features into production at a high cadence.
The culture shift that comes with DevOps feels, for a performance engineer, like a huge bottleneck has been resolved. No more begging for log files or production statistics, no more working in total isolation testing a black box. A lot of change, and all of it very beneficial for a performance tester. Finally…
The life cycle of performance engineering needed to be reinvented. The main reason is that features are now deployed more rapidly. Release cycles have become shorter, and with continuous integration and automation, velocity is higher than ever before. A release cycle can be anything from weeks to days to hours. The days of long-running, big-bang projects that sometimes took years to deliver are over. The picture below details my view of the life cycle of performance testing in a DevOps world.
As performance testers, we need to do things smarter and quicker, so focusing on the right features is a must. It is crucial to do a performance risk assessment of the features being delivered in the next iteration: which features need to be tested and which do not? Features that are technically complex, client-facing, synchronous and heavily used can be considered high-risk and need to be tested thoroughly.
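To make the risk assessment repeatable, it can be reduced to a simple checklist score. Below is a minimal sketch of that idea, assuming an additive scoring model; the factor weights, the threshold and the `checkout` example feature are all illustrative, not a prescribed method.

```python
# Sketch of a feature risk assessment. Weights and threshold are
# illustrative assumptions, not a standard.

def risk_score(feature):
    """Score a feature on the risk factors named above."""
    score = 0
    if feature.get("technically_complex"):
        score += 3
    if feature.get("client_facing"):
        score += 2
    if feature.get("synchronous"):
        score += 2
    if feature.get("heavily_used"):
        score += 3
    return score

def needs_performance_test(feature, threshold=5):
    """High-risk features (score >= threshold) get tested this iteration."""
    return risk_score(feature) >= threshold

# Hypothetical feature that hits every risk factor.
checkout = {"technically_complex": True, "client_facing": True,
            "synchronous": True, "heavily_used": True}
print(needs_performance_test(checkout))  # -> True
```

The point is not the exact numbers but that the team agrees up front, per iteration, on which features cross the line into "must be performance tested".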
And testing starts with the developer who writes the code. Unit testing and code profiling are their responsibility. A peer review of the code, maybe combined with a bit of gamification to make it more fun, should become a habit.
A performance engineer must not limit testing to end-to-end (E2E) testing. Web-based HTML/HTTP scripts are very sensitive to functional change and break easily, so you may end up spending a lot of time and effort keeping an E2E test framework working. And time is exactly what we lack when moving to DevOps.
Testing a feature as a standalone component using exploratory testing techniques is a fun way of quickly finding code limitations (single-threading issues, memory leaks, code instability under increased load). Tests can be very short, and the outcome of one test leads to the next.
Most applications (like web front-ends) use API calls (REST/SOAP) to communicate with the back-ends that do most of the processing. A performance tester should put more effort into building and maintaining an API test framework. API calls are relatively easy to script and less sensitive to functional change. Automating an API regression framework that can be run daily against new builds is a great way of finding performance issues earlier. If you combine your API performance framework with an Application Performance Management solution (e.g. AppDynamics, Dynatrace, New Relic), a performance tester will not only detect issues but can also point to solutions. APM tools are excellent for accelerating testing and improving stability and performance by providing insight into the IT stack. Too often, APM tooling is seen purely as a production monitoring solution, while it is of great value for testing and development too.
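The daily regression gate at the heart of such a framework can be very small. The sketch below assumes the API test run has already produced per-endpoint latency samples; the endpoint names, baseline values and nearest-rank percentile choice are all illustrative. It flags any endpoint whose p95 latency has drifted past its agreed baseline.

```python
# Daily API regression gate sketch: compare measured p95 latencies
# against a baseline. Endpoint names and thresholds are illustrative.

def percentile(samples, pct):
    """Nearest-rank percentile of a list of latencies (seconds)."""
    ordered = sorted(samples)
    index = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[index]

BASELINE_P95 = {"/api/orders": 0.30, "/api/search": 0.50}  # seconds

def regressions(measured):
    """Endpoints whose p95 latency exceeds the agreed baseline."""
    return [endpoint for endpoint, samples in measured.items()
            if percentile(samples, 95) > BASELINE_P95[endpoint]]

# Latencies as they might come out of today's API test run.
measured = {
    "/api/orders": [0.12, 0.15, 0.14, 0.16, 0.13],
    "/api/search": [0.40, 0.45, 0.80, 0.42, 0.41],  # one slow outlier
}
print(regressions(measured))  # -> ['/api/search']
```

Wire the output into the build pipeline (fail the build on any regression) and the framework catches performance drift the day it is introduced, while the APM tool tells you where in the stack the time went.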
By deploying a small set of features into production, the risk of a big outage decreases. With advanced monitoring solutions (APM) in place, a drop in stability or performance can be detected easily. What's more, with feedback loops from production back to test and development, teams can be held accountable for the bugs they have introduced and learn from the issues. Excellent!
A simple Synthetic Monitoring (SM) solution can be very helpful. It is a robot that puts a light load onto production to measure the stability and availability of the application 24/7. Simple is key, as you don't want to invest too much time in maintaining an always-broken SM solution.
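"Simple" really can mean a dozen lines. The sketch below shows the shape of such a robot: one lightweight probe repeated on a schedule plus an availability calculation. The `probe` function is a stand-in; in a real SM robot it would perform an HTTP request with a timeout against the production endpoint.

```python
# Synthetic monitoring sketch. `probe` is a stand-in for a real HTTP
# check against production (e.g. a GET with a short timeout).

def probe():
    """Return (ok, response_time_seconds) for one synthetic transaction."""
    return True, 0.12  # a real probe would measure an actual request

def availability(results):
    """Fraction of probes that succeeded."""
    return sum(1 for ok, _ in results if ok) / len(results)

# In production this loop would run forever on a schedule; here, 10 probes.
results = [probe() for _ in range(10)]
print(f"availability: {availability(results):.1%}")
```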
I grew up in Belgium in a province called Limburg, and I currently work in Heerlen in the Netherlands. Both regions were famous for coal mining. Coal miners would carry a canary down into the mining tunnels; carbon monoxide would kill the canary before it killed the miners. When we release features into production, we should think about the canary: do the release on a small subset of servers and expose the feature to a small subset of users. Monitoring those servers and users will tell us whether the canary is dying or singing its delightful song.
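The routing side of a canary release can be sketched with a deterministic hash: a small, fixed slice of users is sent to the canary fleet, and hashing the user id keeps each user on the same fleet across requests. The 5% slice and the user-id scheme below are illustrative assumptions.

```python
# Canary routing sketch: deterministically assign a small percentage of
# users to the canary fleet. The 5% figure is illustrative.
import hashlib

CANARY_PERCENT = 5  # expose the new release to ~5% of users

def fleet_for(user_id):
    """Map a user id to 'canary' or 'stable', stable across requests."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return "canary" if bucket < CANARY_PERCENT else "stable"

assignments = [fleet_for(f"user-{i}") for i in range(1000)]
print(assignments.count("canary"), "of 1000 users hit the canary")
```

If monitoring shows the canary fleet degrading, only that small slice of users is affected and the release can be rolled back before the whole mine is at risk.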
Only by changing the way we do performance testing can performance engineers be successful and become digital superheroes in the world of DevOps. Shift left, shift right and move forward, faster than ever before. I have written three other articles that are closely related to testing in a DevOps world and may be worth reading: