Performance Testing in a DevOps World: Shifting Left

This is the last of three blog posts about Performance Testing in a DevOps World. DevOps is about having small teams working closely together, automating as much as possible and accelerating releases. In my first two blog posts I explained what DevOps means for a performance engineer and why Shifting Right is important.

In this blog post I will explain what Shifting Left means and why it is crucial to uplift quality, increase the cadence of the release train and improve the job satisfaction of a test engineer.

Shifting “Performance Testing” Left is about doing performance test activities earlier in the SDLC. A test engineer can start by assessing the design of a feature and can work with development on code profiling.

For DevOps programs based on Scaled Agile, a Program Increment (PI) planning day might be the right moment to start involving performance engineers. The engineer gets an understanding of the objectives of each scrum team and gets to know the features that are planned to be delivered during the next increment.

Non-Functional Requirements

After so many years, I am still surprised that non-functional requirements are so often considered secondary to functional requirements. Too often, the pressure to release functionality is so high that non-functional requirements are not (well) considered.

This may result in new features that work correctly from a functional point of view but are too slow or too unstable to be used. Or features may put so much stress on system resources (disk I/O, CPU, memory, threads, data pools) that they also impact other functionality.

Scrum teams should include the relevant performance requirements (NFRs) in their Definition of Done (DoD) and do some kind of performance testing against their features to make sure that the complete solution meets the NFRs.

A golden tip is to NOT overdo NFRs but to keep them simple, understandable and testable. For performance engineering, a great NFR could be to improve performance by at least 1% in every iteration.

Can you imagine the improvements you have made after many iterations? And more importantly… you keep a focus on performance. To put this into numbers: with three-week iterations, the performance of a system would improve by roughly 15% or more in one year.
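As a back-of-the-envelope check on that figure, assuming three-week iterations and a compounding 1% gain per iteration:

$$ \tfrac{52}{3} \approx 17 \;\text{iterations per year}, \qquad 1.01^{17} \approx 1.18 $$

So the compounding effect actually lands closer to 18%, comfortably above the 15% quoted.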

Performance Risk Assessment

A risk assessment can be defined as a high-level “paper” assessment to analyse which features need performance testing and which features don’t.

A risk assessment may include a design review to analyse the architecture of the solution to see if the business flow of transactions will meet the performance, availability and reliability requirements.

For a design review it is important to fully understand the solution and the reasoning behind the architectural decisions that have been taken. It can also be important to understand the strategic direction. The point of a risk assessment is to be smart and efficient in testing: focus on the most crucial features, not on the low-risk ones.

Unit Testing & Code Profiling

It makes so much sense to start with performance testing when you write your first line of code. Developers should not only be proud of what their code does but also of how it runs.

For critical code (methods that are used often, or methods that process heaps of data), a profiler should be used to assess the performance of the code. Performance is not only the time it takes to execute the method but also the CPU it uses and how memory is allocated.  

For Java the most commonly used profilers are VisualVM and JProfiler. A great article describing how to do code profiling (NetBeans) can be found here.
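Alongside a full profiler, a micro-benchmark harness can quantify the cost of a single hot method before it ever reaches a test environment. Below is a minimal sketch using JMH (the OpenJDK micro-benchmark harness); the parseOrder method and its payload are hypothetical stand-ins for your own critical code:

```java
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

@BenchmarkMode(Mode.AverageTime)        // report average time per call
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@State(Scope.Thread)
public class CriticalMethodBenchmark {

    private String payload;

    @Setup
    public void setUp() {
        // Hypothetical input; use data that resembles production volumes.
        payload = "order-12345;qty=7;price=19.99";
    }

    @Benchmark
    public int parseOrder() {
        // Hypothetical "critical method": the code you suspect is hot.
        return payload.split(";").length;
    }
}
```

Run it via the JMH Maven archetype or runner; the point is not the absolute numbers but spotting a method whose cost per call suddenly grows between builds.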

As a performance test engineer, I have been using APM tools (Dynatrace, AppDynamics) to profile the code. It is just brilliant to be able to look at the performance of an application and track down the methods that cause these nasty system bottlenecks.

API Testing (REST/SOAP)

The majority of web-based systems will use the SOAP or REST protocol to communicate from the web-based front-end (consumer) through a service bus to the back-end systems (provider). Although it may seem difficult to test SOAP or REST calls because you cannot just record such a call with your load test tool, it is actually easier and will provide you with a more robust and automated test framework.

HTML web-based load test scripts, mostly used for E2E testing, are very sensitive to changes in the application. A new release can easily break all these scripts, so re-recording will be required. It can also happen that your scripts do not fail but that some objects (heavy CSS, JS) are not downloaded, giving you incorrect measurements.

When you need to deliver features rapidly into production you may not have the luxury of time to re-record all of your load test scripts. Therefore it is beneficial to (also) have a more robust framework based on API calls. API calls do not change as often and the effort to re-create these types of scripts is lower.

To build up a reliable load test based on API calls you will need to understand the workload of your service calls, which can be tricky to retrieve. The data contained in the XML payloads can also be difficult to model correctly. The golden rule for successfully building an API framework is to start small and build it up over different sprints. Start with the SOAP/REST calls and methods that are used most often.

Once your first call is working you can start testing. There is no point in waiting until “everything” is finished before you start testing. There is a great video about how to create a SOAP test with NeoLoad; the high-level method described would be similar for other testing tools like LoadRunner or JMeter.
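If you want to see what such an API script boils down to without any tool at all, the skeleton below fires concurrent GET requests and reports a 95th percentile. It is a minimal sketch only: the endpoint https://example.com/api/orders, the user count and the call count are illustrative assumptions, and a real framework would add think time, workload modelling and test data:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.*;
import java.util.concurrent.*;

public class MiniApiLoadTest {

    // Hypothetical endpoint used for illustration only.
    private static final String ENDPOINT = "https://example.com/api/orders";
    private static final int USERS = 10;          // concurrent virtual users
    private static final int CALLS_PER_USER = 50; // requests per user

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(ENDPOINT)).GET().build();

        ExecutorService pool = Executors.newFixedThreadPool(USERS);
        List<Long> latencies = Collections.synchronizedList(new ArrayList<>());

        Runnable user = () -> {
            for (int i = 0; i < CALLS_PER_USER; i++) {
                long start = System.nanoTime();
                try {
                    HttpResponse<String> resp =
                            client.send(request, HttpResponse.BodyHandlers.ofString());
                    if (resp.statusCode() == 200) {
                        latencies.add((System.nanoTime() - start) / 1_000_000); // ms
                    }
                } catch (Exception e) {
                    // A failed call is simply not counted in this sketch.
                }
            }
        };

        for (int u = 0; u < USERS; u++) pool.submit(user);
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);

        Collections.sort(latencies);
        if (!latencies.isEmpty()) {
            long p95 = latencies.get(Math.max(0, (int) (latencies.size() * 0.95) - 1));
            System.out.printf("calls=%d  p95=%d ms%n", latencies.size(), p95);
        }
    }
}
```

The same shape — a pool of virtual users, a timed call, a percentile over the collected latencies — is what NeoLoad, LoadRunner or JMeter give you out of the box, with proper workload modelling on top.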

Continuous Testing … so AUTOMATE!

In a DevOps world, continuous is everywhere: Continuous Integration, Continuous Deployment, Continuous Feedback and Continuous Testing. I believe that the importance of continuous lies not in the technique but in the process: having a team of performance engineers that can focus continuously on the performance of a release train.

In the old days, when projects were delivered in a waterfall fashion, external consultants would often be hired to do the performance testing for that specific project. Once the project was finished, the team would be disbanded and the IP would walk out of the door. Continuous in DevOps means that there is room to improve test assets and to decrease technical testing debt. IP management and knowledge sharing become easier.

But of course … no DevOps without automation.

I am a bit skeptical about a fully automated suite of performance assets. Executing automated performance tests is easy, but analysing the results and deriving correct conclusions from a test in an automated way looks to be a bridge too far. I totally appreciate the value of automating the execution of performance tests on a daily basis, but without proper pattern recognition of the results, manual analysis seems to be a must. An API framework (SOAP/REST) is a perfect candidate for automated performance regression tests.

Integrating low-level performance tests with continuous integration tools like Jenkins or Bamboo can speed up the validation of a build. But be careful not to fail a build because of instabilities in the environment. If you don’t have a completely stable environment, the risk is that a lot of false positives will slow down your release train.
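One pragmatic way to keep environment noise from failing builds is to gate only on a clear breach of a stored baseline. The sketch below is an illustrative assumption of such a gate (the 20% tolerance and the argument order are made up for this example); a non-zero exit code is what makes Jenkins or Bamboo mark the step as failed:

```java
public class PerfGate {

    public static void main(String[] args) {
        // Illustrative inputs: in a real pipeline, read the measured p95
        // from your load test report and the baseline from a stored file.
        double baselineP95Ms = Double.parseDouble(args[0]); // e.g. 250
        double measuredP95Ms = Double.parseDouble(args[1]); // e.g. 320

        // Tolerance absorbs normal run-to-run noise on a shared environment.
        double tolerance = 0.20; // fail only if 20% worse than baseline
        double limit = baselineP95Ms * (1 + tolerance);

        System.out.printf("baseline=%.0f ms, measured=%.0f ms, limit=%.0f ms%n",
                baselineP95Ms, measuredP95Ms, limit);

        if (measuredP95Ms > limit) {
            System.out.println("PERF GATE FAILED: regression beyond tolerance");
            System.exit(1); // non-zero exit fails the CI step
        }
        System.out.println("PERF GATE PASSED");
    }
}
```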

Conclusion

DevOps is about doing things right and improving while you deliver. Performance testing has many shapes and is always context-driven. There is no one-size-fits-all solution. Being able to do performance engineering at different levels makes a difference. Being able to provide solutions and not just raise defects is a massive change… a change from Performance Testing in the old world towards Performance Engineering in a DevOps world.

DevOps can be daunting: so much change, so many new tools. And we need to trust these developers and operations folks! Start small, learn and embrace the new way of working, and the pleasure that you get out of performance testing will only grow.

source: https://www.linkedin.com/pulse/performance-testing-devops-world-shifting-left-stijn-schepers/
