This is our second blog post on performance testing. In the previous article, we reviewed the different types of performance testing and how those might apply to the application that you are testing. This article will go into more detail about how you can approach performance testing, the pitfalls to avoid, and what to consider when choosing the best tooling for you.
Arguably, the best way to approach performance testing is with a set of clearly defined non-functional requirements. These would define how the system should behave under stress and load, what percentage of system interactions should succeed under these conditions, and how long those interactions should take.
However, you won’t always have these requirements, or they won’t always reflect the real world, so you’ll need an alternative approach. One way is to create benchmarks of your application’s performance by monitoring how the system currently behaves. Once you have this information, you can define your thresholds and ensure that the application never regresses below these baseline performance metrics.
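To make this concrete, here is a minimal sketch of how such thresholds might be enforced automatically. It assumes Locust, an open-source Python load testing tool; the endpoint and threshold numbers are illustrative placeholders, and you would derive yours from your monitoring data:

```python
# locustfile.py - a minimal sketch of enforcing baseline thresholds with Locust.
# The endpoint and threshold numbers are illustrative assumptions; derive
# yours from real monitoring data.
import logging

from locust import HttpUser, events, task


class BaselineUser(HttpUser):
    """A hypothetical user that exercises one representative endpoint."""

    @task
    def view_homepage(self):
        self.client.get("/")


@events.quitting.add_listener
def enforce_baseline(environment, **kwargs):
    """Fail the run with a non-zero exit code if results regress below baseline."""
    stats = environment.stats.total
    if stats.fail_ratio > 0.01:
        logging.error("Baseline breached: more than 1% of requests failed")
        environment.process_exit_code = 1
    elif stats.get_response_time_percentile(0.95) > 800:
        logging.error("Baseline breached: 95th percentile response time over 800 ms")
        environment.process_exit_code = 1
```

Failing the run with a non-zero exit code means a CI pipeline can automatically flag any build that regresses below the baseline.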
When you think about what is involved in performance testing, it can be difficult to imagine how you might actually carry out such activities. If you’re trying to replicate 1,000 users all logging onto your application at the same time, how would you go about simulating this? And what if the numbers are bigger: 10,000 or even 100,000 users?
It is important to remember that this is only one type of performance testing. It can be less daunting to consider other aspects of performance, such as the responsiveness or stability of your application.
As an individual tester, you can quite easily do some manual exploratory testing to gauge the responsiveness and stability of the application for a single user. After all, how the user perceives the application’s performance is a key concern of performance testing.
You can then scale this up by getting other people to perform the same test at the same time and see if the system continues to perform in the same way. This clearly has limitations, though - a normal engineering team will run out of individuals long before they reach the sort of stress limits most applications are designed for.
This is where automated performance testing steps in; instead of lots of people, one person or a small team can develop test scripts with the help of an automated load testing tool. You might still have a large hardware requirement, but we can discuss that in the next section.
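As a minimal sketch of what such a script can look like, the following example again uses Locust; the /login and /dashboard endpoints, the credentials and the wait times are all illustrative assumptions:

```python
# locustfile.py - a minimal sketch of simulating many concurrent users.
# The endpoints, credentials and wait times below are hypothetical placeholders.
from locust import HttpUser, between, task


class LoginUser(HttpUser):
    # Simulated users pause 1-5 seconds between actions, as real people would
    wait_time = between(1, 5)

    def on_start(self):
        # Each simulated user logs in once when it starts
        self.client.post(
            "/login", json={"username": "test-user", "password": "secret"}
        )

    @task
    def view_dashboard(self):
        self.client.get("/dashboard")
```

Running `locust -f locustfile.py --host https://your-app.example --users 1000 --spawn-rate 50` would then ramp up to 1,000 simulated users from a single machine, and Locust’s distributed mode can spread larger loads across several machines - which is where that hardware requirement comes in.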
There are several common mistakes made when undertaking performance testing. Here are some things to consider to help you avoid them.
1. Test the right environment - Ensure that the environment you choose is representative of your production environment. If you test against a development environment that can’t run at the same scale as production, then the results won’t give you a realistic idea of how the system will behave. Alternatively, if you test on your production environment itself, you will be measuring the performance of the real system, but you risk degrading the experience for your users.
2. Consider quality over quantity - You can’t test everything, so think about the tests that are most valuable. This isn’t just a consideration for performance testing, but for all testing activities. Focus on the simplest tests that provide the most value first.
3. Test early - Don’t leave performance testing to the end of your project. If you conduct performance testing alongside development, you will find problems sooner and be able to fix them faster. If you leave performance testing until most of the main development has been completed, any major performance issues you find could be very expensive to resolve. Imagine your team had committed to a specific architecture for your application, and once the product was established you found a performance issue caused by a shortcoming of that architecture. At this point you may have to choose between re-architecting the product and accepting that it will always perform poorly.
Additionally, if you leave performance testing to the end, then you probably won’t have time to complete endurance testing, which by its nature requires long test runs.
4. Tests are still code - Treat your performance testing scripts as you would any other code: don’t hard-code values, and think about reusability, quality and future maintenance. See the sketch after this list.
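As an illustration of that last point, here is a sketch of a Locust script written with maintenance in mind. The target host and credentials are read from environment variables rather than being hard-coded, so the same script can run unchanged against any environment; the variable names and endpoints are illustrative assumptions:

```python
# locustfile.py - a sketch of a reusable, configurable test script.
# The variable names (TARGET_HOST, TEST_USERNAME, ...) and endpoints are
# illustrative assumptions, not a prescribed convention.
import os

from locust import HttpUser, between, task


class ConfigurableUser(HttpUser):
    # The target host comes from configuration, not from the script itself
    host = os.environ.get("TARGET_HOST", "http://localhost:8080")
    wait_time = between(1, 3)

    def on_start(self):
        # Credentials are injected per environment, not committed to the repo
        self.client.post(
            "/login",
            json={
                "username": os.environ.get("TEST_USERNAME", "test-user"),
                "password": os.environ.get("TEST_PASSWORD", "secret"),
            },
        )

    @task
    def browse(self):
        self.client.get("/products")
```

Treating configuration as an input rather than a constant also makes it safer to keep the scripts in version control alongside the application code.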
There is a lot to consider when starting performance testing, but starting early and finding the right tool will help to make your performance testing journey successful.