Performance Testing Best Practices: What You Should Know

Today, the functionality of a product is undoubtedly an important and integral part of the testing process. But response time, reliability, resource usage, and scalability matter as well. Performance testing is a separate type of testing that ensures software applications will perform without problems under the expected workload.

With the growth of automated testing, there is a variety of ready-made tools that generate load, record scripts, and run them on a schedule. But before you start doing this, you need to know the basic concepts of performance, the key performance parameters, and how to influence them. Ultimately, this will make the process of automating load more objective and will enable you to use the correct data, design useful scenarios, and analyze the results competently.

So, today we will consider the following points:

  • what should we test
  • technical requirements and their types
  • list of performance testing tools
  • performance testing goals
  • performance testing
  • load models
  • performance metrics
  • sources of problems

What Should We Test?

Performance testing definitions

Efficiency – a set of attributes describing the relationship between the level of performance of the software and the amount of resources used, under stated conditions.

Efficiency characteristics:

  • time behavior – software attributes relating to response and processing times and to the speed of executing its functions.
  • resource behavior – software attributes relating to the amount of resources used and the duration of such use while performing its functions.

These are the two characteristics we try to check when conducting performance testing: that the behavior over time and the resource usage meet the requirements.

Remembering the basics of test design, we know that the most interesting errors occur at the boundaries of values. So the most interesting system behavior begins when resources start to run out – CPU, memory, network, disk, etc. And when we reach the resources’ boundary values, the server begins to experience performance problems, alongside which functional problems occur.

Therefore, when we conduct performance testing, on the one hand we pay attention to efficiency (resource consumption), and on the other hand we are also interested in the functional defects that occur when the boundaries of the available server resources are reached.

Reliability – a set of attributes related to the ability of the software to maintain its level of performance in the given conditions for a specified period of time.

Reliability characteristics:

  • stability (maturity) – software attributes related to the frequency of failures caused by software errors.
  • fault tolerance – software attributes related to its ability to maintain a specified level of performance in cases of software errors or violations.
  • recoverability – software attributes related to its ability to re-establish its level of performance and recover the data directly affected by a failure, as well as the time and effort required to do so.

These three characteristics should be included in performance testing; in other words, performance testing is reliability testing plus efficiency testing, using the same tools and the same data analysis approaches.

Technical Requirements

By completeness of presentation, the requirements can be divided into detailed and informal ones.

Detailed requirements are not always as good as they may seem, because they can make the tester complacent. Informal requirements, on the contrary, provide more freedom for creativity. The most suitable option is incompletely described requirements. And you will hardly ever see requirements that are strictly observed.

Three basic components of technical requirements:

  • load generation – the load must be generated for load testing;
  • performance characteristics monitoring – since the server load is high, a sufficiently large set of indicators has to be read, and if you want to use them for analysis, you must collect them first;
  • results analysis – computing the mean values of various parameters, as well as analyzing serious abnormalities (anomalies).
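The three components above can be sketched in a few lines of Python. This is only a toy harness, not a real tool: the `send_request` function here just simulates a latency sample, whereas a real load test would time actual HTTP calls.

```python
import random
import statistics

def send_request():
    # Hypothetical request: a latency sample is simulated
    # instead of timing a real HTTP call.
    return random.uniform(0.01, 0.05)  # seconds

def run_load(num_requests):
    """Generate load, record each response time, and analyze the results."""
    samples = [send_request() for _ in range(num_requests)]  # generation + monitoring
    return {                                                 # results analysis
        "mean": statistics.mean(samples),
        "min": min(samples),
        "max": max(samples),
    }

report = run_load(50)
print(report["mean"])
```

Real tools wrap exactly these three stages, adding script recording, scheduling, and graphing on top.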

Performance Testing Tools

Load generation tools

Load generation tools are used to generate a large number of requests, to write scripts, and to record and play them back.

Monitoring tools

This type of tool allows you to measure the main characteristic – response time – on the client side rather than on the server.

  • built-in tools
  • operating system tools
  • means of servers, DBMS
  • specialized means
  • Zabbix, Nagios, Hyper

Analysis of results tools

When speaking about results analysis in performance testing, what is meant first of all is the ability to build various graphs. The following tools can be used:

  • built-in tools.
  • spreadsheets.
  • packages for statistical data processing.

Performance testing goals

In simple terms, the goal of performance testing is to “get performance information and provide it to stakeholders so that they can use it to make decisions”. How can this information be used further?

  1. To verify the requirements compliance.
  2. To compare different versions or system configurations.
  3. To identify bottlenecks.

Load Models

Depending on the type of performance testing, you should know and use one of the standard load models.

Load model – a definition of how much load must be applied to the system at each specific moment in time.

1. Constant load

It means that we send approximately the same number of requests over a long period of time and observe the response time. The main objective of the constant load model is observing the response time: it should remain approximately constant, unchanged for the entire duration of the load.
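A constant load is easy to express as a schedule, and the stability criterion can be stated as a simple check. This is a minimal sketch under assumed figures (10 requests per second, a 20% drift tolerance), not a prescription:

```python
def constant_load_schedule(rps, duration_s):
    """Number of requests to fire in each one-second interval: always the same."""
    return [rps] * duration_s

def is_stable(response_times, tolerance=0.2):
    """Under constant load, the mean response time should not drift by more
    than `tolerance` (as a fraction) between the two halves of the run."""
    half = len(response_times) // 2
    first = sum(response_times[:half]) / half
    second = sum(response_times[half:]) / (len(response_times) - half)
    return abs(second - first) / first <= tolerance

schedule = constant_load_schedule(10, 60)  # 10 requests/second for one minute
```

A run whose second half is noticeably slower than its first half fails the check, which is exactly the deviation this model is meant to expose.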

2. Continuously increasing load

Unlike the previous model, where a uniform load should not cause any problems, here we expect to encounter problems as the load increases. The main objective of this model is to find the saturation point, after which serious problems occur in the system – deviations from the expected behavior (failures).
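The search for a saturation point can be illustrated with a toy model in which latency stays flat at low load and blows up, queueing-style, as the load approaches capacity. The capacity, step, and SLA figures below are made up for illustration:

```python
def simulated_response_time(load, capacity=100):
    """Toy queueing-style model: latency grows sharply near capacity."""
    if load >= capacity:
        return float("inf")  # the server has effectively stopped responding
    return 0.1 / (1 - load / capacity)

def find_saturation_point(sla_s=1.5, step=5):
    """Increase the load step by step until the response time breaks the SLA."""
    load = step
    while simulated_response_time(load) <= sla_s:
        load += step
    return load

saturation = find_saturation_point()
```

In a real test, `simulated_response_time` would be replaced by an actual measurement at each load step; the stepping logic stays the same.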

3. Constant load, close to the limit

After continuously increasing the load, we should check the system’s stability in states close to critical. To do this, we take a value equal to two-thirds of the critical one and verify that the system works stably under such a load. Then we perform the same test at a value 10 percent below the critical one; this last test catches resource leaks. So this type of load tests the system’s stability under loads close to the marginal ones.
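The two stability-test levels follow directly from the measured critical load. A trivial helper (the critical-load figure of 300 requests/second is an assumed example):

```python
def soak_load_levels(critical_load):
    """Stability-test levels derived from the measured critical load:
    two-thirds of it, and 10 percent below it (the latter surfaces
    resource leaks that only appear very close to the limit)."""
    return {
        "two_thirds": round(critical_load * 2 / 3),
        "ten_percent_below": round(critical_load * 0.9),
    }

levels = soak_load_levels(300)  # hypothetical critical load of 300 req/s
```

Each level would then be held constant for a long soak run, reusing the stability check from the constant-load model.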

Load Testing Metrics

A metric is a standard for measuring a computer resource. Metrics can refer either to a resource and its units of measure, or to the data collected about that resource.

The load testing metrics described below are key performance indicators for your web application or website. Response metrics show the performance measurement from a user perspective, while volume metrics show the traffic generated by the load testing tool against the target web application. The most common performance metrics are listed below:

  • response time – the most basic metric; watch both the mean and the individual anomalies and deviations that lead to delays.
  • number of failures – failures on the server caused by the load.
  • resource consumption – how many resources the server consumes.

There are other performance indicators you may measure depending on your project, server, requirements, and problems. But the metrics described above are the core ones.
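Computing the core metrics from raw measurements takes only a few lines. This sketch uses a crude sorted-index 95th percentile; real tools use more careful interpolation, and the sample data here is invented:

```python
import statistics

def summarize(response_times, failures, total_requests):
    """Compute the three core metrics from raw measurements."""
    ordered = sorted(response_times)
    p95_index = max(0, int(len(ordered) * 0.95) - 1)  # crude percentile index
    return {
        "mean_response_s": statistics.mean(response_times),
        "p95_response_s": ordered[p95_index],
        "failure_rate": failures / total_requests,
    }

samples = [t / 10 for t in range(1, 21)]  # invented samples: 0.1 .. 2.0 seconds
report = summarize(samples, failures=3, total_requests=100)
```

The 95th percentile matters because a healthy mean can hide the individual anomalies and deviations the response-time bullet above warns about.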

Sources of Problems

The following are the main known sources of problems.

  1. Lack of resources – CPU time, RAM, network bandwidth, disk subsystem throughput
  2. Non-optimal algorithms – the computation is slow because the algorithms used lead to long calculation times
  3. Incorrect load balancing – one server is overloaded while another is underloaded
  4. Incorrect resource caching – frequently used data consumes as many resources as if it were requested for the first time
  5. Incorrect queue management – more resources are allocated to one request than to another
  6. Functional defects – bugs in the product that lead to performance problems


Performance testing protects your investment from product failure. The cost of performance testing is usually more than compensated for by improved customer satisfaction and retention.

In this article, we have covered the most important aspects you should know to build automated performance testing for your product. Performance testing isn’t inherently difficult. When starting out, keep the fundamentals of performance testing in mind, evaluate user behaviors and workflows, and be prepared for real-world scenarios.

About QAwerk

QAwerk is a Ukrainian company that provides testing services on demand. The team’s expertise covers all testing stages and methods, allowing it to monitor the stability of separate units and of the system as a whole, provide integration possibilities, and ensure positive feedback from your customers. In that way, QAwerk serves as a one-stop shop for the quality assurance of your project.