All tests were executed on a server with a fresh database containing no pre-created users or test data. The data for each test case was generated automatically right before execution. This approach kept the autotests stable and allowed running them in any order, because there were no interdependencies between test data. For example, if two test cases rely on the same piece of test data and the first test deletes it as one of its steps, the second test will fail, producing a false positive.
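The article does not name a test framework, but the per-test data generation it describes can be sketched in Python. The helper below (the function name and field layout are assumptions for illustration) gives every test its own unique user, so no two tests can ever collide on shared data:

```python
import uuid


def make_test_user(prefix="autotest"):
    """Create a unique user record for a single test case.

    Each call generates fresh data, so no two tests share a user,
    and one test deleting its data cannot break another test.
    """
    unique = uuid.uuid4().hex[:8]
    return {
        "username": f"{prefix}_{unique}",
        "email": f"{prefix}_{unique}@example.com",
    }
```

Because every record is unique, the tests stay order-independent: a test that deletes its own user touches nothing that another test relies on.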
Our autotests also contained post-conditions for data cleanup. When new data is generated for every test case, the database grows quickly, putting unnecessary strain on the server. Therefore, after a successful run, each autotest deleted the test data it had created.
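A common way to pair the pre-condition (create data) with the post-condition (clean it up) is a fixture with teardown. A minimal sketch using pytest, with an in-memory dictionary standing in for the real database (both are assumptions, not the article's actual stack):

```python
import pytest

FAKE_DB = {}  # stand-in for the real database (assumption)


def create_user(username):
    """Pre-condition: insert a user record before the test runs."""
    FAKE_DB[username] = {"username": username}
    return FAKE_DB[username]


def delete_user(username):
    """Post-condition: remove the record so the database does not
    grow with every test run."""
    FAKE_DB.pop(username, None)


@pytest.fixture
def fresh_user():
    user = create_user("autotest_user_42")
    yield user  # the test body runs here
    delete_user(user["username"])  # cleanup after the test finishes
```

Any test that takes `fresh_user` as an argument gets its own record and leaves the database in the state it found it.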
As you can imagine, running all 284 autotests in a single run takes a while, or about 6 hours, to be precise. To speed things up, we divided the tests into groups, singling out a smoke group of about 30 autotests covering the main product features. The other groups were labeled by the functionality they checked, for example, “login”, “add to cart”, and “create account.”
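Grouping like this is typically done with test markers. A sketch of what the labels could look like with pytest marks (the marker and test names are assumptions; in a real project they would be registered in `pytest.ini` to avoid warnings):

```python
import pytest


# Markers mirror the groups from the article: "smoke" for the core
# product features, plus per-feature groups like "add_to_cart".
@pytest.mark.smoke
def test_login_page_opens():
    assert True  # placeholder for a real check


@pytest.mark.add_to_cart
def test_item_added_to_cart():
    assert True  # placeholder for a real check
```

Running `pytest -m smoke` then executes only the smoke group, while `pytest -m "not smoke"` runs everything else.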
These tests ran automatically after each commit to the GitLab repository. If needed, autotests for a specific group could also be launched or skipped manually.
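In GitLab CI, this setup could be expressed as one job per group: the smoke job triggered on every push and the feature-group jobs available for manual launch. A sketch of a `.gitlab-ci.yml` fragment (job names, marker names, and the pytest command are assumptions, not the article's actual pipeline):

```yaml
smoke:
  stage: test
  script: pytest -m smoke        # runs automatically on every commit
  rules:
    - if: $CI_PIPELINE_SOURCE == "push"

login:
  stage: test
  script: pytest -m login
  when: manual                   # launched by hand when needed

add_to_cart:
  stage: test
  script: pytest -m add_to_cart
  when: manual
```

Manual jobs stay visible in the pipeline, so a specific group can be started with one click or simply left skipped.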