
Test Flakiness – One of the main challenges of automated testing (Part II)

By George Pirocanac

This is part two of a series on test flakiness. The first article discussed the four components in which tests run and the possible reasons for test flakiness in each. This article discusses triage tips and remedies for each of those reasons.


Components


To review, the four components where flakiness can occur are:
  • The tests themselves
  • The test-running framework
  • The application or system under test (SUT) and the services and libraries that the SUT and testing framework depend upon
  • The OS, hardware, and network that the SUT and testing framework depend upon

This was captured and summarized in the diagrams in the first article (the hardware/software stack and the full test-running environment).

The reasons, triage tips, and remedies for flakiness are discussed below, by component.



The tests themselves


The tests themselves can introduce flakiness. Sources include the test data, the test workflows, the initial setup of test prerequisites, and the initial state of other dependencies.


Reason for Flakiness: Improper initialization or cleanup.
Tips for Triaging: Look for compiler warnings about uninitialized variables. Inspect initialization and cleanup code. Check that the environment is set up and torn down correctly. Verify that test data is correct.
Type of Remedy: Explicitly initialize all variables with proper values before their use. Properly set up and tear down the testing environment; consider an initial test that verifies the state of the environment. (See the first sketch after Table 1.)

Reason for Flakiness: Invalid assumptions about the state of test data.
Tips for Triaging: Rerun test(s) independently.
Type of Remedy: Make tests independent of any state from other tests and previous runs.

Reason for Flakiness: Invalid assumptions about the state of the system, such as the system time.
Tips for Triaging: Explicitly check for system dependency assumptions.
Type of Remedy: Remove or isolate the SUT's dependencies on aspects of the environment that you do not control. (See the second sketch after Table 1.)

Reason for Flakiness: Dependencies on execution time, expecting asynchronous events to occur in a specific order, waiting without timeouts, or race conditions between the tests and the application.
Tips for Triaging: Log the times when accesses to the application are made. As part of debugging, introduce delays in the application to check for differences in test results.
Type of Remedy: Add synchronization elements to the tests so that they wait for specific application states. (See the third sketch after Table 1.) Disable unnecessary caching to have a predictable timeline for the application responses. Note: do NOT add arbitrary delays, as these can become flaky again over time and slow down the test unnecessarily.

Reason for Flakiness: Dependencies on the order in which the tests are run. (Similar to the second case above.)
Tips for Triaging: Rerun test(s) independently.
Type of Remedy: Make tests independent of each other and of any state from previous runs.

Table 1 - Reasons, triage tips, and remedies for flakiness in the tests themselves
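
To make the first remedy concrete, here is a minimal sketch of per-test setup and teardown, assuming Python with the pytest framework; the database schema and data are invented for illustration. Each test gets a fresh, private database, so no state leaks between tests or between runs, and the first test doubles as the suggested check that the environment is sane.

```python
import sqlite3

import pytest


@pytest.fixture
def db(tmp_path):
    """Give every test its own freshly initialized database."""
    # tmp_path is a per-test temporary directory provided by pytest,
    # so tests cannot see each other's files or any previous run's state.
    conn = sqlite3.connect(str(tmp_path / "test.db"))
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES ('alice')")  # known-good data
    conn.commit()
    yield conn
    conn.close()  # teardown runs even if the test body fails


def test_environment_is_sane(db):
    # An initial test that verifies the state of the environment,
    # as suggested in the first row of Table 1.
    assert db.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 1
```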
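
For invalid assumptions about system state such as the time, a common remedy is to inject the clock rather than read it directly. The sketch below is a hypothetical example: the Invoice class and its fields are invented, but the pattern of passing a time source as a parameter is the point.

```python
import datetime


class Invoice:
    """Hypothetical class used to illustrate clock injection."""

    def __init__(self, due, clock=datetime.date.today):
        # `clock` defaults to the real system date in production,
        # but tests can substitute a fixed, controlled time source.
        self.due = due
        self._clock = clock

    def is_overdue(self):
        return self._clock() > self.due


def test_overdue_is_stable_no_matter_when_it_runs():
    # Pin "today" so the result does not depend on the machine's clock.
    fake_today = lambda: datetime.date(2024, 1, 2)
    invoice = Invoice(due=datetime.date(2024, 1, 1), clock=fake_today)
    assert invoice.is_overdue()
```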
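
For the timing-related remedies, the table warns against arbitrary delays. A common synchronization element is a bounded polling wait: it returns as soon as the application reaches the expected state and fails loudly on timeout. A minimal sketch in Python; the predicate passed to it is whatever applies to your SUT.

```python
import time


def wait_for(condition, timeout=5.0, interval=0.05):
    """Poll `condition` until it returns a truthy value or `timeout` expires.

    Unlike a fixed time.sleep(), this waits only as long as needed and,
    when the expected state never arrives, fails with a clear error
    instead of letting the test pass or fail by luck further down.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout} seconds")


# Hypothetical usage in a test:
#     wait_for(lambda: app.background_job_finished(), timeout=10)
```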

The test-running framework


An unreliable test-running framework can introduce flakiness. 


Reason for Flakiness: Failure to allocate enough resources for the SUT, thus preventing it from running.
Tips for Triaging: Check the logs to see whether the SUT came up.
Type of Remedy: Allocate sufficient resources.

Reason for Flakiness: Improper scheduling of the tests, so they “collide” and cause each other to fail.
Tips for Triaging: Explicitly run the tests independently and in different orders.
Type of Remedy: Make tests runnable independently of each other. (See the sketch after Table 2.)

Reason for Flakiness: Insufficient system resources to satisfy the test requirements. (Similar to the first case, but here resources are consumed while running the workflow.)
Tips for Triaging: Check system logs to see whether the SUT ran out of resources.
Type of Remedy: Fix memory leaks or similar resource “bleeding,” and allocate sufficient resources to run the tests.

Table 2 - Reasons, triage tips, and remedies for flakiness in the test-running framework
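
One way to make tests runnable independently, so that parallel scheduling cannot make them collide, is to have each test claim its own resources instead of sharing hard-coded ones. The Python sketch below asks the OS for a free port (by binding to port 0) rather than pinning one; the same idea applies to temp directories and database names.

```python
import http.server
import socket
import threading


def start_test_server():
    """Start an HTTP server on an OS-assigned free port.

    Binding to port 0 lets the kernel pick an unused port, so tests
    scheduled in parallel never fight over a hard-coded port number.
    """
    server = http.server.HTTPServer(
        ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler
    )
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]  # the actual port chosen


def test_server_is_reachable():
    server, port = start_test_server()
    try:
        # Connecting proves the server came up on its private port.
        with socket.create_connection(("127.0.0.1", port), timeout=2):
            pass
    finally:
        server.shutdown()
```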


The application or SUT and the services and libraries that the SUT and testing framework depend upon


Of course, the application itself (or the SUT) could be the source of flakiness. An application can also have numerous dependencies on other services, and each of those services can have its own dependencies. In this chain, each of the services can introduce flakiness.


Reason for Flakiness: Race conditions.
Tips for Triaging: Log accesses of shared resources.
Type of Remedy: Add synchronization elements to the tests so that they wait for specific application states. Note: do NOT add arbitrary delays, as these can become flaky again over time.

Reason for Flakiness: Uninitialized variables.
Tips for Triaging: Look for compiler warnings about uninitialized variables.
Type of Remedy: Explicitly initialize all variables with proper values before their use.

Reason for Flakiness: Being slow to respond, or unresponsive, to the stimuli from the tests.
Tips for Triaging: Log the times when requests and responses are made. (See the sketch after Table 3.)
Type of Remedy: Find and remove any causes of the delays.

Reason for Flakiness: Memory leaks.
Tips for Triaging: Watch memory consumption during test runs; tools such as Valgrind can detect leaks.
Type of Remedy: Fix the programming error causing the memory leak. The Wikipedia article on memory leaks has an excellent discussion of these types of errors.

Reason for Flakiness: Oversubscription of resources.
Tips for Triaging: Check system logs to see whether the SUT ran out of resources.
Type of Remedy: Allocate sufficient resources to run the tests.

Reason for Flakiness: Changes to the application (or dependent services) out of sync with the corresponding tests.
Tips for Triaging: Examine the revision history.
Type of Remedy: Institute a policy requiring code changes to be accompanied by tests.

Table 3 - Reasons, triage tips, and remedies for flakiness in the application or SUT
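
To triage a dependency that is sometimes slow to respond, it helps to log request and response times rather than guess. A small sketch in Python; the fetch_profile call in the usage comment is hypothetical.

```python
import logging
import time
from contextlib import contextmanager

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("triage")


@contextmanager
def timed(label):
    """Log how long the enclosed request/response round trip takes."""
    start = time.monotonic()
    try:
        yield
    finally:
        # Timestamped durations in the test log make slow or stalled
        # responses from the SUT's dependencies easy to spot.
        log.info("%s took %.3f s", label, time.monotonic() - start)


# Hypothetical usage around a call to the SUT:
#     with timed("GET /profile"):
#         response = fetch_profile(user_id=42)
```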


The OS and hardware that the SUT and testing framework depend upon


Finally, the underlying hardware and operating system can be sources of test flakiness. 


Reason for Flakiness: Networking failures or instability.
Tips for Triaging: Check for hardware errors in system logs.
Type of Remedy: Fix the hardware errors or run tests on different hardware.

Reason for Flakiness: Disk errors.
Tips for Triaging: Check for hardware errors in system logs.
Type of Remedy: Fix the hardware errors or run tests on different hardware.

Reason for Flakiness: Resources being consumed by other tasks/services not related to the tests being run.
Tips for Triaging: Examine system process activity. (See the sketch after Table 4.)
Type of Remedy: Reduce the activity of other processes on the test system(s).

Table 4 - Reasons, triage tips, and remedies for flakiness in the OS and hardware of the SUT
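
When triaging, a quick snapshot of system activity before the suite runs can reveal noisy neighbors. This sketch assumes the third-party psutil package and an arbitrary 50% CPU threshold; tune both to your environment.

```python
import psutil  # third-party: pip install psutil


def snapshot_system_load(cpu_threshold=50.0):
    """Warn when unrelated processes are already loading the test machine."""
    cpu = psutil.cpu_percent(interval=1)  # sample CPU usage over one second
    mem = psutil.virtual_memory().percent
    print(f"baseline load: cpu={cpu:.0f}% mem={mem:.0f}%")
    if cpu > cpu_threshold:
        # List the busiest processes so the noisy neighbors can be found.
        # (Per-process cpu_percent is rough on a first sample.)
        procs = sorted(
            psutil.process_iter(attrs=["name", "cpu_percent"]),
            key=lambda p: p.info["cpu_percent"] or 0.0,
            reverse=True,
        )
        for proc in procs[:5]:
            print(f"  busy: {proc.info['name']} ({proc.info['cpu_percent']}% cpu)")
```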


Conclusion

As can be seen from the wide variety of failures, keeping flakiness low in automated testing can be quite a challenge. This article has outlined both the components under which tests are run and the types of flakiness that can occur in each, so it can serve as a cheat sheet when triaging and fixing flaky tests.


Test Flakiness – One of the main challenges of automated testing

By George Pirocanac


Dealing with test flakiness is a critical skill in testing, because automated tests that do not provide a consistent signal will slow down the entire development process. If you haven’t encountered flaky tests, this article is a must-read, as it tries to systematically outline the causes of flaky tests. If you have encountered flaky tests, see how many fall into the areas listed below.


A follow-up article will talk about dealing with each of the causes.


Over the years I’ve seen a lot of reasons for flaky tests, but rather than review them one by one, let’s group the sources of flakiness by the components under which tests are run:
  • The tests themselves
  • The test-running framework
  • The application or system under test (SUT) and the services and libraries that the SUT and testing framework depend upon
  • The OS and hardware that the SUT and testing framework depend upon

This is illustrated below. Figure 1 first shows the hardware/software stack that supports an application or system under test. At the lowest level is the hardware. The next level up is the operating system, followed by the libraries that provide an interface to the system. At the highest level is the middleware, the layer that provides application-specific interfaces.

[Figure 1: The hardware/software stack that supports an application or system under test]

In a distributed system, however, each of the services of the application and the services it depends upon can reside on a different hardware/software stack, as can the test-running service. This is illustrated in Figure 2 as the full test-running environment.

[Figure 2: The full test-running environment]

As discussed above, each of these components is a potential area for flakiness.


The tests themselves


The tests themselves can introduce flakiness. Typical causes include:
  • Improper initialization or cleanup.
  • Invalid assumptions about the state of test data.
  • Invalid assumptions about the state of the system; an example is the system time.
  • Dependencies on the timing of the application.
  • Dependencies on the order in which the tests are run. (Similar to the second case above.)


The test-running framework


An unreliable test-running framework can introduce flakiness. Typical causes include:

  • Failure to allocate enough resources for the system under test, thus preventing it from coming up.
  • Improper scheduling of the tests so they “collide” and cause each other to fail.
  • Insufficient system resources to satisfy the test requirements.

The application or system under test and the services and libraries that the SUT and testing framework depend upon


Of course, the application itself (or the system under test) could be the source of flakiness. An application can also have numerous dependencies on other services, and each of those services can have its own dependencies. In this chain, each of the services can introduce flakiness. Typical causes include:
  • Race conditions.
  • Uninitialized variables.
  • Being slow to respond or being unresponsive to the stimuli from the tests.
  • Memory leaks.
  • Oversubscription of resources.
  • Changes to the application (or dependent services) out of sync with the corresponding tests.

Testing environments are called hermetic when they contain everything that is needed to run the tests (i.e., no external dependencies such as servers running in production). Hermetic environments are, in general, less likely to be flaky; the sketch below illustrates the idea.
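
As an illustration, a test can stand up an in-process fake of an external service instead of calling a production server. The sketch below is a hypothetical example in Python using only the standard library; the /quote endpoint and its payload are invented.

```python
import http.server
import json
import threading
import urllib.request


class FakeQuoteService(http.server.BaseHTTPRequestHandler):
    """In-process stand-in for a production dependency (hypothetical API)."""

    def do_GET(self):
        body = json.dumps({"symbol": "XYZ", "price": 42.0}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet


def test_hermetic_quote_lookup():
    # The fake lives inside the test process: no production traffic,
    # no network flakiness, and the response is fully under our control.
    server = http.server.HTTPServer(("127.0.0.1", 0), FakeQuoteService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        url = f"http://127.0.0.1:{server.server_address[1]}/quote"
        with urllib.request.urlopen(url, timeout=2) as response:
            assert json.load(response)["price"] == 42.0
    finally:
        server.shutdown()
```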

The OS and hardware that the SUT and testing framework depend upon



Finally, the underlying hardware and operating system can be sources of test flakiness. Typical causes include:
  • Networking failures or instability.
  • Disk errors.
  • Resources being consumed by other tasks/services not related to the tests being run.

As can be seen from the wide variety of failures, keeping flakiness low in automated testing can be quite a challenge. This article has outlined both the areas and the types of flakiness that can occur in them, so it can serve as a cheat sheet when triaging flaky tests.


In the follow-up to this post, we’ll look at ways of addressing these issues.

