George Tsiokos


Sleeping automated tests
Jan 14, 2019
4 minute read

One of the most common mistakes I see with automated testing is sleeping the test runner.


The consequences of sleeping automated tests become more severe as the test suite grows.

Increase continuous delivery execution time

If the tests take too long to execute, many end up re-classified to run daily or weekly rather than on every commit, to keep continuous delivery free of artificial bottlenecks in release management.

Bugs are no longer found on commit; they're found manually by developers or QA during the day, even though an automated test covers the use case. Developers are left wondering why they discovered the bug and not the automated test.

Worst case, someone authors a new test for a use case that is already covered!

Queue execution of other tests

If the testing infrastructure is busy running tests that are sleeping, other tests become queued for execution, creating a bottleneck that is resolved by removing sleep or increasing the testing infrastructure size.

Increase execution cost

You end up wasting money - many cloud providers charge based on the execution time.

Another popular option is “unlimited” execution under a single pipeline. If that pipeline is busy, you end up purchasing multiple pipelines. This artificially increases cost if the busy pipeline is just sleeping.

Flakiness (false positives)

The most common, and most severe, consequence of sleeping is the test failing for a reason outside the scope of the test case. The test fails not because the behavior under test is wrong, but because the sleep duration was wrong.

What does sleeping look like?

Thread.Sleep(1000); or await sleep(1000);
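For the JavaScript variant, sleep is not built into the language; it is usually a hand-rolled helper over setTimeout, along these lines (a sketch, not from the original post):

```typescript
// A typical hand-rolled sleep: resolves after `ms` milliseconds,
// regardless of what the system under test is actually doing.
const sleep = (ms: number): Promise<void> =>
  new Promise((resolve) => setTimeout(resolve, ms));
```

The helper itself is harmless; the anti-pattern is calling it between test actions in place of waiting for a real signal.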

Why is this wrong?

What does the system under test do? Are the operations time-based? Does the system provide a service level agreement (SLA) that guarantees the execution time is not faster or slower than a constant period of time?

I’m my experience, the target system always has a variable execution time, not constant. Even if the system has a SLA of 30 seconds, wait for the completion event - don’t sleep.

If you have a constant maximum period of time that the test is allowed to execute, configure a test timeout. This ensures the test completes before the timeout and keeps that concern out of the test implementation.
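Conceptually, a test timeout races the whole test body against a deadline. A minimal sketch of what runners do on your behalf (the `withTimeout` name is illustrative, not any framework's API):

```typescript
// Reject if `work` does not settle within `ms` milliseconds.
function withTimeout<T>(work: Promise<T>, ms: number): Promise<T> {
  let timer!: ReturnType<typeof setTimeout>;
  const deadline = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`timed out after ${ms} ms`)), ms);
  });
  // Whichever settles first wins; always clear the timer afterwards.
  return Promise.race([work, deadline]).finally(() => clearTimeout(timer));
}
```

Prefer the runner's built-in setting (NUnit's Timeout attribute, Jest's per-test timeout, and so on) over hand-rolling this yourself.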

Common reasons for variability:

  1. A one-time initialization process
  2. A recent software deployment
  3. A non-cached DNS request
  4. A new TLS connection
  5. Connection pooling when the pool is empty
  6. A cache miss
  7. Internet latency
  8. An application recycle after a period of inactivity

These reasons typically compound on each other!

This seems odd, why would anyone choose to sleep a test?

For request/response API testing, it's very uncommon: it's easy and intuitive to issue a request and wait for the response, through a blocking call or async/await.

More complicated request/response API testing involving concurrent requests, or user interface-based testing, is where sleeping is typically (and incorrectly) used to pause the test runner between actions.

Concurrent requests

When the test calls for concurrent requests, you want to coordinate the completion of the requests and wait for all operations to complete before performing the next action or starting assertions.

For example, to efficiently wait for any number of HTTP requests to complete:

var urls = new string[] { … };

Issue a concurrent async request for each URL:

var requests = urls.Select(httpClient.GetAsync).ToArray();

Wait for all requests to complete: Task.WaitAll(requests);

If all requests complete in 1 second, then we’re only waiting 1 second. If the longest request takes 17 seconds, then we’re waiting 17 seconds.
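The same pattern in TypeScript is Promise.all: issue every request up front, then wait once for all of them. (`fakeFetch` below simulates httpClient.GetAsync so the sketch is self-contained.)

```typescript
// Simulated request: resolves after `ms` milliseconds, like a real HTTP call would.
const fakeFetch = (url: string, ms: number): Promise<string> =>
  new Promise((resolve) => setTimeout(() => resolve(`${url}: ok`), ms));

// Start all requests concurrently, then wait for the slowest one - the
// total wait is the longest request, not the sum of guessed sleeps.
async function fetchAll(requests: [string, number][]): Promise<string[]> {
  return Promise.all(requests.map(([url, ms]) => fakeFetch(url, ms)));
}
```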

Browser-based UI testing

This is the most common use-case I’ve seen for sleeping a test runner. The author of the test guesses or looks at the wall clock to select a constant period of time to pause between each action. For example:

  1. Navigate to URL
  2. Sleep 5 seconds (waiting for page to load)
  3. Find username text box
  4. Enter username
  5. Find password
  6. Enter password
  7. Sleep 1 second
  8. Ensure there is no validation error message
  9. Click login
  10. Sleep 5 seconds
  11. Assert login successful

Browser-based test frameworks provide many tools for the test runner to efficiently wait for a specific event. Waiting for a specific event is the optimized way to author the test.
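Under the hood, those explicit waits poll for a condition and give up at a deadline. A sketch of the idea (`waitUntil` is illustrative; real frameworks such as Selenium's WebDriverWait or Playwright's locator waits provide this for you):

```typescript
// Poll `condition` every `intervalMs`; return its value as soon as it is
// defined, or fail once `timeoutMs` elapses.
async function waitUntil<T>(
  condition: () => T | undefined,
  timeoutMs = 5000,
  intervalMs = 50,
): Promise<T> {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const value = condition();
    if (value !== undefined) return value; // condition met: stop waiting immediately
    if (Date.now() >= deadline) {
      throw new Error(`condition not met within ${timeoutMs} ms`);
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

The login flow above becomes "wait until the username field exists", "wait until the post-login element exists" - each wait ends the instant the condition holds.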

I don’t want the test to run forever

Test runners typically support setting a test timeout to specify the maximum amount of time the test is allowed to execute.

Use this instead of specifying timeouts for every action of a test.

Anti-bot feature requires sleeping

Some applications may include a feature to detect and block automated tools from using the application. This is not an argument to sleep the tests.

The “anti-automation” feature should be either implemented outside the application or as a feature toggle.

For the full automated test suite, run the tests against the application directly, or against a deployment with the “anti-automation” feature toggle disabled. This is perfect for continuous delivery.

One or more smoke tests can hit the application with the feature enabled, to verify that automation is blocked while simulated human interaction is allowed. Simulated human interactions could be captured from production and replayed for testing.
