Troubleshooting random failures with Coded UI Tests

Problem Statement

Coded UI automation tests run fine locally but fail randomly when run on the server. The functionality you are trying to verify works fine, but the tests do not consistently pass on the test agents. Troubleshooting these failing tests is frustrating because the same tests run successfully on your local machine.

In this article, we will look at a simple way to analyze and debug these randomly failing tests and get a better understanding of how to resolve them.

Why is Test Automation required?

Test automation results provide an indication of the stability of your system and bring to light any issues introduced by the latest changes. Automation eliminates the manual effort required for repeated regression tests, which is time-consuming and tedious. The results are accurate and rule out the errors and inefficiencies that come with manual testing. Yes, there is additional effort required to put automation tests in place, but the benefits go a long way.

However, if your automation suite is plagued with non-deterministic tests (tests that fail randomly), it diminishes the value of having an automated regression suite.

Why should we get rid of the random failures?

In my project, we had a set of about 20 tests that used to fail randomly on the QA box but ran fine locally. We made multiple attempts to fix these tests but were not able to overcome this mysterious ‘random’ factor. Before doing a production deployment, we always look at our automation test results. For all these non-deterministic tests, we would run them locally, and if they all passed, we went ahead with our deployment. Gradually, over a period of time, these tests were literally tagged as ‘unreliable’ and we started overlooking their failures. But what if one day these tests fail for a valid reason? How do we distinguish a legitimate failure?

After pushing any change to the QA environment, we wait for the nightly feature test run to identify any issues that might have crept in with the checked-in code. These random failures made it difficult to distinguish between an actual problem and a false alarm.

This made us look for more troubleshooting techniques for Coded UI Tests and strive toward a 100% pass rate.

What are the various challenges to fix these random failures?

We use Microsoft Coded UI to write all our automation tests. Playback is pretty fast, and sometimes we need to add explicit waits to ensure that all components on a page are loaded before the test continues. At the same time, we do not want to introduce extra wait time that would inflate the test execution time. So eventually the magic is to figure out the minimal wait time a test requires to execute successfully.
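Rather than hard-coded delays, Coded UI offers condition-based waits that return as soon as the control is available, up to a timeout. Here is a minimal sketch of that approach; the control id and timeout are illustrative, not from my actual test suite:

```csharp
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UITesting.HtmlControls;

public static class WaitHelpers
{
    public static void ClickWhenReady(BrowserWindow browser)
    {
        // "submit" is a hypothetical element id for this example.
        var submitButton = new HtmlButton(browser);
        submitButton.SearchProperties[HtmlButton.PropertyNames.Id] = "submit";

        // Wait up to 5 seconds for the control to exist, instead of a
        // fixed Thread.Sleep; playback resumes as soon as it appears.
        if (!submitButton.WaitForControlExist(5000))
        {
            throw new System.TimeoutException("Submit button did not appear in time.");
        }

        // Wait until the control is ready to accept input, then click.
        submitButton.WaitForControlReady();
        Mouse.Click(submitButton);
    }
}
```

The condition-based wait keeps the test fast on a quick machine while still tolerating a slow test agent, which is exactly the trade-off described above.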

There can also be external factors, such as agent issues, server issues, and network issues, which impact the feature test runs and cause unexpected, random failures.

Below are a few of the errors I encountered while analyzing the failed Coded UI Tests:

Test method threw exception: Microsoft.VisualStudio.TestTools.UITest.Extension.TechnologyNotSupportedException: Testing web applications in 64-bit Internet Explorer is only supported on Internet Explorer versions 10 or later. —> System.Runtime.InteropServices.COMException: Exception from HRESULT: 0xF004F00A

Initialization method. TestInitialize threw exception. System.Reflection.TargetInvocationException: System.Reflection.TargetInvocationException:
Exception has been thrown by the target of an invocation. —> System.NullReferenceException: Object reference not set to an instance of an object.

The biggest problem while analyzing these test failures is that the error stack trace points at a line of code that may be completely unrelated to the actual cause of failure. From a debugging perspective, it would be nice to have information about the state of the application when things go wrong.

How to configure Coded UI to generate diagnostic information?

There are two ways of enabling HTML logging in your Coded UI Tests:

  • Enabling logging for individual tests: override the logger state in your test code.


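For the per-test option, the logger can be switched on from the test's initialization method. The sketch below assumes the `HtmlLoggerState` enum and `Playback.PlaybackSettings.LoggerOverrideState` property from the Coded UI playback API; the exact namespaces can vary by Visual Studio version:

```csharp
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UITest.Logging; // location of HtmlLoggerState may vary by VS version

[TestInitialize]
public void MyTestInitialize()
{
    // Override the logger for this test only:
    // ErrorOnlySnapshot takes screenshots only on errors,
    // AllActionSnapshot takes a screenshot for every action.
    Playback.PlaybackSettings.LoggerOverrideState = HtmlLoggerState.AllActionSnapshot;
}
```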
  • Enabling logging for all your feature tests: add a trace switch in your app.config file.


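For the suite-wide option, the HTML logger is driven by the `EqtTraceLevel` trace switch in the test project's app.config (or the test agent's QTAgent config), under the standard `system.diagnostics` section:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.diagnostics>
    <switches>
      <!-- EqtTraceLevel controls HTML logging for all Coded UI tests:
           0 = no HTML log, 1-2 = screenshots on errors only,
           3 or higher = screenshots for every action -->
      <add name="EqtTraceLevel" value="4" />
    </switches>
  </system.diagnostics>
</configuration>
```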
What is the significance of EqtTraceLevel values?

0 – Excludes HTML from the log file output.
1 or higher – Generates an HTML log file.
1 and 2 – Screenshots are taken for errors only.
3 or higher – Screenshots are taken for all actions.

When the test completes execution, it attaches an HTML page to the results, showing all the steps taken by the test with a corresponding screenshot of the application. The best part is that it highlights in red the control that Coded UI is trying to search for or operate upon.


As shown in the screenshot below, the HTML log shows a lot of detail about how your test executed, the controls it used, and how to identify the causes of test failures.



To summarize, enabling tracing in Coded UI Tests captures important information about the running tests and helps you troubleshoot random test failures. In my case, I was finally able to fix the ‘random’ factor in my Coded UI Test suite by turning on HTML logging.

I would suggest you try this, and do let me know if you have questions.

Categories: C#, Test Automation
