Sunday, August 26, 2012

Using Fiddler to create VSTS Web Performance Tests


My shop uses Visual Studio 2010 Ultimate Edition for our test framework. Frankly, I spent a few months automating test cases using the coded-UI style of browser automation before I decided that the “Web Performance Test” was better suited to our testing needs.  (I’ll refer to the web performance test as webtest in the rest of this post.)

Our website makes frequent AJAX calls for server-side validation of user entries, and the recorded coded-UI tests would hang non-deterministically. Even though I inserted custom retry logic when entering user values, the framework would still hang, causing our team no end of frustration.

VSTS Web Performance Test (*.webtest)

On a happier note, I’ve had a lot of success using the webtest, a type of test built into VSTS that works at the HTTP layer rather than through interaction with the web browser. In our case we use the webtest as a functional test of the server-side code, and we also combine sets of them into load tests.

That’s a little background on why I use webtests, but I want to comment more about using Fiddler. If you’re not familiar with Fiddler, it’s a free web debugging tool that logs all HTTP and HTTPS traffic between your computer and the internet.

Export Fiddler Sessions when Visual Studio Webtest Recorder Can't

We soon discovered, to our dismay, that when we recorded our shopping-cart wizard as a web performance test inside VSTS, not all HTTP requests (notably the AJAX calls) were captured. But happily Fiddler DOES capture all of this traffic, and as a bonus it lets you export recorded sessions in VSTS web performance test (.webtest) format!

Screens in Fiddler after Choosing to Export Sessions

Sometimes we’ll create a hybrid recording: we record a webtest in VSTS, and if some of the requests were not captured, we record the same workflow in Fiddler and export it as a webtest. After adding the Fiddler-exported webtest to our project, we cut and paste the requests we want into our main webtest.
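As a rough illustration of why the cut-and-paste works: a .webtest file is just XML, with each recorded request stored as its own element. The fragment below is a hand-simplified sketch with placeholder URLs; a real exported file carries many more attributes (GUIDs, think times, validation rules) on each element.

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Simplified sketch of a .webtest file; a real export has many
     more attributes per element. The URLs are placeholders. -->
<WebTest Name="ShoppingCartWizard">
  <Items>
    <!-- Each Request element is self-contained, which is why it can
         be cut from one .webtest and pasted into another. -->
    <Request Method="GET" Url="http://example.com/cart/step1" />
    <Request Method="POST" Url="http://example.com/cart/validate" />
  </Items>
</WebTest>
```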
We’ve had much success using Fiddler to help us build out a complete functional test suite covering some fairly complex workflows, and we would have been stymied without this great tool.

Saturday, August 18, 2012

Which tests should be automated?

I don't know how many times at work I've had this topic come up. Typically a program manager (the one controlling the staffing) muses that if we could just automate 100% of our testing, wouldn't everything be grand! Think of the savings!

Of course this is an unreachable dream. Why can't 100% of tests be automated? Here are a few reasons that come to mind:

Ten Reasons Why 100% Test Automation is only a Dream

  1. There are an infinite number of possible test cases.
  2. We don't have enough trained staff to create the automated tests.
  3. There isn't enough time in the schedule to create the automation.
  4. The more tests created, the more time is needed for test updating and maintenance.
  5. The greater the number of tests in our test run, the more time is needed to analyze the test run results.
  6. Some test cases are too hard to automate, such as image or layout validation.
  7. The test automation team doesn't have comprehensive product detail knowledge.
  8. We don't have a comprehensive test plan.
  9. There is no detailed functional specification.
  10. Our test framework doesn't support manipulation of custom controls.

Well, that is a bit of a discouraging list. So at this point I need to decide how to prioritize our test automation effort. Here's how I usually approach the question of which tests to automate first.

Automate pri-1 "happy path" test cases first

We have a manual test team that knows the product better than the test automation engineers. If they have spent time preparing a test plan for manual test execution, great! We steal some of that work, going through the documentation and making a list of pri-1 test cases. These are the most important "happy path" functions that the software or website supports, so we plan to automate these test cases first.

I talk to the manual testers and try to understand which test cases are the best candidates for automation. I key in on the cases that are simple to execute but involve a lot of repetitive effort or large forms to fill out, or that are generally mind-numbing to execute by hand. I have to use my experience to gauge how much effort would be required to automate each scenario. The best automation candidates are stable features that aren't frequently changed, since every feature change forces an update to the automation.

These pri-1 automated test cases also become excellent candidates for a regression test suite. I've been using Visual Studio recently and like to use both test lists and test categories for organizing our tests into sets that can be used for different purposes. I create a test list for "smoke tests" that includes only reliable deterministic tests that run fast and verify core functionality.
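In Visual Studio's MSTest framework, the category half of that scheme is applied with the [TestCategory] attribute (available since VS2010). A minimal sketch, with a hypothetical test class and method name:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CheckoutTests   // hypothetical name, not from the post
{
    // Tagged for the fast smoke-test pass; categories can be
    // combined with test lists when selecting what to run.
    [TestMethod]
    [TestCategory("Smoke")]
    public void CheckoutHappyPath()
    {
        // ...drive the webtest or API under test here...
    }
}
```

Categorized tests can then be selected from the command line with mstest's /category switch, e.g. mstest /testcontainer:Tests.dll /category:"Smoke".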

I create other test lists that include every test in our arsenal. These we script to run during off-hours using Windows Task Scheduler, because they take several hours to complete.
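An off-hours run like that can be wired up with schtasks and VS2010's mstest.exe, which accepts /testmetadata and /testlist switches for running a named test list. This is only a sketch; the install path, metadata file, and list name below are placeholders for your own project's values.

```bat
REM Sketch only: paths, the .vsmdi file, and the "FullRun" list name
REM are placeholders, not values from the original post.
schtasks /create /tn "NightlyWebtests" /sc daily /st 01:00 ^
  /tr "\"C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\mstest.exe\" /testmetadata:C:\tests\OurSuite.vsmdi /testlist:FullRun"
```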

Next automate pri-2 tests including negative test cases

If we have automated all the pri-1 cases, we scan for secondary scenarios or important negative test cases. Another good source of automation cases is the bug reports. For example, we found that the developers had inserted a debug code into our website wherever a translated string was not found, and the test team kept seeing these codes popping up all over the site. We automated a test that crawled the entire website looking for the code, saving our test team a ton of effort.
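A site crawl like that one can be sketched in a few dozen lines of .NET. This is a minimal illustration rather than the team's actual test: the marker string and start URL are placeholders, and a real version would need error handling and a page limit.

```csharp
// Hypothetical sketch of a marker-string crawler; the marker text
// and start URL are placeholders, not the team's actual values.
using System;
using System.Collections.Generic;
using System.Net;
using System.Text.RegularExpressions;

class DebugCodeCrawler
{
    const string Marker = "##RESOURCE_MISSING##";  // placeholder debug code

    static void Main()
    {
        var toVisit = new Queue<string>();
        var seen = new HashSet<string>();
        toVisit.Enqueue("http://localhost/");      // placeholder start page

        using (var client = new WebClient())
        {
            while (toVisit.Count > 0)
            {
                string url = toVisit.Dequeue();
                if (!seen.Add(url)) continue;      // skip already-visited pages

                string html = client.DownloadString(url);
                if (html.Contains(Marker))
                    Console.WriteLine("Found debug code on " + url);

                // Naive extraction of same-site (root-relative) links.
                foreach (Match m in Regex.Matches(html, "href=\"(/[^\"]*)\""))
                    toVisit.Enqueue(new Uri(new Uri(url), m.Groups[1].Value).ToString());
            }
        }
    }
}
```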

Our team rarely gets past automating a few pri-2 cases before the schedule moves us on to the next project.  What's your experience?