Thursday, August 12, 2010

Your mission – should you choose to accept it – is to automate! – Part 1

The automation is strong with this one…

Writing automated test scripts is an important task that we perform from time to time on almost all projects. Why do we write automated scripts? Because we'll run regression through them, because the acceptance testing will be completely automated, because we have to??!!!

Don't get me wrong, I'm not making fun of automation or its value. In fact, I am a HE-UGE fan. I'm just wondering: are we doing automation for the right reasons, using the right techniques?

I've worked on automation for several projects. And it was always an easy, fun task… well, not always easy to be honest, and not always fun either. Okay, okay, I liked it when it was fun and a little complex, and hated it when it got repetitive and boring. But that is not the point! The point is that there was always something to automate: make sure the scenario "works". Now doesn't that statement sound a little familiar? Didn't project managers and development leads always throw that five-word statement at us whenever we were testing a release under time pressure? And didn't we always tell them that this is not how testing works? It's a shame to find those exact words coming out of the mouth of a tester. In fact, it is just as much of a shame to have the opposite extreme come out of the mouth of a tester: "your scripts must make sure everything works!"

Just because automation may seem like the light at the end of the tunnel doesn't mean that the end of the tunnel is Utopia. We have to know exactly why we are writing automated scripts, and how long it would take to analyze the results. Hell, we have to know what we need to have in the test results in the first place. But above all, we must define the objective of the automation mission.

I mentioned two broad objectives for automated tests earlier. One is regression testing and the other is acceptance testing. Let's start with the first one:

Regression testing is the task of making sure that fixes for old bugs have not introduced new bugs. The activity itself is broad-level testing: a quick test cycle across as much of the system as possible. So the objective of regression testing is to cover all the major bases of the SUT (system under test). That means the automated scripts share the same objective as the regression test itself: cover all the bases; make sure no crashes take place, all the scenarios are completed and committed, and the major items of every page do in fact exist and work. The details of those statements depend on the system and its maturity level, the definition of the major highlights in the system, and the definition of the most critical scenarios. But the main objective has been defined: major system highlights. (I'll sketch what such a script might look like below.)

Under an agile process, the team may have a release weekly. That means the average tester will have to run regression on the old part of the system, test the new functionality, and automate the new functionality for next week's regression, every single week. Now consider a complex or big release where a tester is required to automate "everything": say 50 test cases, covering several modules with several hundred data inputs to make sure there are no data-driven bugs. That could take them a month to code, a week to maintain with every release, and another week to analyze the results. This makes automation a huge bottleneck! At this point, the ROI of the automated scripts is in the red. On top of that, the tester will most probably decide to stick with manual regression (a fast pass over the highlights of the system) rather than waste time on extensive automation.
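Here's the sketch I promised. It assumes a web-based SUT driven through the Selenium WebDriver Python bindings; the URLs, element IDs, and the scenario itself are made up for illustration, so treat it as the shape of a highlights script rather than a real one:

# A "major highlights" regression sketch for a hypothetical web SUT.
# Objective: the same as manual regression -- hit the big things fast,
# not every data combination. Assumes the Selenium Python bindings.
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException

# The pages and major items a manual regression pass would eyeball.
HIGHLIGHTS = [
    ("http://sut.example.com/login",   "login-form"),
    ("http://sut.example.com/orders",  "orders-table"),
    ("http://sut.example.com/reports", "report-list"),
]

def check_highlights(driver):
    """Every highlight page loads and its major item exists."""
    failures = []
    for url, element_id in HIGHLIGHTS:
        driver.get(url)  # a crash or error page fails the lookup below
        try:
            driver.find_element_by_id(element_id)
        except NoSuchElementException:
            failures.append((url, element_id))
    return failures

def run_critical_scenario(driver):
    """One end-to-end scenario: complete it and commit it."""
    driver.get("http://sut.example.com/orders/new")
    driver.find_element_by_id("item").send_keys("widget")
    driver.find_element_by_id("submit").click()
    assert "confirmation" in driver.page_source.lower()

if __name__ == "__main__":
    driver = webdriver.Firefox()
    try:
        failures = check_highlights(driver)
        run_critical_scenario(driver)
    finally:
        driver.quit()
    for url, element_id in failures:
        print("MISSING HIGHLIGHT:", url, element_id)

Notice what it doesn't do: no data permutations, no exhaustive field validation. That's the whole point of matching the script's objective to the regression objective.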

On the other hand, if the automation task has the same objective as the actual function it supports, which is regression, then preparing the test scripts and the data for a week-long, moderately complex release would take 12 to 16 hours (just an educated guess here). So if the release took 40 hours to develop, and the actual testing took 40% of that time, then the testing effort takes 16 hours, scripting takes another 16, regression on the old release takes 4 hours, and maintenance takes another 4. We're talking about a total of 40 hours. Voilà! Mission accomplished :)
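For those who like the math spelled out, here's the back-of-envelope version (every number below is just the rough estimate from the paragraph above, nothing more):

dev_hours = 40
testing = 0.40 * dev_hours   # manual testing of the new functionality: 16 hours
scripting = 16               # automating the new functionality (the 12-16 hour guess)
regression = 4               # running the automated regression on the old release
maintenance = 4              # keeping last week's scripts alive
total = testing + scripting + regression + maintenance
print(total)                 # 40.0 hours -- testing keeps pace with development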

1 comment:

http://safsdev.sourceforge.net/FRAMESDataDrivenTestAutomationFrameworks.htm

see this
