Friday, August 13, 2010

Your mission – should you choose to accept it – is to automate! – Part 2

No, I'm not done with the objectives of automation :)

My purpose is not so much to cover the objectives of automation as it is to cover the idea behind the objectives we define. Testing, and test automation, is like anything else in the software process: it needs design, attention to detail, and a clear idea of what the heck is going on. Testing usually gets that treatment; test automation does not always get it.

In the previous post, I discussed my idea of how to automate tests for regression purposes, and I believe the mission was accomplished with maximum return on investment and minimum extra work. In the case of acceptance testing, however, things get a bit trickier.

Again, before laying my great ideas upon you (kiddiiiing :)), we'll have to define the objective of acceptance testing. By definition, acceptance testing is meant to make sure the system under test conforms exactly to the customer requirements; it is not done for the purpose of finding bugs in the system. So the objective is to make sure that every single detail the client has agreed upon as a system requirement is covered in the acceptance test and is clearly present in the system. It is also to make sure the system behaves "expectedly", i.e. there are no crashes and no surprises due to deployment issues or different environments. The objective of acceptance testing therefore splits into two parts. First, there is the regression test objective mentioned before, which is covering all the bases: no crashes, scenarios completed to the end and committed, and so on. Second, the automated script must map the requirements to a T. So if the requirements say that a password field, for example, must not accept fewer than 6 characters, then an automated acceptance test case must make sure the password text field does not, in fact, accept fewer than 6 characters.
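To make that last point concrete, here is what such a requirement-mapped test case could look like. This is only a minimal sketch using Python, pytest, and Selenium; the URL, element ids, and error message are hypothetical placeholders, not taken from any real system. The point is that the number from the requirement (6) shows up verbatim in the test:

```python
# A minimal sketch of a requirement-mapped acceptance test. The page URL,
# element ids ("password", "submit", "error"), and error copy below are
# all hypothetical placeholders for illustration.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

REQ_MIN_PASSWORD_LENGTH = 6  # taken straight from the requirements document

@pytest.fixture
def driver():
    d = webdriver.Firefox()
    yield d
    d.quit()

def test_password_rejects_fewer_than_six_characters(driver):
    driver.get("http://localhost:8000/signup")  # hypothetical URL
    driver.find_element(By.ID, "password").send_keys(
        "a" * (REQ_MIN_PASSWORD_LENGTH - 1)  # one character too short
    )
    driver.find_element(By.ID, "submit").click()
    # The requirement says the field must not accept fewer than 6
    # characters, so a too-short password must produce a validation error.
    error = driver.find_element(By.ID, "error")
    assert "at least 6 characters" in error.text  # hypothetical error copy
```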

Come to think of it, if the automated acceptance test cases cover the requirements to a T, wouldn't that be too much work? I mean, what were we doing during integration testing, then? We already tested all the requirements and then some. We confirmed that everything "code related" works as well as expected, and we decided to release, so the quality must be reasonably good! That is in addition to all the negative testing, of course; that goes without saying. So why cover every single detail if I, as a tester, am already confident about 70% of them? Does that mean we can alter the testing objective?

Of course we can, but we won't! Let's consider the manual testing perspective. Manual acceptance testing covers the most critical forward scenarios and the most critical business rules; if there is still time, manual testers go a bit deeper into the less critical scenarios and rules, and maybe cover some commonly used negative scenarios. If we map this to automated testing, we have to follow the same concept (see the sketch after this list):

  1. First, cover the basic scenarios of everything. If a regression automation suite is already in place, that is sufficient.
  2. Then, cover the basic business rules, which may or may not already be covered in the regression automation suite.
  3. Next, cover the less critical forward scenarios. This shouldn't take much time because the scenarios covered earlier will be reused; the same goes for the less critical business rules.
  4. Finally, if there is still time, cover some negative scenarios, just to put your conscience to rest.
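Here is one way those tiers could be organized in code. This is my own illustration, assuming pytest with custom markers; the marker and test names are invented, and the markers would need to be registered in pytest.ini to avoid warnings:

```python
# A minimal sketch of tiering acceptance tests by criticality using
# pytest markers. All names below are hypothetical examples.
import pytest

@pytest.mark.critical_scenario
def test_checkout_happy_path():
    ...  # tier 1: most critical forward scenario, runs in every pass

@pytest.mark.critical_rule
def test_order_total_includes_tax():
    ...  # tier 2: most critical business rule

@pytest.mark.secondary_scenario
def test_checkout_with_saved_address():
    ...  # tier 3: reuses the happy-path steps above

@pytest.mark.negative
def test_expired_card_is_rejected():
    ...  # tier 4: negative scenario, run only if time allows
```

Running `pytest -m critical_scenario` first and then widening the `-m` expression tier by tier mirrors the manual priority order exactly.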

Time-wise, this shouldn't take long if a core automation suite is already in place. Most likely, though, that will not be the case. Adding to the test suite iteratively can cause a lot of problems in the maintenance and use of the suite. In addition, tracking bugs would be a pain. The most critical problem, however, would be maintaining a stable version. It's starting to sound a lot like the problems developers face. That's because they are the problems developers face. Will adding a new feature to the test suite break another feature? Will tracing through the code become impossible? Will we end up changing core functionality in the code to accommodate the changes? Most probably, yes! So when is automation good for acceptance testing? When the test suite is properly designed, but that's a story for another day.
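Suite design is a story for another day, as I said, but as a small taste of what "proper design" can buy you, one common approach (my example, not something this post prescribes) is the Page Object pattern: each page's locators live in exactly one class, so a UI change touches one file instead of breaking every test. A minimal sketch, again with hypothetical element ids:

```python
# Page Object sketch: tests talk to LoginPage methods instead of poking
# at element ids directly, so locator changes are contained in one place.
from selenium.webdriver.common.by import By

class LoginPage:
    URL = "http://localhost:8000/login"  # hypothetical URL

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)
        return self

    def login(self, username, password):
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()

# When the login form changes, only LoginPage changes, not the tests
# that call page.login(...) as a step in larger scenarios.
```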

Let’s assume the happy scenario where the test suite was properly designed and planned for. Does the automated suite help now? Of course it does, with maximum return on investment. Consider manual testing again: even when it is done systematically, testers rarely have the time to cover all the scenarios and rules mentioned above; in fact, they barely cover half of them. Manual testers rely on intuition in acceptance testing, built from their experience testing the system. The automated test suite has no gut feelings, but it is much faster. That speed lets an automated suite cover more test cases in acceptance testing, thus increasing the return on investment of the acceptance testing. If we assume that the time taken to automate the acceptance test suite (and by that I mean the additions to the suite, not the core) plus the time taken to run and analyze the automated tests is the same as the time taken to do acceptance testing manually, the ROI will be higher than that of manual testing because the coverage will be higher, giving more confidence to the team and the customer. Mission accomplished! (Again)
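To put rough numbers on that last claim, here is a back-of-envelope sketch. Every figure below is invented purely for illustration; only the structure of the argument comes from the post:

```python
# Back-of-envelope ROI comparison under the post's assumption that
# automating (suite additions + run + analysis) costs the same time as
# a manual acceptance pass. All numbers are made up for illustration.
HOURS = 40              # assumed equal time budget for both approaches

manual_cases = 120      # cases a manual pass covers in that budget
automated_cases = 300   # cases the faster automated suite covers

print(f"manual:    {manual_cases / HOURS:.1f} cases/hour")
print(f"automated: {automated_cases / HOURS:.1f} cases/hour")
# Equal time spent, higher coverage: the automated pass wins on ROI.
```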

5 comments:

  1. Most people think that automation testing does away with manual functional testing, but I'm saying that automation is a step that comes after manual functional testing.

  2. Do you mean acceptance testing in particular? I think acceptance testing can be 90% automated, and the remaining 10% can be handled by whichever developer is deploying live. They should be aware that there may be some UI glitches that will not be caught by the automated suite.

  3. No, I mean what comes before acceptance testing; integration testing, for example.

  4. I think if testing is well planned, analyzed, and designed, a lot of the pitfalls you mentioned will be handled automatically, because the test cycle is supposed to be a cumulative process: a set of stages, each dependent on the success of the previous one, each starting where the previous stage ended rather than from scratch. Otherwise it would be a waste of time, money, and effort.

  5. Automation can never replace manual testing in integration, except in very specific parts where it is just about testing how different modules work together. And it's infeasible and useless to automate negative scenarios.
