Paul Hammant's Blog: WinRunner Best Practices
I was quite surprised to find that Google had no entries at all for WinRunner best practices. So I guess this is the first. The only one, I'd guess, for a short while :-)
The first observable fact is that WinRunner test script creation is not normally a development deliverable. Developers typically throw completed changes over the fence to testers. The testers typically keep their tests in a different place from the developers' Java/JSP or C#/ASP source. Thus the developers are not in the habit of running WinRunner test suites or (shock, horror) modifying them after a refactoring. As a consequence, testing remains a high science that developers shun, and the software world is not much closer to true one-click builds when using WinRunner.
OK, so Brian Marick asserts (correctly) that developers and testers should get a lot closer to each other to increase the agility of software delivery. In this posting I detail some of the things that I've learned about the technical aspects of a closer working relationship. That people have to change too is left for others to discuss. I'd like to credit Kevin Raymen and John Englezou (at my previous client site, a large energy trading company) and Dawn C at my current client site. For constant encouragement and a significant amount of contribution during the attempts to prove the actual worth of these ideas, I'd name Neil Fletcher as my co-conspirator.
Use real source control
In reality that means the same tool the development team has chosen. In fact it means the same depot/module. The proprietary version control built into the more enterprise versions of the dominant graphical testing tools is simply wrong (as is all proprietary source/version control).
So, in order to use an arbitrary source control package (Subversion and Perforce are the pinnacle, IMHO), we must understand how WinRunner scripts can be stored to suit that vision. Here is a view of part of a directory structure containing three tests:
    SomeLogicalDir/
        SomeTestOrOther/
            db/
                crvx.asc
                tm.asc
            debug/      - this dir needed, even if empty
            exp/        - this dir needed, even if empty
            header
            script      - the actual test script
        AnotherTest/    (etc)
        YetAnotherTest/ (etc)
'SomeTestOrOther' (and the other two tests): rename as appropriate for your test. Please, no more Test01 through Test99. Names should be representative of what they are testing. That way, someone perusing the file system with Windows Explorer can easily see what is what.
There are two empty directories that need to exist for WinRunner to run. Some source control packages will delete a directory if it is empty, so place a dummy.txt file in each to stop this. The other files listed are strictly necessary. The file that actually does the business is script (more later). If you check the above in to source control and ignore all the other extraneous files (including the artifacts of each run), you'll be in a position where others can do a perfect check-out/sync to get the latest version of each test in the suite.
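WinRunner itself lays these files down when you save a test, but the layout is simple enough to scaffold or verify with a small script. Here is a sketch in Python (the function name scaffold_test and the use of dummy.txt placeholders are my own illustration, not anything WinRunner provides):

```python
import os

def scaffold_test(root, test_name):
    """Lay down the skeleton WinRunner expects for one test.

    debug/ and exp/ must exist even if empty; a dummy.txt in each
    stops source control packages from pruning the empty directory.
    """
    test_dir = os.path.join(root, test_name)
    os.makedirs(os.path.join(test_dir, "db"), exist_ok=True)
    for empty_dir in ("debug", "exp"):
        d = os.path.join(test_dir, empty_dir)
        os.makedirs(d, exist_ok=True)
        open(os.path.join(d, "dummy.txt"), "w").close()
    # header and script are the files WinRunner actually reads
    for f in ("header", "script"):
        open(os.path.join(test_dir, f), "a").close()
    return test_dir
```

Run against a checkout, a script like this makes it obvious whether a colleague's sync gave them a runnable test.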
Test Data and Script Separation
It is very common nowadays for people to separate test scripts and test data. It seems the EMOS Framework is often used for this. To me, though, this keeps the whole realm of testing in the voodoo world from the developer's point of view. Because EMOS uses Excel spreadsheets, people find source control difficult (merge a binary file, anyone?). EMOS is something I actively discourage. However, one of the problems it was trying to solve is worthy: separating test data from scripts.
What we now feel is best practice is the use of INI files for data, and a simple library to pull fields from them into scripts. In a spike that we kicked off at my current client (an internet bank), we proved that it would all work. Dawn C pointed out that we did not need to code a properties-file mechanism as there was already an INI file technique available. She also suggested that teams may prefer CSV, as it is a better representation for repeating rows of data. Perhaps the sweet spot is a mixture of both. In any case, what we are left with is a textual form that is mergeable, if need be, in an arbitrary source control package.
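In WinRunner the library would be written in TSL; as a language-neutral sketch of the idea, here is the same separation in Python. The section names, field names, and sample values are all invented for illustration; a real suite would read these from files checked in next to the scripts:

```python
import configparser
import csv
import io

# Invented sample data: an INI section for single-valued fields...
INI_DATA = """
[login]
username = test_user_01
password = s3cret
"""

# ...and CSV for repeating rows, as Dawn C suggested.
CSV_ROWS = """account,amount
12345678,100.00
87654321,250.50
"""

def load_test_data(ini_text):
    """Pull INI sections into a plain dict for scripts to use."""
    parser = configparser.ConfigParser()
    parser.read_string(ini_text)
    return {section: dict(parser[section]) for section in parser.sections()}

def load_rows(csv_text):
    """Pull repeating rows of data out of CSV, one dict per row."""
    return list(csv.DictReader(io.StringIO(csv_text)))
```

Both forms are plain text, so a merge tool (or a grep) works on them just as it does on the scripts themselves.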
It is perfectly possible to record scripts using the standard tools, then work on them to separate data from scripts and to reuse existing functions. Yes, you may well build libraries of functions for often-repeated things like login. Teams may choose to hand-craft all scripts, or to refactor a recorded script.
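In real WinRunner such a library would be a compiled module of TSL functions calling set_window, edit_set and button_press. Purely as a sketch of the shape, here it is in Python, with a fake driver object standing in for the GUI layer (all names here are hypothetical):

```python
def login(driver, data):
    """Shared function: scripts call login() with their test data
    rather than re-recording the same keystrokes in every script."""
    driver.set_field("username", data["username"])
    driver.set_field("password", data["password"])
    driver.press("OK")

class FakeDriver:
    """Stand-in for the GUI automation layer, so the sketch is runnable.
    It just records the actions a real driver would perform."""
    def __init__(self):
        self.actions = []

    def set_field(self, name, value):
        self.actions.append(("set", name, value))

    def press(self, button):
        self.actions.append(("press", button))
```

The point is the division of labour: the recorded or hand-crafted script shrinks to a sequence of calls into the library, with the data coming from the INI/CSV files described above.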
What we are rolling out here leverages the following best practices:
- Source under standard source control
- Text is better than binary for mergeability and search
- Separation of data and function
- Separation of different tests (modularity)
With these steps, we are close to the developers' way of working. If you have a WinRunner installation for all staff (or one of those n-users-at-a-time licenses), your developers can be in a position to join in with the writing of WinRunner tests. As such, Brian's vision of developers and testers pairing might be more likely to be realized. Implicitly, we are also suggesting that TestDirector and Mercury's other enterprise offerings are not the correct path. That last point might get me into some trouble, so I'll state that it is my own claim; it is not the opinion of any clients of mine nor of my employer, ThoughtWorks.