Presenter: Byron Goodman, National Director of Quality Assurance, Neudesic

Agenda:
• Some thoughts on automation
• Successful test automation
• Components of a framework
• Some coding standards
• Standard routines

Some thoughts on automation

Let the machines do the tests:
• Easy to create alternate scenarios by leveraging existing tests
• Using tools can create more interest in testing activities
• Potential to reduce testing cycle time
• Effective at catching regression bugs

On the other hand:
• Tests break too often!
• Pass or fail results may not be reliable
• Tools are expensive, although there are some good open source solutions
• Tools may oversimplify or overcomplicate the effort required to test applications
• More time can be spent working on the automated tests than would be needed to do the manual testing

And what we see in practice:
• Sporadic interest in implementing automated testing
• Statements by others that automated testing is a sham, ineffective, a waste of time
• Shelving of test automation in the interest of project success

As with all project-related work, the time spent creating automated tests must be offset by the return of value. This is demonstrated in various ways:
• Less time expended during the testing cycle than if manual testing is the only solution
• More thorough and consistent regression coverage
• Reduced human error and boredom
• Frees up people to test new functionality while computers test existing functionality

Successful test automation:
• Strategy – start simple and work toward a more complete and complex solution
• Timing – start using tests as soon as they are written; do not wait for a completed suite
• Buy-in – demonstrate to others that each test does exactly what it is intended to do, so that there is confidence in the results
• Traceability – each automated test must trace back to a specific test case

Beginning automated tests:
• Generally speaking, some automated testing is better than no automated testing
• Select the simplest tests to automate
• Begin by using record and playback to minimize the time investment while showing that the tool is effective

Record and playback:
• For years, I have insisted that no tests will be created using record and playback
• I will admit that there is a place for R&P: very light use to prove out the tool and to minimize the reluctance people may have toward complicated technology
• However, R&P does not offer much gain in efficiency over manual testing

Components of a framework:
• Many of us have experience with testing frameworks
• Other names: DLLs, namespaces, test libraries
• It's just a fancy name for code that is used to automate tests
• A testing framework is a set of routines, reused by tests, that speeds up test development while increasing test reliability

Test framework goals:
• Needs to be given the same rigor as any development effort
• Adds value to the overall test effort
• Implements complex and frequently repeated routines for tests to leverage
• Minimizes the impact of a changing application on the tests, which reduces test maintenance

Layers of a framework:
• Each layer is developed to promote portability and reusability
• The more generic a layer, the more reusable it needs to be – e.g., the generic routines should work for everyone, all the time, regardless of the environments and AUTs (see the sketch below)
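As an illustration of that kind of generic, AUT-independent routine, here is a minimal common-layer sketch. It is hypothetical, not code from the presentation: Python is used only for brevity (the deck's context is .NET), the name wait_until and its parameters are invented, and only the standard library is used. It interacts with no application – it simply adds polling and consistent logging capability that a test tool may not provide.

# Hypothetical common-layer routine: no application knowledge,
# usable by any higher layer regardless of environment or AUT.
import time
import logging

log = logging.getLogger("framework.common")

def wait_until(condition, timeout=30.0, interval=0.5, message="condition"):
    """Poll `condition` until it returns truthy or `timeout` seconds pass.

    Returns True on success, False on timeout, and logs either way,
    so callers get consistent logging without writing their own loops.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            log.info("%s met", message)
            return True
        time.sleep(interval)
    log.warning("%s not met within %.1f seconds", message, timeout)
    return False

# Example use from a higher layer (page_is_loaded is hypothetical):
# wait_until(lambda: page_is_loaded(), message="login page loaded")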
[Layer diagram: Application Test Layer → Application-Specific Layer → Technology-Specific Layer → Generic UI Layer → Common Layer; requests flow down through the layers and responses flow back up]

The layers:
• Common routines that do not interact with any application
• Generic UI layer – non-technology-specific manipulation routines
• Technology-specific layer routines
• Application-specific layer routines
• Application-specific scenario/transactional layer routines
• Tests layer

Common layer:
• Not related to the application or any interface
• Builds capability and utility that the test tool may not have
• Functionality that a tester may need for any reason
• If written properly, these routines will work with any test tool that uses the same programming language

Generic UI layer:
• Some control types, like grids, are generic and largely independent of the technology they are written in
• This is a good place to create additional or alternate logging

Technology-specific layer:
• Handles specific technologies, e.g., browsers, WinForms, and WPF
• Custom controls are wrapped with calls that are identical to those for similar standard controls
• Create interfaces for new versions of a technology if the tool vendor has not yet integrated them

Application-specific layer:
• Routines that are specific to the AUT
• Repository for UI mappings
• Defines a handler for every individual form/page
• AUT service interfaces defined and wrapped
• Serves the calls needed for tests to interact with the AUT

Scenario/transactional layer:
• Use this as a distinct layer if the routines for scenarios are complex and extensive
• May span multiple forms/pages
• In cases where there will not be a lot of routines, this can be combined into the application-specific manipulation routines

Tests layer:
• This layer contains only tests
• No test calls any other test
• Tests call into the other layers
• Tests should be very simple, as all the complexity is implemented in the other layers (see the sketch below)
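The layering can be made concrete with a short sketch. This is a hypothetical illustration, not code from the presentation: the names (GenericUI, LoginPage, set_value, navigate, edit, verify) are invented, and Python is used only for brevity even though the deck targets .NET technologies. The test at the bottom calls only the page handler, never tool syntax, and the handler exposes the navigation, editing, and verification routines described under Standard routines below.

# Hypothetical sketch of the layer separation; all names are illustrative.

class GenericUI:
    """Generic UI layer: technology-agnostic manipulation routines.
    Here it just records values; a real one would drive a test tool."""
    def __init__(self):
        self._values = {}

    def set_value(self, locator, value):
        # Would dispatch on the control type (edit, checkbox, list)
        # discovered at run time; the caller never needs to know it.
        self._values[locator] = value

    def read_value(self, locator):
        return self._values.get(locator, "")


class LoginPage:
    """Application-specific layer: one handler per page/form,
    acting as the repository for that page's UI mappings."""
    CONTROLS = {"UserName": "id=user", "Password": "id=pwd"}

    def __init__(self, ui):
        self.ui = ui

    def navigate(self):
        # Navigation routine: confirm the page is active and ready.
        print("navigate: login page is ready")

    def edit(self, **values):
        # Editing routine: callers pass control names only, never control types.
        for name, value in values.items():
            self.ui.set_value(self.CONTROLS[name], value)

    def verify(self, **expected):
        # Verification routine: read, compare, and log pass/fail.
        for name, value in expected.items():
            actual = self.ui.read_value(self.CONTROLS[name])
            print(f"verify {name}: {'PASS' if actual == value else 'FAIL'}")


def test_login_entry():
    """Tests layer: short, no tool syntax, independent of other tests."""
    page = LoginPage(GenericUI())
    page.navigate()
    page.edit(UserName="qa_user", Password="secret")
    page.verify(UserName="qa_user")


if __name__ == "__main__":
    test_login_entry()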
Some coding standards:
• Using consistent coding standards promotes reusability and maintainability
• Standards need to be documented and followed, just as they would be on a project development effort
• Use code reviews to enforce the standards
• Naming conventions for variables and methods/functions – use human-readable names!
• Limit code complexity
• Separation of layers
• Comments

Control naming conventions:
• Do not use the control type in the name: Address, not AddressEdit
• No duplication of control names within any specific window, even if the tool would normally allow it
• Consistent naming of similar controls from form to form, even if the forms label the controls differently

More coding standards:
• Execute anywhere – tests need to be written so that they do not rely on a very specific environment configuration, unless that is the purpose of the test
• Unattended testing – tests that need human interaction to execute should be segregated so that all the other tests can run unattended
• No hard coding – use configuration files or similar mechanisms to store information and environment values
• In tests, avoid using the automation tool syntax:
  • Tests should call into the other layers
  • This makes tests less likely to fail if the tool vendor modifies the syntax
  • It can also make it easier to port tests to a new automation tool
• Overuse logging
• Design for distributed testing and for concurrent execution of tests
• An automated test covers just a single test case
• No test ever depends upon the successful actions or completion of a previous test
• Computers don't care if they do repetitive actions

Standard routines:
• Baseline (Initialization)
• Navigation
• Editing
• Verification
• Tools already have their own routines defined for some of this; however, you will get more flexibility and maintainability using these patterns

Baseline (Initialization):
• Sets the environment to the proper state to start a test
• This state is defined for each AUT and test team
• Can range from simple to very complex
• Determined by defining a known state from which all tests can reliably execute and get consistent results

Navigation:
• Used to determine if a page/form is active and ready for interaction
• Will navigate to the desired page/form if the steps required are not complex, or the steps will not alter the outcome of any test
• Every page/form has its own unique navigation routine

Editing:
• Enters data into fields, pushes buttons, sets checkboxes, selects values in lists, etc.
• The control types should not matter when calling this routine – it figures out what each control type is during test execution and interacts with it appropriately
• Defined as a unique routine for every individual page/form

Verification:
• Reads data from all control types
• Determines whether the values read result in a pass or a fail
• Responsible for logging results
• Defined as a unique routine for every individual page/form

Topics for further discussion:
• Solution layout
• Actual code examples
• Dealing with particularly difficult controls
• Configuration management
• Skill sets
• Specific tools' capabilities

Summary:
• Beginning automated tests
• Test framework goals
• Layers of a framework
• Coding standards
• Initial routines that every page/form requires

Thanks for your participation
Byron.goodman@neudesic.com