By Drew Horn.
This is part two of a three-part series on introducing and building test automation into your application development and deployment pipeline.
In part one of this three-part series, we covered the first steps of introducing automated testing into your software development lifecycle. Now that you’ve worked on codifying manual tests into an automation framework and hopefully secured some quick wins with your first smoke tests, you can continue to boost your confidence in test automation.
Using Test Case Management Tools
The most important competency at this intermediate SDLC stage is establishing a test case management (TCM) system and reporting structure. The key is ensuring that all of your results flow into one place and can be reviewed in a single view. By doing this, you'll have a consistent window into any failures and can easily decide whether or not to deploy.
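To make the "single view" concrete, here is a minimal sketch of normalizing results from different runners into one shared schema. The field names and the `normalize` helper are hypothetical, not from any particular TCM product; real systems (TestRail, Zephyr, etc.) expose their own APIs.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TestResult:
    """Normalized record that both manual and automated runs map to."""
    case_id: str
    status: str        # "passed" / "failed" / "blocked"
    source: str        # "manual" or "automated"
    recorded_at: str

def normalize(raw: dict, source: str) -> TestResult:
    """Map a raw result from any runner into the shared TCM schema."""
    return TestResult(
        case_id=str(raw["id"]),
        status=raw.get("outcome", "failed").lower(),
        source=source,
        recorded_at=datetime.now(timezone.utc).isoformat(),
    )

# Results from different runners land in one list: one source of truth.
results = [
    normalize({"id": 101, "outcome": "Passed"}, source="automated"),
    normalize({"id": 205, "outcome": "Failed"}, source="manual"),
]
failures = [r for r in results if r.status == "failed"]
```

Once everything is normalized, the "should we deploy?" question reduces to a query over one collection rather than a hunt through several dashboards.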
This approach also enables you to merge your manual and automated testing efforts into a single system, with a single source of truth. As automated and manual testing come together (which we will cover later in this article), your practice can scale properly without hitting unnecessary bottlenecks.
With a single TCM in place, you can more effectively build quality protections into your deployment pipeline. Each testing stage in the pipeline (e.g., smoke testing, manual regression, test automation) should have an identified quality gate that determines whether the build continues through the pipeline for additional testing. Implementing quality gates at each stage helps your team recognize build issues earlier, and the earlier build issues are identified, the more cost-optimized your practice will be. This is especially important as you scale your practice and increase test coverage.
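A quality gate can be as simple as a per-stage pass-rate threshold. This is a minimal sketch; the stage names and thresholds below are illustrative assumptions, and real teams tune them to their own risk tolerance.

```python
# Hypothetical per-stage thresholds; tune these to your own pipeline.
GATES = {
    "smoke": 1.00,         # any smoke failure blocks the build
    "regression": 0.98,
    "manual": 0.95,
}

def gate_passes(stage: str, passed: int, total: int) -> bool:
    """Return True if the stage's pass rate meets its quality gate."""
    if total == 0:
        return False  # no results is itself a failure signal
    return passed / total >= GATES[stage]

# 49/50 passing clears the regression gate but not the strict smoke gate.
print(gate_passes("smoke", 49, 50))       # False
print(gate_passes("regression", 49, 50))  # True
```

The point of making the gate explicit in code (or CI configuration) is that "should this build proceed?" stops being a judgment call made differently by each engineer.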
Uniting Manual and Automated Testing
After your test cases have been merged into a single repository and you have assessed which tests should be run manually and which should be automated, it becomes much easier to blend your manual and automated testing efforts and embed them into your deployment pipeline. This is essential for any production-grade QA practice looking to scale.
In general, embedding automated and manual testing together into your deployment pipeline can be seen as a four-step process:
- Define a changeset based on evaluation and goals. What technical changes to your deployment pipeline and/or your manual processes need to be implemented? Which ones should be documented in order to embed all testing into the pipeline? For example, once an automated smoke test is complete, should a QA lead be notified so that they can initiate manual testing? When manual testing is finished, how does a key stakeholder review the results and determine whether the quality gate should allow the build to the next step in the pipeline for additional downstream (possibly nonfunctional) testing? These are the types of questions you need to answer in order to have a concise plan for moving forward.
- Test the solution out-of-band. Changes to your pipeline and processes should also be tested. One way to test your new process without influencing the current workflow is to do it out-of-band. One option is to build a job on your CI server that runs automated regression tests but does not impact the existing pipeline flow. Doing this enables you to review the process and iterate as needed until every team is ready to move a particular process directly inline.
- Educate your team on the process. In any testing scenario there are almost always going to be manual processes involved, which is why it is important to solidify them by training your team throughout every stage: development, integration, staging, production, and feedback.
- Introduce changes into the continuous integration pipeline. Finally, once the changes to the pipeline and processes have been vetted and all teams are trained, you can make the switch.
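The handoff described in the steps above can be sketched as a small orchestration function. Everything here is a stub under stated assumptions: the callables for smoke tests, QA notification, manual-result collection, and the gate stand in for whatever your CI server and TCM actually provide.

```python
def run_pipeline(run_smoke, notify_qa_lead, collect_manual_results, gate):
    """Sketch of the flow above: automated smoke first, then a QA-lead
    notification kicks off manual testing, and a quality gate decides
    whether the build is promoted to downstream testing."""
    if not run_smoke():
        return "blocked: smoke failure"
    notify_qa_lead()                     # e.g., a chat or email hook
    manual = collect_manual_results()    # pulled from the shared TCM
    if not gate(manual):
        return "blocked: manual regression gate"
    return "promoted: downstream testing"

# Stubbed example run with hypothetical results
status = run_pipeline(
    run_smoke=lambda: True,
    notify_qa_lead=lambda: None,
    collect_manual_results=lambda: {"passed": 48, "total": 50},
    gate=lambda r: r["passed"] / r["total"] >= 0.95,
)
print(status)  # promoted: downstream testing
```

Running this logic out-of-band first (as a standalone CI job that consumes real results but never blocks real builds) is exactly the second step above.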
Scaling the Practice
The last component of moving your automation out of the beginner stage is scaling the practice. With the above tools and processes in place, you should feel comfortable adding more automated tests and platforms. As your test matrix grows and the frequency of runs increases, executing tests in parallel becomes a top priority. The challenge is ensuring that the tests your team has created work well when run at the same time. To do this, make your tests as atomic and idempotent as possible: ideally, the state of the application after each test is the same as when it started. If that isn't an option, set up each test so that it relies on its own data. If test data used in one test impacts another, you will have a very difficult time debugging test failures.
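As a minimal illustration of test atomicity, here is a sketch using Python's standard `unittest` module: each test builds its own state in `setUp`, so no test depends on data another test created, and the suite is safe to shard across parallel runners. The `CartTests` class and its shopping-cart scenario are hypothetical.

```python
import unittest

class CartTests(unittest.TestCase):
    """Each test owns its own data, so tests stay atomic and can run
    in any order, or in parallel, without stepping on each other."""

    def setUp(self):
        # Fresh, test-local state; nothing is shared between tests.
        self.cart = []

    def test_add_item(self):
        self.cart.append("widget")
        self.assertEqual(len(self.cart), 1)

    def test_empty_cart_total(self):
        # Passes regardless of whether test_add_item ran first.
        self.assertEqual(len(self.cart), 0)

# Run the suite programmatically.
suite = unittest.TestLoader().loadTestsFromTestCase(CartTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The second test is the tell: if tests shared a cart, its outcome would depend on execution order, which is exactly the debugging nightmare described above.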
If your framework doesn’t support running tests in parallel, you could also set up separate jobs on your CI server to run groups of tests at the same time. This works, but generally adds additional complexity to your pipeline that could instead be captured in your framework.
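If you do go the separate-CI-jobs route, the usual trick is deterministic sharding: each job picks a disjoint slice of the test list based on its index. This is a generic sketch, not tied to any particular CI server; the test names are placeholders.

```python
def shard(tests, job_index, total_jobs):
    """Deterministically split a test list across N parallel CI jobs.
    Sorting first guarantees every job computes the same partition."""
    return [t for i, t in enumerate(sorted(tests))
            if i % total_jobs == job_index]

tests = ["test_login", "test_search", "test_cart", "test_checkout"]
print(shard(tests, 0, 2))  # ['test_cart', 'test_login']
print(shard(tests, 1, 2))  # ['test_checkout', 'test_search']
```

Because the partition is a pure function of the sorted list, no two jobs ever run the same test, and together they cover everything; the complexity cost is that this split now lives in your pipeline rather than in the framework.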
In part three, we’ll see what a mature test automation practice looks like.