CUSTOMER SUCCESS BRIEF

Beyond Pass/Fail: Striking the Right Balance between Automated and Manual Testing

08-24-2021

Modern DevSecOps-driven test automation shifts defects left, cuts time to deployment, and enables continuous integration/continuous delivery, but it is only as good as the requirements and scripts that drive it. Testing maturity is not just a function of how much you have automated; the best test programs focus instead on how human testers optimize the use of both automated and manual techniques to maximize the value of testing in reaching product goals.

In the early days of software development, engineers tested their code manually by repeatedly entering inputs and checking the outputs against expected results. Today, development teams are increasingly adopting DevSecOps to improve both the speed and reliability of testing through automation. Test automation acts as a resource multiplier, allowing development teams to test higher volumes of code while reducing the potential for human error. Despite these advantages, test automation also has limitations and cannot replace all human intervention. Consequently, the most mature testing practices recognize the strengths of automated testing (its capacity to save time and effort while reducing human error) while also understanding its shortcomings in areas where testing cannot be automated.

 

Mature software testing methods are not an either-or choice between automated and manual testing, but a both-and decision, harnessing the strengths of each.

 

Automated testing and test coverage limitations

Fundamentally, both manual and automated testing rely on a process that parses inputs (i.e., the code being tested) and yields outputs (i.e., whether the code has passed the test, according to set parameters). Automated testing consists of test scripts written to set requirements in order to validate an expected result. For example, a block of code for a smart thermostat might be tested by feeding examples of a room's current temperature as the input and checking that the outputs (e.g., whether to turn on heat or air conditioning) are within an acceptable range. This is sometimes referred to not as automated testing, but as automated checking.
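The following minimal sketch shows what such an automated check might look like in practice. The decide_hvac function, its two-degree threshold, and the test cases are hypothetical illustrations for this article, not code from a real thermostat product.

```python
# Minimal sketch of an automated check for the hypothetical thermostat
# example above. The function name and thresholds are illustrative
# assumptions, not a real product API.

def decide_hvac(current_temp_f: float, setpoint_f: float = 70.0) -> str:
    """Return 'heat', 'cool', or 'off' based on the current temperature."""
    if current_temp_f < setpoint_f - 2:
        return "heat"
    if current_temp_f > setpoint_f + 2:
        return "cool"
    return "off"

def test_decide_hvac():
    # Each case pairs an input temperature with the expected output,
    # mirroring the input/expected-output checking described above.
    cases = [(60.0, "heat"), (70.0, "off"), (80.0, "cool")]
    for temp, expected in cases:
        assert decide_hvac(temp) == expected, f"{temp}F should yield {expected}"

if __name__ == "__main__":
    test_decide_hvac()
    print("All checks passed")
```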

While testing code against set requirements and expected outcomes is essential, it does not paint a complete picture of a product’s readiness for release. Further, test coverage is an imperfect indicator of testing efficacy: it is impossible to express test coverage as the number of tests run vs. the number of possible tests because, theoretically, there is an infinite number of possible tests that can be run. Test coverage is therefore an expression of tests run vs. either the number of tests planned or the number of requirements that must be tested. Both are important metrics, but both have gaps. If the number and types of tests planned do not fully encompass the core functionality of the code, or if the requirements being tested are poorly expressed, it is possible to have 100% test coverage, in which all tests yield passing results, while still having broken or suboptimal code.
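As a simple illustration of the two finite coverage metrics described above, the hypothetical numbers below show how both can read 100% while saying nothing about requirements that were never captured or were poorly expressed.

```python
# Illustrative coverage calculations using hypothetical numbers.
# Coverage must be expressed against a finite denominator (tests planned
# or requirements under test), never against "all possible tests".

tests_run = 120
tests_planned = 120
requirements_total = 45
requirements_with_passing_tests = 45

planned_coverage = tests_run / tests_planned
requirement_coverage = requirements_with_passing_tests / requirements_total

print(f"Planned-test coverage: {planned_coverage:.0%}")   # 100%
print(f"Requirement coverage:  {requirement_coverage:.0%}")  # 100%
# Both metrics read 100%, yet neither can reveal a missing or poorly
# expressed requirement -- the gap the surrounding text describes.
```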

 

A truly mature testing practice goes beyond automated testing to incorporate a broader notion of what testing seeks to achieve.


 

What mature testing should do

Effective and mature testing practices go beyond merely checking inputs against expected outputs in an automated fashion. To expand test coverage past the essentials, developers should design tests that actively seek to replicate adverse scenarios to determine how the system responds. This can take several different forms, whether as small tests — such as deliberately inputting invalid characters into a field — or large ones, such as testing the effect of a mass submission of data in a short window of time. Such stress tests go past an analysis of how the program operates under ideal conditions, yielding crucial data about whether the system can withstand edge-case scenarios. Testing for adverse scenarios is also a key component of identifying security vulnerabilities, in which testers deliberately simulate attacks to determine where weaknesses might lurk. While it can be difficult to identify security gaps within an incomplete product or application, proactively testing for security issues at the code level can drastically reduce the risk of issues post-deployment.
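The sketch below illustrates both scales of adverse-scenario testing described above. The validate_username validator, the submission stand-in, and the burst size are hypothetical assumptions for demonstration; a real large-scale test would target a staging environment with a dedicated load-testing tool.

```python
# Sketch of adverse-scenario tests at two scales: invalid input at the
# field level, and a burst of submissions in a short window. All names
# and limits here are hypothetical.
import string
from concurrent.futures import ThreadPoolExecutor

def validate_username(name: str) -> bool:
    """Hypothetical validator: 3-20 chars, letters/digits/underscore only."""
    allowed = set(string.ascii_letters + string.digits + "_")
    return 3 <= len(name) <= 20 and set(name) <= allowed

def test_rejects_invalid_input():
    # Small adverse test: deliberately feed invalid characters.
    for bad in ["", "ab", "a" * 21, "user name", "x'; DROP TABLE--", "j\u0000n"]:
        assert not validate_username(bad), f"should reject {bad!r}"

def submit_record(record: dict) -> bool:
    # Stand-in for a real submission endpoint on a staging system.
    return validate_username(record.get("user", ""))

def test_mass_submission_burst():
    # Large adverse test: many submissions in a short window, checking
    # that every request is handled rather than dropped.
    records = [{"user": f"user_{i}"} for i in range(5000)]
    with ThreadPoolExecutor(max_workers=32) as pool:
        results = list(pool.map(submit_record, records))
    assert all(results) and len(results) == 5000

if __name__ == "__main__":
    test_rejects_invalid_input()
    test_mass_submission_burst()
    print("Adverse-scenario checks passed")
```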

Additionally, developers should consider the root cause behind expected events that do not occur during testing. For example, when testing a secure single sign-on system that is supposed to email users a reminder to update passwords every 90 days, testers should closely examine why some or all reminder emails failed to send. Automated testing alerts developers that an expected outcome failed to occur, but it does not always tell them why it failed. For this reason, developers need to conduct root cause analysis (RCA) to understand the full context of the failed test and ensure effective code remediation. Furthermore, because written requirements determine test parameters, the requirements themselves can be sources of error. As mentioned above, poorly expressed requirements result in poorly designed tests, and testers should examine whether captured requirements accurately gauge product efficacy as part of quality assurance practices.
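As a sketch of how a test can capture context for RCA rather than a bare pass/fail, the example below checks a 90-day reminder policy like the one above and records the full details of any mismatch. The should_send_reminder function and the user records are illustrative assumptions, not a real system's logic.

```python
# Sketch of a test that preserves diagnostic context for root cause
# analysis when an expected event fails to occur. Policy and data are
# hypothetical, mirroring the single sign-on example above.
from datetime import date, timedelta

REMINDER_INTERVAL_DAYS = 90

def should_send_reminder(last_password_change: date, today: date) -> bool:
    """Hypothetical policy check: remind every 90 days."""
    return (today - last_password_change).days >= REMINDER_INTERVAL_DAYS

def test_reminders_due_are_flagged():
    today = date(2021, 8, 24)
    users = [
        {"id": 1, "last_change": today - timedelta(days=91), "expect": True},
        {"id": 2, "last_change": today - timedelta(days=90), "expect": True},
        {"id": 3, "last_change": today - timedelta(days=89), "expect": False},
    ]
    failures = []
    for u in users:
        actual = should_send_reminder(u["last_change"], today)
        if actual != u["expect"]:
            # Record the full context, not just pass/fail, so a human
            # tester can start root cause analysis from the report.
            failures.append({"user": u["id"], "last_change": str(u["last_change"]),
                             "expected": u["expect"], "actual": actual})
    assert not failures, f"reminder policy mismatches: {failures}"

if __name__ == "__main__":
    test_reminders_due_are_flagged()
    print("Reminder policy checks passed")
```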

 

By assessing requirements for sufficient detail, consistency, and relevance prior to testing, product teams can increase test accuracy and ensure the final result matches the expected outcome.

 

Why manual testing still matters (and why automation makes it better)

Test automation plays a significant role in DevSecOps by enabling product teams to test code more frequently, detect defects earlier, and shorten the overall testing cycle. The DevSecOps pipelines of Halfaker, an SAIC company, incorporate a suite of automated testing tools to execute a variety of tests and flag code for rapid remediation. However, automated testing alone does not equate to mature testing. As shown above, human testers are crucial to a mature testing process and automated testing does not diminish their role. Rather, automated testing enables product teams to free up human testers for tasks that cannot be automated, expanding test coverage and maximizing resource efficiency. Testers can and often do identify product flaws in areas that automated tests cannot verify, such as usability. For example, automated testing could determine whether a single sign-on system denies access after a set number of unsuccessful attempts to log in. However, only manual testing can determine whether the text of system messages is legible and comprehensible to the user, or whether the user interface changes appropriately after multiple incorrect password entries. By applying test automation where appropriate, human testers can focus more attention on areas that automated tests cannot verify.
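The sketch below shows the automatable half of that example: locking an account after a set number of failed logins. The AuthStub class and its three-attempt limit are hypothetical stand-ins for a real authentication system; verifying that the lockout message is legible and helpful would still fall to a human tester.

```python
# Sketch of an automated lockout check for the single sign-on example.
# The AuthStub class and attempt limit are hypothetical.

MAX_ATTEMPTS = 3

class AuthStub:
    def __init__(self, password: str):
        self._password = password
        self.failed_attempts = 0
        self.locked = False

    def login(self, attempt: str) -> bool:
        if self.locked:
            return False
        if attempt == self._password:
            self.failed_attempts = 0
            return True
        self.failed_attempts += 1
        if self.failed_attempts >= MAX_ATTEMPTS:
            self.locked = True
        return False

def test_lockout_after_repeated_failures():
    auth = AuthStub(password="correct-horse")
    for _ in range(MAX_ATTEMPTS):
        assert not auth.login("wrong-guess")
    assert auth.locked
    # Even the right password is refused once locked -- the behavior an
    # automated test can verify. Whether the lockout message makes sense
    # to the user is a question for manual testing.
    assert not auth.login("correct-horse")

if __name__ == "__main__":
    test_lockout_after_repeated_failures()
    print("Lockout checks passed")
```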

 

The sign of a mature testing protocol is not a wholesale reliance on automation; it is understanding that automation has its place in testing and, when used effectively, is a resource multiplier that allows human testers to deliver an optimal product to customers.