After attending several keynotes and technical sessions on software testing, analysis, and review at the StarEast BUG OUT! conference held recently in Orlando, I can confirm that the classical design-test conundrum remains unresolved. To be sure, there are approaches, methods, best practices, and many, many suggestions, but definitely no single, all-inclusive solution.
However, there were several things I immediately liked about the conference. First, the proceedings, including the speakers’ notes and slides, were available online before the conference started, so I could decide how best to spend my time. I also appreciated the layout of the printed conference guide, whose tabs led directly to the relevant section or the agenda. And each page carried only one keynote or session abstract, with more than half of the page ruled for notes: a great way to organize your ideas and comments alongside the session where they occurred.
Teamwork was a theme reinforced throughout many of the sessions. Kirk Lee, a QA manager at Infusionsoft, drew a vertical line to represent the point at which coding begins. Before that point, you can and should challenge and refine every aspect of the proposed project, from the basic assumptions to the detailed architecture. How thoroughly you do this depends on the team’s culture, talent, and experience. The objective is to find as many errors as possible as early as possible.
Lee is a certified ScrumMaster, but he seldom referred to Agile, instead discussing software quality improvement through code reviews, walk-throughs, and pair testing. In contrast, Jeroen Mengerink, a test consultant for the Netherlands-based Polteq Testing Services BV, related Agile to structured methods, which are technically sound but lack the flexibility associated with Agile. Like Lee, Mengerink stressed the importance of the team’s performance in finding optimum solutions to both design and testing, and he suggested that testers should know the structured basics but apply them flexibly.
How could a team practicing Agile techniques interface with a more rigid management structure? Mengerink acknowledged that such an arrangement creates tension, but noted that in projects where metrics determine legal and financial factors, those metrics must be included in the deliverables. He also discussed tools used in risk assessment. As a test consultant, he determines a team’s weak areas and the degree to which those weaknesses constrain the team to follow a generic test approach.
Lee advised that rather than attempting to design specific tests immediately, it is useful to begin by thinking in terms of categories of test cases. He gave examples of factors, such as climate, types of users, foreign users, and accessibility, that could affect software success.
Mind-mapping, an approach to organizing information around a central topic, has found its way into software testing. Because many errors stem from trying to solve the wrong problem, clear and consistent user stories help identify risks and lead to correct test ideas and test cases. The topic of team dynamics appeared in many presentations and in many forms.
For example, in his keynote “Testing in the Age of Distraction: Flow, Focus, and Defocus in Testing,” Zeger Van Hese, founder of the Z-sharp test consultancy, discussed how people learn and the relationship between time and creativity. Testing certainly has basic principles, but creative solutions to problems take time to develop.
Similarly, Randy Rice of Rice Consulting Services explained the nonlinear relationship between classroom learning and applied knowledge. Practicing what you have been taught is the all-important factor. There will be failures along the way, but continuous practice distinguishes professionals in any field.
Rice highlighted pairwise testing. Given a huge number of possible combinations of independent conditions, research has shown that most errors involve only one or two conditions; there is a very low chance that a unique problem involves a combination of three or more factors. This insight collapses an exhaustive suite of millions of tests down to a manageable size.
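To make the reduction concrete, here is a minimal greedy all-pairs generator in Python. This is a sketch only: production tools use more sophisticated algorithms (such as IPOG), and the three factors shown are hypothetical examples, not from Rice’s talk.

```python
from itertools import combinations, product

def pairwise_suite(parameters):
    """Greedily build a test suite covering every pair of values
    across every pair of factors.  `parameters` is a list of value
    lists, one per factor."""
    # All (factor, value) pairs that must co-occur in some test case.
    uncovered = {((i, a), (j, b))
                 for i, j in combinations(range(len(parameters)), 2)
                 for a in parameters[i] for b in parameters[j]}
    suite = []
    while uncovered:
        # Pick the candidate case that covers the most uncovered pairs.
        best, best_covered = None, set()
        for case in product(*parameters):
            covered = {((i, case[i]), (j, case[j]))
                       for i, j in combinations(range(len(case)), 2)}
            covered &= uncovered
            if len(covered) > len(best_covered):
                best, best_covered = case, covered
        suite.append(best)
        uncovered -= best_covered
    return suite

factors = [["Windows", "macOS", "Linux"],   # operating system
           ["Chrome", "Firefox"],           # browser
           ["en", "fr", "ja"]]              # locale
tests = pairwise_suite(factors)
print(len(list(product(*factors))), "exhaustive cases vs",
      len(tests), "pairwise cases")
```

Even on this toy example the suite shrinks below the exhaustive count, and the gap widens dramatically as factors multiply, which is exactly the collapse Rice described.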
IBM’s Allan Wagner, an evangelist for test automation and service virtualization, discussed capabilities IBM now offers following its Green Hat U.K. acquisition. He said that service virtualization provides support for production-like test environments by simulating services that may be missing early in a project. Rather than delay testing, virtualization allows test to begin and be refined as more actual modules become available.
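The core idea behind service virtualization, standing a canned, production-like response in for a backend that does not exist yet, can be sketched with nothing but Python’s standard library. The inventory endpoint and payload below are hypothetical illustrations and have no connection to IBM’s Green Hat tooling.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Canned response for a "virtualized" inventory service whose real
# implementation isn't available yet.
CANNED = {"sku": "A-100", "in_stock": 42}

class StubService(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(CANNED).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet

# Port 0 lets the OS pick any free port; serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), StubService)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Code under test can now run as if the dependency were live.
url = f"http://127.0.0.1:{server.server_port}/inventory/A-100"
reply = json.loads(urlopen(url).read())
print(reply)
server.shutdown()
```

As more real modules come online, stubs like this are retired one by one, which is the incremental refinement Wagner described.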
The last session of the day was a lively round of 11 five-minute keynote addresses. Some speakers highlighted points drawn from their full-length presentations earlier in the day, while others focused on underlying themes, such as Lee Copeland’s message that testing is changing: change with it.
Testing should begin well before there is any software, before there are any test cases, and even before all the user stories are written. Involving test and QA professionals at the beginning of a project helps ensure that the team is solving the right problem in the best way.