About the Lab

Our mission is to:

"Create effective, grounded, timely materials to support the teaching and self-study of software testing, software reliability, and quality-related software metrics."


We are currently (September 2012) working on several projects:

  • BBST Courses (note: BBST is a registered trademark of Kaner, Fiedler & Associates, LLC): We have posted the instructional materials for these courses and they are available to the public for free. You can teach these courses yourself, or modify them and include the modified materials in your own courses, for free. (If you use our materials, with or without modification, please acknowledge our authorship and the support of the National Science Foundation.)

In a corporate environment, a competent testing project probably requires a mix of exploration (learning about the product and its risks; hunting for new types of problems) and confirmatory (aka scripted) testing (checking the program against expectations derived from specifications, past performance of the product, the intentions of the programmer, etc.). These courses focus on testing as a cognitively complex activity. When we introduce students to black box testing, our teaching bias is toward exploration. We focus more on confirmation in our work on programmer (glass box) testing and automated black box testing.

The BBST series was developed primarily by Dr. Cem Kaner and Dr. Rebecca L. Fiedler, with extensive support from the National Science Foundation (grants EIA-0113539, "ITR/SY+PE: Improving the Education of Software Testers," and CCLI-0717613, "Adaptation & Implementation of an Activity-Based Online or Hybrid Course in Software Testing") and the assistance of several colleagues and students. We update the materials frequently, revising the instructional support materials (labs, assignments, and exam questions) every semester. The course videos take longer to revise (a full rework takes about 1500 hours); we are working on an interim revision, which we hope to publish in 2013. Funding for this maintenance comes from Kaner, Fiedler & Associates (President: Rebecca L. Fiedler). Through Kaner, Fiedler, we're working on several follow-up projects; more information on those will appear at bbst.info.

The Association for Software Testing has provided a key testbed for the courses. AST's volunteer instructors offer these courses to AST members at a low cost. Historically, the program has served primarily as a training ground for instructors (it's often been compared to a barber's college), and the frequency of courses is limited. Feedback from AST instructors led to significant improvements in the BBST courses. AST's Education Special Interest Group runs the AST-BBST courses; Kaner and Fiedler led this SIG for about five years. Now that AST has built up a strong BBST infrastructure, including a pool of skilled instructors and reliable web hosting, we've passed the reins to Michael Larsen, who now manages instructor training, course scheduling, and maintenance for AST.

At Florida Institute of Technology, we offer the BBST series as an integrated one-semester course (CSE 3411, SWE 5411). This is a required course for undergraduate and graduate degrees in software engineering. Several other universities also offer courses based on these materials. We're glad to answer professors' questions about designing or teaching such courses.

  • High Volume Test Automation: Imagine testing a program with millions of not-necessarily-powerful tests rather than a small collection of hand-crafted, risk-optimized tests. Both approaches are valuable; they hunt different bugs. We believe the high-volume approaches are better suited to exposing bugs that are hard to replicate, such as problems associated with timing or corrupted memory. We believe these intermittent bugs have resulted in life-critical failures and that there is no "traditional" testing technique to expose them.

We're trying to figure out how to teach people high-volume techniques that go beyond the most simplistic variant (fuzzing).

Fuzzing is the best-known form of high volume test automation. Fuzzing generally involves testing a program with randomized (e.g. mutated) input until it crashes. The first fuzzer that we're aware of was the "EVIL" program, used at HP in the 1960's. The approach was sporadically used in the software industry throughout the 1980's (for example, at Apple). Other high-volume techniques check the program's behavior against more stringent criteria, looking for data corruption, memory corruption, calculation errors, or other non-crash bugs. Doug Hoffman and Cem Kaner saw examples of these in industry in the 1980's and 1990's and began teaching courses on them in 1998.
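To make the idea concrete, here is a minimal sketch of the crash-hunting style of fuzzing described above. This is a toy illustration, not one of our reference implementations; the target binary name is hypothetical:

    import random
    import subprocess

    def random_input(max_len=1024):
        """Build a random byte string to feed to the program under test."""
        return bytes(random.getrandbits(8)
                     for _ in range(random.randint(0, max_len)))

    def save_case(i, data, label):
        """Keep the failing input so the bug can be reproduced later."""
        with open("failure_%d.bin" % i, "wb") as f:
            f.write(data)
        print("iteration %d: %s" % (i, label))

    def fuzz(target_cmd, iterations=1000000):
        """Feed the target random stdin, flagging crashes and hangs."""
        for i in range(iterations):
            data = random_input()
            try:
                result = subprocess.run(
                    target_cmd, input=data,
                    stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
                    timeout=5)
            except subprocess.TimeoutExpired:
                save_case(i, data, "hang")
                continue
            if result.returncode < 0:  # on POSIX: killed by a signal
                save_case(i, data, "signal %d" % -result.returncode)

    if __name__ == "__main__":
        fuzz(["./program_under_test"])  # hypothetical target binary

Even this simple loop illustrates the trade-off: each individual test is weak, but running enormous numbers of them, with crash detection as the only oracle, can expose failures that no small hand-crafted suite would find.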

We've given several talks about this, for example in 2004, 2009, and 2010, but we haven't had much success helping people who are new to the techniques learn how to actually apply them. The talks have been too abstract.

What we're doing now is creating reference implementations: open source examples of the application of these techniques. We'll use these worked examples as the basis of our instruction, initially in a course at Florida Tech in Spring 2013.
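As a sketch of the kind of worked example we have in mind (a toy only, not one of the planned reference implementations; both functions below are hypothetical stand-ins), here is a high-volume function-equivalence test: drive the implementation under test and a trusted reference with a large volume of random inputs and log every disagreement. Unlike crash-only fuzzing, the oracle here can catch calculation errors, not just crashes:

    import math
    import random

    def function_under_test(x):
        """Hypothetical stand-in for the implementation being tested."""
        return math.sqrt(x)

    def reference_oracle(x):
        """Trusted reference implementation used as the oracle."""
        return x ** 0.5

    def equivalence_test(trials=1000000, tolerance=1e-12):
        """Compare the two implementations on a high volume of random
        inputs, recording inputs where they disagree beyond tolerance."""
        failures = []
        for _ in range(trials):
            x = random.uniform(0.0, 1e12)
            expected = reference_oracle(x)
            actual = function_under_test(x)
            if abs(actual - expected) > tolerance * max(1.0, abs(expected)):
                failures.append(x)
        return failures

    if __name__ == "__main__":
        print("%d disagreements found" % len(equivalence_test()))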

  • Software Metrics: We often hear claims that 95% of American software companies (or projects) don't use software metrics to manage their work. Is that because American companies are shiftless, lazy, and undisciplined (maybe they could be properly fixed by a few high-priced consultants)? Or is it because the metrics we routinely use in this field are so poorly validated that they often do more harm than good?

Sadly, these are easy targets. The real challenge is to develop better metrics, and that has proven remarkably difficult. At this point, we're trying to translate research on metric validity (and threats to validity) from the social sciences to software engineering. We're not yet sure what our next steps will be.
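To make the validity concern concrete, consider defect density (bugs found per thousand lines of code), one of the metrics routinely reported in this field. A hedged toy illustration (all numbers invented):

    def defect_density(defects_found, kloc):
        """Bugs found per thousand lines of code."""
        return defects_found / float(kloc)

    # Hypothetical projects, invented numbers for illustration only.
    a = defect_density(defects_found=50, kloc=100)   # 0.5 -- lightly tested
    b = defect_density(defects_found=200, kloc=100)  # 2.0 -- heavily tested

    # The naive reading: project A's code is better than B's. But defects
    # *found* depends on how hard anyone looked: B's higher density may
    # reflect more (or better) testing rather than worse code. The metric
    # measures a blend of product quality and test effort, not quality
    # alone -- a construct-validity problem.
    print("A: %.1f, B: %.1f defects/KLOC" % (a, b))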

  • Graduate Courseware in Software Ethics: The National Science Foundation has awarded us another research contract to develop courseware on ethics for graduate students. We're developing courses on:
    • The law and ethics of reverse engineering
    • The law and ethics of whistleblowing by engineers about product safety or product-quality-related fraud
    • How graduate teaching assistants can deal with plagiarism by their students
    • How graduate students can deal with intellectual property disputes with their faculty supervisors