The Software Test Engineer's Handbook, 2nd Edition: A Study Guide for the ISTQB Test Analyst and Technical Test Analyst Advanced Level Certificates 2012
Ebook · 932 pages · 10 hours

About this ebook

Many books cover functional testing techniques, but relatively few also cover technical testing. The Software Test Engineer's Handbook, 2nd Edition fills that gap. Authors Graham Bath and Judy McKay are core members of the ISTQB Working Party that created the new Advanced Level syllabi for Test Analyst and Technical Test Analyst. These syllabi were released in 2012.

This book presents functional and technical aspects of testing as a coherent whole, which benefits test analyst/engineers and test managers. It provides a solid preparation base for passing the exams for Advanced Test Analyst and Advanced Technical Test Analyst, with enough real-world examples to keep you intellectually invested.

This book includes information that will help you become a highly skilled Advanced Test Analyst and Advanced Technical Test Analyst. You will be able to apply this information in the real world of tight schedules, restricted resources, and projects that do not proceed as planned.

Language: English
Publisher: Rocky Nook
Release date: Jun 12, 2014
ISBN: 9781492014737
Author

Graham Bath

Graham Bath's experience in testing spans over 25 years and has covered a wide array of domains and technologies. As a test manager, he has been responsible for the testing of mission-critical systems in spaceflight, telecommunications, and police incident control. Graham has designed tests to the highest levels of rigor within real-time aerospace systems such as the Eurofighter military aircraft. As a principal consultant for the T-Systems Global Delivery Unit "Testing Services," he has mastered the Quality Improvement Programs of several major companies, primarily in the financial and government sectors. In his current position, Graham is responsible for the company's training and test-consulting programs. Graham is co-author of the ISTQB Expert Level syllabus, Improving the Test Process. He is a long-standing member of the German Testing Board and is chairman of the ISTQB Expert Level working group.


    The Software Test Engineer's Handbook, 2nd Edition - Graham Bath

    1 Introduction

    It was a dark and stormy project ... No wait, that’s the beginning of another book, although it does accurately describe some test projects that seem to be perpetually in a crisis with management in the dark—but we’ll save that for later.

    This book is designed to serve two purposes. First and foremost, it is a useful book full of techniques and practice exercises that will make you, the advanced tester, successful in the real world. Second, it covers everything you need to know to successfully complete the exam for the ISTQB Advanced Test Analyst certification and the ISTQB Advanced Technical Test Analyst certification. In this first chapter we explain the objectives we set out to achieve and the basic layout of the chapters. After that, we explore some fundamental questions: what does the word advanced mean in the context of tester certification and what is the role of the test analyst and technical test analyst?

    One note of clarification: The term test engineer is in the title of this book. Test engineer, in most but not all countries, is the title given to the senior, most technically adept tester. In deference to areas where this term might have a different meaning, ISTQB decided to use the terms test analyst (less technically inclined and more business oriented) and technical test analyst (more technically inclined, probably with a strong development background as well as a strong testing background). We have adopted the use of test analyst and technical test analyst throughout this book to keep the terminology consistent with the ISTQB.

    1.1 Structure of the Book

    The ISTQB Advanced Test Analyst and the ISTQB Advanced Technical Test Analyst syllabi have been created as separate documents in the 2012 issue. This permits a clear structure for the book as follows:

    1.2 Requirements for This Book

    We established some fairly tough requirements for this book. Before we launch into the actual content of domain and technical testing itself, we’d like to give you a brief overview of those requirements. This will help you understand the general approach we have taken.

    As the authors, we require that the book be both complete and readable.

    1.2.1 Completeness

    This book is based on the ISTQB Advanced Level syllabi (2012) and covers everything you will need to know to successfully sit for the examinations for test analyst and technical test analyst. You can also use the information in this book to become a very good, very employable test analyst.

    1.2.2 Readability

    The book’s not just about covering the Advanced syllabi.

    When writing a book based on a predefined syllabus, it’s easy to fall back into a style that focuses on syllabus coverage alone. Of course, syllabus coverage is essential, but too often this results in a rather dry, definition-oriented style with all kinds of fancy fonts and symbols to indicate specific parts of the syllabus. We don’t want this. We want you to have a book that gives you syllabus coverage and is readable.

    We intend to make this book readable by adopting a particular style and standardized approach to each chapter:

    Technical content

    After a brief introduction, we list the terms that are mentioned in the chapter. The definitions of these commonly used industry terms are found in our mini-glossary in appendix A. And, speaking of industry terms, you will find we use the terms bug and defect interchangeably. Again, being practitioners in the industry, we tend toward the more commonly used terms.

    We then present the actual technical content of the chapter. The learning objectives of the ISTQB Advanced syllabi don’t focus on just learning and repeating, they are meant to help you apply what you have learned and provide reasoned arguments for your choices. To that end, we go beyond the information provided in the syllabus and add more descriptive material to give you a more well-rounded level of knowledge.

    Let’s be practical

    We use a realistic, complex, real-world example application.

    Most chapters include a section called Let’s Be Practical to help you to further understand and assimilate the information provided. It’s also a chance to get away from the textbook style that unfortunately prevails with syllabus-oriented books, so this section should also appeal to those of you who are not necessarily focused on the ISTQB syllabi alone.

    We will refer to our example application Marathon for this section (see Chapter 2 for a description). This realistic example is based on a real-world system and appears throughout the book to provide a consistent view of the many testing issues covered.

    Experience reports and lessons learned

    We, the authors, have gained a wealth of experience in our testing careers, and we’d like to share some of these experiences with you. As often occurs in life, things don’t always go according to the book; these experiences may therefore help us realize that being a certified tester doesn’t always guarantee success, mainly because reality doesn’t politely conform to theory! Look for these gray-shaded blocks throughout the book.

    Who is talking? Generally speaking, if the chapter relates to the test analyst, it’s Judy, and if it relates to the technical test analyst, it’s Graham. So now you know who I is when we relate experiences, lessons learned, and things we’d rather forget.

    Exercises

    At the end of each chapter we give you some multiple-choice questions to test your knowledge. You will not, of course, find these exercises in the ISTQB examination (that would be too easy!).

    1.3 What Does Advanced Mean?

Saying that you are an advanced anything can be like waving a red rag in front of a bull. A typical reaction might be "OK, wise guy; let's see if you can solve this one." Faced with this kind of challenge, the testing professional should be able to explain what it means to be an advanced tester. Here are a few quick replies for you to have ready, just in case:

    Advanced testers have chosen a career path in testing, having already successfully become an ISTQB certified tester at the Foundation Level.

    They have demonstrated both theoretical and practical testing skills to an internationally recognized high standard.

    They have gained experience in testing projects.

    They can fulfill the role of test manager, test analyst, or technical test analyst in a project.

    They recognize that we never stop learning and improving.

    They have (therefore) more chances of being, and staying, employed.

    Testing professionals benefit from speaking a common testing language.

    Just one other (occasionally controversial) point on the issue of certification: Being certified at an advanced level doesn’t actually guarantee anything. There are plenty of good testers about who are not certified. However, having certification does demonstrate that you have achieved a high standard of testing professionalism and that you are likely to speak a common testing language with others in the testing world. In a global IT industry where many testing projects are spread over several countries, this is a very big plus.

    By the way, we, the authors, are Certified Testers at the Advanced Level in all three roles (and proud of it). We are also leading the way with the Expert Level syllabus development. The major organizations we work with have embedded the certified tester schemes into their career development plans and consider this to have been highly successful as a staff motivator and in achieving satisfaction for their customers.

    In addition to the certification aspect of this book, it is also packed full of good, useful information that an advanced tester will find valuable. So, regardless of whether you think certification is the right thing for you, we think you will benefit from learning, practicing, and applying the information provided.

    1.4 What Is a Test Analyst?

    Defining a role at the international level is not easy. Often countries, even different companies within the same country, have different names for a role or have a slightly different understanding of what a person with a particular role should do. There is no one reason for this—it’s usually just the way things developed.

    At the Foundation level, the ISTQB improved the situation somewhat by introducing the roles of test manager (which can also be referred to as test leader) and tester.

    The test analyst adds specialization to the tester role.

    At the Advanced level, the ISTQB continued this standardization trend by establishing the role of test analyst. Essentially, the test analyst should be able to do all of the tasks of the tester defined in the ISTQB Foundation syllabus. However, the test analyst adds specialization to the tester role, and it’s this specialization that we address in this section.

    What would be expected of a test analyst? At the highest level, an employer would expect an advanced test analyst to have the ability to do the following:

    Perform the appropriate testing activities based on the software development life cycle being used

    Determine the proper prioritization of the testing activities based on the information provided by the risk analysis

    Select and apply appropriate testing techniques to ensure that tests provide an adequate level of confidence, based on defined coverage criteria

    Provide the appropriate level of documentation relevant to the testing activities

    Determine the appropriate types of functional testing to be performed

    Assume responsibility for the usability testing for a given project

    Effectively participate in formal and informal reviews with stakeholders, applying knowledge of typical mistakes made in work products

    Design and implement a defect classification scheme

    Apply tools to support an efficient testing process

    Support the test manager in creating appropriate testing strategies

    Structure the testing tasks required to implement the test strategy

    Perform analysis on a system in sufficient detail to permit appropriate test conditions to be identified

    Apply appropriate techniques to achieve the defined testing goals

    Prepare and execute all necessary testing activities

    Judge when testing criteria have been fulfilled

    Report on progress in a concise and thorough manner

    Support evaluations and reviews with evidence from testing

    Implement the tools appropriate to performing the testing tasks

In general, the test analyst has a good understanding of the test manager’s role and an appreciation of the fundamental principles of test management. This includes the ability to understand requirements and appreciate different forms of risk.

    Two specific types of test analysts are defined.

The test analyst position is further divided into two roles according to the Advanced syllabi and industry practices. Both roles share the generic requirements outlined earlier but apply them in different testing contexts. In broad terms, the technical test analyst serves more of a technical function, whereas the test analyst takes a more business-oriented approach.

    The technical test analyst can do the following:

    Recognize and classify the typical risks associated with the performance, security, reliability, portability, and maintainability of software systems

    Create test plans that detail the planning, design, and execution of tests for mitigating performance, security, reliability, portability, and maintainability risks

    Select and apply appropriate structural design techniques to ensure that tests provide an adequate level of confidence, based on code coverage and design coverage

    Effectively participate in technical reviews with developers and software architects, applying knowledge of typical mistakes made in code and architecture

    Recognize risks in code and software architecture and create test plan elements to mitigate those risks through dynamic analysis

    Propose improvements to the security, maintainability, and testability of code by applying static analysis

    Outline the costs and benefits to be expected from introducing particular types of test automation

    Select appropriate tools to automate technical testing tasks

    Understand the technical issues and concepts in applying test automation
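
The coverage criteria mentioned in this list can be made concrete with a small sketch. The function and its fee rules below are invented purely for illustration; the point is that a test set can achieve 100 percent statement coverage while still leaving branches untested:

```python
# Hypothetical sketch: why statement coverage can miss what branch coverage finds.
# The function and the test values are invented for illustration.

def classify_fee(age):
    """Return the race fee for a runner (invented rules)."""
    fee = 50
    if age < 18:
        fee = 25          # junior discount
    if age >= 65:
        fee = 0           # seniors run free
    return fee

# Calls with ages 10 and 70 together execute every statement,
# so these two tests already give 100% statement coverage ...
assert classify_fee(10) == 25
assert classify_fee(70) == 0

# ... yet full branch coverage also requires the FALSE outcomes of both
# decisions, which only a mid-range value exercises:
assert classify_fee(30) == 50
```

A technical test analyst choosing between these criteria is really choosing how many of these decision outcomes the test set must force.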

    2 Example Application, Marathon

    Testing concepts are usually easier to understand when applied to a realistic project. We have created a fictitious application that we will use to illustrate the various techniques and types of testing covered in this book. The application, called Marathon, is typical of many systems we find today in that both functional and non-functional testing will be required.

    As you would expect from a book pitched at the ISTQB Advanced level, the example is sufficiently complicated to provide realistic test scenarios; however, the effort you put into understanding the Marathon system will be rewarded later by a more thorough appreciation of specific testing issues in the context of a realistic application.

    At various stages in this book, we will expand on the general description of Marathon provided in this chapter (this simulates the scope creep we all experience!) so that particular points can be covered in more detail.

    Having said that, don’t expect the design of the Marathon system to be absolutely watertight in all respects (the authors are, after all, testing experts, not system architects). Should you find holes or inconsistencies in the design, well done; you’re thinking like an advanced tester already!

    2.1 Overview of Marathon

    Essentially, the system allows organizers of major marathon events (e.g., Boston, London) to set up and organize the races efficiently using modern-day technology.

Take a look at figure 2-1. What do you see? You probably noticed our durable marathon runner. You probably also noticed that the Marathon system is actually made up of a number of independent hardware and software components that work together to make the complete application (the arrows represent major data and control flows). Furthermore, some of the software components are standard products (sometimes referred to as commercial off-the-shelf, or COTS, systems), some are to be developed in-house, and some have been contracted out for development.

    Figure 2–1

    The Marathon system

    For the sake of simplicity, the diagram doesn’t even touch on the technical architecture used, but we can be sure that a mix of different platforms (clients, servers, and operating systems), communications protocols, databases, and implementation languages is used. In short, it’s typical of the kind of system we testers have to deal with in the real world.

    We’ll be meeting our intrepid marathon runner throughout the book in the Let’s Be Practical sections. For now, though, let’s take a closer look at the functional requirements and outline how the system is used.

    2.2 General Requirements

    The Marathon application is designed to provide the following features:

    An administration system for the organizers of the race

    A registration system for runners

    A system for sponsoring runners

    Timely and accurate information to runners and the media

    A reporting system for runners and media

    A help desk for all those with questions about the race

    An invoicing system that allows sponsor money and runner fees to be invoiced

    The system needs to be capable of handling up to 100,000 runners and 10,000 sponsors for a given race without failing. It must be possible to handle up to five races each year.

    2.3 Use of the Marathon System

    The Marathon system provides support prior to the race, after the race, and, of course, during the race itself. These principal activities are shown in figure 2-2 (which isn’t to scale).

    Figure 2–2

    Phases supported by the Marathon system

    Let’s now look at how the Marathon system is used.

    Runners and sponsors register.

    Before each race, the system is used for registering runners and sponsors.

    Runner registration starts four weeks before the race commences and lasts for one week. As soon as the registration week starts, a runner may register for the race using an Internet application. Anyone can register for the race, but the maximum number of participants (100,000) may not be exceeded. A first come, first served principle is used.

    At the end of the runner registration week, the system administrator starts the invoicing system so that invoices can be issued to all accepted runners for race fees.

    Sponsor registration then takes place over the next three weeks. Sponsors register via the Internet application and can select any runners they wish to sponsor.

    Response time to the registering runners and sponsors must never exceed eight seconds from the time the Submit button is pushed to the time the confirmation screen is displayed.
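
A requirement like this maps naturally onto an automated timing assertion. A minimal sketch, in which `submit_registration` is a hypothetical stand-in for the real Submit-to-confirmation round trip; only the 8-second budget comes from the requirement above:

```python
import time

MAX_RESPONSE_SECONDS = 8.0  # budget taken from the Marathon requirement

def submit_registration():
    """Hypothetical stand-in for the real Submit-to-confirmation round trip."""
    time.sleep(0.01)  # simulated processing
    return "confirmation"

start = time.perf_counter()
result = submit_registration()
elapsed = time.perf_counter() - start

assert result == "confirmation"
assert elapsed <= MAX_RESPONSE_SECONDS, f"response took {elapsed:.1f}s"
```

Because the requirement says "must never exceed," a real performance test would measure the maximum (not just an average) under representative registration load.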

    All information concerning runners and sponsors is held in a database, which is used as a central source of information for all other software components in the Marathon system.

    Race information can be viewed via the Internet application prior to the race and specific questions are handled via a Customer Relations Management (CRM) system with a help desk.

    Ready, set ...

    During the race, the system tracks the position of each runner.

    The tracking is enabled using a strap-on run unit carried by each runner. This unit receives position information via GPS satellite and transmits a runner’s position as a Short Message Service (SMS) message every minute.

    Heavy loads are handled during the race.

    A communications server receives the SMS messages sent by the run units, constructs position records from them, and writes them to a position database.
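
The record-construction step can be sketched as follows. The SMS payload format used here ("runner_id;lat;lon;hh:mm") is an assumption, since the book does not specify the actual protocol:

```python
# Sketch of building a position record from an incoming SMS payload.
# The payload format "runner_id;lat;lon;hh:mm" is invented for illustration.

def parse_position_sms(payload):
    runner_id, lat, lon, timestamp = payload.split(";")
    return {
        "runner": int(runner_id),
        "lat": float(lat),
        "lon": float(lon),
        "time": timestamp,
    }

record = parse_position_sms("4711;51.5074;-0.1278;10:42")
assert record["runner"] == 4711
assert record["lat"] == 51.5074
```

A tester would also probe malformed payloads (missing fields, non-numeric coordinates), since the communications server must cope with lossy SMS transmission.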

    A cost calculation system calculates cost records for sponsors using their entered details and the current position of runners they have sponsored. It is assumed that not everyone is going to finish the race, but they can still receive sponsorship money for the distance they cover.
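
The distance-based rule above can be sketched with a simple per-kilometre pledge model. That pricing model is an assumption; the book does not specify how sponsor amounts are actually computed:

```python
# Hedged sketch of the cost calculation. The per-kilometre pledge model
# is an assumption, not the system's specified pricing rule.

def sponsor_cost(pledge_per_km, km_completed):
    """Amount owed by a sponsor for the distance a runner actually covered."""
    return round(pledge_per_km * km_completed, 2)

# A runner who drops out at 30 km still earns sponsorship for 30 km:
assert sponsor_cost(2.50, 30.0) == 75.0
# A finisher of the full 42.195 km marathon distance:
assert sponsor_cost(2.50, 42.195) == 105.49
```

Note how the rule "non-finishers still earn money for the distance covered" becomes a test condition: the calculation must not assume the full race distance.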

A reports generator generates an online race report for the Internet application every 10 minutes and also constructs individual runner reports every minute. These individual reports are currently prepared as emails and sent to the communications server for transmission. Runners may then receive and read them during the race on their smartphones.

    The email method is already known to be unpopular with runners, so a future extension is planned where the reports generator also provides the individual reports via SMS messaging. The communications server will then be able to send these messages directly to the run units for display.

    After the race, the final reports are created and financial aspects are finalized.

    The reports generator creates an end-of-race report for publishing via the Internet application containing final positions and various race details, such as the number of starters and the number of finishers, weather, oldest and youngest runner, and so on. The preliminary report is generated one hour after the first runner has crossed the finishing line and is updated five hours later (when the race is declared officially over).

    Collect the sponsoring money.

    The invoicing application is started by the administrator one day after the race. This application reads records from the cost database and prepares invoices for sponsors according to the runners they sponsored and the distances those runners completed. Completed invoices are sent via email to the sponsor.

    Invoices are provided as hard copy only by special request and are sent to a postal service for dispatch (manual system).

    Payment receipt is an outsourced function that is not covered in our application.

    The help desk stays open to handle queries and complaints.

    2.4 Availability of the Marathon System

    The system must be available 24/7 during the runner registration week, during the sponsor registration weeks, and on race day itself.

    After race day, all data must be available to the help desk/customer relations system between 08:00 and 20:00 local time for a week, after which the data must be archived for at least two years.

    2.5 Caveats about Marathon

    As you can see, there are interesting testing challenges associated with this project. But, be aware that in order to be sure we are applying our testing techniques to a realistic situation, we reserve the right to complicate this project with late change requests. Welcome to the real world!

    3 Types of Systems

    The test analyst and the technical test analyst need to understand the types of systems they are dealing with and how they might affect the testing approach. They also need to understand the overall test process and what their contribution will be at each step. In addition, a good understanding of risk-based testing and risk management in terms of project and product risk is an asset for any test analyst.

    3.1 Introduction

    Testing strategies are influenced by the type of system under test.

    The types of systems we may need to test are many and varied. Each represents different levels of risk that may lead to particular testing strategies being proposed. In a book on test analysis, a full coverage of specific types of systems and their architectures would be inappropriate. However, certain specific types of systems are described in the following sections because they have significant and direct influence on the software quality characteristics to be addressed in testing strategies. We’ll consider the following system types:

    Systems of systems

    Safety-critical systems

    Real-time and embedded systems

    3.1.1 Systems of Systems

    Today, we are frequently involved in testing systems of systems. As you will see from the points discussed in this section, the very nature of such systems represents a particular challenge for all those with testing responsibilities.

    The architecture that makes up a system of systems features several individual components that themselves may be considered systems. These cooperate to provide benefit to a particular stakeholder (e.g., business owner). The components of the overall system of systems typically consist of various software applications or services, communications infrastructure, and hardware devices. These may themselves be driven by software applications.

    The Marathon example is a system of systems.

    Systems of systems are developed using a building block concept. Individual component systems are integrated with each other so that entire systems can be created without having to develop applications from scratch. A system of systems frequently makes use of reusable software components, third-party applications, commercial off-the-shelf (COTS) software, and distributed business objects.

    On the upside, this concept may result in cost reductions for the development organization, but there is a downside when you consider the cost of testing, which may increase substantially. Why is this?

    High levels of complexity

    Complexity is inherent in systems of systems. This arises from a number of sources, including system architectures employed, the different software life cycle development models that may be used for individual application development efforts, and complex compatibility issues of both a technical and functional nature (i.e., do the building blocks actually fit together?). Testing professionals know that complexity is a major driver of product risk; where we have high levels of complexity we generally expect there to be more defects in the product, both from a functional (domain) and a non-functional (technical) perspective.

    The time and effort needed to localize defects

    Within a system of systems, the localization of defects can be a technical and organizational challenge. It may take a long time and considerable effort to localize defects since the testing organization typically does not have complete access to all system components. As a result, it may simply not be possible to perform detailed analysis or set up monitors where we would like to.

    System integration tests play a critical role.

    More integration testing may be required

    Whereas the development of an individual system normally calls for an integration testing stage, with systems of systems we have an additional layer of integration testing to perform at the intersystem level. This testing level, which is often called system integration testing, may require the construction of simulators to compensate for the absence of particular component systems.
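
A simulator of the kind mentioned above can be as simple as a canned-data stand-in for the missing component. The `CostDatabaseSimulator` interface below is invented for illustration, loosely modeled on Marathon's cost database:

```python
# Sketch of a simulator standing in for an absent component system during
# system integration testing. The interface and records are invented.

class CostDatabaseSimulator:
    """Minimal stand-in for a real cost database, returning canned records."""

    def __init__(self, records):
        self._records = records

    def fetch_records(self, sponsor_id):
        return [r for r in self._records if r["sponsor"] == sponsor_id]

sim = CostDatabaseSimulator([
    {"sponsor": 1, "amount": 75.0},
    {"sponsor": 2, "amount": 40.0},
    {"sponsor": 1, "amount": 12.5},
])

# The system under test (e.g., invoicing) can now be integration-tested
# against the simulator instead of waiting for the real component:
assert [r["amount"] for r in sim.fetch_records(1)] == [75.0, 12.5]
```

The trade-off is fidelity: the simulator only behaves as specified, so defects in the real component's behavior remain invisible until it is delivered.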

    Who’s in charge here?

    Higher management overhead

    More effort often results from managing the testing among the many organizational entities involved in developing a system of systems. These could include various product suppliers, service providers, and many supplier companies that are perhaps not even involved in the project directly. This may give rise to a lack of a coherent management structure, which makes it difficult to establish ownership and responsibilities for testing. Test analysts need to be aware of this when designing particular tests such as end-to-end tests of business processes. For example, when a user initiates a transaction, the technical and organizational responsibility for handling that transaction may change several times and may be completed on systems that are totally outside the control of the originating organization.

    Lack of overall control

    Because we may not always have control over all system components, it is common for software simulations to be constructed for particular component systems so that system integration testing can be performed with some certainty. For the same reasons, the test manager will also need to establish well-defined supporting processes such as release management so that the software can be delivered to the testing team from external sources in a controlled manner. Test analysts will need to work within the framework of these supporting processes so that, for example, tests are developed to defined releases and baselines.

    Many of the characteristics exhibited by a system of systems are present in our Marathon example application:

    Individual components such as the customer relations management system can be considered systems in their own right.

    System components consist of various software applications (e.g., invoice system) and software-driven hardware devices (e.g., run unit).

    Two of the applications used (the customer relations system and invoicing system) are COTS applications that may not have been used together in a system of systems like Marathon before. This highlights the need for system integration testing.

    3.1.2 Safety-Critical Systems

    A safety-critical system is one that may endanger life or lead to other severe losses in the event of failure. Normally the criticality of a project is estimated as part of the project’s feasibility study or as a result of initial risk management activities. The test analyst and technical test analyst must be aware of how the project’s criticality has been assessed and, in particular, whether the term safety-critical applies.

    Safety-critical systems require more rigorous testing.

    The strategies we apply to testing safety-critical systems are generally comparable to those discussed throughout this book. For safety-critical systems, though, it is the higher level of rigor with which we need to perform test tasks that shapes our testing strategies. Some of those tasks and strategies are listed here:

    Performing explicit safety analysis as part of the risk management

    Performing testing according to a predefined software development life cycle model, such as the V-model

    Conducting failover and recovery tests to ensure that software architectures are correctly designed and implemented

    Performing reliability testing to demonstrate low failure rates and high levels of availability

    Taking measures to ensure that safety and security requirements are fully implemented

    Showing that faults are correctly handled

    Demonstrating that specific levels of test coverage have been achieved

    Creating full test documentation with complete traceability between requirements and test cases

    Retaining test data, results, or test environments (possibly for formal auditing)

    Industry standards often apply to safety-critical systems.

    Often these issues are covered by standards that may be specific to particular industries, as in the following examples:

    Space industry

The European Cooperation for Space Standardization (ECSS) [URL: ECSS] recommends methods and techniques depending on the criticality of the software.

    Food and drug industry

    The US Food and Drug Administration (FDA) recommends certain structural and functional test techniques for medical systems subject to Title 21 CFR Part 820.

    Aircraft industry

    The international Joint Aviation Authorities (JAA) defined the levels and type of structural coverage to be demonstrated for avionics software, depending on a defined level of software criticality.

The test manager will convey the level of safety criticality of the system and software under test and whether particular standards need to be applied. We must ensure that the tests we design comply with any such standards and that we can support the test manager by demonstrating compliance, not only within the testing project but possibly also to external auditors.

    3.1.3 Real-Time and Embedded Systems

    In real-time systems, there are usually particular components present whose execution times are critical to the correct functioning of the system. These may be responsible, for example, for calculating data at high update rates (e.g., 50 times per second), responding to specific events within a minimum time period, or monitoring processes.

    Embedded systems are all around us.

    Software that needs to function in real time is often embedded within a hardware environment. This is the case with many everyday consumer items, such as mobile phones, and also in safety-critical systems, such as aircraft avionics.

    Real-time and embedded systems are particularly challenging for the technical test analyst:

    We may need to apply specific testing techniques to detect, for example, race conditions.

    We will need to specify and perform dynamic analysis with tools (see section 15.2, Dynamic Analysis).

    A testing infrastructure must be provided that allows embedded software to be executed and results obtained.

Simulators and emulators may need to be developed, and themselves tested, before they can be used during testing (see section 23.6.7 for details).

    4 Test Management Responsibilities for the Test Analyst

Test management is what test managers do, but they can’t do it without adequate, correct, and current data. They also need input to guide a risk-based testing approach. But beyond providing input, the test analyst is sometimes also expected to work in environments that require excellent communication skills and techniques. Managing test projects is a team task, and only when there is collaboration will a project be managed successfully to completion.

    Terms used in this chapter

    product risk, risk analysis, risk identification, risk level, risk management, risk mitigation, risk-based testing, test monitoring, test strategy

    4.1 Introduction

    The test analyst is a major contributor of data. With all that time we spend documenting what we do, it’s nice to know that the data goes somewhere. How many times do you sigh when you are going to enter a defect report because you know you have to fill out a bunch of fields? Do you wonder if anyone ever uses that information? Join the group! I don’t know of a single tester who has not complained about the amount of documentation they need to do (unless they don’t do any documentation, but that’s usually a different problem). So let’s agree—documentation is a necessary evil, or maybe not evil, but at least a bit of a pain.

    In this chapter we’ll look at why we are spending our time tracking data, how this valuable information is used, and why it really does matter (it’s not just because your manager says you have to do it).

    4.2 Monitoring and Controlling a Project

Test projects are monitored to determine if they are progressing as expected. Sometimes they are better, sometimes they are worse, but rarely is a project exactly on track for the projected schedule. Estimation techniques are plentiful, but in the end, each project is unique and will have its own blend of issues and victories. A good test analyst, and a good test manager, is suspicious when a project is going smoothly and is right on schedule. This could be because we basically don’t trust people, or it could be because we are realistic in our expectations. Trust me on this one. Beware the project that is right on schedule.

So, enough of the negativity. Let’s look at the types of data that are tracked and how they are used for test monitoring. It’s an important part of the test analyst’s job to track and report accurate information, and it’s more motivating when you understand how and why that data is used and how your information can influence the course of a project.

    4.2.1 Product (Quality) Risks

    All software is inherently risky. It is complicated, it runs on many environments and configurations, it has requirements that may not be well understood ... the list just goes on and on. If we could always test everything, we’d never need to prioritize because everything would get tested. Of course, we’d probably never ship anything because we’d never be done, but it’s a happy thought. In the real world, though, we won’t have time to test everything, and we have to determine what should be tested and how thoroughly it should be tested.

As part of a risk-based testing approach, the test analyst is expected to be involved in identifying product risks, assessing those risks, and mitigating the appropriate risks. Consistent with the understanding that we can’t do exhaustive testing, we also can’t possibly achieve complete coverage of all risks. We can achieve only a measure of coverage on the identified risks. The measure of coverage is determined by the understood level of risk for an item, the desired level of coverage, and the time available. It’s good to start a risk analysis exercise with the understanding that we need to prioritize because some items will get only minimal coverage, some may get no coverage at all, and some will receive all the attention. The trick is to figure out which is which. Risk management is usually a three-step process, in which first the risks are identified, then they are assessed, and then the mitigation plan is determined.
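To make the three steps concrete, here is a minimal sketch (in Python, with hypothetical field names and ratings invented purely for illustration) of how a product risk might be recorded when it is identified and then filled in as it is assessed and a mitigation is planned:

```python
from dataclasses import dataclass

@dataclass
class ProductRisk:
    # Hypothetical structure; real risk registers vary by organization.
    description: str
    likelihood: int = 0   # assessed later: 1 (very low) .. 5 (very high)
    impact: int = 0       # assessed later: 1 (very low) .. 5 (very high)
    mitigation: str = ""  # e.g., which test cases or reviews cover it

# Step 1: identify the risks (illustrative examples from this chapter)
risks = [
    ProductRisk("Mortgage rate calculations may be inaccurate"),
    ProductRisk("Tab order on the order-entry screen is wrong"),
]

# Step 2: assess likelihood and impact
risks[0].likelihood, risks[0].impact = 2, 5  # unlikely but catastrophic
risks[1].likelihood, risks[1].impact = 4, 2  # likely but annoying

# Step 3: determine the mitigation plan
risks[0].mitigation = "Design accuracy tests against reference data"
risks[1].mitigation = "Cover in usability test charter"
```

The point of recording all three pieces together is traceability: anyone reading the register can see not just what was feared, but how seriously it was rated and what is being done about it.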

    Identifying Risk

    Identifying risk is all about finding the scary stuff.

    Remember when you were a little kid (or maybe not so little) and there were monsters hiding in your closet? And you didn’t want to look because they might jump out at you, so you hid under the magical covers because they could protect you from the monsters? Let’s equate this to risk in software. The risks are the monsters. The closet is the mass of code you are testing and the project framework. Opening the closet door is the equivalent of identifying and assessing the risks. Our test cases and test techniques provide the magic covers. But, unlike the magic covers that can deflect all monsters, our test cases may let some risks escape undetected. Risk-based testing is not a perfect solution, but it does give us a way to deal with the most important risks with the highest priority.

We want to identify risks so we can call them out, review them, prioritize them, and determine what to do with them. The best risk identification effort includes a broad set of stakeholders, each contributing their own unique viewpoint. Technical support people will perceive different risks than developers, and operations people will see yet another set of risks. As test analysts, we bring our knowledge of the software, the domain, the customer, and other projects to bear to help identify as many risks as possible. We can conduct interviews with domain experts and users to better understand the environment for the project. We can conduct independent assessments to help evaluate and identify potential risks. We can conduct risk workshops and brainstorming sessions to gather input from the users (or potential users) regarding their areas of concern and likely areas of risk. We can use risk templates to help record the risks that we are able to uncover. We can even use testing checklists that have proven useful in the past to help us focus on areas that have traditionally been risky. And, of course, we can leverage our experience with past projects that were similar in some manner to the project being evaluated. Problems tend to recur, so using past projects as inputs is a valid way of identifying risks for future projects.

    Business risk is the focus for the test analyst. The technical test analyst will focus on the technical risk items. Our concern is to look for items that would affect the ability of the customer to accomplish their business goals. These items can cover a large range of testing areas. For example, a problem with the accuracy of calculations that are used to compute mortgage rates would be catastrophic for a bank or a mortgage company. An accuracy problem in the order quantity could be a major problem for a small e-commerce company. Usability issues such as having the wrong tab order on fields or having software that is difficult to learn could be a major issue for a customer who is known for producing products that are user friendly and generally easy to learn.

Risks can be everywhere in the software. It’s important to remember not to focus solely on the functionality. I’ve certainly encountered software that was, to me, unusable because the interface was so difficult. High risk? They lost me as a customer, so I would think that would be a fairly high-risk issue (assuming they valued me as a customer, that is).

    Assessing Risk

    Once we have the risks identified, we have to study them, categorize them, and rank them. This is usually done by assessing the likelihood of the risk being realized (the risky event actually occurring) and the impact to the customer if it does. Likelihood can be very difficult to assess. You have to consider if the issue could be detected in the test environment and if it would be detected in the production environment if it occurred. Some risks are only evident in one of the environments. For example, a problem with network latency might exist in the test environment but not in production because the network is configured differently in the production environment. It’s important to differentiate between risks that will occur in both environments and risks that will happen in only one. If a problem is a risk only in the test environment, how important is it? If it could stop all testing, then it’s very important. If it’s a usability issue, it might not matter (for example, if your monitors in the test environment can’t display the window correctly but the monitors used in production can). What happens when a problem can be seen only in the production environment? How do you test for it? These types of problems can be tricky to find. Sometimes having a mitigation strategy is the only way to deal with these types of issues.

Likelihood of occurrence is usually driven by the technical risk and is typically assessed by the technical test analyst, although the test analyst can certainly contribute to understanding the impact to the business if the risk should actually be realized. Impact can be difficult to assess, though; the impact of an issue is interpreted as a risk to the business. The good news is that we, as test analysts, have a good understanding of the business and the domain, so we are particularly well suited to assess the impact to the business.

    There are a number of factors that influence the business risk assessment. Let’s use the example of software that controls traffic signals and a risk that the software could provide a green signal to all parts of the intersection at the same time. Ugly thought, isn’t it? How would we assess each of these areas?

    Frequency of use—This would be high because our signals are used at high traffic intersections.

    Business loss—Would this company be likely to get any future contracts to furnish traffic signals? I hope not!

    Potential financial, ecological, or social losses or liability—What would happen to the company that furnished these faulty signals? Lawsuits? Civil action? Perhaps even criminal negligence?

    Civil or criminal legal sanctions—Same as the preceding item.

    Safety concerns—This is a safety-critical device. It is likely that people would be hurt, perhaps seriously, if a failure of this magnitude occurred.

    Fines, loss of license—It seems likely, doesn’t it? Or at least it should happen.

    Lack of reasonable workarounds—Now we need to talk with the rest of the team. If there is a failure like this, what should happen? Is there safety software built in that will convert the signals to flashing red if this should occur? If so, that may lower the impact of this risk item (assuming the safety software itself is reliable).

    Visibility of the feature—This type of a failure would be very visible, particularly if the safety software also failed. It would be interesting to understand how long the software would be in a failed state until the safety software took over. Would it be enough for one car to get through the intersection in each direction? That could still be a disaster.

    Visibility of the failure leading to negative publicity and potential image damage—I don’t know about you, but I sure wouldn’t want to see this company’s product used again!

    Loss of customers—Yes, definitely an issue.

There is usually a classification or rating that can be assigned to each risk item. This could be expressed in words (very high, high, medium, etc.) or as a number. It’s important to remember that a true quantitative rating is difficult to determine without a lot of information (for example, the people who do life insurance risk calculations are backed up by huge amounts of data, so they can come up with a truly quantitative rating). The rest of us use numbers, but those are still a qualitative evaluation; the numbers usually reference a qualitative rating such as 1 = very high risk.

Once the numbers have been assigned for the business risk (impact) and the technical risk (likelihood), they can be added or multiplied to determine the total risk, or they may be treated separately to be sure both levels are addressed adequately. If you have a risk that would be catastrophic if it occurred (say, it drained your bank account) but is highly unlikely, what would the overall rating be? If you are multiplying or adding the numbers, this risk would get the same total as a risk that is highly likely (the columns on a report don’t align) but has minimal impact (the report is still readable). There are risk-based testing models, such as PRISMA [van Veenendaal 12], that address the two risk ratings separately and may be better at dealing with this type of situation.
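The weakness of collapsing the two ratings into one number can be shown in a few lines. This sketch uses hypothetical 1-to-5 scales; the values are invented purely for illustration:

```python
# Two very different risks on hypothetical 1-5 scales.
catastrophic_but_unlikely = {"likelihood": 1, "impact": 5}  # drained bank account
likely_but_minor          = {"likelihood": 5, "impact": 1}  # misaligned columns

def total(risk):
    # A common combining rule: likelihood x impact.
    return risk["likelihood"] * risk["impact"]

# Multiplication makes the two risks indistinguishable...
assert total(catastrophic_but_unlikely) == total(likely_but_minor) == 5

# ...whereas keeping the ratings separate (as PRISMA does) preserves
# the distinction that matters for test prioritization.
assert catastrophic_but_unlikely["impact"] > likely_but_minor["impact"]
```

This is why some teams plot likelihood and impact on two axes of a risk matrix rather than reducing them to a single score.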

    Mitigating Risk

    Now that we have our risks identified and assessed, what should we do about them? We have a number of choices, but generally the first one we should investigate would be to see if we can design test cases that will test for the risk and be able to prove that it either does or does not exist. This may include reviewing the requirements and design documentation, checking the user guides, and even making sure that code reviews have happened. If we can test for the risk, this is a form of mitigation. If we can find an identified risk, we should be able to get it fixed, thus reducing the risk. Even better, if we can test and prove that the risk does not exist, we can stop worrying about it. The better our testing coverage of the risk areas, the more risk mitigation we can perform.

    Testing is not the only option for risk mitigation. There may be activities that are defined in the test plan or the test strategy that will help with risk mitigation. These could be conducting reviews at defined stages, making sure our test data is representative, making sure our configurations and environments are realistic, and a host of other items.

    Don’t forget to take a look at the identified risks as the project progresses. You may have tested for a particular risk and determined that it doesn’t exist. But what if major code changes occur? What if there has been a re-architecture? Retesting may be needed to be sure a risk stays mitigated. We may also decide that something that was classified as a low risk is actually a high risk. More information may be available that was not available when the initial assessment was done. Conversely, we may find that something we had labeled as high impact would actually not be high impact (for example, we might determine that a part of the software will have only very low usage). Risks are not static. They change. They need to be reevaluated periodically to be sure the rating is still accurate.

    What happens when you find something that was not considered a risk but clearly is? You need to get it added to the risk assessment and be sure it is evaluated, and, if needed, form a mitigation plan.

    If testing is going to be the action taken to help mitigate the identified and assessed risks, it’s important to prioritize the testing. This has to occur because we live in a world where sufficient time for testing rarely exists. The goal of prioritization is to address the most important risks as early as possible. This may seem obvious, but once the testing starts and features are arriving and defects are thwarting progress, it’s easy to lose sight of the overall priorities. Having traceability from the risk item to the test case(s) can help to provide visible tracking of the mitigation effort, and we all know that pretty charts make management happy—particularly charts that have lots of green on them!

Ideally, the test cases that cover the higher-risk items should be executed before the test cases for the lower-risk items. This is sometimes called the depth-first approach because testing goes deeply into the functionality based on the risk levels. Depth-first testing helps to mitigate all the high-risk items first, followed by the medium and then the low ones. If the risk prioritization has been done correctly, this option provides the most complete and targeted risk mitigation. It might also make sense to use a sampling approach across all the risk areas, called the breadth-first approach. This approach lets the tester use risk to weight the selection of tests while still making sure that every risk (regardless of rating) is tested at least once, so broader coverage is achieved. Breadth-first testing helps to identify risks that might not have been correctly classified (given too low a risk level, for example).
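The two orderings can be sketched as follows; the test case names and risk levels are hypothetical (1 being the highest risk):

```python
from itertools import groupby, zip_longest

# Hypothetical test cases tagged with the risk level they cover (1 = highest).
tests = [("TC-01", 1), ("TC-02", 3), ("TC-03", 1), ("TC-04", 2), ("TC-05", 3)]

# Depth-first: exhaust the highest-risk level before moving to the next.
depth_first = sorted(tests, key=lambda t: t[1])

# Breadth-first: take one test per risk level in rotation, so every risk
# gets some coverage early while higher levels still come first each round.
by_level = [list(g) for _, g in groupby(depth_first, key=lambda t: t[1])]
breadth_first = [t for rnd in zip_longest(*by_level) for t in rnd if t]

print([t[0] for t in depth_first])    # ['TC-01', 'TC-03', 'TC-04', 'TC-02', 'TC-05']
print([t[0] for t in breadth_first])  # ['TC-01', 'TC-04', 'TC-02', 'TC-03', 'TC-05']
```

Notice that breadth-first reaches the level-3 test TC-02 as its third execution, whereas depth-first would not reach any level-3 test until all higher levels were done.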

    Time to start planning for that maintenance release!

    But what if you run out of time? That’s a realistic view, because we often do run out of time to do all the testing. If you are in a time-critical environment, this is probably why you are using risk-based testing in the first place. When a risk-based approach is used with good traceability from risk items to test cases, as time runs out it is easy to see how much risk has been mitigated and what risk items are still left. This provides the information the decision makers need to determine if more testing time should be allocated or if the residual risk is at an acceptable level.
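As a sketch of how that traceability might feed a residual-risk report when time runs out, consider the following; the risk items, test case names, and pass statuses are all hypothetical:

```python
# Hypothetical traceability from risk items to their covering test cases.
traceability = {
    "R-01 (high)":   ["TC-01", "TC-03"],
    "R-02 (medium)": ["TC-04"],
    "R-03 (low)":    ["TC-02", "TC-05"],
}

# Tests that have actually been executed and passed when the clock stops.
executed_and_passed = {"TC-01", "TC-03", "TC-04"}

# A risk is residual if any of its covering tests has not yet passed.
residual = [risk for risk, tcs in traceability.items()
            if not set(tcs) <= executed_and_passed]

print(residual)  # ['R-03 (low)'] — only a low-rated risk remains unmitigated
```

A report like this gives the decision makers exactly what the text describes: how much risk has been mitigated and which risk items are still open, so they can decide whether to extend testing or accept the residual risk.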

As was mentioned, risks can change during the life of a project. So can the risk strategy. For example, you may have decided at the onset of the project that you need 20 test cases for each high-risk item. But as you proceed with testing, you are not making the progress that was expected. As a result, it may make sense to cut that down to 10 test cases for each high-risk item to allow more risk items to be covered. You also have to consider any additional risks that have been discovered and areas of the software that have had more changes than were anticipated. Defect fixes can introduce risk. I have had a few memorable conversations with developers when I’ve asked about the testing needed for a particular fix. One particularly memorable one was when the developer just shook his head, looked sad, and said, “You’d better just retest everything. The change was intrusive, and even I’m not sure what might have been affected.” This is just not what you want to hear, but it does give an indication that there is now a large risk area that was not anticipated.

    When looking to see if you need to change your risk approach for the next cycle of testing, you might also want to review the types of defects that have been found so far. Are you seeing defect clusters that might indicate particularly risky areas? Are you seeing fixes that are introducing more problems? And, as long as you’re looking at the metrics, you should also check for areas where the test coverage is insufficient. This can be because parts of the code were not available on schedule, certain configurations or hardware were not available, or perhaps just that the testing schedule itself is slipping. Areas that have been undertested are inherently risky because they may be harboring potential schedule-affecting defects.

    Experience Report: But I only changed one character!

Speaking of risk, can you trust the developer to adequately identify the risk of a change or the risk in a particular area of the code? It depends on the developer, of course, and the situation, but I would advise proceeding with caution. I worked with a very senior developer who had architected part of the system we were testing. He had identified an amazing performance improvement that he felt should definitely be included in the release. His change was approved. I sat down with him to discuss the change and he assured me that it was only one character that he had changed. He meant this to mean that it was a low-risk change, but I also knew this meant it would be an easy change to reverse if it didn’t work. We installed the change. And booted the system. And nothing happened. So we turned it off and booted it again. Nothing. I retrieved the developer and he repeated our steps of turning off the server and booting it again. Still nothing. He sat there for a minute looking puzzled and then uttered the immortal words, “Hmmm. I must have changed the wrong character.”

    A one-character change. Low risk?

    4.2.2 Defects

    Defects are probably the most common items tracked in a testing organization. Defect data can tell us what is breaking, what is working, how the project is progressing, how efficient our process is, and loads of other information. This information is usually tracked via a defect management tool, which may or may not share data with the test management tool. Defect information is generally grouped into metadata (screen shots, narrative description information) and classification data (individual fields used to report, sort, and manage the data). More information on defect data can be found in Chapter 12.

    Test analysts spend a lot of time finding and documenting defects. It’s important that the information entered for a defect is as accurate and informative as possible since this information will feed into trending reports that will be used to make decisions regarding the project. This information is also
