Tuesday, 3 November 2015

Software Test links (Including Agile Testing)

Amazon: Agile Testing: A Practical Guide for Testers and Agile Teams

Brian Marick’s testing quadrant

Adam Tornhill: Software testing in the large

A good essay on how to approach test automation


Test automation at the GUI level

In large-scale projects automated GUI tests are a necessity. The important thing is to restrict the GUI automation to checking the behaviour of the GUI itself. It's a common trap to try to test the underlying layers through the GUI (for example data access or business logic). Not only does it complicate the GUI tests and make them fragile to design changes; it also makes it hard to inject errors into the software and simulate adverse conditions.
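To make this concrete, here is a minimal sketch of a GUI-level test that verifies the behaviour of the GUI itself while the layers beneath are replaced by a hand-written stub. The LoginView and FakeAuthService names are hypothetical, invented for this example:

```python
import unittest


class FakeAuthService:
    """Stands in for the real business/data layers beneath the GUI."""
    def __init__(self, accept):
        self.accept = accept

    def authenticate(self, user, password):
        return self.accept


class LoginView:
    """A toy GUI component: translates user input into calls and state."""
    def __init__(self, auth_service):
        self.auth_service = auth_service
        self.error_label = ""

    def submit(self, user, password):
        if not self.auth_service.authenticate(user, password):
            self.error_label = "Invalid credentials"


class LoginViewTest(unittest.TestCase):
    def test_shows_error_message_on_rejected_login(self):
        # The test verifies GUI behaviour (the error label), not the
        # authentication logic itself - that belongs to lower-level tests.
        view = LoginView(FakeAuthService(accept=False))
        view.submit("alice", "wrong password")
        self.assertEqual("Invalid credentials", view.error_label)


if __name__ == "__main__":
    unittest.main()
```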
However, there are valid cases for breaking this principle. One common case is when attempting to add automated tests to a legacy code base. No matter how well-designed the software is, there will be glitches with respect to test automation (e.g. lack of state inspection capabilities, tightly coupled layers, hidden interfaces, no way to stimulate the system, no way to predictably inject errors). In this case, we've found it useful to record the existing behaviour as a suite of automated test cases. It may not capture every aspect of the software perfectly, but it's a valuable safety net during a redesign of the software.
The test cases used to get legacy code under test are usually not as well-factored as tests that evolve with the system during its development. The implication is that they tend to be more fragile and more prone to change. The important point is to consider these tests as temporary in nature; as the program under test becomes more testable, the initial tests should be removed or evolve into formal regression tests where each test case captures one specific responsibility of the system under test.
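As an illustration, here is a minimal characterization-test sketch along these lines. The legacy_price function stands in for real legacy code, and the expected values are assumed to have been captured simply by running the existing implementation, not derived from requirements:

```python
import unittest


def legacy_price(quantity, customer_type):
    # Placeholder for the real legacy code under test.
    base = 9.99 * quantity
    return base * (0.9 if customer_type == "gold" else 1.0)


class PricingCharacterizationTest(unittest.TestCase):
    # The expected values below record the current behaviour as-is.
    # These tests are temporary by nature: a safety net during redesign.
    def test_recorded_behaviour(self):
        self.assertAlmostEqual(99.90, legacy_price(10, "regular"), places=2)
        self.assertAlmostEqual(89.91, legacy_price(10, "gold"), places=2)


if __name__ == "__main__":
    unittest.main()
```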

Integration defines error handling strategies

In large-scale software development, one of the challenges is to ensure feature and interface compatibility between sub-systems and packages developed by different teams. It's of vital importance to get that feedback as early as possible, preferably on each committed code change. Within this scope, we need to design the tests so that all connections between modules are verified. The tests should anticipate failures and check how one module behaves in case another module fails. The reason is twofold.
First, it's under adverse conditions that the real quality of any software is brutally exposed; we would be rich if given a penny for each Java stack trace we've seen on live systems in trains, airports, etc. Second, by focusing on inter-module failures we drive the development of an error handling strategy. And defining a common error handling policy is something that has to be done early in a multi-team software project. Error handling is a classic example of cross-cutting functionality that cannot be considered locally.
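A minimal sketch of what such a test might look like, assuming a hypothetical ReportService that depends on a storage module: the storage is replaced by a stub that fails, and the test verifies that the dependent module degrades gracefully according to the agreed error handling policy instead of leaking a raw stack trace:

```python
import unittest


class StorageError(Exception):
    pass


class FailingStorage:
    """Stub for the data-access module, used to inject a failure."""
    def load_rows(self):
        raise StorageError("connection lost")


class ReportService:
    def __init__(self, storage):
        self.storage = storage

    def daily_report(self):
        # The agreed error handling policy: catch module-level failures and
        # map them to a well-defined fallback, never propagate raw errors.
        try:
            return {"status": "ok", "rows": self.storage.load_rows()}
        except StorageError:
            return {"status": "degraded", "rows": []}


class ReportServiceFailureTest(unittest.TestCase):
    def test_storage_failure_is_reported_as_degraded(self):
        service = ReportService(FailingStorage())
        self.assertEqual("degraded", service.daily_report()["status"])


if __name__ == "__main__":
    unittest.main()
```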

Simulating the environment

Quite often we need to develop simulators and mock-ups as part of the test environment. Building such mock-ups early tends to expose missing or poorly defined interfaces, especially when mock objects or modules have to be used instead of the real ones. Further, simulators allow us to inject errors into the system that may be hard to provoke when using the real software modules.
Finally, a warning about mock objects based on hard-earned experience. With the increase in dynamic features in popular programming languages (reflection, etc.), many teams tend to use a lot of mocks at the lower levels of testing (unit and integration tests). That may be fine; mocks do serve a purpose. The major problem we see is that mocks encourage interaction testing, which tends to couple the test cases to a specific implementation. This coupling can be avoided, but anyone using mocks should be aware of the potential problems.
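To illustrate the coupling problem, here is a small sketch using Python's unittest.mock. The ShoppingCart and repository classes are hypothetical; the first test pins down how the cart talks to its repository, while the second only asserts observable behaviour:

```python
import unittest
from unittest.mock import Mock


class ShoppingCart:
    """Hypothetical class under test; delegates storage to a repository."""
    def __init__(self, repository):
        self.repository = repository

    def add(self, item):
        self.repository.save(item)

    def items(self):
        return self.repository.load_all()


class InMemoryRepository:
    """A simple fake used for state-based testing."""
    def __init__(self):
        self._items = []

    def save(self, item):
        self._items.append(item)

    def load_all(self):
        return list(self._items)


class CartTests(unittest.TestCase):
    def test_interaction_style_couples_to_the_implementation(self):
        # Breaks if ShoppingCart ever switches to, say, batched saves,
        # even though the observable behaviour stays the same.
        repo = Mock()
        ShoppingCart(repo).add("book")
        repo.save.assert_called_once_with("book")

    def test_state_style_checks_observable_behaviour(self):
        # Survives internal refactorings as long as the contract holds.
        cart = ShoppingCart(InMemoryRepository())
        cart.add("book")
        self.assertIn("book", cart.items())


if __name__ == "__main__":
    unittest.main()
```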

TDD, unit tests and the missing link

A frequent discussion about unit tests concerns their relationship to the requirements, particularly in Test-Driven Development (TDD) [TDD], where the unit tests are used to drive the design of the software. With respect to TDD, the single most frequent question is: "how do I know which tests to write?" It's an interesting question. The concept of TDD seems to trigger something in people's minds; perhaps a suspicion that the design process isn't deterministic. It's particularly interesting since we rarely hear the question "how do I know what to program?", although it is exactly the same problem. As we answer something along the lines that design (as well as coding) always involves a certain amount of exploration, and that TDD is just another tool for this exploration, we get, probably with good reason, sceptical looks. The immediate follow-up question is: "but what about the requirements?" Yes, what about them? It's clear that they guide the development, but should the unit tests be traced to requirements?
Requirements describe the "what" of the software in the problem domain. And as we move deeper and deeper into the solution domain during design, something dramatic happens: our requirements explode. Robert L. Glass identifies requirements explosion as a fundamental fact of software development: "there is an explosion of "derived requirements" [..] caused by the complexity of the solution process" [GLA]. How dramatic is this explosion? Glass continues: "The list of these design requirements is often 50 times longer than the list of original requirements" [GLA]. It is this requirements explosion that makes it unsuitable to map unit tests to requirements; in fact, many of the unit tests arise due to "derived requirements" that do not even exist in the problem space!
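As a hypothetical illustration of such a derived requirement: no stakeholder ever asks for an LRU eviction policy, but once the design introduces a bounded cache, unit tests like the one below become necessary even though they trace to no original requirement. The LruCache class is invented for this example:

```python
import unittest
from collections import OrderedDict


class LruCache:
    """A bounded cache introduced purely by the solution design."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def put(self, key, value):
        self._data.pop(key, None)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict least recently used

    def get(self, key):
        value = self._data.pop(key, None)
        if value is not None:
            self._data[key] = value  # mark as recently used
        return value


class LruCacheTest(unittest.TestCase):
    def test_least_recently_used_entry_is_evicted(self):
        cache = LruCache(capacity=2)
        cache.put("a", 1)
        cache.put("b", 2)
        cache.get("a")       # "a" is now more recent than "b"
        cache.put("c", 3)    # exceeds capacity, so "b" is evicted
        self.assertIsNone(cache.get("b"))
        self.assertEqual(1, cache.get("a"))


if __name__ == "__main__":
    unittest.main()
```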

Avoid test dependencies on implementation details

Most mainstream languages have some concept of private data. In message-passing OO languages these are typically private methods and member data. Even languages that lack direct support for private data (e.g. Python, JavaScript) tend to have established idioms and conventions to communicate the intent. In the presence of short-term goals and deadlines, it may well be tempting to write tests against such private implementation details. Most testers and developers understand that this is the wrong approach, but there's a deeper issue worth spelling out.
Before discussing the fallacies associated with exposed implementation details, let's consider the purpose of data hiding and abstraction. Why do we encapsulate our data, and who are we protecting it from? Well, it turns out that most of the time we're protecting our implementations from ourselves. When we leak details in a design we make it harder to change. We've probably all seen code bases where what we expected to be a localized change turned out to ripple through the system as lots of minor modifications. Encapsulation is an investment in the future. It allows future maintainers to change the how of the software without affecting the what.
With that in mind, we see that the actual mechanism isn't that important; whether it's a convention or a language construct, the important thing is to realize and express the appropriate level of abstraction in our everyday, minor design decisions.
Tests are no different. Even here, breaking the seal of encapsulation has a negative impact on the maintainability and future life of the software. Not only do the tests become fragile, since a change in implementation details may break them; the tests themselves also become hard to evolve, since they now concern themselves with an actual implementation that should have been abstracted away.
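Here is a small sketch of the difference, using a hypothetical Thermostat class where the _history field is private by convention. The first test peeks at the private state and breaks as soon as the representation changes; the second relies only on the public contract:

```python
import unittest


class Thermostat:
    def __init__(self, target):
        self.target = target
        self._history = []  # private by convention: leading underscore

    def record(self, temperature):
        self._history.append(temperature)

    def needs_heating(self):
        return bool(self._history) and self._history[-1] < self.target


class ThermostatTest(unittest.TestCase):
    def test_fragile_peeks_at_private_state(self):
        # Breaks if the internal representation changes, even when the
        # behaviour does not (e.g. storing only the latest reading).
        t = Thermostat(target=20)
        t.record(18)
        self.assertEqual([18], t._history)

    def test_robust_uses_the_public_interface(self):
        # Only the observable behaviour is asserted.
        t = Thermostat(target=20)
        t.record(18)
        self.assertTrue(t.needs_heating())


if __name__ == "__main__":
    unittest.main()
```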
That said, there may well be cases where a piece of software simply isn't testable without relying on and inspecting private data. Such a case is actually valuable feedback, since it often highlights a design flaw; if something is hard to test, we may have a design problem. And that design problem may manifest itself in other usage contexts later. As the typical first user of a module, the test cases are the messenger, and we had better listen to them. Each case requires a separate analysis, but we've often found one of the following flaws as the root cause:
  1. Important state is not exposed - perhaps some state of the module or class should be exposed in a read-only, non-mutating way (e.g. through const accessors or copy-on-write), as in the sketch after this list.
  2. The class or module is overly complicated, with strong coupling.
  3. The interface is too limited to express the essential test cases.
  4. A proper Bridge pattern (or, in C++, the pimpl idiom) is not used to hide private details that should not be visible. In this case it's simply a failure of the API to communicate by separating the public parts from the hidden ones.
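As a sketch of the first point, here is a hypothetical OrderBook that exposes its important state through a read-only query (a tuple copy), so that tests never have to reach into private fields:

```python
import unittest


class OrderBook:
    def __init__(self):
        self._orders = []  # private by convention

    def place(self, order_id, amount):
        self._orders.append((order_id, amount))

    def open_orders(self):
        # Read-only view: a tuple copy cannot be used to mutate internals.
        return tuple(self._orders)


class OrderBookTest(unittest.TestCase):
    def test_placed_order_becomes_visible_as_open(self):
        book = OrderBook()
        book.place("A-1", 100)
        self.assertIn(("A-1", 100), book.open_orders())


if __name__ == "__main__":
    unittest.main()
```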
