Testing is Not a Tug of War

It is amazing how fast technology continues to evolve while the cross-functional teams responsible for this rapid evolution seem locked in a virtual ‘tug of war’. Don’t get me wrong, I am not saying every team is dysfunctional. I am simply suggesting that there remain many teams where constant battles between the various groups mean that projects succeed only through extraordinary effort and heroics. Anyone who has been involved with this type of organization understands the toll it takes on the team and on the long-term success of the business.

Organizations that do not foster a supportive, collaborative work environment will struggle to retain talent and to meet the business goals tied to delivering quality solutions to their customers.

Although there are many different functions (roles) ultimately involved with creating a product, this blog post will focus on development and testing. Specifically:

  • what development should know about testing; and
  • what testing should know about development.

The goal is to share my views on what developers and testers need to understand so that they can work together effectively and deliver high-value products and solutions to market.

 

What development should know about testing

First, it is important to remember that the product is more than software. There are a lot of things involved with creating, maintaining and supporting a product to ensure the success of the business, and all of these have an impact on the testing approach, scope and effort. For example:

  1. There must be a well-documented, reliable and easy-to-follow process to install, upgrade and configure the solution.
  2. There needs to be an efficient process in place for the customer to obtain technical support and report issues. Note: Issues reported by the customer ultimately need to be validated as resolved before the fix is released.
  3. The application’s code (new and existing) must be versioned and changes to the code carefully managed. Otherwise, the team will not be able to accurately plan, execute and report details on the testing associated with a specific release of the product.
  4. The devices, operating systems and add-ons supported by the application must be clearly defined. Teams that fail to specify these details risk the application not being tested under ‘real-world’ conditions, resulting in customer dissatisfaction and negative consequences for the business.

 

Quality is more than a lack of bugs

It is a common misconception that the absence of any open bugs (aka defects) means that the product is of high quality. In reality, assessing the quality of a product is far more complex and depends on the importance placed on each of the following:

  1. Is the product / application practical to use? Answering this depends on how effectively an organization can assess / test the usability, understandability, learnability, operability and performance of the solution.
  2. Does it keep working? Teams had better be prepared to test the application’s reliability, maturity, fault-tolerance, integrity and recoverability if this characteristic is key to assessing the quality of the product.
  3. Does it make effective use of system resources? Test teams will need to conduct non-functional testing related to performance, load and stress to determine the efficiency, storage and processing capabilities / limitations of the product.
  4. Is it economical to build and maintain? This one is important to the business, as cost-effective solutions that are maintainable, changeable, stable, testable, portable, adaptable and reusable are key to achieving the ‘bottom-line’ results necessary for long-term success.

 

Quality Assurance is more than testing

News flash! QA does not equal testing. Don’t believe me? Here is what you will discover if you research the terms online:

  • Quality Assurance – “A program for the systematic monitoring and evaluation of the various aspects of a project, service, or facility to ensure that standards of quality are being met” (Merriam-Webster)
  • Testing – “(Test) A critical examination, observation, or evaluation : trial; specifically : the procedure of submitting a statement to such conditions or operations as will lead to its proof or disproof or to its acceptance or rejection <a test of a statistical hypothesis>” (Merriam-Webster)

Bottom-line is that Quality Assurance is everything you do to minimize the risk of failure and promote excellence:

  • Risk management
  • Customer involvement
  • Skillful developers and testers
  • Process definition and improvements
  • Inspections, reviews and testing
  • Experience-based improvement

Testing, on the other hand, involves members of the team completing a variety of verification and validation tasks throughout the product development life-cycle, regardless of whether you use Waterfall, Agile or some other methodology.

  • Verification – The act of demonstrating that a work item is satisfactory by checking it against its predecessor work item. For example: the product module is verified against the module-level design.

Verification answers the question “Is the system being built right?”

  • Validation – The act of demonstrating that a work item is in compliance with the original requirement. For example: The product module would be validated against the input requirements it is intended to implement.

Validation answers the question “Is the right system being built?”

So here is the “rub”!

Good testing is hard to do.

To complete an effective test campaign the tester must:

  • Anticipate the user data, skills / experience, actions, expectations and environment
  • Often examine a product that is invisible, volatile, sensitive, complex and / or unfamiliar
  • Use a process that is endless, ambiguous, negative, boring and / or laborious
  • Find problems that are unthinkable.

And all of this must be done while considering a multitude of possible permutations / combinations.

  • Functions vs. input data vs. states
  • Product vs. platforms
  • System vs. external factors
  • Testing vs. versions of product

 

Development can make testing easier to do

In reality, there are a lot of things various members of the team can do to make it easier for the organization to develop, test and deliver quality products to market. For example:

  • Publish and manage specifications, requirements and / or designs explaining the product features / functionality
  • Enforce the use of internal error checking
  • Build and maintain a suite of automated unit tests that run as part of the integration process
  • Get better at letting others (e.g. testers) know what has changed and / or where to expect problems
  • Deploy a solution to automatically test the build before releasing it to test
  • Evolve the product in functional layers
  • Build-in testability by creating and maintaining interfaces that enable the team to deploy more effective testing solutions (e.g. automated testing through product API).
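
The unit-test point above can be sketched with a minimal example. Everything here is hypothetical (a made-up `discount` pricing rule, not anything from this post); it simply shows the shape of a small automated suite a developer could wire into the integration build:

```python
import unittest

# Hypothetical product code: a pricing rule worth guarding with tests.
def discount(subtotal, customer_years):
    """Apply 5% off per full year as a customer, capped at 25%."""
    if subtotal < 0:
        raise ValueError("subtotal must be non-negative")
    rate = min(0.05 * customer_years, 0.25)
    return round(subtotal * (1 - rate), 2)

class DiscountTests(unittest.TestCase):
    def test_new_customer_pays_full_price(self):
        self.assertEqual(discount(100.0, 0), 100.0)

    def test_discount_grows_with_tenure(self):
        self.assertEqual(discount(100.0, 2), 90.0)

    def test_discount_is_capped(self):
        self.assertEqual(discount(100.0, 10), 75.0)

    def test_negative_subtotal_is_rejected(self):
        with self.assertRaises(ValueError):
            discount(-1.0, 1)
```

Run automatically on every build (e.g. via `python -m unittest`), a suite like this also tells testers exactly which behaviour is already covered, letting them focus their own effort elsewhere.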

 

What testing should know about development

Up to this point the emphasis has been on helping developers better understand testers. Well, testers are not off the hook. It is essential that anyone involved with testing remembers:

  • Developers are not out to hide problems
  • Delivering a quality product is a common goal
  • Ambiguity is a common enemy
  • Development is similarly pressured to do more with less
  • Developers are seeking ways to improve overall effectiveness
  • Testers can make it easier for developers to determine the root cause of a fault by providing clear, complete and concise details in defect reports
  • Testers who can help create automated solutions to improve environment setup / configuration and reduce the time to run a variety of test types (unit, integration, functional, etc.) can improve the overall effectiveness of the entire organization.

 

Bad Tester – Developer relationships

Suffice it to say, there are many examples of things that contribute to a less-than-ideal relationship between developers and testers. Here are just a few:

  • Testers get frustrated when a defect that they took the time to clearly describe, justifying the assigned priority, gets downgraded or outright rejected by development without anyone contacting them.

Developers need to engage the tester to better understand the defect.

  • Developers similarly become annoyed when defects lack the necessary details for them to quickly determine the root cause of a problem so that they can implement a fix quickly.

Testers need to prepare defect reports containing the details necessary to quickly diagnose and resolve issues in the product.
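
As an illustration of what ‘clear, complete and concise’ can look like, here is an invented defect report (the product, build number and field names are all assumptions, not a real standard) with enough detail for a developer to start diagnosing immediately:

```
Title:     Checkout total ignores loyalty discount for returning customers
Severity:  Major          Priority: High
Build:     2.4.1 (release branch), Windows 10, Chrome 96
Steps:     1. Log in as a customer with 2+ years of purchase history
           2. Add any item to the cart and open the checkout page
Expected:  Subtotal shows the tenure discount applied
Actual:    Full price is charged; no discount line appears
Evidence:  Screenshot and server log excerpt attached
```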

  • Testers struggle to plan and implement test solutions when the developer fails to notify others about changes that they make to the code. Not only does this impact the relationship developers have with testers, it also adds risk to the product quality and the business.

Developers must ensure their code changes are understood by others so that the appropriate quality assurance activity can be performed.

  • Developers and testers get frustrated when requirements are ambiguous or non-existent. Ineffective requirements contribute to project delays, frustration across the team and, ultimately, product quality issues because the software changes do not meet customers’ expectations.

Business and product requirements must be clear, concise and complete in order for the cross-functional team to effectively deliver quality features / functionality to market.

  • Developers and testers get into conflict when they do not share common goals and objectives (e.g. project milestones, processes, quality, etc.).

Establish a shared vision, set clear expectations about key deliverables and think ‘win-win’.

At the end of the day, if the organization wishes to avoid having to “Stop Testing”, everyone involved must “Start Thinking” like a team. This means creating an “eco-system” where the cross-functional team shares a common vision, communicates effectively, collaborates to solve difficult problems, respects others and “plays to win”.

Scott Acker, VP Operations – Central Canada at PQA Testing