"Validate the requirements for this data warehousing project, please." -- Sure.

"I need you to verify that we met our go/no-go criteria by Thursday at 4 pm." -- Roger that.

If you use the words "validate" and "verify" in a quality control environment, you doubtless hear them used incorrectly. If you don't use them, you should start. And in less than three minutes of your time, I can tell you why.

But first: it isn't your fault if you cannot distinguish verify from validate. They mean about the same thing in common usage, and take on meanings that are hard to tie back to quality control. Consider the most common non-technical uses of validate: you can ask your spouse to validate your feelings, or you can ask the attendant to validate your parking ticket. This is plainly a strange word.

Let's get these straight with three bold-faced mantras you can and should memorize:

Validation. The root is "valid," like a valid argument.
  • It is performed on requirements, plans, designs
  • An argument is valid when you can say "if the premises are true, then the conclusion must be true"
  • Requirements are valid when you can say "if these are well-executed, then the results will meet our needs"
Verification. The root is "verify," like verifying someone's identification.
  • It is performed on work products: objects, results, software
  • You verify a person's identity by
    • Comparing their appearance to a photo-ID, or
    • By testing them: asking a few personal questions only the real person would know
  • You verify work products by
    • Comparing them to designs and plans, or
    • By testing them: executing test cases to see how the product performs (see the sketch after this list)
A comes before E, so we validate first, then we verify.
  • This is a general rule, not an absolute one
  • Generally, you first ensure that the plan is a valid plan
  • Then, after doing the work, you verify that you followed the plan in all its particulars
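
To make the verification mantra concrete, here is a minimal sketch in Python. Every name in it is hypothetical: load_orders stands in for a work product (say, a routine feeding the data warehouse), and EXPECTED_COLUMNS stands in for the design it must match. The test verifies both ways: by comparing the product to the design, and by executing a pre-defined check with a pre-defined expected outcome.

    # Verification sketch. All names (load_orders, EXPECTED_COLUMNS) are
    # hypothetical stand-ins, not any particular project's artifacts.

    EXPECTED_COLUMNS = ["order_id", "customer_id", "order_date", "total"]  # from the design


    def load_orders():
        """Hypothetical work product: a routine that loads the orders table."""
        return [
            {"order_id": 1, "customer_id": 42, "order_date": "2024-01-05", "total": 99.50},
            {"order_id": 2, "customer_id": 43, "order_date": "2024-01-06", "total": 12.00},
        ]


    def test_orders_match_design():
        rows = load_orders()
        # Verify by comparison: every row carries exactly the columns the design specifies.
        for row in rows:
            assert sorted(row) == sorted(EXPECTED_COLUMNS)
        # Verify by testing: a pre-defined check with a pre-defined expected outcome.
        assert all(row["total"] >= 0 for row in rows)


    if __name__ == "__main__":
        test_orders_match_design()
        print("verified: work product matches the design")

Run it with pytest or execute the file directly; either way, the point stands: verification compares what was built against what was planned.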

So then what is the word "testing" for, and does it cover both verify and validate?

Ugh. You had to bring that up. In an IT environment, testing is too often thought of as a narrow type of quality control, performed only by dedicated, independent QC personnel, and only on software products or communications systems. And when that narrow conception of testing is conflated with the broad concept of verification, trouble ensues.

But to answer the question: quality control includes both validation and verification, and each is a form of testing. But "testing" is often limited to the verification work. The problem is that when you use "test" instead of "verify," strange things happen, because "test" carries a very narrow connotation.

NTRs? Non-testable requirements? I don't think they exist.

You needn't raise a search party to find the phrase "not testable" affixed to a requirement in an otherwise sensible organization: if it cannot be verified by QC personnel executing a pre-defined procedure with pre-defined expected outputs in advance of a deployment, a QC team may deem it "non-testable." Which is brief enough but inaccurate, for what they mean is "…not by us, anyway, in the way that we like to do it." But can a requirement really be insusceptible to testing?

Let's use the right word: is it possible to have a good requirement that cannot be validated? That is, nobody in the organization is qualified to say that the requirement appropriately meets your needs. Further, is it possible to have a requirement whose delivery can never be verified? That is, the business says the requirement will satisfy a business need, and that the need is valuable enough to justify the effort, yet nobody will ever be able to tell whether you really delivered. Plainly nonsense.

In short, valid requirements are supposed to make a difference. Some difference. Any difference. If it is not even conceivable to discern whether a requirement was met, how can one claim that it is required for business success and worth the investment of energy to deliver? And that horrid tag "not testable" (I have seen this happen) can result in a valid requirement never being verified by anybody, and even missed at delivery. If the traceability matrix had instead asked "who will verify" and "when will they verify," rather than "how are we testing," this would never have occurred.

But isn't this just two little words?

Not at all. Clear and meaningful use of words like validate and verify can spread to other parts of a traceability matrix, and into the culture of your organization. One obvious result of failing to use the terms properly is needless pain in tracking requirements across your life cycle, for want of apt terms.

  • Whose requirement is this?
  • Who owns it?
  • Is it testable?
  • Who signed-off on it?

None of those questions have clear meaning without a paragraph to explain them. But compare:

  • Who drafted this requirement?
  • Who validated it?
  • Who delivered it?
  • Who verified it?

These questions are clear. They mark distinct roles and responsibilities that "ownership" does not. Their meanings are mutually exclusive and more nearly exhaustive than terms like "own" and "approve." That is, Bob can both draft and validate a requirement, but the two roles remain distinct, and both must happen. How much clearer traceability becomes when we eschew inapt terms like "ownership" for clear terms that carry their weight and define the steps that must occur for predictable success.
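
As a sketch of how those four questions might anchor a traceability matrix, consider the record below, again in Python. The field names are my own illustration, not any standard: the point is that drafting, validating, delivering, and verifying are separate, checkable facts rather than one murky "owner" column.

    from dataclasses import dataclass
    from typing import Optional


    @dataclass
    class RequirementTrace:
        """One row of a traceability matrix (illustrative field names)."""
        requirement_id: str
        description: str
        drafted_by: str                     # who wrote the requirement
        validated_by: Optional[str] = None  # who judged it a valid requirement
        delivered_by: Optional[str] = None  # who did the work
        verified_by: Optional[str] = None   # who compared the result to the plan

        def open_roles(self):
            """Name the steps that have not yet happened; no vague ownership here."""
            pending = []
            if self.validated_by is None:
                pending.append("validate")
            if self.delivered_by is None:
                pending.append("deliver")
            if self.verified_by is None:
                pending.append("verify")
            return pending


    # Bob may both draft and validate; the roles stay distinct, and each is recorded.
    req = RequirementTrace("DW-117", "Nightly load finishes by 6 am", drafted_by="Bob")
    req.validated_by = "Bob"
    print(req.open_roles())  # ['deliver', 'verify']

A matrix built on these fields answers "who validated it" and "who verified it" at a glance, which is exactly what a murky "owner" column cannot do.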

My passion for this sort of clarity may suppress invitations to dinner parties. But you can push for clarity within your organization without getting carried away. It will pay off. People spot and respond to – they thrive upon – clarity. Providing it, even within the most leviathan culture, or in the littlest ways, will generate marked results.