Thursday, December 24, 2015

Snowmageddon

[Originally posted January 2015]

 

I heard an interesting comment on a radio news station the other day from a meteorologist who was being interviewed following the massive overreaction to the recent snowfall in New York.  (For those who don’t know, there were predictions earlier this week of an “epic”, “historic”, and “crippling” blizzard that would shut down New York, which for the most part didn’t materialize.  Yes, there was some snow, but it was hardly “historic”.)  Admitting that the National Weather Service had made an error, the meteorologist explained that forecasters rely on models.  Weather forecasting is a science, he said, and like any science, it is not perfect.

 

I thought that idea fit nicely with how we view software testing.  Every test is a question that we are asking of the product.  Every test is a search for information that our stakeholders need.  Every test is an experiment.  A scientific experiment.  Yes, testing is a science, and indeed, it is not perfect.

 

That’s why we, as scientific testers, use things called “heuristics” to design our tests, to build our models, and to evaluate our findings.  A heuristic can be thought of as a “rule of thumb”: something that is useful and usually correct, but sometimes fallible or incomplete.  Every method we have of determining whether a test passed or failed is heuristic, which is why human judgement is required both in testing and in making release decisions.
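
To make that concrete, here is a minimal, purely illustrative sketch in Python of one kind of heuristic pass/fail check: compare an observed result to an expected one within a tolerance.  The function and the numbers are invented for the example; the point is that the verdict it returns is advice for a human, not proof.

def close_enough(observed: float, expected: float, tolerance: float) -> bool:
    """Heuristic oracle: report 'pass' if the observed value is within
    tolerance of the expected value.  Fallible by design: a real defect
    can hide inside the tolerance, and a harmless difference can fall
    outside it, so a human still has to judge the result."""
    return abs(observed - expected) <= tolerance

# Forecast-flavoured example: the model predicted 24.0 inches of snow.
print(close_enough(9.8, 24.0, tolerance=2.0))   # False: flag for human review
print(close_enough(23.5, 24.0, tolerance=2.0))  # True: probably fine, but only probably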

 

As NY Mayor de Blasio said, “Would you rather be prepared or unprepared?  Would you rather be safe or unsafe?  My job as the leader is to make decisions and I will always err on the side of safety and caution.”  Yes, weather forecasting, like testing, is not perfect (yet), but we rely on human judgement, using the best knowledge we have to make the best decisions we can.
