Wednesday, December 30, 2015

The Inefficiency of Gifts and Software Metrics

The Wall Street Journal (12/24/15, “If You’ve Bought No Presents Yet, These Wise Men Applaud You”) reports that from a purely economic point of view, holiday gift giving is a wasteful practice because it reallocates resources inefficiently.  On average, gift receivers would have been willing to spend on themselves only about 70% of what the gift giver paid for the present, leaving an inefficiency ratio near 30%.  It’s more cost-effective to give cash or gift cards, they argue, because the value of a cash gift is undeniable: to the receiver, the value of x dollars given is exactly x. (Usually.)

 

Yet, the WSJ continues, not one of the 54 economists they interviewed for the article heeded their own advice.  Every one of them both received and purchased gifts for their loved ones this holiday season.  It seems that despite the hard data arguing against presents, the warm feelings of the holiday season take over.

 

I see this a little differently.  Perhaps the 30% economic “inefficiency” can be considered an emotional surcharge built into the cost of the physical present.  How much more a person is willing to pay for a present, above what the receiver would have paid for it themselves, isn’t necessarily a real inefficiency that requires correction; instead, it’s a measurable attribute of the giver’s current emotional state.  Fluffy emotional stuff like love and guilt sometimes merges with hard data like economic efficiency ratios.

 

Which brings us to the tricky world of software metrics.

 

The traditional approach to measuring performance is heavily dependent on quantitative, numbers-and-formula-based assessments.  Questions like “How many test cases did you write today?”, “How many bugs did you report?”, “How many tests did you run?”, and “For how long was the environment down?” typify the hard-data approach to software metrics.  However, there is a hidden inefficiency here, too.

 

Students of software testing will recognize that quantitative metrics like the questions above are almost always subject to measurement dysfunction: the idea that when you measure particular aspects with the intention of improving them, they will improve, but to the detriment of other important aspects that you may not be measuring.  Adding context-driven qualitative measures to a traditional metrics program may help.  Instead of depending only on the numbers, a qualitative system looks for an assessment based on a fuller story.  Having a fuller conversation with the test team may provide a deeper understanding of the project’s progress and danger points.

 

Like gift giving, software metrics have an emotional aspect as well: pride, fear, anger, despair, overconfidence; the list goes on.  These aren’t inefficiencies; they are an expected, natural part of human endeavor.

Tuesday, December 29, 2015

Result Variables

Over lunch with a few colleagues a few weeks ago, someone mentioned that he had recently interviewed a QA candidate who gave the best answers to his questions of anyone he’d ever interviewed.  Why?  Because, he said, she was “results-driven” in her method of testing.  It turns out that what this colleague was describing is a particularly useful, fundamental, yet woefully underutilized approach to domain testing.  (If you don’t remember what domain testing is, see my previous post.)

 

Simply put, the idea is to make the result variable the primary objective of the test, not just the input variables.  Say you’re testing a program that adds two numbers.  Typically, you would want to know something about the numbers you input, right?  What are their valid ranges?  What should happen if you entered max+1 or min-1?  Is zero OK?  What about negative numbers?  Or decimals?  And if decimals are allowed, then to what precision?
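To make those input-focused questions concrete, here is a minimal sketch in Python.  The add() function, its assumed valid input range of 0 to 999, and the specific probe values are all my own illustrations, not from any particular program under test.

# Input-focused boundary probes for a hypothetical two-number adder.
# The valid range 0..999 is an assumed requirement for illustration only.
MIN_INPUT, MAX_INPUT = 0, 999

def add(a, b):
    return a + b  # stand-in for the program under test

input_probes = [
    (MIN_INPUT, MIN_INPUT),   # both inputs at the minimum
    (MAX_INPUT, MAX_INPUT),   # both inputs at the maximum
    (MIN_INPUT - 1, 5),       # min - 1: does the input filter reject it?
    (MAX_INPUT + 1, 5),       # max + 1: does the input filter reject it?
    (0, 7),                   # is zero OK?
    (-3, 7),                  # what about negative numbers?
    (0.005, 7),               # decimals, and to what precision?
]
for a, b in input_probes:
    print(f"add({a}, {b}) -> {add(a, b)}")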

 

These are all legitimate questions if your primary objective is to test the input variables, but they don’t reveal much about the result of the calculation.  Even though checking for a buffer overflow on an input filter may be important, it may be more interesting to construct your input data to force values of interest onto the output variable.  In the Domain Testing Workbook, Cem Kaner calls this testing for the consequence: “If a program does something or cannot do something as a result of data that you entered, that’s a consequence of the entry of the data.”

 

Consider the same set of questions above, but focus instead on the result variable.  If you’re concerned about boundary testing, and the maximum allowable value for the result variable is 100, for example, you have many interesting ways to try to cross that boundary (for example, enter 100 + 1; 0 + 101; -1 + 102; 99.99 + 0.02).  Testing with the result variable as the primary focus can generate a much richer set of tests to work with, and may lead you to discover more interesting information about your program than focusing on input boundaries alone.  This subtle distinction in the way you think about your tests could reap large benefits in the long run.
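Here is a companion sketch to the one above, this time building each test backwards from the result we want to provoke.  It reuses the same hypothetical add() stand-in and borrows the assumption that the result variable must not exceed 100; the input pairs are the ones mentioned in the paragraph above.

# Result-focused probes: each input pair is chosen to force the result
# variable onto, or just across, its assumed maximum of 100.
RESULT_MAX = 100

def add(a, b):
    return a + b  # same hypothetical stand-in for the program under test

result_probes = [
    (50, 50),        # result lands exactly on the boundary: 100
    (100, 1),        # 101: crosses it from a large first input
    (0, 101),        # 101: crosses it from the second input alone
    (-1, 102),       # 101: crosses it even though one input is negative
    (99.99, 0.02),   # 100.01: crosses it by a tiny decimal margin
]
for a, b in result_probes:
    result = add(a, b)
    flag = "over max" if result > RESULT_MAX else "within max"
    print(f"add({a}, {b}) = {result} ({flag})")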

Monday, December 28, 2015

Ethics of Software Testing

As best I can remember, every company I have ever worked for had an official Code of Conduct. In them, workers are reminded that integrity and ethical conduct are expected in the course of performing their jobs and acting in the best interest of their clients.

 

There are numerous ways in which this applies to us as testers.  Put succinctly: Don’t Do Fake Testing.  Be honest, don’t plagiarize, act congruently.

 

James Bach has a good list of ethical principles; among them are:

 

* Report everything that I believe, in good faith, to be a threat to the product or to the user thereof, according to my understanding of the best interests of my client and the public good.

* Apply test methods that are appropriate to the level of risk in the product and the context of the project.

* Alert my clients to anything that may impair my ability to test.

* Recuse myself from any project if I feel unable to give reasonable and workman-like effort.

* Make my clients aware, with alacrity, of any mistake I have made which may require expensive or disruptive correction.

* Do not deceive my clients about my work, nor help others to perpetrate deception.

 

It’s not just about managing conflicts of interest and reporting outside business interests (although that’s important too).  For testers, being ethical means we study the relationship between our products and the world in which they run.  We don’t cheat, and we don’t take credit for work we haven’t done.  We allow ourselves to grow by working through our assignments for ourselves.  We put our clients first by reporting findings that we suspect may be problems for our users.  Expert testers are, by definition, ethical.

 

Friday, December 25, 2015

Domain Testing

[Originally posted February 2014]

 

Earlier this year I took a trip down to Melbourne.  No, not all the way to Australia – I went to Melbourne, Florida, a small city on Florida’s eastern coastline, not too far from the more famous Orlando, and certainly warmer than a New York January.

 

Melbourne is also the location of the Florida Institute of Technology, or as the locals call it, “Florida Tech.”  Cem Kaner, famous testing author and speaker, originator of the Black Box Software Testing (BBST) education series, and my longtime professional friend, is a professor of computer sciences there and runs Florida Tech’s Center for Software Testing, Education, and Research.  Cem, along with co-authors Sowmya Padmanabhan and Doug Hoffman, has recently published a new 460-page softcover book, The Domain Testing Workbook, a companion to the latest online course in the BBST series.  What I attended at Florida Tech was the first-ever live pilot of the Domain Testing class, the 4th in the BBST series.

 

You may think, as I once did, that Domain Testing is about testing software from a business or domain knowledge point of view.  This, actually, is not what the course was about at all.  Instead, Domain Testing is all about using variables and values of variables as the building blocks for your tests.  Think about testing as the domain of possible inputs and outputs to a system in a classic black box testing context.  The class was very intensive, and went far beyond and far deeper than simple equivalence class partitioning and boundary analysis.  There is surprisingly a LOT to know about testing variables.

 

This is not the place for an in-depth treatment of Domain Testing, but I want to relay a few key highlights that resonated with me:

 

I liked that the material was structured around a Domain Testing Schema.  The schema is a high-level checklist that serves as an outline for how to do domain analysis, which is considered a key skill for software testers.  Starting with a variable tour, we were instructed to identify and classify the variables of a particular area of a sample program, using a mind-mapping tool of our choice.  Then, for a few of the variables, we were coached on how to dig deep into them:  What are the variable’s primary dimensions and data types?  Could the values be ordered, and if so, on what scale?  Is it an input variable or a result variable?  Or maybe a temporary, storage-related variable?  What are the consequences of the variable?  For example, could it be a variable that holds intermediate values of calculations, whose values constrain the set of valid values for the input variable that you’re testing?  And what are some of its secondary dimensions?
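For a sense of what that digging can produce, here is a rough sketch of one variable’s analysis captured as a record.  The field names and the sample “order quantity” variable are my own illustration of the schema’s questions, not the workbook’s notation.

# A single variable's domain analysis captured as a record (illustrative only).
from dataclasses import dataclass, field

@dataclass
class VariableAnalysis:
    name: str
    primary_dimension: str   # what the variable fundamentally measures
    data_type: str           # integer, float, string, enumerated list, ...
    ordered: bool            # can its values be placed on an ordered scale?
    role: str                # "input", "result", or "intermediate"
    consequences: list = field(default_factory=list)          # what it constrains downstream
    secondary_dimensions: list = field(default_factory=list)  # e.g. length, precision, format

quantity = VariableAnalysis(
    name="order quantity",
    primary_dimension="number of items ordered",
    data_type="integer",
    ordered=True,
    role="input",
    consequences=["constrains the valid values of the order-total result variable"],
    secondary_dimensions=["display width of the quantity field"],
)
print(quantity)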

 

This variable domain analysis is a critical driver for another key skill for testers – technical analysis of potential software risk.  Depending on what kind of variable you are working with, different classes of risk are at play.  For example, bugs that would occur with an input string variable would not necessarily happen with an internal floating-point variable or an enumerated-list variable, so different classes of tests need to be designed to trigger the different potential failure types.  One of our assignments was to come up with five different potential risks associated with each of an integer, a floating-point, and a string variable.  For each of these 15 failure modes (5 risks for each of 3 variable types), we needed to design at least one test that would serve as a best-case representative for that risk, and explain what makes that test more powerful than other valid tests for that data-type/risk pair.  Like I said, the class was very intensive.
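Here is a sketch of the kind of risk-to-test mapping that assignment asked for, in Python.  The specific risks and the “best-case” values are my own illustrations (and only three per type rather than the assignment’s five), not the workbook’s answer key.

# A few potential risks per data type, each paired with one representative
# test value chosen to be a strong trigger for that risk (illustrative only).
risk_catalog = {
    "integer": [
        ("overflow past the type's maximum", 2**31),   # one past a common 32-bit signed limit
        ("mishandled sign", -1),
        ("off-by-one at a stated boundary", 101),      # just past an assumed maximum of 100
    ],
    "floating-point": [
        ("rounding / precision loss", 0.1 + 0.2),      # classic binary-representation surprise
        ("very large magnitude", 1e308),
        ("exact comparison against a boundary", 99.999999999),
    ],
    "string": [
        ("buffer or length limits", "A" * 10_000),
        ("empty value", ""),
        ("characters outside the expected set", "café\u0000'); --"),
    ],
}
for data_type, risks in risk_catalog.items():
    for risk, value in risks:
        print(f"{data_type}: try {value!r} to probe for {risk}")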

 

Which brings me to my concluding thought:  There was homework.  Homework!  I couldn’t believe it.  But in retrospect, the homework is what made all the difference in the learning experience.  As much as I understood what was happening in class, when I sat down to work on the examples on my own, it was a whole different story.  The very act of working out the variable risk equivalence tables, making mistakes, and getting and applying feedback was the real learning experience, and was much more valuable than following the lecture in class.  And that, the cornerstone of the BBST courses, was yet another takeaway from my trip to Melbourne: you can’t really learn how to do things unless you practice them.

 

Thursday, December 24, 2015

Snowmageddon

[Originally posted January 2015]

 

I heard an interesting comment on a radio news station the other day from a meteorologist who was being interviewed following the massive overreaction to the recent snowfall in New York.  (For those who don’t know, there were predictions of an “epic”, “historic”, and “crippling” blizzard in New York earlier this week that would shut down the city; for the most part, it didn’t materialize.  Yes, there was some snow, but it was hardly “historic”.)  Admitting that the National Weather Service had made an error, the meteorologist explained that they rely on forecasting models.  Weather forecasting is a science, he said, and like any science, it is not perfect.

 

I thought that idea fit nicely with how we view software testing.  Every test is a question that we are asking of the product.  Every test is a search for information that our stakeholders need.  Every test is an experiment.  A scientific experiment.  Yes, testing is a science, and indeed, it is not perfect.

 

That’s why we, as scientific testers, use things called “heuristics” to design our tests, to build our models, and to evaluate our findings.  A heuristic could be thought of as a “rule of thumb”; something that is useful, usually correct, but sometimes fallible or incomplete.  Every method that we have of determining if a test passed or failed is heuristic, which is why human judgement is a required skill in testing and for making release decisions.
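As a small sketch of what a heuristic might look like in a test, consider an oracle that compares the product’s answer against a rough reference model and accepts it if the two are close enough.  The temperature-conversion example, the tolerance, and the function names are all hypothetical; the point is that both the reference model and the tolerance are fallible judgment calls.

# A heuristic oracle: useful and usually right, but not proof of correctness.
def product_under_test(celsius):
    return celsius * 9 / 5 + 32   # stand-in for the real code being tested

def reference_model(celsius):
    return celsius * 1.8 + 32     # an independent, possibly simplified, prediction

def looks_consistent(celsius, tolerance=0.01):
    return abs(product_under_test(celsius) - reference_model(celsius)) <= tolerance

for c in (-40, 0, 37, 100):
    verdict = "looks consistent" if looks_consistent(c) else "worth investigating"
    print(f"{c} C -> {product_under_test(c)} F: {verdict}")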

 

As NY Mayor de Blasio said, “Would you rather be prepared or unprepared?  Would you rather be safe or unsafe?  My job as the leader is to make decisions and I will always err on the side of safety and caution.”  Yes, weather forecasting, like testing, is not perfect (yet), but we rely on human judgement, using the best knowledge we have to make the best decisions we can.

Wednesday, December 23, 2015

Debunking Myths of Exploratory Testing

The article “Exploratory Testing: Tips and Best Practices” appears to present some helpful suggestions about implementing exploratory testing in your project.  However, certain assumptions and statements there indicate that the author may be confused about some fundamental ideas of ET.  Below are a few problematic assertions from that piece, along with my comments on them.

 

Ø  “Exploratory Testing: Tips and Best Practices”

 

BB: We begin with the title.  There are, as is increasingly becoming known, no such things as best practices in the literal sense.  There may be good, recommended practices sometimes, but because different situations require different solutions, we can’t say that a practice is best in all situations, even for exploratory testing.

 

 

Ø  “Exploratory testing helps quality analysts and others involved in the testing field ensure systems and applications work for their users.”

 

BB:  Exploratory Testing does no such thing.  Testing cannot ensure anything.  Testing is a search for information; it is scientific experimentation.  Scientists know that experiments that are designed to confirm an expected value are of far less scientific value than experiments designed to discover how the hypothesis may be disproven (as we learn in BBST Foundations).  Similarly, the result of testing is to report findings and make recommendations rather than “ensuring” that the system works.

 

 

Ø  “Exploratory testing is often misunderstood as an approach…”

 

BB: He has it 100% backwards.  Exploratory testing is exactly an approach.  It is not a technique.  A technique is a way to do something, like following a recipe is a way to bake a cake.  Exploratory testing isn’t so prescriptive.  By contrast, an approach is a more general concept.  It is the overall manner in which you act.  It is your gameplan, your strategic process, your perspective and your outlook.  Exploratory testing is not a specific testing method used to accomplish a specific quality goal.  Instead, it is the overall manner in which testing occurs.

 

 

Ø  “You are not exploratory testing if you are following a script”

 

BB: Except when you are.  There is a continuum of exploration, and even when following a script you may be doing it in an exploratory way.

 

 

Ø  “The aim of exploratory testing is not coverage – it’s to find the defects and issues in a system that you won’t find through other forms of testing.”

 

BB: Except when the aim of exploratory testing IS coverage.  Usually, the aim of exploratory testing is to evaluate how much value the product will add for, or take away from, its stakeholders, as the tester learns about the product.  It’s only about finding defects and issues when the information objective is bug hunting.

 

 

Ø  “Quite often edge case defects will have a high level of severity even if they are less likely to arise.  This is due to the nature of exploratory testing – it focuses on the parts of a system that are away from the normal usage pattern and are less likely to be well tested.”

 

BB: Except when the testing mission IS to execute the system in the normal usage pattern, for example in a “Ready for Business” or smoke-test situation.

 

 

Ø  “Typically, exploratory testing needs a greater level of testing skill and experience than other testing techniques”

 

BB: As previously mentioned, exploratory testing is not a technique; it is an approach.  But the general idea is true.  ALL of testing requires highly skilled professionals.

 

 

Ø  “Keep a clear record of what you did….”

 

BB:  I agree.  But credit should have been given to Jonathan Bach, who presented this material in his talk “How to Measure Ad Hoc Testing” at STARWest 2000.

 

 

Ø  “Use exploratory testing alongside automated testing”

 

BB: This statement assumes that exploratory testing and automated testing are two different things.  But they don’t have to be.  Just as so-called “manual testing” can be done in an exploratory or non-exploratory way, so too can test automation.  Exploratory automated testing is very powerful.
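As an illustration of that last point, here is a rough sketch of automation used in an exploratory way: rather than replaying a fixed script, the harness varies its inputs, records exactly what it tried, and flags surprises for a human to investigate.  The search_products() feature and the “surprise” rule are hypothetical.

# Automation used in an exploratory way: varied inputs, a session log, and
# flagged surprises for follow-up (the feature under test is hypothetical).
import random

def search_products(query):
    return [f"result for {query}"] if query.strip() else []   # stand-in feature

candidate_queries = ["", " ", "laptop", "LAPTOP", "a" * 500, "50% off", "'; --", "snowman"]
session_log = []
for _ in range(20):
    query = random.choice(candidate_queries)
    try:
        results = search_products(query)
        surprising = bool(query.strip()) and not results       # non-empty query, no results
        session_log.append((query, len(results), "SURPRISE" if surprising else "ok"))
    except Exception as exc:                                    # crashes are always worth a look
        session_log.append((query, None, f"EXCEPTION: {exc}"))
for entry in session_log:
    print(entry)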

 

 

Ø  “ensure that the system is built to support exploratory testing, e.g. build functionality in slices that can be tested from start to finish as early as possible.”

 

BB: This is not necessary at all, as long as you focus your exploratory testing sessions around clear test missions.

 

Questions? Comments?  I’d be delighted to hear from you!

 

Tuesday, December 22, 2015

Classic Testing Mistakes - A Look Back

Back in 1997 Brian Marick (who later became one of the 17 original authors of the Agile Manifesto) wrote an article called “Classic Testing Mistakes” that became fairly well known in the so-called QA community at the time.  According to Brian’s website, this is the paper that caused James Bach to exclaim, “Brian! I never thought you know anything about testing before!”  It’s interesting to take a historical look at this paper and see what has changed over the past almost 19 years.  There is a lot to examine here, but I’d like to highlight two brief ideas:

 

Marick starts with the role of testing. “A first major mistake people make is thinking that the testing team is responsible for assuring quality”.  I think that by now, many organizations have abandoned that antiquated idea.  Quality is the responsibility of the entire organization, we are now told.  Indeed, I can’t think of a single place where I’ve worked where “Quality” was not listed on the company’s list of Core Principles.  On the other hand, there may be certain organizations in certain contexts where the “tester-gatekeeper” model might still sometimes be valid.  Once again, context is the key.

 

Along the same lines, Marick continues with a second classic mistake: “Most organizations believe that the purpose of testing is to find bugs.”  Instead, he explains, there is one key word missing: testers should be finding important bugs.  “Too many bug reports from testers are minor or irrelevant, and too many important bugs are missed”, he writes.  This is an interesting distinction, because it appears to be an early formulation of today’s context-driven working definition of quality and testing.  The now-famous definition of quality by Jerry Weinberg, “Quality is value to some person,” implies that testing is inherently subjective.  What may be considered a bug by one stakeholder might be a key feature to another; therefore, the first order of business for the software tester is to understand what is important to their users so that they can focus on finding the important bugs, if that’s what they need to do.

 

Please take a look at Marick’s paper.  Which of these mistakes do you think still apply?  Have any of them changed?  How?

 

 

Monday, December 21, 2015

Is QA Testing a cool job?

[Originally posted May 8, 2011]

Last week I was at the STAREast software testing conference in Orlando, Florida, where I was asked to be a panelist at the Bonus Session "Trends, Innovations and Blind Alleys", along with experts Jon Bach, Ross Collard, Rob Sabourin, Justin Hunter, Michael Bolton, and Julian Harty. Someone from the audience asked us about the perception of QA or Testing as not being "cool"; that entry-level technologists wanted to get into the "cool jobs", like programming and development, and not Testing or QA, which wasn't.

I didn't have a response at first, but after a few minutes of quiet thought I chimed in to the ongoing discussion by my fellow panelists. I said that I don't think that a job is something that has the capacity to be "cool" or "not cool". People can be cool...or not cool.

It's up to you: If you come to work every day with pride and self-confidence, then you're working in a cool job. On the other hand, if you come in with an inferiority complex, having doubts about the amount of value you're adding to the project, then you yourself are creating the non-cool job that you find yourself in. It has nothing to do with your job itself; it has everything to do with what you bring to your job.

Sunday, December 20, 2015

Say "I don't know" like a NY Crime Scene Investigator

The television show CSI is all about searching for information by asking questions. (My favorite happens to be CSI NY.)

From the beginning of the episode when the crime is committed until the end of the show when they snatch the bad guy, the Crime Scene Investigators continue to search for clues, question, make inferences, question again, reconcile conflicting evidence, question once more, form hypotheses, and then ask even more questions. As each show approaches the breakthrough point where all the pieces of information fit together and a plausible story is constructed, the investigators ask each other questions to which they don't yet know the answers.

If you watch with a critical eye, you can pick up some tips for those times when you don't have answers to the questions being asked of you. Here are some examples:

* "That's what we have to find out"
* "But whatever it is will bring us one step closer to an answer"
* "I'll find out and let you know"
* "I just did the math and it doesn't make sense"
* "Maybe, [offer suggestion]"
* "We're working on it"
* "I don't have a good answer as to why"
* "It's starting to take shape, we just don't know what shape that is"
* Q: "Do you think that...?" A: "If it is, then..."