Friday, June 8, 2018

Super Experienced Testers in a Millennial World

June 3rd 2018

NYC

 

 

The day started over a light breakfast of bagels and coffee, as the participants arrived at the Breather rental room off of 7th Avenue in NYC.  Once we were all settled in and the opening chimes rang, we began the LAWST-style peer conference in the traditional way:  welcoming remarks by the content owner (Bernie Berger), a review of the ground rules by the facilitator (Geordie Keitt), and a round-robin check-in of the participants (Anna Royzman, George Hamblen, Jack Weintraub, Ben Weber, Andrew Hwang, Joel Montvelisky, Aaron Berger and Dave Rabinek).  The IP agreement was read, and all present agreed.

 

 

Formalities out of the way, we were ready to jump in.  George presented an experience report (ER) about the transitions in his career, from the time he received his college degree until today, highlighting the inevitable twists and turns that occurred and how he coped with them.  The focus was on what he has learned from these changes, what trade-offs he considered, and what advice he would give his younger self today.  Key takeaways from this session were:

 

• Always keep your eyes and ears open for opportunities
• Strike up conversations wherever you go, because you never know whom you might meet
• Put in chips with many people so you can cash them in later
• Come in with an "add value" attitude
• Apply the Myers-Briggs model to office relationships, and read the books Work Types and How to be a Star at Work

 

The presentation was followed by a lively Open Season Q&A session where participants asked for further details and supported George's ER with vignettes of their own. 

 

 

 

After a short break, we returned to the table to participate in the first of the day's group activities: a brainstorming session to gather a list of methodologies, fads, technologies, acronyms, tools, and buzzwords in computing that have been popularized over the past half-century.  This was great fun, as we took a stroll, at times frightening, down our collective memory lane.  We captured each topic on its own Post-It note.  After the idea generation slowed down, we arranged the Post-Its on a timeline based on each one's historical peak in popularity, with Y2K as the central dividing point.  We ended up with two lists: popular IT fads pre-Y2K, and those that became well known only within the past 18 years.  This activity served as a framing exercise to generate discussion and see whether any trends could be drawn.  Can we trace today's Continuous Integration/Delivery and DevOps models to the CMM and punch cards of yesteryear?  Could any predictions for the future be made based on observations of the past? 

 

 

 

 

 

This was an exciting start to an intriguing discussion, but instead of delving into it in detail, we decided to leave it for future treatment and broke for lunch.  We enjoyed sandwiches and each other's company at the café across the street, and we're grateful to AST for sponsoring this special event.

 

 

Bellies full and minds still hungry, we returned to the workshop room for the second half of the day, which Dave Rabinek began by presenting a phenomenal ER about his personal experience working in the famed "Bridgewater Culture".  Dave explained how the notion of Radical Transparency, as described in the book Principles by hedge fund manager Ray Dalio, impacted quality at Bridgewater.  The ER and subsequent Open Season Q&A session developed into a spirited group discussion about the overlapping dynamics of truth, friendship, and vocabulary, and their impact on quality.

 

Some key takeaways were:

 

• Know what you don't know and tell the group, because you will be called out on it later
• Don't feel bad about your mistakes; you are expected to make them
• An accurate understanding of reality is the essential foundation for good outcomes
• Get comfortable with revealing your weaknesses
• Begin professional criticism with the phrase, "This is nothing personal, but…" 

 

 

Our next segment was another group activity, which we later named "Mad Libs".  The activity was to fill in the blanks in the following statement:

 

I used to think that ________.  But I now think that _______.

 

This exercise drew out insights that each of us had over our aggregated 300 years of experience and brought to light various aspects of how our thinking has evolved over time. We discussed how we think of ourselves today compared to when we first began our careers, what we expect of ourselves as we age, how we learn, what our impression is of the state of the testing world, and methods for getting things done efficiently.  We had meaningful discussions around each of these, and we could see this single activity develop into something larger and more profound to share with today's generation of technologists.  Not everybody present agreed with everything said, and that's OK.

 

I used to think that _____.  I now think that _____.

• I used to think that calling testing groups "QA" is a bad thing; I now think that group names are irrelevant.

• I used to think that professional management and relationships should be based on friendliness; I now think they should be based on honesty.

• I used to think that a college degree matters; I now think it's about the best-fit person for the job.

• I used to think I am a firefighter; I now think I am a player/coach.

• I used to think that software testing as a craft is developing to the point that everyone will understand what skilled testing is; I now think we need to educate, educate, educate, and they won't ever get it.

• I used to think there are such things as "gurus"; I now think there is, in fact, no such thing as a guru.  Everyone has their own experiences, the network is the guru, and you shouldn't let other people do your thinking for you.

• I used to think QA was absolute; I now think software quality is benchmarked against customer expectations.  If you release bad software but they are expecting it to be even worse, you're OK.

• I used to think I had to compete with young technologists; I now think, "It's your company, let me know how I can help."

• I used to think you have to wait for the perfect job; I now think you shouldn't be afraid to jump in and try one.  You could be in, out, and on to the next, better opportunity by the time you emerge from your analysis paralysis.

• I used to think I would reach a point where I "knew it" and could stop learning, that once you graduate from certain tasks you are beyond them; I now think you never reach that point.  You always need to keep learning, which is a good thing: it means your brain remains valuable.

• I used to think that when I reached my mid-50s I would wind down, with the finish line in sight, and not need to invest in developing my career; I now think that with 5–10 years to go, I still need to plan like a young guy, learn new things, and bring all the same energy and enthusiasm to the game, particularly so as not to appear like the "old guy".

• I used to think developing my professional network was a fun activity which I enjoyed; I now think it was a vital activity which is bearing great fruit 20 and 30 years later, both personally and professionally.

 

 

After another short break, we turned to the final activity of the day: a "show and tell" of the books on our resource table.  One of the many books brought in was the conference proceedings from STAREast 99, including the slides of a presentation given by George Hamblen in 1999 about managing outsourced testing projects.  That talk, attended by Bernie Berger (and Cem Kaner), was the first time we had met each other.  

 

 

At the conclusion of the day, everyone confirmed what we had been feeling: that it was well worth spending half our weekend away from our families and other obligations to meet up with each other and share our experiences, forge new professional relationships, and be a part of this community-building experience. We decided to meet again in six months...

 

 

 

 

 



Friday, January 15, 2016

A Fable: Thinking Out of the Box, part 2

[This is part 2 of my previous blog post “A Fable: Thinking Out of the Box”; please read part 1 first.]

 

 

A week has passed since your “out of the box” experience, and you haven’t stopped thinking about it.  Every day you have been thinking, pondering, quietly reflecting on the very idea that there is professional wisdom “out there” that you not only don’t understand, but that seems to be in direct contradiction to everything you’ve ever known about QA.  Determined, you decide to explore this a little more.

 

Once again, you rise up above your box, hovering over the landscape.  Floating back to that strange-idea neighborhood, you hope that this time you might be able to meet an owner of one of those boxes.

 

Good news! As you arrive, you see a scholarly looking bearded man next to the oldest looking worn-out box.

 

“Greetings, Tester!” he warmly welcomes you as he looks up from his smartphone.  “I was just updating my Twitter feed.  You know, to stay relevant in this testing industry you need to keep on top of the latest innovations and controversies.  There are amazing conversations about software testing happening in the #testing hashtag every day.  For instance, this #stop29119 petition I was just reading about.  But enough about me.  I see you came from the other side of town.  How can I help you?”

 

“Well,” you begin, “you see, I was just wondering how it could be that you and your box look so wise and experienced, yet, I saw that your box contains some strange sounding wisdom.  So I wanted to understand what that’s all about.”

 

“Very good!  Please give me an example so I can help you”, he says.

 

“OK, well how about when you say, ‘Regression testing is a waste of time’…how can you say such a thing?  We need to run our regression suite each time the product is updated to make sure that no new defects have been introduced!”

 

The bearded man smiles.  “A lot of people have trouble with that one”, he replies.  After pausing for a few moments, stroking his beard in deep thought, he continues.  “Think of it this way.  If you’re concerned that there may be a problem with the updated build, then of course you should test it.  But that’s not what I mean.  I’m talking about re-running the exact same tests in the exact same way each time.  What new information will you learn about the product if you do that?  Maybe a little, but whatever you learn is hardly worth the effort.”

 

“Yes, but don’t you have to ensure there are no defects?”, you ask.

 

“Look.  Are you familiar with the minefield analogy of testing?  Think about walking through an open field that has explosive mines planted all around it in random spots.  You see the footprints of the person ahead of you, who has already gone through the minefield.  If you want to avoid stepping on the mines and blowing yourself up, you’re best off walking exactly the same path, footprint for footprint, as the person who went before you, without any variation whatsoever.  You would be very careful to do that exactly, so that you’ll make it through without anything exploding around you.  The same thing is true when bug hunting.  If you want to AVOID finding any new bugs in the product, then you should follow exactly, without variation, the same tests that have been run before.  But when testing, we DO want to find new information.  So when we run a new test we will get that new information, but when we run an old test it’s more likely that we won’t.

 

“Pay attention”, he continues, “because this is the key point.  I’m not saying that all regression testing is inefficient.  If you run the same tests a little harsher each time, you will learn something from your experiment.  Remember, testing is scientific experimentation, and if you want to learn how to run great tests, you should learn how to design great experiments.  Just like a scientist, because that’s what expert testers are...scientists.  So, for example, if you use more challenging test data than last time, or try different features in different combinations or sequences each time, you will learn something, and you may find new information to report to your stakeholders.  Isn’t that what you’d like to do?  Does this help you understand?”

 

You’re overwhelmed.  Somehow this makes sense, but you’re still confused.  This is a completely different way of thinking about testing than you’re used to.  It’s a different perspective…no...a CONTRARY perspective from what you’ve known up until now.  Mixed emotions are flowing through you: confusion, excitement, fear, skepticism, anticipation.  “I guess this makes sense”, you finally say.  “But I need some time to think this through.”

 

“I’m so glad you said that!” the bearded man exclaims with a huge smile on his face. “Never ever believe anything anyone tells you without thinking it through for yourself! That’s one of the first rules of being an expert.  You are well on your way!”

 

“Thank you.  Hey, it’s late, and I need to get back.  Can I come back here sometime?  I have a feeling that there’s a lot more I can learn from you.”

 

“Are you kidding?  Of course you can…I’d be thrilled if you came by again.”

 

“Thanks again, take care.”

 

You head back home.  You’re looking forward to learning more about this interesting new outlook on testing.  You get home to your box and, thoughtfully, open a Twitter account.

 

 

To be continued….

Monday, January 11, 2016

A Fable: Thinking Out of the Box, part 1

Imagine that you have a big box that contains all of your professional wisdom.  Inside is all your knowledge and understandings, your habits and customs, behaviors and impulses about your QA work.  Inside you’ll find words: “requirements”, “validation”, “functionality”.  There are phrases: “best practices”, “compare actual results with expected results”.  And there are sayings you hold to be universal truths: “regression tests should be automated”, “bugs found earlier in the lifecycle are cheaper to fix”, “tests need to be traceable to requirements.”

 

Now, you transcend your box.  You are hovering in the air looking down at your box from above.  You look around.  Right next to your box you see other people’s boxes, and they look exactly like yours and have similar contents.  You feel happy and secure, like you’re in the right place at the right time, proud to be a member of this club.  But you keep looking around, a little further away, and you see a few other boxes that don’t quite look like yours.  You float over there to take a closer look.  Some of these boxes are much older and worn out, as if they have been filled and emptied many times over a long period of time.  Longer than your box has even existed.  Curious, you peek inside.

 

You can’t believe what you see!  Shock! Blasphemy!

 

“There are no such things as best practices”

 

Huh?

 

“QA is not about finding defects, but testing is about searching for information”

 

What!?

 

“Regression testing is usually a waste of time”

 

What on earth is this?!  Whose box am I in?  And how can it be that this is an OLD box, one with more years of experience than my own???

 

You’re scared.  You rush back to the safety of your own box, your own world.  You feel better in your own surroundings, but can’t stop thinking about that other box.  After all, everything you’ve ever known exists right here.  Could it be, possibly, that there exists professional wisdom out there that is different from your own??

 

To be continued…..

 

 

 

Wednesday, January 6, 2016

State of Testing Survey 2016

My friend Joel Montvelisky is conducting a survey for software testers.  According to his blog, the survey seeks to identify the characteristics, practices, and challenges facing the testing community, in the hope of shedding light and provoking a fruitful discussion toward improvement.  The survey goes live tomorrow; please check it out!

Wednesday, December 30, 2015

The Inefficiency of Gifts and Software Metrics

The Wall Street Journal (12/24/15, “If You’ve Bought No Presents Yet, These Wise Men Applaud You”) reports that from a purely economic point of view, holiday gift giving is a wasteful practice because it reallocates resources inefficiently.  On average, gift receivers would have been willing to spend on themselves only about 70% of the cost of the present that the gift giver has paid, leaving an inefficiency ratio near 30%.   It’s more cost-effective to give cash or gift cards, they argue, because the value of the cash gift is undeniable. To the receiver, the value of x dollars given is exactly x. (Usually.)
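The figures above reduce to simple arithmetic; here is a minimal sketch of the deadweight-loss calculation, using an illustrative $100 gift (the article reports only the average ~70% ratio, so the dollar amounts are hypothetical):

```python
# Toy deadweight-loss calculation for the WSJ gift example.
# The 0.70 ratio is the article's reported average; the $100 price is illustrative.
gift_cost = 100.00                       # what the giver paid
receiver_value = 0.70 * gift_cost        # what the receiver would have spent on themselves

inefficiency = gift_cost - receiver_value        # dollars of "wasted" value
inefficiency_ratio = inefficiency / gift_cost    # fraction of the price lost

print(f"${inefficiency:.2f} lost, ratio {inefficiency_ratio:.0%}")
```

By contrast, a cash gift has receiver_value equal to gift_cost, so the ratio collapses to zero, which is exactly the economists' argument for gift cards.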

 

Yet, the WSJ continues, not a single economist of the 54 they interviewed for the article heeded their own advice.  Every one of them both received and purchased gifts for their loved ones this holiday season.  It seems that despite the hard data opposing presents, the warm feelings of the holiday season take over.

 

I see this a little differently. Perhaps the 30% economic “inefficiency” can be considered an emotional surcharge built into the cost of the physical present.  How much more a person is willing to pay for a present, above what the receiver would have paid for it themselves, isn’t necessarily a real inefficiency that requires correction; instead, it’s a measurable attribute of the giver’s current emotional state.  Fluffy emotional stuff like love and guilt sometimes merges with hard data like economic efficiency ratios.

 

Which brings us to the tricky world of software metrics.

 

The traditional approach to measuring performance is heavily dependent on quantitative, numbers-and-formulas assessments.  Questions like “How many test cases did you write today?  How many bugs did you report?  How many tests did you run?  How long was the environment down?” typify the hard-data approach to software metrics.  However, there is a hidden inefficiency here, too.

 

Students of software testing will recognize that quantitative software metrics like the questions above are almost always subject to measurement dysfunction: the idea that when you measure particular aspects with the intention of improving them, they will improve, but only to the detriment of other important aspects that you may not be measuring.  Adding context-driven qualitative measures to a traditional metrics program may help.  Instead of depending only on the numbers, a qualitative system looks for an assessment based on a fuller story.  Having a fuller conversation with the test team may provide a deeper understanding of the project's progress and danger points.

 

Like gift giving, there is an emotional aspect to software metrics as well.  Pride, fear, anger, despair, overconfidence, the list goes on.  These aren’t inefficiencies, they are expected and a natural part of human endeavor.