Why I Hate the Term “Quality Assurance”

I didn’t start out hating the term “Quality Assurance,” but “Quality” had to transform from gatekeeper to integral collaborator.

Karl Hentschel

I didn’t start out hating the term “Quality Assurance.” When I first entered the software testing field, I was excited about it. I enjoyed “finding the bugs” and keeping them from reaching end users. For years, I was proud to say I worked in software “Quality Assurance” and would eagerly bend the ear of whoever asked, “What is Quality Assurance?” Like the song says, don’t get me started; I’ll tell you everything I know. You have been warned.

So, why the change? What transpired to change enthusiasm to loathing? First, it’s not the work. I love the role of tester and the mental challenges that are intrinsic to its assiduous application in software development. But through the years, it became apparent that there were some real problems with the concept of “Quality Assurance” as it relates to software development.

The Inherent Problem: Coding ≠ Manufacturing

When I first began testing, the “waterfall” process was still the dominant method for producing software. Stakeholders had ideas for Project XYZ and built a lengthy and specific business requirement, which was then reviewed by a technology group that built an equally lengthy and specific technical requirement, and then development began. When development was complete, “Quality Assurance” was notified to test Project XYZ and to deliver a test plan, test scripts, test reports, and sign-off. Often, this was followed by the dreaded, “Oh, by the way, the deadline for the project was yesterday, so get it done as quickly as you can.”

Even today, many places treat software development just like any other manufacturing process, and much of the “conventional wisdom,” including “Quality Assurance,” is sourced from those processes. And that’s the inherent problem: in manufacturing, every measure has a defined and accepted methodology and a common terminology.

So how do you measure code? Most would say, “By the expected outputs of the given inputs.” And they would be correct. They would also be incorrect. Unlike manufacturing bicycle parts, writing code is a craft, an art more akin to painting or sculpture than to turning ¼” x 20 threads on a lathe. There is no minimum specification for the number of bytes, keystrokes, lines of code, or time elapsed for the code to be “acceptable.” There is no maximum limit beyond which the code is considered “unacceptable.” In fact, given three different industries (credit reporting, retail sales, and native advertising), a query that takes two seconds to process might be considered unrealistic in the first, perfectly acceptable in the second, and thoroughly unacceptable in the last.
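
To make that concrete, here is a minimal sketch in Python of that “expected outputs of the given inputs” check, with the acceptability bar made an explicit, per-industry setting. The industry names and latency numbers are hypothetical, illustrative assumptions, not real specifications.

import time

# Hypothetical per-industry latency budgets, in seconds. These numbers are
# illustrative assumptions only: what counts as "fast enough" is a business
# judgment, not a property of the code itself.
LATENCY_BUDGETS = {
    "credit_reporting": 30.0,   # a two-second query would be an unrealistic target here
    "retail_sales": 2.0,        # two seconds is perfectly acceptable
    "native_advertising": 0.1,  # two seconds is thoroughly unacceptable
}

def check_query(run_query, inputs, expected, industry):
    """Judge a query on correctness AND on the latency bar for its industry."""
    start = time.monotonic()
    actual = run_query(inputs)
    elapsed = time.monotonic() - start

    correct = actual == expected                        # expected outputs of the given inputs
    fast_enough = elapsed <= LATENCY_BUDGETS[industry]  # context-dependent acceptability
    return correct, fast_enough, elapsed

The correctness check never changes; only the bar for “fast enough” does, which is exactly why a single, industry-wide standard is so hard to pin down.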

I hear the cries now. “Apples and Oranges! You can’t compare two things that are essentially different!” And yet the software industry tries every day to do just that, promoting processes and certifications that promise to “standardize” a field that by its very nature is a menagerie of different approaches and solutions to vastly different fields.

Too often, “Quality Assurance” is treated as a “step” or, more accurately, a hurdle in the software development process – even when the “agile” process is purportedly being used. Testing must be completed and testers are accountable for it, yet many companies fail to give testers the appropriate project orientation, including sufficient time to prepare the test plan, secure test resources (both personnel and equipment), execute the test scripts, report on the results, and address the issues found.

The end result is a recipe for failure. A largely siloed group, tasked as a gatekeeper or policeman of code, can’t “assure” anything because they have no authority over their responsibilities. Any issues they find, barring complete and catastrophic failure, are usually “accepted” by the business trying to minimize project time overruns and placed in the technology backlog for review at a later date. They become the perfect scapegoat. The product is poor because “Quality Assurance” didn’t find the bugs. The product is late because “Quality Assurance” didn’t finish testing before the deadline. Thus is born the “Us” vs. “Them” paradigm, leading to the inevitable finger-pointing and blame-seeking.

Re-Defining and De-Siloing Quality

So when I began working at Bidtellect as the Director of Quality (NOT Quality Assurance), I made it my mission to correct the shortcomings I had previously observed. The Quality team is part of the Technology team. We consciously promote the term “Quality” (NOT Quality Assurance). We emphasize the information aspect of testing.

The team shifted left to participate in both the technical requirements and the business requirements phases. We also shifted right to provide documentation, training and go-to-market support, as well as provide first level troubleshooting of production issues reported by both internal and external users.

Many testers will be very uncomfortable with those last statements. “We already have too much to do and not enough time in which to do it and you’re shifting left AND right?”

Yes.

And here’s why.

No Longer the Gatekeeper

In Quality, the focus is on information. We question the information provided by stakeholders (business requirements), by the Technology team (technical requirements), and by third-party collaborators. While testing, we often find new questions to ask ourselves, developers, and stakeholders. We gather information from our testing, from subject matter experts and “veterans” (people who have been with the company a long time), from internal and external documentation, and from direct and indirect feedback from our internal and external end users. And we disseminate the information we gather among our team, with developers, with stakeholders, and with trainers, marketing, and end users.

This means that we are not siloed from the process, but integral to it. This approach meshes incredibly well with an agile process. We can begin catching issues in the business requirements phase and continue through the technical requirements phase. We are fully prepared for the required testing and are able to quickly report the issues (technical or business) that we find. Because of the information collected, we are well suited to assist in documentation and go-to-market tasks and are able to quickly determine whether a user issue is a defect or simply a misunderstanding of a feature. Both cases provide yet more information that assists with future business and technical requirements, documentation, and go-to-market readiness.

In this model, the Quality team is not a gatekeeper or policeman, but a collaborator. The responsibility for the quality of the work delivered is shared by all involved parties, from the business requirement until the feedback from internal and external end users. Communication is encouraged and a “We” environment is cultivated. Quality becomes a mindset, not a step.

The Native Holiday Infographic You Need Now

‘Tis the Shopping Season…

 

If planning your holiday campaigns feels overwhelming – fear not. Arm yourself with the stats that matter for the most up-to-date programmatic strategy. The focus of this Infographic is the most recent data on holiday retail spend, holiday shopper patterns, and content marketing, including mobile and video. Bonus: scroll down for 4 Key Takeaways you can implement now.

Retail Holiday Spend 2018

 

Total Holiday Spend

Total U.S. holiday retail spending increased 5.4% to $998.32 billion in 2018

November + December

Total U.S. online sales in November and December 2018 reached $122 billion, an increase of 17.4% from 2017

Trillion Dollar 2019

eMarketer predicts that the 2019 holiday season will see healthy US retail spending growth of 3.7% to $1.035 trillion
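
As a quick back-of-the-envelope check on those growth figures, the short Python sketch below uses only the numbers quoted in this section (the variable names are ours):

spend_2018 = 998.32                  # total U.S. holiday retail spend, in billions of USD
implied_2017 = spend_2018 / 1.054    # backing out the 2017 base from the 5.4% growth
predicted_2019 = spend_2018 * 1.037  # applying eMarketer's forecast 3.7% growth for 2019

print(round(implied_2017, 2))    # ~947.17, i.e. roughly $947 billion in 2017
print(round(predicted_2019, 2))  # ~1035.26, i.e. roughly the $1.035 trillion forecast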

Black Friday and Cyber Monday

Black Friday Weekend

About 18% of shoppers consolidate all of their shopping to the Black Friday-to-Cyber Monday period

Billion Dollar Weekend

Black Friday 2018 (Thanksgiving Day plus Black Friday) raked in $9.9 billion in online sales – a 19.7% increase over 2017

Mobile Revenue

Mobile accounted for 34% of Black Friday weekend revenue (26.3% smartphones, 7.7% tablets)

Online Only

63% of Black Friday weekend shoppers spent time shopping online – researching, browsing, and purchasing

Cyber History

Cyber Monday was the heaviest online spending day in history, with a reported $7.87 billion

Content Matters to Holiday Sales

Research First

53% of holiday shoppers say they always do research before they buy to ensure they are making the best possible choice

Last Minute Hunt

51% of Last-Minute Shoppers said they weren’t certain where they wanted to buy, or they had multiple retailers in mind, when they started shopping

Quick Gifters

48% of shoppers want to get their shopping done as quickly as possible.

Content First

Content makes consumers 131% more likely to buy.

Mobile is the Move

Higher CTR

According to data pulled from Bidtellect’s platform, in Q4 2018 Mobile CTR was 150% higher than Desktop and 19.2% higher than Tablet

Shopping In-Store

72% of Black Friday weekend shoppers used their mobile device to shop and/or browse, with 52% using their mobile device while in store

Last Minute Shoppers

65% of last-minute (two days before Christmas holiday) shoppers used their mobile device to shop and/or browse, with 43% using their mobile device while in store

Target Accordingly

67% of smartphone users are more likely to purchase from companies whose mobile sites or apps customize information to their location.

Don’t Forget Video

 

Click-to-Play Engagement

In-Feed Click-to-Play Video engagement grew 143% from 2017 to 2018

Video Inspo

Two Thirds (⅔) of shoppers say online video has given them ideas and inspiration for their purchases

The Top 4 Takeaways

Consider the Entire Customer Journey

Holiday shoppers might start early (before November) and shop all the way through December, start browsing online and then buy more in store, or read articles for ideas and then buy a month later… The possibilities are endless. So target and retarget using a multichannel approach.

Mobile Matters

Incorporate mobile into your strategy. Do it. Also remember that the buy-online, pick-up-in-store option is gaining momentum.

Go Native

The best way to distribute your content is with Native. Simple as that.  Our Top 3 KPIs for Retail Advertisers in 2018 were: 

  1. Drive Traffic to Blog Content
  2. Offer-Driven Creative (Like a Promotion)
  3. Drive Sales

Content. Content. Content.

Don’t shout at your shoppers; inspire them. Entice them. Offer articles, blog posts, and videos with ideas and inspiration to make their shopping easier.

The Ultimate Guide to Black Friday 2019

Planning campaigns for the busiest shopping day of the year need not be overwhelming (we promise!). We put together everything you need to know as you prepare for Black Friday 2K19, including best practices, strategies, and need-to-know stats.

Want more? Subscribe to our monthly newsletter.
