
Why I Hate the Term “Quality Assurance”
I didn’t start out hating the term “Quality Assurance.” But over the years, I learned that “Quality” has to transform from gatekeeper to integral collaborator.
Karl Hentschel
I didn’t start out hating the term “Quality Assurance.” When I first entered the software testing field, I was excited about it. I enjoyed “finding the bugs” and keeping them from getting to the end users. For years, I was proud to say I worked in software “Quality Assurance” and would eagerly bend the ear of whoever asked, “What is Quality Assurance?” Like the song says, don’t get me started; I’ll tell you everything I know. You have been warned.
So, why the change? What transpired to change enthusiasm to loathing? First, it’s not the work. I love the role of tester and the mental challenges that are intrinsic to its assiduous application in software development. But through the years, it became apparent that there were some real problems with the concept of “Quality Assurance” as it relates to software development.
The Inherent Problem: Coding Is Not Manufacturing
When I first began testing, the “waterfall” process was still the dominant method for producing software. Stakeholders had ideas for Project XYZ and built a lengthy, specific business requirement, which was then reviewed by a technology group that built an equally lengthy and specific technical requirement, and only then did development begin. When development was complete, “Quality Assurance” was notified to test Project XYZ, and a test plan, test scripts, test reports, and sign-off were needed. Often, this was followed by the dreaded, “Oh, by the way, the deadline for the project was yesterday, so get it done as quickly as you can.”

Even today, many places treat software development just like any other manufacturing process, and much of the “conventional wisdom” is sourced from those processes, including “Quality Assurance.” And that’s the inherent problem: in manufacturing, every measure has a defined and accepted methodology and a common terminology.
So how do you measure code? Most would say, “By the expected outputs of the given inputs.” And they would be correct. They would also be incorrect. Unlike manufacturing bicycle parts, writing code is a craft, an art more akin to painting or sculpture than to turning ¼″ x 20 threads on a lathe. There is no minimum specification for the number of bytes, keystrokes, lines of code, or time elapsed for the code to be “acceptable.” There is no maximum limit for the code to be considered “unacceptable.” In fact, given three different industries (credit reporting, retail sales, and native advertising), a query for information that takes two seconds to process might be considered unrealistic in the first, perfectly acceptable in the second, and thoroughly unacceptable in the last.
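To make that concrete, here is a minimal, hypothetical sketch in Python of what “measuring by the expected outputs of the given inputs” looks like in practice. The latency budgets, function names, and numbers are illustrative assumptions, not drawn from any real standard or codebase; the point is that the correctness check is mechanical, while the “acceptable” threshold is a context-dependent business judgment.

```python
import time

# Hypothetical per-industry latency budgets (in seconds) for the same query.
# The "right" number is a business judgment, not a universal specification.
LATENCY_BUDGETS = {
    "credit_reporting": 0.5,    # two seconds would be unrealistic here
    "retail_sales": 2.0,        # two seconds is perfectly acceptable
    "native_advertising": 0.1,  # two seconds is thoroughly unacceptable
}

def check_query(run_query, inputs, expected_output, industry):
    """Assert correctness (expected output for the given inputs) and
    timeliness (latency within this industry's context-dependent budget)."""
    start = time.perf_counter()
    actual = run_query(inputs)
    elapsed = time.perf_counter() - start

    assert actual == expected_output, f"wrong output: {actual!r}"
    budget = LATENCY_BUDGETS[industry]
    assert elapsed <= budget, (
        f"took {elapsed:.3f}s, over the {budget}s budget for {industry}"
    )

def demo_query(inputs):
    time.sleep(0.2)  # simulate a query that takes 200 ms
    return sorted(inputs)

check_query(demo_query, [3, 1, 2], [1, 2, 3], "retail_sales")  # passes
# check_query(demo_query, [3, 1, 2], [1, 2, 3], "native_advertising")
# ...would fail: 200 ms is over the 0.1 s budget for that context.
```

The same 200-millisecond response passes in one context and fails in another; the harness itself is trivial, which is exactly why a universal “standard” for acceptability is so elusive.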
I hear the cries now: “Apples and oranges! You can’t compare two things that are essentially different!” And yet the software industry tries every day to do just that, promoting processes and certifications that promise to “standardize” a field that by its very nature is a menagerie of different approaches and solutions to vastly different problems.
Too often, “Quality Assurance” is treated as a “step” or, more accurately, a hurdle in the software development process – even when the “agile” process is purportedly being used. Testing must be completed and testers are held accountable for it, yet many companies fail to provide the appropriate project orientation for the testers: sufficient time to prepare the test plan, adequate test resources (both personnel and equipment), and time to execute the test scripts, report on the results, and address the issues found.
The end result is a recipe for failure. A largely siloed group, tasked as a gatekeeper or policeman of code, can’t “assure” anything, because it has no authority to match its responsibilities. Any issues it finds, barring complete and catastrophic failure, are usually “accepted” by a business trying to minimize schedule overruns and placed in the technology backlog for review at a later date. Testers become the perfect scapegoat. The product is poor because “Quality Assurance” didn’t find the bugs. The product is late because “Quality Assurance” didn’t finish testing before the deadline. Thus is born the “Us” vs. “Them” paradigm, leading to the inevitable finger-pointing and blame-seeking.
Re-Defining and De-Siloing Quality

So when I began working at Bidtellect as the Director of Quality (NOT Quality Assurance), I made it my mission to correct the shortcomings I had previously observed. The Quality team is part of the Technology team. We consciously promote the term “Quality” (NOT Quality Assurance). We emphasize the information aspect of testing.
The team shifted left to participate in both the business requirements and the technical requirements phases. We also shifted right to provide documentation, training, and go-to-market support, as well as first-level troubleshooting of production issues reported by both internal and external users.
Many testers will be very uncomfortable with those last statements. “We already have too much to do and not enough time in which to do it and you’re shifting left AND right?”
Yes.
And here’s why.
No Longer the Gatekeeper
In Quality, the focus is on information. We question information provided by stakeholders (business requirements), by the Technology team (technical requirements), and by third-party collaborators. While testing, we often find new questions to ask ourselves, the developers, and the stakeholders. We gather information from our testing, from subject matter experts and “veterans” (people who have been with the company a long time), from internal and external documentation, and from direct and indirect feedback from our internal and external end users. And we disseminate the information we gather among our team, with developers, with stakeholders, and with trainers, marketing, and end users.
This means that we are not siloed from the process, but integral to it. This approach meshes incredibly well with an agile process. We can begin catching issues in the business requirements phase and continue through the technical requirements phase. We are fully prepared for the required testing and are able to quickly report the issues (technical or business) that we find. Because of the information collected, we are well suited to assist with documentation and go-to-market tasks and can quickly determine whether a user issue is a defect or simply a misunderstanding of a feature. Both cases provide yet more information that assists with future business and technical requirements, documentation, and go-to-market readiness.
In this model, the Quality team is not a gatekeeper or policeman, but a collaborator. Responsibility for the quality of the work delivered is shared by all involved parties, from the business requirements through the feedback from internal and external end users. Communication is encouraged and a “We” environment is cultivated. Quality becomes a mindset, not a step.