How to Make the Switch to Test Automation

It’s happening! Either you’ve finally convinced upper management that it’s time to use automation in testing, or they’ve come to you with a mandate to automate. Hopefully, you’ve already laid the groundwork: automation isn’t a panacea, manual testing isn’t going away, headcount isn’t going down, and feature throughput isn’t going to skyrocket. If you haven’t, you need to set those expectations now. Either way, it’s an exciting and scary time, especially if you have little or no experience with automation. So now what?

You can ask for help

If you’re thinking you can get automated testing up, running and useful on your own, you may be right. But you’re more likely very wrong. Can you home-grow automation expertise? Yes. Do you have the time (years!)? Likely not. You need help, and admitting that can be hard, especially if you’ve built a solid reputation as a testing professional. For the best return on investment and probability of success in a reasonable time frame, you will need to hire either a full-time automation professional or the right consultant. Unless you’re an expert in automation, YOU AREN’T AN EXPERT! So put your ego aside and find the right help.

If I’m not an expert, how can I get the right help?

Excellent question! You may not be an automation expert, but you can figure out what your automation expert needs to bring to the table. Read blogs about automation, especially if they’re by automation engineers. You probably know people (either directly or indirectly) with whom you can have conversations about automation and what to look for, especially “red flags.” Get with your team and find out what you need (and don’t need) automation to do for you.

For my company and my team, we wanted:

  • Robust, low-maintenance tests that concentrated on the most critical parts of our systems.
  • Automated tests that would relieve testers of tedious and time-consuming tests.
  • An automation engineer who:
    • Shared our quality and testing philosophies.
    • Understood our automation goals.
    • Had done it at least once before, and either:
      • Had done it more than once and in different ways, OR
      • Had done it once and wanted to approach it differently this time.
    • Had a proven track record of mentoring testers in automation.

Are you asking the right questions?

To find the right candidate, you need to ask the right questions. Or do you? How do you get candidates to REVEAL what you want to know without directly asking? I like to use “Describe”, “Explain” or “Tell” requests rather than asking questions outright:

  • Explain the primary reason for automated tests.
  • Describe the benefits and pitfalls of automation.
  • Tell us about an automation achievement that stands out in your mind.
  • Explain what you would do differently if you had to do that project now.
  • Explain how you determine which tools to use.
  • Describe your plan for developing automated tests.
  • Explain the automation pyramid (if they haven’t yet mentioned it).
  • Explain how you would introduce manual testers to automation.
  • Describe your mentoring approach and how you will get testers excited about automation.

Miracles require a lot of work and preparation

So you’ve done your research, talked to your peers, defined your needs with your team, posted the position, interviewed the candidates, and found and hired the perfect one.

Congratulations! You’re home free!

Not exactly.

Hate to break it to you, but automated tests don’t just happen. And even if you manage to find someone who’s familiar with your vertical, she/he won’t be familiar with your specific products and code. You need to onboard them just like any other tester, AND onboard them just like any other developer. Then you need to work with them to develop a strategy, decide what work needs to be done first, investigate tools and approaches, do a Proof of Concept, and then, after all that, the work really begins!

At this point it’s worth mentioning again that YOU AREN’T THE EXPERT! They are. So you still need to keep your ego out of it. You need to be realistic…and then some…with timelines and milestones. You’re not getting a fully developed testing framework and 100% code coverage in phase 1. Or even by phase 5. Or even if you purchase an existing automation solution (because no matter how “universal” the “kit,” you’re still gonna need to customize it for your “ride”). You need to organize things based on your company’s and your team’s needs. Start with useful tests that eliminate pain points. Evaluate the return on investment (ROI) for your proposed automation efforts and determine which will give you the biggest bang for your buck. Always encourage and weigh the possibilities of solving tactical problems with strategic solutions, especially if the added effort is marginal.

Protect your investment

Whether you hire a permanent automation engineer or a consultant, you MUST give them the time to get the work done. The automation engineer is NOT a sprint test resource at this stage and, in my opinion, shouldn’t ever be considered as such. The job is to create, maintain and update the framework for automated tests, not to test sprint features. The framework will never become a solid and useful tool if your automation engineer is constantly pulled away to test sprint work. Do that, and not only will you not have a testing framework, but you’ll lose the automation engineer to a company that lets them do what they do best: build automation frameworks. And you may lose some of your best testing experience, too: manual testers who were once excited about getting the drudge testing out of the way so they could do some really good exploratory work, but who find themselves a year in, still doing the mind-numbing testing, with no solid framework and no light at the end of the tunnel.

Don’t ignore your testers!

Your testers are your expertise in testing your systems. Leverage that experience and use it to propel the automation forward. They complement the automation, because they know which tests should be automated (and which shouldn’t). The automation engineer mentors them in how to use the framework to create automated tests. If they don’t have what they need in the framework, they inform the automation engineer and she/he develops what they need. As their experience grows, they can get deeper into the framework to learn how it’s structured and how to expand upon it.

Evaluate

Remember that ROIs can change as the business changes and, subsequently, what gives the biggest bang for your buck will change too. Do not be surprised if your first few forays aren’t as successful as you want or need them to be. Channel The Hitchhiker’s Guide to the Galaxy and DON’T PANIC! Your initial test framework architecture didn’t work? Learn, pivot and change (also easier to do with a small, tactical goal than with a sprawling effort). You WILL SUCCEED if you persevere. You’ll look back and see that progress has been made, the framework is sorting itself out and stabilizing, your team is contributing tests for automation, and your ROI is now focused on more strategic goals than tactical ones.

How to Streamline Testing

How do you maintain the quality of your software while making your testing efforts more efficient? 

Back in my first post, I lamented the way that testing and code quality seemed to be little more than lip service in my prior employment experience and how I strove to change that perspective at my current company. I am very lucky to have the opportunity to attempt these changes and I feel that there have been many successes. Yet even with that opportunity, one thing doesn’t change: there is SO MUCH that needs to be done. So how do you maintain the quality of your software while making your testing efforts more efficient? Spoiler alert: there is no silver bullet. However, here are some things that help us streamline testing.

Planning

Having a plan for testing is essential to help streamline the process. It can be as informal as a checklist or as detailed as a complete list of tests, but defining the direction and scope of testing ahead of time is a huge time saver. Beware of putting too much effort into planning. The amount of planning should match the impact of the work. There is nothing more important than time, and spending thirty minutes filling out test planning paperwork for a ten-second test kills productivity, enthusiasm, and momentum.

Pull Requests and Code Review

Testers should be looking at the changes made to code, attempting to understand those changes, and asking questions when they don’t. This will help confirm whether the test plan is accurate or needs revision prior to testing. It also establishes a dialogue with developers that can reveal updated or missed requirements before they cause unnecessary delays in testing. As testers and developers build their relationships, testers improve their code understanding, which in turn improves their understanding of what and how to test.

Regression Testing First

As with planning, not every test effort will require regression testing. But when regression testing is part of the testing plan, do it first. That way, if you find that something that previously worked is broken, it’s as early in the process as you can make it. Too often, a majority of the testing time is spent testing the changes first, and then regression testing is done. This approach usually means that regression testing is short-changed. If regressions are found late in the cycle, there may not be enough time to correct them before release, leaving product owners no choice but to “accept” them to get their new feature released according to their timelines.

Multiple Testing Environments

This might seem obvious, but it’s surprising how frequently companies have only one, or even just part of one, test environment. Here’s a good return-on-investment (ROI) exercise to support more than one testing environment: if important (and potentially revenue-generating) features are delayed because they are stuck in the testing queue, do the investigating and find out how much that delay is costing the company to help make your point. Having the ability to test in parallel reduces queue time as well as time spent on test setup and context switching. Even a partial second environment can have a significant impact on streamlining testing.
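To make that ROI exercise concrete, here’s a back-of-the-envelope sketch in Python. Every figure and function name below is an illustrative assumption, not a real benchmark; plug in your own company’s estimates:

```python
# Rough cost-of-delay estimate for justifying a second test environment.
# All figures are made-up placeholders for illustration.

def quarterly_cost_of_delay(weekly_feature_value, weeks_stuck_in_queue,
                            features_per_quarter):
    """Revenue deferred while features wait in the single-environment queue."""
    return weekly_feature_value * weeks_stuck_in_queue * features_per_quarter

def payback_quarters(environment_cost, quarterly_delay_cost):
    """How many quarters of avoided delay pay for the new environment."""
    return environment_cost / quarterly_delay_cost

# Example: features worth $5,000/week each, stuck 2 weeks, 6 features/quarter.
delay_cost = quarterly_cost_of_delay(5000, 2, 6)  # 60000
quarters = payback_quarters(30000, delay_cost)    # 0.5
```

If a $30,000 environment pays for itself in half a quarter of avoided queue time, the math makes your argument for you.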

“Tetris-ing”

Yes, this is a reference to the ’80s block-stacking game, but the analogy is valid. This tactic is most beneficial when you have more than one test environment, but it can still apply if you have just one. Not every feature will require a complete environment for testing. This is especially true for systems that have both front-end and back-end components. Identifying the features that do not require an end-to-end test environment and deploying them simultaneously means that two (or sometimes more) features can be tested at the same time. And as above, parallelism streamlines testing.

“NOT” Testing

(Unconventional Testing)

Production Testing – Don’t shake your head like that. Read a bit first. This may not be possible for some companies, but for others, it should be seriously considered. Testing takes time and time is money. All testing has a cost that should provide a “Return on Investment”. If the risk is very low, it can be more cost-effective to roll out a change to a single production machine, a small group of production machines, or the whole system and then monitor the effects than to configure a testing environment and run tests. An example would be an ETL query that has been modified to include or exclude a type or status in the result set. New integrations and Proof of Concept (POC) work also fall into this category since the work is essentially “on spec,” with the expectation that it will provide benefits in the long run.
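For the ETL example above, the kind of before-and-after sanity check that makes a low-risk production rollout defensible might look like the following Python sketch, using the standard library’s sqlite3. The table, statuses and queries are hypothetical stand-ins:

```python
import sqlite3

# Hypothetical ETL change: the query is modified to exclude 'cancelled' rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "active"), (2, "cancelled"), (3, "active")])

old_query = "SELECT id FROM orders"
new_query = "SELECT id FROM orders WHERE status != 'cancelled'"

old_ids = {row[0] for row in conn.execute(old_query)}
new_ids = {row[0] for row in conn.execute(new_query)}

# The only rows dropped should be the ones the change intended to exclude.
dropped = old_ids - new_ids
```

Running the same diff against the output of a single production machine, then monitoring, is usually far cheaper than standing up a full test environment for a change this small.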

Over-The-Shoulder Testing – Or you could call it “Developer-Tester Peer Testing”. Test environments don’t always have the resources that production environments do, especially for data-heavy jobs. When this is the case, we often do a one-on-one code demonstration, where the developer walks through the existing and updated process(es) with the tester and must provide before-and-after data, either to show that it hasn’t changed, or to show that it has changed according to the new requirements. This type of testing should be planned: the tester should posit various use cases, scenarios or possible issues (missing data, bad data, no connection, no directory, etc.) and the developer should be able to show or demonstrate the code that addresses them. This type of testing is where developer-tester relationships are built.

Developer Testing – And before you start yelling: no, I am not advocating or in any way suggesting that developers can replace testers.  But testers should realize that good developers do test, and great developers test a lot. They write unit tests, they generate files and do differentials on the outputs, and they validate data inputs and outputs.  And if a tester doesn’t know that those tests were performed, they will test them again. This is a duplication of effort. We advocate developers documenting their unit tests and any testing they’ve done, so instead of repeating those tests, testers “validate” them.  Or they can “reject” them (after a discussion) if they feel the testing doesn’t cover what was planned. They then perform any additional planned tests (like regression tests). The objective is to reduce the duplication of testing effort.

Automation

Or, more accurately, testing with tools. All testers use tools to test; some use more complicated tools than others. Look for opportunities to remove manual and repetitive test operations, whether that means generating test data, validating test data, or full-blown automation suites. If you are making your first foray into automated test suites, resist the urge to begin with your user interface, especially if it’s still evolving. You’ll end up spending more time maintaining tests than using them. Instead, look to validate your APIs and other backend systems. The initial investment is higher, but the longevity and robustness of your tests will provide a much better ROI.
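As a sketch of what starting at the API level can look like, here’s a minimal response-shape check in Python. The expected fields and the stub response are assumptions for illustration; in practice the stub would be a parsed HTTP response body:

```python
# Validate the shape of an API response before asserting on its values.
# EXPECTED_FIELDS and the stub below are hypothetical, not a real contract.

EXPECTED_FIELDS = {"id": int, "status": str, "results": list}

def shape_problems(response, expected=EXPECTED_FIELDS):
    """Return a list of problems; an empty list means the shape is valid."""
    problems = []
    for field, ftype in expected.items():
        if field not in response:
            problems.append(f"missing field: {field}")
        elif not isinstance(response[field], ftype):
            problems.append(f"wrong type for field: {field}")
    return problems

stub_response = {"id": 42, "status": "ok", "results": []}
```

Shape checks like this survive UI churn, which is a big part of why backend-first automation tends to age better.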

Central Library

In our business, we have many customers making requests and receiving responses from us. Most use a standard format, but quite a few need custom requests, custom responses, or both. We maintain sample requests and expected responses in a central area. Developers, testers and even account managers use these samples to baseline existing code before making changes, to verify that code changes haven’t negatively affected expected responses, or to troubleshoot. Having a central repository saves time and reduces duplication of data. Your business may also have common data or processes shared by many groups and could benefit from a central repository.
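A central library like this pays off fastest when comparing a live response against its stored sample is trivial. Here’s a minimal Python sketch, with invented bid-response fields standing in for real samples:

```python
# Compare a live response against a sample stored in the central library.
# The field names and values here are invented for illustration.

def changed_keys(baseline, response):
    """Top-level keys whose values differ between baseline and live response."""
    return {k for k in set(baseline) | set(response)
            if baseline.get(k) != response.get(k)}

baseline = {"bid": 1.25, "currency": "USD", "adomain": "example.com"}
live     = {"bid": 1.30, "currency": "USD", "adomain": "example.com"}
```

A developer baselining before a change, a tester verifying after it, and an account manager troubleshooting can all lean on the same one-line comparison.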

Test Management System

This is an extension of the central repository idea that is specific to testing. We use a test management system to house all of our tests, whether automated or manual, in one location. Tests can be curated and grouped by descriptions and keywords that map to coverage areas. This means that regression testing can be limited to only the areas affected by the changes under test, rather than running a complete regression every time. One copy means only one place where tests need to be updated when features change, and only one place for new team members to look when searching for test documentation.
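The keyword-to-coverage-area mapping described above boils down to a simple set-intersection filter. A sketch, assuming each test record carries the areas it touches (the test names and area keywords are hypothetical):

```python
# Select a regression suite limited to the areas affected by a change.
# Test names and area keywords are illustrative placeholders.

TESTS = [
    {"name": "login_basic",         "areas": {"auth"}},
    {"name": "report_export",       "areas": {"reporting", "etl"}},
    {"name": "bid_response_format", "areas": {"bidding"}},
]

def regression_suite(changed_areas, tests=TESTS):
    """Keep only the tests whose coverage areas overlap the change."""
    return [t["name"] for t in tests if t["areas"] & set(changed_areas)]
```

A change tagged only "etl" pulls in just the export test, not the whole catalog, which is exactly the time savings the test management system is there to provide.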

The examples above all share one or more common threads: return on investment (ROI, or as I like to ask, “Is the juice worth the squeeze?”), reducing or eliminating duplication of effort, and pragmatism. This approach also offers employees ways to embrace many of the concepts mentioned in my previous post on the Quality Mindset, including heightening awareness, extending and encouraging trust and ownership, and accepting risk. Each company’s needs and circumstances are different, so feel free to change up what you’ve read here or to use it as a springboard to come up with your own method of streamlining your testing.

To Have A Quality Mindset, You Need A Quality Environment

In Part 2 of the Quality Series, why a Quality Mindset requires a Quality Environment…And how to build it.

Karl Hentschel

“Quality is never an accident. It is always the result of intelligent effort.”

– John Ruskin (1819-1900)

“Quality is not a consequence of following some set of behaviors. Rather, it is a prerequisite and a mindset you must have BEFORE you decide what you are setting out to do.”

– Edwin Catmull (1945- )

In my previous post, I explained how Bidtellect envisioned changing “Quality Assurance” from a step in the software development process into a shared Quality Mindset. But a Quality Mindset requires a Quality Environment in which to work. The quotes above led me to many books, two of which, Good to Great by James C. Collins and Creativity, Inc. by Edwin E. Catmull, were instrumental in helping define a quality working environment.

The concepts below are intended for everyone in a company, but especially for company leaders. Note that I didn’t say “manager” or “coordinator” or “director” (although it really helps if leaders with those titles buy in to the quality work environment concept). Company leaders are the people in your organization to whom others look because they get the work done.

1. Develop and Support Competence

This may seem like a no-brainer, but in order for people to have a Quality Mindset, they need to know what they’re doing.  That means, they either bring experience to the company, the company provides experience to the person, or a combination of the two.  Don’t ask a plumber to build a brick wall or a mason to wire the lights, unless you plan on training them properly.

2. Model Discipline

But not the kind you’re probably thinking of right now.  I’m talking about internal discipline, not external discipline.  Not a manager, a director or vice president enforcing rules, but rather individuals doing the work to the best of their ability.  (Notice I said “work”, not “job”. This is important for later.) Discipline is (and should be) a modeled behavior and takes the form of disciplined thought and disciplined action.

3. Extend and Build Trust

Disciplined people, engaging in disciplined thought and taking disciplined actions don’t need to be managed.  They will do what needs to be done without needing to be told, if you trust them to do it.

4. Extend and Encourage Ownership

Ownership and trust are kind of “chicken-and-egg,” because each fosters and reinforces the other. I have a standing rule for my Quality team: you don’t have to ask permission to take responsibility. Responsibility doesn’t necessarily mean you will do the work, but it is expected that if you take ownership, you are responsible for ensuring it gets done. This ownership isn’t limited by your “job” but rather by the work that needs to be done.

5. Heighten Understanding

A leader is responsible for heightening people’s awareness of what they do not know. Do not confuse this with competence (although for junior people, it may be applicable). This is more about making people aware of what occurs outside of their competencies. Get people thinking about “the company” as a whole, rather than just the testers or the developers or the technology team, etc.

6. Show (Frequent) Appreciation

Appreciation is often regarded as a leadership responsibility, and it most certainly is. But that doesn’t mean leaders are the only ones who should recognize individual or group accomplishments, or show appreciation for hard work, diligence or tenacity. Leaders should model appreciation and encourage it amongst team members.

7. Clear Obstacles

Bureaucracy is a reaction to incompetence and lack of discipline.  It is a leader’s responsibility to get the right people, facilitate the right chemistry, and develop and support them.  Once they engage in discipline and take ownership, the controls can be loosened and the bureaucracy decreased or removed.

8. Accept Risk

… and with it the mess it creates.  If you trust your people and allow them ownership, there will be times when they will get it wrong.  You should NOT be looking for ways to keep people from making mistakes. A true leader will enable people to resolve problems WITHOUT BLAME.  This reinforces the trust and ownership (above) and KEEPS FEAR AT BAY. People who are afraid they will get blamed, afraid they will become unemployed or afraid of ignominy DO NOT trust or take ownership…nor do they create their BEST work in the end.

Why I Hate the Term “Quality Assurance”

I didn’t start out hating the term “Quality Assurance,” but “Quality” had to transform from gatekeeper to integral collaborator.

Karl Hentschel

I didn’t start out hating the term “Quality Assurance.” When I first entered the software testing field, I was excited about it. I enjoyed “finding the bugs” and keeping them from getting to the end users. For years, I was proud to say I worked in software “Quality Assurance” and would eagerly bend the ear of whoever asked, “What is Quality Assurance?” Like the song says, don’t get me started, I’ll tell you everything I know. You have been warned.

So, why the change?  What transpired to change enthusiasm to loathing?  First, it’s not the work. I love the role of tester and the mental challenges that are intrinsic to its assiduous application in software development.  But through the years, it became apparent that there were some real problems with the concept of “Quality Assurance” as it relates to software development.

The Inherent Problem:

Coding ≠ Manufacturing

When I first began testing, the “waterfall” process was still the dominant method for producing software.  Stakeholders had ideas for Project XYZ, built a lengthy and specific business requirement that was then reviewed by a technology group that also built a lengthy and specific technical requirement, and then development began.  When development was complete, “Quality Assurance” was notified to test Project XYZ and a test plan, test scripts, test reports, and sign-off were needed. Often, this was followed by the dreaded, “Oh, by the way, the deadline for the project was yesterday, so get it done as quickly as you can.”

Even today, many places treat software development just like any other manufacturing process, and much of the “conventional wisdom” is sourced from those processes, including “Quality Assurance”. And that’s the inherent problem. In manufacturing, every measure has a defined and accepted methodology and common terminology. Code doesn’t work that way.

So how do you measure code? Most would say, “By the expected outputs of the given inputs.” And they would be correct. They would also be incorrect. Unlike manufacturing bicycle parts, writing code is a craft, an art more akin to painting or sculpture than to turning ¼” x 20 threads on a lathe. There is no minimum specification for the number of bytes, keystrokes, lines of code or time elapsed for the code to be “acceptable”. There is no maximum limit for the code to be considered “unacceptable”. In fact, given three different industries as examples (credit reporting, retail sales and native advertising), a query for information that takes two seconds to process might be considered unrealistic in the first, perfectly acceptable in the second, and thoroughly unacceptable in the last.

I hear the cries now.  “Apples and Oranges! You can’t compare two things that are essentially different!” And yet the software industry tries every day to do just that, promoting processes and certifications that promise to “standardize” a field that by its very nature is a menagerie of different approaches and solutions to vastly different fields.

Too often, “Quality Assurance” is treated as a “step” or, more accurately, a hurdle in the software development process – even when the “agile” process is purportedly being used. Testing must be completed and testers are accountable for it, but many companies frequently fail to provide the appropriate project orientation for the testers, including sufficient time to prepare the test plan, secure test resources (both personnel and equipment), execute the test scripts, report on the results, and address the issues found.

The end result is a recipe for failure.  A largely siloed group, tasked as a gatekeeper or policeman of code, can’t “assure” anything because they have no authority over their responsibilities.  Any issues they find, barring complete and catastrophic failure, are usually “accepted” by the business trying to minimize project time overruns and placed in the technology backlog for review at a later date.  They become the perfect scapegoat. The product is poor because “Quality Assurance” didn’t find the bugs. The product is late because “Quality Assurance” didn’t finish testing before the deadline. Thus is born the “Us” vs. “Them” paradigm, leading to the inevitable finger-pointing and blame-seeking.

Re-Defining and De-Siloing Quality

So when I began working at Bidtellect as the Director of Quality (NOT Quality Assurance), I made it my mission to correct the shortcomings I had previously observed.  The Quality team is part of the Technology team. We consciously promote the term “Quality” (NOT Quality Assurance). We emphasize the information aspect of testing.

The team shifted left to participate in both the technical requirements and the business requirements phases.  We also shifted right to provide documentation, training and go-to-market support, as well as provide first level troubleshooting of production issues reported by both internal and external users.

Many testers will be very uncomfortable with those last statements.  “We already have too much to do and not enough time in which to do it and you’re shifting left AND right?”

Yes.

And here’s why.

No Longer the Gatekeeper

In Quality, the focus is on information. We question information provided by stakeholders (business requirements), the Technology team (technical requirements), and third-party collaborators. While testing, we find new questions to ask ourselves, developers and stakeholders. We gather information from our testing, from subject matter experts, from “veterans” (people with the company a long time), from internal and external documentation, and from direct and indirect feedback from our internal and external end users. And we disseminate the information we gather among our team, with developers, with stakeholders, and with trainers, marketing and end users.

This means that we are not siloed from the process, but integral to it. This approach meshes incredibly well with an agile process. We can begin catching issues in the business requirements phase and continue through the technical requirements phase. We are fully prepared for the required testing and are able to quickly report the issues (technical or business) that we find. Because of the information collected, we are well suited to assist in documentation and go-to-market tasks and are able to quickly determine whether a user issue is a defect or simply a misunderstanding of a feature. Both cases provide yet more information that assists with future business and technical requirements, documentation, and go-to-market readiness.

In this model, the Quality team is not a gatekeeper or policeman, but a collaborator.  The responsibility for the quality of the work delivered is shared by all involved parties, from the business requirement until the feedback from internal and external end users.  Communication is encouraged and a “We” environment is cultivated. Quality becomes a mindset, not a step.