
Blog July 23, 2020

Three Factors That Will Change How We Think About Automated Testing

Jason Finns
Author

Time to Market

Do you recall the good ole days of release weekends? You know, how software delivery used to work? Remember starting at 2:00 AM on Saturday, making a list of all the good you’ve espoused throughout your life, preparing sacrifices you plan to offer in hopes that the release goes well, and the system is back online sometime before Monday at 8:00 AM? If your answer is "What do you mean, remember? We have one of those coming up in two weeks!" then you probably haven't fully realized the majesty that is a fully automated CI/CD (Continuous Integration/Continuous Deployment) pipeline.

Now, your answer may be that you have a-rockin' Jenkins job that kicks off daily at 2:00 AM, builds your code, and runs your unit tests. If so, 2006 called to congratulate you. While this is a noble place to start, it is nowhere near the maturity required to deliver 200 times more frequently than your peers. According to the State of DevOps Report, that is the rate at which high-performing DevOps organizations release software compared to their lower-performing peers.

By the time you finish reading this article Amazon will have performed several hundred releases across their service offerings. A few years ago they were reporting that a release occurred every second. I can only imagine how frequently they release now.

Speaking of Amazon, give me a few minutes and I’ll spin up an environment on a leading cloud provider and have a “Hello World” site live and accessible. That’s right, minutes. Gone are the days of waiting weeks for hardware to be ordered, racked, secured, patched, and made available only to discover you need more horsepower.

The time between idea conception and delivery has never been shorter, and it shows no signs of slowing. But this doesn't matter because your competitors are as slow at delivering software as you are, right? I wonder if Nokia thought that when it had the cell phone market cornered while Apple was off inventing this little thing called the iPhone, or whether most retailers thought of Amazon as a website that sold books with little profit to show for it.

You may also be saying that your almighty business sponsor doesn't require this kind of feature delivery. That is a fair point. My question to you is: how long would it take you to deploy a single-line code change (log statements and other benign options aside) to production with near-absolute certainty that nothing would break? If your answer is measured in hours or more, you are at risk of competitors beating you to market. Explaining your bottleneck this way may garner some favor.

Software systems are complex. A Boeing 787 has 6.5 million lines of code. Now, human lives may not depend on your method of properly handling every exception, but your customers probably do, and that drives your business. It is an illusion to believe that we can consistently and frequently deliver features to production that drive our business when there is a "what will this break" question hanging in the air with every change. Oh, and what about that scary code no one understands, or the code a developer finished one day before code freeze that she now believes could be rewritten in a better, more maintainable way? Good luck tackling those without the automated testing safety net.

This confident and frequent deployment of business-driving features isn't free. It isn't even cheap. Spending as much budget on testing as you do on development wouldn't be unusual. However, multiply the time it takes to manually test software (did I mention this is incredibly error-prone?) by the number of releases you plan to perform, and you will get a number that far surpasses the up-front cost of test automation.
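That cost comparison is simple enough to sketch in a few lines. The figures below are hypothetical placeholders, not benchmarks; plug in your own estimates.

```python
# Back-of-the-envelope break-even point for test automation.
# All dollar figures are hypothetical; substitute your own.

def breakeven_releases(automation_cost, manual_cost_per_release):
    """Number of releases after which building automation is cheaper
    than repeating a manual regression pass every release."""
    return automation_cost / manual_cost_per_release

# Say the regression suite costs $60,000 to automate up front, while
# a manual pass of the same suite costs $5,000 in tester time.
releases = breakeven_releases(60_000, 5_000)
print(releases)  # 12.0 -- automation pays for itself by release 12
```

If you release weekly, that hypothetical suite pays for itself in a quarter, and every release after that is nearly free to regression-test.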

The future of software delivery is becoming hands-off, frequent, and ubiquitous. Enter Automated Testing. There is no "C" in your CI/CD pipeline without it.
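At the unit level, the tests gating that pipeline can be very plain. Here is a minimal sketch using Python's standard unittest module; the discount rule being guarded is invented for illustration. A CI job runs the file (e.g. `python -m unittest`) on every push, and a non-zero exit code fails the build and blocks the deploy.

```python
import unittest

# A hypothetical business rule the pipeline should guard.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Nothing here is exotic; the point is that every one of these checks now runs on every commit instead of on release weekend.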

Blending of Roles

Did I mention that software is complex? If you aren't yet convinced, Windows 10 has about 50 million lines of code. I fondly recall my days as a waterfall development manager where we had a "Testing Team" that formulated test scripts for the application we were building. Now, don't confuse "scripts" in this sense with the whiz-bang automation we've been discussing. Think of an ancient sage holding a tattered scroll. Those kinds of scripts. Well, they weren't that bad, but they were effectively a written set of steps a tester would take to ensure the system worked properly. While this was a respectable discipline for the time, how do you go about testing 50 million lines of code by referencing a Word document? Your application may not be Windows 10, but given that even the average iOS app is in the 50,000-line range, you can see that automated software testing is an engineering discipline in its own right.

Enter the Automated Test Engineer. The need for a specialized skill set to develop effective and robust tests that become part of a larger regression suite gave rise to this role. According to an Indeed search I performed recently, there are six times as many job openings for Automated Test Engineers as for the classic manual testing role. Throw in the move to Agile methodologies, with their shorter development cycles and closer collaboration between testing and development, and you get the next phase of this evolution: the SDET (Software Development Engineer in Test).

Who are these people and why are they here, you may ask? They are here to merge the development and testing disciplines into one beautiful role that can move seamlessly between developing software and developing robust tests. There are currently far fewer of these roles than for the Automated Test Engineer. However, it seems apparent that the roles of the Software Engineer and Automated Test Engineer have been circling each other like particles in the Large Hadron Collider. It is hard to imagine a world in five years where we aren't looking back fondly on the collision and the singular role it produced.

Machine Learning

If you think Large Hadron Colliders are cool, wait until you see how Machine Learning (ML) changes the world. OK, there may never be a movie like The Matrix made about Automated Testing, but ML is making its way into this arena. Why wouldn't it? If you have ever been down the path of a user interface change breaking a dozen tests in your regression suite, you know the question going through your mind: "there has to be a better way." There is. First, tools are maturing in ways that make them more robust and less susceptible to inconsequential interface changes. The next step in this evolution, however, is likely to be ML.

If we can train an algorithm to predict when a race car will need to make a pit stop, then why can't we train it to find and click a button no matter where it appears on the screen? The Appium project has developed an AI-based element-finding plugin that does just that.
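You don't need ML to start moving in that direction; even a locator-fallback strategy makes tests survive cosmetic UI changes. The sketch below is a toy: the "page" is a list of dicts standing in for rendered elements, and a real suite would call Selenium or Appium finders instead, but the idea of matching by meaning rather than by brittle position is the same.

```python
# Toy sketch of locator fallback: try several strategies in priority
# order instead of pinning a test to one brittle XPath. The "page"
# is a plain list of dicts standing in for rendered elements.

def find_button(page, label):
    strategies = (
        lambda e: e.get("id") == label,                # stable test id
        lambda e: e.get("aria-label") == label,        # accessibility name
        lambda e: e.get("text", "").strip() == label,  # visible text
    )
    for matches in strategies:
        for element in page:
            if element.get("tag") == "button" and matches(element):
                return element
    return None

page = [
    {"tag": "div", "text": "Checkout"},
    {"tag": "button", "text": " Checkout ", "position": (640, 912)},
]
# The button moved and lost its id, but the label still finds it.
print(find_button(page, "Checkout")["position"])  # (640, 912)
```

An ML-backed finder takes this one step further, matching "a thing that looks like a checkout button" even when id, label, and text have all changed.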

While we are on the topic of training, why can’t we train an algorithm to monitor activities in production and produce test cases for the most common use cases? We may never get to the point where software tests itself but if ML can give us 70% of our test scenarios automatically why not let it?
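Even before an algorithm writes the tests for us, mining production traffic for the most common flows tells us which scenarios to automate first. This is a sketch with an invented log format; real access logs would need real parsing.

```python
from collections import Counter

# Sketch: mine production logs for the most common user flows and
# propose them as regression scenarios. The log format is invented.

def top_scenarios(log_lines, n=3):
    flows = Counter()
    for line in log_lines:
        # e.g. "2020-07-23T10:01:02 user=42 flow=search>detail>add_to_cart"
        for field in line.split():
            if field.startswith("flow="):
                flows[field[len("flow="):]] += 1
    return flows.most_common(n)

logs = [
    "2020-07-23T10:01:02 user=42 flow=search>detail>add_to_cart",
    "2020-07-23T10:01:09 user=7 flow=search>detail>add_to_cart",
    "2020-07-23T10:02:33 user=42 flow=login>account>logout",
    "2020-07-23T10:03:10 user=9 flow=search>detail>add_to_cart",
]
for flow, hits in top_scenarios(logs):
    print(f"{hits}x {flow}")  # the flows most worth automating first
```

A trained model would generalize beyond exact flow strings, but a frequency count alone already points the automation budget at what users actually do.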

If all of that is too "Jetsons" for you (if you don't know who the Jetsons are, it means I am older than I thought), there is a nearer-term practical application. The annoying thing about tests is that they find problems and fail. Here we've poured our blood, sweat, and tears into a suite of test cases that are all well-behaved and pass… until they don't. Like a driver hurling insults after we accidentally cut them off in traffic, they let us know how disappointed they are with our failure. Then we go off like Sherlock Holmes, piecing together logs, database activity, and UI state to solve the "what caused the failure" mystery. Artificial Intelligence feeds on data like a blue whale eating krill. Why not feed it data from our testing activities and let it learn why tests fail, so that we can shorten our investigations? (See an interview with Geoff Meyer at Dell EMC for an example of this.) It's elementary, my dear Watson!
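Again, a crude version of this works without any learning at all: normalizing failure messages into "signatures" collapses a dozen red tests into one probable root cause. The failure messages below are invented for illustration.

```python
import re
from collections import Counter

# Sketch: group failing tests by a normalized "signature" so one
# root cause shows up as one bucket. Messages are invented.

def signature(message):
    # Strip volatile details (hex ids, numbers) so the same root
    # cause collapses into one bucket.
    return re.sub(r"0x[0-9a-f]+|\d+", "#", message)

failures = [
    "TimeoutError: waited 30s for element id=checkout-77",
    "TimeoutError: waited 30s for element id=checkout-91",
    "AssertionError: expected 200, got 503",
    "TimeoutError: waited 45s for element id=login-3",
]
buckets = Counter(signature(m) for m in failures)
for sig, count in buckets.most_common():
    print(f"{count}x {sig}")
# Four failures, two signatures: the three timeouts are one story.
```

An ML system trained on historical failures and their resolutions takes the same idea further, suggesting the likely culprit instead of just the grouping.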

Conclusion

Automated Testing may not have been the coolest kid on the block, but I wouldn't try to deliver software without it. The roles, tools, and paradigms in this space are ripe for a seismic shift, and the initial tremors are already being felt.