Shift left refers to the practice of moving testing, quality, and performance evaluation earlier in the software development process to reduce testing bottlenecks and speed up release cycles.
To shift left effectively, teams need smarter testing tools like Predictive Test Selection.
As we’ve talked with some of the most innovative companies in the industry about their testing practices, we’ve noticed that they are concerned with both quality and speed.
Automated testing helps on both fronts. It lets you ship code changes with confidence, because your tests help ensure that the software continues to function as intended. And it lets you ship faster, because you no longer need to rely on time-consuming manual testing to find every problem.
However, as software projects accumulate more and more tests, another problem emerges: the tests themselves become a bottleneck for rapid delivery. This is why many teams push their long-running test suites to later stages of the software delivery lifecycle. But, like a game of whack-a-mole, another problem surfaces!
Because many useful tests run late in the software delivery lifecycle, developers don’t hear about problems caused by their changes until long after they have moved on to other tasks. Depending on the organization, this can result in all kinds of inefficiencies. In the best-case scenario, the original developer can quickly jump in and resolve the issue. But often, days or weeks elapse as several QA engineers and developers get involved to identify and repair the problem.
This is why the prevailing DevOps recommendation is to shift tests left in the software delivery lifecycle.
When you have a lot of tests, how do you choose which tests to run first? Many teams tag certain tests in a large test suite and painstakingly implement mechanisms for running these tests earlier in the development process. We call this manual test selection.
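For example, with a Python test suite, a team might hand-tag a few fast, high-value tests so that an early pipeline stage can run them before the full suite. Here’s a minimal sketch using pytest markers; the marker name, tests, and toy function are hypothetical:

```python
# test_selection_example.py
# Manual test selection: a human tags the tests they believe are the
# most important to run early, before the rest of the suite.
import pytest


def add(a, b):
    """Toy function standing in for real application code."""
    return a + b


@pytest.mark.smoke
def test_add_small_numbers():
    # Tagged: runs in the early, fast pass (e.g. `pytest -m smoke`).
    assert add(1, 2) == 3


def test_add_large_numbers():
    # Untagged: only runs in the later, full test pass.
    assert add(10**6, 10**6) == 2_000_000
```

The early stage runs only the tagged tests with `pytest -m smoke`, while the full suite runs later (registering the marker in `pytest.ini` keeps pytest from warning about it). Keeping those tags accurate as the codebase evolves is exactly where this approach starts to break down.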
The problem with manual test selection is that it relies on static human understanding of the connections between software modules and tests. But codebases are constantly changing. In fact, the tests that are relevant to a code change may not be obvious at all.
Predictive test selection is a technique pioneered by Facebook and Google for this very problem. It is a method of performing Test Impact Analysis that uses machine learning to identify the right tests to run for a particular code change.
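To make the idea concrete, here is a deliberately simplified sketch, not Launchable’s model, of impact-based prediction: score each test by how often it has failed in the past when the files in the current change were touched, then run only the highest-scoring tests. The file names, test names, and history below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical CI history: for each past code change, the files that
# changed and the tests that failed. A real system mines this from CI
# logs and trains an ML model on many more signals than raw counts.
HISTORY = [
    ({"cart.py", "pricing.py"}, {"test_checkout", "test_discounts"}),
    ({"pricing.py"},            {"test_discounts"}),
    ({"auth.py"},               {"test_login"}),
    ({"cart.py"},               {"test_checkout"}),
]

ALL_TESTS = ["test_checkout", "test_discounts", "test_login", "test_search"]


def failure_scores(changed_files):
    """Score each test by how often it failed when these files changed."""
    scores = defaultdict(float)
    for files, failed_tests in HISTORY:
        if files & changed_files:
            for test in failed_tests:
                scores[test] += 1.0
    return scores


def select_tests(changed_files, budget=2):
    """Pick the `budget` tests judged most likely to fail for this change."""
    scores = failure_scores(changed_files)
    ranked = sorted(ALL_TESTS, key=lambda t: scores[t], reverse=True)
    return ranked[:budget]


if __name__ == "__main__":
    # A change touching pricing.py favors the tests that historically
    # failed alongside that file.
    print(select_tests({"pricing.py"}))  # ['test_discounts', 'test_checkout']
```

A production system learns these relationships automatically from code and test metadata, and can size the subset to hit a target confidence of catching a failure rather than a fixed budget.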
At Launchable, we are working to productize this technique for all software teams. With predictive test selection, we’ve found that for many software projects you can run only 10-20% of the tests and still have 90% confidence of catching a failure, if one exists, for a given code change.
For context, here’s what that can mean in a couple of scenarios:
Reduce a 5-hour run to only 30 minutes
Reduce a 1-hour run to only 6 minutes
Using our technology, you can create a much shorter, dynamic subset of the long-running tests that normally run later in the development cycle, and run that subset on every pull request (pre-merge). Another great thing about this approach is that it doesn’t require costly pipeline changes: you simply add Launchable to your existing build and test scripts.
Does any of this sound interesting? Reach out if you have thoughts or sign up for the beta to get access to Launchable before anyone else!
Want to learn more? Book a demo today to find out how we can help you achieve your engineering and product goals in 2022 and beyond.