The newest profile we've built: Ant.
How to connect Launchable to Ant with the Launchable CLI.
Now, you can run a subset of a long-running test suite more often.
The Launchable CLI provides a simple, uniform interface between your CI process and the Launchable service.
To make integration even easier, we’ve built profiles into the CLI for popular build and testing tools. These profiles abstract away the nuances of specific languages and test runners so you only have to add a few lines to your build script to get started.
One of these profiles is for Ant. In this post, we’ll show you how to connect Launchable to Ant to train a model and get faster test feedback.
Launchable uses machine learning to identify the most important tests to run for a specific code change. This capability lets you run a smaller set of important tests earlier and more frequently in your software development lifecycle, providing faster feedback to developers without compromising quality.
To enable this, you can use the Launchable CLI to connect Launchable to your test runner both to train a machine learning model and to get and run test recommendations supplied by Launchable.
After installing the Launchable CLI package from PyPI and setting your API key, you’ll need to add three Launchable commands to your build/test pipeline.
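As a sketch, the installation and setup step might look like this in a CI job (the version pin is illustrative, and the API key would normally come from a CI secret rather than being typed inline):

```shell
# Install the Launchable CLI from PyPI (version pin is illustrative)
pip3 install --user --upgrade launchable~=1.0

# The CLI reads your API key from the LAUNCHABLE_TOKEN environment variable
export LAUNCHABLE_TOKEN=<YOUR API KEY>

# Optional sanity check that the CLI can authenticate with the service
launchable verify
```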
First, you need to record a build to tell Launchable the code changes that are being tested. Launchable uses this information to train the machine learning model and to recommend tests to run for those specific changes.
To do this, you add the `record build` command to your CI script. You give the build a name and point the CLI to your Git repository on the CI server (`.`):
```
launchable record build --name <BUILD NAME> --source src=.
```
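For example, many teams derive the build name from something unique per build, such as the Git commit hash or a CI-provided build number (using `git rev-parse` here is just one option, not a requirement):

```shell
# Use the short commit hash as a unique build name
launchable record build --name $(git rev-parse --short HEAD) --source src=.
```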
Then, you need to record test results after running tests to train the machine learning model:
```
launchable record tests --build <BUILD NAME> ant <PATH TO JUNIT XML>
```
You'll notice that this command uses the same `<BUILD NAME>` value; that's how Launchable learns that these test results relate to the new commits in the build.
Once you’ve started sending test results to train a model, you can start subsetting tests. Over time, the test recommendations will improve as the model learns.
Before running tests, you need to request a dynamic subset of tests from Launchable and pass those to Ant to run them.
You provide the same build name, a target percentage of test duration to run in the subset, and the directories where your tests reside:
```
launchable subset \
  --build <BUILD NAME> \
  --target <PERCENTAGE DURATION> \
  ant <PATH TO SOURCE> > launchable-subset.txt
```
The subset command outputs a list of recommended tests specifically for that build to `launchable-subset.txt`, which you then pass into Ant to run. You do this by modifying your `build.xml` file:
```xml
<project>
  …
  <target name="check-launchable">
    <available file="launchable-subset.txt" property="launchable"/>
  </target>
  <target name="junit" depends="jar,check-launchable">
    <mkdir dir="${report.dir}"/>
    <junit printsummary="yes">
      <classpath>
        <path refid="classpath"/>
        <path refid="application"/>
      </classpath>
      <formatter type="xml"/>
      <batchtest fork="yes" todir="${report.dir}">
        <fileset dir="${src.dir}">
          <includesfile name="launchable-subset.txt" if="${launchable}"/>
          <include name="**/*Test.java" unless="${launchable}"/>
        </fileset>
      </batchtest>
    </junit>
  </target>
  …
</project>
```
Then, you run the subset of tests:
```
ant junit
```
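Putting it all together, a subsetting CI job might run the steps in this order. This is a sketch: `$CI_BUILD_NUMBER` stands in for whatever unique identifier your CI system provides, and the `src` and `build/report` paths are assumptions you'd adjust for your project.

```shell
# Hypothetical CI-provided build identifier
BUILD_NAME=$CI_BUILD_NUMBER

# 1. Tell Launchable about the code changes in this build
launchable record build --name $BUILD_NAME --source src=.

# 2. Request a subset targeting, say, 20% of total test duration
launchable subset \
  --build $BUILD_NAME \
  --target 20% \
  ant src > launchable-subset.txt

# 3. Run the subset via the modified build.xml target
ant junit

# 4. Report results back so the model keeps learning
launchable record tests --build $BUILD_NAME ant build/report
```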
See docs.launchableinc.com for more documentation and examples.
Now you can run a subset of a long-running test suite more often. For example, you could run a subset of long-running end-to-end UI tests on every git push instead of only after every merge. Or you could subset your pull request tests to get faster feedback, earlier.
Want to learn more? Book a demo today to find out how we can help you achieve your engineering and product goals in 2022 and beyond.