Option to allow failures in Fan-In/Out Workflow
The requires tag in workflows really limits what is possible. It makes a lot of sense for the deployment scenario used to demo it, but waiting for a group of jobs to complete (pass or fail) should also be an option.
Scenario: Setup -> Run multiple sets of tests in parallel -> Combine test results/coverage results/artifacts and report to PR
The scenario above isn't possible because if a single test fails in any of the jobs running in parallel, the fan-in step won't run.
Wouldn't a simple workaround be to have each test job report its results, but still return a passing exit code, then have a job which combines the results and fails if any one test fails?
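That workaround can be sketched in a few lines. This is a hypothetical illustration, not CircleCI-specific code: each test job records its exit code in a shared location (in a real workflow, a persisted workspace) and exits cleanly, and the fan-in job fails only after reading every recorded status. All names and paths here are assumptions.

```python
# Hypothetical sketch of the workaround: suites record their exit codes
# instead of failing immediately; the fan-in step fails at the end if
# any recorded status is non-zero. In a real workflow each suite would
# run in its own job and persist its status file to the workspace.
import subprocess
import tempfile
from pathlib import Path

status_dir = Path(tempfile.mkdtemp())  # stand-in for a shared workspace

def run_suite(name: str, command: list[str]) -> None:
    """Run a test command, record its exit code, but never raise."""
    result = subprocess.run(command)
    (status_dir / name).write_text(str(result.returncode))

# Simulate one passing and one failing suite.
run_suite("unit", ["true"])
run_suite("integration", ["false"])

# Fan-in: only now decide overall success.
codes = {f.name: int(f.read_text()) for f in status_dir.iterdir()}
overall_failed = any(code != 0 for code in codes.values())
print(codes, "failed" if overall_failed else "passed")
# A real fan-in job would end with: sys.exit(1 if overall_failed else 0)
```

The trade-off the next comment points out still applies: GitHub only sees the fan-in job's status, so per-suite failures are harder to spot from the PR page.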
Philip Beber: That should work, but then the statuses are harder to follow, especially from GitHub. It's also more work to set up.
I've created a python script which can detect if the currently running node is the last running parallel job, and perform some action if so. https://gist.github.com/elijahr/7594c732c4b893d35a85511ae31ad5da
It's a bit of a hack but it will work for some fan-out/in use cases. I'm using it to combine pytest JUnit artifacts from all the failed jobs and generate a single notification.
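The core decision in that kind of script can be reduced to a small function. This is a hedged sketch of the idea, not code from the gist: given the statuses of all jobs in the workflow (which the gist fetches from the CircleCI API), decide whether the current job is the last one still running. The function name and status strings are my assumptions.

```python
# Hypothetical sketch of the "am I the last running job?" check.
# In practice `statuses` would come from polling the CircleCI API for
# the workflow's job list; here it is passed in directly.
from typing import Dict

FINISHED = {"success", "failed", "canceled", "not_run"}

def is_last_running_job(statuses: Dict[str, str], current_job: str) -> bool:
    """Return True if every job other than `current_job` has finished."""
    return all(
        status in FINISHED
        for name, status in statuses.items()
        if name != current_job
    )

# Example: two parallel test jobs plus this reporting step.
statuses = {"test-a": "success", "test-b": "failed", "report": "running"}
print(is_last_running_job(statuses, "report"))  # True: only `report` is left
```

Whichever job this returns True for can then collect the artifacts and send the single combined notification.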
Our use case is slightly different but needs the same functionality.
We have a plethora of Selenium tests that put real strain on our staging system. So instead of running all of them in parallel, we wanted to split them into 3 separate tracks, so that only 3 jobs are ever running at one time. But if one of those jobs fails, the next job in its track can't start, and we don't get the coverage we're looking for.
My team is attempting to migrate to CircleCI. This is a need for us. We hope this functionality gets prioritized. Thanks.
Any updates on this? We desperately need this, because we have multiple workflows where we need to run jobs at the end of those workflows even if previous jobs had failed.
I need this feature as well. We are running multiple sets of tests in parallel and need to run cleanup after all required jobs finish. But if one test fails, we can't automatically run the cleanup.
Do you have any updates on it?
Liya Ai, any updates on this? Are you still working on this feature, or has it been ditched?
Any updates on the implementation of this feature? The CircleCI interface did a great job of pushing us to use more machines via parallelism, but the lack of fan-in on failures is leaving us with less functionality than before.
Hi, this feature would be great. I guess we all have the same use case: combining test results regardless of whether any parallel job failed.
In my case, I would generate a stats report with the most common errors across all test results.
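A report like that is straightforward once the JUnit XML artifacts are collected. This is a hedged sketch under the assumption that the combined artifacts are standard JUnit reports; it counts the most common failure/error messages across them. The function name and report layout are illustrative, not from any CircleCI tooling.

```python
# Hypothetical sketch: count the most common <failure>/<error> messages
# across a collection of JUnit XML report strings (e.g. artifacts pulled
# from every parallel test job).
import xml.etree.ElementTree as ET
from collections import Counter

def most_common_errors(junit_xml_reports, top=5):
    """Return the `top` most frequent failure/error messages."""
    counts = Counter()
    for report in junit_xml_reports:
        root = ET.fromstring(report)
        for node in root.iter():
            # JUnit nests <failure> and <error> elements under <testcase>.
            if node.tag in ("failure", "error"):
                counts[node.get("message", "<no message>")] += 1
    return counts.most_common(top)

sample = """<testsuite>
  <testcase name="t1"><failure message="TimeoutError"/></testcase>
  <testcase name="t2"><failure message="TimeoutError"/></testcase>
  <testcase name="t3"><error message="AssertionError"/></testcase>
  <testcase name="t4"/>
</testsuite>"""
print(most_common_errors([sample]))
# [('TimeoutError', 2), ('AssertionError', 1)]
```

With fan-in on failure, a single final job could run this over artifacts from all the parallel jobs and post one summary to the PR.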