hossman_lucene at fucit
May 7, 2012, 10:37 AM
Post #10 of 13
Re: Annotation for "run this test, but don't fail build if it fails" ?
[In reply to]

: as a build artifact). Yet another problem is that jenkins wouldn't
: _fail_ on such pseudo-failures because the set of JUnit statuses is
: not extensible (it'd be something like FAILED+IGNORE) so we'd need to
That was really the main question I had, as someone not very familiar with
the internals of JUnit: whether it is possible for our test runner to
make the ultimate decision about the success/fail status of the entire
run based on the annotations of the tests that fail/succeed.
I know that things like jenkins are totally fine with the idea of a build
succeeding even if some of the junit testsuite.xml files contain failures
(many projects don't have tests fail the build, but still report the test
status -- it's one of the reasons jenkins has multiple degrees of "build
health"), but the key question is: could we have our test runner say "test
X failed, therefore the build should fail" but also "test Y failed, and
test Y is annotated with @UnstableTest, therefore don't let that failure
fail the entire build"?
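That runner-side decision could be sketched roughly as follows. This is a
self-contained toy model, not real JUnit or Lucene test-framework API: the
TestResult type and the buildFails() logic are invented purely to illustrate
"a stable failure fails the build, an unstable failure is only reported":

```java
import java.util.List;

public class BuildStatusSketch {
    // Toy model of one test outcome: its name, whether it failed, and
    // whether the test method carried the hypothetical @UnstableTest marker.
    record TestResult(String name, boolean failed, boolean unstable) {}

    /** The overall build fails only if some *stable* test failed. */
    static boolean buildFails(List<TestResult> results) {
        return results.stream().anyMatch(r -> r.failed() && !r.unstable());
    }

    public static void main(String[] args) {
        List<TestResult> mixed = List.of(
            new TestResult("TestX", true,  false),   // stable failure
            new TestResult("TestY", true,  true));   // unstable failure
        System.out.println(buildFails(mixed));       // true: TestX really failed

        List<TestResult> onlyUnstable = List.of(
            new TestResult("TestY", true, true));
        System.out.println(buildFails(onlyUnstable)); // false: reported, not fatal
    }
}
```

Both failures would still be recorded in the result XML either way; only the
exit status of the run changes.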
: are a good match, really. ASSUMPTION_IGNORED status is probably most
: convenient here because of how it can be technically propagated back
Ultimately I think it's important that these failures be reported as
failures -- because that's truly what they are -- we shouldn't try to
sugar coat it, or pretend something happened that didn't. Ideally these
tests should be fixed, and my hope is that if we stop @Ignore-ing them
then they are more likely to get fixed, because people will see them run,
see the frequency/inconsistency with which they fail, and experiment with
fixes to try and improve that. But in the meantime, it's reasonable to
say "we know this test sometimes fails on jenkins, so let's not fail the
whole build just because this is one of those times".
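For illustration, the hypothetical @UnstableTest marker might look like the
sketch below. The annotation name comes from this thread, but the definition,
the bugUrl attribute, the example test class, and the reflection check are
all assumptions, not existing Lucene or JUnit code:

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;

// Hypothetical marker discussed in the thread -- purely illustrative.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface UnstableTest {
    /** Optional pointer at a tracking issue, so the flakiness stays visible. */
    String bugUrl() default "";
}

public class UnstableTestSketch {
    // Stand-in test class with one flaky and one stable method.
    public static class SomeTests {
        @UnstableTest(bugUrl = "")
        public void testSometimesFlaky() {}
        public void testAlwaysStable() {}
    }

    // A runner could use plain reflection to decide how to report a failure.
    static boolean isUnstable(Class<?> cls, String method) throws Exception {
        Method m = cls.getMethod(method);
        return m.isAnnotationPresent(UnstableTest.class);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(isUnstable(SomeTests.class, "testSometimesFlaky")); // true
        System.out.println(isUnstable(SomeTests.class, "testAlwaysStable"));   // false
    }
}
```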
: Any ideas? Hoss -- how do you envision "monitoring" of these tests? Manually?
I think a Version 2.0 "feature" would be to see aggregated historic stats
on the pass/fail rate of every test, regardless of its annotation, so we
could answer questions like:
a) statistically, how often does test X fail on jenkins?
b) statistically, how often does test X fail on my box?
c) statistically, how often does test X fail on your box? oh really --
that's the same stats that Pete is seeing, but much higher than anyone
else including jenkins and you both run Windows, so maybe there is a
platform specific bug in the code and/or test?
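The kind of aggregation meant here could be sketched like this. The
in-memory record format (a list of name/passed pairs) is invented for
illustration; real data would be parsed out of the testsuite XML files
that jenkins already collects:

```java
import java.util.*;

public class FailureRateSketch {
    /** Aggregate (testName, passed?) pairs into name -> {failures, totalRuns}. */
    static Map<String, int[]> tally(List<Map.Entry<String, Boolean>> runs) {
        Map<String, int[]> counts = new TreeMap<>();
        for (var run : runs) {
            int[] t = counts.computeIfAbsent(run.getKey(), k -> new int[2]);
            if (!run.getValue()) t[0]++;  // a recorded failure
            t[1]++;                       // one more recorded run
        }
        return counts;
    }

    public static void main(String[] args) {
        // Hypothetical historical runs.
        List<Map.Entry<String, Boolean>> runs = List.of(
            Map.entry("TestX", false), Map.entry("TestX", true),
            Map.entry("TestX", true),  Map.entry("TestX", true),
            Map.entry("TestY", false), Map.entry("TestY", false));

        for (var e : tally(runs).entrySet()) {
            double rate = 100.0 * e.getValue()[0] / e.getValue()[1];
            System.out.printf("%s fails %.0f%% of the time (%d/%d runs)%n",
                e.getKey(), rate, e.getValue()[0], e.getValue()[1]);
        }
    }
}
```

Comparing these rates per machine (jenkins vs. my box vs. your box) is what
would surface patterns like the hypothetical Windows-only flakiness above.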
But a shorter term, less complicated goal would just be to say: "Tests
with the @UnstableTest annotation are run, and their status is recorded
just like any other test, but their success/failure doesn't impact the
overall success/failure of the build. People who care about these tests
can monitor them directly." So effectively: if you care about the test,
then you have data about it you can fetch from jenkins and/or any other
machine running all tests, but you have to be proactive about it -- if
you don't care about it, then it's just like if the test was @Ignored.
If dealing with this entirely in the runner isn't possible because of the
limited junit statuses (and how those test statuses affect the final suite
status) then my strawman suggestion would be...
1) "ant test"
- treats @UnstableTest the same as @AwaitsFix
- fails the build if any test fails
2) "ant test-unstable"
- *only* runs @UnstableTest tests
- doesn't fail the build for any reason
- puts the result XML files in the same place as "ant test"
(so jenkins UI sees them)
3) jenkins runs "ant test test-unstable"
* if a test is flat out broken / flawed in an easy to reproduce way we
mark it @AwaitsFix
* if a test is failing sporadically and in ways that are hard to
reproduce, we mark it @UnstableTest
* people doing experiments trying to fix/improve an @UnstableTest can dig
through jenkins reports to see how that test is doing before/after various
changes
To unsubscribe, e-mail: dev-unsubscribe [at] lucene
For additional commands, e-mail: dev-help [at] lucene