mwinter at opensourcerouting
Apr 26, 2012, 10:03 AM
Post #3 of 8
On Apr 26, 2012, at 8:58 AM, Henderson, Thomas R wrote:
>> there's now a freeze_0.99.21 tag on the master git. This is meant to
>> imply that we'd like to release this as 0.99.21 and people that have
>> testbeds are welcome to test the version.
>> Until the 0.99.21 release is made, only high-importance and reasonably
>> simple bugfixes & regression fixes will be merged.
>> A git tree very similar to this is already passing through large-scale
>> testing at OpenSourceRouting.org's facilities, therefore the release
>> will probably appear after a shortened testing period of about a week
>> or so.
> If you don't mind, I had a few questions about this pending release.
Let me answer some of the questions.
> - can you clarify which bugs (if any) are being worked before the release? Are they in the tracker? I've noticed the persistence of a lot of bugs tagged 'blocker' that do not seem to block releases.
> - what protocols are being tested, how are they being tested, and by whom?
We (opensourcerouting.org) are testing the following protocols:
RIPv1, RIPv2, RIPng, OSPFv2, OSPFv3, ISIS, BGP (ipv4 & ipv6)
(with the limitation that all the IPv6 protocols are less thoroughly tested at this time, mainly because of our resources. We hope to add more intensive testing very soon.)
For testing coverage, we have 3 categories for our tests:
1) Protocol Compliance - We are using a commercial product (Ixia ANVL) here.
2) Protocol Fuzzer - Using a commercial product (MU Dynamics) here.
3) Scale & Performance - Using our own set of tests.
For details, go to www.opensourcerouting.org/wiki/Testing+Efforts
Part 3 (Scale & Performance) is still lagging behind in implementation, but the test plan is online, and if anyone has suggestions to add there, I very much WELCOME any feedback.
We hope that other people run their own tests, but I can't speak for them. If someone else here does large-scale tests and wants to coordinate anything, let me know.
> Is opensourcerouting.org or anyone else going to report back to the list on what tests have passed and failed (or even on what platforms/compilers it builds)?
We will, before (or together with) the release. I just need to make it more readable (i.e., if I tell you that ANVL test OSPF-15.2 fails, that is useless to most of you until I give a decent description of the test).
We don't (at this time) file all the bugs in Bugzilla, as I still need to spend a lot of time verifying whether Quagga is actually broken or whether the test itself is incorrect or buggy. I don't want to flood Bugzilla with lots of bugs that haven't yet been verified as real. But I filed some in the past and will file more once I have verified them.
My goal would be to include a summary of failures in some release notes, with detailed coverage spreadsheets on our wiki. I hope to have the initial spreadsheets online soon.
For a quick summary (bug counts only), see the slides from the talk I gave at the RIPE 64 meeting last week in Slovenia:
Keep in mind that a failure count in the presentation means "test failed"; it is not yet verified (in all cases) whether each failure is a Quagga bug (see above).
- Martin Winter
> - are you distributing any release notes that list the changes since the last release? I tend to see these in Paul's eventual release announcements but they do not seem to be maintained in the codebase. Would the project be interested in maintaining a top-level RELEASE_NOTES?
> My sense is that you will release 0.99.21 some day in the near future unless anyone seriously complains, based on best-effort testing, but I'm wondering whether the project is applying any other criteria to making the release.
> p.s. the URL in the HACKING.tex document http://wiki.quagga.net/index.php/Main/Processes is no longer resolvable and the page doesn't seem to exist in the sourceforge wiki.
> Quagga-dev mailing list
> Quagga-dev [at] lists