moseley at hank
Jul 2, 2012, 12:42 PM
Post #3 of 9
On Fri, Jun 29, 2012 at 1:12 PM, Gianni Ceccarelli
<dakkar [at] thenautilus> wrote:
> Here's what we do:
> - we have a (VCS-managed) set of tarballs downloaded from CPAN
> - we run a CPAN-like server providing those tarballs
> - we have a rather large set of distroprefs to skip unreliable tests
> and apply local patches
> - we usually update to the latest CPAN (and perl) releases
> - sometimes we have to hold back on a set of modules because of
> problems (currently we can't update Catalyst, for example, due to
> some internal libraries exploiting undocumented behaviours; yes,
> we're going to fix our libraries)
> - we smoke the whole set of modules whenever we update a batch of them
> - every iteration, we build a package (in our case, RPM) with the
> version of perl and all modules that we are going to use
> - we develop and run all our test suites against that package
> - the application packages require the specific perl/cpan package
> version they were developed on
> It works, and it's not that much work.
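(For anyone following along: a distroprefs entry is a small YAML file in CPAN.pm's prefs directory that matches a distribution and overrides how it's built. A rough sketch of the "skip tests" and "apply patch" cases -- the distribution names and patch path here are made up:)

```yaml
---
# Skip the unreliable test suite of one distribution.
match:
  distribution: "^XAUTHOR/Some-Dist-1\\.0"
test:
  commandline: "echo skipping unreliable tests"
---
# Apply a local patch to another distribution before building.
match:
  distribution: "^XAUTHOR/Other-Dist"
patches:
  - "local-fix.patch"
```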
You have tarballs of every single dependency? How do you determine what
needs to be in that set? And you build a single RPM with perl and all
dependencies? Do you use Perlbrew for that? I'd love to hear more about
that process.
Here's the difference between my thinking and the operations manager: He
wants to make sure that the same code is used from development to
production. The idea is to reduce the risk of bugs from using different
versions of dependencies. (That's one reason we are stuck running very old
versions.)
My view, as a developer, is that I want good test coverage. When it's time
to cut a release, what is important to me is seeing 100% of tests passing.
If the code works, well, it works.
Small sample size, but I haven't heard that regressions from CPAN are a big
problem -- I know they happen, of course, but the question I'm after is
whether they are significant enough to warrant building a much more complex
build system instead of using CPAN in the normal way. I just don't think so.
I also don't see a need to manage multiple stacks of modules for different
stages of the application.
So, I'm looking at this environment, which seems about as simple as you can
hope for. Anyone see any holes in this approach?
- Run a local CPAN ("DarkPAN") repo for our in-house modules, e.g.
CPAN::Site with pass-through. Our cpan(m) clients are configured to fetch
from the local CPAN first and, if a distribution is not found there, fall
back to the public CPAN.
- For rare CPAN regressions, install those distributions into our local
DarkPAN, which clients will install in preference to the version on public
CPAN. Add a test to catch the regression in the future. (Hopefully a rare
and exceptional case.)
- Developers check out and install dependencies as normal locally
(local::lib, perlbrew) and make sure code has good test coverage.
- In-house modules (as well as apps) are "released" to our local
DarkPAN. Dist::Zilla's "release" makes this trivial.
- Automated testing can check both "trunk" and "release" -- trunk by
checking out and running tests, and "release" by installing the most recent
version with a cpan client and running tests.
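For the client configuration in the first bullet, something like this should work (the DarkPAN URL is a placeholder; cpanm reads default options from PERL_CPANM_OPT and, as far as I know, tries --mirror entries in the order given):

```shell
# Hypothetical mirror URL -- substitute your own DarkPAN host.
# With this in the environment, every plain "cpanm Some::Module"
# will try the local mirror first and fall back to the public CPAN.
export PERL_CPANM_OPT="--mirror http://darkpan.example.com --mirror http://www.cpan.org"

echo "$PERL_CPANM_OPT"
```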
And the release process is very similar:
- QA team (or a developer) runs "cpan Our-App" on the target platform,
letting it bring in any dependencies as normal.
- Run the unit tests to satisfy the development team that the app is
working as expected with the installed dependencies. This is essentially
developer "hand-off" to the QA team confirming the app works as expected.
- Then the QA team tests the app and, if it passes, the app is moved to
staging and then production.
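Concretely, the hand-off step could be as small as this (the app name and mirror URLs are placeholders, and it assumes a checkout with a t/ directory for prove):

```shell
#!/bin/sh
set -e  # abort on the first failure

# Install the app plus any missing dependencies, preferring the
# DarkPAN mirror (both URLs are hypothetical).
cpanm --mirror http://darkpan.example.com --mirror http://www.cpan.org Our::App

# Re-run the unit tests against whatever dependency versions were
# just installed -- the developer hand-off check described above.
prove -lr t/
```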
I think the significant thing here is I'm not really worrying about
specific versions of dependencies. Sure, code depends on a minimum version
of a module, but that's just so tests have a chance of passing. What's
important is that the unit tests pass. It's really no different than
running "cpan Catalyst::Runtime" and making sure all tests pass.
Sure, it's possible that a newer module ends up on production than dev, but
that would mean unit tests AND QA failed to detect a bug. And let's be
honest, the vast, vast majority of bugs that find their way to production
are in our own code.
moseley [at] hank