Mailing List Archive: Wikipedia: Wikitech

Code review statistics and trends

 

 



robla at wikimedia

Aug 22, 2012, 6:37 PM

Post #1 of 25 (4483 views)
Code review statistics and trends

Hi everyone,

Our Analytics crew has worked out how to generate a graph[1] that gives
us a view into our code review backlog:
http://gerrit-stats.wmflabs.org/graphs/mediawiki

The red line is roughly the equivalent of this search in the Gerrit search box:
is:open -CodeReview=+2 -CodeReview=+1 -CodeReview=-1 -CodeReview=-2
project:^mediawiki.*

...which, in English, means "everything in the mediawiki/* projects
that hasn't been marked with a positive or negative review yet, and
hasn't been merged or abandoned yet"
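For anyone who wants to track this count themselves, here's a minimal
Python sketch. The query string is the one above; the REST endpoint
and the ")]}'" anti-XSSI prefix are how current Gerrit releases
behave and may differ on older installs, and the sample payload below
is made up:

```python
import json

# Gerrit's JSON responses start with a ")]}'" prefix (to prevent XSSI);
# it must be stripped before parsing.
GERRIT_XSSI_PREFIX = ")]}'"

def build_query():
    # The same filter described above, as one Gerrit query string.
    return ("is:open -CodeReview=+2 -CodeReview=+1 "
            "-CodeReview=-1 -CodeReview=-2 project:^mediawiki.*")

def parse_gerrit_json(raw):
    if raw.startswith(GERRIT_XSSI_PREFIX):
        raw = raw[len(GERRIT_XSSI_PREFIX):]
    return json.loads(raw)

# Offline example with a made-up two-change payload, shaped like what
# e.g. GET /changes/?q=<query> returns on a recent Gerrit:
sample = ")]}'\n[{\"_number\": 101}, {\"_number\": 102}]"
print(len(parse_gerrit_json(sample)))  # -> 2 open, unreviewed changes
```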

These numbers appear to be accurate only to within about +/- 10
revisions, and the error isn't uniform over the history, so bear that
in mind as you look at them. In particular, the graph seems to paint a
slightly rosier picture of how well we're keeping up with the backlog
than is actually the case.[2]

That said, we seem to be doing pretty well at keeping up - better than
I would have guessed if you'd asked me before I had the graph staring
me in the face. We still have quite a backlog, but it appears to be
shrinking by a modest amount. Our backlog appears to have peaked in
mid-July. For those of you who have been reviewing, thanks for keeping
up!

As of this writing, there are 207 revisions that have neither positive
nor negative reviews associated with them. That still seems like a
pretty big number. 30 of those are more than a month old, and some
date back to May.

How is the process working for everyone? Is stuff getting reviewed
quickly enough? How has it been for the reviewers out there?

Rob

[1] https://github.com/wikimedia/limn
[2] For those interested in the gory details. Unfortunately, it's not
a perfect history due to the way Gerrit stores the history of
approvals (or rather, the fact that Gerrit doesn't store the
"history", just the current approval state for any given patchset).
In addition to known discrepancies, there may well be other issues.
In tracking it over the past couple of days, it looks like the last
few days are slightly undercounted (relative to the historical
numbers), as they drift upward a little every day. Everything prior to
August 12 is stable, though, so we seem to be getting *consistent*
numbers for everything before August 12 (though quite possibly
overcounted by 10-ish revisions). It also more-or-less lines up with
the few manual datapoints that I have.

_______________________________________________
Wikitech-l mailing list
Wikitech-l [at] lists
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


z at mzmcbride

Aug 22, 2012, 7:46 PM

Post #2 of 25 (4381 views)
Re: Code review statistics and trends

Rob Lanphier wrote:
> Our Analytics crew have worked out how to generate a graph that gives
> us a view into our code review backlog:
> http://gerrit-stats.wmflabs.org/graphs/mediawiki

Thanks for the graph. There are a few typos in the graph's title, by the
way. "Mediawiki CodeReview" should be "MediaWiki code review". "MediaWiki"
has a capital W and "CodeReview" is a mostly obsolete MediaWiki extension.
The graph should also include some explanatory text (or at a minimum, a
link to your mailing list post). I'm not sure whether Limn currently
supports this feature, but it should.

> As of this writing, there are 207 revisions that have neither positive
> nor negative reviews associated with them. That still seems like a
> pretty big number. 30 of those are more than a month old, and some
> date back to May.
>
> How is the process working for everyone? Is stuff getting reviewed
> quickly enough? How has it been for the reviewers out there?

I'm not quite sure why revisions with +1 and +2 are excluded from this
graph. This seems to fall into the trap of "code review is the same as code
deployment." We've discussed this previously. When people say they want
faster code review, they really only mean that as a means to an end (i.e.,
they really want revisions merged/deployed in a more timely manner). Could a
separate graph be created that shows the number of un-merged, un-abandoned
revisions with 0, +1, or +2? That seems like it'd be a more accurate picture
of the code review backlog.
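For concreteness, a guess at what that query would look like in the
Gerrit search box (Rob's filter with only the negative reviews
excluded, so changes with 0, +1, or +2 still count as backlog):

```
is:open -CodeReview=-1 -CodeReview=-2 project:^mediawiki.*
```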

Generally, I still don't think there's any incentive to clear the queue. If
a revision sits for six months, what happens? You wait for someone to "lob a
grenade"? :-)

Perhaps revisions that receive no feedback for thirty days should be
auto-merged and auto-deployed. The current code review/deployment situation
is all based on whim for any code that isn't magically deemed a high
priority to the Wikimedia Foundation, though I suppose graphs are as good a
place as any to start to resolve that problem.

MZMcBride





jeremy at tuxmachine

Aug 22, 2012, 8:22 PM

Post #3 of 25 (4368 views)
Re: Code review statistics and trends

Hi,

On Thu, Aug 23, 2012 at 2:46 AM, MZMcBride <z [at] mzmcbride> wrote:
> Perhaps revisions that receive no feedback for thirty days should be
> auto-merged and auto-deployed.

That's a pretty horrible solution. IMO, all code *must* be reviewed[0]
before merge.

[0] Or at least submitted by a very trusted person, but really it
should all be reviewed. Post-commit review is fine sometimes, but not
OK if it means the review just never happens at all.

Maybe there could be an automated check of how we're doing that
triggers some action when we hit certain levels.

"warning" could do some or any of
* tweet
* msg an IRC channel
* send an email

"critical" could lock repos so that
* even people that typically have merge rights can't merge
* make exceptions for any really old changesets by "volunteers" (e.g.
> 20 or 60 days)
* make exceptions for changes that have some extraordinary rationale
(why they should be an exception) and maybe 2 approvals instead of
just one
NB: I have no idea how well Gerrit supports things like requiring 2
approvals instead of 1, but temporarily changing merge rights should
be easy to automate (without changing group memberships).

Levels should probably be ranges so that we don't flap (don't trigger
a level until you reach the high threshold, and don't clear it until
you drop below the low one).
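That high/low hysteresis fits in a few lines. A purely illustrative
Python sketch with made-up thresholds:

```python
# Alert levels with hysteresis: a level triggers at >= its high threshold
# and only clears when the backlog drops below its low threshold.
# Threshold numbers are invented for illustration.
LEVELS = [
    ("critical", 300, 250),  # (name, high, low)
    ("warning", 200, 150),
]

def next_level(current, backlog):
    """Return the alert level after seeing today's backlog count."""
    for name, high, low in LEVELS:
        if backlog >= high:
            return name                          # at or above high: trigger
        if current == name and backlog >= low:
            return name                          # inside the band: hold level
    return None                                  # below all thresholds: clear

level = None
for count in [180, 210, 190, 160, 140]:
    level = next_level(level, count)
    print(count, level)
```

Note that the level raised at 210 survives the dips to 190 and 160,
and only clears once the count falls below the low threshold of 150.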

-Jeremy



smazeland at wikimedia

Aug 22, 2012, 11:42 PM

Post #4 of 25 (4364 views)
Re: Code review statistics and trends

On Thu, Aug 23, 2012 at 3:37 AM, Rob Lanphier <robla [at] wikimedia> wrote:
> Our Analytics crew have worked out how to generate a graph that gives
> us a view into our code review backlog:
> http://gerrit-stats.wmflabs.org/graphs/mediawiki

Yay. I've been awaiting this. Great to have something now.

> These numbers appear to be accurate only to within about +/- 10
> revisions, and the error isn't uniform over the history, so bear that
> in mind as you look at them. In particular, the graph seems to paint a
> slightly rosier picture of how well we're keeping up with the backlog
> than is actually the case.[2]

The graph for new changesets fluctuates a lot. I would guess this is
due to change sets submitted by user l10n-bot. Maybe it's a good idea
to filter those out, to get a line that's a little easier to
interpret.

> As of this writing, there are 207 revisions that have neither positive
> nor negative reviews associated with them. That still seems like a
> pretty big number. 30 of those are more than a month old, and some
> date back to May.

Could a third line be added that displays the number of open patch sets,
including those that have comments?

> How is the process working for everyone? Is stuff getting reviewed
> quickly enough? How has it been for the reviewers out there?

Review is a lot of work, especially when change sets are large. I try
to do a lot of it, both i18n/L10n-related changes and "easy patch
sets". I don't consider myself a good enough developer to approve the
harder/larger patch sets.

For some of the deprecated methods replacements, I have recently
teamed up with IAlex. Reciprocity works well in review.

A few things are very, very annoying. Most annoying is that pushing
directly to master still appears to be possible, which completely
removes patch sets from sight. They're there when you pull, but one
actually has to inspect the git log to see that they're there and who
did it. From what I understand, Chad is planning to plug this hole,
but it's not happening quickly enough for me.

Cheers!

Siebrand



niklas.laxstrom at gmail

Aug 23, 2012, 1:44 AM

Post #5 of 25 (4374 views)
Re: Code review statistics and trends

On 23 August 2012 04:37, Rob Lanphier <robla [at] wikimedia> wrote:

> How is the process working for everyone? Is stuff getting reviewed
> quickly enough? How has it been for the reviewers out there?

I'm doing a lot of code review[1], but I feel like it wastes my time
by being inefficient. I'm still looking forward to diffs in emails
and/or a one-page diff view in Gerrit. Re-reviewing big patches in
particular is just plain annoying when you have to go over all the
files just to find the changes.

My dashboard is mostly useless: it's full of -1 or +1 patches waiting
for action by the submitter, or +0 patches that I gave +1 or -1 before
a new patchset was submitted - these are indistinguishable from
totally new patches. It would also be nice to have different queues
for things like GSoC, Translate, sprint-related, and
i18n-review-requested patches.

It has also happened that my inline comments have been ignored because
a new patchset was submitted without addressing them, and subsequent
reviewers didn't notice the comments anymore.

My own patches have been reviewed quickly, but that's because
sprint-related patches are reviewed by other team members and
non-sprint patches usually by Siebrand. There have been some
exceptions for patches that need review from someone outside our team.

[1] Since there are no statistics for this, I have no idea whether I'm
doing more or less than average, but I'm definitely spending a lot of
time on it. It's mainly the positive feedback I get that makes it
rewarding.

-Niklas

--
Niklas Laxström



eranroz89 at gmail

Aug 23, 2012, 3:02 AM

Post #6 of 25 (4376 views)
Re: Code review statistics and trends

A real waste of reviewers' time is checking for code conventions,
which can be done automatically.

Static code analysis tools that automatically check conventions
(spaces, missing messages['qqq'], ...) would help reduce the effort
needed for review, allowing the human reviewer to concentrate fully on
real issues, and giving the committer almost immediate feedback on
these small, annoying convention issues.

Is there a code analysis plugin that can be set up with Jenkins?

Eran Roz



Platonides at gmail

Aug 23, 2012, 5:19 AM

Post #7 of 25 (4360 views)
Re: Code review statistics and trends

On 23/08/12 05:22, Jeremy Baron wrote:
> Hi,
>
> On Thu, Aug 23, 2012 at 2:46 AM, MZMcBride wrote:
>> Perhaps revisions that receive no feedback for thirty days should be
>> auto-merged and auto-deployed.
>
> That's a pretty horrible solution. IMO, all code *must* be reviewed[0]
> before merge.

I actually like it.
If an "evil approval bot" mailed you a warning that it will merge 12
pending changesets in two days if there's no action on your part, that
would prompt some quick action from you.

I recently had a trivial patch to operations/puppet waiting for more
than a month. When I noticed I hadn't added any reviewer and added
one, the changeset was taken care of the same day. But that also shows
that nobody was looking for new changes there.

I have also seen people approving their own commits to core, something
I'm not comfortable with.

We could add rules like: "If a change by someone with approval rights
on core has only positive votes, they can self-approve it after 1
week. If there was no review at all, after 2 weeks".

(Push repositories would obviously still allow auto-approval, and we
could add exceptions for e.g. translation changes.)
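The rule sketched above is easy to encode. A Python illustration (the
1- and 2-week thresholds are from the mail; the function and field
names are hypothetical):

```python
from datetime import datetime, timedelta

# Waiting periods proposed above; everything else here is made up.
POSITIVE_ONLY_WAIT = timedelta(weeks=1)
NO_REVIEW_WAIT = timedelta(weeks=2)

def may_self_approve(uploaded_at, votes, now):
    """votes: CodeReview scores (-2..+2) from *other* reviewers."""
    age = now - uploaded_at
    if any(v < 0 for v in votes):
        return False                      # negative feedback: never self-approve
    if votes:                             # only positive votes so far
        return age >= POSITIVE_ONLY_WAIT
    return age >= NO_REVIEW_WAIT          # no review at all

now = datetime(2012, 8, 23)
print(may_self_approve(datetime(2012, 8, 1), [1], now))   # positive votes, > 1 week
print(may_self_approve(datetime(2012, 8, 20), [], now))   # unreviewed, < 2 weeks
```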



I was also recently unhappy to discover that a patch I thought I still
had open had been abandoned without explanation. There can be good
reasons for abandoning a patch - it's a bad idea, no longer needed,
fixed in a different way in I123456... - or even "closing because it
has been waiting for a new patch for too long and I don't like seeing
this open" (which I suspect was the case), but *please*, if you're
closing another person's patch, leave an explanation!




daniel at nadir-seen-fire

Aug 23, 2012, 6:07 AM

Post #8 of 25 (4360 views)
Re: Code review statistics and trends

On Thu, 23 Aug 2012 05:19:39 -0700, Platonides <Platonides [at] gmail>
wrote:

> I actually like it.
> If an "evil approval bot" mailed you a warning that it will merge 12
> pending changesets in two days if there's no action on your part, that
> would prompt some quick action from you.
<snip>

I don't know if I quite like the thought of an evil reviewer bot.

It does make me think of a feature I've thought of adding to Gareth,
though.

One day, I can't remember why, this feature idea randomly popped into
my mind: review tagging.
Basically, items up for review can be tagged with a number of tags.
For us, we might create tags like (parser, api, skin, specialpages,
i18n, etc...), and anyone could add these tags to a review item.
Besides making things searchable, this lets users star/watch tags
rather than only star/watch individual review items.
That means Gareth could automatically notify you about new commits in
the part of MW that you are stalking/obsessed with.
((This is something we have basically always been missing.))
By getting reviewers to watch their own area of MW, they can
automatically jump in to review changes in that area, without needing
to hover over a dashboard of changes or be overwhelmed by
notifications of every single commit coming in.
And getting your commit reviewed would no longer be a matter of "try
to find one of the few people in our large community who reviews the
topic your commit is in" but instead the much simpler "tag your commit
with the relevant topics so that reviewers can see it".

Now, naturally, in the spirit of Gareth I didn't stop at manual
tagging and thought of a rules system to add tags automatically:

  if filechanged includes/parser/* add tag 'parser'
  if filechanged includes/api/* add tag 'api'
  if filechanged (includes/Skin.php, includes/SkinTemplate.php,
      includes/SkinLegacy.php) add tag 'skin'
  etc...

We could basically have tags automatically added based on what files were
updated in the commit.

Personally I'd probably be hovering over any commit tagged with skin,
linker, and a few other tags.
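A rules system like that fits in a few lines of Python; the rule table
below is hypothetical and the matching uses simple shell-style globs:

```python
from fnmatch import fnmatch

# Hypothetical auto-tagging rules in the spirit of the sketch above:
# each rule is a list of file globs plus the tag to apply.
RULES = [
    (["includes/parser/*"], "parser"),
    (["includes/api/*"], "api"),
    (["includes/Skin.php", "includes/SkinTemplate.php",
      "includes/SkinLegacy.php"], "skin"),
]

def tags_for_commit(changed_files):
    """Return the set of tags whose globs match any changed file."""
    return {tag
            for globs, tag in RULES
            for path in changed_files
            if any(fnmatch(path, g) for g in globs)}

print(sorted(tags_for_commit(["includes/api/ApiMain.php",
                              "includes/Skin.php"])))  # -> ['api', 'skin']
```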

--
~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://daniel.friesen.name]



madman.enwiki at gmail

Aug 23, 2012, 9:10 AM

Post #9 of 25 (4360 views)
Re: Code review statistics and trends

On Thu, Aug 23, 2012 at 8:19 AM, Platonides <Platonides [at] gmail> wrote:
> I recently had a trivial patch to operations/puppet waiting for more
> than a month. When I noticed I hadn't added any Reviewer, and added to
> it, the changeset was fixed in the same day. But that also shows that
> nobody looked for new changes there.

Thank you, Platonides; I kept meaning to mention this hurdle on the
list but forgot. New developers don't know whom they should add as
reviewers for their commits, or whether they should be adding any
reviewers at all. So it seems likely that their commits will take
longer to be reviewed (as Niklas says, Siebrand and his fellows *are*
awesome at getting to these when they can), and that is frustrating
before they understand what kind of backlog everyone else is
personally carrying and how best to work within such an environment.

Cheers,
-Madman



dvanliere at gmail

Aug 23, 2012, 10:22 AM

Post #10 of 25 (4366 views)
Re: Code review statistics and trends

On 2012-08-23, at 2:42 AM, Siebrand Mazeland (WMF) wrote:

> The graph for new changesets fluctuates a lot. I would guess this is
> due to change sets submitted by user l10n-bot. Maybe it's a good idea
> to filter those out, to get a line that's a little easier to
> interpret.

Hey Siebrand,

I prefer to keep the data collection as simple as possible. One way of
addressing this issue is as follows:

1) Go to http://gerrit-stats.wmflabs.org/graphs/mediawiki/edit
2) Click on 'Options'
3) Click on 'Advanced' (right side of the screen)
4) Click on 'rollPeriod' (bottom of the screen, yellow box)
5) This lets you create a moving average: replace the 1 with 7, and
each datapoint becomes the average of the past 7 days. The option
applies to both metrics, but it really smooths out the outliers.

If you want to save this, please use another slug name (click on
'Info', replace 'slug', then click 'Save'); otherwise you will have
changed Robla's original chart.
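For anyone reproducing the rollPeriod smoothing offline, it is
essentially a trailing moving average, roughly like this Python sketch
(made-up numbers; Limn's exact edge handling may differ):

```python
from collections import deque

def moving_average(series, window=7):
    """Trailing moving average: each point is the mean of the last
    `window` values seen so far (fewer at the start of the series)."""
    buf = deque(maxlen=window)
    out = []
    for value in series:
        buf.append(value)
        out.append(sum(buf) / len(buf))
    return out

# A spiky daily series (invented numbers) smooths out considerably:
daily = [5, 40, 6, 38, 7, 41, 5, 39, 6, 40]
print([round(x, 1) for x in moving_average(daily)])
```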


Best,
D


rlane32 at gmail

Aug 23, 2012, 11:58 AM

Post #11 of 25 (4361 views)
Re: Code review statistics and trends

> I actually like it.
> If "Evil approval bot" mailed you warning that it will merge 12 pending
> changesets in two days if there's no action from your part, that would
> force some promptly action by you.
>

Not a chance in hell we'll ever allow this for the operations repo or
the mediawiki deployment branches.

> I recently had a trivial patch to operations/puppet waiting for more
> than a month. When I noticed I hadn't added any Reviewer, and added to
> it, the changeset was fixed in the same day. But that also shows that
> nobody looked for new changes there.
>

That's because ops tends to be swamped with requests constantly. We
work mostly on an interruption model. I doubt we'll change our
workflow any time soon to check for changes that haven't asked for a
review.

> I have also seen people approving their own commits to core, something
> I'm not comfortable with.
>

I dislike this very much. The workflow in ops requires it, but there's
no reason it should happen in MediaWiki core.

> I was also recently unhappy when I discovered that one patch I thought I
> had open, had been abandoned without explanation. There can be good
> reasons for doing that, this is a bad idea, no longer needed, fixed in a
> different way in I123456... or even "closing because it has been waiting
> for a new patch for too long and I don't like seeing this open" (which I
> suspect was the case), but *please*, if you're closing another people's
> patch, leave an explanation!
>

Indeed. Like bugs, something should never be dropped without saying why.

- Ryan



hashar+wmf at free

Aug 23, 2012, 12:38 PM

Post #12 of 25 (4361 views)
Re: Code review statistics and trends

Le 23/08/12 12:02, Eran Rosenthal a écrit :
> Is there a code analysis plugin that can be setup with jenkins?

We could use PHP_CodeSniffer, which can easily be extended to
implement our existing style.

A very basic skeleton is in our git repository:

https://gerrit.wikimedia.org/gitweb/mediawiki/tools/codesniffer.git

It needs more rules implemented and, of course, a Jenkins job to
trigger it on each commit :-)

--
Antoine "hashar" Musso




hashar+wmf at free

Aug 23, 2012, 12:45 PM

Post #13 of 25 (4365 views)
Re: Code review statistics and trends

Le 23/08/12 10:44, Niklas Laxström a écrit :
> I'm doing a lot of code review[1], but I feel like it's wasting my
> time by being inefficient. I'm still looking forward to diffs in
> emails and/or one page diff view in Gerrit. Especially re-reviewing
> big patches is just plain annoying when you have to go over all files
> just to find the changes.

I review big patches by fetching the change from the repository and
using my local diff tool. Of course, you still need to use the Gerrit
diff view to add inline comments.

> My dashboard is mostly useless, it's full of -1 or +1 patches waiting
> for an action by the submitter. Or +0 patches I gave +1 or -1 before
> new patchset was submitted - they are indistinguishable from totally
> new patches. It would also be nice to have different queues for things
> like GSoC, Translate, sprint related, i18n review requested.

I am pretty sure Gerrit will get saved searches in the future; Chad
will be able to confirm.
As for the dashboard, just look at the lines which have no score from
you (they should be bold) and give them a -1/+1 score; ignore the
rest. If a patch was reset to +0, then it needs to be reviewed by you
again (most probably a trivial review).

> It has also happened that my inline comments have been ignored because
> a new patchset was submitted without addressing them, and subsequent
> reviewers didn't notice the comments anymore.

It has happened to me a few times too. One feature that could be added
to Gerrit would be to prevent submitting a change until all inline
comments have been replied to and marked as closed/fixed.

<snip>

--
Antoine "hashar" Musso




innocentkiller at gmail

Aug 23, 2012, 1:15 PM

Post #14 of 25 (4366 views)
Re: Code review statistics and trends

On Thu, Aug 23, 2012 at 3:45 PM, Antoine Musso <hashar+wmf [at] free> wrote:
> I am pretty sure Gerrit will come with saved search in the future. Chad
> will be able to confirm.
>

They're not saved yet, but they do exist in 2.5. Docs from rc0:
http://gerrit-dev.wmflabs.org/r/Documentation/user-custom-dashboards.html

-Chad



Platonides at gmail

Aug 23, 2012, 2:34 PM

Post #15 of 25 (4380 views)
Re: Code review statistics and trends

On 23/08/12 15:07, Daniel Friesen wrote:
<snip>

I think we had such support in CodeReview (or at least considered
adding it).
Having to find out / magically know whom to add as a reviewer is a
Gerrit drawback.




epriestley at phacility

Aug 23, 2012, 2:54 PM

Post #16 of 25 (4365 views)
Re: Code review statistics and trends

For what it's worth, Phabricator has a fairly close match to this
workflow in Herald, which lets you trigger
audits/reviews/notifications based on various business rules:

http://i.imgur.com/GSYSD.png

(Phabricator's "projects" are roughly like tags.)



ssastry at wikimedia

Aug 23, 2012, 3:06 PM

Post #17 of 25 (4368 views)
Permalink
Re: Code review statistics and trends [In reply to]

This looks good to me. -Subbu.

> For what it's worth, Phabricator has a fairly close match to this workflow in Herald, which allows you to trigger audits/reviews/notifications based on various business rules:
>
> http://i.imgur.com/GSYSD.png
>
> (Phabricator's "projects" are roughly like tags.)
>
> ... snip ...




jarry1250 at gmail

Aug 25, 2012, 2:01 PM

Post #18 of 25 (4364 views)
Permalink
Re: Code review statistics and trends [In reply to]

I realise that many contributors are WMF staff, and many WMF staff
work a relatively predictable 5-day week, but the "new changesets"
graph still seems a little spiky to my eyes.

Given the +- 10 changesets range, how much confidence should I be
placing in these numbers?

Thanks,
Harry

--
Harry Burt (User:Jarry1250)



dvanliere at gmail

Aug 25, 2012, 3:23 PM

Post #19 of 25 (4352 views)
Permalink
Re: Code review statistics and trends [In reply to]

Hi Harry,
The changeset numbers are accurate; the spikes are caused by translatewiki. See my response to Siebrand on how to remove the outliers and create a smoother chart.
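The outlier cleanup described here could look roughly like the following. This is a minimal sketch, assuming daily changeset counts in a plain list; the function name, spike threshold, and smoothing window are illustrative assumptions, not what the actual gerrit-stats code does.

```python
# Sketch: cap outlier days (e.g. translatewiki batch imports) at a
# multiple of the median, then smooth with a trailing moving average.
from statistics import median

def smooth_counts(daily_counts, spike_factor=3.0, window=7):
    """Cap days above spike_factor * median, then apply a trailing
    moving average over the given window of days."""
    med = median(daily_counts)
    capped = [min(c, spike_factor * med) for c in daily_counts]
    smoothed = []
    for i in range(len(capped)):
        lo = max(0, i - window + 1)
        chunk = capped[lo:i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed
```

Capping rather than dropping outlier days keeps the x-axis intact, so the smoothed line still aligns with the calendar dates on the chart.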

Best


Diederik

Sent from my iPhone

On 2012-08-25, at 17:01, Harry Burt <jarry1250 [at] gmail> wrote:

> I realise that many contributors are WMF staff, and many WMF staff
> work a relatively predictable 5-day week, but the "new changesets"
> graph still seems a little spiky to my eyes.
>
> Given the +- 10 changesets range, how much confidence should I be
> placing in these numbers?
>
> Thanks,
> Harry
>
> --
> Harry Burt (User:Jarry1250)
>



gpaumier at wikimedia

Aug 31, 2012, 6:53 AM

Post #20 of 25 (4289 views)
Permalink
Re: Code review statistics and trends [In reply to]

Hi,

On Thu, Aug 23, 2012 at 3:37 AM, Rob Lanphier <robla [at] wikimedia> wrote:
>
> Our Analytics crew have worked out how to generate a graph that gives
> us a view into our code review backlog:
> http://gerrit-stats.wmflabs.org/graphs/mediawiki

There seems to be a 10-day lag (no data after August 21st). Is this a
bug or a feature?

--
Guillaume Paumier



dvanliere at gmail

Aug 31, 2012, 6:58 AM

Post #21 of 25 (4288 views)
Permalink
Re: Code review statistics and trends [In reply to]

>
> There seems to be a 10-day lag (no data after August 21st). Is this a
> bug or a feature?


Data hasn't been pushed to gerrit-stats for 10 days; something is wrong
with the script. We will fix it today.
D


bawolff+wn at gmail

Aug 31, 2012, 4:39 PM

Post #22 of 25 (4271 views)
Permalink
Re: Code review statistics and trends [In reply to]

Slightly hijacking this thread, but it's related.

I tried my hand at creating some gerrit related statistics:
https://toolserver.org/~bawolff/gerrit-stats.htm

Statistics might be the wrong word; it's more the first 25 results of a
saved search in reverse order. But it also includes the total number of
changesets with no review whatsoever, and some other things. There may
be more things on it in the future.

Last of all it includes a wall of shame, because I missed the old wall.

This is still a rather rough version (as can be seen from the lack of
CSS or anything pretty). It will also update rather sporadically (I
don't feel all that good about putting my private key on the toolserver
to turn this into a cron job there, so for now it is run by hand).

Anyhow, feedback appreciated :)

[Also, the source isn't public at the moment, but if anyone wants it let
me know. It is a fairly hacky PHP script at the moment.]



matma.rex at gmail

Sep 1, 2012, 2:17 AM

Post #23 of 25 (4274 views)
Permalink
Re: Code review statistics and trends [In reply to]

2012/9/1 bawolff <bawolff+wn [at] gmail>:
> Slightly hijacking this thread but is related.
>
> I tried my hand at creating some gerrit related statistics:
> https://toolserver.org/~bawolff/gerrit-stats.htm

The "Wall of Shame" links don't work for me – neither "View gerrit
query" or number-links. (Using Opera.)

-- Matma Rex



bawolff+wn at gmail

Sep 1, 2012, 12:26 PM

Post #24 of 25 (4265 views)
Permalink
Re: Code review statistics and trends [In reply to]

On Sat, Sep 1, 2012 at 6:17 AM, Bartosz Dziewoński <matma.rex [at] gmail> wrote:
> 2012/9/1 bawolff <bawolff+wn [at] gmail>:
>> Slightly hijacking this thread but is related.
>>
>> I tried my hand at creating some gerrit related statistics:
>> https://toolserver.org/~bawolff/gerrit-stats.htm
>
> The "Wall of Shame" links don't work for me – neither "View gerrit
> query" or number-links. (Using Opera.)
>
> -- Matma Rex

Sorry, I was encoding the URL the wrong way. It should be fixed now. No
idea why it was broken in Opera and not Firefox.

--bawolff



hashar+wmf at free

Sep 1, 2012, 12:58 PM

Post #25 of 25 (4263 views)
Permalink
Re: Code review statistics and trends [In reply to]

On 01/09/12 01:39, bawolff wrote:
> I tried my hand at creating some gerrit related statistics:
> https://toolserver.org/~bawolff/gerrit-stats.htm

Kudos on bringing back the wall of shame!

--
Antoine "hashar" Musso


