asharma at wikimedia
Aug 17, 2011, 10:12 AM
Re: Announcing Wikihadoop: using Hadoop to analyze Wikipedia dump files

Way cool - Look forward to a brown bag on this project - Diederik? :-)
On Wed, Aug 17, 2011 at 10:05 AM, Tomasz Finc <tfinc [at] wikimedia> wrote:
> Very cool!
> On Wed, Aug 17, 2011 at 9:58 AM, Diederik van Liere <dvanliere [at] gmail> wrote:
>> Over the last few weeks, Yusuke Matsubara, Shawn Walker, Aaron Halfaker
>> and Fabian Kaelin (who are all Summer of Research fellows [1]) have worked
>> hard on a customized stream-based InputFormatReader that allows parsing of
>> both bz2-compressed and uncompressed files of the full Wikipedia dump (the
>> dump files with the complete edit histories) using Hadoop. Prior to
>> WikiHadoop and the accompanying InputFormatReader, it was not possible to
>> use Hadoop to analyze the full Wikipedia dump files (see the detailed
>> tutorial / background for an explanation of why).
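>> To give a flavor of the approach (a toy Python sketch for illustration
>> only; the actual implementation in the repo is Java and also handles
>> split boundaries between workers), a stream-based reader decompresses
>> incrementally and scans for element boundaries instead of loading the
>> dump into memory:
>>
>>   # Toy illustration of stream-based record reading over a bz2 dump.
>>   # Only a bounded buffer is held in memory, never the whole file.
>>   import bz2
>>
>>   def pages(path, delimiter=b'</page>'):
>>       buf = b''
>>       with bz2.BZ2File(path) as stream:  # decompress incrementally
>>           for chunk in iter(lambda: stream.read(1 << 20), b''):
>>               buf += chunk
>>               while delimiter in buf:
>>                   record, buf = buf.split(delimiter, 1)
>>                   yield record + delimiter
>>
>>   # for page_xml in pages('enwiki-pages-meta-history.xml.bz2'): ...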
>> In practical terms, this means:
>> 1) We can now harness Hadoop's distributed computing capabilities to
>> analyze the full dump files.
>> 2) You can send either one or two revisions to a single mapper, so it's
>> possible to diff two revisions and see what content has been added /
>> removed (a rough mapper sketch follows this list).
>> 3) You can exclude namespaces by supplying a regular expression.
>> 4) We are using Hadoop's Streaming interface, which means people can use
>> this InputFormatReader from different languages such as Java, Python and
>> Ruby.
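>> As a rough sketch of points 2 and 4 combined (the field handling here is
>> hypothetical; see the tutorial for the real record format), a Python
>> streaming mapper that diffs a revision pair could look like:
>>
>>   #!/usr/bin/env python
>>   # Hypothetical Hadoop Streaming mapper: assumes WikiHadoop hands each
>>   # map task the raw XML of one or two <revision> elements on stdin.
>>   import difflib
>>   import re
>>   import sys
>>
>>   def revision_texts(xml):
>>       # Pull the wiki text out of every <text> element in the record.
>>       return re.findall(r'<text[^>]*>(.*?)</text>', xml, re.DOTALL)
>>
>>   texts = revision_texts(sys.stdin.read())
>>   if len(texts) == 2:
>>       old, new = texts
>>       diff = list(difflib.ndiff(old.splitlines(), new.splitlines()))
>>       added = sum(1 for line in diff if line.startswith('+ '))
>>       removed = sum(1 for line in diff if line.startswith('- '))
>>       # Emit a tab-separated key/value record, as Streaming expects.
>>       print('added/removed\t%d\t%d' % (added, removed))
>>
>> You would wire such a script in with the Streaming jar's -mapper option;
>> the tutorial covers the exact invocation.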
>> The source code is available at: https://github.com/whym/wikihadoop
>> A more detailed tutorial and installation guide is available at:
>> (Apologies for cross-posting to wikitech-l and wiki-research-l)
>> [1] http://blog.wikimedia.org/2011/06/01/summerofresearchannouncement/
Wikitech-l mailing list
Wikitech-l [at] lists