Platonides at gmail
Apr 23, 2012, 8:28 AM
Re: Request for Comments: Cross site data access for Wikidata

On 23/04/12 14:45, Daniel Kinzler wrote:
> *#* if we only update language links, the page doesn't even need to be
> re-parsed: we just update the languagelinks in the cached ParserOutput object.
It's not that simple: for instance, there may be several ParserOutputs
for the same page. On the bright side, you probably don't need it. I'd
expect that if interwikis are handled through wikidata, they are
completely replaced through a hook, so there's no need to touch the
ParserOutput.
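Something along these lines, as a rough Python sketch (the hook name,
registry and fetch function are all made up for illustration, not
MediaWiki's actual API):

from typing import Callable

# hypothetical hook registry, standing in for MediaWiki's hook mechanism
_hooks: dict[str, list[Callable]] = {}

def register_hook(name: str, handler: Callable) -> None:
    _hooks.setdefault(name, []).append(handler)

def run_hook(name: str, *args) -> None:
    for handler in _hooks.get(name, []):
        handler(*args)

def fetch_sitelinks_from_repo(page_title: str) -> list[str]:
    # stand-in for a lookup against the wikidata repository
    return ["de:Beispiel", "fr:Exemple"]

def wikidata_language_links(page_title: str, links: list[str]) -> None:
    """Replace whatever the parser cached with links from the repository."""
    links[:] = fetch_sitelinks_from_repo(page_title)

register_hook("SkinLanguageLinks", wikidata_language_links)

# At render time the skin asks for language links; the cached ParserOutput
# is never touched, so nothing needs reparsing for link-only changes.
links = ["old:Stale"]
run_hook("SkinLanguageLinks", "Example", links)
print(links)  # ['de:Beispiel', 'fr:Exemple']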
> *# invalidate the (parser) cache for all pages that use the respective item (for
> now we can assume that we know this from the language links)
And in that case, you don't need to invalidate the parser cache. You only
need to if factual data was embedded into the page.
I think a save/purge should always fetch the data. We can't store the
copy in the parsed object.
What we can do is fetch it from a local cache or directly from the
repository.
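Roughly what I mean, as a Python sketch (LocalCache and Repository are
invented stand-ins; in practice it would be something like memcached in
front of the repository's database):

import time

class Repository:
    """Stand-in for the wikidata repository (the authoritative store)."""
    def get(self, item_id: str) -> dict:
        return {"id": item_id, "sitelinks": {"de": "Beispiel"}}

class LocalCache:
    """Per-wiki cache; entries expire, so a missed push only means staleness."""
    def __init__(self, ttl: float = 300.0):
        self.ttl = ttl
        self.store: dict[str, tuple[float, dict]] = {}

    def get(self, key: str):
        entry = self.store.get(key)
        if entry and time.time() - entry[0] < self.ttl:
            return entry[1]
        return None

    def put(self, key: str, value: dict) -> None:
        self.store[key] = (time.time(), value)

def fetch_item(item_id, cache, repo, force=False) -> dict:
    """Normal parses read through the cache; a save/purge goes to the repo."""
    if not force:
        cached = cache.get(item_id)
        if cached is not None:
            return cached
    data = repo.get(item_id)  # the data itself never lives in the parsed object
    cache.put(item_id, data)
    return data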
You mention the cache for the push model, but I think it deserves a
> === Variation: shared database tables ===
> * This approach greatly lowers the amount of space used in the database
> * it doesn't change the number of http requests made
> ** it does however reduce the amount of data transferred via http (but not by
> much, at least not compared to pushing diffs)
> * it doesn't change the number of database requests, but it introduces
> cross-cluster requests
You'd probably also want multiple dbs (let's call them WikiData
repositories), partitioned by content (and its update frequency). You
could then use different frontends (as Chad says, "similar to FileRepo").
So, a WikiData repository with the atomic properties of each element would
happily live in a DBA file. Interwikis would have to be on a MySQL db, etc.
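To illustrate, a loose Python sketch (dbm and sqlite3 stand in for a DBA
file and MySQL; the class names and the "kind" keys are invented):

import dbm       # stands in for a dba-style flat file
import sqlite3   # stands in for the MySQL-backed repository

class DbaRepo:
    """Read-mostly data (e.g. atomic properties of the elements) in a flat file."""
    def __init__(self, path: str = "elements.db"):
        self.path = path

    def get(self, key: str):
        with dbm.open(self.path, "c") as db:
            value = db.get(key)
            return value.decode() if value is not None else None

class SqlRepo:
    """Frequently updated data (e.g. interwikis) in a relational database."""
    def __init__(self, dsn: str = ":memory:"):
        self.conn = sqlite3.connect(dsn)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS items (k TEXT PRIMARY KEY, v TEXT)")

    def get(self, key: str):
        row = self.conn.execute(
            "SELECT v FROM items WHERE k = ?", (key,)).fetchone()
        return row[0] if row else None

# A client wiki picks the repository by content type, not per item:
repositories = {"elements": DbaRepo(), "interwiki": SqlRepo()}

def lookup(kind: str, key: str):
    return repositories[kind].get(key)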