I'm trying to import data from a DMOZ slice, and the process keeps getting killed after about 65,000 links. There are about 300,000 links total, so I'm wondering if there's a way to restart the import from the point where it was killed the last time through. I've used --rdf-update to make sure no duplicates get added, but even so, it keeps running out of steam in the 60,000-70,000 range. Is there some kind of "resume" feature?
If that's not an option, what would be the easiest way to delete a chunk from the import file (i.e. the chunk representing the data that has so far been successfully imported)? Maybe I'm just an ignorant tool, but I don't know how to do that except by using a text editor, and my text editor isn't too happy about working with a 65MB file...
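In case it helps: the best workaround I've come up with so far is a small script that streams the file and skips the first N lines, so nothing ever has to fit in an editor. This is just a sketch; the file names and the DONE_LINES value below are made up, and it assumes the cut point happens to land on a record boundary in the RDF rather than in the middle of an entry.

    # Copy everything after the first DONE_LINES lines of the import file
    # into a new file, streaming so the 65MB original never has to be
    # loaded whole. File names and DONE_LINES are placeholders.
    DONE_LINES = 1_000_000                # lines covered by the part already imported
    SRC = "dmoz-slice.rdf"                # original import file
    DST = "dmoz-slice.remaining.rdf"      # the part still to be imported

    with open(SRC, "rb") as src, open(DST, "wb") as dst:
        for lineno, line in enumerate(src, start=1):
            if lineno > DONE_LINES:
                dst.write(line)

(I gather something like "tail -n +LINE file > rest" would do the same thing in one shot on a Unix box, if that's easier.)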
Thanks in advance for any advice.
Fractured Atlas :: Liberate the Artist
Services: Healthcare, Fiscal Sponsorship, Marketing, Education, The Emerging Artists Fund