Greetings!
I issued the following command:
./nph-import.cgi --import RDF -source content.rdf.u8 --rdf-category="Top" --rdf-add-date="2002-01-01" --destination="/var/www/cgi-bin/path/cgi/admin/defs"
The import of DMOZ data got as far as the Recreation category, with over a million URLs imported. The process was then killed, either by a server reboot or by my client machine going offline; I'm not sure which.
I now have issued:
./nph-import.cgi --import RDF --rdf-update -source content.rdf.u8 --rdf-category="Top" --rdf-add-date="2002-01-01" --destination="/var/www/cgi-bin/path/cgi/admin/defs"
I noticed that even with --rdf-update, it still started at the Adult category and worked its way down. This is not a new RDF file; it is the same one, so I would expect it to read from Adult through Recreation, recognize that it stopped at Recreation, and resume from there. It is not doing that.
Is this a problem, or is it working as designed? Is there any harm in letting it continue and then simply removing the duplicates afterwards, via whatever "get rid of duplicates" script is presumably part of Links SQL?
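For what it's worth, the cleanup I have in mind would boil down to something like this. It's a sketch in Python with SQLite purely to illustrate the logic; the `Links` table and `ID`/`URL` column names are my assumptions, not the actual Links SQL schema:

```python
# Sketch: keep the lowest-ID row per URL, delete the rest.
# SQLite is used only for illustration; Links/ID/URL are assumed names.
import sqlite3

def remove_duplicate_urls(conn):
    """Delete every row whose URL already exists under a lower ID.

    Returns the number of rows removed.
    """
    cur = conn.execute(
        "DELETE FROM Links WHERE ID NOT IN "
        "(SELECT MIN(ID) FROM Links GROUP BY URL)"
    )
    conn.commit()
    return cur.rowcount

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE Links (ID INTEGER PRIMARY KEY, URL TEXT)")
    conn.executemany(
        "INSERT INTO Links (URL) VALUES (?)",
        [("http://a.example/",), ("http://b.example/",), ("http://a.example/",)],
    )
    print(remove_duplicate_urls(conn))  # 1 duplicate removed
```

If that's roughly what the built-in duplicate handling does, letting the import run to completion and deduplicating afterwards seems harmless to me, but I'd like to confirm.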
Thanks.