Guest martin30 Posted December 10, 2003

I can't find a program that actually works to parse ODP dumps. I don't want to search DMOZ in real time; I want to build my own SQL database. I couldn't find anything useful at http://dmoz.org/Computers/Internet/Searching/Directories/Open_Directory_Project/Use_of_ODP_Data/Upload_Tools — does anybody know how to export the data to an SQL database?
samiam Posted December 10, 2003

I haven't used it, but is the "ODP Data Parser" at http://www.ohardt.com/computer/dev/java/ ("a simple XML parser that inserts the ODP structure into a MySQL DB", odp.reader-0.1.zip) along the lines of what you're looking for? Also take a look at http://rainwaterreptileranch.org/steve/sw/odp/rdflist.html
Guest martin30 Posted December 10, 2003

Thank you! But those programs parse the ODP dump on a server, and since I am not on one, I would have to upload the whole DMOZ file when I only want to parse a relatively small part of the directory.
hmf Posted December 15, 2003

Then your best bet is TulipChain (http://ostermiller.org/tulipchain/). It reads ODP content through the web interface, which makes sense if you are only interested in your own categories' data. (TulipChain's approach is not usable for parsing larger parts of the directory.) However, it does not insert its data into a database. I am currently working on such a feature, but I am using Berkeley DB, an object database, because MySQL is far too slow on hierarchical data.
Meta theseeker Posted December 15, 2003

TulipChain is editor-only, I believe.
Meta windharp Posted December 15, 2003

Nope, TulipChain is free for all. Several of its advanced features are available to editors only, but the program itself can be used by anyone. As stated above, though, it does not solve the problem. It can produce HTML output of a complete tree with some tricks, but if you want to spider a small part of the ODP, I suggest spidering it directly rather than with a tool like this, whose output would have to be spidered again.

Curlie Meta/kMeta Editor windharp
bobrat Posted December 15, 2003

Parsing and extracting a subset of data from the RDF dumps is not in itself time consuming. My experience is that downloading the full RDF dump and extracting a subset is fairly easy and fast to do. Adding the data to MySQL, which is what I have been doing, seems to be the time killer. (Which is why I was only extracting a subset for one area of the ODP.)
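The subset-extraction step bobrat describes can be sketched in a few lines. This is a minimal sketch, assuming the historical DMOZ dump layout in which `content.rdf.u8` is a flat sequence of small `<Topic>` and `<ExternalPage>` blocks; the file name, prefix, and function name are illustrative, not from the post.

```python
# Sketch: stream an ODP RDF dump and copy out only the blocks that belong
# to one branch of the directory. Assumes the historical <Topic> /
# <ExternalPage> element layout of content.rdf.u8 (an assumption).
def extract_subset(src_path, prefix, dst_path):
    """Write every <Topic> or <ExternalPage> block mentioning `prefix`
    (e.g. "Top/Arts/Online_Writing") to dst_path, skipping the rest."""
    block = []
    with open(src_path, encoding="utf-8", errors="replace") as src, \
         open(dst_path, "w", encoding="utf-8") as dst:
        for line in src:
            block.append(line)
            # The dump is a flat sequence of small blocks, so buffering
            # one block at a time keeps memory use constant even on the
            # full multi-gigabyte file.
            if "</Topic>" in line or "</ExternalPage>" in line:
                text = "".join(block)
                if prefix in text:
                    dst.write(text)
                block = []
```

Because it streams line by line, this never needs to hold the whole dump in memory, which matches bobrat's observation that extraction itself is fast; the slow part is the database load.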
Meta theseeker Posted December 15, 2003

>> Adding the data to mySQL, which is what I have been doing, seems to be the time killer. <<

I extract most of the information and put it into MySQL tables, though my programs are very editor-oriented, so they probably wouldn't be of much use to anyone else. One way I speed this up is to extract the information into a tab-delimited flat file. For example, if I have a table with URL, Title, Description, and Category, then each site goes on one line, with the URL, title, description, and category separated by tabs. Once I'm finished writing all the data to flat files, I truncate the table (or delete everything from it) and then run:

LOAD DATA LOCAL INFILE "flatfile.txt" INTO TABLE tablename

I don't use an index on the table containing the 4 million+ sites, but if you do, you don't want the index there until you've loaded all the sites into the table. Then create the index.

One other very big time (and space) saver is to create a catid table. This table should have only two fields, catid and path, where catid is the category's id in the RDF and path is the full path of the category, like Arts/Online_Writing. Then, in any other table where you need to specify a category, use the catid. When you query a table with a catid, join it with the catid table to get the path.

Using those techniques, and a few others that are more complicated to explain, I've reduced the parsing time to under an hour.
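The flat-file-plus-catid technique above can be sketched as follows. All names here (`write_flat_file`, `sites`, `catid_map`, `flatfile.txt`) are illustrative assumptions, not taken from theseeker's actual programs; only the `LOAD DATA LOCAL INFILE` statement and the index-after-load ordering come from the post.

```python
# Sketch of the bulk-load technique described above: write one tab-separated
# line per site, replacing the full category path with a numeric catid so
# the big table never stores the long path string.
def write_flat_file(rows, flat_path):
    """rows: iterable of (url, title, description, category_path) tuples.
    Returns the catid -> path mapping built while writing, for loading
    into the separate catid table."""
    catids = {}
    with open(flat_path, "w", encoding="utf-8") as f:
        for url, title, desc, path in rows:
            # Assign each distinct category path a small integer id.
            catid = catids.setdefault(path, len(catids) + 1)
            f.write(f"{url}\t{title}\t{desc}\t{catid}\n")
    return {v: k for k, v in catids.items()}

# Matching MySQL statements (illustrative schema). Note the index on the
# big table is created only AFTER the bulk load, as the post advises.
BULK_LOAD_SQL = """
CREATE TABLE catid_map (catid INT PRIMARY KEY, path VARCHAR(255));
CREATE TABLE sites (url TEXT, title TEXT, description TEXT, catid INT);
LOAD DATA LOCAL INFILE 'flatfile.txt' INTO TABLE sites;
CREATE INDEX idx_sites_catid ON sites (catid);
"""
```

Queries then join `sites` to `catid_map` on `catid` to recover the full path, which is the space saving theseeker describes.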
Guest martin30 Posted December 15, 2003

Thank you hmf, that is exactly what I was looking for! Great!
nakulgoyal Posted April 11, 2004

Re: Thanks hmf !! The info you provided was useful for me as well. Just for general knowledge, you see :-)
giz Posted April 11, 2004

>> But those programs parse the odp dump while you are on a server, and since I am not, I would have to upload the whole dmoz file when I am only wanting to parse a relatively small part of the directory. <<

You could always run a local copy of Apache, PHP, and MySQL, so that it appears that you are on a server. Access it through http://localhost/ or http://127.0.0.1/ etc.
nakulgoyal Posted April 11, 2004

>> You could always run a local copy of Apache, PHP, and MySQL, so that it appears that you are on a server. Access it through http://localhost/ or http://127.0.0.1/ etc. <<

Good idea g1smd !! I will just try it !!
marengo Posted June 18, 2004

I am making my online directory for webmasters now (http://bestcatalog.net/) and I used Extreme Dmoz Extractor by Nicecoder: a very good application. http://www.nicecoder.com/dmoz_extractor.php