I don't know the full story, but the editor side has a Berkeley flat-file database with each category as a folder, and many tools to move listings between unreviewed and reviewed status, move them to other categories, and so on. All editing actions are also logged, both by user and by category.
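Purely to illustrate that shape (the paths, file names and fields here are my own invention, not the real ODP code), a category-per-folder store with unreviewed/reviewed pools and a per-category edit log might look roughly like this:

    # Hypothetical sketch of a category-per-folder store; paths and file
    # names are invented for illustration, not taken from the real system.
    import json, time
    from pathlib import Path

    ROOT = Path("/data/categories")   # e.g. /data/categories/Top/Arts/Music/

    def review_listing(category, url, editor):
        """Move one listing from a category's unreviewed pool to its
        reviewed pool, and append the action to the category's edit log."""
        cat_dir = ROOT / category
        unreviewed = json.loads((cat_dir / "unreviewed.json").read_text())
        reviewed = json.loads((cat_dir / "reviewed.json").read_text())

        reviewed[url] = unreviewed.pop(url)

        (cat_dir / "unreviewed.json").write_text(json.dumps(unreviewed, indent=2))
        (cat_dir / "reviewed.json").write_text(json.dumps(reviewed, indent=2))

        with (cat_dir / "editlog.txt").open("a") as log:
            log.write(f"{time.strftime('%Y-%m-%d %H:%M')}\t{editor}\treviewed\t{url}\n")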
The site navigation uses the folder hierarchy with breadcrumb navigation, plus @links and RelatedCategory links to bridge to other categories that could be considered child or related categories.
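As a rough sketch of how that might be represented (again my own naming, not the ODP's), the breadcrumbs fall straight out of the category path, while the @links and related-category links are just extra pointers stored with each category:

    # Breadcrumbs come from the folder path; @links and RelatedCategory
    # links are extra pointers that bridge into other branches of the tree.
    def breadcrumbs(category_path):
        # "Top/Arts/Music" -> [("Top", "Top"), ("Arts", "Top/Arts"), ("Music", "Top/Arts/Music")]
        parts = category_path.split("/")
        return [(parts[i], "/".join(parts[:i + 1])) for i in range(len(parts))]

    # Hypothetical per-category bridge data (structure invented for illustration):
    bridges = {
        "Top/Arts/Music": {
            "@links": ["Top/Shopping/Music"],                          # shown as a child
            "related": ["Top/Business/Arts_and_Entertainment/Music"],  # shown as related
        },
    }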
After an edit, the editor-side HTML pages are regenerated within seconds, but the data can take hours or days to be copied over to the public side. There is a "job queue" that lists pages waiting to be regenerated.
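Conceptually that queue is just a list of categories whose public HTML is stale; something along these lines (structure assumed, not the real implementation):

    # Toy "job queue" of pages awaiting regeneration; how the real system
    # queues and prioritises work is not something I know.
    from collections import deque

    job_queue = deque()

    def enqueue_after_edit(category, bridges):
        # The edited category needs new HTML, and so does anything that
        # displays it: its parent, and any category that @links to it.
        job_queue.append(category)
        job_queue.append(category.rsplit("/", 1)[0])   # parent category
        job_queue.extend(c for c, b in bridges.items()
                         if category in b.get("@links", []))

    def drain(render):
        while job_queue:
            render(job_queue.popleft())   # regenerate one public page at a time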
In the background, a spider process runs for 4 or 5 days gathering the category data into the RDF. It works slowly because it has half a million categories to traverse. The RDF pops out about weekly and is made available on a separate part of the site (along with old archived copies going way back).
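The published dump is one big RDF/XML file; a toy version of that crawl over the category folders (element names approximated from memory of the public content.rdf.u8 dump, so treat them as illustrative only) could be:

    # Toy crawler that walks every category folder and writes one <Topic>
    # block per category; the real spider and exact RDF schema will differ.
    import json
    from pathlib import Path
    from xml.sax.saxutils import quoteattr

    def dump_rdf(root, out_path):
        with open(out_path, "w", encoding="utf-8") as out:
            out.write('<RDF xmlns:r="http://www.w3.org/TR/RDF/">\n')
            # ~500,000 category folders to traverse, hence the multi-day runtime
            for listing_file in sorted(root.rglob("reviewed.json")):
                category = listing_file.parent.relative_to(root).as_posix()
                out.write(f'  <Topic r:id={quoteattr(category)}>\n')
                for url in json.loads(listing_file.read_text()):
                    out.write(f'    <link r:resource={quoteattr(url)}/>\n')
                out.write('  </Topic>\n')
            out.write('</RDF>\n')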
The RDF file is then fed to the search server, and after a couple of days the search database is up to date with whatever was in that RDF, but by then the search is already a week behind the reality of what the editors are seeing in their categories.
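On the search side the job amounts to re-reading that RDF and rebuilding an index from it, so the index can only ever reflect the snapshot it was built from. A minimal in-memory stand-in (the real search server is obviously far more involved):

    # Toy inverted index built from (category, url) pairs pulled out of the
    # dump; stands in for whatever the real search server actually does.
    from collections import defaultdict

    def build_index(records):
        """records: iterable of (category_path, url) pairs from the RDF."""
        index = defaultdict(set)
        for category, url in records:
            for segment in category.lower().replace("_", " ").split("/"):
                for token in segment.split():
                    index[token].add((category, url))
        return index

    def search(index, term):
        return sorted(index.get(term.lower(), set()))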