sfxmystica, the issue is fairly well understood, and the solutions (at least in the abstract) are not that hard. Actual implementation would be a lot of work, of course.
But perhaps the main message is still not getting through: that for editors and for staff, this is simply and genuinely not a problem at all; and even for submitters of legitimate sites, it is an emotional issue only, with no practical consequences whatsoever.
Only for spammers does any of this information have practical consequences, and for them every bit of information is absolutely critical. So editors really want less information getting out. The advantage of a human-driven forum like this is that when you ask about a website, an experienced editor basically ALWAYS looks at it. If it was wrongly deleted, that will be fixed; if it should have been deleted but wasn't, that too will be fixed. (The former happens about 1% of the time, the latter somewhere under 10% of the time.)
Now do your profit-loss analysis. By asking here, affiliate spammers run a risk of exposure, with virtually a 100% chance of getting the tactical nuke on their submission. (There's a small chance of too much information slipping out, although we try to be very careful about that -- that's why about all we'll say is "read the guidelines".) Legitimate submitters run no risk at all, and very occasionally might get a problem fixed. (Several times a year a listing might be accelerated, but that is not something you should expect to happen.)
And editors aren't inconvenienced more than they're willing to be.
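If it helps to see the profit-loss analysis spelled out, here is a minimal back-of-the-envelope sketch. The probabilities are the rough figures quoted above; the payoff weights (how much a fix is worth, how much the "tactical nuke" costs) are purely illustrative assumptions of mine, not anything official.

```python
# Rough expected-value sketch of "what do I gain by asking in the forum?"
# Probabilities come from the rough figures quoted above; the benefit/cost
# weights are illustrative assumptions only.

def expected_outcome(p_benefit: float, benefit: float,
                     p_cost: float, cost: float) -> float:
    """Expected net value of asking, given chances of being helped or hurt."""
    return p_benefit * benefit - p_cost * cost

# Legitimate submitter: small chance a genuine mistake gets fixed,
# essentially no downside to asking.
legit = expected_outcome(p_benefit=0.01, benefit=1.0, p_cost=0.0, cost=0.0)

# Affiliate spammer: near-certain exposure and removal of the submission,
# against only a slim chance of learning something useful.
spammer = expected_outcome(p_benefit=0.02, benefit=1.0, p_cost=1.0, cost=10.0)

print(f"legitimate submitter: {legit:+.2f}")   # slightly positive
print(f"affiliate spammer:    {spammer:+.2f}")  # strongly negative
```

Whatever weights you plug in, the sign of the result doesn't change: asking here can only help a legitimate submitter and can only hurt a spammer.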
Editors -- not harmed; legitimate submitters occasionally helped (as much, that is, as such information can help them); spammers occasionally hurt. Where's the problem? What in all of this would suggest to the editors' technical support teams that an automatic approach is needed? What could an automatic approach give that is _useful_ (as opposed to merely frustration-building) to legitimate submitters? And even assuming a perfect design, how likely is it that some implementation flaw could be exploited to give spammers all the keys they so ardently desire?
Just in passing, sfxmystica, as an exercise if you want to brainstorm: find a flaw in your algorithm, and explain how spammers could exploit it to tell exactly when a site was rejected. (It is a trivial exercise, and I know of at least one consumer GPS receiver that actually used the same strategy to get around the military encryption of the LSBs of the GPS signal.) Then find a solution, and find the flaw in that solution ... (That is as far as I have gone, and I am not sure there's a socially acceptable technological solution to THAT flaw.)
See, the insurmountable advantage that humans have is that when they are abused, they instinctively react against that abuse, whereas an inadequate algorithm could leak information indefinitely. At this point, humans do better what is needed, and don't waste their time doing too much of what isn't.