The value of the ODP is directly proportional to the quality of its editors.
If editors are mishandling submissions, either by ignoring them or by filing them in the wrong categories, the directory becomes less and less useful.
Therefore, I believe it is important that senior editors have a way of identifying the weaker editors, so that they can help them improve how they handle submissions.
What sort of measurements do you think could be made to determine whether an editor is doing a good job?
Examples:
a) How often a submission they accepted is later moved to another category
b) How often a complaint is made about a category they edit
c) How often a correction (grammatical, factual, etc.) is made to a submission they have accepted and added to the directory
d) The click-through rate (CTR) on the submissions they accept versus other submissions in the same category
e) The amount of backlog in the categories they edit
Others? Please suggest.
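To make this concrete, here is a minimal sketch in Python of how such a composite grade might be computed. Every field name, weight, and threshold below is invented for illustration; nothing here reflects any actual ODP software.

    # Hypothetical sketch: combine the per-editor measurements above into
    # a single 0-100 grade. All weights are arbitrary placeholders that
    # senior editors would need to tune.
    from dataclasses import dataclass

    @dataclass
    class EditorStats:
        recategorized_rate: float  # (a) share of their placements later moved
        complaint_rate: float      # (b) complaints per category they edit
        correction_rate: float     # (c) share of accepted listings later corrected
        relative_ctr: float        # (d) their listings' CTR / category average CTR
        backlog_days: float        # (e) average age of unreviewed submissions

    def grade(stats: EditorStats) -> float:
        """Return a 0-100 score; higher is better."""
        penalty = (
            40 * stats.recategorized_rate
            + 20 * stats.complaint_rate
            + 20 * stats.correction_rate
            + 10 * max(0.0, 1.0 - stats.relative_ctr)  # only penalize below-average CTR
            + 0.1 * stats.backlog_days                 # small penalty per day of backlog
        )
        return max(0.0, 100.0 - penalty)

    # Example: 5% of placements moved, occasional complaints and
    # corrections, above-average CTR, two weeks of backlog.
    print(grade(EditorStats(0.05, 0.01, 0.10, 1.2, 14.0)))  # -> 94.4

The exact weights matter far less than the fact that each term is measurable and explainable.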
Each editor could see their own grade, along with a breakdown of how they earned it. Beyond that, the information should only be shared with a select few editors responsible for evaluations.
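Such a breakdown could be as simple as itemizing each term of the penalty. Another hypothetical sketch, reusing the EditorStats type from above:

    def breakdown(stats: EditorStats) -> dict[str, float]:
        """Itemize each metric's contribution to the penalty, so an
        editor can see exactly where their grade came from."""
        return {
            "recategorized": 40 * stats.recategorized_rate,
            "complaints": 20 * stats.complaint_rate,
            "corrections": 20 * stats.correction_rate,
            "low_ctr": 10 * max(0.0, 1.0 - stats.relative_ctr),
            "backlog": 0.1 * stats.backlog_days,
        }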
Obviously, and I hope this doesn't need repeating, the software will *never* be the final judge and jury, and it will *never* be sufficient for finding all of the weak editors.
However, it can and should be one more heuristic for identifying them.