MeatballWiki | RecentChanges | Random Page | Indices | Categories

Self-publishing vs. academic peer review

Self-publishing brings with it concerns of the VanityPress. Will we be deluged with "scientific" findings that have had no quality control? How will we assess which papers are worth reading and which are junk? In their study, Swan and Brown (2003) found that by far the number one concern for all researchers was PeerReview, regardless of which model the papers were published under. 94% of respondents felt peer review was important, far higher than any other category in their survey.

Peer review is expensive (Williamson, 2003). Each article can cost anywhere from £60 to £400 (Donovan, 1998). Of the entire publication process, the editorial phase is the most expensive, more so than copy-editing, origination, or printing. Fees might include editors' honoraria, editorial assistants' salaries, the cost of peer-review software, commissioning fees, and the cost of running an editorial office--but no fees for the reviewers themselves. Despite this cost, Williamson notes that peer review is plagued with problems. Bias in peer review runs along any number of dimensions, from geographic location, to prestige, to gender--any social category, really. Further, peer review is poor at detecting defects, catching only a fraction of them.

For these reasons, Pöschl (2004) gives us a polemic against traditional peer review. He begins with the sentence, "A large proportion of scientific publications are careless, useless or false, and inhibit scholarly communication and scientific progress." (p.105) He complains that scientists are evaluated by the number of papers they publish, which leads them to publish more, faster, and to spend an enormous amount of time keeping up with their field, leading to InformationOverload. Consequently, they needlessly repeat a lot of work over and over again. Referees' limited competence and possible conflicts of interest may hold back scientific research unnecessarily. Useful referee comments are kept private.

One solution is to filter out the bad papers technologically through CollaborativeFiltering. Pöschl (2004) proposes an "OnlineCommunity" model, where authors publish articles almost immediately without peer review. Peer review then comes from interactive comments from the readership, from which the author can improve the paper until reader responses quiesce, much as the ScoopEngine controls article submissions. Mizzaro (2003) suggests a more complex system that statistically computes a weight for each paper based on reader ratings. His scheme maintains scores for both reviewers and authors; this allows the system to build in a concept of "authority", which in turn makes it possible both (i) to give more importance to responses from more "authoritative" readers, and (ii) to automatically compute the ability/reputation of readers acting as referees.
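A minimal sketch of the idea behind such a scheme, not Mizzaro's actual formulas: a paper's score is a reputation-weighted average of its reader ratings, and each reader's reputation drifts toward how closely their ratings track the emerging consensus. The function names, the 0..1 rating scale, and the learning rate are all illustrative assumptions.

```python
# Illustrative sketch (not Mizzaro's published formulas): papers are scored
# by a reputation-weighted average of reader ratings, and readers who agree
# with the consensus gain authority while outliers lose it.

from collections import defaultdict

def paper_scores(ratings, reputation):
    """ratings: {paper: {reader: score in 0..1}}; reputation: {reader: weight}."""
    scores = {}
    for paper, by_reader in ratings.items():
        total = sum(reputation[r] * s for r, s in by_reader.items())
        weight = sum(reputation[r] for r in by_reader)
        scores[paper] = total / weight if weight else 0.0
    return scores

def update_reputation(ratings, reputation, learning_rate=0.1):
    """Nudge each reader's reputation toward 1 - mean |rating - consensus|."""
    scores = paper_scores(ratings, reputation)
    errors = defaultdict(list)
    for paper, by_reader in ratings.items():
        for reader, s in by_reader.items():
            errors[reader].append(abs(s - scores[paper]))
    new_rep = dict(reputation)
    for reader, errs in errors.items():
        agreement = 1.0 - sum(errs) / len(errs)
        new_rep[reader] += learning_rate * (agreement - new_rep[reader])
    return new_rep
```

Run iteratively as ratings accumulate, this gives more weight to "authoritative" readers when scoring new papers, while deriving that authority automatically from past rating behaviour rather than from credentials.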

Breaking away from the publish-and-review cycle, Kling, Spector, and McKim (2002) propose the GuildModel.


Donovan, B. (1998). The truth about peer review. Learned Publishing, 11(3), 179-184.

Kling, R., Spector, L., and McKim, G. (2002). Locally controlled scholarly publishing via the Internet: The guild model. The Journal of Electronic Publishing, 8(1). Available from http://www.press.umich.edu/jep/08-01/kling.html

Harnad, S. http://www.nature.com/nature/webmatters/invisible/invisible.html

Mizzaro, S. (2003). Quality control in scholarly publishing: A new proposal. Journal of the American Society for Information Science and Technology, 54(11), 989-1005.

Pöschl, U. (2004). Interactive journal concept for improved scientific publishing and quality assurance. Learned Publishing, 17(2), 105-113.

Swan, A. and Brown, S. (2003). Authors and electronic publishing: what authors want from the new technology. Learned Publishing, 16, 28-33.

Williamson, A. (2003). What happens to peer review? Learned Publishing, 16(1), 15-20.


Arms, W. Y. (2002). Quality control in scholarly publishing on the Web. The Journal of Electronic Publishing, 8(1). Retrieved December 17, 2004 from http://www.press.umich.edu/jep/08-01/arms.html

I fear that academics these days are also often short of time. This can result in AcademicPeerReview being less thorough than it could be. I imagine most wiki reviewers, however, review as a hobby, and hence have the opportunity to devote more care than they would to a job-oriented review task. -- Anon

I believe AcademicPeerReview has been criticised as having the same organizational weakness as Stalin's Communism, a weakness which democracies address:

There's no feedback loop - The only way the followers can influence the leaders is to exit or attack the entire organization.

'AcademicPeerReview' incompletely describes the functions performed - the 'Peer Review' board also judges the work of inferiors/subordinates, who cannot get published and advance, or even survive, in the field without the superior's 'nod'. As one said above, these folks get busy - they also get defensive and involved, as we all do, in turf wars, where conceptual wrongs are not factual but territorial. While the created ConceptualTerritory? does provide a powerful incentive for intellectual craftsmanship, which may be why truly well-crafted intellectual organizations have very clear turf boundaries with unequal divisions, they also have the blind spots of their honchos. -- BrianCady

I've collected a bunch of peer-review related stuff, most of it critical of the current state of affairs, at http://www2.iro.umontreal.ca/~paquetse/cgi-bin/om.cgi?back=Peer+Review -- SebPaquet

AcademicPeerReview acts more like a gatekeeper to prevent bad-quality papers and kooks from flooding the scientific literature with noise. Beyond that, it fails horribly to improve the quality of papers, but in practice that isn't so bad. After a paper is published, the real academic peer review is whether or not it has legs. If people cite it, build on it, and use it, and it works (if it is science), then it will live on. Otherwise, it will be forgotten, and good riddance. However, all of this depends on other factors, namely how accessible and findable your paper is. If it is locked up behind expensive journals or journals with bad connections to electronic indices, then an excellent paper will be passed by. The goal should be maximal access to maximize impact. However, the granting and tenure processes paradoxically hold this back.

I think that peer reviewers should get more reputation and should be held more responsible for the final article they review. So their names should be below the authors list as something like "peer reviewed by:". Peer reviewers could organize in communities so that their reputation and the reputation of their community are at stake (currently AcademicPeerReview is anonymous, no risk, no reputation). All peer reviews, especially the negative ones, should be published for transparency. An online community article "peer reviewed by: meatball community - SunirShah, SebPaquet, MartinHarper" could be made to mean something. Wikis could hold the communities and they could publish the articles in subspaces not open for general editing (no need to constantly protect them). There might even be editorial subspaces only open to authors and peer reviewers. The FractalWiki is a spike where things like this can be configured and tested.


I think you have a very good seed of an idea here that even has broader implications, since in my opinion, this also speaks to the whole issue of Trust.

I do not understand the reference to the FractalWiki spike though and would appreciate any additional comments since I am currently trying to decide on how to upgrade one of the wikis that I sponsor and am quite intrigued by what I have seen of it and the ProWiki service. -- HansWobbe.

Although WikiFractality, WikiContextuality and AutoLinkStrategies have made their way into the standard ProWiki software, we usually use only a small fraction of the complexity that is possible. FractalWiki is a place where we play around with these things. -- HelmutLeitner

Peer review will also become very important in normal book publishing, because any author can now publish electronically or by print on demand. But if he does, he bypasses the quality enhancing services of a publishing company. An OnlineCommunity could act to replace these services as a ReflectionCommunity to support authors, to do peer review, to give recommendations to readers and publishing companies. -- HelmutLeitner

Helmut, I like your point of listing PeerReviewers explicitly; I'm going to poach it. In theory, if not in practice, everyone in the guild of the GuildModel ought to be held responsible for the output of that guild. However, I'm reading a little more about OralCulture vs. PrintCulture? modulo TheOpenSocietyAndItsEnemies?, and thinking about how wedded academia is to not having responsibility to one's peers. Making you implicitly accountable for your colleagues' publications will result in greater control and less 'freedom' to publish than is currently afforded by academia (although good luck publishing a non-mainstream idea in a mainstream journal without the Editor's personal love). The identity myths of academia suggest one is allowed to research and write whatever one feels like, probably in overreaction to suppression from before. I wonder if freedom to speak is a good thing or a bad thing, or if there is a middle path of unshackling responsibility that is more optimal in terms of leading to valuable insight. After all, academics now have a RightToLeave that wasn't really present beforehand, with a global lingua franca and a burgeoning academic stratum in society. There are lots of places to publish and probably like-minded individuals. -- SunirShah


