WikiSpam management without the need for a GodKing. The question remains whether wikipedians still like this kind of SoftSecurity, and, if they do, whether it will actually work.
The first problem I see with this suggestion is that it's open to (and invites) abuse, since it's essentially an anonymous "poll" on whether a page is reliable. An unscrupulous varmint could mask an edit war simply by reloading the page repeatedly. Avoiding this vulnerability by switching to "viewed by unique IPs" merely invites a distributed attack instead.
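The abuse described above can be made concrete with a minimal sketch. The class below is a hypothetical illustration of the proposed view-count metric, not any actual wiki engine's API; the names `ViewMetric`, `record_view` and `reliability` are assumptions for illustration only. It shows how a raw view count is gamed by reloading, and why the unique-IP variant only raises the bar to a distributed attack.

```python
from collections import defaultdict

class ViewMetric:
    """Hypothetical sketch of a view-based ReliabilityMetric."""

    def __init__(self):
        self.views = defaultdict(int)    # raw view count per page
        self.viewers = defaultdict(set)  # unique viewer IPs per page

    def record_view(self, page, ip):
        self.views[page] += 1        # inflatable by simply reloading
        self.viewers[page].add(ip)   # inflatable by a distributed attack

    def reliability(self, page, unique=False):
        """Return the metric: unique-IP count or raw view count."""
        return len(self.viewers[page]) if unique else self.views[page]

m = ViewMetric()
for _ in range(100):                 # one varmint reloading repeatedly
    m.record_view("SomePage", "10.0.0.1")
print(m.reliability("SomePage"))               # 100: raw count is gamed
print(m.reliability("SomePage", unique=True))  # 1: holds only until many IPs join in
```

The unique-IP figure stays honest only as long as the attacker controls a single address, which is exactly the distributed-attack weakness noted above.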
The second problem is that it assumes a connection between readers and editors. If a page is viewed a lot by people who are averse to correcting mistakes, it will appear more reliable by this metric than one where all the viewers are confident editors, yet the converse may be true!
It also demands a database update on every page view, which is usually a bad decision for scalability. The maintenance team may be highly unwilling to support such a resource hog.
Still, it's an interesting first proposal for a wiki ReliabilityMetric. -- ChrisPurcell
Of course, mechanisms like this cannot replace human peer review, and therefore should not even try. Instead, we should aim for a symbiosis between automation and manual intervention. The idea becomes clear when thinking about RecentChanges pages: they can be seen as a heuristic for what is relevant at the moment. This is open to abuse, and sometimes it is not a relevant page but a spammed one that sits at the top. But in that case RecentChanges actually facilitates peer review, because people willing to contribute will look at the page precisely because it appears in RecentChanges, and will quickly spot the spam. The same could be done with this ReliabilityMetric: for example, a reliability top-ten page could list the pages with the highest metric. This would again attract peer review, so abuse would be spotted quickly.
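The proposed top-ten page could be little more than a sort over the stored metric. A minimal sketch, assuming the scores are available as a simple page-to-number mapping (the function name and the sample scores are invented for illustration):

```python
def reliability_top_ten(metric_by_page):
    """Rank pages by their ReliabilityMetric, highest first,
    and keep the top ten for a human-reviewable listing."""
    ranked = sorted(metric_by_page.items(),
                    key=lambda item: item[1], reverse=True)
    return ranked[:10]

# Invented example scores: a gamed page floats to the top,
# which is exactly what draws reviewers' attention to it.
scores = {"FrontPage": 940, "SoftSecurity": 310, "SpammedPage": 9001}
for page, score in reliability_top_ten(scores):
    print(page, score)
```

An implausibly high score at the head of the list is itself a signal, just as a burst of edits on RecentChanges is: the listing turns the metric's abuse into something reviewers see first.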