Built into the current Internet is a concept of nearly perfect freedom of speech, at least in the sense that it is very easy to publish what many would consider censorable material, even if you have a hard time getting anyone to listen to you. The anonymity of the Internet, plus its core VanityPress values, has given everyone with access a nearly unlimited SoapBox from which to disseminate information.
Of course, much speech in the RealWorld is not similarly unconstrained. One cannot freely advocate Nazism in Germany, nor can one distribute sexual videos to minors in America. Society demands protection from material it considers universally offensive, and it demands constraints on material it considers unsuitable for some classes of people (usually children).
Societies do not always agree with each other, however. Nazism is freely discussable in America, whereas pornography is less constrained in Scandinavian countries.
One solution offered is extensive filtering networks. You could delegate your editorial choices to a trusted body, say the government of Germany or the Christian Coalition. That trusted body would be charged with selecting the parts of the Internet considered safe for consumption by its subscribers.
The current W3C recommendation is the Platform for Internet Content Selection (PICS). It is an extension of the HTTP/1.1 headers and is currently supported by the major browsers. It's an outgrowth of SurfWatch and a foundation layer for MetaData?. While it allows (and chiefly relies upon) websites to rate themselves according to the standard PICS criteria, it is also designed to allow third parties to rate content for the end user.
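For instance, a self-rating site can embed its PICS label in each page. The snippet below is only an illustrative sketch: it uses the RSACi rating vocabulary (n, s, v, l for nudity, sex, violence, and language), and the site URL and scores are invented for the example.

 <META http-equiv="PICS-Label" content='
  (PICS-1.1 "http://www.rsac.org/ratingsv01.html"
   l gen true
   for "http://www.example.com/"
   r (n 0 s 0 v 0 l 0))'>

A browser (or an upstream proxy) that understands PICS can compare these scores against the reader's settings and decide whether to display the page.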
LawrenceLessig argues heavily against ContentFiltering like PICS in CodeAndOtherLawsOfCyberspace. He quickly defeats the presumption that filtering software on the 'Net is limited to the desktops of end users, noting that large-scale filters can be placed anywhere in the communication path, including at upstream nodes (he calls this UpstreamFiltering?). Filters at this level can block content invisibly to end users, and without appeal. While this may be favourable for AOL, which wants to SandBox its userbase, it is not favourable for the general public.
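To make the point concrete, here is a minimal Python sketch of an upstream filter (the blocklist and URLs are invented): the node answers a generic 404 for blocked pages, so from the end user's side a censored page is indistinguishable from a missing one.

 BLOCKLIST = {"http://example.com/banned"}

 def upstream_fetch(url, origin_fetch):
     # A filtering node sitting between the user and the origin server.
     if url in BLOCKLIST:
         return "404 Not Found"   # looks like a dead link, not censorship
     return origin_fetch(url)

 # origin_fetch stands in for the real network hop:
 print(upstream_fetch("http://example.com/banned", lambda u: "200 OK"))
 print(upstream_fetch("http://example.com/essay", lambda u: "200 OK"))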
Instead, Lessig favours restricting access based on DigitalCertificate?s. He calls this zoning, or for our purposes, ContentZoning?.
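A toy sketch of the idea in Python (the certificate here is a plain dict standing in for a verified DigitalCertificate?, and the "age" attribute is hypothetical): the server refuses a zoned page unless the visitor's credential carries the required attribute.

 PAGES = {
     "/index.html": "welcome",
     "/adult/index.html": "restricted material",
 }
 ADULT_ZONE = {"/adult/index.html"}

 def serve(path, certificate=None):
     # 'certificate' is a plain dict standing in for a verified credential;
     # a real system would check an X.509 chain and a signed attribute.
     if path in ADULT_ZONE:
         if certificate is None or certificate.get("age", 0) < 18:
             return "403 Forbidden: adult credential required"
     return "200 OK: " + PAGES.get(path, "not found")

 print(serve("/adult/index.html"))                # refused
 print(serve("/adult/index.html", {"age": 21}))   # served

Note the contrast with filtering: the content itself is untouched, and the blocking is explicit rather than invisible.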
CollaborativeFiltering seeks to find the average view of a group (or sometimes "community") of people. RatingGroups would succeed famously at ContentFiltering.
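A toy sketch in Python of how such a group rating might be computed and applied (all names and numbers invented): each member scores a page, the group's view is the average, and a subscriber's filter blocks pages at or above a threshold.

 ratings = {
     "http://example.com/essay": [0, 1, 0],
     "http://example.com/shock": [4, 3, 4],
 }

 def group_score(url):
     # The group's view of a page is the average of its members' ratings
     # (0 = harmless .. 4 = very offensive).
     scores = ratings[url]
     return sum(scores) / len(scores)

 def blocked(url, threshold=2.0):
     # A subscriber filters out anything the group rates at or above threshold.
     return group_score(url) >= threshold

 for url in ratings:
     print(url, group_score(url), "blocked" if blocked(url) else "allowed")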
Based largely upon CodeAndOtherLawsOfCyberspace.