TwoWayWeb

MeatballWiki | RecentChanges | Random Page | Indices | Categories

[The TwoWayWeb] is the original theory behind the web, repackaged by Dave Winer. The original browser ([WorldWideWeb.app]) created by Tim Berners-Lee was both an editor and a browser. Wherever you had write access you could edit the page you were seeing. The intelligence was all put into the client, while the server (and so the network) was just a dumb file server. The client rendered pages in a uniform way suited to the user's preferences and the abilities of the platform it ran on. Imagine that: the whole world wide web looking uniform, never having to discover or learn how a website worked because they all worked the same. This implied, however, that the look and feel of the world wide web was the job of the browser developer.

This is, however, not the way the story developed. It turned out to be easier to just write a dumb browser. It also became clear that the people who created websites came from a paper world, and soon there was a browser that gave them more control over the rendering of the webpage. Every website designer wanted to show off their ability to do things differently, and this new generation of browsers gave them the chance. The limitations of paper (fixed size, fixed rendering, etc.) were all put into webpages, which pushed these browsers even further toward rendering pages like a tabloid.

Later, in WardsWiki, this idea of an editable website was re-implemented. While Wiki is world-editable, WebLogs re-implemented the idea in the form of a privately editable website. In the process much of the intelligence shifted from the browser to the server: the network became smart and the nodes dumber.

Consider that the server, editor, and browser are more or less combined for the website owner, while most users still have neither editor nor server capabilities. Also, most of the smart features in browsers these days have to do with being compatible with other browsers, and nothing to do with editing webpages.

See also DavidIsenberg? on [Rise of the Stupid Network]


Opinions

I fully expect that a future version of the web will combine editor/browser/server into one client, which will then connect to a peer-to-peer network repository of versioned data, where anyone can edit any page (every edit creating a new version of the page, which is added to the repository), and anyone can see any version of the page in the repository. Naturally, by then we will have left the world of the command line (the url) far behind. -- AnonymousContributor?

See also: PeerToPeerWiki

This page sounds like "wiki is a reimplementation of the original web idea". Can't comment on that. AC, of course it would be possible that a future standard server installation would include a web/wiki server that offers p2p features. Perhaps everybody has his own domain, webserver, wikispace, all that. But I don't think that there is a command line world that can ever be left behind. There will always be the need to reference and address. Even an image-sending mobile phone still has to call numbers (symbols if you prefer) to connect. Persons and files have to have names, programming objects exist at an address in memory. -- HelmutLeitner

This page is still only my own opinion... I hope people will feel free to alter it. Wiki as a concept is in many ways in line with the original web ideas: ContentOverForm, PlatformIndependent?, BrowserAndEditorInOne?. But in the details they are very different as well. On urls, I just don't know. It's my personal preference to drop them asap. Maybe, when cameras become ubiquitous, we could replace them with [these] kinds of 2D codes? -- AnonymousContributor?

AC:

There are quite a few concepts wrapped up in this nascent vision of yours that I would like to comment on eventually, but I feel I will need a bit of time to mull them over first. In the meantime, I thought you might appreciate a bit of encouragement. I quite like the idea of PeerToPeer capabilities and think that they will inevitably come to be, albeit not before some of the infrastructure needed to let an Individual manage their node more efficiently is in place: e.g. I think Individuals participating as a node in such a scheme require the ability to:

Regards for now -- HansWobbe

Thanks for the kind words. It's not as if I have it all figured out; it's more that the stars seem to have dimmed a bit and the light of dusk is starting to illuminate the sky (sorry, I have been reading Shakespeare lately). You point out some really interesting questions. But I think the words need to be redefined.

Agreed. I'll give this a bit of thought during the next week. We may need to create a bit of a Glossary of Terms for the context of this page, given your "What is a node... -- hwo.

What is a node in this context? There are many nodes.

A bit of data could be a node; any view of it would mean that it is copied, as is currently the case with webpages. We pretend, in a kind of childish make-believe world, that we visit a website, but this is just metaphor and we should not confuse it with reality. Any edit will create a new node which has metadata about its origin and thus its previous version, and I would assume that it also has a list of newer versions: trace-back links. Forking is normal in this system, in the sense that there is not one authoritative node, but webs of generations of nodes, and users need to use wisdom and insight if they want to learn anything useful. This is nothing new, although it sometimes seems that way.
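As a rough sketch, the versioned-node idea above could look something like the following (the names, the hashing scheme, and the in-memory "repository" are purely illustrative assumptions, not part of any proposal on this page):

```python
import hashlib
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    content: str
    previous: Optional[str] = None                   # id of the node this was edited from
    newer: List[str] = field(default_factory=list)   # trace-back links to later versions

    @property
    def node_id(self) -> str:
        # A node's identity covers its content and its origin, so an edit
        # always yields a new node rather than overwriting an old one.
        origin = self.previous or ""
        return hashlib.sha256((origin + self.content).encode()).hexdigest()

def edit(repo: dict, node: Node, new_content: str) -> Node:
    """An edit never replaces a node: it adds a new one and records a trace-back link."""
    child = Node(content=new_content, previous=node.node_id)
    node.newer.append(child.node_id)
    repo[child.node_id] = child
    return child
```

Note that forking falls out naturally: two different edits of the same node produce two children, both listed in its trace-back links, and neither is the "authoritative" successor.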

A client could be seen as a node; the client only needs to give write access to its user and read access to others, maybe even only to selected others (GatedCommunity). The only vector for spam is the trace-back links; maybe the client could examine the edited version with a diff and delete the trace-back link if it is not similar enough? Seems the obvious thing to do.
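The "similar enough" check suggested here could be as simple as a diff ratio. A minimal sketch, assuming a standard-library diff and an entirely arbitrary threshold (0.3 is an illustration, not a recommendation):

```python
import difflib

def keep_trace_back_link(original: str, edited: str, threshold: float = 0.3) -> bool:
    """Keep the trace-back link only if the edit still resembles the original.

    A spam edit that replaces the whole page scores near 0 and is dropped.
    """
    ratio = difflib.SequenceMatcher(None, original, edited).ratio()
    return ratio >= threshold
```

A genuine incremental edit keeps most of the original text, so it scores high; a wholesale replacement scores near zero and loses its trace-back link.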

I would not advise unifying the different interfaces, but rather using something like xhtml2 (or some other encoding) which is then translated to the encoding the user prefers to edit in and, when saved, translated back again. Embrace that people are different and give them the tools they can use. Maybe someone could make a translation node which translates English text to Mandarin, which human editors can then use as a basis for a better translation? I do think that something like RSS and RDF will be needed to make control and sharing possible. Maybe even encryption?

Maybe we need to drop even the preconception that only data can be shared. Why not share code? Very high level languages (e.g. Scheme) run in a virtual machine isolated from the hardware anyway so with a few restrictions it could be used to make this WebOfPeers? more interesting.

One view I have debated in the past is that "Code is just Data in a particular Context." Now, if you are proposing to share computing cycles (as I suspect), that is a different matter. -- hwo

Copyright is both hard and easy. It's easy because you give the right to copy to someone the moment your client gives them read access, and thus a copy. It's hard for obvious reasons, one being that the law has not yet been rewritten for the new reality in which everyone has a copier. Just as a monopoly on scribing became redundant and irrelevant with the advent of the printing press, so in time the right to distribute (which is currently the real meaning of copyright) becomes irrelevant when the means to distribute become commonplace. Rights mean nothing when the means are not available, but the reverse is true as well: exclusive rights cannot exist without exclusive means. (See Cory Doctorow's [DRM Manifesto] as well.)

Aspects of this that interest me include:
-- hwo

Linking is hard in a Peer2Peer world where peers enter and exit all the time; copying is much easier in such a world. -- AnonymousContributor?

No just law can go against common sense, but common sense depends on the frame of reasoning people have. With new technology the frame of reasoning changes, and laws which seemed reasonable before become unreasonable. Examples: Keeping slaves became unreasonable when industrialization came. Child labor became unreasonable when children's work was no longer valuable enough. Schooling became common sense when the factories needed obedient workers who could read an order from hierarchies of absent managers (that schooling also kept 30 children per adult out of harm's way, so that both parents could work outside the home, was another motivation).

The above is little more than brainstorming. Please note that it is 01:16 at night when/where I write this, so maybe it makes more sense to me than to you, because I should have been in bed hours ago. -- AnonymousContributor?

(NB: This page is becoming too long; please advise.)

I'm not sure why you are concerned, since it might be for any of a couple of reasons:
-- HansWobbe (hwo)

Mainly because I was tired at the time and I noticed that the text began to sprawl at an alarming rate. -- AnonymousContributor?

Condense individual statements to insights and elide conversational filler. Filler obscures insights. Once insights are collectively agreed upon, move them above the line as part of DocumentMode. If a tension exists between two insights, you can place both above the line with qualifiers. (X, but conversely Y.) You don't need to qualify like Wikipedia since it's implied that some believe everything written. Have courage. Things can always be moved below the line. -- SunirShah


CategoryWikiTechnology CategoryHistory?
