PageModel

MeatballWiki | RecentChanges | Random Page | Indices | Categories

The most generalized form of a wiki-like PageDatabase has the following structure.

Some properties are very important. For instance, every page should have some data attached to it. For wikis with heterogeneous page types, pages should have an associated MimeType? to explain what that data happens to be.


There are two options for how to use this system: complex queries or complex storage. See MeatballDatabaseRelations for the assumed "revision" system.

Complex queries

This is the currently-preferred option: meta-data is only allowed to change when its subject does.

CREATE TABLE statements (
  subject int NOT NULL,
  predicate text NOT NULL,
  object int NOT NULL,  -- index into a number of tables, for the various types of object supported
  added int NOT NULL,  -- revision in which the statement was added
  removed int  -- revision number when statement was removed, NULL if not yet deleted
);
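Because each row carries the revision in which it was added and (possibly) removed, the statements holding for a page at any past revision can be recovered with a simple range check. A minimal sketch in Python with SQLite, using the table above and a hypothetical 'title' predicate (the `statements_at` helper is illustrative, not part of any engine):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE statements ("
           "subject int, predicate text, object int, added int, removed int)")
db.executemany("INSERT INTO statements VALUES (?, ?, ?, ?, ?)", [
    (1, 'title', 100, 1, 3),    # held from revision 1, replaced at revision 3
    (1, 'title', 101, 3, None), # holds from revision 3 onward
])

def statements_at(subject, revision):
    """Statements holding for `subject` as of `revision`."""
    return db.execute(
        "SELECT predicate, object FROM statements "
        "WHERE subject = ? AND added <= ? "
        "AND (removed IS NULL OR removed > ?)",
        (subject, revision, revision)).fetchall()
```

At revision 2 the page's title is object 100; from revision 3 on, it is 101.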

This approach has the advantage of underlying simplicity. However, arbitrary queries must be supported to make proper use of it. For example, if one stores a page's parents, one might want to know the children of a page (query predicate and object instead of subject), the grandparents of a page (SQL JOIN), or even the entire list of the page's ancestors (not possible in basic SQL, needs iteration). This makes providing a good model interface difficult (essentially recreating SQL), but also makes caching very tricky, as dependencies may be fantastically difficult to model, let alone verify when data changes.
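The three query styles above can be sketched concretely. A sketch in Python with SQLite, assuming the table above and a hypothetical 'parent' predicate; page ids are bare integers here, whereas the real schema indexes `object` into per-type tables:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE statements ("
           "subject int, predicate text, object int, added int, removed int)")
# Page 3's parent is page 2; page 2's parent is page 1 (asserted at revision 1).
db.executemany("INSERT INTO statements VALUES (?, 'parent', ?, 1, NULL)",
               [(3, 2), (2, 1)])

# Children of page 1: query by predicate and object instead of subject.
children = [r[0] for r in db.execute(
    "SELECT subject FROM statements "
    "WHERE predicate = 'parent' AND object = 1 AND removed IS NULL")]

# Grandparents of page 3: a self-JOIN on the statements table.
grandparents = [r[0] for r in db.execute(
    "SELECT b.object FROM statements a JOIN statements b "
    "ON a.object = b.subject "
    "WHERE a.predicate = 'parent' AND b.predicate = 'parent' "
    "AND a.subject = 3 AND a.removed IS NULL AND b.removed IS NULL")]

# All ancestors of page 3: not expressible in basic SQL, so iterate.
ancestors, frontier = [], [3]
while frontier:
    page = frontier.pop()
    for (parent,) in db.execute(
            "SELECT object FROM statements WHERE subject = ? "
            "AND predicate = 'parent' AND removed IS NULL", (page,)):
        ancestors.append(parent)
        frontier.append(parent)
```

Note that each new question needed a different query shape, which is exactly the interface and caching burden described above.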

Complex storage

This is the approach taken by PeriPeri/1: meta-data can be added willy-nilly, and the engine also stores the implications of the metadata, using a subset of the WebOntologyLanguage? (OWL) to describe the rules of inference.

This approach greatly complicates the RDF module, which must not only understand rules of inference, but must also garbage-collect implied statements when other statements are invalidated. However, dependencies can now be greatly simplified by only allowing simple queries — namely, 'what statements hold for page X', which is trivial to support.

It also complicates the database, which not only needs to store what statements have been asserted, as before, but also what statements have been implied, and further — to allow garbage collection — what statements were used to deduce each implication, and which therefore are the implication's dependencies.

CREATE TABLE statements (
  id int PRIMARY KEY AUTO_INCREMENT NOT NULL,
  subject int NOT NULL,
  predicate text NOT NULL,
  object int NOT NULL
);

CREATE TABLE assertions (  -- When statements were asserted
  statement int,
  added int,  -- revision that asserted the statement
  removed int  -- revision that invalidated the statement, NULL if still asserted
);

CREATE TABLE implications (  -- What statements have been implied
  id int PRIMARY KEY AUTO_INCREMENT NOT NULL,
  statement int NOT NULL
);

CREATE TABLE implicationDependencies (  -- What statements led to what implications
  implication int NOT NULL,
  statement int NOT NULL
);
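The garbage-collection step over these four tables can be sketched as follows. This is an illustrative sketch in Python with SQLite, not PeriPeri/1's implementation; the statement contents and the `invalidate` helper are invented for the example, and AUTO_INCREMENT becomes SQLite's INTEGER PRIMARY KEY. The key point is the cascade: an implied statement may itself be a dependency of further implications.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE statements (id INTEGER PRIMARY KEY, subject int, predicate text, object int);
CREATE TABLE assertions (statement int, added int, removed int);
CREATE TABLE implications (id INTEGER PRIMARY KEY, statement int);
CREATE TABLE implicationDependencies (implication int, statement int);
""")
# Statement 1 is asserted; statement 2 is implied from 1, and 3 from 2.
db.executemany("INSERT INTO statements VALUES (?, ?, ?, ?)",
               [(1, 10, 'parent', 20), (2, 20, 'child', 10),
                (3, 10, 'ancestor', 20)])
db.execute("INSERT INTO assertions VALUES (1, 1, NULL)")
db.executemany("INSERT INTO implications VALUES (?, ?)", [(100, 2), (101, 3)])
db.executemany("INSERT INTO implicationDependencies VALUES (?, ?)",
               [(100, 1), (101, 2)])

def invalidate(statement_id, revision):
    """Mark a statement removed and garbage-collect implications that relied on it."""
    db.execute("UPDATE assertions SET removed = ? WHERE statement = ?",
               (revision, statement_id))
    dead = [statement_id]
    while dead:
        s = dead.pop()
        for (imp, implied) in db.execute(
                "SELECT d.implication, i.statement "
                "FROM implicationDependencies d JOIN implications i "
                "ON d.implication = i.id WHERE d.statement = ?", (s,)).fetchall():
            db.execute("DELETE FROM implications WHERE id = ?", (imp,))
            db.execute("DELETE FROM implicationDependencies WHERE implication = ?",
                       (imp,))
            dead.append(implied)  # cascade: the implied statement may support others

invalidate(1, revision=2)
remaining = db.execute("SELECT COUNT(*) FROM implications").fetchone()[0]
```

After invalidating statement 1, both implications are collected, because statement 3 depended on the now-withdrawn statement 2.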

Experience from the PP/1 engine suggests that teaching a new kind of inference to a metadata engine is much easier than teaching it a new kind of query and the associated dependency tracking.

Note that the cache-invalidation system of the former is essentially equivalent in complexity to the garbage-collection system of the latter, and will hence require an equally complex implementation: if it is more ad hoc, it will merely be more prone to breaking.


Discussion
