MeatballWikiBugs


Add your bug to the list below. See MeatballWikiBugsResolved for old issues, including some where the users were found to be buggy instead of the software.


Non-consecutive revision numbers

See the revision numbers for WxWikiServer for an example. The count is wrong too.

This is not a bug. There is a limited amount of space (mainly memory) available for old revisions. Once this limit is reached, old revisions are no longer saved until there is room for them in the keep file. Currently this limit is 400,000 bytes--higher limits have been tried but they have caused the script to crash. (The current server has a very low limit on the amount of memory a CGI script can use.) When usemod.com is moved to the new server (in July or August 2004), this limit will be raised (probably to something like 1MB).

The maintenance action removes most (not necessarily all) old revisions (older than a threshold time). Since all of the revisions are newer than this threshold (14 days by default), the maintenance I just ran didn't remove any of them. As a temporary measure, I copied and then removed the ".kp" file for WxWikiServer which holds all the old history. (I can replace it if needed.)

For this particular page, the threshold was reached because of frequent saving by the same author. I recommend using the "Preview" button more often while editing, and using the "Save" button less frequently. If you are saving to detect/avoid edit conflicts, you should know that the preview will also check for edit conflicts.

I don't know what you mean by "The count is wrong too." If you are referring to the number of changes on RecentChanges, they are stored outside the page and are not subject to the limits above. --CliffordAdams

Whoops, sorry about that. Didn't realize it was a problem... I'll try to use the preview option more often :). BTW any way around this? Or is the recursive copying of the revisions just too slow? -- RyanNorton

Empty line appended on Save

Hard to believe this hasn't been found and rectified ages ago. Occurred with Netscape 7.1. Haven't checked whether it happens with other browsers.

I believe this is not a bug, but deliberate. As I recall, if the last character in the source is not a newline, UseMod appends a newline. Can't recall the motivation for this. Perhaps it has to do with Netscape 4's bug when posting utf-8 form data.

Other wikis don't need to do this, so it would be interesting to know why it needs to be done.

All I remember is that this change was deliberate, and it fixed a problem with a particular browser. Perhaps this behavior should become a minor option in a later version of UseModWiki. --CliffordAdams
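
In code terms, the append-a-newline behavior is tiny; something like the following sketch (the variable name is only illustrative, not the one wiki.pl actually uses):

  # Ensure the submitted page source ends with a newline before it is stored.
  my $text = "last line of the posted page source";   # stand-in for the form value
  $text .= "\n"  if ($text !~ /\n$/);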

Problems with http://meatballwiki.org/wiki/

Add RecentChanges title to RSS.

Improve the Tavi-style history to always display the diff table, even after clicking on "Compare". Consider disabling the (diff) link on RecentChanges, as it is redundant with the Tavi-style history. Removing a feature makes for good FeatureKarma.

Upgrade TextFormattingRules with new syntax.

Fix KeptPages. It seems to be keeping the oldest entry, even if it isn't the last author, major, or minor change. Once again, it's been two weeks since the page was replaced. Goddamn, I invented it and even I don't get it. --ss

Look at the other bugs on this page. In particular fooLinkPattern. fooMeatBall:link foohttp://example.com fooRFC1234 fooISBN1234567980 Done. Put a \b in front of all the patterns in CommonMarkup.

MeatBall:action=rc&rcidonly=not_a_link doesn't throw an error.

1.1.1. SampleUndefinedPage
1.1.2. [here] SampleUndefinedPage

1.1.1. SampleUndefinedPage?

Perhaps just elide <a>?</a>. Done.

The table of contents and numbered headings include entries in the diff output! Fixed.

Or not Fixed.

1.1.2. [here] SampleUndefinedPage?


WantedPages

There's some strangeness on WantedPages. For example, "AnswerMe" shows up as having one backlink, but when I click on the backlink search, I see two pages linking to it. Also, strangely, "re" appears as a wanted page with one backlink. But its backlink search spits out every single page in the whole wiki. Not really a big deal, just wanted to call it to attention.

That seems to be WantedPages being cleverer than the backlinks engine. (It's probably time-costly.) Check out the AnswerMe backlinks to see what I mean: one (here) is "nowiki"-ed, one is in the form Wiki:AnswerMe, and one is actually a broken link. (I couldn't find "re" as a backlink.) -- ChrisPurcell

H'm. I had guessed that the "back" action actually polled the LinkDatabase, but I guess it only does a text search. Maybe that explains why "back=re" pulls up the whole database; "back=X" must search for the WikiName regex in X, and in the case of "re" it turns up the empty string (which matches every page). Right now "re" is at #2574, and I don't see it in a quick scan of the LinkDatabase, so I wonder where WantedPages is getting it from. And I just noticed "e" at #1813.
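
If that guess is right, the failure mode can be reproduced in a few lines (the names here are illustrative; this is not the actual backlink code):

  # Try to pull a WikiName out of the "back" parameter; "re" yields nothing, so
  # the search string ends up empty -- and an empty string is found in every page.
  my $param = "re";
  my ($name) = ($param =~ /([A-Z][a-z]+(?:[A-Z][a-z]+)+)/);   # crude LinkPattern stand-in
  $name = '' unless defined($name);
  foreach my $page ("MeatballWiki", "AnswerMe", "FrontPage") {
    print "$page\n" if (index($page, $name) >= 0);            # prints all three
  }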


Long pages cause browser and server problems.

Not actually a bug in the meatball script, but a problem that it may have to work around...

KuroshinSubmissionQueue is [no longer - see below] currently too long to be saved by the MacOS versions of Mozilla, Netscape, IE, or iCab. Attempting to save it using those browsers produced a variety of broken behaviors, usually resulting in truncated content.

KuroshinSubmissionQueue is no longer too long to be saved [...] I've split the page into three, and also eliminated a repeated chunk due to someone's sloppy copy/paste. Much better now.
For the morbidly curious -- I fixed it by loading the edit page, saving the HTML of that page to disk, then trimming chunks out of the <TEXTAREA> until it worked again.

Even when using a non-buggy browser, a somewhat longer page actually produced a server error from Apache. It seems that extremely long pages cannot be handled reliably by web browsers or servers. Is there some way that UseModWiki can handle this gracefully?


Cancelling Preview can result in truncated Edits being saved [Working on it]

DANGEROUS DAMAGING BUG

Try this - find a long page, go into edit mode, click preview, and while waiting for the request to be completely sent to the server, cancel the page load (apple-period, ctrl-C, ESC, close window, big red switch, whatever). Now go look at that page again, and you'll find it's been truncated. It looks like the UseMod code sees an incoming request, doesn't get to see the SUBMIT=Preview, and treats it as a SUBMIT=Edit.

This just happened (2001-05-25.2200) to me (EricScheid) with [FeatureKarma]

I can understand how this came to be though - the edit form can be submitted by pressing ENTER while the focus is in the summary field, so the SUBMIT=SAVE field doesn't get sent. Losing new content is worse than having to just go back a version. You could insert an extra hidden field on the form after the TEXTAREA, and if it is absent, throw an exception... although technically you shouldn't rely on browsers sending form fields in the same order as they appear in the form (even though just about every browser does).
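
A sketch of that hidden-field idea (the field name and helper are invented for illustration; this is not wiki.pl code):

  # The edit form would emit, *after* the TEXTAREA,
  #     <input type="hidden" name="complete" value="1">
  # and the post handler would refuse to save when that marker never arrived,
  # treating the request as truncated instead of silently storing partial text.
  sub PostLooksTruncated {
    my ($q) = @_;                            # CGI query object for the request
    return !defined($q->param('complete'));  # missing marker => truncated post
  }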

Darn, I was just about to suggest a warning signal if the received edit appears to be a truncation, but then I realised that in many cases the user won't see that response from the server (e.g. closed window, power loss, user cancel, etc.).

I see the problem, and will try to figure out a fix soon. I restored revision 20 from the kept page list. (To restore an old revision just click "View other revisions", then "View" the old revision, and finally edit/save it.) --CliffordAdams (P.S. Edit conflict detection still works :-)

Normalized form for newlines [Will change for 0.93]

I was almost positive that UseModWiki stripped carriage returns from input, leaving only newlines in each page. But occasionally I see pages that are just barely tweaked, but for which the diff shows the entire page under both "deleted" and "added". It might make things easier if carriage returns were stripped when saving a page.

Release 0.92 does not strip carriage returns. The next release will strip them, although I want to do some testing to make sure it doesn't cause problems on Win32 systems. --CliffordAdams
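
The stripping itself is a one-line substitution; something like this sketch:

  my $text = "line one\r\nline two\r\n";   # CRLF line endings as posted by some browsers
  $text =~ s/\r//g;                        # keep only newlines before saving the page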

This isn't the problem. The reason why the diff is spaced out is that in sub ColorDiff the regex s/\r?\n/<br>/g turns all newlines into line breaks. Changing this to paragraph breaks didn't look very nice either. I'm not overly tempted to change it as the diff output matches the underlying WikiSyntax more accurately than the rendered page, and if you're going to use the diff to fix the page or to reply, then it's somewhat more useful to see each separate line, well, separated. -- SunirShah


Saving prior revisions [Fixed for 0.92]

Cliff, when I view a prior version, edit it, and then save it, I am told that my edit conflicts with another edit. I suspect this leaked through testing 'cause you tested pages that only you had edited, which thereby escape your conflict detector. Unfortunately, pressing save from the conflict editor repeatedly brings me right back to the conflict editor.

Ick. I have a sinking feeling I know what the problem is. (It gets the timestamp from the old revision, which will always conflict with the most-recent timestamp.) This may take a bit of work to fix, since I'll need to get the timestamp from the new version. I'm leaning toward a fix which is inefficient for old-revision editing (which should be rare), but requires few changes for the ordinary path. For the moment you can cut/copy the old text, then paste it into an edit of the current revision. --CliffordAdams
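
The direction described would look roughly like this when building the edit form for an old revision (a sketch only; the helper names are hypothetical and the field layout is a guess, not the actual wiki.pl code):

  # Take the conflict-check timestamp from the *current* revision, not the old
  # one, so saving the restored text isn't flagged as conflicting with itself.
  my $oldText     = &GetKeptRevisionText($id, $revision);    # hypothetical helper
  my $currentTime = &GetCurrentRevisionTime($id);            # hypothetical helper
  print $q->hidden(-name => 'oldtime', -value => $currentTime, -override => 1);
  print $q->textarea(-name => 'text', -default => $oldText,
                     -rows => 20, -columns => 65, -override => 1);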


Cached pages: (semi-resolved)

I just edited LimitTemptation. It showed up on the page. I then decided I wanted to add a little more. When I clicked on EditPage?, it brought up the previous version, not the latest version. -- AnonymousDonor

OK, I just realized I had my browser set to always load from cache. It works when I say never load from cache. -- AnonymousDonor

This is still kind of a bug. There should be a meta tag telling the browser to reload every time. Not like InternetExplorer pays attention, though.

I may look into this further. Does anyone know of a good site which explains what should be done in this case?

I think http://www.mnot.net/cache_docs/ is a good document on the issue. -- kkovacs

August 28, 2000: I found the no-caching headers, but I'm not sure they are the best general solution. I may make the no-cache headers a user preference for those who experience problems. --CliffordAdams

September 16, 2000: A friend of mine maintains a web app and they run into this all the time with InternetExplorer. Their solution has been to generate a random number to embed as a query parameter to the URL so that I.E. won't try to cache pages. -- RusHeywood?

TWiki had this problem too - see TWiki:Codev/RefreshEditPage; it also happens with Opera and Mozilla. We solved it with a random-number suffix as well. Using no-cache is a bad idea, as it stops you being able to hit the Back button to get back to the form (see TWiki:Codev/BackFromPreviewLosesText - TWiki has an Edit-Preview-Save cycle where Preview can't be skipped). For links on this whole confusing area, see TWiki:Codev/BrowserAndProxyCacheControl. --RichardDonkin
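
The random-number suffix trick is simple enough (a sketch only; the URL is just an example, not necessarily how UseModWiki builds its links):

  # Append a throwaway parameter so the browser never reuses a cached copy of
  # the edit page, without sending any no-cache headers at all.
  my $scriptUrl = "http://www.usemod.com/cgi-bin/wiki.pl";          # example URL
  my $editUrl   = $scriptUrl . "?action=edit&id=SandBox&t=" . int(rand(1 << 30));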

From some experimentation, it seems that browsers usually do the "right thing" if the URL contains the magic word "cgi" (like /cgi, or /cgi-bin). Normal-looking wiki URLs (without "cgi") tend to require refreshing to get the new page content.

I think I've solved the caching issue in my AcadWiki implementation (a PhpWiki). (There's only a remaining problem with GMT vs. local TZ offsets, but this can be defined by the wiki maintainer; currently I have a two-hour mismatch.) See http://xarch.tu-graz.ac.at/home/rurban/acadwiki-1.3.5pre/viewsrc.php?show=lib/stdlib.php#src for the first function LastModifiedHeader?(), which looks for the If-Modified-Since header from the client, checks the db for a matching last-modified timestamp, and sends either a "Not Modified" 304 result or the Last-Modified header as a plain header and/or http-equiv meta tag.

Script-generated data just sends the timestamp of the latest edit.

What I still have to add is a check not only of the page time in the database, but also of all scripts and templates which could influence the HTML. I never needed the pragma: no-cache header so far. It just works on Netscape and Internet Explorer, though Netscape has much better diagnostics (Ctrl-I). --ReiniUrban
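
In Perl, the same If-Modified-Since handling might look roughly like this (a sketch assuming the page's last save time is available as a Unix timestamp; it uses HTTP::Date from libwww-perl and is not code from wiki.pl or PhpWiki):

  use CGI;
  use HTTP::Date qw(time2str str2time);
  # Answer 304 when the client's cached copy is still current; otherwise send
  # the page with a Last-Modified header so it can be revalidated next time.
  sub SendCachingHeader {
    my ($q, $pageTime) = @_;                 # $pageTime: epoch time of the last save
    my $since = $q->http('If-Modified-Since');
    my $sinceTime = defined($since) ? str2time($since) : undef;
    if (defined($sinceTime) && $sinceTime >= $pageTime) {
      print $q->header(-status => '304 Not Modified');
      return 1;                              # caller should skip sending the body
    }
    print $q->header(-type => 'text/html',
                     -last_modified => time2str($pageTime));
    return 0;
  }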

I installed UseModWiki at our site and ran into this problem immediately. Since our users use just about every browser there is, from Lynx to Opera to IE, I had to find a solution.

--- usemod092/wiki.pl   Sun Apr 22 00:44:10 2001
+++ wiki.cgi    Sun Apr 29 00:01:57 2001
@@ -973,6 +973,8 @@
 
 sub GetHttpHeader {
   my $cookie;
+  my $now;
+  $now = gmtime;
   if (defined($SetCookie{'id'})) {
     $cookie = "$CookieName="
             . "rev&" . $SetCookie{'rev'}
@@ -981,12 +983,20 @@
     $cookie .= ";expires=Fri, 08-Sep-2010 19:48:23 GMT";
     if ($HttpCharset ne '') {
       return $q->header(-cookie=>$cookie,
+                        -pragma=>"no-cache",
+                        -cache_control=>"no-cache",
+                        -last_modified=>"$now",
+                        -expires=>"+10s",
                         -type=>"text/html; charset=$HttpCharset");
     }
     return $q->header(-cookie=>$cookie);
   }
   if ($HttpCharset ne '') {
-    return $q->header(-type=>"text/html; charset=$HttpCharset");
+    return $q->header(-type=>"text/html; charset=$HttpCharset",
+                      -pragma=>"no-cache",
+                      -cache_control=>"no-cache",
+                      -last_modified=>"$now",
+                      -expires=>"+10s");
   }
   return $q->header();
 }

This pretty much solved the issue for us. I hope it (or something like this) will be included in UseModWiki 0.93.

-- kkovacs

I will try to include something like this, at least as an option in 0.93. --CliffordAdams

TWiki has had some problems in this area - see TWiki:Codev/BrowserAndProxyCacheControl for links on caching, and the actual problem pages for the Perl code to fix them. Using HTTP headers has been a very effective way of solving what appeared to be browser caching bugs (particularly with IE5, but also with Opera). It's well worth reading the linked tutorials on cache headers, as this is quite a complex area. For example, Pragma: no-cache is only really effective when sent by the browser, not by a web server. --RichardDonkin


Preceding text like bugLinkPattern: [Buggy user expectations :-]

bugMeatballWiki. Enough said.

Hmmm... Maybe a bug. Maybe not. Maybe a buggy user. :-)

I'm inclined to leave the LinkPattern alone, as the current pattern is already entirely too complex without adding more lookbehind complexity. One can always use nowiki, like notabugMeatballWiki to inhibit any links.

This "bug" will remain in 0.88, but I might consider changing it for 0.90 (if I'm bugged enough about it). --CliffordAdams

The LinkPattern is supposed to be easy, notaPainIntheButt. ;) Note the case fooBarBaz. What does the ? signify? The unknown page "fooBarBaz" or "BarBaz"? Is it really that difficult to put a \W or a \s before the pattern match? -- SunirShah

Yes, it's that difficult... The \s really doesn't work: consider (MeatballWikiBugs). \W doesn't work if the link starts at the beginning of the page. The only reasonable ways I see to do it would be an alternation (^|\W), or a negative lookbehind assertion like (?<!\w) (see the perlre manpage). I don't like the idea of complexifying the LinkPattern even more--it is already pretty complex if you use subpages.

Also, it's not just the simple LinkPattern--for consistency the same thing should be done for all the other links: cases like foohttp://www.yahoo.com/ for instance. Finally, I'm also concerned about the complexity of the regular expressions, especially in light of this little gem from the "perlre" manpage:

       One warning: particularly complicated regular expressions
       can take exponential time to solve due to the immense
       number of possible ways they can use backtracking to try [to match.]

The perlre manpage also says that zero-width lookahead/behind is OK, but gives no reassurances for width-1 lookbehind... :-(

This case should be fine. The problem arises when the pattern contains a lot of alternation (a limitation of using DeterministicFiniteAutomata? to match). In general, if it would be difficult for you as a human to look at text and determine how exactly it matched the pattern, exponential time is required to match it; otherwise not.

After KeptPages and HiddenPages are implemented I'll take another serious look at this issue. --CliffordAdams

From perlre(1):

       Perl defines the following zero-width assertions:

           \b  Match a word boundary
           \B  Match a non-(word boundary)

Shouldn't that help? Then again, I have never looked at the script. --AlexSchroeder

I don't really want to rely on Perl's definition of a "word boundary" matching the wiki's definition. Sometimes I wonder if wikis have been successful despite the difficulty of explaining LinkPatterns to beginners. --CliffordAdams

I fixed it using \b. Suck it. ;) -- SunirShah
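
For the record, the two approaches discussed above look like this on a much-simplified stand-in for the real LinkPattern (illustrative only; the actual pattern in wiki.pl is considerably hairier, especially with subpages):

  my $WikiWord = '[A-Z][a-z]+(?:[A-Z][a-z]+)+';    # much-simplified stand-in pattern

  my $lookbehind = qr/(?<!\w)($WikiWord)/;         # option 1: negative lookbehind
  my $boundary   = qr/\b($WikiWord)/;              # option 2: the \b fix used here

  foreach my $text ('fooMeatballWiki', '(MeatballWikiBugs)', 'MeatballWiki') {
    print "$text: ", ($text =~ $boundary ? "links as $1" : "no link"), "\n";
  }
  # fooMeatballWiki: no link -- \b stops the mid-word match
  # (MeatballWikiBugs): links as MeatballWikiBugs -- unlike a \s prefix, \b allows this
  # MeatballWiki: links as MeatballWiki -- start of string still works
  # (the lookbehind form gives the same results on these three examples)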


Ordered lists fail with paragraph breaks:

  1. This is number 1. Good.

  1. This is number 1. Bad. Should be 2. Note the space between the above line and this one.

If you ask for two separate lists, you get two separate lists.

The old code used to try very hard not to close lists, so it allowed <P> tags within the lists. The old HTML output was like:


  <OL>
  <LI>Item One
  <p>
  <LI>Item Two
  <p>
  </OL>
  Next paragraph of ordinary text.

Note the second P tag is within the list, and not adjacent to the real paragraph. This was a real problem for PRE-formatted regions on the C2 wiki (which use similar code).

The new HTML output is like:


  <OL>
  <LI>Item One
  </OL>
  <p>
  <OL>
  <LI>Item Two
  </OL>
  <p>
  Next paragraph of ordinary text.

...where each list is closed before outputting the paragraph.

Also, consider the following:

# One

# Two

# Three



# One (second list)

# Two

# Three

  1. One

  1. Two

  1. Three

  1. One (second list)

  1. Two

  1. Three

...should this be displayed as a single list with 6 items?

I'm still thinking about what is the right thing to do. I've even considered removing support for the ordered lists completely. (In my opinion, if you want that much control over presentation, HTML may be better than wiki-markup.) I'd like to move the wiki closer to XHTML compliance, which may require some changes. Any suggestions? --CliffordAdams

I think the current practice of left-aligning the bullets is a mistake. First, it looks ugly. Second, it's hard to use. I think something more in line with Python's system would have been better. In that case, it would be easy to solve your problem just by matching indents and counting:


    1 One

    2 Two

    3 Three



    1 One (second list)

    2 Two

    3 Three

Also, the WikiSyntax would look like what was output. I think the goal for WikiSyntax would be to correctly HTMLize what used to be .TXT files. For example,


    * '''Foo.''' Today, archaeologists discovered in an Anasazi village in Arizona the origins of the word ''Foo.'' Apparently, they meant it to mean, "You know, like, stuff." Scientists are perplexed, but they think it points to extraterrestrial intelligence.

         * On the other hand, scientists in southern France have unearthed cave paintings of Ugh, the rock opera star from the Palezoic, apparently enchanting an audience with a rendition of "O Foo, How I Argh Ugh Gah".

    * Then again, this story is really stupid. However, I just want to use up a lot of text in order to show a point.

         * Maybe so.

is more legible (at least in the TEXTAREA) than


* '''Foo.''' Today, archaeologists discovered in an Anasazi village in Arizona the origins of the word ''Foo.'' Apparently, they meant it to mean, "You know, like, stuff." Scientists are perplexed, but they think it points to extraterrestrial intelligence.

** On the other hand, scientists in southern France have unearthed cave paintings of Ugh, the rock opera star from the Palezoic, apparently enchanting an audience with a rendition of "O Foo, How I Argh Ugh Gah".* Or maybe.

* Then again, this story is really stupid. However, I just want to use up a lot of text in order to show a point.

** Maybe so.

On the other hand, subtle spacing errors can cause problems. -- SunirShah
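
A sketch of what "matching indents and counting" could mean in practice (purely illustrative; this is not how wiki.pl parses lists, and the exact rules here are guesses):

  # Items keep counting as long as they share the same indentation, even across
  # blank lines; a line of ordinary text closes the list and resets the counter.
  my $source = "  # One\n\n  # Two\n\nPlain paragraph.\n\n  # One again\n";
  my ($count, $indent) = (0, -1);
  foreach my $line (split(/\n/, $source)) {
    next if ($line =~ /^\s*$/);                     # blank lines don't end the list
    if ($line =~ /^(\s+)\#\s*(.*)$/) {
      my $here = length($1);
      $count = 0  if ($here != $indent);            # new indent level => new list
      $indent = $here;
      printf("%*s%d. %s\n", $here, '', ++$count, $2);
    } else {
      ($count, $indent) = (0, -1);                  # ordinary text closes the list
    }
  }
  # prints:  "  1. One", "  2. Two", "  1. One again"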

Later. I just submitted code for arbitrary named counters on UseMod:UseModWiki/Patches. This would provide somewhat of a solution to the above problem by doing something like:

:#i#.  This is number 1. Good.

:#i#.  This is number 2. Good. Note the space between the above line and this one.

Even later. I decided not to add this patch to MeatballWiki, FeatureKarma being fairly low at the moment. After all, I did add TableOfContents. -- SunirShah


Redirecting to a redirect does/doesn't work? [Feature]

At the moment, REDIRECT is not recursive. At first thought this might be considered a bug, but one would hope that someone with the nous to author a #REDIRECT would check that the consequent redirection actually works. By being non-recursive, the code avoids getting stuck in a recursion loop (easy enough to code around, but who wants to code all day?). It also prevents arcane ContentSwizzling.

This is a feature. Another reason is to reduce the worst-case time taken by the wiki script--opening a page is a non-trivial expense in the CGI context. --CliffordAdams
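
In code terms, non-recursive redirection is just a single-step lookup, roughly like this (a sketch; the real redirect handling in wiki.pl differs in detail):

  # Follow at most one #REDIRECT: if the target is itself a redirect, just show
  # that page's source rather than chase the chain (no loops, no ContentSwizzling).
  sub ResolveRedirectOnce {
    my ($id, $pageText) = @_;
    if ($pageText =~ /^\#REDIRECT\s+(\S+)/i) {
      return $1;                 # open and render the target page, once
    }
    return $id;                  # not a redirect; render this page as-is
  }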


GetNewUserId? seems flawed [Error message bug]

To me it seems that the function &RequestLock? should be called after the last while loop, not before.

  sub GetNewUserId {
    my ($id);

    # Scan for a free user ID from coarse to fine (by 1000s, 100s, then 10s);
    # the lock is deliberately taken as late as possible (see discussion below).
    $id = 1001;
    while (-f &UserDataFilename($id+1000)) {
      $id += 1000;
    }
    while (-f &UserDataFilename($id+100)) {
      $id += 100;
    }
    while (-f &UserDataFilename($id+10)) {
      $id += 10;
    }
    # The final by-one scan and the reservation happen under the lock,
    # so two processes cannot grab the same ID.
    &RequestLock() or die(T('Could not get user-ID lock, last id = ' . $id));
    while (-f &UserDataFilename($id)) {
      $id++;
    }
    &WriteStringToFile(&UserDataFilename($id), "lock");  # reserve the ID
    &ReleaseLock();
    return $id;
  }

The placement of the lock is correct. If the lock request was after the last while loop, two different processes could get the same $id number. The lock request could be placed before the first while loop, but I prefer to keep the wiki unlocked as much as possible, so I left it unlocked until the last reasonable moment.

I think I see the cause of the confusion--if the lock request fails, the curent code reports something like "last id = 3190" even if the last existing ID is slightly higher (like 3194). I don't really remember why I had the code report the last id number--it isn't relevant to any reasons for failing to get a lock. I will change the message. (The message is also broken because it should use the translate-string message--the code above requests a translation of "..., last id = 3190", where 3190 will change.) So I guess you did find a bug--just not the one you thought you found. --CliffordAdams

Clifford, the 'last id' bit was added by me in my local copy for debugging. I should have been more careful when I pasted some code here. Sorry for the confusion. --ErikZachte?


CategoryMeatballWiki

