See the revision numbers for WxWikiServer for an example. The count is wrong too.
Hard to believe this hasn't been found and rectified ages ago. Occurred with Netscape 7.1. Haven't checked whether it happens with other browsers.
Other wikis don't need to do this, so it would be interesting to know why it needs to be done.
Add RecentChanges title to RSS.
Improve Tavi-style history to always display the diff table, even after clicking on "Compare". Consider disabling the (diff) link on RecentChanges, as it is redundant with the Tavi-style history. Removing a feature makes for good FeatureKarma.
Upgrade TextFormattingRules with new syntax.
Fix KeptPages. It seems to be keeping the oldest entry, even if it isn't the last author, major, or minor change. Once again, it's been two weeks since the page was replaced. Goddamn, I invented it and even I don't get it. --ss
Look at the other bugs on this page, in particular fooLinkPattern: fooMeatBall:link foohttp://example.com fooRFC1234 fooISBN1234567980. Done. Put a \b in front of all the patterns in CommonMarkup.
MeatBall:action=rc&rcidonly=not_a_link doesn't throw an error.
The table of contents and numbered headings include entries in the diff output! Fixed.
Or not Fixed.
There's some strangeness on WantedPages. For example, "AnswerMe" shows up as having one backlink, but when I click on the backlink search, I see two pages linking to it. Also, strangely, "re" appears as a wanted page with one backlink. But its backlink search spits out every single page in the whole wiki. Not really a big deal, just wanted to call it to attention.
H'm. I had guessed that the "back" action actually polled the LinkDatabase, but I guess it only does a text search. Maybe that explains why "back=re" pulls up the whole database; "back=X" must search for the WikiName regex in X, and in the case of "re" it turns up the empty string (which matches every page). Right now "re" is at #2574, and I don't see it in a quick scan of the LinkDatabase, so I wonder where WantedPages is getting it from. And I just noticed "e" at #1813.
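If that guess is right, the empty-match failure is easy to demonstrate in isolation; a minimal sketch with a simplified WikiName pattern (the real LinkPattern and the actual "back" code may well differ):

 #!/usr/bin/perl
 use strict;
 use warnings;

 # Simplified WikiName pattern; UseModWiki's real LinkPattern is more involved.
 my $LinkPattern = qr/[A-Z][a-z]+(?:[A-Z][a-z]+)+/;
 my @pages = ('AnswerMe is wanted here', 'nothing relevant on this page');

 for my $arg ('AnswerMe', 're') {
     # Suspected behaviour: pull whatever WikiName appears in the argument;
     # for 're' there is none, leaving an empty search string.
     my ($name) = $arg =~ /($LinkPattern)/;
     $name = '' unless defined $name;

     # index() finds the empty string in every page, so 're' "links" everywhere.
     my @hits = grep { index($_, $name) >= 0 } @pages;
     printf "back=%-8s matches %d of %d pages\n", $arg, scalar(@hits), scalar(@pages);
 }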
Long pages cause browser and server problems.
Not actually a bug in the meatball script, but a problem that it may have to work around...
KuroshinSubmissionQueue is [no longer - see below] currently too long to be saved by the MacOS versions of Mozilla, Netscape, IE, or iCab. Attempting to save it using those browsers produced a variety of broken behaviors, usually resulting in truncated content.
Even when using a non-buggy browser, a somewhat longer page actually produced a server error from Apache. It seems that extremely long pages cannot be handled reliably by web browsers or servers. Is there some way that UseModWiki can handle this gracefully?
DANGEROUS DAMAGING BUG
Try this: find a long page, go into edit mode, click preview, and while waiting for the request to be completely sent to the server, cancel the page load (apple-period, ctrl-C, ESC, close window, big red switch, whatever). Now go look at that page again, and you'll find it's been truncated. It looks like the UseMod code sees an incoming request, doesn't get to see the SUBMIT=Preview, and treats it as a SUBMIT=Edit.
This just happened (2001-05-25.2200) to me (EricScheid) with [FeatureKarma]
I can understand how this came to be, though - the edit form can be submitted by pressing ENTER while the focus is in the summary field, and thus the SUBMIT=SAVE field doesn't get sent. Losing new content is worse than having to just go back a version. You could insert an extra hidden field on the form after the TEXTAREA and, if it's absent, throw an exception ... although technically you shouldn't rely on browsers sending form fields in the same order as they appear in the form (even though just about every browser does).
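A sketch of that idea, using invented field names (this is not what wiki.pl currently does): emit one extra hidden field after the TEXTAREA and treat its absence as a truncated submission.

 #!/usr/bin/perl
 use strict;
 use warnings;
 use CGI;

 my $q = CGI->new;

 # When generating the edit form, print this *after* the <textarea>:
 #   print $q->hidden(-name => 'complete', -value => '1');
 # A request cut off partway through the body will usually be missing
 # this trailing field (assuming browsers send fields in form order).

 unless (defined $q->param('complete')) {
     # Refuse to save rather than silently storing a truncated page.
     print $q->header(-status => '400 Bad Request'),
           "Your edit appears to have been truncated; the page was not saved.\n";
     exit;
 }

 print $q->header(), "Edit accepted.\n";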
Darn, I was just about to suggest a warning signal if the received edit appears to be a truncation, but then I realised that in many cases the user won't see that response from the server (eg. close window, power loss, user-cancel, etc)
I was almost positive that UseModWiki stripped carriage returns from input, leaving only newlines in each page. But occasionally I see pages that are just barely tweaked, but for which the diff shows the entire page under both "deleted" and "added". It might make things easier if carriage returns were stripped when saving a page.
This isn't the problem. The reason why the diff is spaced out is that in sub ColorDiff the regex

 s/\r?\n/<br>/g

turns all newlines into line breaks. Changing this to paragraph breaks didn't look very nice either. I'm not overly tempted to change it, as the diff output matches the underlying WikiSyntax more accurately than the rendered page, and if you're going to use the diff to fix the page or to reply, then it's somewhat more useful to see each separate line, well, separated. -- SunirShah
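For reference, the stripping suggested above would be a one-line normalization at save time; a minimal sketch (not what wiki.pl does today):

 #!/usr/bin/perl
 use strict;
 use warnings;

 my $text = "line one\r\nline two\rline three\n";

 # Fold CRLF and bare CR into LF before storing, so that a change of
 # line endings alone can never make the diff mark the whole page.
 $text =~ s/\r\n?/\n/g;

 print $text;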
Cliff, when I view a prior version, edit it, and then save it, I am told that my edit conflicts with another edit. I suspect this leaked through testing 'cause you tested pages that only you had edited, which thereby escape your conflict detector. Unfortunately, pressing save from the conflict editor repeatedly brings me right back to the conflict editor.
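Presumably the detector compares the revision the edit form was opened from against the page's current revision; editing a prior version trips exactly that check. A sketch with invented names:

 #!/usr/bin/perl
 use strict;
 use warnings;

 # Sketch of base-revision conflict detection; names are invented,
 # not wiki.pl's. The edit form carries the revision it was opened
 # from, and the save handler compares it to the current revision.
 my $current_revision = 12;

 sub SaveEdit {
     my ($base_revision, $new_text) = @_;
     if ($base_revision != $current_revision) {
         # Editing an old revision trips this check even when no one
         # else has touched the page in the meantime.
         return "conflict: page is at r$current_revision, edit is based on r$base_revision";
     }
     $current_revision++;
     return "saved as r$current_revision";
 }

 print SaveEdit(12, 'fresh edit'),       "\n";  # saves
 print SaveEdit(9,  'edit of old text'), "\n";  # always reports a conflict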
Cached pages: (semi-resolved)
I just edited LimitTemptation. It showed up on the page. I then decided I wanted to add a little more. When I clicked on EditPage?, it brought up the previous version, not the latest version. -- AnonymousDonor
OK, I just realized I had my browser set to always load from cache. It works when I say never load from cache. -- AnonymousDonor
This is still kind of a bug. There should be a meta tag telling the browser to reload every time. Not like InternetExplorer pays attention, though.
I may look into this further. Does anyone know of a good site which explains what should be done in this case?
I think http://www.mnot.net/cache_docs/ is a good document on the issue. -- kkovacs
From some experimentation, it seems that browsers usually do the "right thing" if the URL contains the magic word "cgi" (like /cgi, or /cgi-bin). Normal-looking wiki URLs (without "cgi") tend to require refreshing to get the new page content.
I think I've solved the caching issue in my AcadWiki implementation (a PhpWiki). (There's only a remaining problem with GMT vs. local TZ offsets, but that can be defined by the wiki maintainer; currently I have a two-hour mismatch problem.) See http://xarch.tu-graz.ac.at/home/rurban/acadwiki-1.3.5pre/viewsrc.php?show=lib/stdlib.php#src for the first function LastModifiedHeader?(), which looks for the If-Modified-Since header from the client, looks in the db for a matching lastmodified timestamp, and sends either a 304 "Not Modified" result or the Last-Modified header as a plain header and/or http-equiv meta tag.
Script-generated data just sends the timestamp of the latest edit.
What I still have to add is a check not only of the page time in the database, but also of all scripts and templates which could influence the HTML. I never needed the pragma: no-cache header so far. It just works on Netscape and Internet Explorer, though Netscape has much better diagnostics (Ctrl-I). --ReiniUrban
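Translated into UseModWiki's Perl, the same conditional-GET logic would look roughly like this; a sketch, where PageLastModified is an invented stand-in for a real lookup of the page's last-edit time:

 #!/usr/bin/perl
 use strict;
 use warnings;
 use CGI;
 use HTTP::Date qw(time2str str2time);   # part of libwww-perl

 my $q = CGI->new;

 # Invented helper: would read the page's last-edit time from the db.
 sub PageLastModified { return time() - 3600 }

 my $last = PageLastModified();
 my $ims  = $ENV{HTTP_IF_MODIFIED_SINCE};

 if (defined $ims && defined str2time($ims) && str2time($ims) >= $last) {
     # The client's copy is current; send 304 and no body.
     print $q->header(-status => '304 Not Modified');
     exit;
 }

 print $q->header(-type => 'text/html', -last_modified => time2str($last));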
I installed UseModWiki at our site and ran into this problem immediately. Since our users use just about every possible browser, ranging from Lynx to Opera to IE to anything else, I had to find a solution.
 --- usemod092/wiki.pl	Sun Apr 22 00:44:10 2001
 +++ wiki.cgi	Sun Apr 29 00:01:57 2001
 @@ -973,6 +973,8 @@
  sub GetHttpHeader {
    my $cookie;
 +  my $now;
 +  $now = gmtime;
    if (defined($SetCookie{'id'})) {
      $cookie = "$CookieName=" . "rev&" . $SetCookie{'rev'}
 @@ -981,12 +983,20 @@
      $cookie .= ";expires=Fri, 08-Sep-2010 19:48:23 GMT";
      if ($HttpCharset ne '') {
        return $q->header(-cookie=>$cookie,
 +                        -pragma=>"no-cache",
 +                        -cache_control=>"no-cache",
 +                        -last_modified=>"$now",
 +                        -expires=>"+10s",
                          -type=>"text/html; charset=$HttpCharset");
      }
      return $q->header(-cookie=>$cookie);
    }
    if ($HttpCharset ne '') {
 -    return $q->header(-type=>"text/html; charset=$HttpCharset");
 +    return $q->header(-type=>"text/html; charset=$HttpCharset",
 +                      -pragma=>"no-cache",
 +                      -cache_control=>"no-cache",
 +                      -last_modified=>"$now",
 +                      -expires=>"+10s");
    }
    return $q->header();
  }
This pretty much solved the issue for us. I hope it (or something like this) will be included in UseModWiki 0.93.
-- kkovacs
bugMeatballWiki. Enough said.
The LinkPattern is supposed to be easy, notaPainIntheButt. ;) Note the case fooBarBaz. What does the ? signify? The unknown page "fooBarBaz" or "BarBaz"? Is it really that difficult to put a \W or a \s before the pattern match? -- SunirShah
One warning: particularly complicated regular expressions can take exponential time to solve, due to the immense number of possible ways they can use backtracking to try to match.
From perlre(1):
Perl defines the following zero-width assertions:
 \b  Match a word boundary
 \B  Match a non-(word boundary)
Shouldn't that help? Then again, I have never looked at the script. --AlexSchroeder
I fixed it using \b. Suck it. ;) -- SunirShah
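For the record, the shape of the fix, shown with a much-simplified two-hump pattern rather than the real ones in CommonMarkup:

 #!/usr/bin/perl
 use strict;
 use warnings;

 # Simplified WikiName pattern with the \b fix applied in front.
 my $LinkPattern = qr/\b[A-Z][a-z]+(?:[A-Z][a-z]+)+\b/;

 for my $text ('MeatballWiki is a wiki.', 'bugMeatballWiki. Enough said.') {
     if ($text =~ /($LinkPattern)/) {
         print "link found: $1\n";     # matches the real WikiName
     } else {
         print "no link in: $text\n";  # \b stops the fooWikiName match
     }
 }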
Ordinal lists fail with paragraph breaks:
If you ask for two separate lists, you get two separate lists.
The old code used to try very hard not to close lists, so it allowed <P> tags within the lists. The old HTML output was like:
 <OL>
 <LI>Item One
 <p>
 <LI>Item Two
 <p>
 </OL>
 Next paragraph of ordinary text.
Note the second P tag is within the list, and not adjacent to the real paragraph. This was a real problem for PRE-formatted regions on the C2 wiki (which use similar code).
The new HTML output is like:
 <OL>
 <LI>Item One
 </OL>
 <p>
 <OL>
 <LI>Item Two
 </OL>
 <p>
 Next paragraph of ordinary text.
...where each list is closed before outputting the paragraph.
Also, consider the following:
 # One
 # Two
 # Three

 # One (second list)
 # Two
 # Three
...should this be displayed as a single list with 6 items?
I'm still thinking about what is the right thing to do. I've even considered removing support for the ordered lists completely. (In my opinion, if you want that much control over presentation, HTML may be better than wiki-markup.) I'd like to move the wiki closer to XHTML compliance, which may require some changes. Any suggestions? --CliffordAdams
I think the current practice of left-aligning the bullets is a mistake. First, it looks ugly. Second, it's hard to use. I think something more in line with Python's system would have been better. In that case, it would be easy to solve your problem just by matching indents and counting (a sketch follows at the end of this comment):
 1 One
 2 Two
 3 Three

 1 One (second list)
 2 Two
 3 Three
Also, the WikiSyntax would look like what was output. I think the goal for WikiSyntax would be to correctly HTMLize what used to be .TXT files. For example,
 *   '''Foo.''' Today, archaeologists discovered in an Anasazi village
     in Arizona the origins of the word ''Foo.'' Apparently, they meant
     it to mean, "You know, like, stuff." Scientists are perplexed, but
     they think it points to extraterrestrial intelligence.
 *   On the other hand, scientists in southern France have unearthed
     cave paintings of Ugh, the rock opera star from the Paleozoic,
     apparently enchanting an audience with a rendition of "O Foo, How
     I Argh Ugh Gah".
 *   Then again, this story is really stupid. However, I just want to
     use up a lot of text in order to show a point.
 *   Maybe so.
is more legible (at least in the TEXTAREA) than
 * '''Foo.''' Today, archaeologists discovered in an Anasazi village in Arizona the origins of the word ''Foo.'' Apparently, they meant it to mean, "You know, like, stuff." Scientists are perplexed, but they think it points to extraterrestrial intelligence.
 ** On the other hand, scientists in southern France have unearthed cave paintings of Ugh, the rock opera star from the Paleozoic, apparently enchanting an audience with a rendition of "O Foo, How I Argh Ugh Gah".
 * Or maybe.
 * Then again, this story is really stupid. However, I just want to use up a lot of text in order to show a point.
 ** Maybe so.
On the other hand, subtle spacing errors can cause problems. -- SunirShah
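Here is that indent-matching idea as a tiny renderer sketch; the markup and the logic are illustrative only, not UseModWiki's:

 #!/usr/bin/perl
 use strict;
 use warnings;

 # A numbered item restarts the list when its counter is 1, and a
 # blank line closes the current list before the next paragraph.
 my @lines = ('1 One', '2 Two', '3 Three', '',
              '1 One (second list)', '2 Two', '3 Three');

 my $in_list = 0;
 for my $line (@lines) {
     if ($line =~ /^(\d+)\s+(.*)/) {
         my ($n, $text) = ($1, $2);
         if (!$in_list || $n == 1) {
             print "</OL>\n" if $in_list;   # close any previous list
             print "<OL>\n";
             $in_list = 1;
         }
         print "<LI>$text\n";
     } else {
         print "</OL>\n" if $in_list;       # paragraph break ends the list
         $in_list = 0;
         print "<p>\n";
     }
 }
 print "</OL>\n" if $in_list;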
Later. I just submitted code for arbitrary named counters on UseMod:UseModWiki/Patches. This would provide somewhat of a solution to the above problem by doing something à la:
 :#i#. This is number 1. Good.

 :#i#. This is number 2. Good. Note the space between the above line and this one.
Even later. I decided not to add this patch to MeatballWiki. FeatureKarma being fairly low at the moment. After all, I did add TableOfContents. -- SunirShah
At the moment, REDIRECT is not recursive. At first thought this might be considered a bug, but one would hope that someone with the nous to author a #REDIRECT would check that the consequent redirection actually works. By being non-recursive the code avoids getting stuck in a recursion loop (easy enough to code around, but who wants to code all day?). It also prevents arcane ContentSwizzling.
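The cycle guard really is only a few lines; a sketch of a recursive resolver that refuses to loop (not wiki.pl code):

 #!/usr/bin/perl
 use strict;
 use warnings;

 # Toy redirect table; real pages would be read from the page database.
 my %redirect = (OldName => 'NewName', NewName => 'FinalName',
                 LoopA   => 'LoopB',   LoopB   => 'LoopA');

 sub ResolveRedirect {
     my ($page) = @_;
     my %seen;
     while (exists $redirect{$page}) {
         last if $seen{$page}++;   # cycle detected: stop following
         $page = $redirect{$page};
     }
     return $page;
 }

 print ResolveRedirect('OldName'), "\n";   # FinalName
 print ResolveRedirect('LoopA'),   "\n";   # stops instead of looping forever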
To me it seems that the function &RequestLock? should be called after the last while, not before.
 sub GetNewUserId {
   my ($id);

   $id = 1001;
   while (-f &UserDataFilename($id+1000)) {
     $id += 1000;
   }
   while (-f &UserDataFilename($id+100)) {
     $id += 100;
   }
   while (-f &UserDataFilename($id+10)) {
     $id += 10;
   }
   &RequestLock() or die(T('Could not get user-ID lock, last id = ' . $id));
   while (-f &UserDataFilename($id)) {
     $id++;
   }
   &WriteStringToFile(&UserDataFilename($id), "lock");  # reserve the ID
   &ReleaseLock();
   return $id;
 }