When navigating through a linear stream of information, a user only has direct access to the current location and no other. For instance, when listening to an audio stream, the user only has access to what she is currently listening to. When watching a video, she only has direct access to the current frame. The common solution to this problem is to provide a visual spatial navigation system, such as a time slider or a set of key moments (e.g. chapters on a DVD), which serves as a spatial analogue of the entire stream. This strategy requires a second CommunicationChannel to convey location, one that must be kept actively synchronized with the primary channel. This dichotomy becomes glaringly apparent when the two channels fall out of sync due to an error, such as when the time slider has fallen behind the video stream.
The second channel is not always available. In many situations, particularly those involving audio streams, the user has only the audio stream itself to provide the current context, and wants nothing more. Navigating through a linear stream like this requires some creativity. The common technique is to provide forward and reverse stream sliders. These tools change the velocity at which the user moves through the stream (n.b.: reverse changes the direction). While simple to grasp and implement, sliding along the stream requires visiting every single point between where you are and the desired end point. Consequently, a lot of time is wasted (re)consuming unwanted information. The common solution is to provide highly accelerated sliders, fast forward and fast reverse. Since, per FittsLaw, the user is likely to overshoot when fast sliding, either slower, fine-grained sliders are included to approach the target accurately, or the sliders possess DynamicVelocity to get very close to the target.
This solution is less useful for streams that make little sense when grossly velocity scaled, such as audio, which is incomprehensible at high speeds. What the user needs is a way to understand the current context of the stream while moving quickly through it. This requires playing back the stream at normal speed while navigating through it. This constraint creates two major problems. First, if the stream has to be played at normal speed, it is difficult to move quickly through the stream. Second, playing the stream means moving forward through it, which contradicts the act of reversing.
Therefore, use an AsymmetricJump. First, jump coarsely in large segments (e.g. 30 seconds) backwards / forwards rather than rewinding / fast forwarding. This saves the time spent consuming all the intervening information, which is usually incomprehensible and generally useless. Listening to skipped audio is a throwback to the analog tape and record world, and unnecessary when the digital world allows RandomAccess. This makes even more sense when you consider that you frequently have to stop forwarding / rewinding to listen to the stream and gauge your current position. Jumping combines the actions of moving along the stream, stopping, and playing back the stream from its new position into a single rapid movement.
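To make the mechanics concrete, here is a minimal sketch of the jump itself, in Python. The Player class and its jump method are hypothetical stand-ins for whatever media API is actually in use; the point is only that a jump is a single RandomAccess seek followed by immediate playback at normal speed.

```python
# A minimal sketch (not from the original text) of the coarse jump:
# rather than scrubbing through every intervening second, a jump is one
# random-access seek followed by normal-speed playback from the new spot.

LONG_JUMP = 30  # seconds; the example segment size from the pattern


class Player:
    def __init__(self, duration):
        self.duration = duration   # total stream length in seconds
        self.position = 0.0        # current playback position in seconds

    def jump(self, delta):
        """Seek by `delta` seconds (negative = backwards) and keep playing."""
        self.position = min(max(self.position + delta, 0.0), self.duration)
        # Playback resumes from self.position at normal speed, so the user
        # immediately hears context at the new location instead of a squeal
        # of fast-forwarded audio.
```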
Second, if you go too far in one direction, you need to come back, so you need to jump in the opposite direction. However, this jump should be much shorter than the jump in the major direction (e.g. 7 seconds). Otherwise, there would be no reasonable way to jump to the middle of a segment bounded by the long jump. Suppose you wanted to jump back 40 seconds but both jumps were 30 seconds long. To do so, you'd have to jump back 60 seconds and then sit through 20 seconds of playback to get where you needed to go. Making the opposite jump short makes it much quicker to navigate within segments. The last thing you want to do is frustrate the user by forcing them to consume useless information while waiting for their intended target. Consequently, the short jump can only be as long as the user is willing to wait for their intended target to show up. However, the short jump also needs to be a decent-sized fraction of the long jump, to make it easy to correct an excessive number of long jumps. In an experimental study with audio streams, the 7 second / 30 second pairing seems to work well, particularly since these times conform to the natural speaking times for a sentence and a whole point, respectively.
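As a rough illustration of the arithmetic above (my own sketch, not from the study), the following compares the unwanted listening time needed to reach a point 40 seconds back with symmetric 30 / 30 jumps versus the asymmetric 30 / 7 pair, using the simple strategy of overshooting with long jumps and then correcting with short ones.

```python
# Strategy assumed here: jump back in long jumps until you are at or behind
# the target, correct forward in short jumps while that stays behind it,
# then listen through whatever remains.

import math

def listening_cost(target_back, long_jump, short_jump):
    """Seconds of unwanted playback needed to land exactly on a point
    `target_back` seconds behind the current position."""
    longs = math.ceil(target_back / long_jump)    # long jumps to get behind the target
    overshoot = longs * long_jump - target_back   # how far behind the target we land
    shorts = overshoot // short_jump              # forward corrections that stay behind it
    return overshoot - shorts * short_jump        # leftover time we must listen through

print(listening_cost(40, 30, 30))  # 20 seconds of waiting with symmetric jumps
print(listening_cost(40, 30, 7))   # 6 seconds with the 30 / 7 asymmetric pair
```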
A natural question is, does this really make sense? Consider that when scanning a stream, the user has a target in mind, and that target lies in one direction from the current position (forward or reverse). That means there is already an asymmetry between the directions. Following FittsLaw, the user generally first makes a very coarse-grained move towards the target, and then narrows in using feedback. The asymmetry in jumps makes it easy to move very quickly along the major direction and to correct along the opposite direction with a finer-tuned jump. If you correct too far in the opposite direction, you can jump once more in the major direction and correct again. As long as the two jump lengths stay within a reasonable ratio of each other, this does not take much time.
In practice, you implement this with two buttons, labelled forward and reverse. However, the first button you hit becomes the long jump, and the other the short jump. So if you are listening to a lecture, miss a key point, and want to rewind, you just hit reverse to instantly go back 30 seconds. If that's too far, you naturally correct by hitting forward to go forward 7 seconds at a time. Similarly, if the current section is boring and you want to skip it, you just hit forward. If you go too far forward, say already into the next section, you correct naturally by hitting reverse to go back 7 seconds at a time. After a period equivalent to the long jump, you can reset the jump lengths back to their unassigned state.
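One possible reading of that two-button behaviour as code, reusing the hypothetical Player above. The 30 / 7 lengths and the reset period equal to one long jump are assumptions taken from the examples on this page, not a prescribed implementation.

```python
# A hedged sketch of the two-button controller: the first button pressed
# claims the long jump, the opposite button then performs the short
# correction, and the assignment expires after one long-jump's worth of time.

import time

LONG_JUMP = 30           # seconds, assumed from the pattern's example
SHORT_JUMP = 7
RESET_AFTER = LONG_JUMP  # forget the assignment after one long-jump period


class AsymmetricJumpControl:
    def __init__(self, player):
        self.player = player          # any object exposing jump(seconds)
        self.long_direction = None    # +1 forward, -1 reverse, None = unassigned
        self.last_press = 0.0

    def press(self, direction):
        """direction is +1 for the forward button, -1 for the reverse button."""
        now = time.monotonic()
        if self.long_direction is None or now - self.last_press > RESET_AFTER:
            self.long_direction = direction       # first press claims the long jump
        self.last_press = now
        length = LONG_JUMP if direction == self.long_direction else SHORT_JUMP
        self.player.jump(direction * length)


player = Player(duration=3600)
player.position = 600.0               # pretend we are ten minutes into a lecture
controls = AsymmetricJumpControl(player)
controls.press(-1)   # missed a point: reverse claims the long jump, back 30 s
controls.press(+1)   # went slightly too far: forward is now the short 7 s jump
```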
Discussion
It seems to me that these concepts are at least slightly related to some of my recent work with Page Annotations systems, in which the onMouseOver (especially if the displayed material contains additional hyperlinks) is analogous to the stream slider. -- HansWobbe
How does onmouseover work in your systems? -- SunirShah
I move my mouse over a visual cue to trigger Actions. There are many different ways that I have implemented this concurrently. Some of the more obvious ones are DiiGo annotations. TrailFire is another one. Interestingly, both of these were brought to my attention by FridemarPache. I was noticing this week that I now have 500+ DiiGo annotations and am adding about a dozen a week, since I've found ProtoPage, which automatically polls DiiGo Tag sets and displays the result within Public or Restricted (private) categories. Interestingly, I'm not driving this; my various Audiences are pushing me to feed them more content, quite a bit of which needs to be "need to know", at least initially, while NDAs and other Contractual matters are resolved. I can say a lot more about this, but will only do so if anybody cares enough to actually ask me to. -- HansWobbe