Some sites don't appreciate this because deep links bypass the front page entirely.
Also, the content on those sites exists primarily to draw people through the facades where the hit counters spin wildly. Skipping that whole "buy me" phase defeats the purpose of the site. The owners might even choose to implement a DeepLinkDefense. For the person browsing, though, nothing could be better.
There's also a question of what it means to be deep when all pages on the Internet are published at essentially the same level: a UniformResourceLocator can take you wherever you want to go. Then again, maybe not. It would be hard to argue that http://microsoft.com sits at the same "depth" as http://msdn.microsoft.com, let alone http://msdn.microsoft.com/library/psdk/gdi/cordspac_7ckz.htm.
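One simple way to make that intuition concrete is to count path segments. This is a rough sketch, not a standard metric; `link_depth` is a hypothetical helper:

```python
from urllib.parse import urlparse

def link_depth(url):
    # Hypothetical measure: number of non-empty path segments
    # serves as a rough proxy for how "deep" a link is.
    path = urlparse(url).path
    return len([seg for seg in path.split("/") if seg])

# The URLs from the text land at very different depths:
print(link_depth("http://microsoft.com"))        # 0
print(link_depth("http://msdn.microsoft.com"))   # 0
print(link_depth(
    "http://msdn.microsoft.com/library/psdk/gdi/cordspac_7ckz.htm"))  # 4
```

By this measure both home pages are equally "shallow" despite one being a subdomain, which matches the point above: depth lives in the path, not the host.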
Also, many pages are inaccessible to the public: framesets, session-dependent URLs, pages that depend on cookies, pages that change over time. These can't be DeepLinks, since there is no way to link to them from outside the site.
Another thing to consider is the relative instability of links on a remote system. Domain names are pretty stable, but the rest of the URL usually isn't, despite what TimBernersLee recommends. Therefore, if you want your link to remain unbroken for a long time, don't use a DeepLink, even if we don't agree that it should be this way. And if you don't like the remote site's politics, don't link to it.
(Most web authors seem to know this, but I've found it a problem in hypertext help files. People forget that searching usually produces deep links.)
It's interesting to note that, if you exclude the front page (which no one pays attention to except newcomers) and RecentChanges (which people pay way too much attention to), a wiki is essentially a big ball o' deep links. The URL format is generally simple to remember, and all pages sit at the same "level" in the site. Moreover, wikis built as knowledge repositories, such as MeatballWiki, are meant to be deeply linked into.
Problems associated with DeepLink go away when using a tool like TouchGraphLinkBrowser. --DennisDaniels