Let me circle this back to ads in RSS feeds. You can be fairly sure that every single person subscribed to your feed is a daily reader and it's not likely random searchers would add your feed. The people reading your feed are using a feed because they don't want to miss a single word you're saying. They're not just fans reading your site, they're more die-hard than that. Who would you subject to advertising, if you had a say in the matter: random visitors or your biggest fans?
I think Matt is overlooking the fact that RSS is beginning to cross over from personal publishing to commercial publishing. News organizations and professional bloggers aren't necessarily subscribed to because die-hard fans don't want to miss a word. Of the few hundred feeds I'm subscribed to, a very small (and shrinking!) percentage are personal subscriptions. The majority of my subscriptions are "industry" related - topical news aggregators, keywords from Del.icio.us, and single-issue weblogs like Read/Write Web or if:Book feature prominently.
RSS is now my primary approach to many of these sites. A number of them subsist entirely on ad money. While I'm not thrilled to be advertised at in my own feed reader, I do acknowledge that a lot of these high-volume skim-feeds may not ever get a proper visit from me, and therefore need to push their support mechanism out where I can see it. As long as the ads stay plain text and on topic, I'll suffer them.
Bill Gates, describing customizable television in Newsweek:
"Ninety percent of that stuff you don't care about," says Gates. "We'll let you have a custom ticker [with stock quotes, scores and other information that you pick]."
I sent this mail to Richard MacManus the other day. It's a response to his post about the suckiness of online feedreaders, and a manifesto of sorts about the qualities that make a good RSS reader. Writing it helped clarify some of my thoughts about potentially interesting extensions to ReBlog.
In Competition for Bloglines, you say: "...the User Interface of Bloglines is beginning to get very creaky. It still uses frames, for crying out loud! There's not a whiff of Ajax in the Bloglines UI and nary a hint of tagging."
I don't know if Ajax and tags are the magic bullets that can make or break a web-based feed aggregator.
Ajax is a great tool, but I don't think it's really relevant here (more below).
Tags are post-processing organizational aids: encounter a resource, read/view/understand it, and then tag it for later recall. This does not help with the efficiency of processing new information, and in fact hinders it by creating an additional categorization step for new items. It's analogous to Apple's new Spotlight feature in Mac OS X Tiger: they added a file-processing step to the kernel that indexes resources as they are written to the filesystem, and this takes time. Thankfully it's just CPU time and not cognitive time - tagging always explicitly takes someone's attention.
The great strength of Del.icio.us is that it helps people leverage each others' attention, creating something like a distributed computing application for categorization... "Taxonomizing@Home!" =oD
The problem with a lot of feed readers is that they have a simple list-of-lists UI. None of them learn from past behavior to affect incoming information. You're just presented with a list of feeds, and a list of items in each of those feeds. If you're subscribed to the "Map" tag from Del.icio.us, for example, you see Map24 and Google Maps scroll by a few dozen times a day. Over time, this detracts from the value of the Del.icio.us feed by continually showing too much old information. There have been suggestions on the Del-discuss list to include a link in the feed for a tag only when it is first added, but it has been rightly pointed out that this strips out too much information: it's important to know that 100 people tagged this link, while only 10 people tagged that link.
I think there's some pent-up demand for an RSS client that handles this logic, because no matter how intelligent Del.icio.us gets about dupes, it will never be able to know that you've just seen the same link on Waxy, Kottke, Slashdot and Mefi. Steve Gillmor has written about an idea he calls "Information Triage", which describes a possible approach to dealing with the deluge.
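To make the triage idea concrete, here is a minimal sketch of the client-side dedup logic I mean. The feed names and item tuples are made up for illustration; the point is that grouping incoming items by link preserves the popularity signal (how many feeds carried it) instead of silently stripping repeats.

```python
from collections import defaultdict

# Hypothetical incoming items: (feed_name, link, title) tuples.
items = [
    ("Waxy", "http://maps.google.com/", "Google Maps"),
    ("Kottke", "http://maps.google.com/", "Google Maps launches"),
    ("Slashdot", "http://maps.google.com/", "Google does maps"),
    ("Mefi", "http://example.org/other", "Something else"),
]

def triage(items):
    """Group items by link, keeping a count and the set of feeds that carried it."""
    seen = defaultdict(lambda: {"count": 0, "feeds": set()})
    for feed, link, title in items:
        seen[link]["count"] += 1
        seen[link]["feeds"].add(feed)
    return seen

# One line per URL, with its cross-feed popularity attached.
for link, info in triage(items).items():
    print(link, info["count"], sorted(info["feeds"]))
```

A reader built on this could show "Google Maps (seen in 3 of your feeds)" as a single entry, which is exactly the information that stripping duplicates at the server would destroy.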
I have a personal interest in all this, because I am one of the maintainers of ReBlog, a web-based feed reader whose specialties are republishing (an outgoing feed of items you deem interesting is published; this currently powers Eyebeam's ReBlog, Unmediated, and other sites) and a swanky user interface that makes feed-processing faster and easier through respect for Fitts's Law and a forthcoming keyboard-navigation interface.
Unfortunately, my own experience with subscribing to 200+ feeds has been frustrating: if I don't keep up with them, they quickly spin out of control. I don't think it's unusual for heavy RSS users to open up their reader software in the morning to find 1000+ unread items, with no sensible way to understand which items are new, which ones are repeats, or where to devote attention first. My personal feeling is that in order to successfully handle a large number of subscriptions, a good RSS reader must have some concept of statistical analysis, similar to Bayesian spam-filtering methods. If you've ever used DEVONThink, you'll know what I mean.
If the software can tell me that an item I'm looking at contains similar language (or links to the same URL) as 20 other items from 5 different feeds I'm subscribed to, this is tremendously valuable. I can start to operate on information in the aggregate. I can use my reader as a force multiplier, to get a high-level overview of the information flowing my way. I don't have to drink from an RSS firehose all the time.
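A toy version of that "contains similar language" test could be as simple as word-set overlap between items. This is a sketch under obvious assumptions (the tokenizer is crude and the 0.3 cutoff is an arbitrary placeholder, not anything DEVONThink actually does), but it shows how little machinery is needed to start surfacing related items.

```python
import re

def tokens(text):
    """Crude tokenizer: lowercase alphanumeric words only."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def similarity(a, b):
    """Jaccard overlap between two items' word sets (0.0 to 1.0)."""
    ta, tb = tokens(a), tokens(b)
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

def related(item, corpus, cutoff=0.3):
    """Everything in the corpus above an (arbitrary) similarity cutoff."""
    return [c for c in corpus if similarity(item, c) >= cutoff]
```

A real reader would want something smarter (stemming, link extraction, weighting rare words), but even this would let the software say "this item resembles 20 others you've already seen."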
Ajax has a place here as an interface assist. ReBlog already uses Ajax techniques heavily, to archive & publish items. These could be extended to providing more information about items as well. I'm imagining a reader that slowly fills in context about a given RSS item as you read it, by requesting and displaying stats about the popularity of that link within your set of subscriptions, or historical data such as when it first crossed your path. It could provide an option to ignore all future occurrences of the URL - who needs to see 30+ links to Backpack in a given week, after you've seen one and acknowledged it?
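The ignore-future-occurrences feature sketches out to very little code. This is a hypothetical illustration, not anything ReBlog ships: a set of acknowledged URLs, consulted as new items arrive.

```python
class IgnoreList:
    """Remembers acknowledged URLs and filters future items carrying them."""

    def __init__(self):
        self.acknowledged = set()

    def acknowledge(self, url):
        """Mark a URL as seen-and-dealt-with."""
        self.acknowledged.add(url)

    def filter(self, items):
        """Drop any (feed, url, title) item whose URL was already acknowledged."""
        return [item for item in items if item[1] not in self.acknowledged]
```

Once you've acknowledged the Backpack announcement, the other 29 links to it that week simply never reach your eyes.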
This kind of smart-client aggregation triage could be a serious advantage for anyone who needs to keep up with large volumes of information on a regular basis.
Disney has deemed irreverence one of the five core equities of the Muppets (humorous, heartwarming, puppet-inspired and topical being the other four).
I think a new divide has opened up, one that is based far more on choice than on circumstance. Several million people (and the number is growing, daily) have chosen to become the haves of the Internet, and at the same time that their number is growing, so are their skills.
Is it a divide if it's based on choice? Sounds more like the classic early/late adopter distinction to me. I basically fit into the left-hand column for every row in Seth's table, which is an odd feeling. Bored with Flickr is a little harsh, but I must say that I'm more excited by the opportunities for social behavior that Upcoming affords than the passive social behavior that Flickr does. Flickr : past :: Upcoming : future.
Mboffin posted an animation showing the development of a basic web log in the form of a stream of screenshots. It's a 900k GIF, coincidentally the same dimensions as the monochrome monitor of my first computer.
My first impression is that it's a disarmingly simple demonstration of standards-based web design that explains the process better than any book or advocacy project ever did. It approaches Jon Udell's Heavy Metal Umlaut movie in communicative density and progressive unveiling of an idea.
The animation starts with a blank page, and a title: "Site Name." The process of constructing a website framework in this way and the mode of presentation feels like a handy metaphor for a few larger trends in current development for the web (more later).
Site content is slowly added. The page is built from the information up, a radical departure from the html terrorism of the bad old days. New chunks of content are added using obviously semantic markup: headers, unordered lists, plain paragraphs. When I was designing for the web before 2000 or so, I would start at the other end: sketch out a site design, build up a comp in Illustrator and Photoshop, decide where the table cut lines would need to be, and then work out the tedious process of pixel arithmetic GIF slicing. Flexibility had to be considered and planned for. Change was expensive.
Visual design comes next, and proceeds according to the internal logic of the box model. It's worth noticing that at this point in Mboffin's animation, work on the HTML effectively stops, and all effort moves to the style sheets. Previously generated chunks of content are grouped into columns and moved about the page, with backgrounds and borders adjusted. CSS is uniquely well-suited to this sort of incremental play and parameter adjustment, and closely matches Malcolm McCullough's understanding of craft as an activity which requires both skill (planning, foresight) and feedback (responsiveness, flexibility). It provides a language that simultaneously supports generalizations (e.g., "make all the borders red, and give them 10 pixels of space on the left") and specificity ("...except this one").
The final appearance is unmistakably now, from the image-header to the centered dual-column box and subtle gradients. I love the way in which it was approached as a loose pattern with cumulative refinements, rather than a rigidly pixel-specified Photoshop comp.
This particular approach to simplicity feels like an idea that is currently hitting its stride in web development circles, expressed through the mantra of "release early, release often". This example starts from the core of an idea, the textual content of the website. Other examples include event-sharing services and junk-management fads that focus narrowly on a single primary goal: short stories next to the sprawling novels of 2001's obsession with content-management behemoths. It will be fun to participate in the aggregation of all these porous mini-services into a loosely-coupled "emergent Internet operating system" over the next few years.