About two weeks ago there was a heated debate over the future of RSS – the standard for subscribing to news feeds. Like many technological concepts, RSS can mean many things and needs to be looked at from multiple angles.
RSS, The Consumption of Long-form Content
MG Siegler wrote in a recent TechCrunch article: “It’s a mass consumption tool, not consumption tool for the masses”. I think this is a great point, because one of the main problems with consuming information through, for example, Google Reader is that you need to wade through so many piles of crap to find something interesting.
Consuming long-form content is important, but it needs to happen when there is a very high level of attention. To find that piece of long-form content, you want to go through many pieces of short-form content, not wade through piles of long-form content unnecessarily. It’s questionable whether headlines are good enough for this – I think not. I think you need a lot of ‘activity’ around a certain piece of long-form content for it to reach a good attention level.
This long-form content can be much more than just an article or a book: it’s anything that takes up time (a movie, a song, an event, an interaction, a game, talking with someone, traveling somewhere, etc.). The time that is spent needs to somehow be in sync with the attention of that ‘consumer’.
The consumption of information is not going to happen through either an RSS reader interaction or a noisy social network interaction. It’s going to be something new, and attention and relevance are going to be key.
There have been several alarmists screaming that the internet is making us stupid. I do agree that the internet is changing our brains – sensing what’s happening and having a more ‘symphonic intelligence’ (like an orchestra conductor) are increasingly important skills. But it’s not going to happen through the mass-consumption of shallow tweets; it’s going to happen through zooming: sensing what’s going on and digging deep into places that are relevant.
RSS, The Subscription Interaction
Adding new feeds from a site is another way of piling noise onto your ‘reading list’. Browsers now all support RSS auto-discovery of feeds, but who’s really using that? I’d love to see some statistics. Do you add Twitter RSS feeds or Delicious bookmark feeds to your Google Reader? Perhaps you do.
Google Reader lets you search for blogs, which eases the pain of adding new subscriptions, but it’s still a subscription to a website. Much better would be a client that learns from your implicit interests (behavior, interaction) and explicit interests (users, keywords, companies) – basically, getting rid of the ‘containers’ and mixing up the data streams. Let the data flow to the right attention (synaptically!).
RSS, The Data Standard
I’m a huge fan of open standards, but I’m an even bigger fan of getting shit done. One serious practical obstacle with RSS is that it’s built on XML. I used to love XML until I started using JSON, and I think many developers will agree with me. To summarize a long list of benefits: JSON reduces the time it takes for developers to integrate.
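To make that integration-time claim concrete, here’s a small sketch comparing the two (the feed item is made up; the point is how much ceremony each format needs before you can touch the data):

```python
import json
import xml.etree.ElementTree as ET

# The same hypothetical feed item, once as RSS-style XML, once as JSON.
rss_item = """
<item>
  <title>Hello world</title>
  <link>http://example.com/hello</link>
</item>
"""
json_item = '{"title": "Hello world", "link": "http://example.com/hello"}'

# XML: parse into a tree, then query each field by tag name.
elem = ET.fromstring(rss_item)
xml_title = elem.findtext("title")

# JSON: one call and you already have native dicts and lists.
item = json.loads(json_item)
json_title = item["title"]

print(xml_title, json_title)
```

And this is the trivial case – add namespaces, CDATA and encoding quirks to the XML side and the gap only widens.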
RSS is also not very extensible, which is horrible in an era where we have a serious need for meta-data. Atom is much better in that respect, and a great example of it is the Atom Activity Extensions.
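As a rough illustration of what that extensibility looks like – an Atom entry carrying activity meta-data via a namespaced extension (the verb and object-type URIs follow the Activity Streams draft as I understand it; the entry content itself is made up):

```xml
<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:activity="http://activitystrea.ms/spec/1.0/">
  <title>John posted a photo</title>
  <activity:verb>http://activitystrea.ms/schema/1.0/post</activity:verb>
  <activity:object>
    <activity:object-type>http://activitystrea.ms/schema/1.0/photo</activity:object-type>
  </activity:object>
</entry>
```

Plain RSS 2.0 has no clean equivalent of this – you end up stuffing meaning into `<description>` or inventing ad-hoc elements.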
RSS, The Feed Technology
Feeds are passive, non-real-time and non-event-driven. This imposes severe practical limitations on RSS as a technology. Real-time carriers such as Twitter have appealed greatly to developers. Why is that?
- Developers want to use JSON
- They want to do a bazillion small requests
- They want to keep many connections open and receive real-time events
- They don’t want to poll for changes
- They want meta-data
- They want to be able to integrate directly from the client-side (JSONP)
- They want OAuth integration to get at private data (RSS never specified an authentication mechanism, and many have resorted to secret hashes in URLs to hack around this)
Twitter is not solving all of these, but it is already solving many of them. It’s time we create an open version of the Twitter Stream API – one with a lot of meta-data and the ability to really extend the protocol.
PubSubHubbub is a nice attempt, but it doesn’t meet all the above needs, and the stuff that’s out there is very hard to set up. Also, it’s very content-centric. The standard talks about ‘content, subscriber and publisher’. What about data? Big pipes of data flowing everywhere? Is the temperature sensor in my fridge a publisher? Is the smart algorithm that’s processing that ‘content’ a consumer? It’s just too oldskool.
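For concreteness, this is roughly what the PubSubHubbub handshake boils down to on the subscriber side: a form-encoded POST of `hub.*` fields to the hub (the hub, topic and callback URLs below are placeholders, and I’m going from the 0.x spec as I recall it):

```python
from urllib.parse import urlencode

# Fields the subscriber POSTs to the hub to request a subscription.
params = {
    "hub.mode": "subscribe",                              # or "unsubscribe"
    "hub.topic": "http://example.com/feed.xml",           # the feed to follow
    "hub.callback": "http://subscriber.example.com/cb",   # where pings arrive
    "hub.verify": "async",                                # verification style
}
body = urlencode(params)
print(body)
```

Note the vocabulary baked into the protocol: a feed URL, a callback, a publisher pinging a hub. It assumes blog-shaped content end to end, which is exactly the content-centrism I mean.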
XMPP is also something I was very excited about a while back, but what happened to it? I thought it would propel us into the ‘active web’. Twitter completely ditched XMPP, and so have other non-nerds. I’ve tried doing stuff with XMPP too; it’s a horrible technology to work with if you’re not building an IM client.
I think there is a huge opportunity here to accommodate the needs described above (technology- and business-wise). Execution will be key, and that means getting over our infatuation with some of the standards out there that are practically undesirable. XMPP and PubSubHubbub look good on paper and the nerds love them, but we need real post-1999 open standards.