A couple of points I thought of in reading this. First is the keywords issue:
"one might be able to filter by specific companies, or licenses, or by topics like distributions and development. This raises interesting questions for journalists as well as developers and publishers: The topics that are chosen as filters can shape the reader's interaction with a story"
Well yes. One can view a public library's catalog this way, too. Cataloging terms are already a huge industry; take a look at Google's results for the Anglo-American Cataloguing Rules. Your subject terms determine which part of the catalog you see, how narrow or broad a slice, and so on. I don't know whether the developers were aware of this structural analog, but it seems like a clear isomorphism to me.
And I see here a change in what has until now been a key pain in the butt for me personally. I use Google Alerts to have news stories filtered to me in a daily digest. But if I'm on the road for a couple of weeks, as often happens, I may be separated from my email or web access, and coming back home I've often found that the link in my email no longer works. The most common news model seems to have been to post a story for a relatively short window, several weeks, say, and if you want access after that you need to pay for archive acce$$. Nothing especially wrong with that as a business model (I don't know whether it works; I'm inclined to doubt it), but it doesn't work well for me.

Now, though, the stories seem intentionally designed to be persistent. That's better by far for me: if this model becomes pervasive, I can expect not only to follow up on the sequels to the original story but to read the whole timeline and any content that catches my eye.