OK, one big difference I can see right up front: this would put piracy for the masses back to being semi-anonymous.
If you request "hitmovie.avi" over BitTorrent, your IP address is instantly visible in the torrent swarm, and these days your ISP gets a C&D within hours. Under this scheme you'd hit your ISP's CCNx cache for it, and only that cache would have a record of who asked. Even that record wouldn't be conclusive, since any node can (and would) also act as a cache; the only difference between nodes is the size and connectivity of the cache.
This moves things back toward the more broadcast-like model of Usenet. Imagine a Usenet server that could instantly pull in a group whenever any user or downstream server read it.
Another (weak) comparison to existing things: a BitTorrent client that added torrents in the background. You start it up to download debian.iso, and as it peers with people on that torrent, it also pitches in on any other torrents those peers are working on.
What confuses me is the insistence on avoiding what will eventually be required to make this work in the real world: a canonical URL field indicating where the content can be obtained if all attempts to retrieve a cached copy fail. That way, at some point a boss-level CCNx server can realize it has nowhere else to look and simply go there to retrieve a copy. Otherwise the first request for any piece of content is going to incur real latency, and likewise for rarely accessed content.
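To make the idea concrete, here's a toy sketch of that fallback chain. Everything here is hypothetical (the `CachingNode` class, the dict standing in for an origin server, the names); real CCNx interest forwarding is far more involved. The point is just the lookup order: local cache, then upstream cache, then the canonical URL as a last resort, with each node caching the answer on the way back down.

```python
# Toy "origin" standing in for the server behind a canonical URL.
ORIGIN_SERVERS = {
    "http://example.org/pub": {"/debian/debian.iso": b"iso bytes"},
}

class CachingNode:
    """Hypothetical cache node: serves locally, else asks upstream,
    else (at the top of the chain) falls back to the canonical URL."""

    def __init__(self, upstream=None):
        self.store = {}           # content name -> data
        self.upstream = upstream  # next node to ask on a miss

    def fetch(self, name, canonical_url=None):
        # 1. Serve from the local cache if we already have the content.
        if name in self.store:
            return self.store[name]
        # 2. Otherwise ask the next cache up the chain.
        if self.upstream is not None:
            data = self.upstream.fetch(name, canonical_url)
        # 3. Boss-level server: nowhere else to look, so go to the
        #    canonical URL (modeled here as a plain dict lookup).
        elif canonical_url is not None:
            data = ORIGIN_SERVERS[canonical_url][name]
        else:
            raise KeyError(name)
        self.store[name] = data   # cache on the way back down
        return data

top = CachingNode()               # boss-level server
isp = CachingNode(upstream=top)   # your ISP's cache

# First request walks to the origin; repeats hit the ISP cache.
data = isp.fetch("/debian/debian.iso",
                 canonical_url="http://example.org/pub")
```

After the first fetch, both caches hold a copy, so the "latency" cost is paid exactly once per cache chain.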