There were other problems with the old writeback code, most notably when storage devices of varying throughput shared the same system.
The old writeback code traversed super blocks in order, skipping any that were currently congested, without regard to the throughput of the devices backing those supers. Recall that the old writeback code (pdflush) indiscriminately issued writes until the dirty-memory threshold was reached.
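Roughly, the old traversal looked something like the following sketch. This is illustrative Python, not the actual kernel code; the names (`Super`, `old_writeback`, `DIRTY_THRESHOLD`) are made up for the example, and the key behavior to notice is that a congested device is skipped outright while writes keep going to everyone else.

```python
# Toy model of the old writeback loop: walk supers in order, skip any
# whose backing device is "congested", and keep issuing writes until the
# global dirty count drops below the threshold. All names are hypothetical.

DIRTY_THRESHOLD = 100  # pages; arbitrary for the sketch


class Super:
    def __init__(self, name, congested, dirty_pages):
        self.name = name
        self.congested = congested
        self.dirty_pages = dirty_pages


def old_writeback(supers, total_dirty):
    """Clean pages until total_dirty falls to the threshold; return
    how many pages each super ended up cleaning."""
    cleaned = {}
    while total_dirty > DIRTY_THRESHOLD:
        progress = False
        for sb in supers:
            if sb.congested or sb.dirty_pages == 0:
                continue  # congested devices are skipped entirely
            sb.dirty_pages -= 1
            total_dirty -= 1
            cleaned[sb.name] = cleaned.get(sb.name, 0) + 1
            progress = True
            if total_dirty <= DIRTY_THRESHOLD:
                break
        if not progress:
            break  # everything is congested or already clean
    return cleaned
```

With one congested and one uncongested super, every single write lands on the uncongested device, which is exactly the imbalance described below.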
This could have led (and probably did lead) to performance penalties for applications referencing the *fast* devices, while consequently improving the performance of apps on the slow devices. It certainly led to unfairness issues with respect to who dirties memory and who cleans it.
Consider the following contrived example to illustrate the point. Two apps, both dirtying pages at the same rate: one app backed by a "fast" device, the other by a "slow" device. Both apps contribute to the dirty page count at the same rate, so pdflush and writeback kick in.
Since the slow device maintains its "congested" state longer (because it is slow), the fast device will eventually account for more page cleaning than the slow device.
This has two effects:
1) Dirty pages for the app on the slow device potentially stay in memory longer and have a better chance of being re-referenced without I/O.
2) Dirty pages for the app on the "fast" device are more likely to be written out, so re-referencing them requires an I/O.
So we wind up penalizing the app on the fast storage device. In theory at least. :-)
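The two-app scenario above can be simulated in a few lines. This is a toy model, not kernel behavior: the congestion probabilities and step counts are invented, and "congested" is modeled as a coin flip that simply skips the device for that round. The point it demonstrates is the same: with both apps dirtying at the same rate, the device that is congested less often does most of the cleaning.

```python
import random

# Toy simulation of the example: two apps dirty one page per step each;
# writeback skips a device whenever it reports "congested". The slow
# device is congested far more often, so the fast device cleans more.
# Probabilities are arbitrary assumptions for illustration.

def simulate(steps=10_000, fast_congested_prob=0.1, slow_congested_prob=0.9):
    cleaned = {"fast": 0, "slow": 0}
    dirty = {"fast": 0, "slow": 0}
    for _ in range(steps):
        # both apps dirty pages at the same rate
        dirty["fast"] += 1
        dirty["slow"] += 1
        # writeback visits each device, skipping it if congested
        for dev, p in (("fast", fast_congested_prob),
                       ("slow", slow_congested_prob)):
            if random.random() >= p and dirty[dev] > 0:
                dirty[dev] -= 1
                cleaned[dev] += 1
    return cleaned
```

Running this, the fast device ends up cleaning several times as many pages as the slow one, while the slow device's dirty pages pile up in memory, matching effects 1) and 2) above.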
I haven't looked at the per-BDI code, but with it, it becomes possible to apply fairness so that each device cleans its share of dirty pages. (Whether that is a good thing or not, I don't know; it just enables the capability.)
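As a rough sketch of what such fairness could look like (again, not the actual per-BDI implementation, which I haven't read): give each backing device a cleaning quota proportional to the dirty pages it backs, so a fast device is no longer forced to absorb the slow device's share of the work. The function name and proportional policy here are assumptions for illustration.

```python
# Hypothetical per-device fairness policy: split a cleaning target across
# backing devices in proportion to how many dirty pages each one backs.
def per_bdi_quotas(dirty_per_bdi, pages_to_clean):
    total = sum(dirty_per_bdi.values())
    if total == 0:
        return {bdi: 0 for bdi in dirty_per_bdi}
    return {bdi: pages_to_clean * d // total
            for bdi, d in dirty_per_bdi.items()}
```

Under this policy, two devices backing equal numbers of dirty pages each clean half the target, regardless of their relative speed.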