From: Christoph Lameter <email@example.com>
To: Tejun Heo <firstname.lastname@example.org>
Subject: [this_cpu_xx V8 00/16] Per cpu atomics in core allocators, cleanup and more this_cpu_ops
Date: Fri, 18 Dec 2009 16:26:17 -0600
Leftovers from the earlier patchset rediffed to 2.6.33-rc1.
Mostly applications of per cpu counters to core components.
After this patchset there will be only one user of local_t left: Mathieu's
trace ring buffer. I added some patches that define additional this_cpu ops
in order to help Mathieu with making the trace ring buffer use this_cpu ops.
These have barely been tested (the kernel boots fine on 32 and 64 bit, but
there is no user of these operations yet).
- Fix issue in slub patch
- Fix issue in modules patch
- Rediff page allocator patch
- Provide new this_cpu ops needed for ringbuffer [RFC state]
- Drop patches merged in 2.6.33 merge cycle
- Drop risky slub patches
- Drop patches merged by Tejun
- Drop irqless slub fastpath for now
- Patches against Tejun's percpu for-next branch
- Avoid setup_per_cpu_area() modifications and fold the remainder of the
  patch into the page allocator patch
- Irq disable / per cpu ptr fixes for page allocator patch
- Fix various macro definitions
- Provide experimental percpu based fastpath that does not disable
  interrupts for SLUB
- Available via git tree against latest upstream from
- Rework SLUB per cpu operations; get rid of dynamic DMA slab creation
- Create fallback framework so that 64 bit ops on 32 bit platforms
  can fall back to the use of preempt or interrupt disable; 64 bit
  platforms can use 64 bit atomic per cpu ops
- Various minor fixes
- Add SLUB conversion
- Add page allocator conversion
- Patch against the git tree of today
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to email@example.com
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/