Well, I understand your arguments and agree with the "upstream" consideration. The offline scheduler approach is aggressive. When I offlined NAPI, I had to do some rewriting in dev.c.
> The lkml discussions with you stalled because you basically only
> repeated your arguments why you'd want to have the offline scheduler
> (which in itself is fine) - without showing much interest in improving
> existing kernel facilities or showing that they are unfixable (which is
> not fine if you want to enhance the upstream kernel).
In the case of cpusets, I argue that they do not provide complete partitioning. Meaning, I cannot ask for one packet from a 10 Gbps interface to be steered to processor X and another packet from the same interface to be steered to processor Y. Why should a flash video packet be handed to processor 7 if processor 7 is heavily busy with incoming FTP traffic?
To the best of my knowledge, a NAPI context is triggered by the first packet, which can arrive on any processor "in the affinity", i.e. any CPU allowed by the IRQ affinity mask.
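To be concrete about what I mean by "the affinity": that is the per-IRQ mask exposed in /proc/irq/<n>/smp_affinity. Below is a rough user-space sketch of pinning an interrupt that way; the IRQ number 42 and the hex mask 80 (CPU 7 only) are made-up example values, not anything from my patch.

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const char *irq  = argc > 1 ? argv[1] : "42";   /* hypothetical IRQ number */
    const char *mask = argc > 2 ? argv[2] : "80";   /* hex mask: bit 7 = CPU 7 only */
    char path[64];
    FILE *f;

    snprintf(path, sizeof(path), "/proc/irq/%s/smp_affinity", irq);

    f = fopen(path, "w");
    if (!f) {
        perror(path);
        return EXIT_FAILURE;
    }
    /* requires root; the kernel rejects a mask with no usable CPU */
    fprintf(f, "%s\n", mask);
    fclose(f);
    return EXIT_SUCCESS;
}

All this controls is which CPUs may take the interrupt; it says nothing about which flow the packet belongs to.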
But steering packets that way is possible by offlining NAPI: simply route packets by their service type, not by IRQ masking (a sketch follows below). And who cares about cache misses if I have an entire processor dedicated to that work?
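As a rough illustration of what I mean by routing on service type (this is a simplified user-space sketch, not the actual offline scheduler or dev.c code; the port-to-CPU table is invented for the example):

#include <stdio.h>
#include <stdint.h>

struct fake_pkt {
    uint16_t dst_port;   /* "service type" derived from the destination port */
    const char *label;
};

/* illustrative mapping: video flows to CPU 3, FTP to CPU 7, rest to CPU 0 */
static int cpu_for_service(uint16_t dst_port)
{
    switch (dst_port) {
    case 80:
    case 1935:          /* e.g. flash/RTMP video */
        return 3;
    case 20:
    case 21:            /* ftp */
        return 7;
    default:
        return 0;
    }
}

int main(void)
{
    struct fake_pkt pkts[] = {
        { 1935, "flash video" },
        { 20,   "ftp data"    },
        { 80,   "http"        },
    };

    for (unsigned i = 0; i < sizeof(pkts) / sizeof(pkts[0]); i++)
        printf("%-12s (port %u) -> cpu %d\n",
               pkts[i].label, (unsigned)pkts[i].dst_port,
               cpu_for_service(pkts[i].dst_port));
    return 0;
}

The point is that the steering decision looks at the flow itself, not at which CPU happened to field the interrupt.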
But you are correct that I haven't replied with technical details; I just posted the link to the essay.
What is the correct way to isolate a processor? What are the restrictions? What are the requirements?