Continuous-integration testing for Intel graphics
Posted Oct 11, 2017 16:45 UTC (Wed) by tialaramex (subscriber, #21167)
In reply to: Continuous-integration testing for Intel graphics by jhoblitt
Parent article: Continuous-integration testing for Intel graphics
1. From very early, much earlier than for a comparable commercial system like NT, the Linux kernel's developers subsisted largely and in some cases entirely on dogfood. So all core systems that must work to have an environment in which you can edit text files, compile and link a large project and then ship it somewhere were tested de facto continuously by their developers. If you broke chown() then it didn't fail a unit test that would result in an email and public shame - it broke your computer, and you spent miserable hours figuring out what was wrong. If you broke the filesystem your files got trashed and you didn't have anything to show for it.
2. Almost all drivers and subsystems which aren't actively used by developers rot and die. In some ways this is "better" than a traditional CI because traditional CI causes a bias where the "loudest screams win" - effort may be expended on fixing something that failed unit tests even though it's actually not very important, and that's always effort which could have been directed at something which _is_ important. In Linux the developers were unavoidably focused on making their own computers actually work. When those computers had NE2000 ISA cards they made sure the driver for NE2000 ISA cards worked. Today they have Intel graphics chips.
So: lots of things broke, but relatively few of them were things people cared about.
Posted Oct 11, 2017 22:23 UTC (Wed) by roc (subscriber, #30627)
Every time my laptop freezes I feel a little bit grumpy about how i915 is the poster child for how Linux kernel graphics should be done.
Posted Oct 12, 2017 1:05 UTC (Thu) by ras (subscriber, #33059)
Me too. The headline prompted me to read the article. I was hoping to read a mea culpa from the Intel devs, along with how they were addressing their quality issues. Instead I got Intel devs pontificating about testing, and it does not sit well with me.
I can only hope that they are talking about some different driver. The i915 driver was terrible. In fact, maybe that driver's quality issues are what drove them to implement CI. In 4.12 it's not so bad - but it's taken 2 years(!) after the chip was released to get a stable driver. Their first 10 or so releases were so bad people were returning Dell laptops as unusable and subsequently ranting at Dell on every forum they could find. All the noise came from people running Windows - but I suspect only because the people running Linux knew who was really to blame, and trusted it would be fixed. I was one of those. But I never dreamed it would take so long.
Then there's whatever driver xbacklight depends on. Promised for 4.11, but still not delivered in 4.12. https://bugs.freedesktop.org/show_bug.cgi?id=96572#c11 That's been over 2 years(!).
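(For context: the kernel piece here is the backlight class driver. Tools like xbacklight ultimately depend on the driver registering the panel under /sys/class/backlight. The rough shape of that interface is sketched below in Python - a minimal illustration only, assuming a single registered device; the device name, e.g. intel_backlight, varies by driver, and writing the brightness file normally needs root or a suitable udev rule.)

    from pathlib import Path

    def set_brightness_percent(percent: int) -> None:
        # The backlight class driver exposes one directory per panel,
        # e.g. /sys/class/backlight/intel_backlight (name varies by driver).
        device = next(Path("/sys/class/backlight").iterdir())
        maximum = int((device / "max_brightness").read_text())
        # Scale the requested percentage into the device's raw range.
        (device / "brightness").write_text(str(maximum * percent // 100))

    set_brightness_percent(50)  # dim the panel to half brightness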
Posted Oct 12, 2017 7:32 UTC (Thu) by blackwood (guest, #44174)
If you read the article, it says clearly that 1 year ago the entire board was red, which is around 4.10/11. We're not blind idiots who can only pontificate; the reason we pontificate is that CI dug us out, in less than a year, of the huge hole we got into over 2-3 years of no testing at all due to reorg madness (you can't see all of that yet because development runs 4-6 months ahead of the latest release). So yeah, CI is pretty much the only way to get quality on track.
And yes, skl didn't work on those older kernels. Per CI it still didn't work well on 4.12, but at least it's better (4.14 should be actually good).
Posted Oct 12, 2017 7:46 UTC (Thu) by andreashappe (subscriber, #4810)
If there were a way of upvoting a comment, I would do that. Thanks for your work.
Posted Oct 12, 2017 7:48 UTC (Thu) by ras (subscriber, #33059)
A rational explanation. It also explains why 4.12 was a marked improvement. Thank $DEITY for that. Well, I guess I should be thanking you guys.
Sounds like you are back on track. It would be interesting to know how an engineering firm like Intel fell off it in the first place, but I guess that explanation will have to wait until someone moves on.
Posted Oct 13, 2017 20:12 UTC (Fri) by JFlorian (guest, #49650)
I just wish I could buy Intel graphics chipsets on add-in cards. The integrated video easily becomes too dated while the mainboard remains otherwise sufficient.
Posted Oct 14, 2017 10:08 UTC (Sat) by excors (subscriber, #95769)
Then you'd need to add gigabytes of dedicated VRAM to make it work as a discrete card. And in terms of performance the current highest-end Intel GPUs would still only compete with perhaps a $70 NVIDIA card, so it doesn't seem there's much opportunity for profit there.
Posted Oct 17, 2017 15:13 UTC (Tue) by JFlorian (guest, #49650)
The Nvidia cards I do buy are often in the $70 range. I don't play games so most anything is overkill.