
DRM-CI: A GitLab-CI pipeline for Linux kernel testing (Collabora Blog)

Over on the Collabora blog, Helen Koike writes about the DRM-CI project for running automated continuous integration (CI) tests on multiple graphics devices in several different labs. It uses the IGT GPU tools for testing, though there are plans to expand:
The roadmap for DRM-CI includes enabling other devices, incorporating additional tests like kselftests, adding support for vgem driver, and implementing further automations. DRM-CI builds upon the groundwork laid by Mesa3D CI, including its GitLab YAML files and most of its setup, fostering collaboration and mutual strengthening.

[...] Adapting the DRM-CI pipeline to other subsystems is feasible with a few modifications. The primary consideration is setting up dedicated GitLab-CI runners since Freedesktop's infrastructure is meant only for graphics.

In light of this, our team is developing a versatile and user-friendly GitLab-CI pipeline. This new pipeline is envisioned to function as a flexible interface for kernel maintainers and developers that can be evolved to connect with different test environments that can also be hooked with CI systems such as KernelCI. This approach aims to simplify the integration process, making GitLab-CI more accessible and beneficial to a broader range of developers.




DRM-CI: A GitLab-CI pipeline for Linux kernel testing (Collabora Blog)

Posted Feb 10, 2024 17:02 UTC (Sat) by gfernandes (subscriber, #119910) [Link] (13 responses)

I don't get GitLab CI/CD.

Maybe I'm dense, but to me a CI tool must provide an excellent build summary report, including detailed test information (run time, failure traces, etc.) and a detailed coverage report, including increase/decrease percentages.

The GitLab summary report is so basic, it beggars belief.

If you have a budget, use TeamCity. If you don't, use Jenkins. But GitLab? I don't know. Not for me.

DRM-CI: A GitLab-CI pipeline for Linux kernel testing (Collabora Blog)

Posted Feb 10, 2024 18:38 UTC (Sat) by marcH (subscriber, #57642) [Link] (2 responses)

TeamCity is expensive and closed source, so, as you said, let's forget about it.

We use both Jenkins and GitHub workflows. The latter is conceptually very similar to GitLab CI. Yes, it cannot do complex things. But EVERYONE in the project has a rough understanding of how it works, a good enough understanding to make drive-by changes, and they DO!

There are 1.6 people in the entire project who understand our Jenkins. I'm the 0.05 in that, and I lost an entire day this week because everyone else was out. Not the first time.

Using Jenkins, which gives you all the rope you need to hang yourself many times over, for simple things is like using C to write a parser that's not even in the critical path: a sure way to waste megatons of the rarest and most expensive thing, engineering time. Especially scarce in open source.

DRM-CI: A GitLab-CI pipeline for Linux kernel testing (Collabora Blog)

Posted Feb 11, 2024 21:29 UTC (Sun) by gfernandes (subscriber, #119910) [Link] (1 response)

_GitLab_ isn't particularly cheap either...

DRM-CI: A GitLab-CI pipeline for Linux kernel testing (Collabora Blog)

Posted Feb 12, 2024 8:14 UTC (Mon) by marcH (subscriber, #57642) [Link]

> _GitLab_ isn't particularly cheap either...

You know it's not that simple, which means one-sentence long comments on this topic are pointless.

DRM-CI: A GitLab-CI pipeline for Linux kernel testing (Collabora Blog)

Posted Feb 10, 2024 22:44 UTC (Sat) by stephanlachnit (subscriber, #151361) [Link] (3 responses)

I think this is not true.

GitLab (free) has support for per-line coverage (plus overall coverage change) and a summary of all run tests, how long they took, their output on failure, and named artifacts for failed jobs.

Besides that, GitLab has, IMHO, the best CI config format (or at least the best-documented one).

DRM-CI: A GitLab-CI pipeline for Linux kernel testing (Collabora Blog)

Posted Feb 11, 2024 21:17 UTC (Sun) by gfernandes (subscriber, #119910) [Link] (2 responses)

I know this is true. I have used all three.

Here are some of the problems inherent in GitLab:
1. Limited storage space, resulting in build/test log losses when run in containers.
2. A summary report that is pretty much useless for coverage analyses and, if you lose logs, for test failure analysis.

Given the poor build result reporting, I don't see it as usable for CI. A lot of dev time is wasted downloading log files and analysing them in the IDE for both test and coverage failures.

This is quite backwards. A CI/CD tool must solve this problem. Its purpose is not just build *automation*; it is also to *enable* quicker resolution of build problems. After all, GitLab does not automate the process of fixing test and coverage failures. Not yet, anyway...

DRM-CI: A GitLab-CI pipeline for Linux kernel testing (Collabora Blog)

Posted Feb 12, 2024 9:17 UTC (Mon) by daniels (subscriber, #16193) [Link] (1 response)

There is no inherent limit on these. Job log output is limited to 500kB by default, and you can also use artifacts to store totally arbitrary amounts of data. Personally I find scrolling through megabytes of logs to try to find a failure pretty terrible, and would much rather the log show a summary and link to the artifact with more detailed logs.

Coverage analysis is also supported at line-of-source granularity.
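The "short log plus detailed artifacts" pattern described above can be expressed directly in `.gitlab-ci.yml`; a minimal sketch, where the job name, script, and file names are purely illustrative:

```yaml
test:
  script:
    # Keep the job log short; capture the full output to a file
    # and echo only the tail on failure.
    - ./run-tests.sh > full-test.log 2>&1 || { tail -n 50 full-test.log; exit 1; }
  artifacts:
    when: on_failure      # keep the detailed log only for failed jobs
    paths:
      - full-test.log
    expire_in: 1 week     # avoid unbounded artifact storage growth
```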

DRM-CI: A GitLab-CI pipeline for Linux kernel testing (Collabora Blog)

Posted Feb 12, 2024 13:02 UTC (Mon) by mathstuf (subscriber, #69389) [Link]

FWIW, I have a plan to write a tool (issue tracking[1]) that can watch a CI fleet to look for problems. It is geared towards looking for hardware-related problems (e.g., a slow machine or scheduling imbalances), but I do want to also have it collect failure logs, test reports, etc. to feed into some kind of classifier. My main purpose is to weed out machine related issues (e.g., "Windows didn't clean up processes cleanly *again*") so that "real" failures are more evident. There's nothing stopping further analysis of those failures (if anyone else wants to join in to develop such a tool).

[1] https://gitlab.kitware.com/utils/ci-monitoring/-/issues

DRM-CI: A GitLab-CI pipeline for Linux kernel testing (Collabora Blog)

Posted Feb 11, 2024 20:00 UTC (Sun) by cyperpunks (subscriber, #39406) [Link] (2 responses)

Give it some time and you will see the light and wonder why you did not switch years ago :-)

DRM-CI: A GitLab-CI pipeline for Linux kernel testing (Collabora Blog)

Posted Feb 12, 2024 10:56 UTC (Mon) by taladar (subscriber, #68407) [Link] (1 responses)

We have been using Jenkins with Job DSL for years and now use GitLab CI with one customer who insists on it as part of having another third party provider run their infrastructure.

GitLab CI is absolutely terrible. You have to configure every little thing in every repository, even if it is completely identical to the code doing the same thing in another repository.

The job view does not auto-update its state, so you have to refresh manually when you think a job might have started running.

Every stage runs inside separate containers, which means it takes convoluted workarounds and duplicated work to get information from one stage to the next.
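For reference, the standard workaround for moving data between jobs in GitLab CI is artifacts, including the `dotenv` report type, which turns a file of KEY=value lines into variables for later jobs; a minimal sketch, with job names and values illustrative:

```yaml
build:
  stage: build
  script:
    - echo "BUILD_VERSION=1.2.3" >> build.env
  artifacts:
    reports:
      dotenv: build.env   # exported as job variables to later stages

test:
  stage: test
  script:
    - echo "testing version $BUILD_VERSION"   # set by the dotenv report above
```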

Images take ages to build because caching is only available remotely via the registry. That makes the registry UI basically unusable, because it is cluttered with all the intermediate layer images; you constantly run out of space, and the entire thing grinds to a halt for all jobs.

Not to mention that jobs doing roughly comparable things take about ten times as long in GitLab CI as in our Jenkins, even in the best case.

DRM-CI: A GitLab-CI pipeline for Linux kernel testing (Collabora Blog)

Posted Feb 12, 2024 12:58 UTC (Mon) by mathstuf (subscriber, #69389) [Link]

> You have to configure every little thing in every repository even if it is completely identical to the code doing the same thing in another repository.

There are templating features (though I've not used them): https://gitlab.com/gitlab-org/gitlab/-/tree/master/lib/gi...

However, I prefer to have the CI configuration in the repository as much as possible, in order to allow building $arbitrary_commit six months down the line. Pinning the template repository to a commit hash would be preferable, but then I'd rather just see the diff in the MR anyway, so you may as well copy files (ideally with some tooling to track or perform the copying).
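For what it's worth, GitLab's `include` keyword can pull shared CI definitions from another project, and it accepts a `ref` that can be pinned to a tag or commit as discussed above; a minimal sketch, where the project, file, and template names are hypothetical:

```yaml
include:
  - project: 'my-group/ci-templates'      # hypothetical shared-templates repo
    ref: 'v1.2.0'                         # pin to a tag (or a commit SHA)
    file: '/templates/build.yml'

build:
  extends: .shared-build                  # hypothetical job template defined in build.yml
```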

> The Job view does not auto-update its state so you have to refresh manually when you think a job might have started running.

It will refresh all jobs when any job is noticed to have changed state (e.g., you start or cancel a job). I understand the want, but I also appreciate not polling (I can have a few dozen pipeline pages open at once).

> Images take ages to build because caching is only available via the registry remotely which makes the registry UI basically unusable because it is cluttered with all the intermediate layer images and you constantly run out of space and the entire thing grinds to a halt for all jobs.

Not sure what you're referring to here, but maybe I'm not building images the same way (we use `podman` in Docker containers to build some images in CI).

> Not to mention that jobs doing roughly comparable things in GitLab CI to our Jenkins run about 10 times as long as the best case.

AFAIK, Jenkins requires out-of-repository control information. We had this with buildbot, but once we migrated to GitLab-CI (we don't use the CD part so much), allowing developers to tweak the configuration via an MR made maintenance a lot easier (not to mention that branch-specific things were stored in the branch itself). While GitLab-CI does technically have some out-of-repository information too (e.g., where the runner's local Redis store is), these end up being pretty well partitioned for our projects.

DRM-CI: A GitLab-CI pipeline for Linux kernel testing (Collabora Blog)

Posted Feb 12, 2024 12:29 UTC (Mon) by mathstuf (subscriber, #69389) [Link]

> But to me, a CI tool must provide an excellent build summary report, including detailed test information (run time, failure traces, etc.) and a detailed coverage report, including increase/decrease percentages.

It does. You need to feed it something like JUnit or `coverage.xml` though; it's not going to magically understand every test harness for every language/framework out there on its own.
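Concretely, the wiring is a couple of `artifacts:reports` entries in `.gitlab-ci.yml`; a minimal sketch for a Python project, with the tool choice, coverage regex, and file names illustrative:

```yaml
test:
  script:
    - pytest --junitxml=report.xml --cov --cov-report=xml
  coverage: '/TOTAL.*\s+(\d+%)/'          # regex pulling total coverage from the job log
  artifacts:
    when: always                          # upload reports for failed runs too
    reports:
      junit: report.xml                   # per-test results shown in the MR widget
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml                # per-line coverage shown in the MR diff
```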

See test reports here: https://gitlab.kitware.com/utils/rust-ghostflow/-/pipelin...

Code coverage graph is here: https://gitlab.kitware.com/utils/rust-ghostflow/-/graphs/...

Merge requests show coverage changes: https://gitlab.kitware.com/utils/rust-gitlab/-/merge_requ...

DRM-CI: A GitLab-CI pipeline for Linux kernel testing (Collabora Blog)

Posted Feb 13, 2024 21:20 UTC (Tue) by robclark (subscriber, #74945) [Link]

> Maybe I'm dense, but to me a CI tool must provide an excellent build summary report, including detailed test information (run time, failure traces, etc.) and a detailed coverage report, including increase/decrease percentages.

By _far_ the most valuable thing about GitLab CI is re-using the existing Mesa test farms and related infrastructure for running tests on actual hardware. A nice shiny results summary (which GitLab can do, AFAIU) isn't hugely useful; what matters is knowing pass/fail/flake and having enough artifacts to debug when necessary.


Copyright © 2024, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds