DRM-CI: A GitLab-CI pipeline for Linux kernel testing (Collabora Blog)
The roadmap for DRM-CI includes enabling other devices, incorporating additional tests like kselftests, adding support for the vgem driver, and implementing further automation. DRM-CI builds upon the groundwork laid by Mesa3D CI, including its GitLab YAML files and most of its setup, fostering collaboration and mutual strengthening. [...] Adapting the DRM-CI pipeline to other subsystems is feasible with a few modifications. The primary consideration is setting up dedicated GitLab-CI runners, since Freedesktop's infrastructure is meant only for graphics.
In light of this, our team is developing a versatile and user-friendly GitLab-CI pipeline. This new pipeline is envisioned as a flexible interface for kernel maintainers and developers, one that can evolve to connect with different test environments and that can also be hooked into CI systems such as KernelCI. This approach aims to simplify the integration process, making GitLab-CI more accessible and beneficial to a broader range of developers.
Posted Feb 10, 2024 17:02 UTC (Sat)
by gfernandes (subscriber, #119910)
[Link] (13 responses)
Maybe I'm dense. But to me, a CI tool must provide an excellent build summary report, including detailed test information (run time, failure traces, etc.) and a detailed coverage report, including increase/decrease percentages.
The GitLab summary report is so basic, it beggars belief.
If you have a budget, use TeamCity. If you don't, use Jenkins. But GitLab? I don't know. Not for me.
Posted Feb 10, 2024 17:34 UTC (Sat)
by gfernandes (subscriber, #119910)
[Link]
Posted Feb 10, 2024 18:38 UTC (Sat)
by marcH (subscriber, #57642)
[Link] (2 responses)
We use both Jenkins and GitHub workflows. The latter is conceptually very similar to GitLab CI. Yes, it cannot do complex things, but EVERYONE in the project has a rough understanding of how it works.
There are 1.6 people in the entire project who understand our Jenkins. I'm the 0.05 in that, and I lost an entire day this week because everyone else was out. Not the first time.
Using Jenkins - which gives you all the rope you need to hang yourself, many times over - for simple things is like using C to write a parser that's not even in the critical path: a sure way to waste megatons of the rarest and most expensive thing: engineering time. Especially rare in open source.
Posted Feb 11, 2024 21:29 UTC (Sun)
by gfernandes (subscriber, #119910)
[Link] (1 responses)
A good enough understanding to make drive-by changes - and they DO!
Posted Feb 12, 2024 8:14 UTC (Mon)
by marcH (subscriber, #57642)
[Link]
You know it's not that simple, which means one-sentence long comments on this topic are pointless.
Posted Feb 10, 2024 22:44 UTC (Sat)
by stephanlachnit (subscriber, #151361)
[Link] (3 responses)
GitLab (free) has support for per-line coverage (+ overall coverage change), a summary of all run tests + how long they took + output on failure + named artifacts for failed jobs.
Besides that, GitLab has IMHO the best CI config (or at least the best-documented one).
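For reference, most of that comes from `artifacts:reports` in `.gitlab-ci.yml`; a minimal sketch (the test runner and the `logs/` path are purely illustrative):

```yaml
test:
  stage: test
  script:
    # pytest is only an example; anything that writes JUnit XML works.
    - pytest --junitxml=report.xml
  artifacts:
    when: always            # keep artifacts for failed jobs too
    paths:
      - logs/               # hypothetical directory holding the raw logs
    reports:
      junit: report.xml     # feeds the per-test summary (duration, failure output)
```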
Posted Feb 11, 2024 21:17 UTC (Sun)
by gfernandes (subscriber, #119910)
[Link] (2 responses)
Here are some of the problems inherent in GitLab:
1. Limited storage space, resulting in build/test log losses when run in containers.
2. A summary report that is pretty much useless for coverage analysis and, if you lose logs, for test-failure analysis.
Given the poor build result reporting, I don't see it as usable for CI. A lot of developer time is wasted downloading log files and analysing them in the IDE, for both test and coverage failures.
This is quite backwards. A CI/CD tool must solve this problem. Its purpose is not just build *automation*; it is also to *enable* quicker resolution of build problems. After all, GitLab does not automate the process of fixing test and coverage failures. Not yet, anyway...
Posted Feb 12, 2024 9:17 UTC (Mon)
by daniels (subscriber, #16193)
[Link] (1 responses)
Coverage analysis is also supported at line-of-source granularity.
Posted Feb 12, 2024 13:02 UTC (Mon)
by mathstuf (subscriber, #69389)
[Link]
Posted Feb 11, 2024 20:00 UTC (Sun)
by cyperpunks (subscriber, #39406)
[Link] (2 responses)
Posted Feb 12, 2024 10:56 UTC (Mon)
by taladar (subscriber, #68407)
[Link] (1 responses)
GitLab CI is absolutely terrible. You have to configure every little thing in every repository even if it is completely identical to the code doing the same thing in another repository.
The Job view does not auto-update its state so you have to refresh manually when you think a job might have started running.
Every stage runs inside separate containers which means it takes convoluted workarounds and duplicate work to get some information from one stage to the next.
Images take ages to build because caching is only available remotely via the registry, which makes the registry UI basically unusable: it is cluttered with all the intermediate layer images, you constantly run out of space, and the entire thing grinds to a halt for all jobs.
Not to mention that jobs doing roughly comparable things take about 10 times as long in GitLab CI as in our Jenkins, even in the best case.
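To illustrate the "convoluted workaround" for passing data between stages: you end up declaring job artifacts in one job and pulling them in with `needs` in the next (paths here are made up):

```yaml
build:
  stage: build
  script:
    - make -j"$(nproc)"
  artifacts:
    paths:
      - build/              # handed to later stages as a job artifact

test:
  stage: test
  needs: [build]            # downloads build's artifacts before the script runs
  script:
    - ./build/run-tests
```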
Posted Feb 12, 2024 12:58 UTC (Mon)
by mathstuf (subscriber, #69389)
[Link]
There are templating features (though I've not used them): https://gitlab.com/gitlab-org/gitlab/-/tree/master/lib/gi...
However, I prefer to have the CI configuration in the repository as much as possible in order to allow building $arbitrary_commit 6 months down the line. Pinning the template repository to a commit hash would be preferable, but then I'd rather just see the diff in the MR anyways so you may as well copy files (ideally with some tooling to track it or perform the copying).
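For completeness, the cross-repository reuse I'm thinking of looks roughly like this (project, ref, and file names are made up for illustration):

```yaml
include:
  - project: my-group/ci-templates   # hypothetical shared-template repository
    ref: 8c1f2e9                     # pin to a tag or commit so old branches keep building
    file: /templates/build.yml

build:
  extends: .build-template           # assumes the included file defines this hidden job
```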
> The Job view does not auto-update its state so you have to refresh manually when you think a job might have started running.
It will refresh all jobs when any job is noticed to have changed state (e.g., you start or cancel a job). I understand the want, but I also appreciate not polling (I can have a few dozen pipeline pages open at once).
> Images take ages to build because caching is only available remotely via the registry, which makes the registry UI basically unusable: it is cluttered with all the intermediate layer images, you constantly run out of space, and the entire thing grinds to a halt for all jobs.
Not sure what you're referring to here, but maybe I'm not building images the same way (we use `podman` in Docker containers to build some images in CI).
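If the issue is reusing layers from the registry, the plain-Docker pattern I've seen looks roughly like this (the `cache` tag name is made up; `$CI_REGISTRY_IMAGE` is GitLab's predefined variable for the project registry):

```yaml
build-image:
  stage: build
  script:
    # Seed the layer cache from the last image pushed to the project registry;
    # the pull is allowed to fail on the very first run.
    - docker pull "$CI_REGISTRY_IMAGE:cache" || true
    - docker build --cache-from "$CI_REGISTRY_IMAGE:cache" -t "$CI_REGISTRY_IMAGE:cache" .
    - docker push "$CI_REGISTRY_IMAGE:cache"
```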
> Not to mention that jobs doing roughly comparable things take about 10 times as long in GitLab CI as in our Jenkins, even in the best case.
AFAIK, Jenkins requires out-of-repository control information. We had this with buildbot, but once we migrated to GitLab-CI (we don't use the CD part so much), allowing developers to tweak the configuration via an MR made maintenance a lot easier (not to mention that branch-specific things were stored in the branch itself). While GitLab-CI does technically have some out-of-repository information too (e.g., where the runner's local Redis store is), these end up being pretty well partitioned for our projects.
Posted Feb 12, 2024 12:29 UTC (Mon)
by mathstuf (subscriber, #69389)
[Link]
It does. You need to feed it something like JUnit or `coverage.xml` though; it's not going to magically understand every test harness for every language/framework out there on its own.
See test reports here: https://gitlab.kitware.com/utils/rust-ghostflow/-/pipelin...
Code coverage graph is here: https://gitlab.kitware.com/utils/rust-ghostflow/-/graphs/...
Merge requests show coverage changes: https://gitlab.kitware.com/utils/rust-gitlab/-/merge_requ...
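The coverage side is wired up the same way; a minimal sketch (the regex, script name, and report path are illustrative, and the `coverage_report` syntax assumes a reasonably recent GitLab):

```yaml
coverage-test:
  stage: test
  script:
    # Any tool that emits Cobertura XML works; this command is a placeholder.
    - ./run-tests-with-coverage.sh
  coverage: '/^TOTAL.*\s+(\d+\.\d+%)$/'   # pulls the overall percentage from the job log
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura        # enables per-line coverage in MR diffs
        path: coverage.xml
```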
Posted Feb 13, 2024 21:20 UTC (Tue)
by robclark (subscriber, #74945)
[Link]
By _far_ the most valuable thing about GitLab CI is re-using the existing Mesa test farms and related infrastructure for running tests on actual hardware. A nice shiny results summary (which GitLab can do, AFAIU) isn't hugely useful; what is useful is knowing pass/fail/flake and having enough artifacts to debug when necessary.