Next steps for kernel workflow improvement
Posted Nov 1, 2019 17:44 UTC (Fri)
by q_q_p_p (guest, #131113)
Parent article: Next steps for kernel workflow improvement
Posted Nov 1, 2019 19:20 UTC (Fri)
by mathstuf (subscriber, #69389)
[Link] (1 response)
Posted Nov 6, 2019 5:00 UTC (Wed)
by marcH (subscriber, #57642)
[Link]
This is one of the main reasons absolutely every code review solution (and every "firehose" solution for that matter) relies on some sort of *database* featuring all the typical bells and whistles: [live] queries, cross-references, statistics, [CI] triggers, filters, notifications, authentication, etc.
The web vs CLI question is important but secondary, it's "just" the interface and many code review solutions offer both to some degree.
Now, what is exciting here are the allusions to some _distributed_ database model. Who knows, this could revolutionize code reviews the way BitKeeper decentralized and revolutionized version control...?
Next: distributed bug tracking? OK, maybe that wouldn't be useful.
Posted Nov 1, 2019 20:11 UTC (Fri)
by logang (subscriber, #127618)
[Link] (3 responses)
My vague ideas for features in git would be:
* Support the entire flow for sending git patches inside git itself. This means branches need first-class ways of storing cover letters, versions, recipient lists, etc. Instead of needing to do format-patch, write the cover letter, figure out the recipient list, notice a mistake, format-patch again, copy over the cover letter, and send, it would be nice if git just stored all of this with the branch and all you needed to do was 'git send' when it's all ready (a rough sketch of this follows the list).
* Support for easily importing patchsets from a mailbox into branches, with the cover letter and recipient lists. (Obviously this will need to solve the base-tree information problem first, possibly by including public repos that already have the base commits with the patches).
* Support for reviewing a patchset inside git itself and having the review comments sent via email to everyone on the recipient list and author, etc.
* Support for branch queues: if people are now importing tons of branches into their repos from their mailboxes, then they need some way of organizing these branches and determining which ones need attention next.
* If the above features start being used by a majority, maybe then git could start to allow different transports other than email. So imagine a .git_maintainers file that contains a mapping of email addresses to desired transports. If a recipient's address isn't in this file, it simply falls back to email. A new transport might simply be that instead of emailing the patches, they get pushed to a specified branch queue in a world-writable git repo. Sadly, this likely means that git will need to support some spam mitigations too.
* And once there's a majority using this flow, adding structured data or tags from CI bots should be a bit easier because it's just a matter of changing the tooling everyone already uses.
* After that, interested parties could probably write a github-like web service that just provides a new front end for git's existing features. Then maintainers that want this could set it up for themselves, or kernel.org could offer this for maintainers that want it.
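
To make the contrast concrete, here is a rough sketch. The first half uses only commands that exist today (format-patch, get_maintainer.pl, send-email, am); everything after the "Hypothetical" comment — the branch.*.coverLetter/sendTo settings, 'git send', and the .git_maintainers file — is made up purely to illustrate the ideas above, not anything git actually supports:

    # Today: every step is manual and repeated for every new version of the series.
    git format-patch --cover-letter -v2 -o outgoing/ origin/master..my-feature
    $EDITOR outgoing/v2-0000-cover-letter.patch        # rewrite the cover letter again
    ./scripts/get_maintainer.pl outgoing/*.patch       # work out the recipient list again
    git send-email --to="$TO" --cc="$CC" outgoing/*.patch

    # Importing someone else's series from a mailbox is already possible,
    # but the cover letter and recipient list are lost on the way in:
    git am --3way series.mbox

    # Hypothetical: the branch itself carries the metadata, so resending is one step.
    git config branch.my-feature.coverLetter .git/cover-letters/my-feature
    git config branch.my-feature.sendTo maintainer@example.org
    git config branch.my-feature.sendCc some-list@vger.kernel.org
    git send my-feature        # bump the version, attach the cover letter, mail everyone

    # Hypothetical .git_maintainers: map an address to a preferred transport,
    # falling back to plain email for anyone not listed.
    #   maintainer@example.org        push ssh://git.example.org/queues/maintainer.git
    #   some-list@vger.kernel.org     email

The appealing property of the .git_maintainers idea is the fallback: anyone not listed keeps receiving plain email, so the flow could be adopted one maintainer at a time.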
Posted Nov 3, 2019 21:14 UTC (Sun)
by rodgerd (guest, #58896)
[Link] (2 responses)
It's a pity Fossil isn't better known; it already seems to have solved a lot of these problems.
Posted Nov 4, 2019 14:17 UTC (Mon)
by mathstuf (subscriber, #69389)
[Link]
For those curious, I care because we vendor sqlite; our vendoring process is based on using git repositories for patch tracking before we import the code, and we then enforce that all changes made for vendoring are tracked in that repository. Inb4 "vendoring is bad": there's an option to use an existing sqlite, but… Windows.
Posted Nov 4, 2019 17:26 UTC (Mon)
by logang (subscriber, #127618)
[Link]
Frankly, I think the Fossil model is not useful for most open-source projects. They don't really have a convincing story for drive-by contributions or for scaling a community, and they pretty much state outright that it would not be suitable for the kernel development model:
>The Linux kernel has a far bigger developer community than that of SQLite: there are thousands and thousands of contributors to Linux, most of whom do not know each others names. These thousands are responsible for producing roughly 89⨉ more code than is in SQLite. (10.7 MLOC vs. 0.12 MLOC according to SLOCCount.) The Linux kernel and its development process were already uncommonly large back in 2005 when Git was designed, specifically to support the consequences of having such a large set of developers working on such a large code base.
>95% of the code in SQLite comes from just four programmers, and 64% of it is from the lead developer alone. The SQLite developers know each other well and interact daily. Fossil was designed for this development model.
Posted Nov 3, 2019 10:23 UTC (Sun)
by daniels (subscriber, #16193)
[Link]
GitLab at least has a comprehensive API which can be used to pull the feed of recent events, create or modify merge requests and comments on them, and so on, from the client of your choice. There are standalone CLI clients and rich bindings for whichever language you care to use it from. That's true of most web-based services created in the last 5-10 years.
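
As a minimal illustration — the project ID 1234, merge request 42, and the token variable are placeholders — the same things the web UI does can be driven from a shell with nothing but curl against GitLab's v4 REST API:

    # Recent events for the authenticated user.
    curl --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
         "https://gitlab.com/api/v4/events"

    # Open merge requests for a project.
    curl --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
         "https://gitlab.com/api/v4/projects/1234/merge_requests?state=opened"

    # Add a review comment to merge request !42 in that project.
    curl --request POST --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
         --data-urlencode "body=Looks good, but please split the second patch." \
         "https://gitlab.com/api/v4/projects/1234/merge_requests/42/notes"

The standalone CLI clients and language bindings are essentially wrappers around these same endpoints.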