Welcome to 2023
The community will see a spike in AI-generated material in the coming year. Machine-learning systems can now crank out both code and text that can look convincing. It seems unavoidable that people will use them to generate patches, documentation, mailing-list polemics, forum answers, and more. It could become difficult to tell how much human-generated content a given submission contains.
Perhaps this flood of content will prove beneficial — it could increase our development pace, bring about better documentation, and provide improved help to our users. But that outcome does not seem highly likely in the near future. Instead, we're likely to see code submissions from "developers" who do not understand what they are posting; this code could contain no end of bugs and, potentially, license violations. Cut-and-paste programming has long been a problem throughout this industry. It is far from clear that automating the cutting and pasting is going to improve the situation.
AI-generated text has its own challenges. Our mailing lists and forum sites do not lack for people trying to appear authoritative on subjects they do not really understand; how many more will show up when it is easy to get a machine-learning system to produce plausible text with little effort? Even the most ardent believers in the "last post wins" approach to mailing-list discussions will get tired and shut up eventually; automated systems have no such limits. How long until we have a discussion on, say, init systems that is sustained entirely by bots?
As a community we are going to have to come up with defenses against abuses of this sort. It would be good to start in 2023.
New kernel functionality written in Rust will be proposed for inclusion into the mainline. While the initial support for Rust kernel code landed in the 6.1 kernel, it was far short of what is needed to add any interesting functionality to the kernel. As the support infrastructure is built up in coming releases, though, it will become possible to write a useful module that can be built for a mainline kernel. A number of interesting modules exist now and others are in the works; they just need the kernel to provide the APIs they depend on.
Pushing a module written in Rust for the mainline seems almost certain to spark a significant discussion. While many kernel developers are enthusiastic about the potential of Rust, there are others who are, at best, unconvinced. This latter group has gone quiet in recent times, presumably waiting to see how things play out. After all, as Linus Torvalds has said, the current Rust code is an experiment; if that experiment does not go well, the code can be taken out again.
The merging of a Rust module that people will actually use will be a tipping point, though. Once this code is merged, taking it back out would create the most obvious sort of regression; that, of course, is something that the kernel community goes far out of its way to avoid. So the merging of user-visible functionality written in Rust will mark the point where the Rust code can no longer just be torn out of the kernel; it will be a statement that the experiment has succeeded.
Anybody who is still unsure of the benefit of Rust support in the kernel will have to speak out before that happens, and some of them surely will. Reaching a consensus may take some time, to put it lightly. So, while it seems likely that this discussion will begin in 2023, it is far less likely that any user-visible functionality written in Rust will actually be merged this year.
It will be a make-or-break year for distributed social networking. The events at Twitter have highlighted the hazards of relying on centralized, proprietary platforms, and have put the spotlight on alternatives like Mastodon. This may just be the opportunity that was needed to restore some of the early vision of the net and give us all better control over our communications.
A flood of new users is also going to expose all of the weaknesses, vulnerabilities, and scalability problems in the current solutions, though. Putting up an instance running Mastodon is easy enough; managing that instance in the face of an onslaught of users, not all of whom have good intentions, is rather less so. There is going to have to be a lot of fast hacking and development of social norms if all of this is going to work.
Perhaps the outcome will be a future where we can communicate and collaborate without the need for massive corporations and erratic billionaires as intermediaries. Or maybe the open and distributed alternatives will prove unable to rise to the challenge quickly enough. People have a remarkable ability to shrug and return to abusive relationships, no matter how badly they have been burned in the past. The sad truth is that things may well play out that way this time too.
It will be the year of the immutable distribution. The classic Linux distribution follows in the footsteps of the Unix systems that came before; a suitably privileged user can change any aspect of the system at any time. For a few years now, though, we have seen movement away from that model toward a system that is, at least in part, immutable. Android is arguably the most prominent example of an immutable system; the Android core can only be changed by way of an update and reboot — and the previous version is still present in case the need arises.
Distributions like Fedora's Silverblue have been exploring the immutable space as well. The upcoming SUSE Adaptable Linux Platform (ALP) is based on an immutable core, as is the just-released, Ubuntu-based Vanilla OS system. It seems likely that others will follow, perhaps using the blueprint that was laid out at the 2022 Image-Based Linux Summit. By the end of the year, there may be a number of immutable alternatives available to play with — and to use for real work.
A distribution with an immutable core offers a higher level of security, since a malicious process or inattentive administrator is unable to corrupt the base system. It provides a stable, consistent base on which applications, often in the form of containers, can be run. Immutable systems naturally lend themselves to fallback schemes that can make recovery from a failed update easy to the point of being transparent. It's not surprising that this approach is attracting interest.
An immutable system makes a lot of sense for cloud deployments, where changes after an image is launched are unwelcome at best. They seem to work well for Android devices, where users are unable to (and generally don't want to) change the core system in the first place. It remains to be seen whether immutability will prove attractive on desktop systems where, arguably, a larger number of users want to be able to tinker with things and may not be interested in running a platform that makes life easier for vendors of proprietary software.
Let the year begin
LWN completes its first quarter-century at the end of January; what a long, strange trip it's been since we put out our first weekly edition in 1998. Thanks to you, our subscribers, we're still at it, which is a good thing, since the Linux and free-software communities are far from done. We'll still be here at the end of 2023, when the time will come to look back at these predictions and have a good laugh. In between now and then, we're looking forward to covering our community from within; to say that it will be interesting is the safest prediction of all.
Posted Jan 2, 2023 20:04 UTC (Mon)
by developer122 (guest, #152928)
[Link]
Immutable doesn't necessarily mean stable.
Posted Jan 2, 2023 20:44 UTC (Mon)
by flussence (guest, #85566)
[Link]
Posted Jan 2, 2023 21:11 UTC (Mon)
by q_q_p_p (guest, #131113)
[Link] (1 responses)
Hopefully decentralized git will also make a return, proprietary platforms like github will cease to exist, and we will see more sr.ht-like hubs :)
Posted Jan 2, 2023 22:20 UTC (Mon)
by summentier (guest, #100638)
[Link] (28 responses)
I am currently teaching C++ to physics undergrads. This makes me moderately miserable: teaching C++, to me at least, feels like trying to explain how precisely the "Zone" in Tarkovski's STALKER works.
At the same time I want to make sure the language I switch to does not fall into oblivion a decade from now. Tethering Rust to the largest open-source project in history would give me a lot more confidence to look into it. I conjecture that I am not alone in this strategy...
Posted Jan 3, 2023 4:49 UTC (Tue)
by himi (subscriber, #340)
[Link] (15 responses)
It's not that Rust /can't/ do that kind of thing, but much of what you do when writing computational code is fiddling around and experimenting with ideas, and the things that make Rust valuable for systems coding can get in the way of that kind of fiddling and experimentation. I've had some success in my (small, pretty simple) projects doing the experimental phase in straight Python and then once I had a successful model replacing the kernel (and potentially other hot paths) with Rust, but when I've tried to start out from scratch with Rust it's not been much fun . . . Julia is trying to find a middle ground that lets you have a lot of the flexibility of Python, but the performance characteristics of a fully compiled language - I haven't done more than dabble with it yet, so I can't say how successful it is with that.
Of course, if you take that multi-language approach it ends up not mattering that much what your components are written in, as long as they can interoperate - that's one thing where Rust is both good /and/ bad (as the recent discussions about ownership of buffers across the Python/Rust boundary show). In that case, Rust instead of C++ is definitely a win (for your sanity, if nothing else) . . .
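For what it's worth, a minimal sketch of that kind of split, using the PyO3 crate to hand a hot loop off to Rust (the crate version, module name, and function here are purely illustrative, not anything from a real project):

    // Cargo.toml (illustrative): pyo3 = { version = "0.20", features = ["extension-module"] }
    use pyo3::prelude::*;

    /// The hot path: a tight numerical loop that is painful in pure Python.
    /// Taking Vec<f64> copies the data in from a Python list; zero-copy buffer
    /// sharing is exactly where the ownership questions mentioned above start.
    #[pyfunction]
    fn sum_sq_diff(a: Vec<f64>, b: Vec<f64>) -> PyResult<f64> {
        if a.len() != b.len() {
            return Err(pyo3::exceptions::PyValueError::new_err("length mismatch"));
        }
        Ok(a.iter().zip(&b).map(|(x, y)| (x - y) * (x - y)).sum())
    }

    /// Python side: `import hotpath; hotpath.sum_sq_diff(xs, ys)`.
    #[pymodule]
    fn hotpath(_py: Python<'_>, m: &PyModule) -> PyResult<()> {
        m.add_function(wrap_pyfunction!(sum_sq_diff, m)?)?;
        Ok(())
    }

The experimental modelling stays in Python; only the kernel that actually needs the performance crosses the boundary.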
Posted Jan 3, 2023 5:17 UTC (Tue)
by rsidd (subscriber, #2582)
[Link]
C++ is just too clumsy for scientific programming, in my opinion.
Posted Jan 3, 2023 15:08 UTC (Tue)
by summentier (guest, #100638)
[Link] (13 responses)
I have recently switched from Python to Julia, and minor language warts aside, I really like it: it gets the performance–productivity tradeoff exactly right. I have used Julia for two medium-size projects now, both of which would have been nigh-impossible for me to do in a two-language, Python+X, setup (despite 10+ yrs of experience).
So I agree with you insofar: if I were teaching students who will soon become computational physicists, Julia, no questions asked. The problem is, I am not. In the ballpark of 9/10 of students will leave for industry before or at graduation, and 8/9 of the remaining scientists will do so as they fail to secure a permanent position in academia. Physicists are in a bit of a weird spot when it comes to industry since there are few dedicated positions – the most consistent career I could make out was sort of a "specialized computational problem solver". So it feels prudent to at least keep this in mind.
The upshot is that I am faced with a performance–productivity–popularity trilemma. We agonized over the weight of each pole in multiple faculty meetings. If we neglect performance, Python is the clear winner. Eventually, though, you will need to make your program fast, which is where vectorization and native/JIT extensions come into play. Only after starting to teach Python did I come to realize that for vectorization to make sense, you first have to rewire your brain in a weird way, which often leads to convoluted code and confused students. The two-language setup is too tall an order, particularly because of how difficult it is to teach the "compiled" side. Neglect the productivity side, and you arrive at C, C++ and, perhaps surprisingly, Fortran, all three of which make me unhappy.
That brings us back to Julia and Rust. I have become more and more confident that both languages will survive, but with both I still have the feeling that they're not quite out of the woods yet. That's why I am monitoring the Rust progress in the kernel quite closely.
Hope this clarifies things. (Thanks for sharing your Rust experience BTW.)
Posted Jan 3, 2023 15:37 UTC (Tue)
by Wol (subscriber, #4433)
[Link] (8 responses)
Out of curiosity, why Fortran? Okay, I've used nothing newer than Fortran-77, but I thought newer versions had a lot of maths operators that made life easy(er).
Cheers,
Posted Jan 3, 2023 17:30 UTC (Tue)
by fenncruz (subscriber, #81417)
[Link] (6 responses)
Perhaps its biggest strength (and weakness) is its backwards compatibility. Fortran versions are more of a guideline than a rule. So in your projects, you can combine different versions of Fortran, even in the same file. After living through the python 2 to 3 transition, Fortran's ability to still run code from the '60s unmodified is a miracle. Of course, we get stuck with language constructs that we would rather leave in the 60s, but I guess that is the price to pay.
Posted Jan 3, 2023 18:49 UTC (Tue)
by Wol (subscriber, #4433)
[Link] (5 responses)
Just watch out for that FOR LOOP. (I think I've got the right construct.)
The semantics of FORTRAN and Fortran are subtly different, which can do serious damage if you don't realise the FORTRAN guy relied on it.
Namely
FOR I = 10 TO 1
will execute the code in the loop if compiled with FORTRAN, but will skip it if compiled with Fortran.
So be careful, boys and girls :-)
Cheers,
Posted Jan 3, 2023 19:15 UTC (Tue)
by fenncruz (subscriber, #81417)
[Link] (4 responses)
Fortran does not have for loops, it has do loops; there is no NEXT statement, nor do you use TO to specify a range. All versions of fortran would use: do I=10,1 which would not execute. The difference between versions is that modern fortran would recommend an end do statement to terminate a loop while older fortran would use a numbered label to specify the end of the loop.
Perhaps you have some very non-standard vendor extensions, but that is not standard fortran in any version.
Posted Jan 3, 2023 20:26 UTC (Tue)
by Wol (subscriber, #4433)
[Link] (3 responses)
FORTRAN *would* execute the body of the loop. Once.
Fortran, as you say, wouldn't.
(Unless you use compiler specific extensions - the F77 compiler I used had a switch to enable the old FTN behaviour.)
I'm old enough to remember the difference between FORTRAN and Fortran :-) I've never heard of fortran.
Cheers,
Posted Jan 3, 2023 20:46 UTC (Tue)
by Wol (subscriber, #4433)
[Link] (2 responses)
DO I=10,1 would probably do an implicit STEP -1
J=10
Cheers,
Posted Jan 7, 2023 9:31 UTC (Sat)
by joib (subscriber, #8541)
[Link] (1 responses)
DO I=m1, m2
then m1 must be <= m2. So the loop in your example is invalid. You're most likely describing some compiler-specific extension (or accidental behavior later documented by the compiler developers as expected behavior. :) ).
Posted Jan 7, 2023 17:25 UTC (Sat)
by Wol (subscriber, #4433)
[Link]
Fortran allows a lot of (specified) behaviour in the name of optimisation that can lead to unexpected results. Like storing the index of a do loop in a register, such that it can only safely be read, and not relied on when the loop exits. It seems highly likely that most FORTRAN compilers (and certainly the one I was using in 1983, iirc) did not bother to check the loop limits. Given that I understood that FORTRAN explicitly said the index/limit check was done at the *end* of the loop, I would be surprised if there was a special check on entering the loop.
When Fortran moved the check to the start of the loop, then it makes sense that loops can execute zero times.
Cheers,
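As a side note for anyone following the Rust subthread above: Rust's ranges happen to behave like the modern Fortran semantics described here. A throwaway sketch, purely for comparison:

    fn main() {
        // Like modern Fortran's `do i = 10, 1`, a "backwards" range is simply empty;
        // the body executes zero times rather than once.
        for i in 10..=1 {
            println!("{i}");
        }
        // Counting down has to be asked for explicitly:
        for i in (1..=10).rev() {
            println!("{i}");    // prints 10, 9, ..., 1
        }
    }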
Posted Jan 3, 2023 19:17 UTC (Tue)
by ma4ris5 (guest, #151140)
[Link] (1 responses)
On the Rust language side, Google develops libraries and scientists verify those libraries (to be used in Android phones).
Azure CTO Mark Russinovich wrote last year that C/C++ should be deprecated in favor of Rust for new projects.
Container images in the cloud are (hopefully) being developed to contain fewer components with memory-safety issues;
Red Hat recommends that container images should contain only the necessary components:
I found "This week in Rust"
Posted Jan 4, 2023 8:24 UTC (Wed)
by jem (subscriber, #24231)
[Link]
The original is at: https://doc.rust-lang.org/book/
Posted Jan 4, 2023 2:16 UTC (Wed)
by ianmcc (subscriber, #88379)
[Link]
In the case of the high performance computing course, it is actually run by the computer science faculty, so they are "real" computer science students, and I find it astonishing that they have been taught a really ancient style of C code, basically K&R. Introducing scoped variables in OpenMP already runs counter to what they've been taught. Although we expect that in practice most of the students will end up using TensorFlow or some other toolkit in the real world, we're trying to teach them some computer architecture, so we cover AVX intrinsics, OpenMP, MPI, and CUDA from a relatively low level, so they learn a bit about how the CPU actually executes instructions, and how memory hierarchies work etc, the difference between the various different CUDA memory allocation functions, etc. Given the aims of the course, I think C++ works fairly well, and they write some simple CUDA kernels, which are C++ code anyway.
Using C++ for computational physics is more controversial, and I expect it will switch to Julia eventually. A blocker is that it should be coordinated with the prerequisite course also shifting to Julia. Now I quite like C++, and my career has been spent doing numerical simulations using code I've written in C++. But it is a really huge language, and few (if any) computational physics students have the time and inclination to properly learn it, even if they go on to a professional career. 20 years ago the decision to use C++ was very clear; for serious scientific computing (back then) you needed to use a systems programming language, or something with comparable facilities (Fortran might qualify, but barely), because you needed to do your own memory management, and compared with most other options, C++ would let you do that but also use a high level of abstraction in the actual numerical algorithms. This became a very powerful approach with the advent of generic programming. A big disadvantage of C++ is that you need to find (or write your own) libraries for linear algebra etc. The early foray into high-performance computing in C++ with valarray was not a success, but there are now some HPC people from Sandia National Lab and other places on the C++ standards committee, so that will hopefully lead to some interesting developments.
In the past I've taught the computational physics course using Matlab, Python, and for a couple of years own-choice of language. For a while students all learned a bit of Matlab in some earlier courses, and these days they should all have done a bit of Python. But while those languages are OK for some things, they are terrible for things like Monte Carlo, where you want to have a tight loop that runs as fast as possible. Own choice of language was a bit of a mess; one year a student used Haskell, which I was initially OK with, expecting some elegant one-line (or few-line) solutions, but his code ended up way longer than anyone else's, and was totally unreadable.
I think in the longer term, if C++ is going to survive then it needs to become a smaller language, eg by compiler enforcement of the C++ core guidelines, and perhaps a language dialect that removes as much unsafe stuff as possible, eg get rid of char* (or maybe get rid of all bare pointers) and just make string literals an std::string. The barrier to entry to learn how to write safe C++ is just too big.
Rust is certainly an interesting alternative, and it may well end up as a match-up between Julia and Rust for scientific computing in cases where Python won't cut it.
Posted Jan 4, 2023 5:51 UTC (Wed)
by himi (subscriber, #340)
[Link]
In that kind of context it probably makes sense to be moderately aggressive about moving away from C++ towards something like Rust, given the trends that seem to be developing. There's also maybe a case to be made for the Rust community working to help find ways to make the language more practical for the kind of exploratory development that Python is so good at - I don't know whether that's really viable, though, given the nature of Rust. Alternatively, pushing for Julia to give more consideration to systems development might be an option, though again I'm not sure how realistic that might be.
Good luck with your probably thankless educational challenge!
Posted Jan 3, 2023 11:32 UTC (Tue)
by beagnach (guest, #32987)
[Link]
great analogy
Posted Jan 3, 2023 22:56 UTC (Tue)
by eean (subscriber, #50420)
[Link] (10 responses)
Rust getting immense credibility from its use in Linux is very real though. You'd be able to tell your students "it's used by Linux now" and folks in the workforce will be able to tell their managers the same.
Posted Jan 5, 2023 20:59 UTC (Thu)
by acarno (subscriber, #123476)
[Link] (9 responses)
In short, I've become a huge Ada evangelist, but I'm sadly aware that it'll never see the meaningful uptake that Rust and other "modern" languages will.
Posted Jan 5, 2023 21:43 UTC (Thu)
by khim (subscriber, #9252)
[Link] (8 responses)
For the longest time Ada looked like an insane contradiction: it offered tools to control stuff above and beyond what even Rust may offer while simultaneously refusing to even attempt to solve the biggest issue of them all, memory safety. Eventually it took a page from Rust's book and is now, apparently, safer than Rust, but it's still a no-community language (does it even have any popular non-trivial libraries?), which makes it a non-starter for most projects.
Posted Jan 7, 2023 9:06 UTC (Sat)
by joib (subscriber, #8541)
[Link] (7 responses)
I guess people developed an instinctive dislike for it because they were forced to use it, thanks to DoD contracts requiring its use. Then the Ada toolchain vendors took the opportunity to fleece the captive users as hard as possible, killing Ada usage except for DoD contracts. And for civilian usage C came to dominate thanks to Unix and compilers being free or at least cheap.
And eventually when the DoD stopped requiring Ada it all but died.
Sometimes I wonder about an alternative history where Ada instead of C became the 'standard' systems programming language. Oh, how many fewer security vulnerabilities we would have had to endure. *sigh*
Posted Jan 7, 2023 13:57 UTC (Sat)
by khim (subscriber, #9252)
[Link] (6 responses)
I don't think so. Ada, for years, covered the remaining 30% well, but never had an answer to the main problem of those pesky 70%. Today… it borrowed the solution from Rust, sure, but before that… it was only “safe” in the limited world of “no dynamic memory”. Thus we could have had a somewhat smaller number of vulnerabilities, but it's hard to say how much smaller.
Posted Jan 9, 2023 22:50 UTC (Mon)
by NYKevin (subscriber, #129325)
[Link] (5 responses)
[1]: All of the bugs documented in https://youtu.be/9xE2otZ-9os?t=189 are the result of reusing statically-allocated memory. In fairness, this technique is technically equivalent to using a fixed-size (block) allocator with a very small heap. But then you can describe almost any use of non-stack memory as "technically" some kind of dynamic allocation.
Posted Jan 9, 2023 23:38 UTC (Mon)
by khim (subscriber, #9252)
[Link] (4 responses)
Well… affine types (which Rust and now Ada are using to solve the handling of dynamic memory) were, initially, invented by mathematicians and adopted by GC-based functional languages. Not to manage memory, but to manage external resources (in that case “they would be freed but we have no idea when, precisely” is a bad answer). The Rust discovery (as with TMP it was discovered, not designed into the language from the beginning) was surprising and somewhat startling, and it's not even a mathematical fact, but a social one: if you give people an easy-to-use affine type system then they can solve almost all practical memory-handling problems without GC, just with a small amount of unsafe code. It's still not clear whether you can rewrite any old code piecemeal with similar results (which is what the Linux Rust project is, essentially, trying to do) or whether you have to design everything from scratch for that phenomenon to work, but the whole thing wasn't pre-planned when Rust was first imagined. But yes, the heap is definitely not the only resource which you need to manage… just, probably, the most important one. And it's scary how many really profound, important results are not designed, but discovered when people design something entirely different… are there similarly simple things which could have changed our computing (or maybe more than just computing) world as profoundly and which were just simply not discovered in time?
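To make that concrete, here is a tiny sketch of what that affine ("use at most once") discipline looks like in practice; the type and function names are invented for illustration:

    // Ownership is tracked by the compiler: a value can be moved (consumed)
    // at most once, so double-free and use-after-move simply do not compile.
    struct Buffer {
        data: Vec<u8>,
    }

    fn consume(buf: Buffer) -> usize {
        // `buf` is dropped (freed) when this function returns.
        buf.data.len()
    }

    fn main() {
        let b = Buffer { data: vec![0u8; 16] };
        let n = consume(b);                  // ownership of `b` moves into `consume`
        println!("{n}");
        // println!("{}", b.data.len());     // error[E0382]: `b` was moved into `consume`
    }

The same mechanism handles files, locks, and other external resources, which is where the idea originally came from.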
Posted Jan 10, 2023 12:12 UTC (Tue)
by kleptog (subscriber, #1183)
[Link] (3 responses)
The thing I found most amazing was its impact on some junior developers. They'd start with C and pointers and get themselves tangled into knots keeping the ownership/concurrency/etc. straight. But after using Rust for a while they'd internalised the model, and after that, coding C became much less scary because they had a model that they knew worked, and all C needed was some boilerplate that the Rust compiler handles for you (i.e. working without guardrails).
The resulting programs became better simply because they understood the ownership of the objects they're manipulating, rather than just throwing pointers around. Some of us had to learn that the hard way using debuggers on segfaulting programs.
Posted Jan 10, 2023 17:08 UTC (Tue)
by farnz (subscriber, #17727)
[Link] (2 responses)
One thing I've seen time and time again with junior developers is that they've not yet internalised the rules of programming (in any language, not even the one they're working in), and are reliant on the computer telling them that they've made a mistake. Having this happen at compile time is better, but even a runtime exception (like in Java or C#) is an enormous help, since it means you get feedback that you've written something illegal.
In this context, C's thing of "if it's UB, the computer can silently do the wrong thing" is really bad for a developer's education, since the computer will often appear to work even though the developer has done the wrong thing (buffer overflow, use-after-free). Rust's move to "in order to even have UB, you need the unsafe keyword" means that junior developers know to stay away from potential UB, and thus avoid the C problem, since it's obvious to them that they're potentially making a mistake.
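A trivial sketch of where that boundary sits (nothing more than an illustration):

    fn main() {
        let v = vec![1, 2, 3];

        // Safe Rust: an out-of-bounds access is a defined, visible failure.
        // `get` returns None, and indexing with `v[10]` would panic, never UB.
        println!("{:?}", v.get(10));             // prints "None"

        // To even open the door to UB you have to write `unsafe`, which is
        // exactly the signpost a junior developer learns to treat with care.
        let x = unsafe { *v.get_unchecked(2) };  // fine here; UB if the index were 10
        println!("{x}");
    }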
Posted Jan 10, 2023 19:36 UTC (Tue)
by khim (subscriber, #9252)
[Link] (1 responses)
This complicates life with junior developers, but they are not the most problematic case. Look at that whole discussion. Wol most definitely is not a junior. It's not easy to educate junior developers, but it's almost impossible to educate senior ones… because often they firmly believe they know what this or that undefined language construct does. Even if the documentation says something else.
Posted Jan 10, 2023 19:46 UTC (Tue)
by Wol (subscriber, #4433)
[Link]
(Not helped because C is not my language of choice.)
Cheers,
Posted Jan 2, 2023 22:25 UTC (Mon)
by unixbhaskar (guest, #44758)
[Link] (2 responses)
Hope to have a good year for me and wish the same for others too.
Posted Jan 3, 2023 0:38 UTC (Tue)
by magfr (subscriber, #16052)
[Link]
As we are all aware, fighting social changes with technology ain't working, so I fear that the detection AI will become obsolete before the generation AI does.
This is actually a big problem - I heard about the detection AI in an article by a teacher about AI generated essays.
Posted Jan 3, 2023 0:38 UTC (Tue)
by mtaht (guest, #11087)
[Link]
I am glad that the ubuntu base already has fq_codel in it, but I'd hoped to see cake take off... and so many other possibly innovative kernel features, things like BBR.
I got an acer 516GE chromebook for christmas, and the idea of my entire experience being locked away behind a container really bothers me, ESPECIALLY not being able to take a packet capture from the chromeos side, and being able only to "trust in google"... and a bunch of natted ips. The darn thing, unlike my previous chromebook, had no fq_codel in it, so an upload, locally, at a gbit, clocks in at 100+ms of extra latency. (I filed a bug on it here: https://support.google.com/chromebook/thread/195481344?hl=en but having to campaign to get a kernel option switched vs just changing one line of conf... for this, or ecn... um...)
I'd got kind of used to being able, on my ubuntu studio boxes, to run the sqm-scripts and manage inbound lte and wifi traffic also...
Yes, I know I can turn on developer mode, but freezing Linux's progress further, not just on my innovations - (I can't run a routing protocol or server inside chromeos either, at least so far), makes me worry about an ever more locked down future, where linux once represented freedom to innovate (as well as screw up).
So count me out on the immutable containerization movement, except where I absolutely have to do it. Bare metal to the end.
Posted Jan 3, 2023 2:01 UTC (Tue)
by marcH (subscriber, #57642)
[Link] (1 responses)
The good news is: unlike when the Spam Messaging Transport Protocol was invented, we have a pretty good idea of the threats. Holding my breath.
> Even the most ardent believers in the "last post wins" approach to mailing-list discussions will get tired and shut up eventually; automated systems have no such limits.
"Dumb" spammers have been automated for a very long time and they had no such limit either. Why would filtering out "smart" AI spammers be harder? My 2023 prediction: systems that failed to fight "dumb" spam will fail to fight "smart" spam and those that succeeded won't see much difference either.
Immediately... proving myself wrong with this new policy? https://meta.stackoverflow.com/questions/421831/temporary...
> Android is arguably the most prominent example of an immutable system; the Android core can only be changed by way of an update and reboot — and the previous version is still present in case the need arises.
Pretty sure ChromeOS came first https://source.android.com/docs/core/ota/ab
Posted Jan 3, 2023 9:31 UTC (Tue)
by excors (subscriber, #95769)
[Link]
I think Android was always (including pre-ChromeOS) designed around immutable system partitions, though early devices didn't use the A/B scheme (https://source.android.com/docs/core/ota/nonab) - they'd just download the OTA package into a cache or data partition, then reboot into recovery mode and install the update into the system partitions. Not ideal since the device is unusable for several minutes while updating, and there's no easy way to roll back if it doesn't boot successfully, but the system is still immutable outside of that OTA process.
Reportedly Samsung phones still don't do A/B updates, I guess because of the major downside that it doubles the required flash space.
Now there's also "virtual A/B" (introduced in Android 10, reportedly mandated by GMS on Android 13) which (if I understand correctly) uses dm-snapshot to cheaply snapshot the original system partition, and installs the update onto that snapshot via temporary COW storage in the data partition. Then it reboots using the updated snapshot, and once it's booted successfully it can seamlessly merge the COW storage back into the original system partition and free up the space in the data partition. (And if it doesn't boot successfully, it can simply ignore the snapshot and reboot into the original image). That should give the benefits of A/B updates without such high cost in flash space. (https://blog.esper.io/android-13-virtual-ab-requirement/)
Posted Jan 3, 2023 9:28 UTC (Tue)
by TheGopher (subscriber, #59256)
[Link] (39 responses)
I do hope though that the kernel will move to the gcc rust implementation. This would avoid the kernel becoming a "monorepo" style repository requiring two toolchains with a version compatibility matrix.
GCC is also important for the embedded space. When we moved from proprietary compilers to gcc in the embedded work at a former employer it was quite a revolution; suddenly we could customize the toolchain and upgrade compilers without vendor communication. rustc's licensing may see a return to the bad old days when we got a binary delivery of the compiler and were beholden to the hardware vendor for fixes; this would be very unpleasant and would make Linux much less desirable within the embedded space.
Posted Jan 3, 2023 10:51 UTC (Tue)
by rsidd (subscriber, #2582)
[Link] (37 responses)
gccrs has a long way to go to catch up, though an independent implementation is always good.
Posted Jan 3, 2023 11:37 UTC (Tue)
by TheGopher (subscriber, #59256)
[Link] (36 responses)
Apache 2.0 and MIT are GPL-compatible in the sense that you can integrate them into a GPL project, but not in the sense that they actually guarantee you access to the source code; they're a substantial step down on the copyleft scale. I would never agree to use a compiler under these licenses in an embedded project, simply because of the exposure to bad actors (or even the hardware vendor going bankrupt, leaving us unable to get new compiler versions (this has happened!!)).
Posted Jan 3, 2023 14:54 UTC (Tue)
by Wol (subscriber, #4433)
[Link] (1 responses)
So while you have the source, and it's licenced Apache 2 or MIT, you can't actually distribute it until the NDA sunset kicks in. Not the best state of affairs, but you have an escape hatch ...
Cheers,
Posted Jan 3, 2023 16:10 UTC (Tue)
by ballombe (subscriber, #9523)
[Link] (2 responses)
This is an example of the author protection the GNU GPL gives over MIT-style licenses.
Posted Jan 3, 2023 20:51 UTC (Tue)
by Wol (subscriber, #4433)
[Link]
Read up on the case, it's fascinating. And a salutary lesson in how (not) to behave.
Cheers,
Posted Jan 3, 2023 16:33 UTC (Tue)
by TheGopher (subscriber, #59256)
[Link] (1 responses)
There is a big difference between GPL and MIT/Apache 2 in the embedded industry, and also in the proprietary server software industry. A lot of companies try hard to avoid GPL code due to the license requirements, and try to avoid contributing due to implicit/explicit patent licenses.
I think people have been spoiled in the belief that MIT/Apache 2 means they will get the code, but this is a relatively recent phenomenon, and there is nothing guaranteeing this will continue. In fact, I would argue that for the majority of LLVM installations (Apple Xcode) you do not in fact have access to the source code, and Apple has shipped changes that are not in mainline LLVM. I don't know what approach microsoft are taking to the clang/llvm versions they're including with visual studio, but it may soon be that the open source LLVM deployments are in a very small minority.
If Apple and Microsoft start shipping rustc with OS specific UI extensions and OS specific optimizations/fixes in LLVM do you seriously think they will hand over their source code?
Posted Jan 4, 2023 14:51 UTC (Wed)
by pizza (subscriber, #46)
[Link]
> but it may soon be that the open source LLVM deployments are in a very small minority.
We've been well past this point for a long, long time, and that has been my experience for about the past decade.
I recall that at one point I had *five* different LLVM instances on my $dayjob workstation, and the only one that even had the option of source code was the one supplied by my Linux distribution. The rest all had some amount of non-upstreamed "secret sauce", and two even required phoning home to a license server.
Posted Jan 3, 2023 17:51 UTC (Tue)
by mtaht (guest, #11087)
[Link]
You can't build another twitter or facebook out of GPLv3'd components. You can, I think, build better social media.
I've also long thought that the constraints of the GPLv2 led to the cloud as we know it today, and now that that enormous suite of code has been thoroughly mined out, more and more stuff will move back. At least, I hope so.
Posted Jan 4, 2023 12:18 UTC (Wed)
by khim (subscriber, #9252)
[Link]
Actually GPLv3 found an interesting niche: the release of old software for hobbyists, with the assurance that it wouldn't be used for commercial products. Something like 386MAX. There GPLv3 actually makes sense! Precisely because companies don't want to touch it with a 10-foot pole.
Posted Jan 5, 2023 15:29 UTC (Thu)
by jschrod (subscriber, #1646)
[Link] (21 responses)
You make it sound as if the shipping of binary-only versions of LLVM stuff by vendors "could" happen, but would be a negligible occurrence. Well, vendors do this *regularly*; this is not "could ship" but "do ship". Please note pizza's empirical evidence in another answer to your post.
In addition, I don't know what your reference to "certain copyleft activists" shall contribute to this thread -- it has nothing to do with it. TheGopher points out real economic advantages of using a GPL-licensed toolchain in the embedded market for a system integrator. The thread is about money and resilience, not about some "holiness" concerning "free software".
In fact, you bringing in that point smells as if you want to portray all GPL proponents as being infected by those "certain copyleft activists" and imply that by that infection these proponents are all wrong. (Whoever these activists are; I have never read anything from any serious developer or FOSS project owner who would proclaim such a nonsense.)
FWIW, a disclaimer: I'm the owner of a company whose software is mainly licensed under GPL -- for economical reasons, not for ideological reasons.
Posted Jan 5, 2023 22:53 UTC (Thu)
by khim (subscriber, #9252)
[Link] (20 responses)
GPL is actually a pretty good fit for developer tools. Even GPLv3. After all you are, quite literally, giving something to a guy who knows how to use source code! Of course in that case the chances that s/he would want to see and use the source code are pretty decent, and they may even pay you extra for that opportunity. It's when you ship code to the end user, who doesn't care about source and couldn't use it anyway — then it becomes a problem: all the costs from having to deliver source remain, but you get nothing to compensate you for all that work!
Posted Jan 6, 2023 15:55 UTC (Fri)
by pizza (subscriber, #46)
[Link] (19 responses)
See, even after all these years, I still don't get that.
The "costs" from having to deliver source are nearly entirely process-related; the actual direct cost of supplying source isn't even a rounding error on a per-unit basis. [1] And if you are doing proper supply-chain/bill-of-materials tracking (which has not only been "best practice" for many years, but is increasingly becoming a hard requirement) even the process cost of delivering source is practically nil; it's just one additional build artifact. [2]
[1] Assuming electronic delivery, that is. But all source-required licenses allow you to charge to cover the cost of physical media and postage.
Posted Jan 6, 2023 16:48 UTC (Fri)
by Wol (subscriber, #4433)
[Link] (7 responses)
Actually, if you want to comply with the LETTER of the GPL *TWO*, then it IS a problem.
If you put two tarballs on your website, one containing source, one containing binaries, you have just triggered the requirement "to provide the source to anyone who asks, for three years". That is considerable hassle and pain, just because you separated the source from the binary, and didn't *force* the customer to download the source.
Likewise, if you're selling the binary as part of a physical artifact, the same applies if you don't include a CD or SD-card or *something* with the source on it. So that's an extra per-unit cost.
Some people view that as an advantage, but it's actually a serious bug in the legalese. Back in the old days, when everything was shipped on tape, it was just assumed that it came as source or a complete build system and it didn't matter. The web, and embedded, broke those assumptions.
The GPL v3 fixes it in the sense that - so long as you offer to supply the source with the binaries, and the customer chooses not to take up the offer - the requirement to make source available does not apply.
Cheers,
Posted Jan 6, 2023 18:07 UTC (Fri)
by khim (subscriber, #9252)
[Link]
The biggest cost is still in the need to track and support. And to know what you are shipping to whom. Just like you may order black screws and a tiny mom-and-pop Chinese shop may send you blue ones because they have run out of black ones? Chabudo! Software components are developed and delivered in a similar fashion. GPL requirements are insane in that world because they mean not just the tiny costs needed to deliver some source files, but the insanely huge effort needed to keep track of what goes from where to where! And the need to get sources out of your suppliers, too. And if the actual end user doesn't care (because s/he doesn't know what to do with these sources anyway)? It's a huge difference!
Posted Jan 7, 2023 14:13 UTC (Sat)
by khim (subscriber, #9252)
[Link]
No, it was always there. That's because that was never what was discussed there. RMS certainly knew about ftp and about how ftp works. FTP predates GNU and the GPL by a full decade, for crying out loud! The discussion on Groklaw was about more modern ways of distribution, e.g. Bittorrent. There, someone who hasn't yet fully downloaded the sources or binaries is already distributing them. What happens if the binary was delivered but the source was not and the seeder disappeared? Now you have a legal obligation to deliver sources which you don't have and couldn't have! And this GPLv2 disclaimer wouldn't save you: it protects the seeder, who can always claim that s/he offered the full thing but you just failed to download the stuff in time, but it doesn't cover someone in the middle! I guess these fine details of the discussions were just forgotten over time.
Posted Jan 7, 2023 21:56 UTC (Sat)
by pizza (subscriber, #46)
[Link] (2 responses)
Has anyone *ever* gone after someone for making a good-faith effort to comply with the GPL?
Seriously, we're talking about organizations who *simply don't care at all*, and when prompted, if they even respond, it's with a "go ahead and sue us if you feel so strongly about it" attitude.
The "we weren't aware of the GPL's obligations" era completely ended over a decade ago. (Not-so-coincidentally, that's after non-GPL replacements for a lot of the core GPL stuff started to see serious corporate investment)
Posted Jan 7, 2023 23:11 UTC (Sat)
by khim (subscriber, #9252)
[Link] (1 responses)
Maybe in the B2B market in the US it ended over a decade ago, but talk to the guys who enforce the GPL. A significant number of lawsuits are against companies who are just not in the software business at all; they just ordered some gadget from China or India (to sell for a modest profit), and the idea that they have some obligations WRT software licensing comes as a shock to them. Yesterday they were delivering sofas and kitchen utensils, and today some guy asks them for software that they never even knew existed in the first place? Ridiculous! Yes, awareness is slowly reaching down to the lower levels of the “Tower of Babel”, but will it ever permeate the whole thing? Doubtful.
Posted Jan 8, 2023 1:12 UTC (Sun)
by pizza (subscriber, #46)
[Link]
"Ignorance of the law is no excuse for violating it"
Businesses have to be aware of, and comply with, all relevant regulations in the space they operate.
If they get burned because they weren't aware of what went into the stuff they're selling, then they're (at best) incompetent.
Posted Jan 6, 2023 17:51 UTC (Fri)
by khim (subscriber, #9252)
[Link] (10 responses)
I wonder where that delusion comes from. I have dealt with these things in the past, and the situation where entirely different things are sold under the exact same moniker is not rare. As long as a component behaves approximately how it's supposed to work, it's accepted. And if you need to pull and change the binary from a working device to duplicate its work… then that's what happens. Maybe in your world, as you stop using cheap components and switch to rare and expensive ones, it becomes rarer, but try to buy something cheap on Aliexpress or any other similar site and see what you'll be getting. You may scream that the world is wrong and stupid. As long as you only care about 5-10% of the world and ignore the rest… the world will ignore you. And rightfully so. You are deluding yourself if you think that the orderly world of giants like Google or Microsoft, who are doing the “due diligence”, is the only thing that matters. On the contrary: what these giants are doing is a tiny speck of the development that happens. Only these days it happens far away from the US, mostly in China, and while you may [try to pretend] that it doesn't matter… long-term the people who are actually doing things will prevail.
Posted Jan 6, 2023 21:32 UTC (Fri)
by pizza (subscriber, #46)
[Link] (9 responses)
That "delusion" is taken directly from the boilerplate "bill of materials" clauses requirements in the contracts I'm being held to. And previous $dayjob was in a heavily regulated industry with 10+ year support lifecycles and they would be in very, very expensive trouble if they found themselves in a position where they couldn't remedy any defects that show up in the field.
> As long as component behaves approximately how it's supposed to work it's accepted.
If that's the acceptance criteria in your contract, sure. I'm saying that it's increasingly not.
> You may scream that world is wrong and stupid. As long as you only care about 5-10% of the world and ignore the rest… world would ignore you. And rightfully so.
Of course they ignore you; it's cheaper to not care! Until suddenly it isn't. And oddly enough, they seem to care a lot more in the future.
But I don't care what "the world" thinks in the end; my job is to provide analysis and advice to the folks that are paying me. They're also completely free to ignore me. And that's fine too, because it's their neck on the line.
> You are deluding itself if you think that orderly world of giants like Google or Microsoft who are doing the “due diligence” is the only thing that matters.
Google and MS are anything but orderly in their internal processes. Both think nothing of just axing entire product lines with near zero notice.
Posted Jan 6, 2023 22:11 UTC (Fri)
by khim (subscriber, #9252)
[Link] (7 responses)
Which is still much more “orderly” than the majority of the world. At least they wouldn't sell you an entirely different thing under the exact same name (I picked the extreme case where literally everything is different: hardware, software, even the shape of the case… entirely different hardware in the same case is much more common). When were you last in a supermarket? Visit any local one and compare the number of offerings with labels “Made in US”, “Made in Germany” or “Made in Japan” (the countries where people care a bit about these things) to labels “Made in China” or “Made in Malaysia” (where they can afford not to care). Nope. When they are forced to care they just switch country and continue not to care. So now you want to apply something you observed in one very narrow niche and claim that's how the whole world behaves? The majority of the world doesn't care about these “best practices”. And your handwaving won't change that. And for the majority of the world the GPL is unacceptable; that's why it's not used much where people can avoid it. Even Linux's license is, mostly, treated as if it were a BSD-licensed thingie. Only when companies grow beyond a certain size do they become lucrative enough targets and have to deal with the GPL; small players are blissfully unaware that they have any obligations. In effect it works like Microsoft's Windows strategy: if they would pirate software anyway, then let us make them pirate ours, and we'll squeeze money from them later. This makes the GPLed kernel acceptable for the industry. Busybox is slowly being replaced with toybox, on the other hand, because its copyright holders don't want to do that.
Posted Jan 6, 2023 23:34 UTC (Fri)
by pizza (subscriber, #46)
[Link] (4 responses)
The ones that get burned aren't the Chinese OEMs, it's the folks importing them into the US. Because they're the ones legally on the hook for this stuff. Once they lose enough money, they'll start caring, or they won't have any money left over.
> The majority of the world don't care about these “best practices”. And your handwaving wouldn't change that.
What makes you think I care about the majority of the world? They're not, nor will they ever be, my customers/clients. Indeed, the majority of the world will never even interact with any of the software I've worked on. Which is perfectly fine.
What matters to me is what the folks paying me, or likely to pay me, require. And those folks tend to care very much about the provenance of their software. Granted, the desire to avoid all GPL-licensed software is a nontrivial part of that.
> Only when companies grow beyond certain size they become lucrative enough targets and have to deal with GPL, small players are blissfully unaware that they have any obligations.
No, they're not "blissfully unaware"; they simply don't care or think they can get away with it for "long enough".
Posted Jan 7, 2023 14:58 UTC (Sat)
by khim (subscriber, #9252)
[Link] (3 responses)
I think you are overestimating the impact the US has on world markets. It's no longer the 1970s; the US is no longer half of world manufacturing. It no longer works like that. People close to the “money printer” have money, not people who obey the rules. It doesn't matter how much money people earn, only how much they promise to earn. And if the “money printer” stopped working, the US economy would implode and all these regulations wouldn't mean anything anyway. It's funny, because you have just “proved” that it shouldn't be a problem for them. Nope. Software engineers may know about the GPL; business owners often have no idea it even exists. That's where the whole drama starts: businesses expect that software engineers will deal with software and will add a certain sum to the BOM, and they expect that lawyers will deal with licenses and will add a certain sum to the BOM. What they don't expect is a sudden need to redo their whole processes just to comply with the license! That's the biggest problem of the GPL: it's not that it makes something more expensive, but that it requires payments in a totally unexpected form. Not in US$ or CN¥ (which you can easily loan if you are close to the “money printer”) but in time and other precious things. That is shocking… especially after they heard that “free” word and expected to see a moderate monetary sum, but saw a request for something much more precious.
Posted Jan 7, 2023 21:21 UTC (Sat)
by pizza (subscriber, #46)
[Link] (2 responses)
huh? ... and you're accusing *me* of handwaving?
"money printing" applies to folks in the financial sector (including governments) -- It doesn't work when you're dealing with actual physical goods.
> Nope. Software engineers may know about GPL, business owners often have no idea it even exist.
No; "business owners" are *more* likely to care than random software engineers. Because the business owners have lawyers whose job it is to keep the business out of trouble, and as a group tend to be (fiscally) conservative and risk-adverse. The random software engineers just follow the policies (if any) the owners tell them to follow. Or they get fired.
> What they don't expect is sudden need to redo their whole processes just to simply comply with the license!
The "redo their whole processes" was already happening thanks to security disasters and patent trolls. The actual "complying with source requirements" is *minor* compared to the overhead of tracking the fundamental software Bill of Materials. FFS, most of the commercial licenses I've dealt with over the years have had *far* more onerous (and therefore process-affecting) requirements.
Posted Jan 7, 2023 23:01 UTC (Sat)
by khim (subscriber, #9252)
[Link] (1 responses)
Of course it does! How do you think Europe can afford natural gas prices which are 10 times higher than in previous years? By printing money. If they stopped doing that, then immediately everything which needs natural gas would stop being produced. That's just one simple example, but if you dig just a little you will find out that your whole doubleplusgood B2B and military industries may exist in the form in which they exist because they are attached to the money printer. If it were turned off… the Great Depression would be a mild bump compared to that. The ability to afford one single lawyer already puts a business into the “medium-sized” category. That's not where the majority of the world works. You, apparently, live on the highest floors of the Tower of Babel and for some reason want to pretend that the whole thing is not relevant to you. That's where all these extra expensive and lucrative software packages like SAP matter. Sadly the mere existence of these floors is predicated on the stability of the whole thing. All these nice highly regulated companies at the top of the tower… with lawyers, “conservative and risk-averse” (ha-ha-ha, that's why we have more insolvent companies in the US than in China and India, right?), may exist solely because there are millions of companies under them who couldn't even afford any lawyers or BOM tracking. You may continue to ignore the world below your “top floor” until it is destroyed, or you may look at how the world actually works now, before you suddenly find out that your “top floor” has disappeared. Even if it doesn't disintegrate before your death (unlikely, but hey, stranger things have happened), you can't change the fact that both its existence and its rules (which make GPL compliance not a big deal) depend on the other floors of that tower.
Posted Jan 8, 2023 1:46 UTC (Sun)
by pizza (subscriber, #46)
[Link]
In this particular situation, I was talking about a literal mom-and-pop software consulting outfit for which I was the only employee for most of the time I worked there.
The other situation was a ~15-person contract design/manufacturing house. We weren't in a regulated space but many of our customers were, and the stuff we designed/built for them (software included) had to meet the requirements of the spaces in which our customers operated. This included quite a few "all sub-sub-sub-sub contractors to federal contracts" (or worse) DoD-specific requirements.
So yes, we had lawyers on retainer. Yes, it represented a non-trivial amount of expenses. And it was an utterly necessary expense if we wanted to land *any* contracts at all.
Excrement flows downhill. So do requirements. Pretending otherwise is delusional and leads to you rapidly going out of business. Also, most software is not written for incorporation into end-user retail products (and by extension, most "programmers"), so acting like end-user retail is the only scenario that matters is, at best, naive.
Posted Jan 6, 2023 23:46 UTC (Fri)
by excors (subscriber, #95769)
[Link]
Sure they would - e.g. Microsoft released a Surface Pro in 2013 and a very different Surface Pro in 2017. (The latter came between the Surface Pro 4 and the Surface Pro 6, but it officially has no number in its name). Google had three generations of Chromecast with very different hardware and appearance, and two Nexus 7s, and so on. I'm not sure what point you're trying to make, but marketing different entries in the same product category with the same name is not unusual.
Posted Jan 7, 2023 17:42 UTC (Sat)
by Wol (subscriber, #4433)
[Link]
> The majority of the world don't care about these “best practices”. And your handwaving wouldn't change that.
Agreed. I want to introduce an Open Source (GPL2) product into my employer. As an end user, I completely and utterly don't care (don't need to care) about all that stuff. If I were supplying to end-users, I should care, but if they don't care why should I bother? People will only care on a "because I have to" basis when they're dealing in big B2B deals. Which - it sounds like - is Pizza's arena. Very much a niche arena.
> And for the majority of the world GPL is unacceptable, that's why it's not used much where people can avoid it.
Well, although I don't *need* to care, I'm a FLOSS guy, so I do. But again, I'm a computing guy in an end-user role. Somewhat niche. And I take the simple attitude "if it's FLOSS, and I only deal in source, then I don't need to care". I'm lucky, I can ... :-)
Cheers,
Posted Jan 7, 2023 17:32 UTC (Sat)
by Wol (subscriber, #4433)
[Link]
> If that's the acceptance criteria in your contract, sure. I'm saying that it's increasingly not.
You're missing the consumer market.
I remember seeing complaints - with regard to Linux - that they were having trouble caused in large part by the fact that equipment they were buying had the same case, the same (alleged) behaviour, the IDENTICAL part number, and a completely different BOM inside ... bit of a problem when you're trying to install embedded Linux - like the WRT54 ... (pre WRT54G :-)
Cheers,
Posted Jan 4, 2023 2:49 UTC (Wed)
by josh (subscriber, #17465)
[Link] (3 responses)
Same with GCC: people *could* forward-port changes to a newer toolchain, but that doesn't mean they *will*.
All that said, I do *wish* rustc were GPL. But I don't think that solves the problem, at all. GPL forces a fork to provide its source code. But better still is encouraging people to *upstream* their code.
Posted Jan 4, 2023 14:56 UTC (Wed)
by pizza (subscriber, #46)
[Link] (1 responses)
FWIW in my experience, being "stuck" on some ancient kernel has been due to some necessary driver or other kernel component not actually having source available. The classic example being the wifi drivers for various consumer-grade routers.
(That's not to say it's easy even when you *do* have complete sources; the PITA factor of forward-porting vendor spaghetti might make such a port unjustifiable...)
Posted Jan 4, 2023 20:50 UTC (Wed)
by josh (subscriber, #17465)
[Link]
Posted Jan 5, 2023 15:13 UTC (Thu)
by jschrod (subscriber, #1646)
[Link]
TheGopher was relating his experience as a *developer* in the embedded world who *provides* drivers for new hardware.
Those who are stuck are often *users* who have drivers that won't be updated.
Posted Jan 3, 2023 23:27 UTC (Tue)
by piexil (guest, #145099)
[Link]
Posted Jan 3, 2023 15:45 UTC (Tue)
by pj (subscriber, #4506)
[Link] (3 responses)
I've started using Nix to replace all the individual tool-version-management tools like nvm, pyenv, sdkman, etc., because that one tool replaces N tools.
Posted Jan 4, 2023 1:09 UTC (Wed)
by beagnach (guest, #32987)
[Link] (2 responses)
Are the documentation and ergonomics good enough that in one morning I'd be able to get this up and running and back doing productive work in my various projects in various languages?
Posted Jan 4, 2023 20:07 UTC (Wed)
by bronson (subscriber, #4806)
[Link]
I really like Nix, and it's helped me solve some sticky internal tooling issues where nothing else came close, but I'd be reluctant to work with it while on a tight deadline. You need to come to Nix with the intent to learn.
Posted Jan 5, 2023 21:58 UTC (Thu)
by smammy (subscriber, #120874)
[Link]
Posted Jan 5, 2023 17:32 UTC (Thu)
by glenn (subscriber, #102223)
[Link] (1 responses)
Posted Jan 6, 2023 4:57 UTC (Fri)
by mathstuf (subscriber, #69389)
[Link]
Posted Jan 7, 2023 21:27 UTC (Sat)
by vjanelle (subscriber, #44943)
[Link]
Posted Jan 12, 2023 11:02 UTC (Thu)
by athulmul (guest, #143335)
[Link]
something from there I get the right stuff. I guess the complaints are "social justice" related and not technical?
K=1
DO I=K,J would not execute in Fortran, but would execute once with I set to 10 in FORTRAN.
Wol
either using Julia's runtime for Rust, or using, for example, Tokio for Rust: https://github.com/Taaitaaiger/jlrs.
Currently Rust can't guarantee memory safety in some cases with the Julia runtime there.
are memory- and data-race-safe: compilation will fail if the safety rules are not met.
What I don't like about Julia's current status is "you have to be careful to avoid data races with mutexes and threads":
Should I validate each Julia library for possible thread-safety issues?
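To make the "compilation will fail" point concrete, here is a minimal sketch (my own illustration, not from the original comment) of the kind of checking rustc does: a counter shared across threads compiles only when the types involved satisfy the thread-safety rules.

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Shared ownership (Arc) plus synchronized mutation (Mutex):
        // this satisfies the Send/Sync rules, so it compiles and runs.
        let counter = Arc::new(Mutex::new(0));

        let handles: Vec<_> = (0..4)
            .map(|_| {
                let counter = Arc::clone(&counter);
                thread::spawn(move || {
                    *counter.lock().unwrap() += 1;
                })
            })
            .collect();

        for handle in handles {
            handle.join().unwrap();
        }

        println!("final count: {}", *counter.lock().unwrap());

        // Swapping Arc for Rc, or dropping the Mutex and mutating the
        // shared value directly from several threads, is rejected at
        // compile time rather than becoming a data race at run time.
    }

None of this extends automatically across a foreign runtime such as Julia's, which is the limitation the comment is pointing at.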
Stroustrup defended C++ at https://www.theregister.com/2022/09/20/rust_microsoft_c/.
because every memory-safety issue is a security issue (it may or may not have a CVE).
https://www.csoonline.com/article/3599454/half-of-all-doc...
So C/C++ has a higher post-deployment security-maintenance cost than Rust.
https://cloud.redhat.com/blog/container-image-security-be...
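As a rough illustration of why that cost differs (my own sketch, not from the comment): an out-of-bounds access that would be silent, potentially exploitable memory corruption in C is either a checked Option or a deterministic panic in safe Rust.

    fn main() {
        let buf = vec![0u8; 4];
        let index: usize = 10;

        // In C, reading buf[index] here would walk past the end of the
        // allocation: undefined behaviour, and potentially exploitable.
        // Safe Rust forces the failure to be explicit instead.
        match buf.get(index) {
            Some(byte) => println!("byte at {index}: {byte}"),
            None => println!("index {index} is out of bounds; refusing to read"),
        }

        // Direct indexing (buf[index]) would bounds-check at run time and
        // panic: a crash, possibly a denial of service, but not silent
        // memory corruption.
    }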
http://this-week-in-rust.org is useful for seeing what's new there.
Also, the MIT book could be useful for learning Rust:
http://web.mit.edu/rust-lang_v1.25/arch/amd64_ubuntu1404/...
In the interim the FSF has effectively killed any goodwill toward the GPL with GPLv3, which nobody wants to touch.
<https://en.wikipedia.org/wiki/Java_Model_Railroad_Interface>
<https://en.wikipedia.org/wiki/Jacobsen_v._Katzer>
[2] Of course, it takes a little bit of developer effort to add that artifact to the build system [3], but so does integrating that (or any other) component to begin with.
[3] If you don't have automated builds for deliverables... well, your complaints about licensing/compliance/costs probably aren't worth listening to. IMNSHO.
Really not sure that's the case. The GPLv2 says:
If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code.
In your example, the "designated place" is the system hosting your website, and option 3(a) applies.
GCC rust for gcc kernel builds and rustc for clang kernels.
The definitive answer to your question is Ian Henry's delightful How to Learn Nix. The short answer is “oh my, no!”
