Software and hardware obsolescence in the kernel
Posted Sep 4, 2020 9:45 UTC (Fri) by dvdeug (guest, #10998)
In reply to: Software and hardware obsolescence in the kernel by marcH
Parent article: Software and hardware obsolescence in the kernel
An industry is going to value what sells.
Maybe Minix really is a super-reliable system with crystal-clear code. Does it run on x86-64? No. Does it run on ARM64? No. Does it run on a Raspberry Pi? No. So I have a super-reliable system that I can run in an emulator, on top of a system that valued working over elegance.
For another example, "I just downloaded a file and opened it up" beats "I just downloaded a file and it turns out to be a TIFF with CCITT T.6 bi-level encoding, or a PCX file, or uses a Microsoft MPEG-4 Version 3 video codec", where I have to dig up code that works (who cares if it's elegant) or give up on viewing it.
There are points when stuff should just go. But if you rewrote the Linux filesystem code so it supported only ext4 and iso9660, dropping support for all those obsolete and half-supported filesystems like NTFS and JFS and hpfs and VFAT, I don't see why anyone who isn't a kernel programmer should consider that an improvement, even if it did get 0.7% faster. And even kernel programmers seem the most likely to have a partition for other operating systems, and old drives and disks around with non-ext4 filesystems.
I believe Linus also talks about not breaking code that runs on Linux. It's easy to delete obsolete features if you don't care about code that wasn't updated yesterday.
Posted Sep 4, 2020 17:07 UTC (Fri) by marcH (subscriber, #57642)
Sure, but I was writing about something entirely different: removing duplication or _unused_ code; no customer cares about that. It saves maintenance costs and makes adding new features easier, yet good luck "selling" it to your average manager in your annual review. I don't know who Arnd's manager is, but he seems like a lucky guy ;-)
Same with validation: employees who find bugs before customers do save their companies a lot of money and should be rewarded correspondingly. How often have you seen that happen?
There's generally no tracking of who adds the most bugs either. If they really meant business, companies would have a "guilty" field in their bug trackers. It wouldn't be rocket science thanks to "git blame" (the command name is a good start...), but I guess the "creators" who add new code and new bugs are too venerated for that to ever happen. We even have "rock stars", which says a lot.
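To illustrate how little plumbing that would take, here is a rough sketch only: the file path and line number are hypothetical inputs a bug tracker might supply, and the whole trick is shelling out to "git blame" and reading back the raw record for that line.

    /* Sketch only: ask git which commit (and author) last touched a given
     * line of a file.  Assumes git is installed and that the program runs
     * from inside the repository; input is not sanitized. */
    #include <stdio.h>

    int main(int argc, char **argv)
    {
            char cmd[512];
            char line[1024];
            FILE *p;

            if (argc != 3) {
                    fprintf(stderr, "usage: %s <file> <line>\n", argv[0]);
                    return 1;
            }

            /* --porcelain output starts with the commit hash and includes an
             * "author" line; a real tracker hook would parse those fields. */
            snprintf(cmd, sizeof(cmd), "git blame --porcelain -L %s,%s -- %s",
                     argv[2], argv[2], argv[1]);

            p = popen(cmd, "r");
            if (!p) {
                    perror("popen");
                    return 1;
            }

            while (fgets(line, sizeof(line), p))
                    fputs(line, stdout);

            return pclose(p) == 0 ? 0 : 1;
    }

A real "guilty" field would presumably blame the lines touched by a fix against the revision before that fix, but the plumbing involved is no more exotic than this.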
I heard a rumour that Apple is... different in that respect. It seems to achieve some results with respect to quality.
tl;dr: most companies are still clueless with respect to actual development costs.
Posted Sep 6, 2020 1:43 UTC (Sun) by dvdeug (guest, #10998)
The second line of the article says:
>> Removing code that is no longer useful can be harder, mostly because it can be difficult to know when something is truly no longer needed.
> Same with validation: employees who find bugs before customers save their companies a lot of money and should be rewarded correspondingly. How often have you seen that happening?
https://thedailywtf.com/articles/The-Defect-Black-Market
Besides the subtleties of making it work, there's the question of financial value. There's a point where getting something out today is much better than getting something slightly better out tomorrow, especially when tomorrow is going to bring new hardware you have to run on, and thus new bugs.
> most companies are still clueless with respect to actual development costs.
I think that modern capitalism has made many companies worry about today and not tomorrow; long-term thinking can be actively discouraged. There are also developer preferences: most programmers want to write interesting new code, not spend a week trying to figure out why these 10,000 lines of spaghetti code are returning a value they obviously can't return. There are other developers who will refactor and refactor even when it produces more bugs and less clarity than they started with. Neither encourages companies to go back and clean up.
But I go back to my original comment: people will curse an OS that crashes once a day, but they won't use an OS that doesn't work on their system. Several times in business I've been told how to work around nasty bugs in specialized programs; maybe they got reported upstream, maybe not, but I was told e.g. to twiddle with the business numbers until the report printed and then fix them in pen. Upstream is not clueless about the actual development costs in those cases; the users will curse it and work around the bugs, as long as it does what they need. Windows 95 may have sucked, but it was good enough that users tended to stick around despite the bugs.
Posted Sep 6, 2020 19:00 UTC (Sun) by marcH (subscriber, #57642)
If you don't mind cheating, then it's much more valuable to sell defects _outside_ the company. That works for both testers and developers.
Spy agencies and criminals wouldn't be doing their job if they were not approaching "vulnerable" developers and asking them to add some security "mistakes" in poorly reviewed and tested areas of the code. It's easier to hide and deny with an unsafe language like C, where intentional memory corruption mistakes are barely distinguishable from unintentional ones, and even more discreet in a project that does not separate security bugs from other bugs. The only thing that is difficult to hide is the check/reward.
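To make the "barely distinguishable" point concrete, here is a purely illustrative fragment (hypothetical names and sizes, not taken from any real code base) where a single character in a bounds check separates a correct function from an out-of-bounds write:

    /* Illustrative only: whether the bug below was typed by accident or
     * planted on purpose cannot be told from the code itself. */
    #include <string.h>

    #define NAME_MAX_LEN 32

    struct record {
            char name[NAME_MAX_LEN];
            int  is_admin;            /* happens to sit right after the buffer */
    };

    int set_name(struct record *r, const char *src, size_t len)
    {
            if (len > NAME_MAX_LEN)   /* should be ">=" to leave room for the NUL */
                    return -1;
            memcpy(r->name, src, len);
            r->name[len] = '\0';      /* len == NAME_MAX_LEN writes one byte past
                                       * the end of name[], into whatever follows */
            return 0;
    }

Nothing in the diff, the compiler output, or most reviews tells that ">" apart from the ">=" it should have been; the same honest-looking slip could just as easily have been paid for.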
> There's also developer preferences; most programmers want... Neither encourage companies...
I think management not studying and not questioning what "their programmers want" is a good indicator of "companies being clueless about development costs". "Rock star" programmers are of course the extreme version of that.