Intel's "redundant prefix issue"
We believe this bug causes the frontend to miscalculate the size of the movsb instruction, causing subsequent entries in the ROB [reorder buffer] to be associated with incorrect addresses. When this happens, the CPU enters a confused state that causes the instruction pointer to be miscalculated. The machine can eventually recover from this state, perhaps with incorrect intermediate results, but becoming internally consistent again. However, if we cause multiple SMT or SMP cores to enter the state simultaneously, we can cause enough microarchitectural state corruption to force a machine check.
Intel has released a microcode update to address the issue.
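For readers curious what a "redundant prefix" looks like in practice, here is a minimal sketch based on public write-ups of the bug (tracked as CVE-2023-23583 and nicknamed "Reptar"), not on the advisory text above; the real trigger conditions are more involved, and on an unpatched, affected CPU sequences of this shape can corrupt state or hang the machine. On a patched or unaffected CPU this is just an ordinary 16-byte copy.

    /* C with GCC/Clang inline assembly, x86-64 only.
     * 0xf3 = rep, 0x44 = rex.r (meaningless for movsb, hence
     * "redundant"), 0xa4 = movsb. */
    #include <stdio.h>

    int main(void)
    {
        static char src[16] = "hello, prefixes";
        static char dst[16];
        char *s = src, *d = dst;
        unsigned long n = sizeof(src);

        __asm__ volatile(
            ".byte 0xf3, 0x44, 0xa4"      /* rep; rex.r; movsb */
            : "+S"(s), "+D"(d), "+c"(n)   /* movsb advances rsi/rdi, counts rcx down */
            :
            : "memory");

        printf("%s\n", dst);              /* "hello, prefixes" on a normal copy */
        return 0;
    }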
Posted Nov 15, 2023 16:27 UTC (Wed)
by IanKelling (subscriber, #89418)
[Link] (117 responses)
Posted Nov 15, 2023 17:39 UTC (Wed)
by pizza (subscriber, #46)
[Link] (64 responses)
Even if these patch files were 100% free software, "Set register 1021 bit 3 to 1 when register 8 bit 4 is 0 and register 301 bit 8 is 1" (and, and, and, and....) doesn't really give a lot of insight into what all of these things actually _mean_.
Posted Nov 15, 2023 17:48 UTC (Wed)
by paulj (subscriber, #341)
[Link] (61 responses)
When we got drivers, it turned out none of them were as magic as claimed, developers outside the vendors were well capable of working on them (often making better drivers!), and there were often a lot of things that could be improved and even generalised.
Posted Nov 15, 2023 17:57 UTC (Wed)
by willy (subscriber, #9762)
[Link] (41 responses)
Posted Nov 15, 2023 18:01 UTC (Wed)
by paulj (subscriber, #341)
[Link] (40 responses)
Concurrency in OS kernels is also highly, highly complex - "enterprise" Unix kernels for high-end server hardware should have been kept for Sun, SGI, etc. to develop.
Posted Nov 15, 2023 18:58 UTC (Wed)
by wtarreau (subscriber, #51152)
[Link] (39 responses)
I must say that, having worked on a CPU myself 25+ years ago and seen the cost of traversing each layer, mux, latch, etc., I've always been impressed by the ability CPU vendors have to hot-patch their signal paths like this without killing the overall performance too much, and I wonder how many percent of extra frequency (and/or lower power usage) would be gained by totally removing this feature.
While I'm not particularly interested in decrypting the microcode files, I would appreciate it a lot if the issues fixed were openly explained and documented. I.e., "the bug is caused by this bit being misinterpreted in exactly this condition, with this impact, etc. For the fix we opted for this solution; it has this negative impact in this rare situation; thanks for watching." But we'll likely die before this happens...
Posted Nov 15, 2023 20:19 UTC (Wed)
by mfuzzey (subscriber, #57966)
[Link] (34 responses)
What is the justification for *encrypting* the microcode files though?
I can see the justification for *signing* them so that only official Intel updates can be applied by default (because if you are using an Intel processor you effectively trust Intel, whereas you probably don't trust J Random Microcode Hacker). Though it would be nice to be able to add additional public keys for extra people / entities you *do* trust.
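A minimal sketch of the signing-without-encryption distinction being drawn here, using OpenSSL's EVP API; the function name and flow are illustrative, not Intel's actual loader. The point is that verifying a signature needs only a public key and leaves the payload readable, so authenticity does not require encryption:

    #include <stddef.h>
    #include <openssl/evp.h>

    /* Returns 1 iff `sig` is a valid signature over the *plaintext*
     * update `buf` under `pubkey`. Nothing is decrypted: the payload
     * stays auditable by anyone. Link with -lcrypto. */
    int update_is_authentic(EVP_PKEY *pubkey,
                            const unsigned char *buf, size_t len,
                            const unsigned char *sig, size_t siglen)
    {
        EVP_MD_CTX *ctx = EVP_MD_CTX_new();
        int ok = ctx != NULL
            && EVP_DigestVerifyInit(ctx, NULL, EVP_sha256(), NULL, pubkey) == 1
            && EVP_DigestVerify(ctx, sig, siglen, buf, len) == 1;
        EVP_MD_CTX_free(ctx);
        return ok;
    }

Enrolling additional trusted keys, as suggested above, would then just mean checking the signature against a list of public keys.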
Posted Nov 15, 2023 21:16 UTC (Wed)
by pizza (subscriber, #46)
[Link] (29 responses)
Security, actually.
If microcode updates can patch/work around vulnerabilities, they can also _add_ them.
This is also a good argument for _not_ publicly documenting the inner workings of the processors (or at least the inner workings that microcode can interact with).
Posted Nov 15, 2023 21:21 UTC (Wed)
by dullfire (guest, #111432)
[Link] (9 responses)
Posted Nov 15, 2023 21:54 UTC (Wed)
by jccleaver (guest, #127418)
[Link] (4 responses)
Security through obscurity works until it doesn't. But what a lot of idealists miss is that first part: It works.
Things that are less secure than they could/should be should be fixed, but that doesn't mean you have to make it easier for others to hack.
Posted Nov 15, 2023 22:39 UTC (Wed)
by pizza (subscriber, #46)
[Link] (3 responses)
That should be "security _only_ by obscurity"
> Things that are less secure than they could/should be should be fixed, but that doesn't mean you have to make it easier for others to hack.
Exactly. The more information you provide a would-be attacker, the easier job they'll have attacking you. Providing as little information as possible/necessary is always a good/sane tactic. Make them work for _everything_.
Posted Nov 16, 2023 12:30 UTC (Thu)
by yaap (subscriber, #71398)
[Link] (1 responses)
Common Criteria (CC) certification requires keeping the design confidential, for example. Why? Because it forces an attacker into reverse engineering the particular device design, which is not impossible but definitely costly. And this will deter many attackers: all those who cannot justify the effort. In most cases security is not a binary proposition; it's an economic equation between the cost of attack and the cost of defense. And then obscurity makes perfect sense as a way to increase the cost of attack.
Everybody agrees "security by obscurity" is bad for algorithms and protocols. These are long-lived, and we want as much review as possible to ensure strength and lasting protection.
But when it comes to a specific implementation, some amount of obscurity can be good, and can even be a requirement (see the CC spec).
For CC, I don't remember at which level it kicks in, but for EAL4+ and above design secrecy is a requirement for sure, with requirements on how to control access and enforce this too. So you want to do an EAL4+ system? You must show that your offices are properly secured, for example, and that there are protections against tampering with the design over the whole chain, from design to manufacturing to production. And most of this must be confidential, with proper access control.
So yes, I wish people would stop pushing "security through obscurity = bad" as if it were insightful. First, where it applies (algorithms & protocols) it is no longer insightful; everybody knows it. And then there are plenty of legitimate cases where it does NOT apply.
Posted Nov 16, 2023 16:17 UTC (Thu)
by Wol (subscriber, #4433)
[Link]
When I did some stuff like this, the security I designed (they didn't use it) was pretty rudimentary. But the logic was very simple:
"This stuff has an economic life of about 6 months. If it takes them six months to reverse engineer what we've done, the results will be worthless".
Yes, of course I built in all the security I could, with every *obvious* hardening trick I could think of. But at the end of the day, the stuff I was protecting wasn't worth throwing loads of money at.
Cheers,
Posted Nov 16, 2023 13:57 UTC (Thu)
by brunowolff (guest, #71160)
[Link]
Posted Nov 15, 2023 22:01 UTC (Wed)
by pizza (subscriber, #46)
[Link] (3 responses)
Okay. Please let me know the address of your residence, entry points, what security system protects them (eg locks, alarms, trigger mechanisms), and where you keep your valuables/secrets.
If you object to that, then you agree that security-by-obscurity has its place.
Posted Nov 15, 2023 22:55 UTC (Wed)
by Kamilion (subscriber, #42576)
[Link] (1 responses)
Posted Nov 15, 2023 23:30 UTC (Wed)
by halla (subscriber, #14185)
[Link]
I deleted a lot of things I wrote before I pressed the "preview comment" button, but...
Posted Nov 16, 2023 8:55 UTC (Thu)
by LtWorf (subscriber, #124958)
[Link]
Posted Nov 15, 2023 21:55 UTC (Wed)
by mfuzzey (subscriber, #57966)
[Link] (6 responses)
People who want to experiment with microcode could then enable their own keys on their own system (at their own risk).
Posted Nov 16, 2023 9:43 UTC (Thu)
by Sesse (subscriber, #53779)
[Link] (5 responses)
Posted Nov 16, 2023 13:25 UTC (Thu)
by geert (subscriber, #98403)
[Link] (4 responses)
Posted Nov 17, 2023 0:13 UTC (Fri)
by p2mate (guest, #51563)
[Link] (2 responses)
Posted Nov 17, 2023 8:03 UTC (Fri)
by geert (subscriber, #98403)
[Link]
Posted Nov 17, 2023 8:17 UTC (Fri)
by Sesse (subscriber, #53779)
[Link]
Posted Nov 27, 2023 14:16 UTC (Mon)
by immibis (guest, #105511)
[Link]
Posted Nov 16, 2023 6:24 UTC (Thu)
by donald.buczek (subscriber, #112892)
[Link] (10 responses)
So what? I'm free to add bad kernel code, bad userspace code, enter `rm -rf /`, or use a hammer on my CPU.
Agreed, people could be tricked into installing bad microcode, e.g. by a false promise of a performance gain, and brick their systems. But that only justifies a warning, not preventing people from installing whatever microcode they want.
Posted Nov 16, 2023 19:34 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (9 responses)
Posted Nov 17, 2023 9:17 UTC (Fri)
by james (subscriber, #1325)
[Link] (3 responses)
Posted Nov 17, 2023 16:55 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
Posted Nov 20, 2023 7:34 UTC (Mon)
by eduperez (guest, #11232)
[Link] (1 responses)
Posted Nov 20, 2023 19:11 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Nov 17, 2023 12:05 UTC (Fri)
by paulj (subscriber, #341)
[Link] (4 responses)
Ergo, graphics drivers must be kept closed.
Posted Nov 17, 2023 16:56 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (3 responses)
I don't think so? All the modern display protocols (HDMI, DisplayPort) are digital.
Posted Nov 17, 2023 17:26 UTC (Fri)
by paulj (subscriber, #341)
[Link] (2 responses)
Regardless, the point is that this "But the software could damage the hardware!" argument is not new. It also applied to graphics hardware in the 90s to 00s. Had it been accepted as valid then, we couldn't have had open graphics drivers.
It's just not a good argument.
Posted Nov 18, 2023 2:07 UTC (Sat)
by Cyberax (✭ supporter ✭, #52523)
[Link] (1 responses)
I hate to break this to you, but modern graphics cards _also_ rely on signed firmware. That's why NVidia cards were so useless with open drivers until recently: the only firmware allowed was locked to the lowest power-management setting.
Posted Nov 20, 2023 10:19 UTC (Mon)
by paulj (subscriber, #341)
[Link]
Posted Nov 16, 2023 8:33 UTC (Thu)
by faramir (subscriber, #2327)
[Link]
Err, that's justification for signing the updates. Which can be done without encrypting them. Just saying...
Posted Nov 16, 2023 1:21 UTC (Thu)
by wahern (subscriber, #37304)
[Link]
Realistically I suspect the most pressing motivation is avoiding patent lawsuits. Microcode would reveal patentable aspects of the internal architecture of the CPU that would be more costly to discover and establish by other means. Patent holders are likely already suspicious, if not confidently aware, of potential infringements in Intel and AMD products, but to file an action you need some clear evidence, not assertions or hypotheses. Microcode could provide something solid for plaintiffs to stake their initial claims against.
For the same reason, vendors are often reluctant to release firmware source code, design documents, and other information that would make it easier to build patent and copyright infringement claims against their product implementations. Every industry is rife with actual infringement, and one of the reasons we haven't seen an IP apocalypse in the face of overly broad patent and copyright scope is the high cost of establishing a claim relative to the average expected value of any particular patent or copyright. IOW, you can't throw a patent portfolio at a defendant and see what sticks, confident that enough will stick to make for a net positive RoI; you always have to first identify and provide some solid evidence for specific patents or copyrights.
Obfuscation is illegitimate in the context of security, but highly valuable from a legal perspective.
Posted Nov 16, 2023 8:43 UTC (Thu)
by excors (subscriber, #95769)
[Link] (1 responses)
AMD copied Intel's microcode for their clones of the 287, 386 and 486:
> On September 2, 1993, AMD issued a press release disclosing for the first time that its "clean room" engineers had been given Intel's 386 microcode and that some portion of Intel's 386 microcode was probably incorporated into AMD's 486 microcode. The following day, AMD announced that it had intentionally exposed its engineers to Intel's 386 microcode in order to accelerate the development process and that approximately 25% or 600-700 lines of AMD's Am486 microcode were "substantially similar" to Intel's copyrighted 386 microcode.
Eventually (in 1994) it was decided that the copying was allowed under a 1976 cross-licensing agreement between Intel and AMD (https://www.latimes.com/archives/la-xpm-1994-03-11-fi-327...).
Intel appears to have supported microcode updates ("BIOS Update feature") since the Pentium Pro, originally developed around 1994, in which the microcode was encrypted:
> Each BIOS Update is tailored for a particular stepping of the Pentium Pro processor. The data within the update is encrypted by Intel and is designed such that it is rejected by any stepping of the processor other than its intended recipient. Thus, a given BIOS Update is associated with a particular family, model, and stepping of the processor as returned by the CPUID instruction. The encryption scheme also guards against tampering of the update data and provides a means for determining the authenticity of any given BIOS update.
Given the uncertain legal situation at the time, it seems quite plausible that Intel decided to encrypt the microcode largely to stop AMD copying it again. Maybe modern CPUs are so much more complex that there's no hope of copying anything useful nowadays, but inertia is very powerful. It's probably hard for some well-meaning Intel employee to convince enough people that the remaining arguments for encryption are not compelling and that they could safely stop encrypting now, so instead they've continued.
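As an aside on the mechanism the quoted Pentium Pro documentation describes: the family/model/stepping triple an update is matched against comes from CPUID leaf 1. Here is a hedged sketch of just the identification step, using GCC/Clang's <cpuid.h> helper; the field layout is the architecturally defined one, but the actual update-matching logic is Intel's and not shown.

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned eax, ebx, ecx, edx;
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            return 1;

        unsigned stepping = eax & 0xf;
        unsigned model    = (eax >> 4) & 0xf;
        unsigned family   = (eax >> 8) & 0xf;
        /* Extended fields matter for family 0xf and model 0x6/0xf parts. */
        unsigned ext_model  = (eax >> 16) & 0xf;
        unsigned ext_family = (eax >> 20) & 0xff;

        printf("family %x model %x stepping %x (ext: %x/%x)\n",
               family, model, stepping, ext_family, ext_model);
        return 0;
    }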
Posted Nov 16, 2023 10:02 UTC (Thu)
by james (subscriber, #1325)
[Link]
They replied "Well, what we've already done is put a bunch of Linux boxes in the lab and
they work just fine." We then decided to take the Linux boxes on an “extended field trial” for
the next few months and if it works well, we'll tell Albert that we did it.
And this is the Dilbert moment at the end of all this. The Linux experiment worked
beautifully. We got all the cycles we wanted, and it was as stable as a rock. We got our
validation back on track. At the end of the project, the guys who did that guerilla action, they
got an IAA award, the highest Intel technical award, from Albert for having taking (sic.) the
initiative.
Posted Nov 16, 2023 10:15 UTC (Thu)
by khim (subscriber, #9252)
[Link]
Turn that around and tell the same about, e.g., the internals of compilers. They are even doing the exact same thing: converting something a human wrote into something the hardware understands. The two most important ones are gcc and llvm. How easy is it to change the plugins for these two without looking at the source, just from documentation? How easy is it to fix bugs in these compilers by just altering plugins, without touching the source?
Internally, the people who work on microcode have access to the schematics of the CPU, so their work is similar to working on these plugins with the source code for gcc/llvm available, not to trying to deal with just the plugins. And just like gcc/llvm plugins reveal a lot of information about how the compilers themselves work, microcode reveals a lot of info about hidden properties of AMD/Intel CPUs. Not only can these be picked up by the ARM or Qualcomm guys; more importantly, they may be picked up by Chinese companies, which is something the US tries to avoid.
If you want to see what can be revealed from a microcode dump, and how, I recommend looking at Ken Shirriff's blog. Note that a “bug” very similar to the one we are discussing here was discovered in the 8086 CPU. This shows that, in the case of CPUs, “you can fool some of the people all of the time, and all of the people some of the time, but you can not fool all of the people all of the time” becomes “you can hide information with encrypted microcode from everyone for a few years… enough for the CPU to become obsolete”.
Posted Nov 15, 2023 21:23 UTC (Wed)
by pizza (subscriber, #46)
[Link]
I was once told that the most power hungry part of a certain MCU design was the block of comparators used to provide low-level patch entry points.
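That block of comparators is the patch entry-point mechanism itself. Below is a hedged software rendering of the idea, with all names and sizes invented; public reverse-engineering work describes mechanisms of roughly this shape, but the real ones are undocumented.

    /* Each match register holds a microcode ROM address to intercept;
     * when the sequencer reaches it, execution is redirected into patch
     * RAM. Hardware does the loop below with parallel comparators on
     * every microcode fetch, which is why the block is power-hungry. */
    #define NUM_MATCH_REGS 32

    struct match_reg {
        unsigned rom_addr;    /* microcode ROM address to intercept */
        unsigned patch_addr;  /* where to continue in patch RAM */
        int      enabled;
    };

    static struct match_reg match_regs[NUM_MATCH_REGS];

    unsigned next_uop_addr(unsigned rom_addr)
    {
        for (int i = 0; i < NUM_MATCH_REGS; i++)
            if (match_regs[i].enabled && match_regs[i].rom_addr == rom_addr)
                return match_regs[i].patch_addr;
        return rom_addr;
    }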
Posted Nov 15, 2023 21:32 UTC (Wed)
by pizza (subscriber, #46)
[Link] (1 responses)
Exactly this. The designs (and implementations) are _extremely_ complex. (though these days it's tens of billions of transistors, with more potential states than there are atoms in our universe)
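(A quick back-of-the-envelope check of that scale claim: a design with just 300 bits of internal state already has 2^300 ≈ 2×10^90 possible states, more than the roughly 10^80 atoms in the observable universe, and a CPU with tens of billions of transistors holds millions of state bits.)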
> it becomes virtually impossible to even document it at all, let alone try to explain how the microcode patches work!
It's not that it's impossible to "document", but effectively impossible to simply _enumerate_ all potential states that might matter, much less _test_ that they work as intended.
Now as far as the microcode itself is concerned, I'd expect it's pretty straightforward to document _how_ the mechanism works in general, but for that to be useful you'd have to effectively expose the entire set of internal registers/APIs/etc. of the processor. And Intel is understandably reluctant to publish those trade secrets.
Posted Nov 16, 2023 4:43 UTC (Thu)
by wtarreau (subscriber, #51152)
[Link]
That's what I meant: "impossible to document" implies "without providing the full internal schematics". Of course, somewhere in the 10,000 pages of schematics for the whole thing there certainly are references to bitXXX coming from the patch system. But it's quite different to place bitXXX on the control pin of a mux on a diagram than to document what bitXXX does. I wouldn't be surprised if the doc for the microcode update mechanism only enumerates ranges of bits and the associated blocks and pages in the schematics, and that one has to actually search for them directly in the schematics; otherwise the doc would always be outdated or never finished. And that's not counting the risk of errors in that doc itself, which could have devastating effects.
Posted Nov 16, 2023 17:27 UTC (Thu)
by cry_regarder (subscriber, #50545)
[Link]
Posted Nov 15, 2023 21:55 UTC (Wed)
by pizza (subscriber, #46)
[Link] (15 responses)
It's hardly controversial to state that without meaningful (if not comprehensive) documentation of a given CPU's internal design, a microcode patch file is pretty much gobbledygook. And it's also hardly controversial to state that modern CPU cores are _extremely_ complicated.
> When we got drivers, it turned out none of them were as magic as claimed, developers outside the vendors were well capable of working on them (often making better drivers!), and there were often a lot of things that could be improved and even generalised.
I'm well aware of this argument, having personally reverse-engineered dozens of printer drivers (and firmware) to create 100% Free Software alternatives.
I've also spent most of my career writing device drivers (some Free, some not), and more recently, building hardware models that have to work with existing (often binary-only, timing-critical) software.
Posted Nov 16, 2023 10:53 UTC (Thu)
by paulj (subscriber, #341)
[Link] (14 responses)
That still means there will be a few engineers and researchers in the world at large who would be as good at understanding this stuff, and furthering the state of the art, as any engineer within Intel. The world would be better off to have it open.
Maybe Intel wouldn't be, in the short term. That said, just like in the 90s, the PHBs were afraid of opening up device drivers for similar economic/financial reasons decades ago. Along with the "But others could see our IP and sue us!" arguments made in other parts of these comments. And again, their fears didn't come to pass. Hardware makers have /not/ been economically disadvantaged by going open-source. We've all seen the quality of drivers increase a lot, and one could argue good hardware becomes more valuable when a rising tide has equalised the software.
So, I just don't buy these arguments, when it comes to hardware makers.
Posted Nov 25, 2023 4:54 UTC (Sat)
by calumapplepie (guest, #143655)
[Link] (13 responses)
As described in the other comments, microcode is meaningless without detailed CPU design information. By "detailed", I mean "is literally the blueprint to build the CPU design". There are entire college courses dedicated to understanding simplified implementations of RISC architectures; nobody without at least that experience is going to get anything out of such a blueprint. To get something meaningful, or find a new problem, you'd need to work pretty heavily with those designs; unlike programming, where you can basically scan for risky constructions (array access without bounds check, etc), in CPU design, everything is a risky construction!
There are some engineers who are as good at microcode as anyone within Intel. They mostly work for AMD.
Posted Nov 27, 2023 12:29 UTC (Mon)
by paulj (subscriber, #341)
[Link] (11 responses)
For sure, at least some of said classes had assignments that required students to /develop/ microcode functions to implement some instruction on the (basic) CPU architecture in the P&H CAO / RISC book.
Is this stuff hard, and would it take work to become competent enough at this kind of thing to be able to make some small, but useful changes? No doubt.
Is every CS/EEE graduate (or even student) outside of Intel and AMD too stupid to develop that? Absolutely not.
It's just dumb to argue that only engineers at $MAJOR_CPU_MAKERS have the ability to work on microcode, ergo that's a good reason for keeping it closed. It was used against open-sourcing software, and was invalid. It remains invalid when this argument is used wrt CPUs. (Hello, RISC-V!).
Posted Nov 27, 2023 13:55 UTC (Mon)
by pizza (subscriber, #46)
[Link] (10 responses)
That's not the argument being made.
The *actual* argument is that, without access to more or less the complete design files (schematics, blueprints, verilog, whatever), even a highly-skilled-in-the-arts engineer won't be able to accomplish much of anything because the "microcode" files in question are *tightly coupled to a particular design/implementation*
So by arguing that "microcode should be open" you're actually arguing "the entire CPU design should be open" which is a very different kettle of fish.
Posted Nov 27, 2023 14:50 UTC (Mon)
by paulj (subscriber, #341)
[Link] (9 responses)
Could a good chunk of the microcode consist of deep twiddling of internal units - enabling or disabling functionality therein under certain conditions, say - sure. Perhaps a vendor would be reluctant to document such things. Though it'd still be useful to some user somewhere someday to know this stuff (see the sibling comment from farnz).
A lot of it will also be a bit more mundane, details on how to use internal functional units (ALUs, vector engines, register files, etc.). Stuff like that could be useful to a programmer who wants to add support for a macro-instruction to (say) support some new crypto primitive (ala the AES instructions). Or a macro-instruction to accelerate some common bit manipulation (string handling, some common construct in some new network protocol).
AMD and Intel cannot cater - at their level - to all the possible little optimisations for all the various things out there; they haven't the engineering time, nor the microcode SRAM space. A user in a specific area, focused on that, could have the time - and they can create the space by dropping other macro-instructions they don't need (obsolete ones, e.g., or ones added in later ISA revs that they don't use themselves).
Posted Nov 27, 2023 15:06 UTC (Mon)
by farnz (subscriber, #17727)
[Link] (8 responses)
For typical microcode, the interface to the hardware that the microcode interacts with is a significant chunk of the design. Further, that interface is not set in stone - it can change with every stepping of the CPU, and is completely redone between generations as some things move to fixed logic and some things move to microcode.
Realistically, it's not practical for Intel or AMD to document their microcode format without giving away the entire CPU design - there's just too much coupling between the hardware design and the microcode. It'd be like asking for the interface between the Linux kernel and all possible kernel modules, but without disclosing any Linux source code (so that I can write a non-Free module against this interface without risk of copyright issues). Yes, it's theoretically possible, but it's a lot of work, and it has to be updated for every single kernel release.
Posted Nov 27, 2023 15:28 UTC (Mon)
by paulj (subscriber, #341)
[Link] (7 responses)
Sure, some specifics - bit patterns to access something - could change, but the behaviour of much of the functional block is going to stay the same. And you're telling me Intel doesn't have internal documentation for microcode writers? :)
I'd accept things are far from perfect for what you'd want to release as a supported external interface. But that isn't what's asked for. Docs suck for much of the open-source world too - doesn't stop people hacking on it, or even /creating/ such documentation (and some basic interfacing information can be auto-generated from HDLs).
Posted Nov 27, 2023 15:42 UTC (Mon)
by farnz (subscriber, #17727)
[Link] (2 responses)
But it's those specifics that you care hugely about, and not the broader functional units; great, the LSQ is the same, but now you've got a completely different set of interactions with microcode to last time, because the hardware designers are confident in different bits of it, and have attached microcode hooks to different bits of the LSQ. And no, I don't believe Intel does have internal documentation for microcode writers; IME, microcode is written by people embedded in the CPU design team, and they use the actual design sources as the documentation.
I suspect that Intel doesn't have anything it can release as a microcode "interface"; you're asking them to write this documentation, unlock the microcode (making reverse engineering their CPUs easier for competitors) and for no gain to Intel. This isn't something Intel will do, for obvious reasons - it's all costs to them, for no gains, and it's stuff they have to redo for each generation.
Posted Nov 27, 2023 16:28 UTC (Mon)
by paulj (subscriber, #341)
[Link] (1 responses)
I still think that, even without _any_ documentation, if the microcode were at least user-replaceable, there would be clever people out there - outside of AMD and Intel - who would start to reverse engineer at least some of its functions, and we'd get some useful things out of it. There'd be bored EEE students writing fuzzers, measuring the visible changes on x86 ISA, and mapping out at least some subset of the microcode to hit on something useful.
Posted Nov 27, 2023 16:35 UTC (Mon)
by farnz (subscriber, #17727)
[Link]
I'd certainly agree that people would reverse engineer the microcode and do things with it - I'm just not sure how I'd write that as a positive for Intel, since most of them will be aiming to either find a cool new vulnerability in Intel's designs, or trying to work out how Intel manages to be "better" than their preferred design at doing something.
We only got what we got for GPUs because Intel (and later AMD) needed something to get them design wins over the leading GPUs on the market, and the business case was that, while writing OSS drivers would help competitors reverse-engineer their designs, having OSS drivers would result in them getting sales to customers who see OSS drivers as a positive - and with ATI and nVidia winning (for Intel) and later nVidia dominating over AMD, they valued those design wins over protecting themselves from reverse-engineering. And this was especially clear since people could, and did, reverse engineer nVidia's closed drivers because they had documentation for the CPUs they ran on, even though they didn't have documentation for the GPUs.
Posted Nov 27, 2023 15:52 UTC (Mon)
by pizza (subscriber, #46)
[Link] (3 responses)
These "bit patterns to access something" is what 95% of these "microcode updates" actually manipulate. A typical change is to invert a chicken bit that [dis]ables a bit of hardwired logic, but only when bits #12213 and #21123 are set, and the rest is writing a _patch_ to the microcode baked into ROM to deal with the changed reality.
Even in simple/trivial designs there can be literally dozens of these bits that mean nothing without detailed design information. -- To provide an example from pop-ish culture, What does the switch labeled "SCE to AUX" do, and when would it matter? Why not always run with it turned on?
(And I speak about this from experience, having a couple of SoCs under my professional belt, including devloping patches for ROM'd code. One chicken bit in particular saved us from having to do a complete respin due to a problem with an untestable-until-silicon-gets-back analog component.)
Posted Nov 27, 2023 16:13 UTC (Mon)
by paulj (subscriber, #341)
[Link] (2 responses)
Just to be clear, that's what I had in mind. The bit pattern to change the FROB function from the FOO mode to BAR mode in the FROBBASHER functional unit could be wired up to bit-pattern 12213 in one implementation and 54213 in another. However, this does not present some fundamental obstacle to documenting the FROBBASHER block and how it could be programmed from micro-code - surely that is obvious?
The answer is that the block is documented as having a FROB_MODE setting, which selects either the FOO or BAR mode, which implies <etc, etc.>. Then the internal architecture documentation for the Super-Duper-Micro-Electronics SDME 20001 specifies that it uses the FROBBASHER v2.1 block, with its various programmable operations wired to:
...
And the SDME 20100 similar, with (say)
....
There are, 100% for sure, a number of fairly well re-used functional units in Intel CPUs. And further, a number of such functional units are to implement things that are very common and similar and well-understood in high-level operation across all CPUs. And particularly on x86 CPUs, with their CISC ancestry, with a number of (fairly new!) instructions that are simply there to optimise logical constructs of the software which are implemented in micro-code (VGATHER, VBLEND, etc.).
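To render the hypothetical documentation scheme above as code (every name and bit number is carried over from the comment; none of it comes from a real part): the block is described once, and each implementation documents only its wiring.

    enum frob_mode { FROB_MODE_FOO, FROB_MODE_BAR };

    /* Per-implementation wiring: which microcode control bit drives
     * the FROBBASHER block's FROB_MODE setting. */
    struct frobbasher_wiring {
        unsigned frob_mode_bit;
    };

    static const struct frobbasher_wiring sdme_20001 = { .frob_mode_bit = 12213 };
    static const struct frobbasher_wiring sdme_20100 = { .frob_mode_bit = 54213 };

The counterargument in the thread is that even a wiring table like this is unstable between steppings and generations.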
Posted Nov 27, 2023 16:17 UTC (Mon)
by paulj (subscriber, #341)
[Link]
Posted Nov 27, 2023 16:26 UTC (Mon)
by farnz (subscriber, #17727)
[Link]
That's not how reused functional units work in a CPU, though. You have a FROBBASHER functional unit, whose mode bits are set by the CPU in different ways in different implementations; so, in the SDME 20001, FROBBASHER takes its mode bits directly from microcode, while in the 20100 where the FROBBASHER is also used by non-microcoded operations, its mode bit comes from elsewhere, unless the microcode execution engine is in control in which case it comes from a hidden state register (not from the microcode), and you now have to document that hidden state register and how it's controlled by the system.
Posted Nov 27, 2023 13:50 UTC (Mon)
by farnz (subscriber, #17727)
[Link]
However, Intel and AMD are working under time pressures, and backport pressures; an external engineer is under a different set of pressures. It doesn't matter if my hobby change to the microcode on the 2012 Ivy Bridge CPU (a Xeon E3-1245v2) I have to hand for experiments takes me 6 months and is incompatible with current Intel silicon; however, both of those would be unacceptable to Intel. Similarly, I don't care if I break compatibility with kernels I don't run with my microcode change; Intel, however, cares deeply about things like the Windows kernel. Additionally, I don't care if by making my change, I break multi-socket Xeons, or break i3/i5 CPUs, because my Xeon E3 isn't affected by either of those; Intel does care. I don't care if I break systems without ECC RAM, or which use the acceleration part of the iGPU in my E3-1245v2; Intel does.
Because I don't care about the same things as Intel, I can do a better job for my needs than they can, even if I'm less expert at doing the job than Intel or AMD engineers are. I can take much more time than Intel engineers would, because I'm under a different set of pressures. And when I've demonstrated that my change has value, I can share it with other people who'll fill in the bits that I don't care about - or it remains private to me because it only has value for me.
Posted Nov 16, 2023 11:23 UTC (Thu)
by farnz (subscriber, #17727)
[Link] (2 responses)
The argument differs from the 90s one - the argument is not that the hardware is too sophisticated, needs special management, and only the vendor has the expertise (and OSS hackers do not). Instead, the argument is that the microcode is closely coupled to a single implementation, and that implementation is not only not documented other than as VHDL/Verilog + schematics, but Intel wants to protect the implementation against reverse engineering by competitors (e.g. AMD, Huawei, Apple).
There are then side issues around the physical damage you can do with microcode changes (e.g. burn out a spot on the chip), but the big deal is protecting the hardware against reverse-engineering via microcode.
Posted Nov 16, 2023 13:55 UTC (Thu)
by paulj (subscriber, #341)
[Link] (1 responses)
Posted Nov 16, 2023 14:21 UTC (Thu)
by pizza (subscriber, #46)
[Link]
It's _still_ used to this day. Anecdotally, it actually seems to be more prevalent now.
The pendulum swings back and forth between "smart hardware, dumb driver" and "dumb hardware, smart driver".
One scenario much more common now than in the 90s: if your product is patented, and stuff covered by the patent is implemented in the driver, publishing that driver under a Free Software license can effectively give everyone, including your competitors, free access to your patent. Or, perhaps more importantly, it could expose how your product works to a patent troll.
Posted Nov 15, 2023 17:56 UTC (Wed)
by brunowolff (guest, #71160)
[Link]
Posted Nov 15, 2023 18:10 UTC (Wed)
by IanKelling (subscriber, #89418)
[Link]
That just isn't true, and this paper is proof: https://www.usenix.org/system/files/conference/usenixsecu... "In this section, we demonstrate the effectiveness of our re-
Posted Nov 15, 2023 18:00 UTC (Wed)
by brunowolff (guest, #71160)
[Link]
Posted Nov 15, 2023 19:57 UTC (Wed)
by simon.d (guest, #168021)
[Link] (49 responses)
Posted Nov 15, 2023 21:08 UTC (Wed)
by IanKelling (subscriber, #89418)
[Link] (48 responses)
Posted Nov 16, 2023 10:42 UTC (Thu)
by khim (subscriber, #9252)
[Link] (42 responses)
Can we, please, stop this stupidity? All microcode is software. The fact that the FSF adopted the crazy stance that stuffing gigabytes of Windows into a sealed box which no one may open suddenly turns it into hardware doesn't change the fact that Windows is software. And the fact that old CPUs stored that software on mylar sheets while modern ARM CPUs keep it in ROM doesn't change the fact that microcode is software. It may change the practical aspects of what knowing about said software may bring, but just like one may simulate old mainframes or old calculators using their microcode, one may simulate modern CPUs from their microcode. Note that this is the only reliable way to execute some old calculator programs which exploit some crazy bug-or-feature of said microcode, which blows the “if the manufacturer itself couldn't fix the software then it's hardware” argument to smithereens. That tight line between software and hardware is imaginary; it simply doesn't exist. One may argue whether the PLA in the 6502 is software or not, but a microcode ISA is indistinguishable from a normal (if primitive) ISA.
Posted Nov 16, 2023 13:47 UTC (Thu)
by paulj (subscriber, #341)
[Link]
Posted Nov 16, 2023 14:29 UTC (Thu)
by brunowolff (guest, #71160)
[Link] (37 responses)
Posted Nov 16, 2023 14:54 UTC (Thu)
by khim (subscriber, #9252)
[Link] (32 responses)
The issue is about fanaticism (what the FSF is doing is a textbook example of redoubling your effort when you have forgotten your aim, which is the definition of fanaticism). It's pretty obvious that the FSF's stance has done more harm than good. Whether that's enough to call what it does crazy, or mere fanaticism, is, indeed, up for debate. One which doesn't interest me, though. In practice, the ability to look at the code that you are using gives you more options than the inability to do that, even if the source to that code is not available. The FSF's answer to that is “yes, but if we reject proprietary firmware then we will achieve a nirvana of a world without proprietary software in the future, just wait 1000 years”. Sorry, but I'm fed up enough with that logic: that's how most atrocities are justified.
Posted Nov 16, 2023 15:43 UTC (Thu)
by brunowolff (guest, #71160)
[Link] (31 responses)
Posted Nov 16, 2023 17:50 UTC (Thu)
by farnz (subscriber, #17727)
[Link] (2 responses)
The problem with the FSF's stance is that a device that has been designed to be updated in the field, but where an implementation has (e.g.) cut the write-enable trace to the flash chip qualifies as having "firmware in ROM" from the FSF's point of view.
This leads to the (quite frankly insane) situation where a piece of hardware from a mass-market vendor can be made acceptable to the FSF just by cutting the write-enable trace to the on-board flash and reselling it under a different brand - and yet all of the things you discuss can still be in place, it's just that the "new" vendor is FSF-washing the firmware non-freeness.
Posted Nov 27, 2023 14:33 UTC (Mon)
by immibis (guest, #105511)
[Link] (1 responses)
The FSF is evidently not worried about the absolute level of freedom, but about freedom disparity, where all freedom is reserved to vendors instead of customers.
Posted Nov 27, 2023 15:01 UTC (Mon)
by farnz (subscriber, #17727)
[Link]
That's just the thing - the silicon manufacturer, and the OEM who creates the PCB with flash and the silicon on it, design knowing that you will not cut the write-enable trace. A third-party reseller then buys the device from the OEM, opens the case and cuts the track, invalidating the warranty from the OEM, but getting RYF from the FSF.
The result is that you get all the downsides of non-free firmware that the manufacturer expects to be able to update (because the silicon vendor and the OEM expect to be able to update it, and expect you to be able to install new firmware), with all the downsides of hard-wired firmware (because you can't actually install the update when the manufacturer issues it, even if it's a case of "without this update, you lose your data on a weekly basis").
Posted Nov 16, 2023 19:02 UTC (Thu)
by khim (subscriber, #9252)
[Link] (27 responses)
It may result in a better product, but more likely it would be a later product, too. Thus it only makes sense in cases where different companies are cooperating and thus are using open-source components already. Otherwise you just end up with another dead company.
What's the point of that boycott? What are you hoping to achieve with it? It's perfectly reasonable to boycott someone if you have a better alternative, but if the only alternative is to pay someone who would buy a perfectly usable device and then cripple it to make it “more free”… at that point the question of the sanity of the proponents of such a boycott raises its ugly head.
The problem that the FSF is facing is not that people disagree with its tactics. The problem is not even that they disagree with its strategy. The critical part is that the FSF's stated goal of a world without nonfree software is just not something most people even want. In a world where software was scarce and hardware even more so, this goal sounded reasonable: if you had hardware with a piece of nonfree software then you couldn't use it and couldn't change the software on it. In a world where you cannot go to the toilet without interfacing with not one but several appliances which include some software… this goal becomes pointless: why would you care about the software in your lightbulb if you don't have time to hack on the software in your toilet seat? And if that nirvana, that world without nonfree software, is not something people want to fight for… then how can you tell people that they should suffer for something they don't even want?
Posted Nov 17, 2023 21:36 UTC (Fri)
by brunowolff (guest, #71160)
[Link] (26 responses)
I said it was a reasonable view, not that I was personally boycotting all hardware that was upgradable by the manufacturer. In general boycotts try to change the behavior of boycotted companies and to encourage other companies to provide acceptable alternatives.
> It's perfectly reasonable to boycott someone if you have a better alternative, but if the only alternative is to pay someone who would buy a perfectly usable device and then cripple it to make it “more free”… at that point the question of the sanity of the proponents of such a boycott raises its ugly head.
That doesn't sound very productive, but it isn't the only alternative.
>> But yes, most people don't care, and perhaps the FSF would better move things toward their end goal by modifying their stance on firmware.
My original point in this discussion was about your hyperbole in referring to the FSF's stance. My views don't completely align with theirs.
> The critical part is that the FSF's stated goal of a world without nonfree software is just not something most people even want.
> In a world where software was scarce and hardware even more so, this goal sounded reasonable: if you had hardware with a piece of nonfree software then you couldn't use it and couldn't change the software on it.
> In a world where you cannot go to the toilet without interfacing with not one but several appliances which include some software… this goal becomes pointless: why would you care about the software in your lightbulb if you don't have time to hack on the software in your toilet seat?
My viewpoint is more about owner control than free software. Free software helps provide owner control, but you can mitigate non-free software in many cases.
Let's try another example. If I'm driving a car that is using an autopilot and it detects an unavoidable accident, I don't want it deciding that it's OK to kill me to save the occupants of the other vehicle. We aren't there yet, but that's not too far off. (I wouldn't actually own a car with that kind of system in it anyway, because such cars also do other things I don't accept.)
> And if that nirvana, that world without nonfree software, is not something people want to fight for… then how can you tell people that they should suffer for something they don't even want?
Other people can make their own decisions. I support moving toward more free hardware by buying from organizations that I think are helping move things in that direction and not buying from ones that I think are actively opposing that. I bought a Blackbird from Raptor to support the work they are doing and will very likely buy another system from them next year, when they start offering new machines using hardware from their partnership with Solid Silicon. I don't buy video cards made by nVidia. Of the two, I think buying from good actors is a lot more effective than not buying from bad (relative to my viewpoint) actors.
Posted Nov 18, 2023 1:21 UTC (Sat)
by khim (subscriber, #9252)
[Link] (25 responses)
What exactly are you talking about? The FSF quite explicitly and knowingly promotes devices which make it harder to get rid of proprietary firmware, in its quest for the perfect world where proprietary software doesn't exist. While proclaiming that they are fighting against proprietary software. If that is not craziness then what is craziness?
There are enough people who care to have even affected GPLv3. Look at the whole dance around “User Products” and the ability to not provide source for non-“User Products”, etc. If you provide source for your RF firmware this exposes you to additional charges. And since that's the only case where a “warranty disclaimer” doesn't work, we may expect more places like these in the future.
I rather suspect we won't be allowed to have any other cars, eventually, but most likely that won't happen in our lifetime. So that one is outside the scope of our discussion.
But how can buying from organizations that take something that may, at least in theory, run free software, and then cripple these devices to make them unable to run free software, be helping move things in that direction? Note that people who really try to make things more open, as a rule, refuse to play RYF games! Why do you think that happens?
Now try to find it on the FSF's site. It's not there. Why do you think that is?
Before you can do that, you have to decide which actors are good and which are bad. And the FSF quite explicitly promotes vendors which take normal, non-freedom-respecting hardware (from ASUS, Lenovo and so on) and cripple it to make sure it adheres to the FSF's crazy standards, and it doesn't promote people who are trying to actually build more free-software-friendly devices. IOW: instead of promoting genuine friends, the FSF's policy ends up promoting people who are gaming the system.
This worked, for some time, and gave them the right to proudly claim that they are only using free software and living in that nirvana world which doesn't really exist. But today that illusion is crashing down, since makers of COTS hardware are making it harder and harder to cripple their devices to satisfy the FSF's perverted definitions of what is non-free software and what is hardware (which is not supposed to be free).
Posted Nov 18, 2023 3:05 UTC (Sat)
by brunowolff (guest, #71160)
[Link] (24 responses)
> What exactly are you talking about? The FSF quite explicitly and knowingly promotes devices which make it harder to get rid of proprietary firmware, in its quest for the perfect world where proprietary software doesn't exist.
Exactly what I said: you keep making overhyped statements about the FSF, instead of just arguing about why you don't think their policies are the best way to reach their goals, or about why you don't think people should be trying to achieve those goals.
> While proclaiming that they are fighting against proprietary software.
> If that is not craziness then what is craziness?
Craziness is more along the lines of being completely irrational. Having loopholes in a policy, or having policies you don't agree with, doesn't make them crazy.
>> I support moving toward more free hardware by buying from organizations that I think are helping move things in that direction and not buying from ones that I think are actively opposing that.
Please read what I write. I didn't say I buy hardware according to the FSF's recommendations. I said I buy stuff that I think promotes freedom / owner control.
>> I bought a Blackbird from Raptor to support the work they are doing
Again, I never claimed that I buy hardware according to the FSF's recommendations. I like what Raptor does mostly on the basis of Timothy Pearson's writings.
>> I think buying from good actors is a lot more effective than not buying from bad (relative to my viewpoint) actors.
You keep conflating what I said I do, with what the FSF recommends. I determine which actors I think are good or bad. The FSF has their own opinions and they don't match mine.
> IOW: instead of promoting genuine friends, the FSF's policy ends up promoting people who are gaming the system.
You have previously noted some isolated cases of companies gaming the FSF's ratings. But that doesn't justify referring to the FSF as crazy.
Posted Nov 18, 2023 14:02 UTC (Sat)
by pizza (subscriber, #46)
[Link] (20 responses)
Their attitude wrt embedded firmware _is_ irrational, in that it directly impedes progress towards other FSF goals, and is predicated on preconditions that were nonsense at the time, and are even more so today.
(For example, this greatly impedes Freedom 1, because it makes it much, much more difficult to study how the device works, and can make it effectively impossible to make changes.)
> You have previously noted some isolated cases of companies gaming the FSF's ratings. But that doesn't justify referring to the FSF as crazy.
It's not isolated cases; it's the logical outcome of the RYF metrics. Vendors take off-the-shelf components that use non-free Firmware (ie nearly everything produced in the last decade) and proceed to intentionally cripple it by removing firmware-applied runtime updates (eg CPU microcode updates), physically cut write enables on embedded flash parts, and/or add dedicated processors (themselves running non-free Firmware) whose sole job it is to upload the necessary, now-non-updateable firmware into the hardware from non-user-accessible storage without the primary, user-accessible CPU being involved in some way.
Again, this isn't "gaming" the system; the FSF explicitly endorses products that follow this approach, _because_ they follow this approach.
In none of these scenarios is the firmware magically freed. In none of these scenarios does the non-free firmware go away. It just gets wrapped in a brown paper bag that hides it from the primary CPU (and user), which somehow makes it "okay".
All three approaches increase the product complexity (and thus cost) and leave the user with an objectively _worse_ product, usually with known (but already-fixed-upstream) issues at the time of release! Saying that the manufacturer "should have tested better" is all fine and good, but is even more divorced from reality -- Every release of decades-old FSF-produced software still has bugs, sometimes severe ones, and that doesn't even begin to touch on the likes of Spectre and other information-leaking bugs in the processors themselves.
Posted Nov 18, 2023 16:28 UTC (Sat)
by brunowolff (guest, #71160)
[Link] (19 responses)
There are plausible arguments that their policies might moderate future behavior. The policies have a rational basis; they aren't irrational. You can certainly argue they haven't been effective and have even had some undesired effects. Eventually one might make a case that, by not adapting tactics, they have moved into the realm of irrationality. However, I don't think the FSF is in a good position to influence changes in how firmware is handled. They don't have enough influence to make a big enough dent in the big players' profits to change their behavior. And they aren't selective enough, and don't provide enough resources to the companies that are actually trying to change how things are done, to achieve much with new players. It's not something you can just code your way out of, like at the application level. They don't have obvious good alternative strategies for trying to effect significant changes here with the resources they have.
I think if you want to effect change in this area, you need to build better hardware or support people who do that. There are several groups working on that. But without economies of scale, it isn't going to be cheap. And it isn't likely to get to scale any time soon. It's not like at the higher level, where writing some free software application can rapidly improve things for a lot of people. Also, even if a significant player develops, there are going to be incentives to switch back to locking things down to make more money. In publicly owned companies, it can be really hard not to do what makes the most short-term profit. So the long-term outlook seems poor. But, I'm willing to spend some money encouraging companies that I think are moving things in my preferred direction.
Posted Nov 18, 2023 17:29 UTC (Sat)
by khim (subscriber, #9252)
[Link] (18 responses)
What is irrational is not the policies, but the FSF. When something doesn't work you change it — that's the basis of rational behavior. If your policy doesn't work, and instead of promoting devices which are modern, easy to tinker with and easy to modify it promotes something that is obsolete and, essentially, second-hand from the market of normal, locked-down devices, then you are doing something wrong.
They are not in a good position to influence anything, at this point. Well… not quite (they have some captive audience in the form of emacs users, but these are becoming more and more rare; almost everything else is replaceable, at this point), but they are moving there. Just ask yourself: why is glibc, released in year 2023, still licensed under LGPLv2, not LGPLv3? The answer is simple: the FSF doesn't dare to change the license, because this would just cause a fork. And GCC wasn't forked solely because, when GCC 4.3 switched the license, the plan for the people who disagreed with the FSF's stance was already to just dump GCC entirely and go with clang.
That's really sad because, at one point, the FSF had the ear of many developers. But not anymore. Look, e.g., at how lots of folks express their ire that someone else is rewriting coreutils without using the GPL… but has that produced anything constructive?
Yes. And that means that you recommend buying GPUs from AMD and Intel, and not from nVidia. That hurts even trillion-dollar companies enough that they change things to ensure their blobs are less invasive. That's a win. Encouraging people to cripple their distro to make sure it adheres to some strange definition of freedom? That just hurts everyone. If Debian had done 20 years ago what it finally did last year, then chances are high that people would have stayed with Debian instead of creating a bazillion forks like Ubuntu or LinuxMint.
I often wonder how and why the FSF went from the XX century's “if you want to accomplish something in the world, idealism is not enough—you need to choose a method that works to achieve the goal; in other words, you need to be ‘pragmatic’” to its current joke of a stance where it does things that don't work and ignores reality.
I guess the seed of insanity was always there; e.g., the FSF always insisted that FSF-endorsed free software shouldn't even mention anything proprietary, but that was more of a “harmless quirk”, similar to how Intel, to this very day, refuses to accept the fact that its CPUs implement the AMD-designed x86-64 technology and instead insists that it's the Intel-invented ia-32e technology. Today that seed, somehow, expanded enough that the whole thing turned into an increasingly irrelevant cult which lives in an imaginary world.
Posted Nov 18, 2023 18:10 UTC (Sat)
by brunowolff (guest, #71160)
[Link] (17 responses)
> What is irrational is not the policies, but the FSF. When something doesn't work you change it — that's the basis of rational behavior.
That assumes that there is something obviously better to change to. I don't think that is the case here, certainly not to the level of referring to them as irrational. I don't think anything they do is likely to have much effect on firmware at this point.
>> But, I'm willing to spend some money encouraging companies that I think are moving things in my preferred direction.
While I don't buy nVidia video cards, I don't think that will have an effect on nVidia. They have a lot of lock-in and they aren't going to want to give that up. I'm not really a fan of either AMD or Intel, though I think AMD's behavior is constrained by what they need to do to stay in business, and employees have made some upfront statements about that.
I really think who you buy hardware from is a lot more important than who you don't buy it from. Changing things at the hardware level requires money (more so than software, where time can help), and that means that to effect change you need to be spending money or giving it to others to spend.
> I often wonder how and why the FSF went from the XX century's “if you want to accomplish something in the world, idealism is not enough—you need to choose a method that works to achieve the goal; in other words, you need to be ‘pragmatic’” to its current joke of a stance where it does things that don't work and ignores reality.
That is very case dependent. In a lot of situations, if everyone is pragmatic, nothing changes. Lots of things have changed since the FSF started. Hardware can now use crypto to control what software it runs. Hardware documentation at the level needed for writing firmware often isn't publicly available. There is a bunch of as-a-service stuff. Their old methods don't work well for starting changes in the current environment. And they never had enough leverage to effect change using boycotts.
> I guess the seed of insanity was always there; e.g., the FSF always insisted that FSF-endorsed free software shouldn't even mention anything proprietary, but that was more of a “harmless quirk”, similar to how Intel, to this very day, refuses to accept the fact that its CPUs implement the AMD-designed x86-64 technology and instead insists that it's the Intel-invented ia-32e technology. Today that seed, somehow, expanded enough that the whole thing turned into an increasingly irrelevant cult which lives in an imaginary world.
I think fanatic or uncompromising would be a better description of this than insanity.
Posted Nov 18, 2023 18:45 UTC (Sat)
by pizza (subscriber, #46)
[Link]
There are obviously better approaches! However, the first step to figuring out which is better is admitting/acknowledging that your current approach is not working (if not outright counterproductive). Instead, they are doubling/tripling-down on an approach that is failing (by *any* metric), in part because it is predicated on a make-believe [1] worldview that is quite simply, demonstrably-many-times-over, _wrong_.
> I think fanatic or uncompromising would be a better description of this than insanity.
Insanity is one of the underpinnings of fanaticism.
[1] As long as "hardware" implementation is hidden from the user's view, they can pretend it's hard-wired immutable logic and thus ethically acceptable. Key word: pretend.
Posted Nov 18, 2023 19:54 UTC (Sat)
by khim (subscriber, #9252)
[Link] (14 responses)
Boycott is not the only way to change things, and it's not the most productive way, either. You said so yourself. And now this: seriously? Do you know how GCC became the de-facto standard first for C and then for C++? That was Cygnus's work, not the FSF's. And the X Window System won over NeWS and other such proprietary systems with the help of Gtk and Qt.
And now we have that issue with secret microcode being needed to run free software (a lot of free software wouldn't be usable on CPUs from the before-upgradeable-microcode era; that's the 80386, or a 486 at most). What do free software zealots like the FSF do about that? Ah, right: complain that someone else has to give them everything. Now, that is, of course, a real problem and it needs fixing. And it's in the process of being fixed — using precisely the same old methods that Cygnus was using, that the XFree86 hackers were using, that the Qt developers were using.
On the contrary: if you keep repeating things that don't work, then nothing changes. The pragmatic approach works, like it always did. You just need to try enough different things to succeed in the end.
The more I think about it, the more it looks as if the collapse of the FSF's influence is related to the “free software”/“open source” divorce that happened at the end of the 20th century. The FSF in the 20th century had a lot of influence that it didn't earn, simply because lots of people were doing free software while the FSF kind of usurped the term; all the influence that Cygnus, the Linux companies, the TeX community and others had… the FSF had the ability to piggy-back on that and look big and powerful. And then… the FSF pushed for the divorce. It happened. And after that point the FSF became what it always was: a small, not too significant by itself, organization without the means to achieve what it pushed for.
Ironic, isn't it? ESR wasn't trying to kill the FSF; he (and others who joined him) just wanted to double down on what actually works and hide the things that don't — but RMS, perceptive as he is, realized that this would be the end of his personal reign. And thus, instead of gradually losing control over the FSF, he bet on its ability to bring control back… and lost everything. Not the first and not the last time that happened; IBM lost control over the IBM PC in a very similar way. Only IBM realized its mistake, and after the failure of the PS/2 it went on to create the PS/1 and the ThinkPad using the things it had tried to squeeze out, while the FSF, after the GPLv3 fiasco, just continues to move deeper and deeper into irrelevancy.
If they couldn't do anything, then they should just accept that and do the things that they could do. Like Debian finally did, after decades of making the lives of newbies miserable. But Debian did a kinda-not-insane thing from the beginning: they were always making and publishing “unofficial” images that were made sanely, they just tried to hide them a bit.
Probably, but note this: a more informal use of the term insanity is to denote something or someone considered highly unique, passionate or extreme, including in a positive sense. I would argue that if you use the word “insane” according to that definition, then “fanatic” and “uncompromising” just become two different kinds of insanity. I fully agree that FSF members are not insane in the narrow, “a danger to themselves or to other people”, sense. But the FSF's behavior is hard to call sane, sorry. Whether it's “insane in a positive sense” or not remains an exercise for the reader.
Posted Nov 20, 2023 1:12 UTC (Mon)
by dvdeug (guest, #10998)
[Link] (1 responses)
Except life isn't that simple. Very rarely do things just work or not work; usually all available systems have known problems and unknown problems. People and companies who change course at the slightest bit of trouble often fail to achieve anything. The question of whether you're doubling down on a system that could work or throwing away money and effort on a system that's already a failure is hard to answer in the moment, and it's often hard to tell, even in hindsight, whether a different action would have worked better, especially when fighting entrenched, powerful systems.
> And X Window System won over NeWS and other such proprietary systems with help of Gtk and Qt.
Just your links alone disprove that. The NeWS article says that NeWS was dead "just as Virtuoso became ready to ship" and "after Adobe acquired FrameMaker, Sun stopped supporting NeWS", which puts its end in 1995-1996 and its death throes before that; but Qt's first release was in October 1995, too late to have much impact, and GTK's first release was in 1998, clearly after NeWS's death. (The "Oracle Solaris" article says that Sun dropped NeWS in Solaris 2.3, which other sources date to August 1993, putting it clearly before Qt was released.)
Posted Nov 20, 2023 8:01 UTC (Mon)
by joib (subscriber, #8541)
[Link]
Posted Nov 20, 2023 13:11 UTC (Mon)
by paulj (subscriber, #341)
[Link] (11 responses)
As for the discussion, some people seem quite fanatical about taking offense at the FSF's RYF policy. (Does the high-level intent make sense? Yes. Does the policy have holes? Surely. Could it be improved? Hard to say... possibly, or possibly not... it depends.)
Posted Nov 20, 2023 14:13 UTC (Mon)
by pizza (subscriber, #46)
[Link] (10 responses)
If having objective, demonstrable facts to back up my criticism (and suggestions for improvements) of a well-intended policy makes me a fanatic, so be it.
1) Yes, the high-level intent not only makes sense but is IMO a very good thing!
2) The policy is predicated on a naive-to-the-point-of-delusional definition of "hardware", and deliberately promotes/encourages solutions that, on their face, run counter to the FSF's goal of user empowerment through software freedom.
3) It can absolutely be improved, but only if the FSF is willing.
The current RYF program demonstrably results in objectively _worse_ outcomes by every non-holier-than-thou metric. An earlier thread on this subject included multiple suggestions on how to improve the policy, grounding it in modern hardware reality while still remaining true to the spirit of the Four Freedoms. But the FSF refuses to even _listen_, further costing it respect and many would-be allies.
BTW, I say this as someone who's a dues-paying member of the FSF and very much on the idealistic "Free Software" side of the F/OSS divide. I've also written a _lot_ of Libre device drivers to enable use of said hardware on Libre platforms, allowing entire industry verticals to ditch proprietary software entirely. This should be a GoodThing(tm), yet literally _none_ of the hardware I've worked on for the past two decades could _ever_ qualify under the current RYF program, as it all contains some sort of embedded firmware that can be updated at runtime.
Posted Nov 21, 2023 10:13 UTC (Tue)
by paulj (subscriber, #341)
[Link] (9 responses)
On 3, the issue is that it is not obvious now. The goal is great - everyone seems agreed - the problem is implementing a policy. I suspect every policy the FSF could have for RYF will have major holes, and hence every policy will be open to angry criticism on forums - some simply as a vehicle for general FSF hate. Farnz's ideas on tiered certification might be a good improvement. Who knows.
The root issue here is there isn't a perfect policy.
Even if tiers would be better now, that doesn't mean the original policy was wrong - just that it turned out there were more subtleties to it, and it needed gradations - but shooting for the moon with the original policy may still have been the right policy at that time.
Posted Nov 21, 2023 10:45 UTC (Tue)
by khim (subscriber, #9252)
[Link] (2 responses)
No. RYF certification arrived in 2012. By that time it was much too late: the GPLv3 fiasco was well underway, so the FSF knew its influence was pretty limited. And, worse, hardware manufacturers had already changed their development model to incorporate upgradeable firmware (that happened around the turn of the century; I remember how I bought the same Socket 370 motherboard in different revisions, and while the original one had “flash write enable” pins, on v2.0 they were no longer there — meaning MSI tech support complained enough that they were removed). Yes, it was an interesting attempt to turn the tide (same as GPLv3), but at some point a sane person would realize that it had failed and would change the stance. Right now the FSF is fighting its Paraguayan War, and while it loses not lives but “only” mindshare, the consequences are dire: from someone who was respected and had lots of allies (even if not all allies agreed on everything), they have turned into someone who is not ignored entirely in the mainstream IT field only because they hold the copyright to some important projects, and you can't fix a bug in those projects without assigning copyright to the FSF. And even that chokehold is slowly eroding.
Posted Nov 21, 2023 14:19 UTC (Tue)
by kleptog (subscriber, #1183)
[Link] (1 responses)
There are other forces at work here. Since devices sold to the public must come with a two-year warranty, if any serious bugs are found (as they will be) the manufacturer has to fix them, and given the choice between pushing a software update and replacing the device, the former is much cheaper. Additionally, pushing an update reduces the amount of electronic waste, for which there are also regulations.
Changes afoot that make it harder to disclaim warranty for security issues in software are only going to accelerate this trend.
Posted Nov 21, 2023 14:37 UTC (Tue)
by khim (subscriber, #9252)
[Link]
These are minor details. Of course hardware manufacturers have changed their development model to save money! That's what businesses do. And if you understand that, then the next question you should ask is: how may they earn some money from getting that RYF mark?
Taking existing hardware with proprietary firmware, crippling it, and selling it at higher prices is profitable. Developing hardware and open firmware just to get the RYF mark is not. It's as simple as that. Now, you may play with these rules and try to make it more profitable for hardware manufacturers to use your free software instead of writing their own (that's how Linux became indispensable). And then, when certain classes of devices become available with free firmware, you may demand that these come with free firmware. Or you may just ignore Goodhart's law and get the opposite of what you are trying to achieve.
I can understand why the FSF doesn't want to change the rules of the GPL: that's hard and costly to do, and it often backfires. But “certification marks”? It's completely normal for certification mark rules to change over time; that's more-or-less inevitable because of Goodhart's law! Yet the FSF acts as if it can design something that works once and for all. Worse: it believes that ignoring economics will, somehow, help it achieve its goals.
Posted Nov 21, 2023 10:49 UTC (Tue)
by farnz (subscriber, #17727)
[Link] (3 responses)
According to the FSF in 2014, my ideas are not an improvement - the FSF's policy is so great that by 2020, all firmware will be Free software.
I leave it to you to look around and determine whether that's actually happened - and part of the problem here is that, having missed their original expectations for the policy, the FSF is not looking to see whether a different policy might work better. That a policy has failed to meet expectations is one thing; that the FSF is not changing the policy is the problem. This is doubly problematic because the FSF has guiding principles (the four essential freedoms), and the policy enshrines cases where those principles are overridden by pragmatism to get a result that has not materialized; the FSF should, if it were still the organisation I remember from the 1990s, either change the policy and accept a different pragmatic deficiency, or fall back to the principles and say that RYF will no longer certify devices with non-Free firmware, accepting that this reduces RYF hardware further.
Posted Nov 21, 2023 13:37 UTC (Tue)
by khim (subscriber, #9252)
[Link] (2 responses)
They couldn't do that. They like to claim that FSF members, themselves, only use 100% RYF devices. But there are no storage devices without non-free software on the market today: all HDDs, all SSDs and all SD cards include non-free firmware. I think the last devices not to include firmware were old RLL HDDs manufactured more than 20 years ago, and I'm pretty sure they are not using those. Of course most such devices (if not all!) can upgrade that firmware “in the field”. And it's non-free. Which is in direct contradiction to the stated rules, but can be easily fixed: just cripple the device, make sure the firmware couldn't be updated, and voilà: you have an RYF device.
Instead of accepting that reality and trying to see what can be done in this new reality, they keep RYF in the state which allows them to perpetuate their lie of only using free software. If you view the FSF as a religious cult which values the ability to claim that it practices what it preaches, then this stance is pragmatic. But I, for one, am not ready to accept such a thing as my religion. And they are not even formally framing it as such; they tell us that the whole St IGNUcius stunt is just a joke. I probably would have respected them more if they actually tried to portray what they are doing as a religion, complete with temples, imaginary gods and maybe some ritual sacrifices. But they are acting as a cult while simultaneously preaching how they are all about practical advantages.
Posted Nov 21, 2023 14:53 UTC (Tue)
by brunowolff (guest, #71160)
[Link] (1 responses)
While from a free software viewpoint this isn't going to change any time soon, from an owner-control viewpoint you can mitigate against hostile storage devices. While they can execute denial-of-service attacks against your system, you can keep them from reading the data being stored on them by using encryption. Some traffic-analysis attacks will still be possible, but those should be a lot less damaging. If you use them to provide data for early boot, before you can use encrypted data, you can use digital signatures to make sure your system is being provided good data. You'll need to be extra careful about replay attacks of old versions of the boot data. They could also try to exploit driver bugs to compromise your system. Mostly, you don't want to trust encryption built into the drive or assume you can erase everything written to it. If you are being individually targeted by hostile hard drives, you are probably dealing with an adversary you're going to lose to.
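As a sketch of the signature-plus-anti-replay part, here's a minimal Python illustration (mine, not any real boot stack's; it uses an HMAC as a stand-in for a real signature scheme and an invented version-counter format) of rejecting both tampered and replayed boot data:

# Sketch of the mitigations above: the drive can't read what it stores
# (encrypt it off-drive), and it can't silently roll back early-boot
# data (sign the data together with a version number, refuse to go
# backwards). HMAC stands in for a real signature scheme here.

import hmac, hashlib, json

HOST_KEY = b"key kept off the drive, e.g. in a TPM"  # illustrative only

def seal_boot_data(version: int, payload: bytes) -> bytes:
    blob = json.dumps({"version": version, "payload": payload.hex()}).encode()
    tag = hmac.new(HOST_KEY, blob, hashlib.sha256).hexdigest()
    return json.dumps({"blob": blob.decode(), "tag": tag}).encode()

def unseal_boot_data(sealed: bytes, last_seen_version: int) -> bytes:
    outer = json.loads(sealed)
    blob = outer["blob"].encode()
    # Reject anything the drive tampered with...
    expected = hmac.new(HOST_KEY, blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, outer["tag"]):
        raise ValueError("bad signature: drive returned tampered data")
    inner = json.loads(blob)
    # ...and anything it replayed from an older, once-valid image.
    if inner["version"] < last_seen_version:
        raise ValueError("replay: drive returned stale boot data")
    return bytes.fromhex(inner["payload"])

sealed = seal_boot_data(7, b"kernel+initrd hashes")
print(unseal_boot_data(sealed, last_seen_version=7))

The version counter is the part people forget: the signature alone proves the data was once valid, not that it is current.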
Posted Nov 21, 2023 15:14 UTC (Tue)
by khim (subscriber, #9252)
[Link]
Sure. And that work would be much more useful than the pretence that the work of someone who takes devices with potentially-hostile firmware and cuts the wire to make sure further updates are no longer possible (which magically turns them into “Respects Your Freedom” devices) is, somehow, beneficial to your privacy.
Posted Nov 21, 2023 14:31 UTC (Tue)
by pizza (subscriber, #46)
[Link] (1 responses)
No, it was wrong at the time and they were told so at the time.
By the time this landed in 2014, a majority of hardware (and _all_ newish designs) required some sort of embedded firmware, and making it runtime-updateable was a highly sound business and engineering practice. Given the still-increasing complexity of hardware designs, there is no turning back that clock.
Also by the time this landed in 2014, it was clear to everyone (but the FSF) that the FSF's industry influence was way, way less than they believed -- indeed, seven years after GPLv3 landed, instead of enabling their devices to be user-modifiable, device makers had stuck with old GPLv2 versions long enough to write their own permissively-licensed replacements. [1]
And then there's the logically absurd stance that claims that devices with immutable proprietary firmware are somehow "freer" than ones that can be updated, even though the latter at least have the potential for libre firmware. This make-believe farce completely alienated the folks that have been doing the actual "hardware liberation" [2] work all along, resulting in the entire RYF initiative devolving into nothing more than a virtue-signaling exercise.
[1] Device makers that ever cared about licensing, that is. Most of the stuff churned out by Chinese OEMs has _never_ complied with the GPL.
[2] eg reverse engineering hardware and proprietary software/firmware, writing Libre device drivers, and other such tasks that enable folks to run free software on hardware they already own (ie GNU's entire raison d'etre!)
Posted Nov 21, 2023 14:43 UTC (Tue)
by pizza (subscriber, #46)
[Link]
Just to be clear, I don't fault them for trying something that failed; what I take issue with is that they are _still_ doubling down on a policy that has accomplished nothing beyond harming their cause.
(well, aside from the virtue-signaling aspects. Which is even more damning if that is all they apparently care about...)
Posted Nov 20, 2023 11:22 UTC (Mon)
by farnz (subscriber, #17727)
[Link]
I disagree with you; I put the following in an e-mail to the FSF in 2014 as an alternative approach, and was told that RMS believes that the FSF's influence on hardware vendors is sufficiently strong that the current approach will work better, because hardware vendors will fight to get an FSF RYF certification under the current rules:
Rather than a ban on replaceable firmware, the FSF should endorse a set of requirements on firmware downloading that make replacement of the firmware with a Free version possible, given enough time and interest; I would suggest doing this by having at least 2 tiers of RYF, derived from the FSF's "four essential freedoms". The higher tier requires firmware that is fully Free per the four essential freedoms, including source code; the lower tier is for firmware that has no source, but where you are otherwise permitted to exercise the four essential freedoms; notably, this means that I should be entitled to run the vendor-supplied firmware inside an emulator or a reverse-engineering tool (freedom 0), study how it works and change it, including running my modified version on the hardware I've purchased (freedom 1, with source code precondition removed), redistribute copies without further obligation (freedom 2), and share any modifications I make (freedom 3, with source code precondition removed).
If you wanted to permit fully proprietary firmware (as the current policy does), there's also room for an even lower tier, where the vendor blocks freedom 3 completely; this leaves you with me being entitled to study the vendor supplied firmware, change it and run the modified version on my hardware, and redistribute copies of the vendor firmware as-is. This is a baby step towards the goal of a fully Free system, since it now explicitly allows a vendor to take away essential freedoms, but still requires them to permit me to replace the firmware myself, and to do the work needed to understand the hardware well enough to write a completely Free replacement firmware not derived from their original code.
There's been no shift in the last 9 years from vendors, nor from the FSF; there comes a point where the FSF either needs to accept that a policy doesn't work and choose a new one, or decide to double down on "the rest of the world is wrong". It appears to be doing the latter.
Posted Nov 18, 2023 14:40 UTC (Sat)
by farnz (subscriber, #17727)
[Link] (2 responses)
Again, the deep and fundamental problem with the FSF's stance is easy to illustrate. Let's say I have two devices to choose between:
- Device 1: the silicon vendor gives away documentation, and its proprietary firmware is loaded by the host at runtime with nothing preventing you from replacing it; the only reason it runs non-free firmware is that no-one has written a Free replacement yet.
- Device 2: functionally the same hardware, but with the proprietary firmware baked into write-protected flash, and DRM (signature checks) ensuring that only the vendor's firmware will ever run.
Per FSF policy, device 2 is "more respecting of your freedom" than device 1, and the FSF should give device 2 their full support in gaining market dominance over device 1. Yet it's obvious when you read that description that device 1 is closer to being fully Free than device 2, since the only thing stopping device 1 from having a fully Free firmware is that no-one has written one yet, while device 2 uses DRM techniques to stop you replacing its firmware at all, even if you do rework the device to enable writing to flash again.
And that's what makes the FSF's policy bad - given a vendor who could probably be convinced that Free firmware is the best firmware (the silicon vendor for device 1, who's giving away documentation etc), and a vendor who's very clearly in the lockdown camp (the one that uses DRM to force you to get firmware from them, and only them), the FSF says to privilege the lockdown vendor over the vendor who doesn't lock things down, but who hasn't realised that they're one step away from Free firmware that's as cheap as non-free firmware.
Posted Nov 18, 2023 15:52 UTC (Sat)
by brunowolff (guest, #71160)
[Link] (1 responses)
Posted Nov 18, 2023 16:54 UTC (Sat)
by farnz (subscriber, #17727)
[Link]
Actually, I am claiming they're crazy; this is the very thing their policy is meant to prevent (a preference for proprietary software over Free software), and they will not engage with anyone who argues against it - they refused to when I e-mailed them in 2014, instead preferring to make snide comments about how unproductive it is to point out flaws in a policy that RMS himself approved.
Posted Nov 16, 2023 15:08 UTC (Thu)
by pizza (subscriber, #46)
[Link] (3 responses)
Mask ROM (or OTP) hasn't been used in anything but the simplest devices for the better part of two decades. These days it's onboard (or directly embedded) flash that can be updated after the fact. This is a _good_ thing [1], especially for devices that need to be network-connected in some form (ie pretty much everything).
In practice, RYF ends up promoting absurd positions: they promote devices that can _never_ see Free Software firmware over devices that at least have that possibility. They promote use of devices with known vulnerabilities [2] that can threaten the *user*. They promote use of a locked-down [3] cellular modem running a complete embedded Linux system over a wifi adapter that relies on firmware uploaded from the host. And so forth.
So, no, the FSF's position is completely divorced from reality; it's along the lines of thought that a brown paper bag somehow makes a bottle of alcohol "acceptable" in public because not being able to see inside it somehow makes it okay, even though everyone knows what's really in there. It's puritanical, performative nonsense that needs to be called out every time.
[1] Increasingly, this is legally mandated, as a manufacturer can be on the hook for flaws even after the warranty period has expired.
[2] a CPU with microcode flaws that can leak secrets simply by visiting a web page is preferable to one that has its flaws fixed via a runtime-applied update.
[3] Locked down to the user, not the manufacturer. I guarantee every one of those devices is field-updatable, possibly OTA without user intervention. But since there's no user-facing mechanism to update the device, it somehow respects your freedom?
Posted Nov 16, 2023 17:04 UTC (Thu)
by eduperez (guest, #11232)
[Link] (2 responses)
Posted Nov 16, 2023 18:34 UTC (Thu)
by ballombe (subscriber, #9523)
[Link] (1 responses)
You cannot disclaim warranty on hardware, but you can disclaim warranty on software.
Posted Nov 16, 2023 19:16 UTC (Thu)
by khim (subscriber, #9252)
[Link]
You can disclaim warranty on hardware; you just can't sell such hardware in normal shops to the general public. IMNSHO the problem is on the software side of the fence, and it should be fixed by treating software similarly.
Posted Nov 16, 2023 14:33 UTC (Thu)
by jem (subscriber, #24231)
[Link] (2 responses)
Posted Nov 16, 2023 15:04 UTC (Thu)
by khim (subscriber, #9252)
[Link] (1 responses)
How do you know that? AArch32 and AArch64 are radically different ISAs, and modern ARM CPUs support both. This strongly suggests that there is some internal core with two switchable translation components. We know that the NEC V20 did that with two switchable microcode blocks. Without seeing how that block is organized in ARM CPUs, we can't know whether what it uses may be classified as microcode or not. At least the more complex chips (like Apple's M1/M2/M3) are likely to include microcode which is used for at least some instructions. Modern AMD and Intel CPUs don't use microcode for all instructions either; only the more complicated ones are microcoded.
Posted Nov 16, 2023 16:54 UTC (Thu)
by malmedal (subscriber, #56172)
[Link]
https://www.righto.com/2016/02/reverse-engineering-arm1-p...
Posted Nov 16, 2023 13:55 UTC (Thu)
by cesarb (subscriber, #6266)
[Link] (4 responses)
One might question whether architectures like ARM even have microcode. Unlike the more CISC x86, where a single instruction can have hundreds or even thousands of micro-operations, on ARM (and other more RISC architectures) each instruction decodes to only a few micro-operations, so the whole decoding can be done in "random logic" instead of having microcode. You can see this more easily in RISC-V cores, since the source code for the whole circuit of some of them is available; for instance, take a look at https://github.com/YosysHQ/picorv32/blob/master/picorv32.v and you'll see that the instruction decoding is nothing more than matching on the bits of the instruction, each instruction type setting one or more wires which are later directly used in the relevant circuit.
And this happens partially even on Intel, which can decode several instructions in parallel, but only one of these instruction decoders can run microcode, while the other decoders are for the more common "simple" instructions which generate only a few micro-operations.
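To make that concrete, here is a toy illustration (mine, not picorv32's actual code, and in Python rather than Verilog) of what "decoding as bit-matching" means: a handful of field extractions and comparisons, each directly driving a control signal, with no micro-program lookup or sequencing anywhere:

# Sketch: RISC-V-style instruction decode as pure pattern matching,
# mirroring what picorv32 does in combinational logic -- one
# instruction word in, a handful of control "wires" out.

def decode_rv32i(insn: int) -> dict:
    """Decode a 32-bit RV32I instruction word into control signals."""
    opcode = insn & 0x7f           # bits [6:0]
    funct3 = (insn >> 12) & 0x7    # bits [14:12]
    rd     = (insn >> 7) & 0x1f
    rs1    = (insn >> 15) & 0x1f
    rs2    = (insn >> 20) & 0x1f

    sig = {"rd": rd, "rs1": rs1, "rs2": rs2}
    if opcode == 0x33:             # OP: register-register ALU
        sig["alu_en"] = True
        sig["is_add"] = funct3 == 0 and (insn >> 25) == 0x00
        sig["is_sub"] = funct3 == 0 and (insn >> 25) == 0x20
    elif opcode == 0x13:           # OP-IMM: immediate ALU
        sig["alu_en"] = True
        sig["imm"] = insn >> 20    # (unsigned here; hardware sign-extends)
    elif opcode == 0x03:           # LOAD
        sig["mem_read"] = True
    elif opcode == 0x23:           # STORE
        sig["mem_write"] = True
    return sig

# addi x1, x0, 42  ->  0x02A00093
print(decode_rv32i(0x02A00093))

In hardware each of those comparisons is just a few gates, evaluated in parallel; there is nothing resembling a stored program to patch.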
Posted Nov 16, 2023 15:06 UTC (Thu)
by IanKelling (subscriber, #89418)
[Link] (2 responses)
>One might question whether architectures like ARM even have microcode.
Right, it is probably more accurate to say they don't have microcode. I should have said that.
Posted Nov 17, 2023 23:14 UTC (Fri)
by Wol (subscriber, #4433)
[Link] (1 responses)
>>One might question whether architectures like ARM even have microcode.
>Right, it is probably more accurate to say they don't have microcode. I should have said that.
In which case, doesn't that make it un-updateable full stop?
If it's implemented as circuitry, you can't change it (except by things like blowing PROM fuses ...)
If you want to make it do something else, you have to load a program of some sort (and the fact that microcode typically is loaded every time on boot is overwhelming evidence that microcode is software).
Okay, this might not be the best analogy (or it might be spot on), but isn't microcode just programming an FPGA in the CPU? Not all of the CPU will be programmable like that, but that's pretty much what's going on, no?
(Oh, and I'll just throw in the fact that some - probably most - mini-computer CPUs from the 80s and maybe earlier had microcode - the Pr1me 50-series certainly did. If you ran INFORMATION (the Pr1me version of Pick) you got a microcode update to your CPU to optimise it for running the database.)
Cheers,
Wol
Posted Nov 18, 2023 1:01 UTC (Sat)
by excors (subscriber, #95769)
[Link]
I don't think so. The reverse-engineering of Intel Goldmont at https://misc0110.net/files/cpu_woot23.pdf says the microcode is stored in MSROM (microcode sequencer ROM), with space for about 8K 'triads' (each consisting of 3 uops and a sequence word), plus a much smaller MSRAM for microcode patches (which are loaded at boot time by the BIOS/OS), and some registers to configure where the patches are applied.
When the CPU's instruction decoder finds a complex instruction, the microcode sequencer uses the opcode to determine the entry point into the MSROM, then it reads a triad from MSROM (or MSRAM if that address has been patched) and issues the uops for execution, then uses the sequence word to determine the next triad, until it has finished that instruction.
It sounds debatable whether it should count as software or as a fancy LUT, but in any case it's nothing like an FPGA, it's just data in memory that gets processed by fixed hardware.
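A rough Python model of that mechanism, with invented addresses and uop names (the triad/MSRAM/match-register structure follows the Goldmont write-up above, but nothing here is Intel's actual data):

# Rough model of a microcode sequencer with a patch RAM. All names,
# addresses and uops are illustrative, not Intel's.

MSROM = {
    # address: (uop1, uop2, uop3, next_address or None)
    0x100: ("load t0, [rsi]", "store [rdi], t0", "add rsi, 1", 0x101),
    0x101: ("add rdi, 1", "sub rcx, 1", "jnz 0x100", None),
}

MSRAM = {}         # patch storage, filled at boot by the microcode update
MATCH = set()      # "match registers": MSROM addresses diverted to MSRAM

def apply_update(patches):
    """A microcode update fills MSRAM and arms the match registers."""
    for addr, triad in patches.items():
        MSRAM[addr] = triad
        MATCH.add(addr)

def run_microcoded_insn(entry_point):
    """Issue triads starting at the entry point until the sequence ends."""
    addr = entry_point
    while addr is not None:
        triad = MSRAM[addr] if addr in MATCH else MSROM[addr]
        *uops, addr = triad
        for uop in uops:
            print("issue:", uop)

# Patch the second triad, e.g. to work around an erratum:
apply_update({0x101: ("add rdi, 1", "sub rcx, 1", "jnz 0x100 ; fixed", None)})
run_microcoded_insn(0x100)

Seen this way, the "fancy LUT" reading is fair: the update doesn't reprogram any logic, it just redirects a few table lookups.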
Posted Nov 16, 2023 22:49 UTC (Thu)
by willy (subscriber, #9762)
[Link]
In brief, there is one complex decoder which can emit up to four uops, three simple decoders which can emit up to one uop each _and_ a microcode sequencer which can emit up to four uops per cycle (but while it's running, the decoders are disabled).
I presume that "new microcode" will intercept insns that otherwise would have been natively executed, and issue different uops from the ones that would have been issued by hardware.
Posted Nov 16, 2023 7:03 UTC (Thu)
by ballombe (subscriber, #9523)
[Link]
Posted Nov 15, 2023 18:10 UTC (Wed)
by merge (subscriber, #65339)
[Link] (1 responses)
Posted Nov 15, 2023 19:00 UTC (Wed)
by wtarreau (subscriber, #51152)
[Link]
Posted Nov 16, 2023 5:51 UTC (Thu)
by donald.buczek (subscriber, #112892)
[Link] (4 responses)
If you were to design an ISA from scratch, you probably would have a much more systematic instruction set, probably with fixed-length instructions, and certainly without prefixes and multiple compatibility modes. The Power ISA, for example, is much better in that regard. Unfortunately, something built from scratch has academic value but is difficult to establish in the market, because it doesn't have the economy of big numbers. You get less performance for your money.
[1]: https://www.unomaha.edu/college-of-information-science-an...
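To illustrate the prefix point from the paragraph above: on x86, even just finding instruction boundaries means scanning an arbitrary run of prefix bytes, while a fixed-length ISA gets the boundaries for free. A toy Python sketch (the prefix set is a deliberately incomplete subset, for illustration only):

# Why x86-style prefixes make even *finding* instruction boundaries
# stateful, while a fixed-length ISA makes it trivial.

X86_PREFIXES = {0x66, 0x67, 0xF0, 0xF2, 0xF3,        # opsize, addrsize, lock, rep*
                0x2E, 0x36, 0x3E, 0x26, 0x64, 0x65}  # segment overrides

def count_x86_prefixes(code: bytes) -> int:
    """Scan byte by byte; each (possibly redundant) prefix shifts the
    opcode, ModRM, displacement and immediate that follow."""
    n = 0
    while n < len(code) and code[n] in X86_PREFIXES:
        n += 1   # nothing stops a program repeating prefixes many times
    return n

def fixed_length_boundaries(code: bytes, width: int = 4):
    """A fixed-width ISA: boundaries fall out with no decoding at all."""
    return list(range(0, len(code), width))

# rep rep rep movsb -- three redundant prefixes before the opcode
print(count_x86_prefixes(bytes([0xF3, 0xF3, 0xF3, 0xA4])))  # -> 3
print(fixed_length_boundaries(b"\x00" * 16))                # -> [0, 4, 8, 12]

Redundant-prefix length calculation is exactly the sort of frontend bookkeeping at issue in the bug this whole thread hangs off.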
Posted Nov 16, 2023 6:26 UTC (Thu)
by zaitseff (subscriber, #851)
[Link] (3 responses)
Definitely off topic, but I personally find the RISC-V architecture [1] to be very nicely and cleanly defined, for 32-bit, 64-bit and even 128-bit. Much cleaner than ARM, IMHO!
Posted Nov 16, 2023 7:38 UTC (Thu)
by ballombe (subscriber, #9523)
[Link] (2 responses)
It is not even fully 64-bit for multiplication and division.
aarch64 is much nicer.
Posted Nov 16, 2023 12:03 UTC (Thu)
by anton (subscriber, #25547)
[Link] (1 responses)
> It is not even fully 64-bit for multiplication and division.
What do you mean by that? In RV64G mul performs a 64-bit*64-bit multiplication, div performs a signed 64-bit/64-bit division, and divu performs an unsigned 64-bit/64-bit division. Alpha does not have an integer division instruction.
Posted Nov 16, 2023 18:04 UTC (Thu)
by ballombe (subscriber, #9523)
[Link]
Compared to aarch64, it lacks addition with carry, subtraction with carry and "addmul" (that is, (a,b,c) -> a*b+c where a, b, c are 64-bit and the result is 128-bit).
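For the curious, here is a Python model of what that composition costs when all you have is a mul that returns the low 64 bits and a mulhu that returns the high 64 bits (names borrowed from RV64M; the carry handling is the part a fused "addmul" would do for you):

# "addmul" (a*b+c -> 128 bits) built from 64-bit pieces,
# modelled with Python integers.

M64 = (1 << 64) - 1

def mul(a, b):    return (a * b) & M64          # RV64M: mul (low half)
def mulhu(a, b):  return ((a * b) >> 64) & M64  # RV64M: mulhu (high half)

def addmul(a, b, c):
    """(a, b, c) -> a*b + c as a (high, low) 128-bit pair."""
    lo = (mul(a, b) + c) & M64
    carry = 1 if lo < c else 0                  # detect wraparound of the add
    hi = (mulhu(a, b) + carry) & M64            # cannot itself overflow
    return hi, lo

a, b, c = 2**63 + 5, 2**62 + 9, 2**64 - 1
hi, lo = addmul(a, b, c)
assert (hi << 64) | lo == a * b + c
print(hex(hi), hex(lo))

That carry-detect branch is exactly what add-with-carry removes, which is why multi-precision arithmetic (bignums, crypto) cares so much about these instructions.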
By hiding how the system works, you forgo reviews that might find and/or fix issues. Whether or not that trade-off is good is going to depend on the situation.
The other issue is that there can be multiple parties involved whose threat models aren't aligned. While a seller of a product might not want their competitors to be able to reproduce their product, buyers aren't going to care about that, but might have limited trust in the seller and may want to be able to verify that the seller isn't compromising their security. The seller might also want to retain some control over the product after sale, while buyers will generally want full control of what they buy.
Sellers also care about making sales, so if buyers aren't buying their product because of obfuscation, they need to balance that against how they benefit from obfuscation.
Unfortunately it's proprietary.
http://www.apollo-core.com/index.htm?page=coding&tl=1
How is this different from overclocking?
> Had this argument been held as valid then, we couldn't have had open graphics drivers then.
I would be surprised if unencrypted or leaked microcode hasn't figured prominently in some costly patent lawsuits in this industry.
(https://www.sec.gov/files/litigation/admin/3437730.txt)
(http://datasheets.chipdb.org/Intel/x86/Pentium%20Pro/PPPB...)
Intel appears to have supported microcode updates (the "BIOS Update feature") since the Pentium Pro (originally developed around 1994), and even then the microcode updates were encrypted.
Bob Colwell, one of the lead developers of the Pentium Pro, explained:
In fact, remember the microcode patch facility, I had a lot of fights over that of that exact nature because the executive VPs would say, "You want me to allocate chip real estate on a facility to make up for the mistakes that you think you will have made? Why do you not just make no mistakes and then we don't have to deal with them on the chip". And I said "If you can find somebody who can actually deliver you perfect designs of any sort, you really should hire them and if they can do it, you should fire me and put them in my place, but I don't know how to do it, so until such day as you show me the magic..."
and
The way P6 worked was that it would take the x86 instruction, turn it into micro-ops, and the instruction hasn't finished until the last micro-op associated with that instruction has finished. So as a microcode writer I might know that to implement that instruction took me five micro-ops and here they are, but I don't want those outside of Intel to know that, that's microcode, some of Intel's family jewels. If you tell people the microcode of your machine they can design their own machine much more easily. There's a significant IP aspect of the microcode, so we had to try to find a happy balance between enough information to let the performance analyst succeed and not so much that we would expose intellectual property in our microcode. The compromise that I proposed was that if it's four micro-ops or less [VTUNE would] show them the actual sequence, because for instructions that simple, there's no mystery to what it does anyhow. But for instructions like far call through a gate there's some really complicated x86 stuff; there might be 200 micro-ops. We don't show you those micro-ops; instead, we say the x86 instruction takes 200 micro-ops.
He then went on, in answer to the very next question, to say:
I said if you find a bug in UNIX for example and it's messing you up, go find out where it is. Go into the source code, figure it out, work around it, don't let it stop you.
and, when Intel had transitioned from commercial UNIX to Windows NT:
Suddenly NT comes along, and we don't have source code. We're not allowed to have source code. We're not even allowed to cheat and look at the assembly code, we can't disassemble it.
and
Next the validation chief came to me saying "I'm worried. We're not getting the cycles that we used to get. We used to crank through a certain number of validation cycles every night and now we have more machines with theoretically higher capacity and I'm getting far less useful output. So why is that? Is it that they're slow, not as efficient? Half the jobs I submit, literally half, never finish. They never succeed." I said "Is it a random thing?" He says "yeah as far as I know, I have no idea what the pattern is, I just know that every hundred jobs I submit, I get fifty back. And so it's going to take me twice as many machines, for twice as long."
and
...the validation crew said "we're not going to finish the project this way. So we're going to take drastic action. How about we get a bunch of Linux boxes in here and we start using those." And so I mentioned that to Albert who was the decision maker here and he said, "No, over my dead body. I do not want that, you can't have that. You have got to make this Windows NT conversion; NT is the future." So I came back to the validation crew and I told them what he said.
> If the world is relying on hardware that is not and cannot be documented we have serious problems...
The rep prefix with mul (it negates the result) could be seen in said microcode almost immediately after it was decoded, yet it eluded hackers of the last century for decades!
FROB_MODE 12213
...
FROB_MODE 54213, with X set to 1.
"...reverse engineering effort by presenting microprograms that successfully augment existing x86 instructions and add foreign logic."
"We see great potential for constructive applications of microcode in COTS CPUs. We already discussed that microcode combines many advantages for binary instrumentation, see Section 7.1. This could aid program tracing, bug finding, tainting, and other applications of dynamic program analysis."
> not in software
> The issue is more about power/control over hardware you supposedly own than about software versus hardware.
> In the case where the hardware won't be updatable, a company has more incentive to spend resources on quality control, which can result in a better product.
> What's the point of that boycott? What are you hoping to achieve with it?
> The problem that the FSF is facing is not that people disagree with its tactics. The problem is not even that they disagree with its strategy.
Well, most people aren't going to care one way or the other.
> My original point in this discussion was your hyperbole in referring to the FSF's stance.
> But how is buying from organizations that take something that may, at least in theory, run free software, and then cripple these devices to make them unable to run free software, helping move things in that direction?
> Now try to find it on FSF's site. It's not there. Why do you think?
> Before you can do that you have to decide which actors are good and which are bad. And the FSF quite explicitly promotes vendors which take normal, non-freedom-respecting hardware (from ASUS, Lenovo and so on) and cripple it to make sure it adheres to the FSF's crazy standards, and doesn't promote people who are trying to actually build more free-software-friendly devices.
> Their attitude wrt embedded firmware _is_ irrational, in that it directly impedes progress towards other FSF goals, and is predicated on preconditions that were nonsense at the time, and are even more so today.
> They aren't irrational.
> Yes. And that means that you recommend buying GPUs from AMD and Intel, and not from nVidia. That hurts even trillion-dollar companies enough that they change things to ensure their blobs are less invasive. That's a win.
> And they never had enough leverage to effect change using boycotts.
> but shooting for the moon with the original policy may still have been the right policy at that time.
> or fall back to the principles and say that RYF will no longer certify devices with non-Free firmware, accepting that this reduces RYF hardware further.