
Thoughts on software-defined silicon

By Jonathan Corbet
February 18, 2022
People are attracted to free software for a number of reasons, including price, overall quality, community support, and available features. But, for many of us, the value of free software is to be found in its ability to allow us to actually own and maintain control over our systems. Antifeatures in free software tend not to last long, and free drivers can often unlock capabilities of the hardware that its vendors may not have seen fit to make available. Intel's upcoming "software defined silicon" (SDSi) mechanism may reduce that control, though, by taking away access to hardware features from anybody who has not paid the requisite fees.

SDSi is a "feature" that is expected to make an appearance in upcoming Intel processors. Its purpose is to disable access to specific processor capabilities in the absence of a certificate from Intel saying otherwise. As the enabling patch set from David Box makes clear, the interface to the mechanism itself is relatively simple. It appears as a device on the bus that offers a couple of operations: install an "authentication key certificate" or a "capability activation payload". The certificate is used to authenticate any requests to enable features, while the payload contains the requests themselves. Unless this device has been used to store an acceptable certificate and payload, the features that it governs will be unavailable to software running on that CPU.
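From user space, the provisioning flow amounts to writing two opaque blobs to the device. A minimal sketch of that flow, assuming hypothetical sysfs paths and attribute names (`provision_akc`, `provision_cap`) that are illustrative rather than the actual driver interface:

```python
from pathlib import Path

def provision(device_dir: str, akc: bytes, cap: bytes) -> None:
    """Write an authentication key certificate (AKC) and a capability
    activation payload (CAP) to a hypothetical SDSi device directory.
    Attribute names here are assumptions, not the real driver's names."""
    dev = Path(device_dir)
    # The certificate authenticates the request; it must be in place
    # before any activation payload will be honored.
    (dev / "provision_akc").write_bytes(akc)
    # The payload names the features the customer has paid to enable.
    (dev / "provision_cap").write_bytes(cap)
```

The point is only that the mechanism looks, to software, like two write-only attributes; everything interesting happens in the hardware's validation of the blobs.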

The SDSi hardware also maintains a couple of counters that track the number of unsuccessful attempts that have been made to load a certificate or enable a feature. Should either counter exceed a threshold, the mechanism will be disabled entirely; the only way to get it back will be to power-cycle the processor. Presumably, the intent here is to thwart attempted brute-force attacks against the SDSi gatekeeper.
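The lockout behavior can be modeled in a few lines; this is an illustrative sketch of the logic described above, with a made-up threshold rather than whatever limit the hardware actually enforces:

```python
class SdsiGatekeeper:
    """Toy model of the SDSi failure counters: too many bad attempts
    disable the mechanism until the next power cycle."""

    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures   # threshold is invented here
        self.cert_failures = 0
        self.disabled = False

    def load_certificate(self, cert: bytes, expected: bytes) -> bool:
        if self.disabled:
            return False                   # locked out until power cycle
        if cert == expected:
            return True
        self.cert_failures += 1
        if self.cert_failures > self.max_failures:
            self.disabled = True           # thwarts brute-force guessing
        return False

    def power_cycle(self) -> None:
        self.cert_failures = 0
        self.disabled = False
```

Rate-limiting by requiring a full power cycle makes online brute-force attacks impractically slow, which is presumably the design intent.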

Intel is clear enough about the purpose behind this new mechanism. SDSi will enable shipping CPUs with features that may be of interest to users, but which are unavailable unless additional payments are made. The restricted capabilities will be present on all shipped CPUs, but the customers, who might have thought that they own their expensive processors, will not be able to use their systems to their fullest capability without add-on (and perhaps recurring) payments to the vendor.

The benefits to Intel are clear. The company can do price differentiation among its customers in an attempt to extract the maximum revenue from each while simultaneously reducing the number of different hardware products it must carry in its catalog. The revenue stream from a processor will not necessarily stop once the CPU is purchased, and might continue indefinitely. The benefit for customers is not quite so clear. In theory, customers with minimal needs can avoid paying for expensive features they don't use and can "upgrade" their hardware without downtime if their needs change.

Also unclear is which features Intel intends to control in this manner. One can imagine all kinds of things, including the ability to access larger amounts of memory, higher clock rates, additional CPUs, specialized instructions, or accelerators for workloads like machine learning. Taken to its extreme (which the company would presumably not do, though one never knows anymore), an off-the-shelf processor might be unable to run anything more demanding than "hello world" until additional licenses have been purchased. There was a time when a floating-point processor was an add-on unit; perhaps we will find ourselves there again.

This business model is not new, of course; stories abound regarding early mainframes that could be "upgraded" by altering a single jumper. Tesla automobiles include a number of features, including basic capabilities like use of the full capacity of the battery, that only work if an extra payment is made; there is no shortage of reports that the company will disable those features when one of its cars is resold. Car manufacturers evidently want to extend this idea to, for example, requiring subscription payments to enable heated seats. The heating elements exist in the seats regardless, and the manufacturer sold them to the buyer, but the buyer still does not really own them.

Rent-based business models have been spreading through the technology industry for some time. Many of us no longer purchase and run our own servers; we rent them from a cloud provider (and, to tell the truth, are often better off for it). Companies that are still in the proprietary software business are finding the monthly subscription model more appealing than simply selling software licenses. And, of course, there are dodgy web sites out there demanding payments for access to their content.

But the problem seems worse for hardware that has been purchased, and which the customer, on the theory that they own said hardware, may believe they can rightly use to its fullest capability. Our free software, which is supposed to enable that use, finds itself relegated to asking the hardware for permission to use the available features. It is a loss of control over our systems, yet another set of secrets hidden away inside our computing hardware and protected by anti-circumvention laws; if this approach is commercially successful, we will surely see much more of it.

It is hard to see a way out of this situation that doesn't involve making hardware free in the same way that we have done with software. Maybe someday it will be possible to order the fabrication of processors from free designs and at least be able to hope that the result will be lacking in deliberate antifeatures. But that is not the world we live in now, and it's not clear that we will get there anytime soon.

Meanwhile, SDSi is definitely coming to Linux; maintainer Hans de Goede has indicated that this work is on track to be merged for 5.18. There are not a whole lot of arguments that can be made against the acceptance of the SDSi driver; it simply enables another piece of functionality packaged with upcoming CPUs. The kernel community has not made a practice of judging whether it likes the "features" provided by a specific peripheral before accepting driver support, and it would be hard to justify starting now. So the Linux kernel will play along with SDSi-enabled CPUs just fine; it will be up to customers to decide whether they want to be as agreeable.



Thoughts on software-defined silicon

Posted Feb 18, 2022 17:53 UTC (Fri) by jebba (guest, #4439) [Link] (17 responses)

If Red Hat (or whoever) didn't add it to the Linux kernel, wouldn't it be much more difficult for Intel to do this? If so, then why add it?

Thoughts on software-defined silicon

Posted Feb 18, 2022 18:29 UTC (Fri) by MatejLach (guest, #84942) [Link] (13 responses)

I agree with your sentiment but Intel has their own Linux engineers too.

Thoughts on software-defined silicon

Posted Feb 18, 2022 18:32 UTC (Fri) by jebba (guest, #4439) [Link] (12 responses)

Ya, I just mentioned Red Hat since they were in the link. Regardless of who writes it, the Linux developers can certainly reject it. They are under no obligation to add it. If they do, it seems it would do a lot to undercut Intel's attack.

Thoughts on software-defined silicon

Posted Feb 18, 2022 18:52 UTC (Fri) by tshow (subscriber, #6411) [Link] (11 responses)

I suspect what will kill this is AMD and ARM _not_ doing it, combined with users not bothering to rent whatever Intel is trying to hive off. This has the smell of something that is going to wind up in the tech news in a couple of years when Intel end-of-lifes it because it shows no prospect of paying for itself.

Thoughts on software-defined silicon

Posted Feb 18, 2022 19:24 UTC (Fri) by atnot (subscriber, #124910) [Link] (2 responses)

Indeed, Intel has already tried this once: https://en.wikipedia.org/wiki/Intel_Upgrade_Service

However, I think it's much more likely to work these days, especially in the enterprise space. Processors there are already highly segmented, with dozens of SKUs that only differ in what fuses are blown in them in the factory. In that sense, this can be a real benefit for enterprise customers, as they would presumably need to order and stock fewer CPU variants.

Of course the catch is that, longer term, while there is an upper limit to the number of segmented SKUs that are feasible to produce, there is no such limit for feature toggles.

Thoughts on software-defined silicon

Posted Feb 23, 2022 22:57 UTC (Wed) by Trelane (subscriber, #56877) [Link] (1 responses)

> Processors there are already highly segmented, with dozens of SKUs that only differ in what fuses are blown in them in the factory.

The reason for this is that defects happen (semiconductor chip joke for free, right there). If you have a problem in a redundant part of your chip, you can close it off and sell the rest of the perfectly good chip at a lower price point. Or lower the clock speed, whatever.

Interestingly, this is the opposite: it had _better_ have passed qual, but now it is walled off until the customer pays up.

> can be a real benefit for enterprise customers, as they would presumably need to order and stock fewer CPU variants.

On inventory: presumably this will need to be locked to the exact processor. (Or how else do you prevent copying the cert and enabling the functionality on _another_ chip? Maybe phone home? Contact a flex_lm install on your network?) So now you have to track the certs for your chip in _addition_ to the chip itself! Alternately, maybe it stays once enabled; sweet, now you have to track the _variants_ of the same chip.
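One plausible way such per-chip locking could work (purely a guess at the scheme, not Intel's actual design) is for the vendor to compute an authenticator over the feature list together with a unique chip identifier, so that a certificate copied to another chip fails verification. A sketch using a symmetric MAC for brevity; a real design would presumably use asymmetric signatures so no secret lives on the chip:

```python
import hashlib
import hmac

VENDOR_KEY = b"vendor-secret"   # held by the vendor's licensing service

def issue_license(chip_id: str, features: str) -> bytes:
    # The license binds the enabled features to one specific chip ID.
    msg = f"{chip_id}:{features}".encode()
    return hmac.new(VENDOR_KEY, msg, hashlib.sha256).digest()

def chip_accepts(chip_id: str, features: str, license_blob: bytes) -> bool:
    # The chip recomputes the authenticator over ITS OWN identifier;
    # a license issued for a different chip will not verify.
    msg = f"{chip_id}:{features}".encode()
    expected = hmac.new(VENDOR_KEY, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, license_blob)
```

Under a scheme like this, no phone-home is needed: the certificate simply only validates on the silicon it was issued for, which is exactly what makes the inventory-tracking burden land on the customer.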

I don't see the upside honestly.

Thoughts on software-defined silicon

Posted Feb 23, 2022 23:52 UTC (Wed) by atnot (subscriber, #124910) [Link]

> The reason for this is that defects happen (semiconductor chip joke for free, right there). If you have a problem in a redundant part of your chip, you can close it off and sell the rest of the perfectly good chip at a lower price point. Or lower the clock speed, whatever.

As I pointed out elsewhere, the degree to which this happens is greatly overstated by semiconductor companies. True, not every chip is going to reach the full clock speed and have all cores working. But for one thing, especially on mature nodes, the majority of them do; for another, that doesn't apply to many other lines they already segment along, like maximum memory capacity, ECC, and software features like Ryzen PRO/vPro. The vast majority of chips are cut down because they wouldn't sell at a higher price, not because they are defective in any meaningful way. They do this because, relatively speaking, the individual chips are dirt cheap (tens of dollars); all of the cost is in the NRE.

> So now you have to track the certs for your chip in _addition_ to the chip itself! Alternately, maybe it stays once enabled sweet, now you have to track the _variants_ of the same chip.

It is generally rare for a CPU to leave a system after it gets put in, and tracking per-device licenses already needs to be done for all of the other hardware (switches, BMCs, etc.), so it's not really a lot of extra effort for them.

Thoughts on software-defined silicon

Posted Feb 18, 2022 19:35 UTC (Fri) by developer122 (guest, #152928) [Link] (5 responses)

I already avoid intel chips because they paywall ECC.

My NAS runs on one of the cheapest (and most power efficient) CPUs AMD ever made, but it's fully stacked with ECC RAM for my ZFS ARC.

AMD efficient ECC

Posted Feb 19, 2022 9:20 UTC (Sat) by sdalley (subscriber, #18550) [Link] (2 responses)

That sounds really interesting. Any details on which processor/mainboard you used?

AMD efficient ECC

Posted Mar 2, 2022 18:50 UTC (Wed) by anton (subscriber, #25547) [Link] (1 responses)

Asus and Asrock mainboards, and some Gigabyte ones support ECC on AM4 boards. AFAIK all CPUs are good except the non-Pro APUs. We have a Ryzen 1600X, 1800X, several 3900X, and a 5800X all working with ECC.

AMD efficient ECC

Posted Mar 2, 2022 20:39 UTC (Wed) by sdalley (subscriber, #18550) [Link]

Thanks for this. I've just seen the Gigabyte B550I Aorus Pro which does ECC and is miniITX too. Would make a nice low-dissipation system with a Ryzen 5650GE Pro APU.

But, my, how prices for this kind of stuff have shot up over the last few years...

Thoughts on software-defined silicon

Posted Feb 22, 2022 17:48 UTC (Tue) by IgorTorrente (guest, #156538) [Link] (1 responses)

To complement sdalley's question:

Which ECC Ram kit are you using?

Thoughts on software-defined silicon

Posted Mar 2, 2022 18:53 UTC (Wed) by anton (subscriber, #25547) [Link]

We use whatever is available at a good price. In our 5800X box we use 4 Kingston KSM32ED8/32ME.

Thoughts on software-defined silicon

Posted Feb 19, 2022 3:35 UTC (Sat) by k8to (guest, #15413) [Link]

Maybe? I'd more expect AMD and some ARM vendors to do the same.

Thoughts on software-defined silicon

Posted Feb 19, 2022 10:16 UTC (Sat) by pbonzini (subscriber, #60935) [Link]

The optimist in me thinks that this might be targeted only to cloud vendors, who rent a subset of a machine at a time and also might rent the same machine for slightly different instance types. Many instruction set extensions can be hidden from CPUID but would still be present in the processor. Different instance types then could use SDSi to have a different set of features enabled for real, and not just in CPUID.

The pessimist in me thinks that this is just wishful thinking, though.

Thoughts on software-defined silicon

Posted Feb 19, 2022 13:37 UTC (Sat) by nim-nim (subscriber, #34454) [Link] (2 responses)

No chance of Red Hat refusing to do it now they belong to IBM, who already practices this.

And even if everyone in the community refused to add it Intel would just add the patch to its own kernel and require use of this kernel. Like Nvidia does for its own hardware.

Free software means adding antifeatures to a fork is cheap (especially if they are self-contained).

Thoughts on software-defined silicon

Posted Feb 19, 2022 22:52 UTC (Sat) by jebba (guest, #4439) [Link] (1 responses)

But then they'd be left having to maintain their own fork for that CPU series. There's a lot of overhead there.

Thoughts on software-defined silicon

Posted Feb 24, 2022 10:39 UTC (Thu) by nim-nim (subscriber, #34454) [Link]

Not really, for something that only needs minimal interaction with the rest of the kernel. Which should be the case for something that just loads a key.

Thoughts on software-defined silicon

Posted Feb 18, 2022 18:46 UTC (Fri) by tshow (subscriber, #6411) [Link] (8 responses)

According to my father, the IBM mainframe upgrade (at least on the mainframe his company had) involved a physical (door lock style) key that operated a rotary switch. Apparently there was a lot of Blues Brothers style pomp & circumstance (briefcase handcuffed to one of the IBM techs, many IBM people in the upgrade party and so forth) involved in the upgrade. His understanding was that they made a big deal of it for legal reasons. Between it requiring a key and all the ceremony, there was no way a customer could claim they didn't realize they couldn't just close a jumper for free.

I could kind of see the justification for this if it was running the hardware harder and making it more likely to fail within the warranty period, if (for example) they charged to let you dangerously overclock but would still honor the warranty. But this does look like another "The money people are asking where else can we extract rent?" situation.

Now, if you could pay them to get root access to the IME so you could properly secure the machine, that would be another thing entirely.

Thoughts on software-defined silicon

Posted Feb 18, 2022 18:59 UTC (Fri) by jebba (guest, #4439) [Link]

> Now, if you could pay them to get root access to the IME so you could properly secure the machine, that would be another thing entirely.

This is available to some customers, but not the general public. Some folks have done a lot of work to try to neutralize it:

https://puri.sm/learn/intel-me/

Thoughts on software-defined silicon

Posted Feb 19, 2022 12:48 UTC (Sat) by hazeii (guest, #82286) [Link]

On mainframes these were often called 'slugs', in that they essentially slowed the machine down; thus there were extra payments to deslug the machines (even on a short-term basis).

Thoughts on software-defined silicon

Posted Feb 22, 2022 1:28 UTC (Tue) by klossner (subscriber, #30046) [Link] (4 responses)

Many decades ago, my employer leased IBM 370/148 serial number 1. It was a lease not a purchase so we paid a monthly fee for the computing resource, and the fact that increasing RAM could be done by paying more each month to have a jumper moved was not philosophically troubling. It was analogous to renting a cloud server today.

Thoughts on software-defined silicon

Posted Feb 22, 2022 9:46 UTC (Tue) by geert (subscriber, #98403) [Link] (3 responses)

Except that the unused cloud server capacity would be rented to someone else.

Software-defined silicon is like renting the first two floors in a 10-story office building which is otherwise vacant: if you pay more, you get access to more floors; if you don't, the other floors stay unused. But the other floors have been constructed anyway, and thus have already consumed (scarce) resources.

Thoughts on software-defined silicon

Posted Feb 25, 2022 17:02 UTC (Fri) by giraffedata (guest, #1954) [Link] (2 responses)

If this building is analogous to a processor, how has constructing the other 8 floors consumed scarce resources? It costs about the same to build a 10-story building as a 2-story one if they're like processors.

And that's why the owner doesn't make you pay for all 10 stories if you don't need them.

And the only reason he doesn't go ahead and let you use the other 8 anyway is that locking you out of them is the only way he can know you're telling the truth when you say you're willing to pay for only 2 stories. Charging people for all the stories they're willing to pay for minimizes the price per story for everyone.

Thoughts on software-defined silicon

Posted Feb 25, 2022 18:58 UTC (Fri) by geert (subscriber, #98403) [Link]

> If this building is analogous to a processor, how has constructing the other 8 floors consumed scarce resources? It costs about the same to build a 10-story building as a 2-story one if they're like processors.

The "10-story" processor still requires more raw material (silicon + whatever else for doping, etching, interconnects, ...). Plus, you can fit more "2 story" processors on the same wafer, so there's a processing cost, too.

Thoughts on software-defined silicon

Posted Feb 25, 2022 19:06 UTC (Fri) by nybble41 (subscriber, #55106) [Link]

As long as we're speaking in analogies, it's more like *buying* (not renting) the building but the seller only gives you the key to the first two stories. You are, after all, *buying* the entire CPU, even if parts of it are disabled by software keys.

The difference being that it's not a crime to pick or drill out some locks in your own building to access the upper floors. If you were actually renting the building (CPU) that would be a different matter. (And, of course, that the software locks in question are considerably more difficult to either pick or disable than, say, the lock on your average bank vault, much less a normal high-rise.)

Thoughts on software-defined silicon

Posted Feb 27, 2022 3:07 UTC (Sun) by eean (subscriber, #50420) [Link]

> I could kind of see the justification for this if it was running the hardware harder and making it more likely to fail within the warranty period

Yeah, the Tesla battery thing actually might make sense, since not fully charging the battery is better for its longevity, and one presumes/hopes that as the battery ages, unused cells can be cycled in as others wear out. So you're basically paying extra to optimize for range instead of longevity.

Hard to imagine Intel having something analogous to this though.

Thoughts on software-defined silicon

Posted Feb 18, 2022 20:20 UTC (Fri) by tux3 (subscriber, #101245) [Link] (3 responses)

Maybe this 'feature' is a preparation for the last few death rattles of Moore's Law.

If we can't keep selling meaningful upgrades, phasing in a subscription model early seems optimal.
If improvements keep slowing down, it will eventually be cheaper for you to software-upgrade your i3 into an i5 than buy the 5% better and newer i3s that come out every year or two.
This'd be great for Intel, who is now in the business of selling people upgrades while saving the cost of an entire chip.

Thoughts on software-defined silicon

Posted Feb 18, 2022 21:00 UTC (Fri) by atnot (subscriber, #124910) [Link]

That would be pretty poor timing on their part considering AMD has taken significant market share and forced Intel to start delivering double digit performance improvements every year again.

Thoughts on software-defined silicon

Posted Feb 21, 2022 4:20 UTC (Mon) by timrichardson (subscriber, #72836) [Link]

Sounds very similar to resizing an AWS virtual server as needs grow; in other words, a very popular approach.

Thoughts on software-defined silicon

Posted Feb 21, 2022 12:38 UTC (Mon) by BirAdam (guest, #132170) [Link]

Intel is preparing to license not just x86 but also their own designs. People will be able to buy different brands of the same chip (potentially) and some may not be crippled.

Thoughts on software-defined silicon

Posted Feb 18, 2022 20:40 UTC (Fri) by flussence (guest, #85566) [Link] (29 responses)

Nothing new to worry about.

Intel's been famous for crippling CPUs and chipsets via firmware and microcode lockouts ever since they introduced HyperThreading. The only thing they're doing differently here is dumping the implementation burden on the OS and using the existing ucode cryptography circuitry to remove the right to repair entirely. Maybe they're feeling threatened by coreboot?

The other thing they're doing - unintentionally - is admitting that their chips are barely worth what the base model sells for and everything else is pure scalping. That too is public knowledge but it's nice to see it straight from the horse's mouth.

Thoughts on software-defined silicon

Posted Feb 18, 2022 22:56 UTC (Fri) by NYKevin (subscriber, #129325) [Link] (27 responses)

While I can't say that I *approve* of Intel's business model, I also can't say that this is an accurate assertion:

> The other thing they're doing - unintentionally - is admitting that their chips are barely worth what the base model sells for and everything else is pure scalping.

It is possible (I haven't run the numbers) that they are running an "airfare-style" business model, where:

1. The cheap seats barely break even on a seat-mile basis, and may even lose money when non-operating expenses are included.
2. The business class seats are the main profit center, because you can sell a fair number of them to business travelers at a healthy markup, and make a decent profit in doing so.
3. The first class seats are essentially "bonus profit" for customers willing to pay extra for premium services. Some airlines don't even do first class, or merge it with business class.

If the airline had the option to do so, they would fill the entire plane with business class seats. But they can't sell quite enough business class seats, at business class prices, for this to make sense, unless they use smaller planes, which have poorer economies of scale, driving the price further up, etc. The purpose of the economy seats, then, is to lose as little money as possible, and *maybe* make a small profit if the economics allow for it. The people sitting in business class are the folks who are actually paying for the plane ride, despite the fact that their seats are only marginally more costly to the airline in terms of operating expenses.*
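The airfare arithmetic sketched above can be made concrete with toy numbers (entirely invented for illustration): per-seat operating cost is roughly the same across classes, yet nearly all of the profit comes from the premium tiers.

```python
# Invented figures: every seat costs the airline about the same to fly,
# but fares differ wildly by class.
COST_PER_SEAT = 100
fares = {"economy": 105, "business": 400, "first": 900}
seats = {"economy": 150, "business": 30, "first": 8}

# Profit per class = (fare - cost) * number of seats sold.
profit = {c: (fares[c] - COST_PER_SEAT) * seats[c] for c in fares}
# economy barely breaks even; business and first carry the flight.
```

With these numbers the 150 economy seats contribute $750 while the 30 business seats contribute $9,000, which is the pattern the comment describes: the cheap tier exists to lose as little money as possible, not to pay for the plane.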

The question is whether the economies of scale inherent in the silicon market end up working out the same way as they do in the aviation market. I would be very interested in seeing hard data on that point.

* In a properly-run business, opportunity costs should usually be low or negative. Positive opportunity costs indicate misallocation of resources. So if you want to quantify the "cost" of a good or service to the supplier, you probably mean the accounting cost, not the opportunity cost.

Thoughts on software-defined silicon

Posted Feb 18, 2022 23:48 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (24 responses)

The thing is, the larger die is supposed to be more expensive. So the fact that you can just sell the fully-loaded die as an "economy" SKU means that you don't have competitors that would just undercut you on price or features.

This was very much the case up to about 5 years ago in the case of Intel on servers.

These days? It just seems stupid.

Thoughts on software-defined silicon

Posted Feb 19, 2022 2:00 UTC (Sat) by khim (subscriber, #9252) [Link] (23 responses)

> So the fact that you can just sell the fully-loaded die as an "economy" SKU means that you don't have competitors that would just undercut you on price or features.

Nope. You are forgetting an elephant in the room: the price of a set of photolithography masks and the design cost for those masks. To make one set of masks you have to spend millions of dollars (single-digit millions). But developing a completely new set can cost $100 million for relatively small chips, and I wouldn't be surprised if, for the monsters AMD/Nvidia/Intel are producing, it reaches $1 billion or more.

When non-recurring costs are so high it may make sense to produce transistors which are destined to be disabled and it would still be cheaper than to produce many physically different SKUs.

> These days? It just seems stupid.

Nope. As the price of development grows higher and higher, and the relative cost of unused silicon gets lower and lower, more and more companies will want to do this.

That's simple math.

I don't know what kind of rock you've been living under, but just recall the endless disable/enable AVX-512 saga: apparently it's cheaper for Intel to go with already-prepared masks and just disable AVX-512 in firmware rather than redo the production process!

Thoughts on software-defined silicon

Posted Feb 19, 2022 8:47 UTC (Sat) by Wol (subscriber, #4433) [Link] (11 responses)

I had a 3-core AMD a while back (just scrapped it), and I understood that a lot of these chips were actually 4-cores with a core disabled. Especially with a new design, if these things can be disabled by blowing a fuse, surely it makes sense to just disable stuff that fails QA and sell the resulting chip at a lower price point.

Cheers,
Wol

Thoughts on software-defined silicon

Posted Feb 19, 2022 11:41 UTC (Sat) by smurf (subscriber, #17840) [Link] (9 responses)

That makes economic sense when the silicon is broken, and is already routinely done with clock speeds or (in the embedded realm) on-die flash memory.

Selling perfectly working CPUs at a bargain and then charging for the "upgrade" is a slightly different kettle of fish, and frankly I can't contribute much to that discussion beyond "I don't like it and will go to some pains not to use CPUs with this kind of anti-feature".

The non-turnoff-able IME is bad enough.

Thoughts on software-defined silicon

Posted Feb 19, 2022 12:15 UTC (Sat) by khim (subscriber, #9252) [Link] (8 responses)

> That makes economic sense when the silicon is broken, and already routinely used with clock speeds or (in the embedded realm) on-die Flash memory.

Except it wasn't broken. The majority of sold chips had perfectly functional additional cores you could enable (back when all it took to enable them was a pencil).

And I'm sure the majority of Ryzen 5 5600X chips sold today actually have 8 working cores, too. They have a fully functional 32MB cache designed for all 8 cores, and if your approach were the reason for their existence, then we would have had some version of Ryzen with reduced cache and all 8 cores enabled. There is nothing like that because it doesn't make any economic sense: you couldn't fit it between the Ryzen 5 5600X and the Ryzen 7 5800X. It would be a weird side-cousin to the Ryzen 5 5600X that would just confuse buyers.

> Selling perfectly working CPUs at a bargain and then charge for the "upgrade" is a slightly different kettle of fish.

It's exactly the same, except now you can't sell the small percentage of chips which are actually defective. But you have fewer SKUs to manage.

I'm not sure if it would work or not, but it makes perfect economic sense.

> don't like it and will go to some pains not to use CPUs with this kind of anti-feature

Yeah, that's what holds these incentives in check: PR backlash. But as R&D prices for CPUs go up and the price of unused silicon goes down, the incentive to switch to that model becomes more and more acute.

Actually the problems with Dark silicon almost guarantee that said model would become the norm eventually.

When you can't power all the transistors on the chip simultaneously for thermal reasons, the ability to pick between features A, B, and C (any of which can be enabled, but not all simultaneously) makes such a scheme pretty attractive. You couldn't do that with millions of SKUs, but you can easily achieve it with millions of [potential] licenses.

And in that case the ability to enable features without a license would become actively harmful: enabling all features simultaneously would just fry the chip, and it would be pretty hard to prove in warranty service that this happened because of customer irresponsibility rather than a manufacturing defect.

Thoughts on software-defined silicon

Posted Feb 19, 2022 12:58 UTC (Sat) by Wol (subscriber, #4433) [Link] (1 responses)

Going back many years, to microcode running on processors with speeds in the kHz arena...

50-series Pr1mes actually came with a microcode update if you were running INFORMATION (aka Pick) on them, it added a whole bunch of instructions specially optimised for handling strings, to make the database more efficient.

THAT would be an interesting feature on modern silicon :-)

Cheers,
Wol

Thoughts on software-defined silicon

Posted Feb 26, 2022 5:17 UTC (Sat) by flussence (guest, #85566) [Link]

We *almost* had a chance to see that: Zen2 chips emulate a few instructions (BMI2 set) in microcode, and they're pitifully slow to the point of being better to hand-roll in C. If they could've fixed it in an update it would've been interesting news… but maybe they just didn't care for such a niche thing.

Thoughts on software-defined silicon

Posted Feb 19, 2022 15:06 UTC (Sat) by mfuzzey (subscriber, #57966) [Link] (3 responses)

> the ability to enable features without a license would become actively harmful: enabling all features simultaneously would just fry the chip, and it would be pretty hard to prove in warranty service that this happened because of customer irresponsibility rather than a manufacturing defect.

Surely this case could be handled in hardware by only accepting configurations enabling at most any 2 of features A, B, and C (or whatever other thermal/power constraints exist). I don't see why this would require a license-based system to be safe.
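A hardware-enforced constraint like that amounts to a popcount check on the enabled-feature mask. A sketch of the idea (the feature bits and the two-of-three budget are invented for the example, not any real CPU's policy):

```python
# Hypothetical feature bits for three thermally-constrained features.
FEATURE_A = 1 << 0
FEATURE_B = 1 << 1
FEATURE_C = 1 << 2

CONSTRAINED = FEATURE_A | FEATURE_B | FEATURE_C
MAX_ENABLED = 2   # made-up thermal budget: any two of the three

def config_is_safe(mask: int) -> bool:
    # Count how many of the constrained features are enabled and
    # reject any configuration that exceeds the thermal budget.
    return bin(mask & CONSTRAINED).count("1") <= MAX_ENABLED
```

If safety really were just a matter of counting enabled features, a check like this in hardware would suffice; the follow-up comments argue the harder case is when safe combinations can only be determined by per-chip testing.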

Thoughts on software-defined silicon

Posted Feb 19, 2022 15:10 UTC (Sat) by khim (subscriber, #9252) [Link] (2 responses)

This would only work if you can, somehow, determine which combinations are actually safe and which may do harm based solely on some simple calculations.

If you need to do any kind of testing, then a license would be a perfect way to ensure that everything works perfectly.

That's a minor issue, ultimately. The economic need to differentiate markets drives this effort to a much larger degree than technical needs do.

Thoughts on software-defined silicon

Posted Feb 20, 2022 10:20 UTC (Sun) by NYKevin (subscriber, #129325) [Link] (1 responses)

Meh, from a technical perspective, there's no reason the licenses have to be sold individually. You could instead publish one set of "licenses" that everyone is allowed to download and copy freely, but they're all signed so you can't modify them. Maybe you also publish a separate signing key that allows users to make their own private "licenses" with untested combinations of features, with the proviso that "this might brick your hardware, don't ask for your money back if it does." Regardless of the specifics, an "open" version of this sort of thing could exist, if Intel wanted to make it.

The point is, you can decouple the technical aspect from the economic aspect, at least to some extent. Locked hardware exists because it is economically favored for it to exist. You can't "solve" the "problem" of locked hardware; it is not a technical problem in the first place. As long as those economic incentives continue to exist, it is inevitable that Intel, and other chip manufacturers, will produce and sell locked hardware.
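As a rough illustration of the freely-copyable-but-unmodifiable license idea described above, here is a toy sketch. A real scheme would use asymmetric signatures (e.g. Ed25519) so that only the vendor can sign while anyone can verify; the stdlib HMAC below is merely a stand-in to show the shape of the check, and the key and payload format are invented:

```python
import hashlib
import hmac
import json

# Toy model of "licenses anyone may copy, but nobody may modify".
# HMAC is a symmetric stand-in for a real asymmetric signature scheme;
# the vendor key and payload format are invented for illustration.
VENDOR_KEY = b"hypothetical-vendor-signing-key"

def sign_license(features: list) -> dict:
    """Vendor side: produce a signed, redistributable license blob."""
    payload = json.dumps({"features": sorted(features)})
    sig = hmac.new(VENDOR_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_license(lic: dict) -> bool:
    """Device side: accept the license only if the signature matches."""
    expected = hmac.new(VENDOR_KEY, lic["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, lic["sig"])
```

Copying such a blob changes nothing; editing the feature list invalidates the signature, which is the whole point of the scheme.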

Thoughts on software-defined silicon

Posted Feb 20, 2022 12:00 UTC (Sun) by khim (subscriber, #9252) [Link]

> You can't "solve" the "problem" of locked hardware; it is not a technical problem in the first place.

Yes, but you can go in the other direction: use a solution designed to solve an economic problem to solve a technical problem, too.

> As long as those economic incentives continue to exist, it is inevitable that Intel, and other chip manufacturers, will produce and sell locked hardware.

True, but they would use the technical need to keep hardware from breaking as justification for what they are doing.

Thoughts on software-defined silicon

Posted Feb 21, 2022 1:06 UTC (Mon) by nix (subscriber, #2304) [Link]

> When you couldn't power all the transistors on the chip simultaneously for thermal reasons, the ability to pick between features A, B, and C (any of which can be enabled, but not all simultaneously) makes such a scheme pretty attractive. You couldn't do that with millions of SKUs but can easily achieve it with millions of [potential] licenses.
> And in that case the ability to enable features without a license would become actively harmful: enabling all features simultaneously would just fry the chip

Yes, but... modern chips have been past this point for at least a decade. It wasn't solved with a licensing system: it was solved by having power management circuitry on the CPU that adjusted things (usually the operating frequency and voltage, but it is perfectly possible to imagine it also adjusting semi-invisible microarchitectural features like the number of execution ports) such that your code would keep running, just slower. Boost mode etc is the same thing: the fewer cores busy, the faster they're run, and even with lots of cores busy you can run fast briefly until the power management system turns down the CPU frequency to keep things cool enough. This is obviously *vastly* more efficient and flexible than some clunky licensing system would be: it allows for dynamic adjustment, which is something no licensing system like this could ever handle.
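The dynamic adjustment described here is the opposite of a static license: a control loop rather than a gatekeeper. A deliberately simplified model (all thresholds and step sizes are made up; real governors are far more elaborate):

```python
# Simplified model of an on-chip governor trading frequency against
# temperature at runtime, instead of a static license deciding what
# may run. Every constant below is invented for illustration.
MAX_TEMP_C = 90.0
F_MIN, F_MAX = 1.5, 4.8    # GHz
STEP_DOWN, STEP_UP = 0.2, 0.1

def next_frequency(freq_ghz: float, temp_c: float) -> float:
    """One tick of the loop: back off when hot, creep up otherwise."""
    if temp_c > MAX_TEMP_C:
        return max(F_MIN, freq_ghz - STEP_DOWN)
    return min(F_MAX, freq_ghz + STEP_UP)
```

No license mechanism can react per-millisecond like this loop does, which is the commenter's point: the thermal problem was solved dynamically, not contractually.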

No, this is all about getting to make one SKU, sell it as several, and allow upselling lower models to higher ones without needing hardware replacement. Shame that doing so requires cryptographic locks in the chip. (I doubt that anticircumvention measures are meaningful here: modern CPUs are almost impossible to analyze at the level you'd need to crack this open anyway, or people would already have extracted much more significant private keys for firmware signing etc. Nobody has.)

Thoughts on software-defined silicon

Posted Feb 21, 2022 9:01 UTC (Mon) by marcH (subscriber, #57642) [Link]

> When you couldn't power all the transistors on the chip simultaneously for thermal reasons the ability to pick between features A, B, and C ...

This problem has already been solved, and not by turning off features:

https://en.wikipedia.org/wiki/Intel_Turbo_Boost

Thoughts on software-defined silicon

Posted Feb 26, 2022 5:09 UTC (Sat) by flussence (guest, #85566) [Link]

> I had a 3-core AMD a while back (just scrapped it), and I understood that a lot of these chips were actually 4-cores with a core disabled.

I had one of those too. Stable as a rock with the extra core enabled for 12 years now, and it could even handle overclocking on top of that. The weird BIOS dance to unlock it put me off ever trying to run Coreboot on the thing though.

Thoughts on software-defined silicon

Posted Feb 19, 2022 9:12 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link] (10 responses)

> Nope. You are forgetting an elephant in the room: price of set of photolithography masks and design cost for these masks.

Sure. But this only works if you don't have competition, because otherwise it's easy to undercut others on features. After all, it doesn't cost you anything extra to enable a feature that your competitor doesn't have.

We're already seeing this with AMD: it's steadily eating into Intel's server market share, which had been unassailable until ~3 years ago: https://www.tomshardware.com/news/intel-amd-4q-2021-2022-...

Thoughts on software-defined silicon

Posted Feb 19, 2022 11:57 UTC (Sat) by khim (subscriber, #9252) [Link] (9 responses)

> But this works only if you don't have competition.

If you have competition then it becomes even more important.

> After all, it doesn't cost you anything extra to enable a feature that your competitor doesn't have.

It would cost you a lot. If you stopped selling the $299 Ryzen 5 5600X (which is, essentially, a $449 Ryzen 7 5800X with two cores disabled) and started selling the Ryzen 7 5800X for $399 instead of having these two… you would lose both the people who were ready to spend $449 and the ones who were only ready to spend $299.

I took the example from AMD's book, not Intel's, to show that as competition grows more acute the need to disable features and sell crippled products grows, not diminishes. Market segmentation is a powerful tool.

When AMD had no ability to make powerful CPUs (for many years, AMD's CPUs before Ryzen were awful), it didn't play these games — because it was hard to sell its CPUs even with all features enabled (and it was losing money as a result). When AMD made something for the top tier, it started market-segmentation games (and immediately became profitable).

> We're already seeing this with AMD, it's steadily eating into Intel's server marketshare that had been unassailable up until ~3 years ago

And this has nothing to do with the fact that AMD presented the 7nm EPYC Rome two years ago while Intel still couldn't make 10nm Xeons (Intel's 10nm is more-or-less the same as TSMC's 7nm), but everything to do with the fact that EPYCs are less segmented? Dream on.

Thoughts on software-defined silicon

Posted Feb 19, 2022 15:11 UTC (Sat) by jhoblitt (subscriber, #77733) [Link] (3 responses)

It isn't just an issue of competition in the market. The fab capacity of each company, and of the overall market, is a factor. Currently, the entire world is maxed out on wafer starts. So while it probably saves on masks, validation, etc. to cripple a perfectly functional 16-core die and sell it as an 8-core die, that means an 8-core die's worth of silicon area is now lost. In a situation where every CPU produced can be sold, that lost real estate represents a potential 100% increase in gross margin.

Thoughts on software-defined silicon

Posted Feb 19, 2022 15:19 UTC (Sat) by khim (subscriber, #9252) [Link] (2 responses)

If we lived in a world where every CPU could be sold, then we would see craziness similar to the GPU market, where prices are 2x-3x the recommended price.

And GPU makers don't actually embrace that craziness; they fear it, because they know what comes next: governments will declare cryptocurrency mining a criminal activity, GPU sales will drop through the floor, and they will be selling at a loss for some time.

Fab capacity is strained not because it's impossible to build more fabs, but because it's impossible to do so profitably: build cycles take many years, investments are measured in billions, and unused fabs do not age well.

Thus no, your reasoning doesn't make sense long-term. And CPU/GPU manufacturing is a long-term process: it takes many years to develop a CPU from scratch, and a year or two just to make minor alterations.

Thoughts on software-defined silicon

Posted Feb 19, 2022 16:10 UTC (Sat) by jhoblitt (subscriber, #77733) [Link] (1 responses)

I was just quoted a 26+ week lead time on any Zen 3 EPYC CPU with >= 32 cores from a major manufacturer. I ended up accepting Zen 2 cores in order to cut the estimated lead time in half. However, the price will start to float at the market rate 90 days from the date the quote was issued; I won't even know the final cost until I'm invoiced for the shipment. If this isn't a CPU shortage, I don't know what one would look like.

The GPU and CPU makers are all bidding on the same fab capacity. When AMD reserves wafers at TSMC, that's capacity that is denied to Apple/Nvidia/Intel/etc. and vice versa.

The ability to build new fabs is not unlimited. Bleeding-edge fabs need equipment from ASML, which reports that it and its supply chain are already maxed out. The world is essentially already building new fab capacity at the maximum rate it can get lithography equipment.

The rumors are that TSMC is booked out *years* in advance at the 5, 4, and 3nm nodes. Are you arguing that AMD is going to produce 128-core Zen 4s and then cripple perfect dies down to 16-core parts instead of taping out multiple designs that use the exact same logical blocks, when all that lost real estate could instead have been used for GPUs or ~5-7 additional CPUs?

Thoughts on software-defined silicon

Posted Feb 19, 2022 17:12 UTC (Sat) by khim (subscriber, #9252) [Link]

> I was just quoted a 26+ week lead time on any zen3 epyc cpu with >= 32cores from a major manufacturer. I ended up accepting zen2 cores in order to cut the estimated lead time in half.

That's a nice piece of data. Let's try to decipher it, shall we?

  1. You asked for the latest-and-greatest cores and got a promise to receive them when they are made (you may not know it, but half a year or more is not atypical for manufacturing a complex 7nm chip: they require almost a hundred masks, and applying one mask takes a full day or more, depending on how much load the fab is experiencing). The exact same thing happened with the surprisingly good R300 back when it was hot — yet somehow nobody was bawling their eyes out about a shortage of fabs back then. Everyone accepted that it was a simple management miscalculation.
  2. When you asked for less “hot” chips made by the exact same manufacturer in the exact same fab, you got shorter lead times, apparently because there is a surplus of those in production.
  3. Yet it's very easy to buy consumer-grade chips with Zen 3 cores — the full range, from the lowly Ryzen 5 5600X (which my friend bought in India two days ago) to the Ryzen 9 5950X (which I bought at the same time). Prices are below the recommended prices. Many of these chips are artificially crippled.

Doesn't look like a shortage of CPUs to me, sorry.

> If this isn't a CPU shortage, I don't know what one would look like.

Indeed, you don't know that. When you need to pay 10x or 100x the price to receive a 250nm chip (as some automotive chips are selling for right now) and the situation stays that way for years, then you can say there is a shortage of chips.

Till then it's the market's normal reaction to changes in demand and supply for something that takes years to produce, coupled with customers who naïvely expect to buy the same thing with lead times measured in weeks.

JIT manufacturing, with both its good and bad sides.

> The rumors are that TSMC is booked out *years* in advance at the 5, 4, and 3nm nodes.

Why do you say these are rumors? That's the reality. The latest nodes are always booked years in advance: fabs are extremely expensive, yet they can only charge extra-premium prices for the latest nodes for a few years, so nobody builds spares. You don't even need rumors to confirm something that has always been true.

> The GPU and CPU makers are all bidding on the same fab capacity.

Nope. That's not true precisely because, as you have said yourself, fab capacities are booked years in advance. Essentially they are booked when the fabs are built, or maybe a bit later. If they are already booked, then there is no competition between customers.

> When AMD reserves wafers at TSMC, that's capacity that is denied to Apple/Nvidia/Intel/etc. and vice versa.

No, they just build more fabs.

> Are you arguing that AMD is going to produce 128core zen4s and then cripple perfect dies down to 16c core parts instead of taping out multiple designs that use the exact same logical blocks, when all that lost real estate could have instead been used for GPUs or ~5-7 additional CPUs?

No, I'm saying that if you don't want shortages you don't disrupt markets by printing trillions of unbacked money and using every trick you can imagine to avoid 40-50% inflation… A mechanism where you have to calculate the number of chips you need five years in advance, but where buyers expect lead times measured in weeks, works if you can predict the number of buyers; if you break that mechanism, it stops working. This is completely unrelated to selling crippled CPUs. The Ryzen 5 5600X and Ryzen 9 5900X are still being produced and sold despite the fact that the exact same chips could be sold as the Ryzen 7 5800X and Ryzen 9 5950X.

Turning a 128-core chip into a 16-core chip wouldn't make any sense because AMD embraces a chiplet architecture; you can just use more or fewer chiplets. But if you want to shave off two or four cores... AMD does that, and is happy to sell those parts at discount prices.

Thoughts on software-defined silicon

Posted Feb 19, 2022 19:07 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link] (4 responses)

> It would cost you a lot. If you would stop selling $299 Ryzen 5 5600X (which is, essentially, $449 Ryzen 7 5800X with two cores disabled) and would start selling Ryzen 7 5800X for $399 instead of having these two… you would lose both on people who were ready to spend $449 and on ones who were ready to buy something for $299.

Except that you can undercut Intel and sell Ryzen 7 5800X for $299 and immediately crush Intel. Otherwise Intel will just push you out with cheaper and faster CPUs.

In reality, cores are usually locked because they fail internal QA - this is indeed a perfectly valid strategy.

> I took example from AMD book, not Intel book to show that as competition grows more acute the need to disable features and sell crippled product grows, not diminishes. Market segmentation is powerful tool.

Not following. Once the competition starts biting, there's pressure to unlock more and more features at the low end of the spectrum.

Thoughts on software-defined silicon

Posted Feb 20, 2022 0:40 UTC (Sun) by khim (subscriber, #9252) [Link]

> Except that you can undercut Intel and sell Ryzen 7 5800X for $299 and immediately crush Intel.

Seriously? Immediately crush Intel? Weren't you celebrating the 25% market share which AMD achieved in four years after introducing a CPU that totally kicked Intel's ass?

If you started selling the Ryzen 7 5800X for $299, the only thing you would achieve would be starving your own R&D. Which would mean that your next CPU would be worse than your competitors' CPUs, and you would “crush” yourself instead.

> In reality, cores are usually locked because they fail internal QA - this is indeed a perfectly valid strategy.

How do you know? We know for a fact, from the times when it was possible to enable them, that the cores were perfectly functional back then. We know from sales figures that this is most likely still true today (Ryzen 5 5600X sales are much higher than Ryzen 7 5800X sales). Nope, these cores are not locked because they fail QA; they are locked to be able to sell CPUs at different price points.

> Once the competition starts biting, there's a pressure to unlock more and more features on the low-end of the spectrum.

Where would that pressure come from? It's a zero-sum game. We only have two competitors (in the mobile space there are more, but still not that many). If you stop offering CPUs at different price points, you wouldn't suddenly get more money from selling more CPUs, because you couldn't produce more CPUs at the snap of your fingers: ramping up capacity costs a lot and happens slowly. If you deprived yourself of that extra money, you would just fail to produce enough CPUs, leave money on the table, and lose the next round of the competition.

That's how we ended up with just two manufacturers of x86 CPUs BTW.

Thoughts on software-defined silicon

Posted Feb 20, 2022 0:44 UTC (Sun) by excors (subscriber, #95769) [Link]

> Except that you can undercut Intel and sell Ryzen 7 5800X for $299 and immediately crush Intel.

Why would you want to crush Intel in the lower-end market segment where (per the economy seat analogy) their profit margin is approximately zero? The production cost of your 8-core chip will be similar to their 8-core-with-2-perfectly-good-ones-disabled chip, so you'll make no profit either. And now you can't do any market segmentation yourself, because you're already selling your most powerful chip for no profit. That seems much worse than copying Intel's segmentation strategy and getting a small share of high-margin segments.

Thoughts on software-defined silicon

Posted Feb 21, 2022 4:35 UTC (Mon) by timrichardson (subscriber, #72836) [Link] (1 responses)

In pricing theory, you want to extract the maximum value. If you sell a product at only one price, you are forced to compromise at both ends: some potential customers who would pay above your marginal cost, and so are potentially profitable, don't buy because your offered price exceeds the value they see in the product; and you leave money on the table with customers who see more value in your product than what you charge — they would have paid more if you had asked, but you didn't.

The conventional answer is to create differentiated products at different price points. Intel does this; nothing new. It is commonly accepted that this is something like a happy accident of the variation in how CPUs are made. A comment above says that this is greatly exaggerated, but even if not, the distribution of different working cores is not a random accident: it is a deliberately chosen manufacturing strategy, affected by how the production process is configured. I doubt that Intel or AMD is very surprised by the output they get, and I expect they could tweak their production process to avoid nearly all locked cores, although at the cost of lower total output (I have some manufacturing experience behind that comment, but I think it is not a controversial statement). The difference between accepting binned manufacturing output and achieving the same thing with software seems really invisible to me. I find it ironic that a computer science community is having trouble with the concept of abstracting hardware into software.

Thoughts on software-defined silicon

Posted Feb 21, 2022 13:40 UTC (Mon) by gnb (subscriber, #5132) [Link]

Whether the difference is really invisible depends a lot on the implementation: are the feature enablements being sold liable to expiry or revocation by the vendor? If so, the difference between that and actually owning the feature seems pretty clear-cut. I suspect what is making a lot of commenters on this article uneasy is a suspicion that this is part of a move to a rental model.

Thoughts on software-defined silicon

Posted Feb 20, 2022 0:36 UTC (Sun) by Thomas (subscriber, #39963) [Link]

In other words it will simply be like this: https://i.imgflip.com/65vncp.jpg

Thoughts on software-defined silicon

Posted Feb 21, 2022 3:12 UTC (Mon) by marcH (subscriber, #57642) [Link]

> It is possible (I haven't run the numbers) that they are running an "airfare-style" business model, where:

Breaking news: "premium" products yield higher margins. You've described how pretty much every industry works. The reason SUVs cost more than minivans in the USA right now is not that they cost more to manufacture; it's only that more people want SUVs, etc.

What is special here is the frustration of holding something physical in your hand and owning only _part_ of it. For some curious psychological reason it feels _more_ frustrating than owning _none_ of it (as when, for instance, a device is unusable without a cloud subscription).

Thoughts on software-defined silicon

Posted Feb 19, 2022 8:24 UTC (Sat) by zdzichu (subscriber, #17118) [Link]

Is it different, really? It sounds exactly like unlockable Pentium G6951 from 2010 (mentioned in comments here), but with Linux driver this time.

the name of this "feature" is appalling

Posted Feb 18, 2022 20:50 UTC (Fri) by jhoblitt (subscriber, #77733) [Link] (6 responses)

If there is no FPGA, CPLD, or microcode involved, then the name of this "feature" is badly inaccurate, even by the usual marketing standards. This is nothing more than MSRs to turn features baked into the die on and off. I am skeptical that even "enterprise", let alone "hyperscaler", customers are enthusiastic about having to deal with per-socket license entitlements. AMD is finally going to start shipping AVX512 support; it seems like adding AVX512 to the "e-cores" for a consistent desktop/server FP ABI would have been a much more productive use of engineering time.

the name of this "feature" is appalling

Posted Feb 18, 2022 21:12 UTC (Fri) by developer122 (guest, #152928) [Link]

Can you imagine if we still had to pay per-CPU software licences for operating systems?

https://megatokyo.com/strip/549

the name of this "feature" is appalling

Posted Feb 19, 2022 1:58 UTC (Sat) by flussence (guest, #85566) [Link] (4 responses)

This is the first I've heard of AMD having plans for AVX512. I thought they didn't see it going anywhere?

the name of this "feature" is appalling

Posted Feb 19, 2022 9:36 UTC (Sat) by developer122 (guest, #152928) [Link] (2 responses)

I'm surprised. Intel is killing it off with Alder (lava) Lake.

the name of this "feature" is appalling

Posted Feb 19, 2022 11:32 UTC (Sat) by pbonzini (subscriber, #60935) [Link]

It's still there on server parts, and with growing functionality.

the name of this "feature" is appalling

Posted Feb 19, 2022 13:29 UTC (Sat) by khim (subscriber, #9252) [Link]

Intel overestimated the speed of change in the software realm. Alder Lake has some cores which support AVX512 and some that don't support it yet.

I guess the optimistic thinking was that software would adapt, somehow.

But software was barely able to adapt to cores with different speeds; neither Linux nor Windows is ready to deal with cores with different instruction sets!

Thus it was easier for Intel to disable it rather than try to create some buggy driver to cope with that mess.

the name of this "feature" is appalling

Posted Feb 19, 2022 15:33 UTC (Sat) by jhoblitt (subscriber, #77733) [Link]

It has been widely reported that Zen 4 will finally add AVX512, including leaked screenshots of an AMD manual, though AFAIK there has been no public confirmation from AMD. There are probably enough engineering samples floating around at this point that we will know one way or the other soon. AVX512 complicates the decision of whether or not to use a GPU for FP-intensive code: it is a lot easier to change some compiler flags and add some code annotations than to commit to debugging a CUDA pipeline. Without fancy SIMD instructions, the choice of CPU is less relevant, as the system will probably have one or more Nvidia GPUs installed anyway.

Thoughts on software-defined silicon

Posted Feb 18, 2022 21:06 UTC (Fri) by wittenberg (subscriber, #4473) [Link] (1 responses)

Amdahl made this a feature. You paid for a small machine, and could upgrade it (via software) whenever you needed a faster machine. Since you were renting the machine anyway, this placed only a small administrative burden on everyone. In essence, this is the elasticity that we now praise in cloud computing.

Thoughts on software-defined silicon

Posted Feb 18, 2022 22:45 UTC (Fri) by ejr (subscriber, #51652) [Link]

This.

Doubling down, this is about ensuring cloud providers negotiate appropriate fees. Unless competitors undercut these fees.

So it's a temporary thing. Intel won by nuking these constraints once upon a time.

Thoughts on software-defined silicon

Posted Feb 18, 2022 21:48 UTC (Fri) by MattBBaker (guest, #28651) [Link] (1 responses)

IBM has tried this before in the server market: racks of fully populated machines, where you would install a license in the BIOS to unlock capabilities. Rumor is that the target of this was Amazon, which needed a way to run its website over Christmas without over-buying compute for the rest of the year. Instead, Amazon started AWS and rented out its excess capacity.

It seems unlikely Intel is eyeballing "Matt's desktop". It feels like this will be DOA because anyone that Intel would eye up for this would instead go to ARM for custom chips.

Thoughts on software-defined silicon

Posted Feb 21, 2022 16:20 UTC (Mon) by jhhaller (guest, #56103) [Link]

HPE is doing this today with GreenLake. In order to sell to on-premise customers with a need for variable compute capacity, and to prevent them from fleeing to the cloud, they deploy more hardware and let you activate the additional hardware for a fee. I'm not sure how this works on the back end: whether they have worked out a deal with Intel to get unactivated chips for a lower price, or whether they just expect that people will buy the extra capacity once they have it. But this is the kind of thing they could do with activation codes, to ensure that the processor couldn't be used without activating it.

Thoughts on software-defined silicon

Posted Feb 18, 2022 21:59 UTC (Fri) by kunitz (subscriber, #3965) [Link] (6 responses)

There are a number of details that will be interesting to understand. Those certificates must be specific to a CPU; you don't want customers to run a whole fleet on a single feature certificate. What will be the lifetime of the root certificate? Can it be exchanged? If not, this is an interesting mechanism for implementing planned obsolescence.

I can understand why Intel is doing it. Right now they have to blow fuses to generate different SKUs. In the future they can have one SKU but still do price segmentation.

Thoughts on software-defined silicon

Posted Feb 18, 2022 22:59 UTC (Fri) by jhoblitt (subscriber, #77733) [Link] (5 responses)

Intel will still have to have a huge number of SKUs to make use of the chips with defects in cores, L2, L3, etc.

Thoughts on software-defined silicon

Posted Feb 19, 2022 13:45 UTC (Sat) by atnot (subscriber, #124910) [Link] (2 responses)

This is what chip manufacturers always point to; however, according to people in the industry, the reality is that, especially as the process matures, most dies are able to hit high bins without issues. There isn't really a technical need for more than two or three SKUs per design, as AMD has shown by releasing only three real desktop SKUs in the 5000 series.

Thoughts on software-defined silicon

Posted Feb 19, 2022 15:36 UTC (Sat) by jhoblitt (subscriber, #77733) [Link] (1 responses)

I know next to nothing about IC manufacturing, but couldn't that also be a sign of differences between TSMC's and Intel's defect rates?

Thoughts on software-defined silicon

Posted Feb 19, 2022 23:33 UTC (Sat) by atnot (subscriber, #124910) [Link]

Unlikely, as there are two major other differences:

1. AMD could not easily adjust its wafer allocations to the shortages, as TSMC fab capacity has to be booked far in advance. Intel is far more flexible there.

2. More importantly, unlike Intel, AMD shares silicon between server and desktop processors. Since server CPUs have far higher margins, this means they are going to prioritize those when push comes to shove, as it did last year. In such an environment it makes little sense to launch an SKU for every price point as they usually would.

Thoughts on software-defined silicon

Posted Feb 20, 2022 17:13 UTC (Sun) by willy (subscriber, #9762) [Link] (1 responses)

You're assuming that defects in L2/L3 can only be worked around by disabling noticeable chunks of the CPU. I don't know how Intel handles it, but I'd encourage you to read a paper from HP on how they handled it twenty years ago:

A 500 MHz 1.5 MB cache with on-chip CPU

(there are various free copies of the pdf floating around the net; you don't need to pay for it)

You probably also want to consider what percentage of the die is L3 cache; over 90% on the high end models with a hundred MB of L3 cache.

Thoughts on software-defined silicon

Posted Feb 21, 2022 4:54 UTC (Mon) by willy (subscriber, #9762) [Link]

Sorry, that design didn't feature the redundancy. That was added in the next generation,

https://parisc.wiki.kernel.org/index.php/File:Isscc_cache...

Slide 20 is where they start talking about the yield improvement features.

Thoughts on software-defined silicon

Posted Feb 19, 2022 3:40 UTC (Sat) by k8to (guest, #15413) [Link] (1 responses)

I feel like the natural consequence of "let's put software in everything" is that everything will be full of bugs and security problems.

I'm not really a fan of rentier models, but I think the technical problems that will come are enough to make this a bad idea.

Sure, there are examples of doing this in the mainframe era, but mainframes were not living in our modern world of security attacks. Additionally, the price tags and dev cycles of those systems meant that a lot more attention was given to the implementations, at least in an attention-to-complexity-ratio sense.

Thoughts on software-defined silicon

Posted Feb 21, 2022 9:34 UTC (Mon) by taladar (subscriber, #68407) [Link]

You should read the article. It is not about actual software defined hardware, just about enabling features in hardware with licenses.

Also, do you really think people somehow make fewer mistakes just because they are designing hardware instead of writing software?

Thoughts on software-defined silicon

Posted Feb 19, 2022 6:51 UTC (Sat) by pabs (subscriber, #43278) [Link]

This reminds me of when HDCP support was added to Linux/etc:

https://drewdevault.com/2019/10/07/HDCP-in-Weston.html

Thoughts on software-defined silicon

Posted Feb 19, 2022 7:26 UTC (Sat) by mcon147 (subscriber, #56569) [Link]

Linux isn't required to take anyone's patches, accepting them is a deliberate choice.

Thoughts on software-defined silicon

Posted Feb 19, 2022 8:14 UTC (Sat) by gioele (subscriber, #61675) [Link] (2 responses)

Just for historical context: somewhat similarly to Intel Upgrade Service, Raspberry Pis until version 4 used to come with disabled MPEG-2 and VC-1 hardware acceleration blocks. To enable them you needed to buy a key [1,2]. The firmware took care of uploading the key to the right HW register.

[1] https://codecs.raspberrypi.com/mpeg-2-license-key/
[2] https://codecs.raspberrypi.com/vc-1-license-key/

Thoughts on software-defined silicon

Posted Feb 19, 2022 18:01 UTC (Sat) by ermo (subscriber, #86690) [Link] (1 responses)

Out of curiosity, why was this handled like this?

Was it because the IP for the hardware blocks in question was owned by someone other than the SoC supplier (MPEG LA vs. Broadcom) in this instance, and, due to wanting to keep the BoM as low as possible to hit the intended RPi price point, this was necessary for the RPi Foundation?

Or am I getting it all backwards?

Thoughts on software-defined silicon

Posted Feb 19, 2022 18:24 UTC (Sat) by excors (subscriber, #95769) [Link]

Because of the MPEG LA patent licensing fees. From https://www.raspberrypi.com/news/new-video-features/ :

> One of the things that we had to regretfully dismiss as an option was an MPEG-2 decode licence for every unit. Providing that licence would have raised the price of every Raspberry Pi by roughly 10%

> We’ve spent some months working out how on earth to square this particular circle. A blanket licence for everybody would cost the Foundation money it simply doesn’t have, and not everybody with a Raspberry Pi would use that licence; an individual licence for an individual user to download and use with an individual machine is a surprisingly finickity thing to engineer. [...] But that’s what we’ve done

(They already paid a blanket licence fee for H.264 decode/encode, so that was enabled by default.)

I don't believe the individual licence key is used by the hardware blocks in any way; it's merely verified by the Pi's proprietary firmware before enabling the APIs that make the hardware accessible from Linux. Some naughty people hacked the firmware so the verification function would always return true - it's not particularly secure, but it was apparently good enough to keep the MPEG LA happy.

Thoughts on software-defined silicon

Posted Feb 19, 2022 13:53 UTC (Sat) by b3nt0box (subscriber, #98698) [Link]

IBM has been doing this with mainframes for a long time.

I think that is the area where this is intended to play. Large systems installations where the hardware is never actually "owned" but leased.

Thoughts on software-defined silicon

Posted Feb 19, 2022 14:50 UTC (Sat) by jeffreypmcateer (guest, #140200) [Link]

I'm surprised at the number of people completely ignoring the very real possibility that the lockout implementation will be heavily flawed; Intel CPUs haven't exactly been faithful to their ISA plans in recent years.

I fully expect an exploit to surface within 3-5 years giving everyone access to all of their CPU features, after which point Intel will lawyer up and play the DRM game. In this scenario private cloud gets a huge advantage; it can hide unlicensed CPUs behind legitimate nginx servers, while public cloud has to pay the full cost of licensing each CPU. Private consumers don't even come close to mattering; nobody has the legal capacity to just "sue all our customers" (besides, you lose a lot of customers who were on the fence but still buying).

Thoughts on software-defined silicon

Posted Feb 19, 2022 16:04 UTC (Sat) by felixfix (subscriber, #242) [Link] (1 responses)

Is the key unique to each chip, or the same for them all? Seems likely to leak sooner rather than later.

Thoughts on software-defined silicon

Posted Feb 20, 2022 15:35 UTC (Sun) by Tobu (subscriber, #24111) [Link]

The activation trigger could be something like a CPU identifier wrapped in a signature from a key Intel controls, because why wouldn't they.

Thoughts on software-defined silicon

Posted Feb 20, 2022 11:47 UTC (Sun) by jengelh (guest, #33263) [Link]

>Many of us no longer purchase and run our own servers; we rent them from a cloud provider (and, to the tell the truth, are often better off for it).

With dedicated machines, you get all the features. If a higher-value component is inserted, you get to keep the benefit. Think of storage: drives fail, models go out of production, and the hosting provider's warehouse may be out of stock on a particular historic item. Having a 500GB SSD replaced with a 512GB model comes to mind.

sometimes it's just a sales/negotiation tactic

Posted Feb 21, 2022 10:35 UTC (Mon) by eliezert (subscriber, #35757) [Link]

Over a decade ago I worked for a HW company that had FW support for enabling features with a signed certificate.
On every HW cycle, what would eventually happen is that all vendors who integrated the HW got the company to agree to include a "100% attach rate" as part of the cost.
So there weren't really any certs installed by end users; it was (in my opinion) just something sales people could give away as part of the negotiation.
I recently looked at a product that includes such HW, and to this day it lists these features as optional and enabled by a certificate, even though the certificate comes pre-installed and, as a user, I did not pay anything additional for it.

Thoughts on software-defined silicon

Posted Feb 21, 2022 12:04 UTC (Mon) by bblacksr (guest, #83377) [Link]

Maybe in the long run they may find themselves losing market share to more open hardware like RISC-V or something else.

Thoughts on software-defined silicon

Posted Feb 21, 2022 23:31 UTC (Mon) by mr_bean (subscriber, #5398) [Link] (2 responses)

I'm possibly alone in not seeing a massive amount of harm in this.

IF I can do the equivalent of buying, say, an i5 12600K (currently on sale for about £280 = USD 380) and later upgrading it to an i9 12900K (currently retailing at about 2x the 12600K) by just paying the price differential, then I've potentially saved myself buying a whole new 12900K and having to pass the 12600K on via eBay, or wherever. In that case I think there is a gain for me as well.

I'm much more worried about e.g. Lenovo using the features in AMD Ryzen chips to lock CPUs to Lenovo boards thus ensuring they have almost zero resale value if one happens to want to upgrade the CPU in a Lenovo PC.

Granted, 99.99% of users upgrade at the granularity of "whole PC", but Lenovo's move seems way more sinister than Intel's.

Thoughts on software-defined silicon

Posted Feb 23, 2022 4:28 UTC (Wed) by NYKevin (subscriber, #129325) [Link] (1 responses)

E-waste is definitely bad and nobody (to within experimental error) really enjoys re-selling parts. The concern is that Intel might decide that you only get to have an i9 if you pay them $5 a month or something, and if you stop paying then it goes back to being an i5. As far as I'm aware, there is no law or regulation which unambiguously says they can't do that.

Thoughts on software-defined silicon

Posted Feb 23, 2022 8:38 UTC (Wed) by nilsmeyer (guest, #122604) [Link]

> E-waste is definitely bad and nobody (to within experimental error) really enjoys re-selling parts.

I'm sure there however are people who enjoy having used kit available at a discount.

> The concern is that Intel might decide that you only get to have an i9 if you pay them $5 a month or something, and if you stop paying then it goes back to being an i5. As far as I'm aware, there is no law or regulation which unambiguously says they can't do that.

Yes, and there are a few more things they could do; for example, only sell the most basic feature set (single-core x86-64) and then sell all the additional capability under a monthly fee structure. And they can decide to no longer rent the features out unless you get a new CPU. Getting downgraded to an i5 may not be so bad; getting downgraded to a one-core i3 with no turbo probably renders the machine almost unusable and with zero resale value - though at this point you're likely renting/leasing the whole thing anyway, which means it's no longer really your computer.

Thoughts on software-defined silicon

Posted Feb 23, 2022 13:50 UTC (Wed) by jezuch (subscriber, #52988) [Link]

> And, of course, there are dodgy web sites out there demanding payments for access to their content.

I see what you did there :)

Thoughts on software-defined silicon

Posted Feb 23, 2022 22:34 UTC (Wed) by Trelane (subscriber, #56877) [Link]

I was pretty excited at first. I'm a fan of pushing software to the edge of hardware.

Unfortunately, this isn't Software Defined Silicon so much as Software _Deleted_ Silicon.

Coupling

Posted Feb 24, 2022 18:51 UTC (Thu) by smitty_one_each (subscriber, #28989) [Link]

There may be use-cases for being tightly bound to the vendor at the chip level.

However, if we are unhappy with the risk of remote entities (public or private sector) being able to decide that we don't get to keep operating, then this looks like a tremendous threat vector.

Which invites the question of what open chip manufacturers exist that Linux can target.

Thoughts on software-defined silicon

Posted Mar 1, 2022 12:55 UTC (Tue) by bblacksr (guest, #83377) [Link]

It seems this would be a bad business decision in the long run, with many alternatives possible like Arm or RISC-V, to say nothing of other competitors.

Thoughts on software-defined silicon

Posted May 23, 2022 8:00 UTC (Mon) by littoral (guest, #140523) [Link] (2 responses)

Intel is planning to rip off its customers, by selling them hardware that they can't use unless they pay an additional "ransom".
This kind of ripoff - pioneered, I believe, by IBM in the 1970s, when it had an effective monopoly on certain kinds of peripherals - only works when a company has a monopoly. In a free, perfectly competitive market, the price of a product will be the cost of producing and delivering it, plus the reasonable profit margin the manufacturer needs to stay in business.

It follows that the best defense is to make sure that AMD remains a viable competitor.

Thoughts on software-defined silicon

Posted May 23, 2022 8:33 UTC (Mon) by jem (subscriber, #24231) [Link] (1 responses)

I don't see how this could be seen as a ripoff, if Intel is clear about what you get when you purchase the processor. What worries me is that people are not objecting to monopolistic pricing when you can't see it, but feel ripped off when they find out that a product is hiding additional capabilities that weren't in the purchase agreement in the first place.

I don't think we should be worried about Intel being a monopoly. To me it seems like they are gradually becoming the underdog, with AMD being competitive again. Apple has also proved that it is possible to make Arm processors that are competitive in general-purpose computing, and with increasing distrust between China and the US, I wouldn't be surprised by a flood of powerful Chinese RISC-V chips on the market in the coming years.

Thoughts on software-defined silicon

Posted Jun 7, 2022 18:52 UTC (Tue) by flussence (guest, #85566) [Link]

> I don't see how this could be seen as a ripoff, if Intel is clear about what you get when you purchase the processor.

Intel's customer isn't you as an individual. These things are laundered through the retail and system-builder industry, and those middlemen are under no obligation to be clear or honest about this DRM if it'll make them an extra buck.

The Apple situation just reinforces that: they *do* advertise and sell direct to end users, so they aren't going to intentionally build lemons and cut corners, precisely because they don't have the responsibility-laundering arrangements in place to get away with that.


Copyright © 2022, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds