
Good luck

Posted Feb 26, 2025 20:22 UTC (Wed) by malmedal (subscriber, #56172)
In reply to: Good luck by Cyberax
Parent article: Building an open-source battery

> I invested quite a bit of my personal money into three flow battery startups, and they all failed :(

People kept thinking that lithium batteries could not possibly get any cheaper, and then they did just that. I remember a study claiming the theoretical minimum was $300/kWh. (The current price is less than $50/kWh.)

It reminds me of how people kept thinking that silicon could not get any faster and that next year we'd finally get GaAs chips.



Good luck

Posted Feb 26, 2025 21:08 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (1 response)

Yeah, but with li-ion the improvement pathways are clear: better materials to hold on to lithium ions, and better electrolytes. Low power density is not a problem, as the surface area of the electrodes in li-ion cells is huge.

I think that something like a sodium-ion battery will end up dominating the energy storage business.

Good luck

Posted Feb 26, 2025 21:43 UTC (Wed) by malmedal (subscriber, #56172) [Link]

Quite likely. Sodium is much cheaper and sits directly below lithium in the periodic table, so apparently most of the experience from producing lithium batteries is directly applicable.

The only downside is that sodium is heavier, but I have heard rumours that sodium-ion cells can withstand more charging cycles.

Good luck

Posted Feb 26, 2025 22:14 UTC (Wed) by Wol (subscriber, #4433) [Link] (13 responses)

> It reminds me of how people kept thinking that silicon could not get any faster and that next year we'd finally get GaAs chips.

There's a difference between getting faster (a physical constraint) and getting cheaper (an economic constraint).

You cannot clock your standard ATX motherboard faster than 500MHz (or a CPU faster than 1GHz) because of the speed of light. (As people have pointed out to me, there are ways to cheat ...)

But just because we can't (currently) make things cheaper doesn't mean we can't find a way. Big diamonds were expensive; now we can grow them cheaply ... artificial rubies likewise ... making petrol used to be wasteful of oil, which meant diesel was cheaper; now cracking can make petrol as cheaply as diesel ...

But if the Physics says "this won't work", you're stuffed (unless you find a cheat). Don't worry, nature cheats too - we wouldn't have stars if it weren't for a cheat ... :-)

Cheers,
Wol

Good luck

Posted Feb 27, 2025 8:42 UTC (Thu) by PeeWee (guest, #175777) [Link] (12 responses)

> You cannot clock your standard ATX motherboard faster than 500MHz (or a CPU faster than 1GHz) because of the speed of light.

There seems to be a misunderstanding. Of course you can, and we do. CPUs these days top out at around 5GHz, and that is real clock speed, not "cheated". The problem that involves the speed of light is called "clock skew": at such high frequencies, the distances the clock signal needs to travel within the circuitry become very relevant. Just take one clock cycle at 1GHz (forget about rate doubling): that is 1 nanosecond per cycle. And signals in these materials propagate at only around 2/3 of the speed of light in vacuum. This makes it very non-trivial to ensure that the clock signal reaches every subsystem at the same time, and CPU vendors expend significant compute resources matching up clock line lengths to make sure it does.
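
To put numbers on it, here is a back-of-the-envelope sketch (assuming the 2/3-of-c propagation speed mentioned above):

    # Back-of-the-envelope: how far does a signal get in one clock cycle?
    # Assumes propagation at 2/3 of the vacuum speed of light, as above.
    C_VACUUM = 3.0e8          # m/s, speed of light in vacuum
    v = (2 / 3) * C_VACUUM    # m/s, assumed signal speed in the material

    for freq_ghz in (1, 3, 5):
        cycle_ns = 1 / freq_ghz                 # clock period in nanoseconds
        dist_cm = v * cycle_ns * 1e-9 * 100     # distance per cycle, in cm
        print(f"{freq_ghz} GHz: {cycle_ns:.2f} ns/cycle, "
              f"~{dist_cm:.0f} cm of travel per cycle")
    # 1 GHz: 1.00 ns/cycle, ~20 cm; 5 GHz: 0.20 ns/cycle, ~4 cm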

The other problem with increasing clock speeds has to do with parasitic capacitance and resistance: every transistor gate has a parasitic capacitance, and every electrical line has a parasitic resistance. Multiplying the two gives R*C, or just RC for short. That should look familiar to some people, and it has a name: the "time constant" (symbol: τ, the Greek letter tau). It defines how fast one can charge a capacitor and thus has a bearing on how steep the edges of the clock signal are. Ironically, what we draw (idealized) as a rectangular clock signal with infinitely steep edges looks, in reality, more like a sine wave because of this. And if one clocks too fast, the signal might not even reach its full nominal voltage.

One can, of course, increase the voltage, but since that increases power consumption, i.e. heat, there is an upper limit, because the cooling has its limits as well. Power is proportional to the square of the voltage, and a higher voltage also makes it more likely that the transistor gates break down and are destroyed. That is the reason for the 5GHz limit, which is already higher than originally anticipated. There is simply no way to make any more gains in this area with the given materials, other than ever-decreasing incremental ones.
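
As a minimal numeric sketch of that RC argument (the resistance and capacitance values below are invented for illustration, not real process figures):

    # Toy RC time-constant calculation; R and C are invented values,
    # not real process parameters.
    R = 1e3      # ohms, assumed parasitic line resistance
    C = 1e-13    # farads (0.1 pF), assumed parasitic gate capacitance

    tau = R * C               # the time constant, in seconds
    settle = 3 * tau          # node reaches ~95% of nominal voltage after ~3*tau
    f_max = 1 / (2 * settle)  # each half-period must allow the node to settle

    print(f"tau = {tau * 1e12:.0f} ps")                     # 100 ps
    print(f"~95% settle time = {settle * 1e12:.0f} ps")     # 300 ps
    print(f"rough toggle ceiling ~ {f_max / 1e9:.1f} GHz")  # ~1.7 GHz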

Good luck

Posted Feb 27, 2025 11:12 UTC (Thu) by Wol (subscriber, #4433) [Link] (11 responses)

> That is 1 nanosecond per cycle.

That's the point: that is the time required for a signal to travel from one side of a motherboard to the other. So in order for a component to receive a response - ANY response - from a component on the other side of the board, it's impossible for the board to run faster than 500MHz. A chip is an order of magnitude smaller, so it can run an order of magnitude faster.

Getting round the speed of light is what I meant by "cheating". You mention making sure the clock signal arrives everywhere at the same time; CPUs have pipelines; etc. But at the end of the day, if you want two components to talk to each other, you have to assume that that communication cannot happen faster than 500MHz without "cheating" - be it squeezing components as tightly together as possible, predictive pipelines, or all sorts of fancy tricks to give the illusion of faster-than-light communication.

Cheers,
Wol

Good luck

Posted Feb 27, 2025 11:28 UTC (Thu) by PeeWee (guest, #175777) [Link] (5 responses)

You did say that there is a limit of 1GHz for CPUs, which is so obviously wrong that I will leave it at that and not stray further from the subject of the article.

Good luck

Posted Feb 27, 2025 14:51 UTC (Thu) by dskoll (subscriber, #1630) [Link] (4 responses)

The external clock frequency supplied to a CPU is much less than the internal CPU clock frequency; I think it's this external clock Wol was referring to. The internal clock is generated via a clock multiplication circuit and a PLL.
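
The relationship is just multiplication; for illustration (the base clock and multiplier below are plausible but invented figures, not from any datasheet):

    # A PLL derives the internal core clock as a multiple of the external
    # reference clock. The figures are illustrative, not from a datasheet.
    base_clock_mhz = 100   # assumed external reference clock
    multiplier = 50        # assumed PLL multiplier

    core_clock_ghz = base_clock_mhz * multiplier / 1000
    print(f"{base_clock_mhz} MHz x {multiplier} = {core_clock_ghz:.1f} GHz core clock")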

Good luck

Posted Feb 27, 2025 15:07 UTC (Thu) by Wol (subscriber, #4433) [Link] (2 responses)

Plus I screwed up my maths. If a 30cm motherboard requires 2ns for a signal to make a round trip, a chip that is ten times smaller needs one tenth of the time, i.e. 0.2ns - 5GHz.
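
Spelled out (a naive sketch assuming straight-line travel at the vacuum speed of light, using the board and chip sizes above):

    # Naive round-trip limit: one clock cycle must cover a full round
    # trip across the device. Assumes straight-line travel at vacuum c.
    C_VACUUM = 3.0e8  # m/s

    def max_roundtrip_freq_ghz(size_m: float) -> float:
        """Highest clock rate whose period equals the round-trip time."""
        return 1 / (2 * size_m / C_VACUUM) / 1e9

    for name, size_m in (("30 cm motherboard", 0.30), ("3 cm chip", 0.03)):
        print(f"{name}: ~{max_roundtrip_freq_ghz(size_m):.1f} GHz")
    # 30 cm motherboard: ~0.5 GHz; 3 cm chip: ~5.0 GHz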

But if you don't want to stall the bejeezus out of your chip, you can't go any faster than that ... (without pipelining or other "magic"/"cheating", etc.)

Cheers,
Wol

Good luck

Posted Feb 27, 2025 16:56 UTC (Thu) by kleptog (subscriber, #1183) [Link] (1 response)

Just because the motherboard size is significant compared to the wavelength of a clock pulse doesn't mean you can't do higher frequencies. It just means you need to treat the signal more as a directed radio wave than as a digital signal in the normal sense.

10Gb/s Ethernet over copper has multiple bits "in flight" on the cable if it's more than a few metres long.
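
A rough count (naively treating the link as a single serial bit stream; real 10GBASE-T spreads the bits over four wire pairs with multi-level coding, so the exact number differs, but the order of magnitude holds):

    # Bits "in flight" on a cable: naive single-serial-stream model.
    bit_rate = 10e9   # bits per second
    v = 2e8           # m/s, assumed propagation speed in copper (~2/3 c)
    cable_m = 10      # assumed cable length in metres

    transit_s = cable_m / v                # time to traverse the cable
    bits_in_flight = bit_rate * transit_s
    print(f"~{bits_in_flight:.0f} bits in flight on a {cable_m} m cable")  # ~500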

A technical challenge, sure, but not impossible.

Good luck

Posted Feb 28, 2025 9:22 UTC (Fri) by taladar (subscriber, #68407) [Link]

I think you are talking about two different things. Of course you can send messages faster; you just can't expect a response sooner than a speed-of-light round trip allows.

Good luck

Posted Feb 27, 2025 16:10 UTC (Thu) by atnot (subscriber, #124910) [Link]

DDR5 is specced up to 4GHz, PCIe has used frequencies north of that for many years, and USB4 tops out at 8GHz. Commodity RF hardware regularly deals with >50GHz signals. Sure, there are additional considerations when creating synchronous clock domains above a certain size, but the speed of light is far less of an interesting or fundamental limitation than naive math would imply; there are standard techniques for dealing with it. The far more pressing issues are power (shoving electrons into and out of a piece of metal that fast creates a lot of heat) and the fact that your circuit eventually stops behaving like a circuit and more like a waveguide, which is mostly just annoying. Of the three, heat is really the only one that's not a solved problem.

That ignores pipelining

Posted Feb 27, 2025 13:52 UTC (Thu) by farnz (subscriber, #17727) [Link] (4 responses)

You're looking at the single-cycle latency limit, and asserting that it's impossible for the motherboard to cope with multi-cycle latencies. In practice, we can happily use gigahertz signals over kilometers of distance (such as in mobile phone networks) by "pipelining"; you send a request signal, knowing that the response to the signal you've just sent will be multiple clock cycles later. Even better, as long as you allow for propagation delays, you can send a sequence of signals (timed by the clock), to be handled by the remote end as they arrive, in a "pipelined" fashion.
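
A toy comparison of the two disciplines (the latency, clock rate, and request count are illustrative numbers only):

    # Lock-step request/response vs. pipelined requests over a slow link.
    latency_ns = 100.0   # assumed one-way propagation delay
    cycle_ns = 1.0       # 1 GHz clock
    n_requests = 1000

    # Lock-step: send one request, wait out the full round trip, repeat.
    lockstep_ns = n_requests * (2 * latency_ns + cycle_ns)

    # Pipelined: keep one request per cycle in flight; pay the round trip once.
    pipelined_ns = 2 * latency_ns + n_requests * cycle_ns

    print(f"lock-step: {lockstep_ns:,.0f} ns")   # 201,000 ns
    print(f"pipelined: {pipelined_ns:,.0f} ns")  # 1,200 ns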

Where size becomes an issue is where latencies are critical - if a device is small, everything can be assumed to happen in a single clock cycle, but as the scale increases, the minimum latency also increases. You thus have an extra layer of complexity when clocks increase, because you can no longer place devices freely; you have to allow for latency issues.

But all this actually means is that various simplifying assumptions that hold at 20 MHz stop holding at 200 MHz, and simplifying assumptions that hold at 200 MHz don't hold at 200 GHz. Instead, as the frequency increases, the amount of real-world physics you have to take into account also increases.

That ignores pipelining

Posted Feb 28, 2025 9:26 UTC (Fri) by taladar (subscriber, #68407) [Link] (3 responses)

There is a very real limit, though, to how much work the average program can do without at some point needing the result of a previous calculation. At that point you do need the response to your earlier request, regardless of how you juggled the instructions around. Most programs, other than trivial things like pure math (e.g. digits-of-pi calculations), can't keep calculating without communicating with other components.

That ignores pipelining

Posted Feb 28, 2025 10:19 UTC (Fri) by farnz (subscriber, #17727) [Link] (2 responses)

That's what I was getting at when I wrote:
> Where size becomes an issue is where latencies are critical - if a device is small, everything can be assumed to happen in a single clock cycle, but as the scale increases, the minimum latency also increases. You thus have an extra layer of complexity when clocks increase, because you can no longer place devices freely; you have to allow for latency issues.

As frequency increases, you quantise the possible latencies of an operation into more buckets; a signal travelling 2 cm in a typical chip takes about 0.1 nanoseconds (the speed of light in a silicon chip is about two-thirds of that in a vacuum). If your clock cycle time is 1,000 nanoseconds, then it's easy to ensure that everything in your chip is at most half a cycle time away from everything else.

But if your clock cycle time is now 0.2 nanoseconds (5 GHz), you have to place the latency-sensitive parts with care; you need the things that cannot wait more than 1 clock cycle to be safely under 4 cm apart at their furthest extents, because even 4 cm is too far to be reliable (since logic has its own delays and required timings to meet when the signal enters and exits part of the chip). You can put things that can tolerate a 10,000-cycle delay at 5 GHz on different chips on a motherboard, since a 10,000-cycle delay is 2 microseconds, giving you about 300 meters, even as you put things that cannot tolerate more than 1 cycle of latency 2 cm apart.
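
The same quantisation as a quick sketch (assuming the ~2/3-of-c on-chip propagation speed above; the logic delays that eat into the margin are ignored):

    # Reachable distance for a given cycle budget at 5 GHz, assuming
    # propagation at ~2/3 of vacuum c; logic delays are ignored, so
    # real budgets are tighter.
    v_mm_per_ns = 200.0   # 2e8 m/s expressed in mm/ns
    cycle_ns = 0.2        # 5 GHz clock period

    for budget_cycles in (1, 10, 10_000):
        reach_mm = v_mm_per_ns * cycle_ns * budget_cycles
        print(f"{budget_cycles:>6} cycle(s): ~{reach_mm:,.0f} mm "
              f"({reach_mm / 1000:,.1f} m)")
    # 1 cycle: ~40 mm; 10,000 cycles: ~400,000 mm (~400 m)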

Note though, that this doesn't make 5 GHz impossible on a chip that's 80mm by 80mm; it just means that you have to work a lot harder on chip layout for a 5 GHz chip than you do for a 2 GHz chip of the same size.

That ignores pipelining

Posted Feb 28, 2025 11:50 UTC (Fri) by malmedal (subscriber, #56172) [Link] (1 response)

> a signal travelling 2 cm in a typical chip takes about 0.1 nanoseconds (the speed of light in a silicon chip is about two-thirds of that in a vacuum)

The speed at which you can signal is governed by the telegrapher's equations.

The answer is complicated and depends on the process technology. In 28nm, I believe the typical speed of signal propagation is around 20% of light speed; there are things you can do on the chip to make it faster, at the cost of space and power.

That ignores pipelining

Posted Feb 28, 2025 16:26 UTC (Fri) by malmedal (subscriber, #56172) [Link]

Hmm, I must have remembered a figure that was way too high.

This article gives a wire delay of 1200 ps/mm for a 5nm node:

https://semiengineering.com/slower-metal-bogs-down-soc-pe...

The speed of light, for comparison, is about 3.3 ps/mm ...
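
In numbers (using the article's 5nm figure; the 10mm wire length is just an example):

    # RC-dominated wire delay vs. pure light-speed travel time.
    wire_delay_ps_per_mm = 1200   # 5nm-node figure from the linked article
    light_ps_per_mm = 3.3         # vacuum light speed is ~0.3 mm/ps

    length_mm = 10  # illustrative wire length
    print(f"{length_mm} mm of wire: {wire_delay_ps_per_mm * length_mm / 1000:.0f} ns")
    print(f"{length_mm} mm at light speed: {light_ps_per_mm * length_mm / 1000:.3f} ns")
    # 12 ns vs 0.033 ns: on-chip wires are RC-limited, nowhere near c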

