
Looking at Real Time for Linux, PowerPC, and Cell (developerWorks)

developerWorks talks with Paul McKenney about processors, computer history, time slices, games, physics, and Linux. "Paul E. McKenney is a Distinguished Engineer at the IBM Linux Technology Center. He has worked on SMP, NUMA, and RCU algorithms since he came to IBM in the early 1990s. Prior to that, he worked on locking and parallel operating-system algorithms at Sequent Computer Systems. He has also worked on packet-radio and Internet protocols (even before the Internet became popular), system administration, real-time systems, and business applications."


Determinism

Posted Aug 22, 2005 6:07 UTC (Mon) by xoddam (subscriber, #2322) [Link] (1 responses)

Paul seems to be using the word "deterministic" in the sense of
having a guaranteed maximum latency; so Linux is "not yet
deterministic" because certain subsystems can't be guaranteed
to respond (or alternatively, to yield a required resource) in
a given time.

I always thought a "deterministic" system was one that, given
the same inputs on any number of occasions, would produce the
same outputs each time. As soon as you introduce randomness
(from the real world or, in thought-experiments like Schrödinger's
cat-box, from nuclear decay), you don't have a deterministic system
any more.

The concepts are related, but I was confused for a while.

Determinism

Posted Aug 22, 2005 12:00 UTC (Mon) by ncm (guest, #165) [Link]

He's talking about deterministic timing, not output. I.e., does it always take the same amount of time to do the same operation? If you can't get that, you settle for maximum latency instead. With sufficiently blurred vision, the response patterns match, providing you have someplace to wait if the result comes back too quickly.
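The distinction can be made concrete with a toy measurement harness (a sketch only; the function name and workload are invented for illustration). A merely fast system looks good in the average; a system with deterministic timing bounds the worst case:

```python
import time

def latency_profile(op, runs=10_000):
    """Run `op` repeatedly and report (best, average, worst) latency in ns."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter_ns()
        op()
        samples.append(time.perf_counter_ns() - t0)
    return min(samples), sum(samples) / runs, max(samples)

best, avg, worst = latency_profile(lambda: sum(range(100)))
# On a general-purpose kernel, `worst` can be orders of magnitude above
# `avg` - that gap between average and worst case is what real-time
# work tries to bound.
```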

Looking at Real Time for Linux, PowerPC, and Cell (developerWorks)

Posted Aug 22, 2005 15:09 UTC (Mon) by cthulhu (guest, #4776) [Link] (9 responses)

He has some interesting things to say, but what really blew it for me was the utter crap justification for nanosecond timeslices. FM operates at 100 MHz, so we "need" ns timeslices? If that's the case, how does my cell phone deal with 1.8 GHz (rhetorical question)? Or my 802.11a wireless NIC with 5.25 GHz? What matters is not the carrier frequency, it's the signal bandwidth. This is not a justification for nanosecond timeslices.
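The commenter's point can be put in back-of-envelope numbers (the channel widths below are nominal figures, not exact standards values): what sets the required processing rate is how wide the channel is after downconversion, not where the carrier sits.

```python
def nyquist_rate(highest_freq_hz):
    # Nyquist: a real signal must be sampled at >= twice its highest frequency.
    return 2 * highest_freq_hz

signals = {
    # name: (carrier Hz, nominal channel bandwidth Hz)
    "FM broadcast": (100e6, 200e3),
    "GSM phone": (1.8e9, 200e3),
    "802.11a": (5.25e9, 20e6),
}

for name, (carrier, bw) in signals.items():
    rf_rate = nyquist_rate(carrier + bw / 2)  # digitizing the raw carrier
    bb_rate = bw  # complex IQ sampling after analog downconversion
    print(f"{name}: direct RF {rf_rate / 1e6:,.1f} Msps vs "
          f"baseband {bb_rate / 1e6:.2f} Msps")
```

An 802.11a NIC only has to process on the order of 20 Msps once the 5.25 GHz carrier has been stripped off in analog, which is why the carrier frequency alone justifies nothing.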

Looking at Real Time for Linux, PowerPC, and Cell (developerWorks)

Posted Aug 23, 2005 0:11 UTC (Tue) by farnz (subscriber, #17727) [Link] (8 responses)

Given my experience of interviews, I would guess he's talking about the concept of using a baseband ADC and DAC to handle everything, and the interviewer didn't understand well enough to convey this to us.

I can imagine a world where all we use are powerful CPUs, and baseband DACs/ADCs, together with software radio. No IF in the radio, no hardware modulation or demodulation, just software working with baseband signals; I would imagine that this is what Paul was talking about. Given a 20GHz wideband ADC and DAC with appropriate resolution, and "enough" CPU power you could use the same device as an AM radio, an FM radio, a WiFi card, a GSM phone and more, with the number of concurrent radio applications limited by CPU power, not the ADC or the DAC.
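A minimal version of that idea, assuming the samples have already been captured as complex baseband IQ (a pure-Python toy, far too slow for real use; all names and parameters here are invented for illustration): FM demodulation reduces to measuring the phase advance between successive samples.

```python
import cmath
import math

def fm_demodulate(iq, sample_rate):
    # Instantaneous frequency = phase advance between consecutive IQ samples.
    return [cmath.phase(b * a.conjugate()) * sample_rate / (2 * math.pi)
            for a, b in zip(iq, iq[1:])]

# Synthesize 100 ms of complex baseband: a 1 kHz tone FM-modulated with
# 5 kHz peak deviation, then recover it entirely in software.
fs = 250_000         # baseband sample rate, Hz
deviation = 5_000    # peak frequency deviation, Hz
tone = 1_000         # message frequency, Hz

acc, iq = 0.0, []
for i in range(fs // 10):
    acc += math.sin(2 * math.pi * tone * i / fs)   # integrate the message
    iq.append(cmath.exp(2j * math.pi * deviation * acc / fs))

recovered = fm_demodulate(iq, fs)   # tracks deviation * message
```

Swapping in a different demodulator function is a software change; in the hardware radios of today it is a different mixer/discriminator circuit.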

Of course, with modern technology, this is nothing but a dream; however, the end goal is a system without the hardware aided demodulation and modulation your wireless NIC and cell phone use.

Looking at Real Time for Linux, PowerPC, and Cell (developerWorks)

Posted Aug 23, 2005 14:12 UTC (Tue) by mikec (guest, #30884) [Link] (7 responses)

Wow, what a waste of power! I can understand certain (i.e. rare and not in your house) applications benefiting from extremely flexible baseband processing of RF, but using multimillion-gate ASICs and megabytes of software to model what a cheap analog circuit does to downconvert 802.11 makes very little sense to me...

Sounds like more hot chips, exotic cooling, low battery life and generally a waste of electricity.

That is not to say that there are not applications for this, but "hardware aided demodulation" makes the same sense for RF applications as "hardware acceleration" does for video processors...

Somewhere along the way, the allure of "just do it in software" seems to have drowned out actually solving the problem in the most cost effective, efficient and robust manner. There is a cost to this (See above - power consumption).

Looking at Real Time for Linux, PowerPC, and Cell (developerWorks)

Posted Aug 23, 2005 14:31 UTC (Tue) by farnz (subscriber, #17727) [Link] (6 responses)

Like I said, not sensible with modern technology.

On the other hand, if it becomes possible to process a 20GHz wide signal at a sane resolution with a low-power CPU (5-10W), it may be worthwhile doing everything in software; the NRE for hardware is not falling, and the risk of bugs is climbing.

And before you say that this won't happen, there was a point where decompressing video on a mobile phone in software was deemed impossible. Now, some phones do just that (no hardware acceleration involved). The tradeoff is development cost versus power consumption, and software has some major development cost advantages over hardware.

Looking at Real Time for Linux, PowerPC, and Cell (developerWorks)

Posted Aug 23, 2005 14:55 UTC (Tue) by mikec (guest, #30884) [Link] (5 responses)

The trouble is that, regardless of the technology, the number of transistors in method "a" (software) vs method "b" (downconvert with analog) will always remain greatly skewed (by many orders of magnitude) towards the software baseband solution.

Even assuming some other form of "switch" comes into being this will still be true...

Phones have indeed managed to squeeze a surprising amount out of their processors, but my point is that they could run far longer if they were using 10X less power again from where they are now...

5-10W is still a lot of power to dissipate and store... It is a downright offensive amount of power for downconversion :-)

As far as cost advantages, I hear that theory every day, but what I see in practice is lots of shrinking margins, product delays and buggy products.

I am not saying that there is not a great deal of utility in using software to solve some traditionally hardware problems. My point is that this is being treated as a panacea right now at the expense of _REAL_ innovation and non-incremental (that is greater-than incremental) improvements in technology.

Looking at the shrinking enrollment in EE courses, and at how even those who do enroll find themselves getting all the way through without ever dirtying their hands with anything below the "digital abstraction", I think there is a more compelling reason for this shift than the theoretical economic argument - the sad fact is we are running short of people who know how to do this stuff...

Full disclosure: I actually work almost exclusively in the "digital abstraction" now, and even there I spend more of my time writing software than designing hardware - and even when I do design hardware, I use software languages. But, in a past life, I worked with high-speed mixed signal, and I saw just how few people there are in that arena who really know what they are doing...

Looking at Real Time for Linux, PowerPC, and Cell (developerWorks)

Posted Aug 23, 2005 15:08 UTC (Tue) by farnz (subscriber, #17727) [Link] (4 responses)

Sure, the number of transistors in a software solution is higher. So's the number of transistors in a mixed solution (FPGA, or combination ASIC and CPU). So long as software is cheaper to develop than an FPGA, or an analogue ASIC, there will be benefits from going to a software-only approach.

And my 5-10W figure was not "just for downconversion". It's for the complete software radio. The big benefit is that you can turn out hundreds of millions of identical cheap boards, and just change them into WiFi cards, or GSM cards, or FM tuner cards, or whatever by fitting the right aerial and downloading the right software.

Sure, a mobile phone could run far longer if it had optimised, dedicated hardware to do everything; however, it is already uneconomic to do that. Instead, several mobiles share one board, and the differences are clock speeds (tolerances), and software.

Looking at Real Time for Linux, PowerPC, and Cell (developerWorks)

Posted Aug 23, 2005 15:24 UTC (Tue) by mikec (guest, #30884) [Link] (3 responses)

I was not arguing for removing the processor... I carry a phone with a full-blown OS in it and I really like it... (my comment on 5-10W came with a "wink", but clearly on the web no one can hear your sarcasm)

I am simply extending the argument made for hardware-accelerated video to RF processing. Particularly with something like cell phones, where the frequencies are well known and the bands narrow, you can accomplish what you describe and still run Linux on it...

Right tool, right job...

In essence the solutions I see today appear just like prototypes of old... you get it working without concern for final power consumption and unit cost then you whittle it down to the final product which is dedicated to its purpose (allowing for "purpose" to be defined as broadly as you like)...

You sell a lot of them while you are prototyping the next one...

Looking at Real Time for Linux, PowerPC, and Cell (developerWorks)

Posted Aug 23, 2005 17:15 UTC (Tue) by farnz (subscriber, #17727) [Link] (2 responses)

If you think cell phone bands are narrow, go work out what you'd need for a universal phone with all network types currently in use (both cellular and simple cordless), plus WiFi and Bluetooth, bearing in mind that W-CDMA channels can be 2MHz wide, and that you may want to communicate on multiple channels. At a rough look, you need to be able to tune to several bands between 400MHz and 5.5GHz, channel hopping in some of the bands, and coping with channel widths from a mere 16kHz all the way up to 2MHz. The complexity of the hardware to do this is high, and the question is whether it's better value (taking into account the market size) to provide a handset with a wideband DAC and ADC and some serious processing (maybe done on a traditional CPU, maybe with an FPGA), or several small radios, each set for one frequency band.

Now add to this that if (say) digital TV on the move becomes a wanted feature, your hardware approach means a redesign of the hardware, and trashing existing units. My software approach involves reflashing the physical hardware with a new image. Further, when 802.11n comes out, I've got a fighting chance of being able to run that in software too; hardware needs a redesign. Depending on the cost of making a unit, the cost of designing a unit, and the number shipped, the optimal point can be anywhere between a pure software solution (the wideband DAC and ADC with a beefy processor), a pure hardware solution (think crystal radios, for example), or some hybrid. At the moment, processor performance isn't high enough to make pure software viable in any niche; as this changes, we'll see a brief flood of pure software products, then a swing back to mixed devices.

This is analogous to your video acceleration thing; something like a VT100 is a text-mode video accelerator with a keyboard. We went from that to framebuffers (almost pure software), and now we've swung back to a mixed architecture (but note that we accelerate very different things now).

Looking at Real Time for Linux, PowerPC, and Cell (developerWorks)

Posted Aug 23, 2005 19:13 UTC (Tue) by mikec (guest, #30884) [Link] (1 responses)

I meant that each operating band is relatively narrow... Given the cost (or lack thereof) of the analog solution, you can easily cover that range and more, and even do so "on-chip" so that it is part of the processor... That is, you have N analog hard macros for handling whatever range you like, all highly flexible and programmable - all they do is downconvert, so they don't need to be very complex...

The argument that you can "future proof" hardware seems to sell a lot of people but if you look at real behavior and product life it is a "sales pitch" not a reality... When it comes down to it, most consumer electronics simply don't survive long enough to be moved from one generation of behavior to the next...

But, all that does not really matter, I am knowingly disagreeing with a good portion of the world in my statement that the balance of hardware vs software does not head in only one direction... It is somewhat sinusoidal as technologies, skills and applications move forward in time... We are WAY over to the software side now and I expect that product margins, support costs, development costs and ultimately power consumption are going to be the drivers of this balance back toward the middle...

Intel finally ending its seemingly endless pursuit of clock rate at the expense of everything else is a great indication (at least to me) of this trend. As is the proliferation of "appliances" (such as TiVo, Linksys, etc.) which, when you open them up, are low clock-rate embedded processors coupled with hardware acceleration for the compute-intensive operations (mpX codec, IPsec, HDTV, etc...)

These tasks were previously accomplished in software - once they stabilized, the hardware showed up to do it faster, cheaper, cooler...

The same can be said for RF processing... Although it often seems to go through 3 phases:
1. RF too fast for baseband -> CPU + external
2. RF possible at baseband -> CPU only
3. RF standardized -> CPU + RF ASIC (which may be embedded in the CPU)

There is another interesting aspect of this, which is that the more is accomplished in hardware, the less undue control software vendors have over your use of it... It is amazing the clarity a little plastic or ceramic packaging can bring to the consumer/producer relationship...

Looking at Real Time for Linux, PowerPC, and Cell (developerWorks)

Posted Aug 25, 2005 3:39 UTC (Thu) by dlang (guest, #313) [Link]

a software radio has the huge advantage that it can receive multiple signals at the same time.

with hardware you have to have a physically separate receiver for each signal you want to receive.

there are a lot of situations where you have a lot of radios in one place to monitor lots of signals (and a scanner doesn't do the job; it hops between channels with one receiver, so it still only listens to one signal at a time)
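The simultaneous-receive point can be sketched in a few lines (a toy, assuming an already-digitized capture; a real receiver would use a proper filter bank or FFT channelizer): the same sample stream is examined once per channel of interest, with no extra front-end hardware.

```python
import cmath
import math

def channel_power(samples, sample_rate, freq_hz):
    # Single-bin DFT: average amplitude at one frequency in a shared capture.
    # A software receiver can run this (or a full demodulator) once per
    # channel, all against the same antenna samples.
    w = -2j * math.pi * freq_hz / sample_rate
    acc = sum(s * cmath.exp(w * i) for i, s in enumerate(samples))
    return abs(acc) / len(samples)

fs = 48_000
# One 100 ms capture carrying two simultaneous "stations" at 1 kHz and 7 kHz.
capture = [math.sin(2 * math.pi * 1_000 * i / fs)
           + 0.5 * math.sin(2 * math.pi * 7_000 * i / fs)
           for i in range(4_800)]

strong = channel_power(capture, fs, 1_000)  # ~0.5  (amplitude / 2)
weak = channel_power(capture, fs, 7_000)    # ~0.25
empty = channel_power(capture, fs, 3_000)   # ~0    (nothing transmitted there)
```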

in addition, software radios can handle new types of signal encoding with just software changes, including spread spectrum sideband, etc that would require hardware redesigns for traditional equipment.

and finally, there are some things that just can't be done reasonably in hardware that software makes possible. the advances in telecom speeds are mostly due to the audio-speed circuits being moved from hardware to software; the software has made it possible to adapt the signal to the lines (and detect the intended signal on the lines) to a degree that was considered physically impossible not too long ago, and these changes have made much of the broadband service that people use possible. we really don't know what will end up happening as this scales up to higher frequencies, but past experience makes it clear that it will be things that we can't imagine today.

and before you say that hardware radios give you more flexibility, consider that you can't purchase a scanner in the US to receive some frequencies because the hardware to do so has been outlawed (cellular freqs, for example). even excluding these limits, you would be in just as much need to hack the hardware as you would the software of a software-controlled radio if the manufacturer decided to put limits in there.

besides, the original article just listed that as one of the things that could be done if performance were to improve that much, not that it was the only reason to improve performance to that level.


Copyright © 2005, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds