
Video? You mean that chip on the motherboard?

Posted Jun 26, 2008 17:35 UTC (Thu) by faramir (subscriber, #2327)
In reply to: Video? You mean that chip on the motherboard? by dmarti
Parent article: Nvidia Reiterates Position on Closed Source Driver (OSnews)

I've been thinking for a LONG time that there are probably real limits to how much
graphics/audio computing power we need, and that those limits are set by human
physiology (the limits of perception). I think your comment about audio cards actually
supports this idea. When 5-channel, 16+ bit, 48kHz sound is on the motherboard, just
what more do we need? OTOH, imaging seems to be a long way from human limits. Until we
have picture-window-perfect real-time rendering with 3D display capabilities, there
will still be room for improvement that the average user would find useful (or at
least "attractive"). And thus the graphics card will survive.
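
To put a rough number on that, a quick back-of-envelope sketch in C (the
channel/bit/rate figures are the ones above; the arithmetic is mine):

    #include <stdio.h>

    int main(void)
    {
        /* Figures from the comment above: 5 channels, 16-bit, 48kHz. */
        const long channels = 5;
        const long bits_per_sample = 16;
        const long sample_rate_hz = 48000;

        long bps = channels * bits_per_sample * sample_rate_hz;
        printf("Raw data rate: %.2f Mbit/s (%.2f MB/s)\n",
               bps / 1e6, bps / 8.0e6);
        return 0;
    }

That prints 3.84 Mbit/s (0.48 MB/s) -- trivial for any motherboard chip, which
is rather the point.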



Video? You mean that chip on the motherboard?

Posted Jun 26, 2008 20:58 UTC (Thu) by drag (guest, #31333)

Well, look at it this way:

Mobile computing is the wave of the future, and of the present for that matter.
Graphics cards are going to be limited by the amount of power they can draw from a
battery that is small enough for a human to carry around without getting annoyed by
the weight.


Another way to look at it:

For a modern PC, the amount of I/O it can perform is a huge performance bottleneck.
This means the amount of time it takes to shuffle information from one side of the
computer to the other. And it's not just bandwidth, it's latency.

So you have, on your computer, some very high-bandwidth RAM on a PCI Express card. 512 megs.

Then you have your CPU at the far end of that PCI Express bus, with its own memory,
say 4GB, that is somewhat slower. The CPU has level 1 and level 2 caches that are even
closer, because it burns through dozens of cycles every time it needs to access main
RAM. RAM is just that slow compared to the CPU.

So in a modern 3D environment there is a great deal of 3D processing going on, on both
your CPU and your video card. A large amount of texture data, and other things, needs
to be read from main memory over that PCI Express bus into the card's own memory.

That is _slow_. Many hundreds of CPU/GPU cycles. Thousands. Energy being wasted, time
being wasted, no processing being done.
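
A rough sketch of the gap, in C. The latency numbers are my own ballpark assumptions
(roughly period-typical: ~1ns L1 hit, ~60ns main memory, ~1 microsecond PCIe round
trip), not measurements:

    #include <stdio.h>

    int main(void)
    {
        /* Assumed ballpark latencies, not measurements: */
        const double cpu_hz     = 3.0e9;   /* 3 GHz core          */
        const double l1_ns      = 1.0;     /* L1 cache hit        */
        const double dram_ns    = 60.0;    /* main memory access  */
        const double pcie_rt_ns = 1000.0;  /* PCIe round trip     */

        printf("L1 hit:          ~%.0f cycles\n", l1_ns * cpu_hz / 1e9);
        printf("Main RAM:        ~%.0f cycles\n", dram_ns * cpu_hz / 1e9);
        printf("PCIe round trip: ~%.0f cycles\n", pcie_rt_ns * cpu_hz / 1e9);
        return 0;
    }

That works out to roughly 3, 180, and 3000 cycles respectively -- the "hundreds...
thousands" above.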


Now imagine that instead of sticking that powerful GPU and its memory at the far end
of a PCI-E bus, it sat a bit closer. Say... on your CPU die, running at or near the
speed of your CPU? Sharing the same memory controller, sharing the same high-speed
RAM? Instead of wasting thousands of cycles, you're wasting maybe 1 or 2 to
communicate?

And let's suppose that instead of having to use proprietary drivers, with the GPU
exposed through a limited OpenGL API or a shading language, you could simply use ISA
extensions to the x86 instruction set (think mmx/sse/sse2/etc. on steroids) to compile
for and access the GPU directly, so that it could easily be exploited for purposes
other than just graphics processing?
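
SSE already gives a small taste of that programming model. A minimal illustration
with plain, existing SSE intrinsics (this is just today's vector extensions, not any
actual GPU ISA):

    #include <stdio.h>
    #include <xmmintrin.h>  /* SSE intrinsics */

    int main(void)
    {
        /* Four single-precision adds in one instruction -- the kind of
           data-parallel ISA extension imagined above, scaled up into a
           full on-die GPU. */
        __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);
        __m128 b = _mm_set_ps(40.0f, 30.0f, 20.0f, 10.0f);
        __m128 sum = _mm_add_ps(a, b);

        float out[4];
        _mm_storeu_ps(out, sum);
        printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
        return 0;
    }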

That is where things are going. :)



Main memory is high-speed RAM?? Look more carefully.

Posted Jun 27, 2008 10:51 UTC (Fri) by renox (guest, #23785)

You carefully selected the facts which support the GPU-in-the-CPU approach while
omitting the others: the memory bandwidth of a GPU's RAM is in fact far higher than
main memory's bandwidth, and the quantity of RAM on a GPU is much bigger than a CPU's
caches.

So if your textures fit in the GPU's memory, in many cases the standalone GPU will be
faster than the GPU-in-CPU.
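
For concreteness, some assumed circa-2008 ballpark figures (mine, not the parent
comment's): dual-channel DDR2-800 peaks around 12.8 GB/s, while the GDDR3 on a
high-end card like the GeForce 8800 GTX peaks around 86.4 GB/s:

    #include <stdio.h>

    int main(void)
    {
        /* Assumed circa-2008 peak bandwidths, purely illustrative: */
        const double main_mem_gbs = 12.8; /* dual-channel DDR2-800   */
        const double gpu_mem_gbs  = 86.4; /* GDDR3, GeForce 8800 GTX */

        printf("GPU memory bandwidth: ~%.1fx main memory's\n",
               gpu_mem_gbs / main_mem_gbs);
        return 0;
    }

Roughly a 7x gap, which is why keeping the textures on the card wins.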

IMHO, the GPU-in-CPU will be interesting for the 'low end' of the gaming market, which
is also the segment with the biggest number of users (maybe also for some scientific
computations; that remains to be seen).




Video? You mean that chip on the motherboard?

Posted Jun 28, 2008 10:31 UTC (Sat) by man_ls (guest, #15091)

The problem with audio is not the number of bits in the digital part, but the quality of the analog components: they must be well made and perfectly isolated from electrical interference. Otherwise there is buzzing, hissing and all kinds of electrical noise when you record sound. OTOH, the average integrated audio chipset is perfectly adequate for Skype or WoW, so there is no need to spend more.

At least a couple of years ago, if you really wanted professional audio, you needed a discrete card. I don't know if things have changed much.

Video? You mean that chip on the motherboard?

Posted Jul 3, 2008 9:27 UTC (Thu) by ekj (guest, #1524)

Today, if you want high-quality audio, you use a -digital- output from your computer
and let your external amplifier do the digital-to-analog conversion AND the
amplification.

No sense in having half a dozen separate high-quality digital-to-analog circuits when
a single one will do just fine.


