
Giving Rust a chance for in-kernel codecs

Posted Apr 27, 2024 13:02 UTC (Sat) by atnot (guest, #124910)
In reply to: Giving Rust a chance for in-kernel codecs by gmatht
Parent article: Giving Rust a chance for in-kernel codecs

> What is less clear to me is why we have in-kernel codecs in the first place. Is it faster to blit video to a Wayland server from the kernel, or do the codecs need low-level access to the GPU for acceleration?

The article touches on it pretty well, I think, but the reason basically comes down to stateful vs. stateless decoding hardware. In the olden days, hardware media decoders were pretty simple as far as programmers were concerned: you just put the bytes from your file in one end and got pixel data out the other. The same goes in reverse for encoding, which is, for example, how those high-resolution IP cameras can be so cheap.
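
For illustration, here's roughly what driving a stateful decoder looks like through V4L2's memory-to-memory interface. This is a minimal sketch, not a working player: the device path is an assumption, and buffer allocation, polling, and error handling are omitted. The ioctls, formats, and structures are the real V4L2 UAPI, though.

/*
 * Stateful model: userspace only shuttles buffers. Compressed bitstream
 * goes in on the OUTPUT queue, raw frames come out on the CAPTURE queue.
 * All parsing and codec state live in the hardware or its firmware.
 */
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
	int fd = open("/dev/video0", O_RDWR); /* device path is an assumption */

	struct v4l2_format fmt = {0};
	fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
	fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_H264; /* raw bitstream in */
	ioctl(fd, VIDIOC_S_FMT, &fmt);

	memset(&fmt, 0, sizeof(fmt));
	fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
	fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_NV12;  /* pixels out */
	ioctl(fd, VIDIOC_S_FMT, &fmt);

	/* ...VIDIOC_REQBUFS + mmap omitted; the decode loop is then just: */
	struct v4l2_plane planes[1] = {0};
	struct v4l2_buffer buf = {0};
	buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
	buf.memory = V4L2_MEMORY_MMAP;
	buf.m.planes = planes;
	buf.length = 1;
	ioctl(fd, VIDIOC_QBUF, &buf);   /* feed compressed bytes */

	buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
	ioctl(fd, VIDIOC_DQBUF, &buf);  /* collect a decoded frame */
	return 0;
}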

However, among other things, implementing an entire complex codec, including the file-parsing logic, this way is pretty inflexible and kind of wasteful when you have perfectly good CPU cores sitting there anyway. So in the newer stateless model, you instead implement only the "hot loops" of the codec in hardware (some of which may even be shared by multiple codecs) and rely on the driver to pass in the required state. That requires a much deeper understanding of how the codec works, which can't really be fully offloaded to userspace because, similar to GPUs, the kernel still needs to validate that the potentially dangerous commands it's getting actually make sense before passing them to the hardware.
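
To make that concrete, here's a sketch of the userspace side of the stateless model for H.264. The control ID V4L2_CID_STATELESS_H264_SPS and struct v4l2_ctrl_h264_sps are the real kernel UAPI; the device path and the parameter values are made up, and a real player would bundle these controls together with the slice data into a media request per frame, which is omitted here.

/*
 * Stateless model: userspace parses the bitstream headers itself and hands
 * the resulting state to the driver as extended controls.
 */
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <linux/v4l2-controls.h>

int main(void)
{
	int fd = open("/dev/video0", O_RDWR); /* device path is an assumption */

	/* State userspace extracted by parsing the SPS NAL unit itself
	 * (example values for a hypothetical 1920x1080 High-profile stream). */
	struct v4l2_ctrl_h264_sps sps = {0};
	sps.profile_idc = 100;
	sps.level_idc = 41;
	sps.pic_width_in_mbs_minus1 = 119;      /* 1920 / 16 - 1 */
	sps.pic_height_in_map_units_minus1 = 67;

	struct v4l2_ext_control ctrl = {0};
	ctrl.id = V4L2_CID_STATELESS_H264_SPS;
	ctrl.size = sizeof(sps);
	ctrl.ptr = &sps;

	struct v4l2_ext_controls ctrls = {0};
	ctrls.count = 1;
	ctrls.controls = &ctrl;

	/*
	 * The driver does not trust any of this: before programming the
	 * hardware it has to check these fields against what the decoder
	 * core can actually handle, which is why that logic lives in the
	 * kernel rather than userspace.
	 */
	ioctl(fd, VIDIOC_S_EXT_CTRLS, &ctrls);
	return 0;
}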



