The growing image-processor unpleasantness
Posted Aug 18, 2022 16:43 UTC (Thu) by farnz (subscriber, #17727)
In reply to: The growing image-processor unpleasantness by tialaramex
Parent article: The growing image-processor unpleasantness
UVC pushes all of the sensor-to-useful-format conversion into firmware inside the camera: raw sensor data goes into the firmware, H.264 or similar comes out. Something like the IPU6 instead hands you the raw sensor data, lets you apply processing steps under software control, and then lets you do whatever you like with the resulting stream of data.
Something like the IPU6 exposes a lot more control, but that extra control isn't necessarily useful; your UVC camera has an IPU6 equivalent inside it, plus the hardware and firmware to encode to H.264, you just can't control the processing the camera does. At an absolute minimum that processing must demosaic the raw sensor data, but you'll also want things like dynamic-range processing, exposure control and more.
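To make the "at an absolute minimum, you need to demosaic" point concrete, here is a minimal sketch of a demosaic step in Python with numpy; the synthetic frame and the crude 2x2-block interpolation are purely illustrative, not how the IPU6 or any camera firmware actually does it:

# Minimal sketch of one processing step a UVC camera's firmware performs
# internally: demosaicing a raw Bayer (RGGB) frame into RGB.  Illustrative
# only -- real ISPs/firmware use far better interpolation plus denoising,
# lens shading correction, white balance, tone mapping, and so on.
import numpy as np

def demosaic_rggb(raw):
    """Very crude 2x2-block demosaic: each Bayer quad becomes one RGB pixel."""
    r  = raw[0::2, 0::2].astype(np.float32)   # red sites
    g1 = raw[0::2, 1::2].astype(np.float32)   # green sites (even rows)
    g2 = raw[1::2, 0::2].astype(np.float32)   # green sites (odd rows)
    b  = raw[1::2, 1::2].astype(np.float32)   # blue sites
    g = (g1 + g2) / 2.0
    return np.stack([r, g, b], axis=-1)       # shape (H/2, W/2, 3)

# Synthetic 10-bit raw frame standing in for real sensor output.
rng = np.random.default_rng(0)
raw = rng.integers(0, 1024, size=(480, 640), dtype=np.uint16)

rgb = demosaic_rggb(raw)
print(rgb.shape)   # (240, 320, 3)

A real pipeline would use much better interpolation and add the other steps mentioned above (white balance, dynamic-range processing, exposure control) before the frames ever reach an encoder.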
Posted Aug 18, 2022 17:57 UTC (Thu) by mss (subscriber, #138799) [Link] (3 responses)
I suspect, however, that due to insufficient IPU capabilities at least part of the proprietary processing must happen on the CPU, and that's where things become messy.
Anyway, I think just providing raw sensor output (much like a RAW file from a DSLR) would be a step in the right direction towards developing the required open-source processing.
Posted Aug 19, 2022 3:54 UTC (Fri) by xanni (subscriber, #361) [Link] (2 responses)
Google’s New AI Learned To See In The Dark! 🤖
I'd rather have a camera that provides access to the raw data so I can implement these new algorithms in open source code and get capabilities a generation ahead, instead of being stuck with whatever they implement in the firmware.
Posted Aug 21, 2022 3:15 UTC (Sun) by bartoc (guest, #124262) [Link] (1 response)
Ditto on the depth of field simulation (although I wonder how accurate it is); it's really hard to get a shallow depth of field with those tiny sensors, given how the lens geometry ends up. I wonder how it compares with light field cameras, which could produce some images that would be really, really, really hard to get with a normal camera.
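For a sense of the scale involved, here's a back-of-the-envelope depth-of-field comparison using the standard thin-lens/hyperfocal approximation; the focal lengths, aperture, crop factor and circle-of-confusion values are illustrative assumptions rather than measurements of any particular phone:

# Back-of-the-envelope comparison of depth of field for a phone-sized sensor
# versus full frame at the same framing.  All numbers are illustrative
# assumptions; the standard hyperfocal-distance approximation is used, so
# treat the results as order-of-magnitude only.

def dof_mm(focal_mm, f_number, coc_mm, subject_mm):
    """Total depth of field (mm) via the hyperfocal-distance approximation."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    if subject_mm >= hyperfocal:
        return float("inf")   # everything from the near limit onward is sharp
    near = hyperfocal * subject_mm / (hyperfocal + subject_mm)
    far = hyperfocal * subject_mm / (hyperfocal - subject_mm)
    return far - near

subject = 2000.0  # subject at 2 m, in mm

# Full-frame camera: 26 mm lens at f/1.8, circle of confusion ~0.030 mm.
print("full frame:", round(dof_mm(26.0, 1.8, 0.030, subject)), "mm")

# Phone camera with the same field of view: assuming a ~7x crop, so a ~4 mm
# lens at f/1.8 and a correspondingly smaller circle of confusion (~0.0043 mm).
print("phone:     ", round(dof_mm(4.0, 1.8, 0.0043, subject)), "mm")

With these assumed numbers the full-frame camera holds roughly 0.65 m in focus at a 2 m subject distance, while the phone holds tens of metres, which is why phones end up simulating shallow depth of field computationally.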
Posted Sep 13, 2022 10:18 UTC (Tue) by nye (subscriber, #51576) [Link]
The kind of view synthesis described in this paper requires images taken from multiple angles, and uses all the combined data to generate a radiance field - basically a model of where the light is in the scene, its direction, and its value. Once you've done that, you can generate a new synthesised view from any angle, including an angle which matches one of the input images.
I'm not sure how the various NeRF methods currently compare to more traditional light field methods, but this is an area of rapid research and I would expect any answers to get stale very quickly.
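The rendering half of that is straightforward to sketch: once you have a radiance field (however it was obtained), each pixel is produced by sampling the field along the camera ray and alpha-compositing the samples. Here's a toy numpy version, with a hard-coded "field" standing in for a trained model:

# Minimal sketch of the volume-rendering step used by NeRF-style methods:
# sample points along a camera ray, query the field for density and colour
# at each sample, and alpha-composite them into one pixel.  The "field"
# below is a made-up toy function standing in for a trained model.
import numpy as np

def toy_field(points):
    """Stand-in radiance field: a fuzzy red ball of radius 0.5 at the origin."""
    dist = np.linalg.norm(points, axis=-1)
    density = np.where(dist < 0.5, 20.0, 0.0)                # sigma per sample
    colour = np.broadcast_to([1.0, 0.2, 0.2], points.shape)  # RGB per sample
    return density, colour

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=128):
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction          # sample positions
    sigma, colour = toy_field(points)

    delta = np.diff(t, append=t[-1] + (t[1] - t[0]))  # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)              # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance
    weights = trans * alpha
    return (weights[:, None] * colour).sum(axis=0)    # composited RGB pixel

# A ray from z = -2 pointing toward the ball should come out mostly red.
print(render_ray(np.array([0.0, 0.0, -2.0]), np.array([0.0, 0.0, 1.0])))

NeRF-style methods spend their effort on learning the field itself; the compositing above is essentially the same discrete volume-rendering formula used throughout that literature.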
The growing image-processor unpleasantness
IOMMU would then take care of protecting the rest of the system from that blob's actions.
After all, we already have open-source DSLR RAW file processors.
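For example, rawpy (Python bindings to the open-source LibRaw) will demosaic and render a DSLR raw file in a few lines; the filename below is made up, and any raw/DNG file LibRaw understands would do:

# One of those open-source RAW processors in action: rawpy (bindings to
# LibRaw) demosaicing and rendering a camera raw file into an RGB image.
import rawpy
import imageio.v3 as iio

with rawpy.imread("example.dng") as raw:          # made-up example filename
    rgb = raw.postprocess(                        # demosaic + white balance + gamma
        use_camera_wb=True,                       # use the camera's recorded white balance
        output_bps=8,                             # 8 bits per channel
    )

iio.imwrite("example.png", rgb)                   # rgb is a plain HxWx3 numpy array
print(rgb.shape, rgb.dtype)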
The growing image-processor unpleasantness
https://www.youtube.com/watch?v=7iy0WJwNmv4
