Where to move secret sauce
Posted Oct 4, 2024 19:38 UTC (Fri) by laurent.pinchart (subscriber, #71290)
In reply to: Where to move secret sauce by marcH
Parent article: Coping with complex cameras
In this context, a "camera" is usually a device made of at least an imaging sensor (the chip with the glass with shiny colours sitting under the lens) and an ISP (a hardware component that performs image processing tasks that would be too expensive for the CPU). When the imaging sensor and the ISP are in different chips, as is the case for the "complex cameras" we're dealing with, there are also physical transmitters and receivers of various types between the two components (MIPI CSI-2 is often involved). Pre- or post-processing steps can also be implemented in DSPs, NPUs, GPUs and/or CPUs as part of the camera pipeline. There needs to be a software component in the system with a global view of the whole pipeline and all the elements it contains, in order to configure them and run real-time control loops.
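To make that last point concrete, here is a minimal sketch of one iteration of such a control loop. The Sensor and IspStats types are hypothetical stand-ins for real driver interfaces (none of the names below come from an actual API); the shape is what matters: the component with the global view reads statistics the ISP computed for one frame and reprograms the imaging sensor for a later one.

```cpp
// Minimal auto-exposure iteration sketch. All types and names are
// illustrative, not from any real driver API.
#include <algorithm>
#include <cstdint>
#include <cstdio>

struct Sensor {                    // imaging sensor controls (hypothetical)
    uint32_t exposureLines = 1000; // integration time, in sensor lines
    uint32_t analogGain = 256;     // fixed-point gain, 256 == 1.0x
};

struct IspStats {                  // statistics the ISP computes per frame
    uint64_t lumaSum;              // sum of luma over the statistics window
    uint32_t pixelCount;           // number of pixels in the window
};

// One iteration: read statistics for frame N, program the sensor for a
// later frame. A real implementation must account for the frames already
// in flight between the sensor and the ISP.
void aeIteration(const IspStats &stats, Sensor &sensor)
{
    const uint32_t target = 128;   // desired mean luma
    uint64_t mean = std::max<uint64_t>(1, stats.lumaSum / stats.pixelCount);
    // Scale exposure proportionally, clamped to the sensor's limits.
    uint64_t next = sensor.exposureLines * target / mean;
    sensor.exposureLines = static_cast<uint32_t>(std::clamp<uint64_t>(next, 1, 4000));
}

int main()
{
    Sensor sensor;
    IspStats stats{ 64ULL * 1920 * 1080, 1920 * 1080 }; // an under-exposed frame
    aeIteration(stats, sensor);
    std::printf("new exposure: %u lines\n", sensor.exposureLines);
}
```

This is exactly the kind of loop that needs the global view described above: the statistics come from the ISP, but the knob being turned belongs to the imaging sensor.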
The imaging sensor can contain firmware, but that's largely out of scope here. It only deals with the internal operation of the sensor (sequencing exposure or read-out of lines, for instance), not with image processing by the ISP. GPUs, DSPs and NPUs, if used in the camera pipeline, can also run firmware, but that's not relevant either from the point of view of the ISP or of the top-level control loops.
> (which do of course run firmware)
Many ISPs don't. They are often fixed-function pipelines with a large number of parameters, but without any part able to execute an instruction set. Some ISPs are made of lower-level hardware blocks that need to be scheduled at high frequency and with very low latency, or contain a vector processor that executes an ISA. In those cases, the ISP usually contains a small MCU that runs low-level firmware. When the ISP is designed to be integrated in a large SoC, that firmware often has a very limited amount of memory and no access to the imaging sensor. For these reasons, the firmware is mostly designed to expose the ISP as a fixed-function pipeline to the OS.
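As an illustration of what "a large number of parameters, but no instruction set" can look like, here is a hypothetical per-frame parameter block for such a fixed-function ISP. The field names and sizes are invented for the example, not taken from any specific driver, but drivers of this kind typically accept one such block per frame and use it to program the hardware.

```cpp
// Hypothetical per-frame parameter block for a fixed-function ISP.
// Field names and table sizes are illustrative only.
#include <array>
#include <cstdint>
#include <cstdio>

struct IspParams {
    // Bayer-domain blocks
    std::array<uint16_t, 4> blackLevel;        // per-CFA-channel offset
    std::array<uint16_t, 17 * 17> shadingGain; // lens shading gain grid
    std::array<uint16_t, 4> wbGain;            // white balance gains

    // RGB/YUV-domain blocks
    std::array<int16_t, 9> ccm;                // 3x3 colour correction matrix
    std::array<uint16_t, 33> gammaLut;         // tone curve lookup table

    uint32_t enableMask;                       // which blocks are active
};

int main()
{
    // The driver would queue one filled-in block like this per frame.
    std::printf("per-frame parameter block: %zu bytes\n", sizeof(IspParams));
}
```

There is no code to download to the device: the OS-side driver simply writes these tables and coefficients into registers or DMA buffers, frame by frame.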
When the ISP is a standalone chip, sitting between the imaging sensor and the main SoC, the MCU integrated with the ISP is usually a bit more powerful and will run the camera control algorithms, taking full control over the imaging sensor. The ISP chip then exposes a higher-level interface to the main SoC, similar to what an imaging sensor with an integrated ISP would expose.
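A rough sketch of that difference in abstraction level, with an entirely hypothetical command set: instead of per-frame parameter blocks, the host asks for outcomes, and the MCU firmware on the ISP chip runs the control loops and drives the imaging sensor itself.

```cpp
// Hypothetical high-level command interface a standalone ISP chip might
// expose to the main SoC, typically over a control bus such as I2C or SPI.
#include <cstdint>

enum class CamCommand : uint8_t {
    SetAeMode,        // auto-exposure: auto / manual
    SetAwbMode,       // auto-white-balance: auto / daylight / ...
    SetResolution,    // output frame size
    StartStreaming,
    StopStreaming,
};

struct CamRequest {
    CamCommand cmd;
    uint32_t arg;     // command-specific argument
};

int main()
{
    // e.g. ask the ISP firmware to handle exposure itself:
    CamRequest req{ CamCommand::SetAeMode, /* auto = */ 1 };
    (void)req;        // a real host would serialize this over the control bus
}
```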
Other firmware can also be involved. Large SoCs often include cores meant to run firmware, and those can usually interact with the entire camera (imaging sensor and ISP). Some vendors implement full camera control in such firmware, exposing a higher-level interface similar to a webcam. There's a big downside to doing so, as adding support for a different imaging sensor, or even tuning the camera for a different lens, requires modifying that firmware. I believe this is done for instance by the Apple M1, as Apple has full control of the platform. More directly relevant for Linux, this kind of architecture is also seen in automotive environments, where the camera is controlled by a real-time OS and Linux then accesses some of the camera streams at a higher level of abstraction.