SCALE 8x: Color management for everyone
Posted Mar 4, 2010 13:51 UTC (Thu) by ssam (guest, #46587)
Parent article: SCALE 8x: Color management for everyone
I have an image file where each pixel is given by a red, green, and blue component, and a standard that says what a given combination of R, G, and B should look like (sRGB). So if my pixel is 'lwn orange' #FFCE9C, it is well defined what this should look like.
Now my image viewer tells the X server to paint the pixel #FFCE9C, X passes this value to the graphics driver, and the graphics driver figures out what signal needs to be sent down my DVI/VGA cable to show the colour.
If it's the application doing the translation, then what happens if I take a screenshot? Also, what happens if the application window is spread across two monitors?
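A sketch of what "well defined" means here (my own illustration, not from the comment): the sRGB standard fixes the transfer function that maps an 8-bit code value, like those in #FFCE9C, to a linear light intensity.

```python
def srgb_to_linear(code):
    """Decode one 8-bit sRGB code value to linear light in [0, 1]."""
    c = code / 255.0
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

# 'lwn orange' #FFCE9C as an (R, G, B) triple of bytes
pixel = (0xFF, 0xCE, 0x9C)
linear = tuple(srgb_to_linear(c) for c in pixel)
print(linear)  # red channel is at full intensity; green and blue are partial
```

Any device that knows this curve (and the sRGB primaries) can reproduce the same colour, which is what makes the hex triplet meaningful across machines.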
Posted Mar 4, 2010 14:28 UTC (Thu) by farnz (subscriber, #17727)
sRGB isn't the only standard RGB colourspace. Just off the top of my head, I can think of sRGB, Adobe RGB, BT.709 RGB, and CIE 1931 RGB, and I'm sure there are more out there. I can very easily find images in three of those colourspaces (my digital camera can produce two of them, and I can get a third by grabbing a frame from BBC HD and converting from YUV to RGB).
Plus, many images aren't in an RGB colourspace to begin with; the image may be CMYK (common if it's being prepared for print) or some variation on YCbCr (common if it's from TV). The application needs to tell the graphics driver about all of this information if you're to get a precise match. What's more, not all colour values are 8 bits per channel of RGB; if I had 16-bit-per-channel CMYK, I'd like the graphics driver to know that I want a slightly darker red than the RGB tuple suggests, so that it knows which way to round the error.
This becomes more important on 30-bit displays (where an RGB tuple is three 10-bit values, not three 8-bit values), as the conversion formulae are not exact, and it would be nice to use the extra precision to improve colour reproduction.
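As a hedged sketch of that precision argument (my own numbers, not farnz's): a 10-bit panel has four code values for every 8-bit one, so a conversion result that had to be rounded coarsely at 8 bits can land on a closer 10-bit value.

```python
def expand_8_to_10(v8):
    """Expand an 8-bit code value to 10 bits by bit replication,
    mapping 0..255 onto the full 0..1023 range."""
    return (v8 << 2) | (v8 >> 6)

# Suppose a colourspace conversion produces the real-valued level 0.5004.
# Quantising it to 8 bits leaves a larger error than quantising to 10 bits.
target = 0.5004
err_8 = abs(round(target * 255) / 255 - target)
err_10 = abs(round(target * 1023) / 1023 - target)
print(err_8, err_10)  # the 10-bit error is noticeably smaller
```

This is why the driver benefits from seeing the original high-precision value rather than a pre-rounded 8-bit tuple.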
Posted Mar 4, 2010 15:02 UTC (Thu) by ssam (guest, #46587)
Posted Mar 5, 2010 23:39 UTC (Fri) by roelofs (guest, #2599)
No; any application that does its own alpha-blending, for example, needs to convert the image(s) and background from their native color space(s) to linear (gamma = 1.0), do the compositing, and (usually) convert back. Other transformations (lighting/shading calculations and whatnot in 3D, IIRC) also require linear gamma for correctness. And it's hard to imagine a driver complex enough to support a multi-source display app like a browser ("this block of pixels is linear, this block uses this custom ICC profile, that block is SGI/gamma-1.7, and everything else is sRGB"). Conceivably it's doable, but I'm not sure a driver is the best place for it.
Greg
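A minimal sketch of the linear-light compositing described above (an illustration using the standard sRGB formulas, not Greg's own code): the source and destination are decoded to linear values, blended, and re-encoded.

```python
def srgb_decode(code):
    """8-bit sRGB code value -> linear light in [0, 1]."""
    c = code / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_encode(lin):
    """Linear light in [0, 1] -> 8-bit sRGB code value."""
    c = 12.92 * lin if lin <= 0.0031308 else 1.055 * lin ** (1 / 2.4) - 0.055
    return round(c * 255)

def blend(fg, bg, alpha):
    """Composite one channel of an sRGB foreground over a background."""
    lin = alpha * srgb_decode(fg) + (1 - alpha) * srgb_decode(bg)
    return srgb_encode(lin)

# 50% white over black: blending the gamma-encoded bytes would give 127,
# but the correct linear-light result is noticeably brighter.
print(blend(255, 0, 0.5))  # 188
```

Doing this per window, per region, per profile is the complexity a driver would have to absorb.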
Posted Mar 8, 2010 3:50 UTC (Mon) by psyquark (subscriber, #58373)
All color triplets have a color space, either explicit or implicit. On a standalone, uncalibrated machine, the implicit colorspace is the colorspace of the display. The problems start when sharing files with other computers, each of which has its own implicit colorspace. Mapping a known colorspace to another known colorspace is doable, but mapping an unknown colorspace to any colorspace is not feasible. To fix that, the sRGB colorspace was created; it was defined to be a decent approximation of the standard displays of the time. The web standard became: "If you don't know the space and care about color, treat it as sRGB."
The naive solution is to have the client handle all color management and output pixel values in the display's colorspace. This can be useful when a specific or controlled rendering intent is needed to accommodate special needs. The problem is that client-side management does not handle multiple outputs well. In fact, it is not possible for a client to handle simultaneous display on multiple outputs, such as a software mirror to a projector, or a visual pager application showing a preview on a second display.
The best solution is to have the compositor handle the final colorspace conversion. It knows exactly where each pixel is going to be displayed, because it will be putting that pixel there. There has been some work towards this (though not mentioned in any slides or the writeup) from a Google Summer of Code project in 2008. It was called "Color Management Near X" and, I believe, defined the "net-color" standard. The approach was not to send a colorspace with each pixel, but rather to set window properties defining the ICC profile for the window or for regions of the window. I should note that specifying the client's colorspace to be the same as one of the outputs results in a null transform. That gives the application the same power as before, but allows the compositor to produce sane colors on the other displays. Sadly, I can't find much information on it with Google.
As it stands "color managed applications" can display correctly on my wide-gamut LCD or my normal gamut LCD but not both. The colors will be flat wrong when they appear on the "other" output.
Posted Mar 4, 2010 23:07 UTC (Thu) by spitzak (guest, #4593)
Claiming these are color spaces is equivalent to claiming that JPEG is a color space, rather than a compression algorithm.
Posted Mar 5, 2010 10:02 UTC (Fri) by farnz (subscriber, #17727)
It depends on how you define a colour space. On the one hand, they are just a reversible transform from a known RGB colour space; on the other hand, practical YUV colour spaces, while sharing primaries with practical RGB colour spaces, tend to have different gamuts.
Additionally, they only correspond perfectly under the assumption of infinite precision; in colour management situations, it's of interest to the output driver to know that when I ask for 8-bit BT601 RGB (236, 250, 255), it actually came from YUV (230, 140, 120), and thus if display RGB (237, 251, 255) corresponds to BT601 RGB (237, 251, 255) while display RGB (236, 250, 255) corresponds to BT601 RGB (235, 249, 255), the former is a closer approximation to the desired colour than the latter.
YUV colour spaces differ from JPEG in a very important manner: JPEG is lossy by design, whereas YUV isn't lossy until you subsample. Any loss between an RGB and a YUV colour space occurs purely due to loss of precision.
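The precision point can be sketched in a few lines (my own illustrative code, using the common full-range BT.601/JFIF matrices rather than the exact values quoted above): converting 8-bit RGB to 8-bit YCbCr and back is not the identity, because each direction rounds to integers.

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 (JFIF-style) RGB -> YCbCr, rounded to 8 bits."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return tuple(min(255, max(0, round(v))) for v in (y, cb, cr))

def ycbcr_to_rgb(y, cb, cr):
    """Inverse transform, also rounded to 8 bits."""
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return tuple(min(255, max(0, round(v))) for v in (r, g, b))

# Pure greys survive the round trip, but near-greys generally do not:
# their tiny chroma offsets are destroyed by 8-bit rounding.
mismatches = sum(
    ycbcr_to_rgb(*rgb_to_ycbcr(v, v + 1, v)) != (v, v + 1, v)
    for v in range(254)
)
print(mismatches)
```

Knowing the original YCbCr values therefore tells the output stage which of several nearby RGB codes is closest to the intended colour.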
Posted Mar 4, 2010 17:00 UTC (Thu) by rgmoore (✭ supporter ✭, #75)
That works for most applications that are only interested in displaying graphics; they send the color information (possibly with an ICC profile) to the graphics server and don't worry about anything else. But it's obviously not enough for a program that's supposed to be editing the graphics information itself, e.g. GIMP, Inkscape, etc. Those programs need to understand the color information to be able to edit it properly.
A good example of this kind of problem is one I saw recently concerning image scaling; most image processing applications do it wrong. The problem is that their color information has a gamma of 2.2, meaning that displayed intensity is supposed to be value**2.2. The correct way to scale is to convert the color information to linear values, apply the scaling, and then convert back. Instead, most image processing applications operate directly on the gamma-encoded values, which results in scaled images that are too dark. A properly color-aware application wouldn't make that mistake.
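The scaling bug is easy to demonstrate (my own toy example, using a pure 2.2 power law rather than the exact sRGB curve): averaging two pixels of a black/white checkerboard while downscaling.

```python
GAMMA = 2.2

def average_naive(a, b):
    """What most scalers do: average the gamma-encoded 8-bit values."""
    return round((a + b) / 2)

def average_linear(a, b):
    """The correct way: decode to linear light, average, re-encode."""
    lin = ((a / 255) ** GAMMA + (b / 255) ** GAMMA) / 2
    return round(lin ** (1 / GAMMA) * 255)

# Downscaling a black/white checkerboard to a single pixel:
print(average_naive(0, 255))   # 128 -- displays much too dark
print(average_linear(0, 255))  # 186 -- the correct mid-intensity grey
```

The naive result encodes a displayed intensity of only about (128/255)**2.2 ≈ 0.22 instead of the 0.5 the checkerboard actually emits.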
Posted Mar 4, 2010 23:04 UTC (Thu) by spitzak (guest, #4593)
If, as the original poster said, everybody could assume the image is sRGB, then the scaling algorithm could be designed to correctly scale sRGB. This is much easier than something that can scale "anything".
Also, from everything I have learned about color management, there appears to be a need for a controllable "blending space", with scaling and mixing always done directly on values in that blending space. If the blending space is sRGB, then the scaling is in fact required to produce the "wrong" result; you need to change the blending space to a linear color space for blending to be correct.
Posted Mar 5, 2010 2:02 UTC (Fri) by rgmoore (✭ supporter ✭, #75)
B) The "everything is sRGB" assumption is untrue. Real-world programs have to deal with all kinds of color spaces. Once you have to deal with more than two color spaces (sRGB and linear), the need for real color management becomes much more obvious.
OK, so the application would just need to pass some colour-space metadata along with the pixels.
YUV and YCbCr irrelevant
I have never understood why applications need to care about colour management; surely it could all be done by the graphics driver.
Image scaling problem
A) I'm not sure that it would be any easier to do everything on the raw sRGB data. It's not just image scaling but all aspects of image processing that are easier to do on linear data. It's likely to be easier to write one algorithm to convert sRGB to linear and one to convert it back than to include an implicit conversion in every image processing algorithm. And if you care about correctness (which you obviously do if you're bothering to worry about gamma-applied data), it's going to be much easier to prove that you're doing everything correctly by working on explicitly gamma-corrected data than to count on having the correction in every routine.