Color Management (On Linux)

There seems to be a lot of confusion about what color management is, what it is supposed to do, and most particularly how to use it on Linux. While most information below is generically applicable, in cases where I have to be specific I’ll focus on Ubuntu/GNOME/Unity.

The first thing to get out of the way is the simple question of what color management is supposed to do for you. Color management is used to get consistent and reliable results from device to device. So if I take an image with my color managed camera, display it on my color managed display and print it with my color managed printer, it should look nearly the same everywhere. This means it doesn't per se make your image look better (whatever that may mean for you). What color management also can't do for you is make crappy equipment better than it is. Any color management solution always has to work within the bounds of the equipment it is managing. Of course any color management solution tries to compensate for a device's limits as best it can, but there are inherent limits. When these limits are hit, colors aren't accurately reproduced anymore.

Now we need to get some terminology straight. Calibration is modifying a device's characteristics to match a specification (for example changing the brightness of a display). Characterization is recording the device's behavior for correction in software. These terms are often incorrectly used interchangeably (even by me, excuse me when I do). The end result of characterization is a (standard) ICC color profile.

While pretty much any device can be color managed, I'll focus on displays for the rest of this article.

Now to color manage a display you need a device that can “read” (characterize) your display's characteristics. There are two types of devices you can use for this: colorimeters and spectrophotometers. Colorimeters are the most common devices to characterize displays, as they are fairly affordable (100-200 EUR range). Colorimeters do have their limits though: they are in essence just very special purpose digital cameras with only a handful of pixels. While I personally never had any issues, I've read about older colorimeters having trouble with new kinds of display technology like LED backlit displays, and some entry level colorimeters may not work as well with professional wide gamut displays (more on that later). The other option is a spectrophotometer. These devices are rather pricey; entry level spectrophotometers like the ColorMunki Photo are priced slightly below 400 EUR (if you see any device priced significantly lower, it's likely not a true spectrophotometer). Spectrophotometers read a full spectrum of the light they receive, which means they produce a lot more detailed information and are unlikely to be fooled by new technologies. Most spectrophotometers also include a reference light source, which means they can illuminate (for example) paper, so they can be used to profile printers (combined with ink & paper) as well.

So now we'll need to explain some more concepts. First I'll tell you how silly talking about (for example) RGB 245/0/0 really is. Imagine owning a car, and you're stuck with an empty tank. Using your last few drops of petrol you go to a petrol station. Say you live in Europe, and you tell the attendant “I'll have 40”, so the attendant fills up your car with 40 liters of petrol. If someone living in the US says the same thing to an attendant at a US petrol station, they'll get 40 gallons of petrol. So, you'd say, well I did tell you properly, it's RGB… But RGB really doesn't mean anything at all. RGB just tells you that you are defining colors in three components: red, green and blue. It doesn't say anything about what the reddest red is, what the greenest green is, and let's not forget about the bluest blue.

To define this, the concept of colorspaces was created. A colorspace defines the range of colors a device can reproduce, which is also called the device's gamut. RGB colorspaces are defined in terms of the CIE XYZ colorspace, because XYZ encompasses all colors the average human eye can see; every RGB colorspace is a subset of XYZ. More importantly, in the late nineties two of the most important colorspaces were defined: sRGB (by Microsoft and HP), and AdobeRGB (by, erhm… well… Adobe). sRGB was more or less defined as the common denominator of most affordable displays, and these days anything not explicitly defined in a different colorspace is assumed to be in sRGB. AdobeRGB, on the other hand, was defined to encompass many more colors, with its main goal being to cover most colors professional printing solutions can reproduce.
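
If you want to see for yourself how the same numbers mean different colors in different colorspaces, you can play with LittleCMS's transicc utility, assuming you have the lcms2 command line utilities and both profiles installed (the profile names and paths below are just an example and may well differ on your system):

# transicc -i /usr/share/color/icc/AdobeRGB1998.icc -o /usr/share/color/icc/sRGB.icc

Feed it 245 0 0 on standard input and it prints back a different set of numbers: the sRGB values needed to (approximately) reproduce the color that 245/0/0 describes in AdobeRGB.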

Next to defining RGB to be in a colorspace, there is still the issue that the human eye does not perceive light in a linear fashion, so we need gamma encoding to keep images from looking like a murky mess. These days gamma 2.2 is universally accepted as the standard for displays. There are some caveats though: I own a cheap netbook, and its display for example seems to have a native gamma of about 1.8, which means it lacks contrast.
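
As a quick back-of-the-envelope illustration (just the math, nothing you need for calibration): with gamma 2.2 encoding, a linear mid-grey of 50% gets stored as roughly 73%:

# awk 'BEGIN { printf "%.2f\n", 0.5 ^ (1 / 2.2) }'

The display then raises the signal to the power 2.2 again, which brings it back to linear light; if your display's native gamma is closer to 1.8, that round trip no longer matches and the image loses contrast.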

And then there is the issue of whitepoints, since there is no such thing as “just” white. For most purposes a whitepoint of 6500K (this is at least true for both sRGB and AdobeRGB) works well as a standard neutral white. Higher temperatures in Kelvin make a display look more blue (common with laptop displays), and lower temperatures make a display look more yellow.

And last there is the question of luminance, which is a snazzy term for brightness. If your work isn't color critical, just set your display to a comfortable level (usually not too bright); if your work is color critical, it's common to calibrate your display to 120 cd/m2.

That said, there are some common issues to address. As I said, the result of characterization is an ICC profile. ICC profiles usually have the file extension .icc, or .icm on Windows. Depending on the software which generated the profile, profiles can be either version 2 or version 4. At least on Linux (but this is also true for older proprietary software), many programs may not properly apply version 4 profiles, so it's best to stick with version 2 profiles for the time being. Luckily ArgyllCMS, the premier open profiling suite, generates version 2 profiles by default.
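
If you're not sure which version a particular profile is, ArgyllCMS's iccdump tool can tell you; the header it prints near the top includes the profile version (mydisplay.icc is obviously just an example name):

# iccdump mydisplay.icc | head -n 20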

Also, you need to be aware that most web browsers aren't color management aware (Safari & Firefox are the exceptions when properly configured). The W3C specified that “the web” should be in sRGB. This basically means that you should only upload sRGB images to websites; if you upload images that are not sRGB, they may not look as intended to your potential viewers (depending on which browser they use). The common problem is that people upload AdobeRGB images to the web and then get complaints that the images look desaturated (since the web browser assumes them to be sRGB, even though they are not).
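
If you have ImageMagick installed, converting an AdobeRGB image to sRGB before uploading is a one-liner; the file names and the location of the sRGB profile are just examples, adjust them to your system:

# convert photo-adobergb.jpg -profile /usr/share/color/icc/sRGB.icc photo-srgb.jpg

This works because the source image carries an embedded AdobeRGB profile; if it doesn't, you need to pass -profile twice, first to assign the source profile and then to do the actual conversion.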

Now back to display profiling: there are several ways to accomplish this on Linux. I've talked in the past about manually doing this with ArgyllCMS, which is a suite of command line tools. There are however some front-ends available; the two most important ones are dispcalGUI and GNOME Color Manager. Both tools have their own target audience: dispcalGUI caters to advanced users who really know color management inside out, while GNOME Color Manager caters to entry level users and tries to make everything as easy as possible. To be blunt, if everything in this article isn't really obvious to you, your best bet is probably GNOME Color Manager. GNOME Color Manager generally provides sane defaults, and guides you through the process using a Wizard *cough* Druid (or what's-it-called?)…
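
For those who want to go the command line route anyway, a typical ArgyllCMS session looks roughly like this (just a sketch using the targets discussed above, so 6500K, gamma 2.2 and 120 cd/m2; check the ArgyllCMS documentation for the options matching your hardware):

# dispcal -v -qm -t 6500 -g 2.2 -b 120 mydisplay
# targen -v -d3 -f200 mydisplay
# dispread -v -k mydisplay.cal mydisplay
# colprof -v -qm -as -D "My display" mydisplay

dispcal does the calibration (whitepoint, gamma and luminance), targen and dispread generate and measure a set of test patches, and colprof turns the measurements into the actual ICC profile, which you can then install with GNOME Color Manager or with dispwin -I.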

Next some information on the general anatomy of display profiles. Display profiles in particular have three important components: the VCGT, the TRCs and the XYZ matrix. The first bit, the VCGT, also often called the VideoLUT, is a lookup table designed to correct your display's whitepoint and potential aberrations between the R, G and B channels. The VCGT is loaded into your X11 driver, and only works if your driver is in 24 bit mode. When the VCGT is loaded into X11 (usually in the login manager or just after logging in) you should see the colors of the display shift a bit. The VCGT is the only bit of the profile which benefits all applications (as it's applied by X11); the other two parts have to be actively applied by the application (if properly configured, more on that later). Next we have the TRCs, which basically model your display's gamma curves. And last, the XYZ matrix determines what maximum red, green and blue are for your particular display. It's possible to have an XYZ LUT instead to get more detailed correction; however I'd never recommend this, as not all applications properly apply an XYZ LUT.
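
You can see the effect of the VCGT for yourself with ArgyllCMS's dispwin (again, mydisplay.icc is just an example name): loading a profile shifts the display's colors, and clearing the video LUT resets it to linear again:

# dispwin mydisplay.icc
# dispwin -c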

Since the last two parts (TRCs + XYZ matrix) need to be applied by your color management aware applications, those applications need to be properly configured. To make this easier there is something called the XICC specification, which allows an “active” profile to be loaded into X11 as well. This is very rudimentary though, as it just means the file is loaded into the _ICC_PROFILE atom (which is basically like an environment variable), so it can be easily picked up by color management aware applications. Applications most commonly use the LittleCMS library on Linux to apply the TRCs and the XYZ matrix.
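
If you're not running GNOME Color Manager, ArgyllCMS's dispwin can handle this part as well; as far as I know dispwin -I installs the profile, loads its VCGT and sets the _ICC_PROFILE atom in one go:

# dispwin -I mydisplay.icc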

GNOME Color Manager (via GNOME Settings Daemon) makes sure that a profile’s VCGT gets loaded into the X11 display driver, as well as setting the _ICC_PROFILE atom. You can check if the _ICC_PROFILE atom has been properly set using xprop:

# xprop -display :0.0 -len 14 -root _ICC_PROFILE

It's known that proprietary (nVidia/ATi) drivers can cause problems, and dual head setups can complicate things as well.

Now, some applications do color management by default (assuming the _ICC_PROFILE atom has been properly set); this for example includes Eye of GNOME and Darktable. Other applications, like Firefox and GIMP, seem to ignore the _ICC_PROFILE atom by default.

To check if a profile is being applied, you need a good test image to evaluate; I can highly recommend SmugMug's Calibration Print for this. In GIMP's particular case, load this image into GIMP, go to Edit → Preferences, and then to the Color Management section. Check the “Try to use the system monitor profile” box while looking at the image. In most cases you should see a change (if not, use xprop to check the _ICC_PROFILE atom), and more importantly you should be able to distinguish the top gray patches from each other.

Last there is the issue of images that were adjusted on uncalibrated displays, which is true for probably 99% of all images on the web. If the author had a low contrast unmanaged display, he has likely increased the contrast of a particular image; when you take a look at that image on your color managed display (with proper contrast), it may look too contrasty. On the flip side, if the author had a high contrast unmanaged display, he has likely decreased the contrast, and on your color managed display that image may look devoid of contrast. So it's not weird to have discrepancies between managed and unmanaged setups.

With the above text I hope to have shed some light on color management in general and some of the particular issues regarding its use on Linux.