I was once part of a two-person mobile app company called Figure 8, LLC. We focused on color sampling and color management apps.
One of our prototypes, proposed by my partner Dave Ruel, helped users ensure that graphic images, when printed on any given output device and medium, would look as close as possible to the original image as the user saw it on their own screen.
That prototype leaned heavily on open-source software called lcms, the "little color management system". At a low level it answered a sequence of questions: "For every distinct color C in the input image, what's the nearest color C' that my output device can produce? OK, now what's the nearest color to C' that my iPhone can produce?"
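The core of that question can be sketched as a nearest-neighbor search over a device's set of producible colors. Here's a toy version; real CMS engines like lcms work in perceptual spaces such as CIELAB with proper ΔE metrics, so the plain squared-RGB distance and the tiny made-up palette below are just for illustration:

```python
# Toy gamut mapping: for each input color, find the nearest color the
# "device" can actually produce. Real color management (e.g. lcms) does
# this in a perceptual color space like CIELAB; squared RGB distance is
# only an approximation used here for illustration.

def nearest_color(color, device_palette):
    """Return the palette color with the smallest squared RGB distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(device_palette, key=lambda c: dist2(color, c))

# Hypothetical 4-color "printer" palette.
palette = [(0, 0, 0), (255, 255, 255), (200, 30, 30), (30, 30, 200)]

print(nearest_color((220, 40, 60), palette))  # a reddish input snaps to (200, 30, 30)
```

Chain two of these lookups (image → printer palette, then printer palette → phone palette) and you have the shape of the question the app kept asking.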
That app was centered on human-perceivable colors. It used standard ICC profiles to help map colors between spaces.
(Hang on, I may be getting to a point.)
The Mars Perseverance rover has more than 20 cameras. Some can work as pairs to produce stereo images. Some have custom color filters that can see light frequency bands – colors¹ – that help identify molecules of interest, but that can't be perceived directly by humans.
This is a twist on our Figure 8 colorspace problem: how do you map colors that a human can't see (the input color space) into a human-sensible image (the "device" color space: a person's eyes)?²
That's what false-color images do. "Let's map all of the light in this frequency range to a special shade of purple, which we shall call 'Beverly'." (h/t Rob Hohne)
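A false-color assignment is essentially a lookup from wavelength bands to display colors. Here's a minimal sketch; the band edges and colors are invented (including "Beverly"), since real mappings are chosen per instrument and per science goal:

```python
# Toy false-color lookup: map a wavelength (in nm) to a display RGB color.
# The bands and colors below are invented for illustration.

BANDS = [
    (400, 500, (40, 40, 200)),    # short wavelengths -> blue-ish
    (500, 600, (40, 200, 40)),    # mid -> green-ish
    (600, 700, (200, 40, 40)),    # long -> red-ish
    (700, 1100, (180, 60, 220)),  # near-infrared -> the invented "Beverly" purple
]

def false_color(wavelength_nm):
    """Return the RGB color assigned to the band containing this wavelength."""
    for lo, hi, rgb in BANDS:
        if lo <= wavelength_nm < hi:
            return rgb
    return (0, 0, 0)  # out of range: black

print(false_color(800))  # near-IR, invisible to us, lands in the purple band
```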
This week I started playing with images from the Mars Perseverance Rover's raw image website, learning how to assemble color images from the grayscale raw images. And now I'm wondering if there's something analogous to an "ICC profile for false color image data from planetary probes"™.
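The basic assembly step is just stacking three grayscale frames into the R, G, and B channels of one image. Here's a sketch using tiny nested-list "frames"; real raw frames would also need alignment, scaling, and calibration, all of which this skips:

```python
# Combine three grayscale frames (shot through different filters) into
# one RGB image by using each frame as one channel. Frames are nested
# lists of 0-255 intensities; a real pipeline would align and calibrate
# the frames before combining them.

def assemble_rgb(red_frame, green_frame, blue_frame):
    """Zip per-pixel intensities from three frames into (r, g, b) tuples."""
    return [
        [(r, g, b) for r, g, b in zip(row_r, row_g, row_b)]
        for row_r, row_g, row_b in zip(red_frame, green_frame, blue_frame)
    ]

red   = [[10, 20], [30, 40]]
green = [[50, 60], [70, 80]]
blue  = [[90, 100], [110, 120]]

image = assemble_rgb(red, green, blue)
print(image[0][0])  # -> (10, 50, 90)
```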
There probably is. Affinity Photo's tutorials for its new astrophotography stacks mention typical color mappings for deep space images. (These mappings help make various interstellar gases visible, and so on.) But I haven't found it yet.
Okay, maybe this post doesn't have a point. But it does have a circle of confusion 😉
² One of the Mastcam-Z co-investigators, Melissa Rice, notes that false-color mapping isn't really a goal: "For me, the outcome isn’t even visual, in a sense. The outcome I’m interested in is quantitative." Mu again. ↩