Thursday, July 7, 2016

To Gamma or not to Gamma

Directly or indirectly, gamma adjustment is part of most color vision simulation models. Since Gary W. Meyer and Donald P. Greenberg wrote "Color-Defective Vision and Computer Graphics Displays" in 1988, the world of gamma has undergone numerous changes. This article explores the impact of the gamma value on colorblind simulation models, in the past and in the present. First, we need to understand the definition of gamma.

What is Gamma?

In "A Technical Introduction to Digital Video," Charles Poynton states:

"The nonlinearity of a CRT is remarkably similar to the inverse of the lightness sensitivity of human vision. Coding intensity into a gamma-corrected signal makes maximum perceptual use of the channel. If gamma correction were not already necessary for physical reasons at the CRT, we would have to invent it for perceptual reasons."

Electronic standards for gamma correction date back to the inception of color television, when television equipment used analog electronics. The television set used a CRT (Cathode Ray Tube), which had a non-linear output based on the input voltage to the cathode. The gamma of a CRT is 2.35 to 2.55, with 2.5 being the standard value. The power relationship between input voltage and output in terms of lumens is:

output = input ^ gamma

Beyond looking at the gamma of an individual component, we can look at the gamma for the entire system. If the value of gamma is 1.0, then there is a linear relationship between the input and output. When viewing an image, the white objects in the surrounding environment would have the same brightness as those in the image. This is referred to as a "bright surround."

Movies are normally shown in a "dark surround." For this environment, a gamma of 1.5 provides a natural-looking image. If the surrounding objects are visible in the room, then the image is said to be in a "dim surround." The corresponding gamma is 1.25. Thus, depending on the brightness of the surrounding environment, we need a system gamma of 1.0 to 1.5. The gamma for a system is the product of the gammas of each component in the system.

In an analog television system, the two major imaging components are the camera and the television set. Thus, a camera needs a gamma of approximately 0.5 (in actuality, cameras have a gamma of 0.45). The PAL and NTSC television standards were designed to create a system gamma for dim surrounds, with the television set having a gamma of 2.5. For a reasonably short review of the technical details of gamma, see "Gamma FAQ - Frequently Asked Questions about Gamma" by Charles Poynton.
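
Since the gamma for a system is the product of its component gammas, the numbers check out:

system gamma = camera gamma * CRT gamma = 0.45 * 2.5 = 1.125

which is slightly below the 1.25 target for a dim surround.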

Gamma and computers

When computers entered the world of color images, the standard color terminal was a CRT monitor with a gamma of 2.5. Color terminals underwent an evolution from 8 colors, to 16 colors, to 256 colors. The 256-color terminals did not really support 256 unique RGB colors. Rather, the first 16 colors were the same colors as on the 16-color terminals, for backwards compatibility; these terminals only supported 216 RGB colors and 24 grayscale colors. Graphics cards in PCs, on the other hand, actually supported 256 RGB colors, and were connected to monitors with a gamma of 2.5. This distinction is going to be important when evaluating CVD studies.

If we consider a PC used in dim surroundings, we need a system gamma of 1.25. However, the system now has more components. The basic components are:
  • camera gamma - image capture gamma
  • encoding gamma - image is encoded into a file
  • decoding gamma - extracting image from encoded file
  • lookup table (LUT) gamma - the gamma of the frame buffer (optional)
  • screen gamma - the gamma of the display device
We can group some components together, such that:
  • file gamma = camera gamma * encoding gamma
  • display gamma = LUT gamma * screen gamma
Thus,

system gamma = file gamma * decoding gamma * display gamma

The encoding gamma for JPEG files is 2.2. Each pixel in the file is encoded according to the following formula:

encoded pixel = camera pixel ^ 1/2.2

Decoding each pixel is then just a matter of:

decoded pixel = encoded pixel ^ 2.2
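
As a minimal sketch in Java (the helper names are mine, not from any library), the encode/decode round trip for a single 8-bit channel looks like this:

    // Encode a linear channel value (0-255) with the JPEG encoding gamma of 2.2.
    static int encodePixel(int linear) {
        double normalized = linear / 255.0;    // map to the range 0.0..1.0
        return (int) Math.round(255.0 * Math.pow(normalized, 1.0 / 2.2));
    }

    // Decode an encoded channel value back to linear light.
    static int decodePixel(int encoded) {
        double normalized = encoded / 255.0;
        return (int) Math.round(255.0 * Math.pow(normalized, 2.2));
    }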

The gAMA field in a PNG file defines the gamma for the file. While it may contain other values, the most common value corresponds to a gamma of 2.2.

The above components did not appear all at once. It was an evolutionary process, in which operating systems and software development libraries adapted to color images. As discussed in the previous article, Color Appearance Models (CAMs) and Chromatic Adaptation Transform (CAT) matrices also evolved from their inception in 1976.

Gamma and color vision simulation

The first computer simulation model was created in 1988 by Gary W. Meyer and Donald P. Greenberg. In their paper, they use the acronym SML instead of LMS. Their basic assumption is that the SML color space is a linear transformation of the CIE XYZ color space. Meyer and Greenberg based their confusion points on a study by Estévez (see reference 4 in their paper). Some values were changed to eliminate negative results. My tests confirm that their algorithm never produces a negative value, although it occasionally produces an out-of-bounds positive value. The basis for their tests was an extended version of the Farnsworth-Munsell 100 Hue test.

While the Meyer and Greenberg study was published in 1988, it was not until around 1999 that Thomas Wolfmaier wrote a Java applet implementing their model. Two years later, Matthew Wickline published his improved version of Wolfmaier's code. Thus, this code is referred to as MGWW.

In 1997, Hans Brettel, Françoise Viénot and John D. Mollon published "Computerized simulation of color appearance for dichromats." Even though the Meyer-Greenberg paper was published nine years earlier, Brettel-Viénot-Mollon do not reference it. They used a Hitachi color monitor driven by a Silicon Graphics 3x8-color graphics card (512 RGB colors). From the manufacturer of the card, we know that the computer was a Unix-based workstation. While they did not calibrate the gamma of the display, they did calibrate the colors to within 2 nm. The major difference between BVM and MGWW is that BVM converts RGB to LMS and not to CIE XYZ. The BVM algorithm makes no attempt to limit out-of-bound conditions for RGB values.

What about Gamma?

Neither Meyer-Greenberg nor Brettel-Viénot-Mollon mentioned gamma in their papers. Gamma is important to CVD models, because gamma shifts the confusion line. When gamma is increased, the colors shift towards black. Conversely, reducing the value of gamma shifts the colors towards white.

Further, brightness adjustments on the display device affect the gamma. Brighter screens reduce gamma, while darker screens increase gamma. Then you have operating systems, such as those on some Android devices, that allow for different screen modes. Both of these factors are handled by the LUT gamma, which is part of the display gamma. Changes to the display gamma change the system gamma.

The CRT devices used by Meyer-Greenberg and Brettel-Viénot-Mollon had a gamma around 2.5. LCD screens have a gamma of 1.0. Apple Macintoshes originally had a system gamma of 0.82, while Microsoft Windows and UNIX/Linux had a system gamma of 1.0. Starting with Snow Leopard (Mac OS X 10.6), Macintoshes now have a system gamma of 1.0. Web browsers have also shifted to a system gamma of 1.0. With a system gamma of 1.0, the OS is neutral. The display gamma now handles the brightness adjustments based on the surrounding environment.

When working with images in a current high-level language, we work with a bitmap file. In Java, each pixel is typically represented by four 8-bit fields (alpha, red, green, and blue), which is True-Color RGB. In most cases, we are working with file gamma * decode gamma = 1.0. This is true for both JPEG and PNG files.
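
For example, with the common TYPE_INT_ARGB layout in Java, the four fields are unpacked from a single 32-bit integer:

    int argb  = image.getRGB(x, y);     // one pixel from a BufferedImage
    int alpha = (argb >> 24) & 0xFF;    // transparency
    int red   = (argb >> 16) & 0xFF;
    int green = (argb >> 8) & 0xFF;
    int blue  = argb & 0xFF;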

Gamma used in Source Code

Each pixel in an image is separated into its red, green, and blue values. For each value, the following formula is applied:

ColorIn = OrigColor ^ (1 / gamma)

After shifting the color space for one of the CVD types, the following formula is applied to each color value in the pixel:

ColorOut = ColorIn ^ gamma
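
Putting the two formulas together, a minimal sketch of the per-channel flow in Java, where simulateCVD is a placeholder for whichever model (BVM or MGWW) is applied:

    // ColorIn = OrigColor ^ (1 / gamma), applied before the simulation step.
    double colorIn = Math.pow(origColor, 1.0 / gamma);

    // Shift the color space for the selected CVD type (model-specific).
    double shifted = simulateCVD(colorIn);

    // ColorOut = ColorIn ^ gamma, applied after the simulation step.
    double colorOut = Math.pow(shifted, gamma);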

The GIMP image editor, popular on Linux, is based on the BVM model. The gamma value used in its code is 2.1. While the comments in the code reflect an understanding of gamma, the authors do not explain their choice. Although the GIMP code is based on Vischeck, the non-availability of Vischeck makes the GIMP code the common reference BVM implementation. However, ports of the GIMP code often incorporate variations from the base implementation.

In his original code, Matthew Wickline used a gamma of 2.20. In his newer, revised JavaScript code, there is an option menu that provides for selecting values between 1.0 and 2.22, depending on the color terminal and operating system. In his original code, Wickline reverses the above two formulas. His later JavaScript code uses a different approach for gamma, and just applies the following formula after simulation:

finalColor = ((colorOut / 255) ^ gamma) * 255

In his colorblind simulation models, Loren Petrich took a totally different approach. Petrich uses a gamma that ranges from gamma > 0 to gamma = 1. While his comments state that a gamma of zero represents no gamma, his formulas would lead to a divide-by-zero condition for a gamma of 0. His input and output conversions use precalculated values based on the following:

ColorIn = ((OrigColor + 0.5) / 256) ^ gamma
ColorOut = ((256 * ColorOut) / 65536) ^ (1 / gamma)

In Petrich's system, gamma = 0 is the gamma for bright surroundings. What is interesting is that Petrich uses the same formulas for BVM and MGWW. While the general format of his gamma formulas follows the formulas used by Wickline, they are the opposite of those used by BVM.
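
Precalculation is practical because an 8-bit channel has only 256 possible values. A sketch of a lookup table built from Petrich's input formula above (the variable names are mine):

    // Precompute Petrich's input conversion for all 256 channel values.
    double[] inputLut = new double[256];
    for (int c = 0; c < 256; c++) {
        inputLut[c] = Math.pow((c + 0.5) / 256.0, gamma);
    }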

The hardest simulation models to evaluate are those that use pre-calculated values, such as ImageJ. Without any comments, it is impossible to know whether gamma played a role in the pre-calculations.

Conclusions

Digital imaging technology, both hardware and software, has undergone radical changes since the studies by Meyer-Greenberg and Brettel-Viénot-Mollon. Smartphone technology is reaching the point that we can view images in bright surrounds (gamma = 1). Movie theaters have a gamma around 1.5, for dark viewing conditions. PC screens were designed for dim surrounds (gamma of 1.25). We no longer need to adjust colors for CRT monitors with a gamma of 2.5, as device gamma is now handled by the framebuffer LUT.

The screen brightness control for Android outputs a value between 0 and 255, where 255 is equivalent to a gamma of 1, and 0 is equivalent to a gamma of 1.51. Since brightness and screen modes adjust the gamma, accurate simulation requires a gamma that matches the gamma implied by the screen brightness. Version 1.1.0, and higher, of my Colorblind Simulator app includes a gamma setting feature, for which the default setting is 1.

The Pro edition allows the user to vary the gamma, with one additional feature: a setting of 1.51 creates a variable gamma based on the current brightness value. This feature works best when the Auto brightness setting is disabled.
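
As an illustration only, a linear blend between the two endpoints stated above (gamma 1.51 at brightness 0, gamma 1.0 at brightness 255) could look like this; the linear shape is an assumption, not necessarily the curve the app actually uses:

    // Hypothetical linear mapping from Android brightness (0..255) to gamma.
    static double gammaForBrightness(int brightness) {
        double t = brightness / 255.0;   // 0.0 = darkest, 1.0 = brightest
        return 1.51 - t * 0.51;          // 1.51 at brightness 0, 1.0 at 255
    }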

One of the critical requirements for accurate simulation is that the gamma adjustment on the input side cannot create out-of-bound conditions. In testing Wickline's code, I found a regular pattern of extreme out-of-bound conditions. After changing the gamma equations to match those used by BVM, the out-of-bound conditions on input disappeared. While BVM and MGWW still produce different results, those differences are based on other factors. As noted above, Petrich saw this same error in Wickline's code. However, his formulas tend to produce a high number of out-of-bound conditions on the output.

So far, every piece of code tested has required some corrections in gamma. Gamma is important, as the values are critical for accurate simulation. The equipment used today is not the same as the equipment used in the original studies. Operating systems and software libraries now handle much of the gamma correction. The simulation code for CVD needs to keep current with an ever-changing world. The gamma of today is not the same as the gamma of the past.

Sunday, June 12, 2016

Color space and the illusion of color

Just as the universe is not bound to the human construct of three dimensions, the universe does not have color. As Peter Gouras states in "Webvision: The Organization of the Retina and Visual System":

"Color vision is an illusion created by the interactions of billions of neurons in our brain. There is no color in the external world; it is created by neural programs and projected onto the outer world we see. It is intimately linked to the perception of form where color facilitates detecting borders of objects."
Cones do not see color. Cone opsins merely respond to the chemical reaction of the chromophore pigment. To call the cones red, green, and blue is actually a misnomer. While not completely accurate, the terms Long, Medium, and Short (LMS) are better. The reality is that the three different types of cones respond to three different ranges of wavelengths. This information is relayed via the optic nerve bundle to the left and right visual cortices. Information from the left side of the retina goes to the left visual cortex. Conversely, information from the right side of the retina goes to the right cortex. The visual cortices assemble the color image.


What are the wavelength ranges for each cone type? It varies depending on the study, and the methodology used to measure the wavelength and sensitivity. While the following diagram appears in a number of color vision articles, the source of the associated study is not given:


A 1995 study by Williams & Cummins shows the following wavelength ranges:


Just think: 12% of women are tetrachromatic, but with different wavelengths for the fourth cone type. The following diagram is from The Neurosphere:

Since a very small percentage of women have a distinctly different curve for the fourth cone type, we can safely use trichromacy for the general population.

As we look at the wavelength curves for each cone type, we can see that the names red, green, and blue are misnomers. The peak sensitivity of the Long (red) wavelength cone is closer to yellow than it is to red. The green cone has a peak sensitivity in the dark green wavelengths, while the peak sensitivity of the blue cone is closer to violet. The trap is using color models, such as RGB, to define CVD.

Simulation of CVD requires the use of models that reflect the color space of the human eye, or the LMS color space. In 1931, the International Commission on Illumination (CIE) created one of the first mathematical models of the human color space (the CIE 1931 color space). This is known as the CIE XYZ color space and is based on the experiments done by William David Wright and John Guild in the late 1920s. Their experiments resulted in the CIE RGB color space. The CIE XYZ color space is a derivative of the CIE RGB color space. The Y component is the luminance. The Z component is quasi-equal to the S cone response, while the X component is a linear combination of cone response curves chosen to be non-negative. For any given Y (luminance), the XZ plane will provide all chromaticities at that luminance. It is important to remember that the perceived color depends on the luminance.

From the CIE XYZ color space there are transforms to the LMS color space. Given the complex nature of human color vision, there is no universally accepted transform. Instead, Color Appearance Models (CAMs) provide Chromatic Adaptation Transform (CAT) matrices. These matrices (M) are the basis of modern simulation models.

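As an example, the Hunt-Pointer-Estévez matrix (the D65-normalized values are quoted here) converts XYZ to LMS with a single matrix multiplication; a minimal Java sketch:

    // XYZ -> LMS using the Hunt-Pointer-Estévez matrix (D65-normalized).
    static double[] xyzToLms(double x, double y, double z) {
        double l =  0.4002 * x + 0.7076 * y - 0.0808 * z;
        double m = -0.2263 * x + 1.1653 * y + 0.0457 * z;
        double s =                            0.9182 * z;
        return new double[] { l, m, s };
    }
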
The following is a brief summary of the common CAMs. If you are interested in reviewing the actual CAT matrices, you can find them in LMS color space.

  1. CIELAB
    It wasn't until 1976 that the CIE released a CAM to replace the many existing, and incompatible, color difference models. CIELAB became the first color appearance model, and one of the most widely used. The major weakness of CIELAB is that it performs the von Kries transform before converting to the LMS color space. The LLAB CAM was released to correct this error.
  2. Hunt and RLAB CAMs
    Both the Hunt and RLAB CAMs use the Hunt-Pointer-Estévez transformation matrix. Since this matrix was originally used with the von Kries transform method, it is also known as the von Kries transform matrix.
  3. CIECAM97s and LLAB CAMs
    Both the CIECAM97s and the LLAB CAMs use the Bradford transformation matrix. With the Bradford transformation matrix, the L and M cone curves are narrower. The narrower curves create a "spectrally sharpened" transformation matrix. The Bradford transformation matrix is also used with the linear von Kries method. There is also a revised CIECAM97s CAM that uses a linear transformation matrix.
  4. CIECAM02
    Released in 2002, the CIECAM02 CAM is a replacement for CIECAM97s. CIECAM02 has better performance and is easier to implement than CIECAM97s. CIECAM02 comes close to being an internationally agreed-upon standard for a comprehensive color appearance model.
  5. iCAM
    Also released in 2002, iCAM was developed by Mark D. Fairchild and Garrett M. Johnson. The goals included simple implementation for images, handling of HDR images, and tone mapping.
The CIE XYZ color space is a virtual space that acts as a reference model for color models. Color models are the subject of the next two articles in this series. These articles provide the necessary background for understanding CVD simulation models, which will be the last article of this series.

Color is an illusion, but it is a fascinating illusion.

Thursday, June 9, 2016

The role of the retina in color vision

With my library still in boxes in the US, I had to spend the day refreshing my old brain cells with information from Wikipedia. In 1913, Sir William Abney published "Researches in Colour Vision and the Trichromatic Theory." After a little over a century, much of what he says remains the same, while there have been dramatic changes in other areas, especially neurology and genetics.

While the names have changed, Sir William Abney defined the eight layers of the retina. What he calls the "peculiar layer" is the layer of amacrine cells. The following diagram shows just the retina layer of the eye:


While the rods play an important role in low-light conditions (scotopic vision), they have no known role in color vision (photopic vision). The cones require higher luminosity before they respond. In the above illustration, the cones are divided into the Long (red), Medium (green), and Short (blue) wavelength cones. The ratio of Long:Medium:Short is 40:20:1. While this ratio is a good statistical average, the actual ratio varies across the human population.

The rods and cones are both photoreceptor cells. The shapes of the cells match their names, as seen in the following diagram:


The light-sensitive protein lies between the disks in the rods, and in the folds in the cones. In rods, the protein is an amino acid chain called rhodopsin. In cones, the amino acid chain is photopsin. The photopsin surrounds the chromophore, which is the pigment that distinguishes color. Most explanations leave out the chromophore, but it is the light filter, not the opsin. It is the chemical reaction in the chromophore that triggers the opsin. Thus, what distinguishes the types of cones is the chemical composition of the chromophore.

Teleost fish, birds, and reptiles are tetrachromatic. These species have cones with chromophores that detect Ultraviolet, Short (blue), Medium (green), and Long (red) wavelengths. Placental mammals are dichromatic, in that their cones detect Medium (green) and Short (blue) wavelengths. Primates, including humans, developed trichromatic vision. It is possible that gene duplication resulted in a chromophore that detects Long (red) wavelengths. This certainly explains the similarity in the wavelength curves of the medium and long cones.

For short (blue) wavelength cones, the chromophore DNA sequence is on chromosome 7. This placement makes the gene sequence sex-independent. The medium (green) and long (red) chromophore DNA sequences appear in contiguous regions on the X chromosome.

The following statement from Hereditary Ocular Disease summarizes the genetic issue for red-green color blindness:

"Red-green color perception is based on gene products called opsins which, combined with their chromophores, respond to photons of specific wavelengths. The OPN1LW and OPN1MW genes reside in a cluster with a head-to-tail configuration on the X chromosome at Xq28. Red-green color vision defects are therefore inherited in an X-linked recessive pattern. There is a single gene for the red cone opsin but there are multiple ones for the green pigment. Only the red gene and the immediately adjacent green pigment gene are expressed. All are under the control of a master switch called the locus control region, LCR.

These DNA segments undergo relatively frequent unequal crossovers which can disrupt the color sensitivity of the gene products so that red-green colorblindness in some form is the most common type of anomalous color vision. It is found in approximately 8% of males and perhaps 0.5% of females."
The same article provides the following definitions:
  • Protanopia - only blue and green cones are functional (1 percent of Caucasian males)
  • Deuteranopia - only blue and red cones are functional (1 percent of Caucasian males)
  • Protanomaly - blue and some green cones are normal plus some anomalous green-like cones (1 percent of Caucasian males)
  • Deuteranomaly - blue and some red cones are normal plus some anomalous red-like cones (5 percent of Caucasian males)
My questions are: what exactly are the green-like and red-like cones, and how do they alter the response to different wavelengths? Even though there are unanswered questions, the above should help provide a better understanding of the color codes generated in my Colorblind Simulator app.

I leave you with a thought. Just as there are genetic variations that produce the diversity in the physical appearance of the human population, there could be genetic variations in the opsin structures that determine color vision. Every human eye could be uniquely different. Israeli research has shown that each of us has an olfactory fingerprint that is unique to every person. Why not our sense of color?

In 1913, Sir William Abney lived at a time when little was known about neurology and genetics. I enjoy reading old medical books, because they show how much the world has changed in 100 years. While they didn't know about DNA sequences, they certainly knew about the genetic expression of color blindness.

Sunday, June 5, 2016

Color design for your audience

We use color to communicate a message to our audience. Failure to understand Color Vision Deficiency (CVD) means that about 9% of our audience doesn't receive the message. This applies to all media, whether it be print, slides, Web pages, games, videos, or smartphone applications. It applies to all communicators, including teachers and graphic designers.

For this article, the focus is on background and text colors. To illustrate the impact of color choices, I used the Text Color activity of my Colorblind Simulator app. Since the True Color standard for RGB supports over 16 million colors, the colors used are from the Material Design Colors palette for Android. The Text Color activity, itself, supports all colors. However, copying and pasting from a color palette simplified entry of text colors. The demo uses black as the background color.

The first test used Red 500 as the text color. The following screen capture shows the results for normal vision. A contrast ratio of 4.52, while not high, is above the minimum of 4.5. What happens when we look at protanopia (red), deuteranopia (green), tritanopia (blue), and monochromatic vision?

Screen capture of Text Colors with Material Red 500 for text color on Black background.
Individuals with protanopia won't be able to read the text, as the contrast ratio is only 1.93, as shown below:

Screen capture of Text colors with Red 500 on black background.

The results for deuteranopia illustrate the problem for those with the most common form of CVD. While the W3C contrast ratio is greater than 3, it still presents problems for those with poor vision.

Screen capture of text colors for red 500 for deuteranopia.

Individuals with tritanopia (blue) color blindness have almost the same contrast ratio as normal vision for this color combination. 


While monochromatic vision is very rare, there are individuals for whom the world consists of shades of grey. Converting a color to its nearest grayscale value provides an approximation of what they see.
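
A common approximation weights each channel by its contribution to perceived luminance; the Rec. 601 weights shown here are one convention:

gray = 0.299 * red + 0.587 * green + 0.114 * blue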

Screen capture of text colors with Material Red 500 as a text color.

Instead of using Material Red 500, the following tests use Material Yellow 500 as the text color. Starting with normal vision, there is a dramatic improvement in contrast ratio.

Screen capture of text colors with Material Yellow 500 as a text color.
While the contrast ratio drops from 17.20 to 16.74, those with protanopia would not have a problem reading the text.


Screen capture of text colors with the text color of Material Yellow 500.

While deuteranopia (green) color blindness reduces the contrast ratio to 16.19, the text is still easily readable.

Screen capture of text colors with Material Yellow 500

Although individuals with tritanopia do not see the color as yellow, a contrast ratio of 15.03 makes the text easily readable.

Screen capture of text colors with Material Yellow 500.

With a contrast ratio of 16.52, even individuals with monochromatic vision can easily read the text.

Screen capture of text colors with Material Yellow 500.
As long as color does not carry special information, the choice of background and text colors is not an issue. What color an individual sees is not an issue. The issue is having a contrast ratio that makes the text easy to read. While the W3C guidelines define 3.0 as a minimum value for large text, contrast ratios above 7.0 are much easier to read. A white background with black text produces the highest contrast ratio of 21. Some individuals have a problem reading text with very high contrast ratios. To reach the maximum number of members in an audience, I recommend a contrast ratio between 7 and 18.
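
For reference, the W3C contrast ratio is derived from the WCAG relative-luminance formula; a minimal Java sketch:

    // Relative luminance per WCAG 2.0: linearize each sRGB channel, then weight.
    static double luminance(int r, int g, int b) {
        double[] c = { r / 255.0, g / 255.0, b / 255.0 };
        for (int i = 0; i < 3; i++) {
            c[i] = (c[i] <= 0.03928) ? c[i] / 12.92
                                     : Math.pow((c[i] + 0.055) / 1.055, 2.4);
        }
        return 0.2126 * c[0] + 0.7152 * c[1] + 0.0722 * c[2];
    }

    // Contrast ratio ranges from 1 (identical colors) to 21 (black on white).
    static double contrast(double lum1, double lum2) {
        return (Math.max(lum1, lum2) + 0.05) / (Math.min(lum1, lum2) + 0.05);
    }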

To accommodate visually impaired individuals, color cannot be the sole carrier of information. Graphs and bar charts need to be labeled, along with a detailed verbal description. Graphs can use different line types. These techniques do not preclude the use of color; they just provide alternate methods of communication. After all, the ultimate goal is effective communication to an audience.

Monday, May 9, 2016

We don't see color

The names used to describe CVD (Color Vision Deficiency) confuse, rather than clarify, the functioning of the cones. There are no red cones, green cones, or blue cones. RGB is merely a human convention for describing colors by adding red, green, and blue. It is great for defining light sources, but not for understanding color vision.

Object colors are based on CMY (Cyan, Magenta and Yellow). The mind-warping aspect of objects is understanding the difference between refraction and reflection. Forget reflection. Think refraction. The molecules in any object absorb the photons of light. Absorbing energy results in the release of energy, and the released energy has a different frequency. This is refraction. It is a subtractive process. This is why paints use CMY for mixing colors.

Vision research groups cones into three categories according to their sensitivity to different wavelengths of light. Again, we are talking about energy. The resulting wavelength graphs for the different cones are as follows:


The cones that have a peak sensitivity at 559 nm are known as the long wavelength cones. The medium wavelength cones are those whose peak sensitivity centers on 531 nm. The short wavelength cones have a much shorter peak wavelength, at 419 nm. Hence the acronym LMS. Many explanations refer to long wavelength cones as red, medium wavelength as green, and short wavelength as blue. The reality is that a person can have 100% protanopia (no long wavelength cones) and still visualize some shades of red. The medium wavelength cones still respond to the shorter wavelengths of the color red.

The Quick Test feature of my Colorblind Simulator app provides a good demonstration of the overlapping of the wavelength response curves. Just follow the color code values as you adjust the sliders. Even a person with protanopia has some red response. Change the type to deuteranopia, and notice the similarities and differences.

Cones respond to a range of wavelengths of light. The terms red, green, and blue tend to hide this phenomenon. The cones of the eye do not see color; they are receptor cells that respond to light energy of different wavelengths. The vision centers of the brain create the vision of color based on the information received from the cones.

The overlap of the long and medium wavelength curves leads to the term red-green color blindness. This is wrong. There are differences between protanopia and deuteranopia, as they represent the loss of different cones.

Sunday, May 8, 2016

Colorblind Simulator for Android released

Color Vision Deficiency (CVD) affects our ability to distinguish colors. It can impact a child's ability to learn, the ability to select two socks that match in color, the selection of clothing, the purchase of fruits and vegetables, and even careers. The purpose of the Colorblind Simulator app is to provide educators, graphic designers, and everyone who works with individuals with CVD a way to understand their world.
The Colorblind Simulator app for Android is a collection of simulation tools. It includes tools for images, text, Material Design colors, and any single RGB color. Currently, it is the only color blindness simulation toolbox available for Android.

We do not see RGB colors. Rather, the cone cells on the retina of the eye respond to different wavelengths of light. There are three different types of cones: long wavelength (red) cones, medium wavelength (green) cones, and short wavelength (blue) cones. The following diagram illustrates the LMS wavelengths.


Simulation processing is a memory-intensive task, as the app processes each pixel in an image. Each pixel is transformed to its LMS color, color loss factors are applied, and the result is transformed back to RGB. The application default is dichromatic vision, which involves 100% loss of one cone type. These conditions are protanopia (red), deuteranopia (green), and tritanopia (blue). The most common form of color blindness is deuteranopia. A far larger number of individuals are affected by anomalous trichromacy, which means that there is a partial loss of one of the cone types. The following graphic compares (from left to right) normal, protanopia, deuteranopia and tritanopia vision for a flower.


The above images assumed 100% loss, and used the linear model. The app offers two other models. The linear model, itself, is a variation of the Brettel-Viénot-Mollon (BVM) model. The Meyer-Greenberg-Wolfmaier-Wickline (MGWW) model is an alternative that produces slightly different results. The MGWW model was developed for color monitors. A fourth, grayscale model simulates monochromatic vision, a very rare form of color vision deficiency.
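
In outline, the per-pixel processing described above looks like the following sketch, where rgbToLms, applyLoss, and lmsToRgb are placeholders for the model-specific math rather than the app's actual methods:

    // Schematic loop over a BufferedImage; the three helpers are hypothetical.
    for (int y = 0; y < image.getHeight(); y++) {
        for (int x = 0; x < image.getWidth(); x++) {
            double[] lms = rgbToLms(image.getRGB(x, y));  // RGB -> LMS
            applyLoss(lms, cvdType, lossFactor);          // apply cone loss factors
            image.setRGB(x, y, lmsToRgb(lms));            // LMS -> RGB
        }
    }
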
The Colorblind Simulator app is not limited to processing example images, images from galleries, or images from the camera. The text activity provides a way to test background colors with various text colors. Along with transforming the colors, the text activity displays the W3C contrast ratio. While it is just a dream, it would be nice if this activity put an end to red text on a black background, and yellow text on a white background.

The material colors activity shows the impact of CVD on the standard Android design colors. This activity is also a good way to see the impact on groups of colors. The following screenshot illustrates what red colors look like to someone who has protanopia. The darker shades of red are much harder to distinguish.


The Quick Test activity displays the conversion of any single color. You can just play with the scroll bars and watch the colors change.


To see these features in action, I created a two-minute video at https://youtu.be/nblzVHEbu2s. The Colorblind Simulator Pro app is available from the Google Play store, and costs only $1.29, which helps support future development of this app. If you have any questions about this app, you can contact me through the Twitter link on this page, or send an email to support@all-things-android.com.