Thursday, August 6, 2009

Activity 11: Color Image Processing

We have learned in our Applied Physics 187 course that an image captured by a color digital camera is composed of an array of pixels, each carrying R, G, and B information. Each of these three values is a product of the spectral profile of the light source, the object's reflectance, and the camera's spectral sensitivity for R, G, and B. To recover the actual appearance of the object we must divide these R, G, and B values by a scaling/balancing factor equal to the R, G, and B values of a purely white object captured by the same camera. This scaling factor is just the product of the camera's sensitivity and the illuminant's spectral profile. If the wrong scaling factor is used, or if it is miscalculated, the resulting image takes on an unnatural color cast. The process of applying these scaling factors to a digital image is called White Balancing. In this activity we try out the White Patch and Gray World algorithms for doing White Balancing.
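As a rough sketch, the balancing step itself is just a per-channel division. The helper below is hypothetical and only illustrates the idea; it assumes the image is stored as a NumPy array with values scaled to [0, 1] and that the white reference R, G, B values are already known.

import numpy as np

def white_balance(img, white_rgb):
    # img       : H x W x 3 float array with values in [0, 1]
    # white_rgb : (Rw, Gw, Bw) of a known white object under the same light
    balanced = img / np.asarray(white_rgb, dtype=float)
    # Clip values that exceed 1 after division so the result stays displayable
    return np.clip(balanced, 0.0, 1.0)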

To test the effectiveness of these algorithms, we take pictures of objects containing the major hues under settings that are obviously not properly balanced. We do so by turning off the camera's "auto white balance" function and instead selecting the different preset balancing conditions. In the camera we used, these presets are called "daylight", "cloudy", "fluorescent", and "incandescent".

The White Patch algorithm makes use of information from "obviously white" regions in the image. That is, we calculate the balancing factor by taking the average R, G, and B values within a white patch of the image. The Gray World algorithm, on the other hand, assumes that the average color of the world is gray. Since the ratio of the R, G, and B values of gray is the same as that of white or black, this method takes the RGB of gray and uses it for White Balancing. In the Gray World algorithm the RGB values of gray are taken as the average R, G, and B of the whole image (the whole "world").
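A minimal sketch of how the two balancing factors might be computed, reusing the hypothetical white_balance helper sketched above (the patch location in the usage example is a placeholder, not the actual region we used):

import numpy as np

def white_patch_factors(img, patch):
    # patch is a (row_slice, col_slice) pair pointing at an "obviously white" region
    rows, cols = patch
    return img[rows, cols, :].reshape(-1, 3).mean(axis=0)

def gray_world_factors(img):
    # Average R, G, B over the whole image (the "world")
    return img.reshape(-1, 3).mean(axis=0)

# Example usage (patch coordinates are hypothetical):
# balanced_wp = white_balance(img, white_patch_factors(img, (slice(100, 150), slice(200, 250))))
# balanced_gw = white_balance(img, gray_world_factors(img))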

We must remember, though, that these White Balancing methods won't work if the captured image is saturated, even in just a single color channel. A saturated region no longer carries the true color information and would not be useful in calculating the balancing factor.

Figure 1 below shows samples of the captured images and their corresponding RGB channels. The blacked-out regions in the RGB channels indicate the areas of saturation, which won't be useful for our purposes. Therefore, in selecting the white patch we create a composite image (Figure 1, last column) from the RGB channels that blacks out all the unusable areas. For the Gray World algorithm we simply exclude the saturated regions when calculating the white balancing factors.
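A minimal sketch of how such a saturation mask might be built and used; the threshold value is an assumption and depends on how the camera encodes saturated pixels:

import numpy as np

def saturation_mask(img, threshold=0.99):
    # True wherever any channel is at or near its maximum value
    return np.any(img >= threshold, axis=2)

def gray_world_factors_masked(img, mask):
    # Gray World averages computed only over the unsaturated pixels
    valid = img[~mask]            # N x 3 array of usable pixels
    return valid.mean(axis=0)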


Figure 1. Sample captured images with their RGB channels and a composite image
indicating (blacked out) the saturated areas.

Taking the saturated regions into account, the results of applying the White Patch and Gray World algorithms on an image containing the major hues (RGB) are shown in Figure 2. First, let us note that the results of both algorithms for the incandescent and fluorescent settings are obviously wrong. The resulting images take on an extreme shade of blue (or red), which is clearly not the true color of the objects. This error is mainly due to the fact that the original images taken with these settings were mostly saturated in the blue channel. On the other hand, the images taken with the daylight and cloudy settings were seen to improve with the White Patch algorithm. In the daylight setting the original image is slightly bluish, and after processing it took on more accurate colors of the objects. This was also observed in the cloudy setting, which initially had a reddish or brownish tint that was corrected after processing. As for the Gray World algorithm, it was only seen to improve the images taken with the cloudy setting, resulting in a final image that displayed white as "white" even better than the White Patch algorithm did.

Figure 2. Original image taken with different white balancing settings and
their respective results after applying White Patch and Gray World.

Next, Figure 3 shows the results of applying White Patch and Gray World on images that have a dominant color, which in this case is blue. Again, it is observed that white balancing the images taken with the fluorescent and incandescent settings resulted in a dark image. But even though these processed images are darker, they are still considered improvements on the originals, since the original pictures are very bluish and are actually worse representations of the true colors of the scene. As for the daylight setting, we first note that the original image already seems to be properly white balanced. We also see very little difference when comparing the original with the result of the White Patch algorithm. Looking at the result of the Gray World algorithm, however, we see that the image lost its yellowish color after it was processed. It is tough to determine which is the more correct result. Finally, the image taken with the cloudy setting has a very yellowish color which is obviously not correct. After applying White Patch we see that the yellowish color is removed, which is a significant improvement. After applying Gray World the final image completely lost its yellowish shade but also became darker.

Figure 3. Original image taken with different white balancing settings and their respective results after applying White Patch and Gray World. The images have blue as the dominant color.

Overall, the results suggest that the White Patch algorithm generally gives a much better white balance. The results of the Gray World algorithm are seen to be highly dependent on the dominant color of the image.

I thank Ate Cherry for lending us her camera and Irene for taking the pictures. I give myself a grade of 10 in this activity.
