Gamma encoding optimizes the use of bits when encoding an image, or the bandwidth used to transport it, by exploiting the non-linear way in which humans perceive light and color. Gamma correction is the nonlinear operation used to encode and decode luminance values in video and still-image systems; in the simplest cases it is defined by the following power-law expression.
$$ L_d=L_w^{1 \over \gamma} $$
```c
void gamma_correct(BITMAP *bmImg, BITMAP *bmGamma, double gamma)
{
    BITMAPINFOHEADER *bmiHeader = &bmImg->bmInfo->bmiHeader;
    gen_color(bmImg, bmGamma);

    /* Build a 256-entry lookup table for the power-law mapping. */
    uint8_t *p = (uint8_t *) malloc(256);
    for (int16_t i = 0; i < 256; ++i) {
        p[i] = adjust(255 * pow(i / 255., 1 / gamma));
    }

    /* Apply the table to each channel of each pixel. */
    for (uint32_t h = 0; h < bmiHeader->biHeight; ++h) {
        for (uint32_t w = 0; w < bmiHeader->biWidth; ++w) {
            uint32_t pos = h * bmImg->bmBytesPerRow + w * bmImg->bmBytesPerPel;
            bmGamma->bmData[pos]     = p[bmImg->bmData[pos]];
            bmGamma->bmData[pos + 1] = p[bmImg->bmData[pos + 1]];
            bmGamma->bmData[pos + 2] = p[bmImg->bmData[pos + 2]];
        }
    }
    free(p);
}
```
Step Three: Visibility Enhancement
We use a logarithmic operator to adjust the pixel value,
$$ L_d={\log(L_w+1) \over \log(L_{max}+1)} $$
where $L_d$ refers to display luminance, $L_w$ refers to original luminance, and $L_{max}$ is the maximal luminance in the original image.
This mapping function ensures that, whatever the dynamic range of the scene, the maximal luminance maps to 1 (white), while all other values vary smoothly below it.
Histogram equalization is an image-processing method that adjusts contrast using the image's histogram. Let $p_i$ be the probability of an occurrence of a pixel of level $i$ in the image; then its new level is $s_i=\sum_{k=0}^i p_k$. The image should be converted to Lab color space or HSL/HSV color space before equalization, so that the algorithm can be applied to the luminance or value channel without changing the hue and saturation of the image.