What a Raw Photo Actually Looks Like

A raw file from a camera sensor looks nothing like a photograph — it is a grayscale grid of integers with a narrow dynamic range, a mosaic color pattern, and nonlinear brightness. Here is the step-by-step process that turns it into an image your eyes can read.

When you shoot in RAW mode, you might expect to see a dark, slightly off-color version of your final photo. What you actually get is stranger than that. Let me walk through what the data really looks like and what needs to happen to it before it resembles a photograph.

Step 1: The Raw Sensor Data

A camera sensor is an array of light-sensitive cells. Each cell produces a voltage proportional to the number of photons that hit it. A 14-bit analog-to-digital converter (ADC) turns that voltage into an integer between 0 and 16,383.

In practice, the sensor never uses the full range. A typical camera might produce values between roughly 2,110 and 13,600 — most of the theoretical range sits unused. The result is an image that looks nearly uniformly gray: low contrast, washed out, barely any detail visible.

Step 2: Black and White Point Normalization

The first correction is simple: remap the narrow actual range onto the full display range. Normalized to [0, 1], the formula is:

V_new = (V_old - Black) / (White - Black)

where Black is the minimum sensor value (~2,110) and White is the maximum (~13,600); anything outside that range is clipped. After this step the contrast jumps dramatically and detail becomes visible — but the image is still entirely grayscale.
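As a minimal NumPy sketch, the remapping looks like this. The black and white levels below are the example values from the text; in practice they come from the raw file's metadata:

```python
import numpy as np

# Example levels from the text; real values come from the raw metadata.
BLACK = 2110
WHITE = 13600

def normalize(raw: np.ndarray) -> np.ndarray:
    """Remap raw sensor integers onto [0, 1], clipping outliers."""
    v = (raw.astype(np.float64) - BLACK) / (WHITE - BLACK)
    return np.clip(v, 0.0, 1.0)
```

Multiplying the result by 255 gives the 8-bit display range.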

Step 3: The Bayer Filter and Why Sensors Are Color-Blind

Camera sensors do not natively see color. Each cell measures only total light intensity. To capture color, a Bayer filter — a mosaic of tiny red, green, and blue filters, one per sensor cell — is placed over the sensor. The standard pattern is 50% green, 25% red, and 25% blue, arranged in a repeating 2×2 tile:

G R G R
B G B G
G R G R
B G B G

Green dominates because human vision is most sensitive to green wavelengths, and more green samples improve perceived sharpness. Each sensor cell now records only one color channel. If you color-code the pixels by their filter type but do not process them further, you see a mosaic — the Bayer pattern itself — rather than a recognizable image.
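The tiling above is easy to express in code. This sketch builds a per-pixel channel map for the GRBG layout shown (the function name is mine, not a standard API):

```python
import numpy as np

def bayer_channel_map(h: int, w: int) -> np.ndarray:
    """Channel index (0=R, 1=G, 2=B) recorded at each pixel,
    matching the repeating GRBG 2x2 tile shown above."""
    ch = np.empty((h, w), dtype=np.uint8)
    ch[0::2, 0::2] = 1  # green on even rows, even cols
    ch[0::2, 1::2] = 0  # red on even rows, odd cols
    ch[1::2, 0::2] = 2  # blue on odd rows, even cols
    ch[1::2, 1::2] = 1  # green on odd rows, odd cols
    return ch
```

Counting the entries confirms the 50/25/25 split: half the pixels record green.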

Step 4: Demosaicing

To reconstruct a full RGB image, each pixel needs values for all three channels but has only one. The solution is demosaicing: estimating the missing channels by interpolating from neighboring pixels that did measure them.

For example, a red pixel surrounded by green and blue neighbors gets its green and blue values estimated from those neighbors. Apply this across the whole sensor and the mosaic pattern disappears, replaced by a full-color image — still imperfect, but recognizably a photograph.
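The simplest variant of this neighbor-averaging is bilinear demosaicing, which can be written as a pair of small convolutions. This is a sketch under the GRBG layout from above, not production-quality code (real converters use edge-aware algorithms):

```python
import numpy as np

def _conv3(plane: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """3x3 convolution with edge replication, in pure NumPy."""
    p = np.pad(plane, 1, mode="edge")
    out = np.zeros_like(plane)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * p[dy:dy + plane.shape[0],
                                      dx:dx + plane.shape[1]]
    return out

def demosaic_bilinear(raw: np.ndarray, ch_map: np.ndarray) -> np.ndarray:
    """Bilinear demosaic: keep each channel's sampled pixels and fill
    the gaps by averaging neighbors. ch_map gives 0=R, 1=G, 2=B."""
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4.0
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float) / 4.0
    rgb = np.zeros(raw.shape + (3,))
    for c, k in ((0, k_rb), (1, k_g), (2, k_rb)):
        plane = np.where(ch_map == c, raw, 0.0)  # zero out other channels
        rgb[..., c] = _conv3(plane, k)
    return rgb
```

The kernels are chosen so that a pixel which already sampled a channel keeps its own value, while the others receive the average of their nearest samples.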

Step 5: Gamma Correction

The sensor responds linearly to light: twice as many photons produce exactly twice the output value. But human brightness perception is nonlinear — we are far more sensitive to changes in dark tones than in bright ones.

If you display linear sensor data directly on a screen, the image looks unnaturally dark, even though the numbers are correct. The fix is gamma correction: apply a nonlinear curve to each channel that compresses bright values and expands dark ones. The sRGB standard defines this curve precisely. After gamma correction the image looks much closer to what your eyes would have seen.
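The sRGB encoding curve is a short piecewise function: linear near black, a power curve with exponent 1/2.4 above the cutoff. A NumPy sketch for values already normalized to [0, 1]:

```python
import numpy as np

def srgb_encode(linear: np.ndarray) -> np.ndarray:
    """Apply the sRGB transfer curve to linear values in [0, 1]."""
    return np.where(linear <= 0.0031308,
                    12.92 * linear,                          # linear toe near black
                    1.055 * np.power(linear, 1 / 2.4) - 0.055)  # power segment
```

Note that the curve lifts midtones considerably, which is exactly why linear data looks too dark on screen.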

Step 6: White Balance

Because the Bayer filter uses twice as many green cells as red or blue, and because sensors are generally more sensitive to green wavelengths, the image after demosaicing and gamma correction has a strong green cast. Under different lighting conditions there may also be a warm (orange) or cool (blue) color shift.

White balance correction scales the red, green, and blue channels independently so that a neutral gray in the scene appears neutral gray in the image. Importantly, this scaling must be applied before the gamma curve, not after — applying it to gamma-encoded values produces incorrect results.
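In code, white balance is just a per-channel multiply on the linear data. The gains below are made-up placeholders; real ones come from the camera's "as shot" metadata or from measuring a known gray patch:

```python
import numpy as np

def white_balance(rgb_linear: np.ndarray,
                  gains=(2.0, 1.0, 1.6)) -> np.ndarray:
    """Scale each linear channel by a per-channel gain so that a
    neutral gray in the scene comes out neutral. Gains here are
    illustrative, not from any real camera."""
    g = np.asarray(gains, dtype=np.float64)
    return np.clip(rgb_linear * g, 0.0, 1.0)
```

Because the scaling is applied to linear values, equal scene luminance stays equal across channels; applying the same gains after gamma encoding would shift hues instead.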

The Full Processing Pipeline

Putting it all together, converting a raw sensor file to a viewable photograph requires:

  • Black and white point normalization
  • Demosaicing (Bayer pattern reconstruction)
  • White balance (applied to linear data)
  • Gamma correction
  • Optional: noise reduction, sharpening, saturation adjustments

When your camera produces a JPEG, it performs every one of these steps automatically, using its own tuned parameters. When you shoot RAW and edit in Lightroom or another tool, you are doing the same pipeline — just with your own choices at each step.
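With demosaicing omitted for brevity, the ordering of the remaining steps can be sketched in one function. The black level, white level, and gains are the same placeholder values used earlier, not real camera parameters:

```python
import numpy as np

def finish_linear_rgb(rgb_raw: np.ndarray,
                      black=2110.0, white=13600.0,
                      wb=(2.0, 1.0, 1.6)) -> np.ndarray:
    """Normalize, white-balance (on linear data), then gamma-encode
    an already-demosaiced RGB array. Parameters are illustrative."""
    lin = np.clip((rgb_raw - black) / (white - black), 0.0, 1.0)
    lin = np.clip(lin * np.asarray(wb), 0.0, 1.0)   # WB before gamma
    return np.where(lin <= 0.0031308, 12.92 * lin,
                    1.055 * np.power(lin, 1 / 2.4) - 0.055)
```

A raw-processing library such as rawpy performs the same sequence (including demosaicing) with the camera's own metadata, but the order of operations is what this sketch is meant to show.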

Conclusion

An edited photograph is not "more fake" than an unprocessed raw file. Both are representations of the same underlying sensor data. The raw file just happens to be a representation optimized for storage and further processing rather than for display. The processing is not artistic manipulation — it is the necessary translation between how a silicon sensor sees light and how human eyes perceive it.