Have you ever noticed that when you blur a colorful image using software like Photoshop or Instagram, the colors don’t blend as smoothly as they do in real life? Instead of transitioning seamlessly from red to yellow to green, you might see an unpleasant dark boundary between bright colors. This issue isn’t limited to photo blurring; it can occur any time a computer blurs an image or blends images with transparent edges.
The root of this problem lies in how we perceive brightness. Human vision operates on a relative, roughly logarithmic scale. This means that our eyes are more sensitive to changes in brightness in darker scenes than in brighter ones. For example, doubling the light from one to two units is much more noticeable than increasing it from 101 to 102 units, even though the physical amount of light added is the same.
In contrast, computers and digital image sensors measure brightness based on the number of photons hitting a photodetector. This means that each additional photon registers the same increase in brightness, regardless of the surrounding scene. When a digital image is stored, the computer records a brightness value for each color—red, green, and blue—at each point in the image. Typically, zero represents zero brightness, and one represents 100 percent brightness.
Here’s where it gets tricky: a digital brightness value of 0.5 might seem like it’s halfway between black and white, but in terms of absolute physical brightness, it represents only about one-fifth as many photons as white. Even more surprising, a value of 0.25 carries just one-twentieth the photons of white!
This discrepancy is intentional. Digital imaging was designed this way to save disk space, taking advantage of our vision’s sensitivity to dark scenes. When a digital camera stores an image, it records roughly the square roots of the brightness values instead of the values themselves (the standard sRGB encoding actually uses an exponent close to 1/2.2, which is where the one-fifth and one-twentieth figures above come from). This allocates more encoding levels to dark shades and fewer to bright ones, mimicking human vision.
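As a quick sanity check on those photon figures, here is a tiny Python sketch; the 2.2 exponent is the standard sRGB approximation, while the article’s square root corresponds to an exponent of 2:

```python
# Decode stored brightness values back to physical light, assuming the
# common sRGB-style exponent of about 2.2 (square-rooting would use 2.0).
for stored in (0.5, 0.25):
    linear = stored ** 2.2  # fraction of white's photons
    print(f"stored {stored} -> {linear:.3f} of white (about 1/{1 / linear:.0f})")
```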
Problems arise when the image file is modified, for example by blurring. Blurring replaces each pixel with an average of the colors of nearby pixels. However, whether you average before or after taking the square root changes the result. Most software incorrectly averages the brightness values stored in the file without accounting for the square-rooting done by the camera. The result is an average that’s too dark, because the average of two square roots is never more than the square root of the average, and it is strictly less whenever the two values differ.
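A minimal numeric illustration of that inequality, assuming the square-root encoding described above:

```python
import math

# Two physical (linear) brightness values to blend: one bright, one dark.
a, b = 0.9, 0.1

# What the camera actually stored for each pixel:
stored_a, stored_b = math.sqrt(a), math.sqrt(b)

naive = (stored_a + stored_b) / 2    # averaging the file's values directly
correct = math.sqrt((a + b) / 2)     # averaging real light, then re-encoding

print(f"naive average of stored values: {naive:.3f}")    # 0.632
print(f"correct encoded average:        {correct:.3f}")  # 0.707
# The naive result is always the darker of the two (unless a == b).
```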
To blend colors correctly and avoid the dark sludge, the computer should first square each brightness value to undo the camera’s square rooting, then average them, and finally take the square root again. This method produces a much more visually appealing result.
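As a minimal sketch, that three-step recipe for two RGB colors (components scaled to 0–1, square-root encoding assumed) might look like this:

```python
def blend(c1, c2):
    """Square to undo the encoding, average in linear light, re-encode."""
    return tuple(((a * a + b * b) / 2) ** 0.5 for a, b in zip(c1, c2))

red, green = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
print(blend(red, green))  # (0.707..., 0.707..., 0.0): bright, not muddy
```

The naive average of the same two colors would be (0.5, 0.5, 0.0), which displays only about a quarter of white’s light in each channel.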
Unfortunately, most software, including popular platforms like iOS, Instagram, and even the default settings in Adobe Photoshop, uses the incorrect approach. While advanced settings in professional graphics software can correct this, shouldn’t beauty be the default?
Use image editing software to blur a colorful image. Observe the color transitions and identify any dark boundaries. Reflect on how the software’s approach to averaging brightness values might contribute to these artifacts.
Create a simple simulation using a programming language like Python to compare how human vision perceives brightness changes versus how digital sensors record them. Use logarithmic and linear scales to illustrate the differences.
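One possible starting point, using a logarithm as a crude stand-in for perceived brightness (the exact perceptual model is an assumption here):

```python
import math

# A sensor records absolute differences; perception roughly tracks ratios.
for lo, hi in [(1, 2), (101, 102)]:
    sensor_step = hi - lo              # linear: the same +1 in both cases
    perceived = math.log2(hi / lo)     # logarithmic: very different
    print(f"{lo} -> {hi}: sensor +{sensor_step}, perceived ~{perceived:.3f}")
```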
Take a digital image and extract the brightness values for a selection of pixels. Calculate the square roots and compare them to the original values. Discuss how these transformations affect the perception of brightness in digital images.
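A sketch of one way to do this with the Pillow library ("photo.jpg" and the sampled coordinates are placeholders):

```python
from PIL import Image  # assumes Pillow is installed

img = Image.open("photo.jpg").convert("RGB")

# Compare stored values with the linear light they represent.
for x, y in [(10, 10), (50, 50), (100, 100)]:
    r, _, _ = img.getpixel((x, y))
    stored = r / 255          # red channel, scaled to 0..1
    linear = stored ** 2      # undo the square-root encoding
    print(f"pixel ({x},{y}): stored {stored:.3f}, linear {linear:.3f}")
```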
Write a script to implement the correct blurring algorithm by squaring brightness values, averaging them, and then taking the square root. Apply this to an image and compare the results with those from standard software.
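A possible sketch of such a script with NumPy and Pillow, using the article’s square-root simplification (real sRGB decoding uses an exponent near 2.2); "photo.jpg" and the output filenames are placeholders:

```python
import numpy as np
from PIL import Image  # assumes NumPy and Pillow are installed

def box_blur(img, radius=5):
    """Replace each pixel with the mean of its (2*radius+1)^2 neighborhood."""
    h, w, _ = img.shape
    padded = np.pad(img, ((radius, radius), (radius, radius), (0, 0)), mode="edge")
    out = np.zeros_like(img)
    size = 2 * radius + 1
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + h, dx:dx + w]
    return out / size**2

img = np.asarray(Image.open("photo.jpg").convert("RGB"), np.float64) / 255

naive = box_blur(img)                 # average the stored values directly
correct = box_blur(img ** 2) ** 0.5   # square, average, then square root

Image.fromarray((naive * 255).astype(np.uint8)).save("blur_naive.png")
Image.fromarray((correct * 255).astype(np.uint8)).save("blur_correct.png")
```

Comparing blur_naive.png and blur_correct.png on a red-to-green image should reproduce the dark boundary described at the start of the article.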
Engage in a debate about whether software should default to the correct color blending method. Consider the trade-offs between computational efficiency, storage space, and visual accuracy.
Brightness – The attribute of visual perception in which a source appears to emit a given amount of light. – Adjusting the brightness of a computer screen can help reduce eye strain during long study sessions.
Colors – Different wavelengths of light perceived by the human eye, often used in digital displays to create images. – The software allows users to adjust the colors of an image to enhance its visual appeal.
Digital – Relating to technology that uses discrete values, often represented in binary code, to process, store, and transmit data. – Digital circuits are fundamental to the operation of modern computers.
Image – A representation of visual information, such as a photograph or graphic, often stored in digital format. – The image processing algorithm improved the clarity of the satellite photos.
Pixels – The smallest unit of a digital image or display, representing a single point of color. – Increasing the number of pixels in a display enhances the resolution and detail of the images shown.
Software – A set of instructions and data that tell a computer how to perform specific tasks. – The new software update includes features that improve the system’s security and performance.
Square – The result of multiplying a number by itself; squaring a stored brightness value undoes a camera’s square-root encoding. – Before averaging pixels, the corrected algorithm squares each brightness value to recover the physical amount of light.
Camera – A device used to capture images or video, often integrated into digital devices like smartphones and computers. – The high-resolution camera on the smartphone allows for capturing detailed photos even in low light.
Blending – The process of combining different elements, such as colors or images, to create a smooth transition or unified effect. – In graphic design, blending techniques are used to create seamless transitions between different layers of an image.
Vision – The ability to interpret the surrounding environment using light in the visible spectrum reflected by objects. – Computer vision technology enables machines to recognize and process visual information from the world.