Sunday, April 22, 2007
This article is meant to augment my previous article on photography, “Digital vs. Film and what really matters”. That article discussed the two mediums at a high level and drew a few broad generalizations. It is commonly understood that “film” is a generic term and that there are many different grades of film quality. Film, however, is a very mature medium, and film technology is relatively stagnant in terms of advances. Differences among existing digital sensor technologies are equally significant in terms of image quality, possibly even more so. This article will explore those differences and discuss how they affect image quality.
How do (most) digital sensors work?
In order to appreciate the improvements being made with modern digital sensors, it’s probably necessary to establish a basic understanding of how the average digital sensor works today.
Like film, the purpose of the digital sensor is to collect light focused through a lens. A digital sensor has millions of individual photo receptors (sometimes called photosites) aligned uniformly in rows and columns to collect this focused light. Each of these photo receptors represents an individual pixel in the final image. The sensor’s output is converted from an analog signal into a digital signal and then stored on the camera’s storage card, serving much the same role as a film negative.
A more technical description of this process is as follows. There is a natural phenomenon called the photoelectric effect, whereby a material releases electrons when exposed to light. Albert Einstein received the 1921 Nobel Prize in Physics for his work explaining this effect. Each photo receptor is insulated from its neighbors. When you take a picture, each photo receptor accumulates an electrical charge in proportion to the amount of light it receives. The resulting voltages are read out and amplified as part of the analog-to-digital conversion process.
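The capture-and-readout cycle described above can be sketched in a few lines of code. Everything here (the quantum efficiency, the full-well capacity, the 12-bit readout) is an illustrative assumption on my part, not a spec from any real sensor:

```python
# A toy model of one photosite's capture-and-readout cycle.
# All numbers below are illustrative assumptions, not real sensor specs.

FULL_WELL = 40_000        # max electrons a photosite can hold (assumed)
QUANTUM_EFFICIENCY = 0.5  # fraction of photons converted to electrons (assumed)
ADC_BITS = 12             # analog-to-digital converter precision (assumed)

def read_photosite(photons: int) -> int:
    """Convert incoming photons to a digital value for one pixel."""
    # Charge accumulates in proportion to the light received,
    # but the well can only hold so many electrons before clipping.
    electrons = min(int(photons * QUANTUM_EFFICIENCY), FULL_WELL)
    # The ADC maps the accumulated charge onto a 12-bit scale.
    return round(electrons / FULL_WELL * (2**ADC_BITS - 1))

print(read_photosite(20_000))   # a mid-tone exposure
print(read_photosite(200_000))  # overexposed: clips at the full well -> 4095
```

Note how the overexposed reading clips at the maximum digital value; this is the “blown highlight” behavior the Super CCD SR section later tries to address.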
In theory, that sounds simple enough, right? Unfortunately, it’s not that simple. For starters, the sensor’s photo receptors can only capture degrees of luminance, not color information. To capture color, most sensors use what’s known as the “Bayer Filter”. Named after Dr. Bryce Bayer of the Eastman Kodak Company, the Bayer filter is a mosaic color filter array. Essentially, each pixel has a color filter over it. The first row has a pattern of blue, green, blue, green… followed by the next row, which consists of green, red, green, red… and so on. This pattern is repeated across every pixel of the digital sensor. [Note: Some manufacturers, like Sony, have used a modified version of this filter that adds a fourth color, such as emerald.] Camera makers use different demosaicing algorithms to reconstruct a full color image, with varying levels of image quality.
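The mosaic layout described above is easy to generate. This sketch assumes the blue/green then green/red row ordering given in the text (often called a BGGR arrangement); other cameras start the pattern on a different color:

```python
def bayer_color(row: int, col: int) -> str:
    """Return the filter color over the photosite at (row, col).

    BGGR layout, as described in the text: even rows alternate
    blue/green, odd rows alternate green/red.
    """
    if row % 2 == 0:
        return "B" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "R"

# Print a small corner of the mosaic.
for r in range(4):
    print(" ".join(bayer_color(r, c) for c in range(6)))
```

Notice that half of all photosites end up green; the Bayer pattern deliberately over-samples green because human vision is most sensitive to it.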
Demosaicing, anti-aliasing, sharpening, oh my!
While the Bayer Filter does allow for full color images, this benefit comes at a price. The same sensor without the Bayer Filter would yield better true resolution, but it would produce a monochrome image. Demosaicing algorithms use sophisticated interpolation techniques in an attempt to preserve resolution while providing color, but these algorithms are never perfect. A typical side effect of the Bayer Filter is what’s known as digital aliasing. Visually, this typically appears as irregularly shaped edges; the image may look artificial in some way compared to a film-based image. To counteract this effect, digital camera makers typically place a low-pass anti-aliasing filter over the sensor. This smooths the jagged edges, but it also makes the image somewhat softer by default. Most cameras that output JPEG files (as opposed to RAW) will also apply some level of sharpening to compensate for the softening introduced by the anti-aliasing filter. Additionally, most cameras have a filter to block infrared rays, as photo sensors are sensitive to that part of the spectrum.
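As a rough illustration of what demosaicing does, here is a minimal bilinear sketch: each pixel keeps the one color it measured and estimates the missing colors by averaging the neighboring photosites that did measure them. Real cameras use far more sophisticated, often proprietary algorithms; this only shows the basic idea:

```python
# A minimal bilinear demosaicing sketch, assuming the BGGR mosaic
# described in the text. Not any camera's actual algorithm.

def filter_color(r: int, c: int) -> str:
    """BGGR mosaic: even rows alternate B/G, odd rows alternate G/R."""
    if r % 2 == 0:
        return "B" if c % 2 == 0 else "G"
    return "G" if c % 2 == 0 else "R"

def demosaic(raw):
    """raw: 2D list of single intensities -> 2D list of (R, G, B) tuples."""
    h, w = len(raw), len(raw[0])

    def avg(r, c, color):
        # Average all photosites of the wanted color in the 3x3 window.
        vals = [raw[rr][cc]
                for rr in range(max(r - 1, 0), min(r + 2, h))
                for cc in range(max(c - 1, 0), min(c + 2, w))
                if filter_color(rr, cc) == color]
        return sum(vals) / len(vals)

    return [[(avg(r, c, "R"), avg(r, c, "G"), avg(r, c, "B"))
             for c in range(w)] for r in range(h)]

# A flat grey scene (every photosite reads 100) demosaics to grey pixels;
# aliasing artifacts only appear around fine detail and sharp edges.
image = demosaic([[100] * 4 for _ in range(4)])
print(image[1][1])  # (100.0, 100.0, 100.0)
```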
If it sounds like digital cameras have to jump through a lot of hoops to create a quality image, it’s because they do. This technology wasn’t always up to today’s standards. Likewise, early digital cameras had a reputation for image problems such as digital aliasing. Even though these problems have largely been addressed, that reputation of imperfection has stuck with many of the film purists. However, the quality of today’s results speaks for itself: digital images have come a long way over the past 10 years.
New technology at the sensor level
One way to avoid using the Bayer Filter is a three-CCD (3CCD) system. In this implementation, a prism separates the red, green and blue light onto three separate sensors. The problem is that large sensors are very expensive, so this is not a practical solution for high-megapixel still cameras. However, it is a common solution for higher-end digital video camcorders, as video resolution is much lower than that of still photographs; likewise, the sensors can be smaller and much cheaper to produce.
One of the biggest advances in sensor technology has to be the Foveon X3 sensor. Each pixel has three vertically stacked photodiodes (red, green, blue) to capture the entire color spectrum. Foveon accomplished this by exploiting the fact that different wavelengths of light penetrate silicon to different depths. The benefit of this approach is that the sensor does not need a Bayer Filter to produce color. As such, it doesn’t have to deal with any of the demosaicing, anti-aliasing, and sharpening filters. The Foveon sensor is currently being used in Sigma cameras. I’m not quite sure why other camera manufacturers haven’t yet jumped on this bandwagon; there could be issues of cost or licensing that I’m not aware of. I’ve read that these sensors don’t perform quite as well in low-light conditions, but I have not yet used a camera with this type of sensor firsthand to verify this. A word of caution to the buyer, though: Sigma advertises a 14MP camera that is actually a 4.7MP camera. Sigma counts each pixel three times, as there are three photo receptors at each pixel location. To me, that’s an unfair marketing gimmick. I’ve seen comments suggesting their 4.7MP (or 14MP, as they call it) camera compares to 10MP cameras from other vendors with a Bayer Filter. Marketing tricks aside, this seems to be the way to go from my perspective, especially as the technology continues to mature. I’d like to see other camera companies explore this option further. Interestingly, just as film and traditional digital sensors produce images with a different “feel”, images from this sensor have yet another unique “feel” to them.
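The pixel-counting arithmetic behind Sigma’s marketing can be checked directly. The 2640 x 1760 sensor resolution below is my recollection of the published figure for this Foveon chip, so treat it as approximate:

```python
# Checking the Foveon pixel-count arithmetic described above.
# The 2640 x 1760 resolution is an approximate recollection of the
# published spec, used here only to illustrate the counting trick.

width, height = 2640, 1760
spatial_mp = width * height / 1e6   # distinct pixel locations on the sensor
photosite_mp = spatial_mp * 3       # three stacked photodiodes per location

print(f"{spatial_mp:.1f} MP of spatial resolution, "
      f"marketed as {photosite_mp:.1f} MP")
```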
Another interesting technology comes in the form of Fuji’s Super CCD. http://en.wikipedia.org/wiki/Super_CCD Fuji’s latest version, the SR II format, does two things differently. First, the photo receptors are octagonal rather than square, which apparently allows for a more efficient layout of photo receptors on the chip. Additionally, there are two photo receptors per pixel, one large and one small. The idea is to more closely mimic the best characteristics of film while retaining the benefits of digital. The result is an image with very low noise and very high dynamic range. The following Q&A, which appears to come from Fujifilm’s own materials, explains the approach:
Super CCD SR
Q. What are the main benefits of Super CCD SR?
A. Due to an innovative new CCD arrangement, cameras featuring Super CCD SR are able to capture highlight and shadow detail that conventional digital cameras miss. Overall, it will provide a more faithful representation of the actual subject and greater dynamic range. Specific benefits are:
it combats the bleached out effect that often ruins flash photography
it allows you to shoot confidently even in very bright, contrasty conditions
it delivers detail in areas that normally get lost, such as cloud detail outdoors
increased exposure latitude provided by the sensor means that it is more forgiving of incorrect exposure.
Q. How is Super CCD SR different from a normal CCD?
A. Super CCD SR uses a new CCD arrangement, based on the diagonally mapped, octagonal sensor arrangement that Fujifilm pioneered with Third Generation (3G) Super CCD. However, with Super CCD SR, not one, but two photodiodes capture information on the same area of the image (these are arranged in a ‘double honeycomb’ structure).
The sensitive primary photodiode registers the light reflected off the subject at a high sensitivity (similar to a conventional Super CCD photodiode), whilst the secondary photodiode captures highlight information from the same part of the image, recorded at a lower sensitivity.
Because it is set at a lower sensitivity than the primary photodiode (in other words, records a darker image), the secondary photodiode is able to ‘see’ additional detail in bright areas normally beyond the reach of conventional photodiodes. This also frees up the primary photodiode to deliver a better quality rendition of mid to dark tones.
This combination of primary and secondary photodiodes produces an image that is more richly detailed than conventional CCDs, resolving more detail in highlight and dark areas of the image.
Q. Is there a simpler way of explaining the technology?
A. A useful way of explaining this is to compare the technology to an audio speaker. Formerly, audio speakers relied on just one large speaker cone to deliver all of the musical range, meaning that bass and treble notes were obscured. This was overcome by developing a secondary, high sensitivity cone (known as a ‘tweeter’), radically improving the sound quality. The primary and secondary photodiodes in Fujifilm’s new technology effectively mirror the hi-fi speaker. This is why Fujifilm is marketing this as ‘High Fidelity Photography’.
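Stripped of the hi-fi analogy, the dual-photodiode idea amounts to a blend rule: trust the sensitive primary photodiode until it clips, then fall back on the rescaled secondary reading. The gains and the blend logic below are my own illustrative assumptions, not Fujifilm’s actual processing:

```python
# A rough sketch of combining a high-sensitivity primary photodiode
# with a low-sensitivity secondary one to extend dynamic range.
# Gains, ceiling, and blend rule are illustrative assumptions only.

PRIMARY_GAIN = 1.0     # high sensitivity: good in shadows, clips early
SECONDARY_GAIN = 0.25  # low sensitivity: keeps detail in bright highlights
MAX_READING = 4095     # 12-bit readout ceiling (assumed)

def combined_reading(scene_brightness: float) -> float:
    primary = min(scene_brightness * PRIMARY_GAIN, MAX_READING)
    secondary = min(scene_brightness * SECONDARY_GAIN, MAX_READING)
    if primary < MAX_READING:
        return primary                 # primary not clipped: trust it
    return secondary / SECONDARY_GAIN  # clipped: recover from secondary

print(combined_reading(3000))   # within the primary's range -> 3000.0
print(combined_reading(12000))  # primary clips; secondary recovers 12000.0
```

A single-photodiode sensor would have returned 4095 for the bright pixel, losing the highlight detail; the secondary photodiode is what lets the combined reading keep going.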
CCD vs CMOS
There are plenty of articles that discuss the technical details behind both CCD and CMOS technology. There are articles written that provide some comparative analysis as well. Most of this is beyond the intended scope of this article. Instead, a brief paragraph discussing the basics should suffice.
Right up front, it’s safe to say that neither technology is really superior to the other. They both basically do the same thing, just in different ways. In a CCD, every pixel’s charge is shifted out through a very limited number of output channels, depending upon the chip design. This makes CCDs inherently slower than CMOS chips, although adequate speeds can be achieved through more complex CCD designs. CMOS, by comparison, has built-in circuitry to do this conversion on a pixel-by-pixel basis, and it requires less power. In theory, because CMOS does the conversion at each pixel, its uniformity and, similarly, its noise handling are not supposed to be as good as a CCD’s. Yet, in practice, Canon has demonstrated that its SLR cameras offer lower noise and better uniformity than most CCD implementations.
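A toy model makes the readout difference concrete: a CCD must shift every charge packet through a small number of output channels, while a CMOS sensor’s per-pixel circuitry lets it read out roughly a row at a time. The step counts here are schematic, not real timings:

```python
# A schematic comparison of readout patterns, not real chip timings.

PIXELS = 1000 * 1000  # a hypothetical 1 MP sensor

def ccd_readout_steps(channels: int) -> int:
    """Every charge packet is shifted through one of a few output channels."""
    return PIXELS // channels

def cmos_readout_steps(rows: int = 1000) -> int:
    """Per-pixel conversion lets the sensor read out a whole row at a time."""
    return rows

print(ccd_readout_steps(channels=2))  # 500000 sequential transfers
print(cmos_readout_steps())           # 1000 row reads
```

The CCD can narrow the gap by adding output channels (at the cost of chip complexity), which is the “more complex designs” trade-off mentioned above.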
Size of the sensor is most important.
With film, the larger the format, the more likely you are to produce high-quality prints. Surely that’s not the only factor, but it is a factor; there’s a reason you won’t find many professional film photographers using 110 film. The same is true of digital photography. The number of megapixels a sensor has is only part of the equation. I’ll stop short of saying it isn’t important: the number of megapixels is one of several important factors that determine image quality.
A classic example illustrating the “Megapixel Myth” was when Sony introduced the F828. This was an 8 MP camera; Sony simply crammed more photo receptors onto the same small sensor (8.8 x 6.6 mm). To accommodate the larger number of photo receptors, the size of each one was reduced, which lowers the overall signal-to-noise ratio. By comparison, Canon’s 6 MP EOS Digital Rebel had a much larger sensor (22.7 x 15.1 mm). Despite having fewer megapixels, the Canon camera produced significantly better results. Both cameras were targeted at the “prosumer” market. Sony learned from this, and later products in the same line adopted the larger APS-C sized sensor, similar to what Canon’s Rebel was using. So, when it comes to sensors, size does matter!
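Using the sensor dimensions and pixel counts quoted above, we can approximate how much light-gathering area each photosite gets. This ignores the gaps and circuitry between photosites, so the figures are rough upper bounds:

```python
# Approximate area per photosite for the two cameras discussed above,
# using the sensor dimensions and megapixel counts from the text.
# Ignores inter-photosite gaps and circuitry, so these are upper bounds.

def photosite_area_um2(width_mm: float, height_mm: float,
                       megapixels: float) -> float:
    """Approximate area per photosite in square micrometers."""
    sensor_area_um2 = (width_mm * 1000) * (height_mm * 1000)
    return sensor_area_um2 / (megapixels * 1e6)

sony_f828 = photosite_area_um2(8.8, 6.6, 8)        # ~7.3 um^2 each
canon_rebel = photosite_area_um2(22.7, 15.1, 6)    # ~57 um^2 each

print(f"Sony F828:   {sony_f828:.1f} um^2 per photosite")
print(f"Canon Rebel: {canon_rebel:.1f} um^2 per photosite")
print(f"Each Rebel photosite collects ~{canon_rebel / sony_f828:.0f}x "
      f"more light")
```

Roughly eight times the light per photosite goes a long way toward explaining the difference in signal-to-noise ratio between the two cameras.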
There are plenty of articles which discuss sensor sizes, and this article isn’t meant to be a tutorial on them. Instead, I’m raising the issue as a topic worthy of consideration with regard to digital image quality. The link below is one of many articles on the topic:
Another issue to consider with regard to sensor size is the effect it has on effective focal length. This is particularly an issue with dSLR cameras that accept interchangeable lenses designed for full-frame sensors (equivalent in size to a 35mm film frame). While some more expensive dSLR cameras use full-frame sensors, most use the smaller APS-C sized sensor. This typically results in a multiplier (or “crop”) effect on the effective focal length, usually 1.5x or 1.6x. In other words, a 200mm lens on a full-frame camera frames the scene like a 300mm lens would when moved to a camera with a smaller APS-C sensor (at a 1.5x factor). This is not necessarily a good or bad thing; it’s just different. When doing telephoto shooting, many photographers like this multiplier effect. When trying to get a good wide-angle shot, however, it can be a burden.
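The crop-factor arithmetic is simple enough to verify:

```python
# The crop-factor arithmetic from the paragraph above: the same lens
# frames a scene like a longer lens when mounted in front of a
# smaller sensor.

def equivalent_focal_length(focal_mm: float, crop_factor: float) -> float:
    """Full-frame-equivalent framing for a given lens and sensor."""
    return focal_mm * crop_factor

print(equivalent_focal_length(200, 1.5))  # 300.0 -- APS-C at a 1.5x factor
print(equivalent_focal_length(200, 1.6))  # 320.0 -- APS-C at a 1.6x factor
print(equivalent_focal_length(200, 1.0))  # 200.0 -- full frame, no crop
```

Note that the lens’s actual focal length never changes; the smaller sensor simply crops the image circle, which narrows the field of view.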
File format is a factor in image quality.
Finally, it’s worth noting that the format an image is saved in can affect image quality, especially if any post-processing is applied to that image. Without going into significant detail, there are two items worth mentioning here.
1. The JPEG format is a “lossy” compressed format. That is, in the process of compressing the image, some data about that image is lost. How much data is lost is determined by the amount of compression applied. If too much compression is used, visible artifacts appear in the image.
2. Equally important is the amount of precision that is lost when working with JPEG images as opposed to images stored in RAW format. JPEGs typically work with 8 bits of precision per color channel, whereas RAW images typically work with 12 bits. When numerous calculations are applied through multiple filters, this difference can impact quality significantly. RAW images typically have better tonal range than those stored in JPEG format, even when taken with the same camera and lens combination.
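The precision gap is easy to quantify: 8 bits gives 256 levels per channel, while 12 bits gives 4096. The sketch below also counts how many distinct tones survive in the darkest 2% of a smooth gradient at each precision; the 2% cutoff is just an illustrative choice:

```python
# Quantifying the 8-bit JPEG vs 12-bit RAW precision gap.
# The 2% shadow cutoff below is an illustrative choice, not a standard.

def tonal_levels(bits: int) -> int:
    """Distinct values an integer channel of this bit depth can hold."""
    return 2 ** bits

print(tonal_levels(8))    # 256 levels per channel (typical JPEG)
print(tonal_levels(12))   # 4096 levels per channel (typical RAW)

def distinct_shadow_values(bits: int, samples: int = 10_000) -> int:
    """Count distinct quantized tones in the darkest 2% of a gradient."""
    levels = tonal_levels(bits)
    return len({round(i / samples * 0.02 * (levels - 1))
                for i in range(samples)})

print(distinct_shadow_values(8))   # only a handful of distinct shadow tones
print(distinct_shadow_values(12))  # an order of magnitude more gradations
```

This is why aggressive shadow or exposure adjustments band and posterize visibly on a JPEG long before they do on a RAW file.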
Like any new technology, digital photography picked up stereotypes from the problems of its early implementations, and those stereotypes still persist today. Hopefully, people who propagate these misconceptions are not just repeating issues from outdated periodicals, but are instead basing their opinions on their own practical experience. Even then, it should be self-evident from this article that not all digital photography implementations are created equal.
From this article, it should be evident that the Bayer Filter is both the primary method of providing color capability to a device that is inherently incapable of measuring color and the source of many of the flaws introduced into a digital image. However, the technology (both hardware and software) used in digital photography is advancing at an incredibly fast rate. When you look at the state of digital photography just ten years ago, the progress that has been made is nothing short of amazing! One can only imagine the state of digital photography ten years from now.