Archive for the ‘Photography’ Category

Photography: A look at digital sensor technology.

April 22, 2007

This article is meant to augment my previous article on photography, “Digital vs. Film and what really matters”. That article discussed the two mediums at a high level and came away with a few generalizations. It is commonly understood that “film” is a generic term and that film comes in many different grades of quality. Film, however, is a very mature medium, and film technology is advancing relatively slowly. Differences in digital sensor technology are at least as significant in terms of image quality, possibly even more so. This article explores those differences and discusses how they affect image quality.

How do (most) digital sensors work?

In order to appreciate the improvements being made with modern digital sensors, it’s probably necessary to establish a basic understanding of how the average digital sensor works today.

Like film, the purpose of the digital sensor is to collect light focused through a lens. A digital sensor has millions of individual photo receptors (sometimes called photosites) aligned uniformly in rows and columns to collect this focused light. Each of these photo receptors represents an individual pixel in the final image. The sensor information is converted from an analog signal into a digital signal and then stored on the camera’s memory card; the resulting file plays much the same role as a film negative.

A more technical description of this process is as follows. There is a natural phenomenon called the photoelectric effect, whereby electrons are released from a material when it is exposed to light; Albert Einstein received the 1921 Nobel Prize in Physics for his work explaining it. Each photo receptor is insulated from its neighbors. When you take a picture, each photo receptor is charged electrically. As light is focused on a photo receptor, a proportion of its electrons is released, depending on the amount of light that receptor received. The resulting voltages are read, amplified, and converted from analog to digital.
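
To make that chain of events concrete, here is a minimal Python sketch of the capture path from photons to a digital value. All of the constants (quantum efficiency, full-well capacity, ADC bit depth) are invented for illustration and do not describe any particular sensor.

```python
# A toy model of the capture chain: photons striking a photosite free
# electrons (the photoelectric effect); the accumulated charge is read
# out as a voltage, amplified, and quantized by the analog-to-digital
# converter. All constants are invented and describe no real sensor.

QUANTUM_EFFICIENCY = 0.5   # fraction of photons that free an electron
FULL_WELL = 40_000         # electrons a photosite holds before clipping
ADC_BITS = 12              # bit depth of the analog-to-digital converter

def read_photosite(photon_count: int) -> int:
    """Convert photons collected by one photosite into a digital value."""
    electrons = min(photon_count * QUANTUM_EFFICIENCY, FULL_WELL)
    # Normalize the charge and quantize to the ADC's bit depth.
    return int(electrons / FULL_WELL * (2**ADC_BITS - 1))

print(read_photosite(10_000))   # a mid-tone value
print(read_photosite(100_000))  # clips at full well: a blown highlight
```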

In theory, that sounds simple enough, right? Unfortunately, it’s not that simple. For starters, the sensor’s photo receptors can only capture degrees of luminance, not color information. In order to capture color, most sensors use what’s known as the “Bayer Filter”. Named after Dr. Bryce Bayer of the Eastman Kodak company, the Bayer filter is a mosaic color filter array. Essentially, each pixel has a color filter over it. The first row has a pattern of blue, green, blue, green… followed by the next row, which consists of green, red, green, red… etc. This pattern is repeated across every pixel of the digital sensor. [Note: Some manufacturers, like Sony, have used a modified version of this that adds a fourth color, such as emerald, to the filter.] Camera makers use different demosaicing algorithms to reconstruct a full color image, with varying levels of image quality.
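
As a quick illustration of the mosaic, the following sketch generates the filter layout row by row, assuming the blue-green / green-red arrangement described above (real sensors may start the pattern at a different offset).

```python
def bayer_color(row: int, col: int) -> str:
    """Filter color over the photosite at (row, col), using the
    blue-green / green-red row layout described above."""
    if row % 2 == 0:
        return "B" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "R"

# Print a small corner of the mosaic. Note that half of all photosites
# are green, mimicking the eye's greater sensitivity to green light.
for r in range(4):
    print(" ".join(bayer_color(r, c) for c in range(8)))
```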

Demosaicing, anti-aliasing, sharpening, oh my!

While the Bayer Filter does allow for full color images, this benefit comes at a price. The same sensor without the Bayer Filter would yield better true resolution, but would produce a monochrome image. The demosaicing algorithms use sophisticated interpolation techniques in an attempt to preserve resolution while providing color, but these algorithms are never perfect. A typical side effect of the Bayer Filter is what’s known as digital aliasing, which visually appears as irregularly shaped, jagged edges; the image may look artificial in some way compared to a film-based image. To overcome this effect, digital camera makers typically add a low-pass anti-aliasing filter. This smooths the edges, but it also produces a somewhat softer image by default. Most cameras that output JPEG files (as opposed to RAW) will also apply some level of sharpening to compensate for the softening introduced by the anti-aliasing filter. Additionally, most cameras have a filter to block infrared rays, as photo sensors are sensitive to that part of the spectrum.
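
To show what a demosaicing algorithm actually does, here is a deliberately naive sketch that fills in each pixel’s two missing color channels by averaging neighboring photosites. In-camera algorithms are far more sophisticated (edge-aware interpolation, etc.), so treat this purely as a sketch of the idea.

```python
import numpy as np

def bayer_color(row: int, col: int) -> str:
    """Filter color at (row, col) in the B-G/G-R mosaic from above."""
    if row % 2 == 0:
        return "B" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "R"

def demosaic_bilinear(raw: np.ndarray) -> np.ndarray:
    """Naive demosaic: average nearby photosites of the wanted color."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    for y in range(h):
        for x in range(w):
            for ch, color in enumerate("RGB"):
                # All sites in the 3x3 neighborhood carrying this color.
                vals = [raw[j, i]
                        for j in range(max(y - 1, 0), min(y + 2, h))
                        for i in range(max(x - 1, 0), min(x + 2, w))
                        if bayer_color(j, i) == color]
                rgb[y, x, ch] = sum(vals) / len(vals)
    return rgb

# A random 6x8 mosaic of 12-bit sensor values, just to exercise the code.
raw = np.random.default_rng(0).integers(0, 4096, size=(6, 8)).astype(float)
print(demosaic_bilinear(raw).shape)  # (6, 8, 3): full color at every pixel
```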

If it sounds like digital cameras have to jump through a lot of hoops in order to create a quality image, it’s because they do. This technology wasn’t always up to today’s standards, and early digital cameras earned a reputation for image problems such as digital aliasing. Even though these problems have largely been addressed, that reputation of imperfection has stuck with many of the film purists. However, the quality of today’s results speaks for itself: digital images have come a long way over the past ten years.

New technology at the sensor level

One way to avoid using the Bayer Filter is a three-CCD (3CCD) system, in which a prism separates the red, green and blue light onto three separate sensors. The problem is that three sensors are very expensive, so this is not a practical solution for high-megapixel still cameras. However, it is a common solution for higher end digital video camcorders: the resolution needed for video is much lower than for still photographs, so the sensors are smaller and much cheaper to produce.

One of the biggest advances in sensor technology has to be the Foveon X3 sensor. Each pixel has three vertically stacked photodiodes (red, green, blue) to capture the entire color spectrum. Foveon accomplished this by exploiting the fact that different wavelengths of light penetrate silicon to different depths. The benefit of this approach is that the sensor does not need a Bayer Filter to produce color, so it avoids the demosaicing, anti-aliasing and sharpening steps entirely. The Foveon sensor is currently being used in Sigma cameras. I’m not quite sure why other camera manufacturers haven’t yet jumped on this bandwagon; there could be issues of cost or licensing that I’m not aware of. I’ve read that these sensors don’t perform quite as well in low light conditions, but I have not yet used a camera with this type of sensor first hand to verify this. A word of caution to the buyer, though: Sigma advertises a 14MP camera that is actually a 4.7MP camera. Sigma counts each pixel three times, as there are three photo receptors at each location. To me, that’s an unfair marketing gimmick. I’ve seen comments suggesting their 4.7MP (or 14MP, as they call it) camera compares to 10MP Bayer-filter cameras from other vendors. Marketing tricks aside, this seems to be the way to go from my perspective, especially as the technology continues to mature, and I’d like to see other camera companies explore this option further. Interestingly, just as film and traditional digital sensors produce images with different “feels”, this sensor produces images with yet another unique “feel” of its own.
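
To make the pixel accounting explicit, here is the arithmetic behind that marketing claim, simply restating the figures above.

```python
# Pixel accounting for a stacked (Foveon X3-style) sensor, using the
# Sigma figures quoted above: three stacked photodiodes per location.
spatial_mp = 4.7                 # actual spatial resolution of the image
layers = 3                       # stacked red, green, blue photodiodes
advertised_mp = spatial_mp * layers
print(f"advertised: {advertised_mp:.1f}MP, actual spatial: {spatial_mp}MP")
# advertised: 14.1MP, actual spatial: 4.7MP
```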

Another interesting technology comes in the form of Fuji’s Super CCD (http://en.wikipedia.org/wiki/Super_CCD). Fuji’s latest iteration, the SR II format, does two things differently. First, the photo receptors are octagonal rather than square, which apparently allows for a more efficient layout of photo receptors on the chip. Second, there are two photo receptors per pixel, one large and one small. The idea is to more closely mimic the best characteristics of film while retaining the benefits of digital. The result is an image with very low noise and very high dynamic range.

The following Q&A excerpt is from ephotozine’s Super CCD FAQ: http://www.ephotozine.com/article/Fuji-Super-CCD-SR–HR-FAQ-1

Super CCD SR

Q. What are the main benefits of Super CCD SR?
A. Due to an innovative new CCD arrangement, cameras featuring Super CCD SR are able to capture highlight and shadow detail that conventional digital cameras miss. Overall, it will provide a more faithful representation of the actual subject and greater dynamic range. Specific benefits are:
- It combats the bleached-out effect that often ruins flash photography.
- It allows you to shoot confidently even in very bright, contrasty conditions.
- It delivers detail in areas that normally get lost, such as cloud detail outdoors.
- Increased exposure latitude provided by the sensor means that it is more forgiving of incorrect exposure.

Q. How is Super CCD SR different from a normal CCD?
A. Super CCD SR uses a new CCD arrangement, based on the diagonally mapped, octagonal sensor arrangement that Fujifilm pioneered with Third Generation (3G) Super CCD. However, with Super CCD SR, not one, but two photodiodes capture information on the same area of the image (these are arranged in a ‘double honeycomb’ structure).
The sensitive primary photodiode registers the light reflected off the subject at a high sensitivity (similar to a conventional Super CCD photodiode), whilst the secondary photodiode captures highlight information from the same part of the image, recorded at a lower sensitivity.
Because it is set at a lower sensitivity than the primary photodiode (in other words, records a darker image), the secondary photodiode is able to ‘see’ additional detail in bright areas normally beyond the reach of conventional photodiodes. This also frees up the primary photodiode to deliver a better quality rendition of mid to dark tones.
This combination of primary and secondary photodiodes produces an image that is more richly detailed than conventional CCDs, resolving more detail in highlight and dark areas of the image.

Q. Is there a simpler way of explaining the technology?
A. A useful way of explaining this is to compare the technology to an audio speaker. Formerly, audio speakers relied on just one large speaker cone to deliver all of the musical range, meaning that bass and treble notes were obscured. This was overcome by developing a secondary, high sensitivity cone (known as a ‘tweeter’), radically improving the sound quality. The primary and secondary photodiodes in Fujifilm’s new technology effectively mirror the hi-fi speaker. This is why Fujifilm is marketing this as ‘High Fidelity Photography’.
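
Fuji’s actual merging of the two photodiode signals is proprietary, but a toy sketch of the general idea (trust the sensitive primary diode for most tones, then blend in the rescaled secondary reading as the primary saturates) might look like this. The gain and knee values are invented for illustration.

```python
def combine_sr(primary: float, secondary: float,
               secondary_gain: float = 4.0, knee: float = 0.9) -> float:
    """Toy merge of dual photodiode readings, both normalized to [0, 1].

    Below the knee we trust the sensitive primary diode; as it nears
    saturation we blend in the secondary reading, scaled back up by the
    (assumed) sensitivity ratio, to recover highlight detail.
    """
    if primary < knee:
        return primary
    # Blend toward the rescaled secondary value as the primary clips.
    t = min((primary - knee) / (1.0 - knee), 1.0)
    return (1 - t) * primary + t * (secondary * secondary_gain)

print(combine_sr(0.5, 0.12))  # mid-tone: primary used directly -> 0.5
print(combine_sr(1.0, 0.35))  # clipped primary: highlight recovered -> 1.4
```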

CCD vs CMOS

There are plenty of articles that discuss the technical details behind both CCD and CMOS technology, and others that provide comparative analysis. Most of this is beyond the intended scope of this article; a brief paragraph covering the basics should suffice.

Right up front, it’s safe to say that neither technology is really superior to the other; they both do the same basic job, just in different ways. In a CCD, every pixel’s charge is output through a very limited number of channels, depending upon the chip design. This makes CCDs inherently slower than CMOS chips, although more complex designs allow CCD sensors to achieve adequate speeds. A CMOS sensor, by comparison, has built-in circuitry to do this conversion on a pixel-by-pixel basis, and it requires less power. In theory, because CMOS does the conversion per pixel, uniformity and noise handling are not supposed to be as good as with CCDs. Yet in practice, Canon has demonstrated that their CMOS-based SLR cameras offer lower noise and better uniformity than most CCD implementations.
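
A back-of-the-envelope model shows why shared output channels make CCDs slower: if a fixed number of conversions happen per second per channel, readout time scales with how many channels run in parallel. Every number below is invented purely for scale; real sensors differ widely.

```python
# In a CCD, charge is shifted out through a handful of shared output
# channels; in a CMOS sensor, conversion circuitry at each pixel (or
# column) lets many conversions run in parallel. Numbers are invented.

pixels = 8_000_000             # an 8MP sensor
rate_per_channel = 20_000_000  # conversions per second per channel

ccd_channels = 2               # a few shared output taps (hypothetical)
cmos_channels = 3_264          # e.g. one converter per column (hypothetical)

print(f"CCD readout:  {pixels / (rate_per_channel * ccd_channels) * 1000:.1f} ms")
print(f"CMOS readout: {pixels / (rate_per_channel * cmos_channels) * 1000:.3f} ms")
```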

Size of the sensor is most important.

With film, the larger the format, the more likely you are to produce high quality prints. That’s certainly not the only factor, but it is a factor; there’s a reason you won’t find many professional film photographers using 110 film. The same is true of digital photography: the number of megapixels a sensor has is only part of the equation. I’ll stop short of saying megapixels aren’t important; rather, they are one of several important factors that determine image quality.

A classic example which illustrates the “Megapixel Myth” was when Sony introduced the F828. This was an 8MP camera; Sony simply crammed more photo receptors onto the same small sensor (8.8 x 6.6 mm). In order to accommodate the larger number of photo receptors, the size of each photo receptor had to shrink, and this results in a lower overall signal to noise ratio. By comparison, Canon’s 6MP EOS Digital Rebel had a much larger sensor (22.7 x 15.1 mm). Despite having fewer megapixels, the Canon camera produced significantly better results, even though both cameras were targeted at the “prosumer” market. Sony learned from this, and later products in the same line adopted a larger APS-C sized sensor, similar to what Canon’s Rebel was using. So, when it comes to sensors, size does matter!
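
The arithmetic behind this example is easy to check: dividing each sensor’s area by its photosite count gives the light-gathering area per pixel, which correlates (roughly) with the signal-to-noise ratio. A quick sketch using the figures above:

```python
# Photosite area comparison for the two cameras discussed above.
# Larger photosites gather more light per pixel, which improves the
# signal-to-noise ratio (roughly in proportion to collection area).

def photosite_area_um2(width_mm: float, height_mm: float,
                       megapixels: float) -> float:
    """Approximate area of one photosite in square microns."""
    sensor_area_um2 = (width_mm * 1000) * (height_mm * 1000)
    return sensor_area_um2 / (megapixels * 1e6)

sony_f828 = photosite_area_um2(8.8, 6.6, 8)      # ~7.3 um^2
canon_rebel = photosite_area_um2(22.7, 15.1, 6)  # ~57 um^2
print(f"Sony F828:   {sony_f828:.1f} um^2 per photosite")
print(f"Canon Rebel: {canon_rebel:.1f} um^2 per photosite")
print(f"Area ratio:  {canon_rebel / sony_f828:.1f}x")
```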

There are plenty of articles which discuss sensor sizes. This article isn’t meant to be a tutorial on sensor sizes. Instead, I’m raising this issue as a topic that’s worthy of consideration with regards to digital image quality. The link below is one of many articles on the topic:
http://www.cambridgeincolour.com/tutorials/digital-camera-sensor-size.htm

Another issue to consider with regards to sensor size is the effect it has on effective focal length. This is particularly an issue for dSLR cameras that accept lenses designed for full frame sensors (equivalent to a 35mm film frame). While some more expensive dSLR cameras use full frame sensors, most use the smaller APS-C sized sensor. This results in a multiplier effect on the focal length, usually 1.5x or 1.6x. In other words, a 200mm lens on a full frame camera acts like a 300mm lens when moved to a camera with a smaller APS-C sensor. This is not necessarily a good or bad thing, it’s just different. When doing telephoto shooting, many like this multiplier effect; when trying to get a good wide angle shot, it can be a burden.
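
The multiplier is just the ratio of the sensor diagonals. A quick sketch using the full frame dimensions (36 x 24 mm) and the APS-C dimensions quoted earlier:

```python
import math

# Crop factor from sensor diagonals: full frame (36 x 24 mm) vs a
# typical APS-C sensor (22.7 x 15.1 mm, Canon's size from above).

def diagonal_mm(width: float, height: float) -> float:
    return math.hypot(width, height)

full_frame = diagonal_mm(36, 24)     # ~43.3 mm
aps_c = diagonal_mm(22.7, 15.1)      # ~27.3 mm
crop = full_frame / aps_c
print(f"crop factor: {crop:.2f}")                  # ~1.59
print(f"200mm lens acts like: {200 * crop:.0f}mm") # ~317mm field of view
```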

[Image: Sensor Sizes]

File format is a factor in image quality.

Finally, it’s worth noting that the format an image is saved in can affect image quality, especially if any post processing is applied to that image. Without going into significant detail, there are two items worth mentioning here.

1. The JPEG format is a “lossy” compressed format. That is, in the process of compressing the image, data about that image is lost. How much data is lost is determined by the amount of compression applied to the image. If too much compression is used, compression artifacts become visible.

2. Equally important is the amount of precision that is lost when working with JPEG images as opposed to images stored in their RAW format. JPEGs typically work with 8 bits of precision per color channel, whereas RAW images typically work with 12 bits. When multiple filters are applied, each involving numerous calculations, this difference can impact quality significantly, as the sketch below shows. RAW images typically have better tonal range than those stored in JPEG format, even when taken with the same camera and lens combination.
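
Here is a small sketch of why those extra bits matter once you start editing. Starting from a smooth 12-bit gradient, one round of darkening and re-brightening at 8-bit precision collapses thousands of distinct tonal levels into a few dozen, which is the classic cause of posterization and banding.

```python
import numpy as np

# Start with a smooth 12-bit gradient (4096 distinct levels), quantize
# it to 8 bits, then simulate a simple edit chain at 8-bit precision:
# darken by 4x, then brighten by 4x to undo it.
grad12 = np.arange(4096)          # every level a 12-bit RAW file can hold
as8 = grad12 // 16                # quantized to 8 bits (256 levels)
darkened = as8 // 4               # edit #1, performed at 8-bit precision
restored = darkened * 4           # edit #2: brighten back up

print(len(np.unique(grad12)))     # 4096 levels available in RAW
print(len(np.unique(restored)))   # only 64 survive -> visible banding
```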

Conclusions…

Like any new technology, digital photography carries stereotypes rooted in problems with its early implementations, and some of them persist today. Hopefully, people who propagate these stereotypes and misconceptions are not just repeating claims from outdated periodicals, but are instead basing their opinions on their own practical experience. Even then, it should be self-evident from this article that not all digital photography implementations are created equal.

From this article, it should be evident that the Bayer Filter is both the primary method of providing color capability to a device that is inherently incapable of measuring color, and the source of many of the flaws introduced into a digital image. However, the technology (both hardware and software) used in digital photography is advancing at an incredibly fast rate. When you look at the state of digital photography just ten years ago, the progress that has been made is nothing short of amazing! One can only imagine the state of digital photography ten years from now.

Photography: “Digital vs. Film” and what really matters.

January 5, 2007

[Image: Film vs Digital]
Another view on the great “Digital vs. Film” debate. This article explores the issue from a practical standpoint and discusses what really matters.


The Digital vs. Film debate has been ongoing since the advent of digital photography. Early in the debate, there was a real question as to which medium was better, but that isn’t really the issue anymore. Rather, what I find most amusing are the discussions regarding the 35mm film equivalent to digital in terms of megapixels (MP). I’ve seen wildly varying values and talk of a general consensus, etc. This article attempts to explore the “megapixel” issue in more detail.

Which is better, film or digital?

I’ll get right to the point and say that, all things being equal, digital is clearly better. By “all things being equal”, I mean that film comes in multiple formats (35mm, medium format, large format such as 4×5, etc.). It’s fair to say that digital has not completely eliminated the need for film, as there is no real digital equivalent to the high end film formats. However, for the other 99.999% (an estimate, of course) of the photographer population, we’re really only interested in 35mm film. With the quality of today’s digital cameras, there is really no need to continue using 35mm film based equipment. Some older photographers like the look (including the imperfections) of film based prints, but even this can be simulated with digital filters. Of course, it’s not fair to proclaim one medium as better than another without qualifying the reasons behind the claim. In no particular order, digital is better than film because:

1. The development process: There are so many ways things can go wrong with making prints from film. Digital processing is much more consistent. Only the most skilled photographers can get the maximum benefit of film. In practice, the average photographer does much better with digital.

2. Workflow: To get the best image possible, film has to be digitized anyway in order to tweak the color and sharpness, attempt to reduce the noise, etc. In those cases, you’re at the mercy of the quality of your scanner. In terms of convenience, digital is already “digitized”, though those shooting in RAW format will do some development. The convenience of sorting, storing, searching and transferring digital files cannot be matched by film.

3. Much more accurate colors.

4. Better dynamic range through RAW file processing.

5. Far less noise and much more useful ISO range.

6. Film costs money. Digital storage is comparatively cheap.

7. Film degrades over time. Files can be duplicated exactly.

8. Immediate feedback of the quality of the picture taken. This just isn’t possible with film based cameras. Many professional photographers readily admit that learning on a digital camera can save years of trial and error based experimentation with film based cameras.

What is the digital equivalent to 35mm film in megapixels?

This is the more difficult question to answer, because there is no quick and easy way to measure it accurately. Worse, most photography “experts” are by no means qualified scientists. Many feel that their photography credentials (photography being essentially an art form) qualify them to answer what are fundamentally scientific questions, and people feel the need to quote respected photographers on what is essentially their “opinion” on the matter. As such, for 35mm film, I’ve seen ranges from 5 megapixels up to and even beyond 20 megapixels.

Why such a large range and who is right?

One of the main reasons there is such a large range is film itself. That is, not all film is created equal. Film advocates tend to quote numbers from the highest quality, highest priced, finest-grained films available. That’s fine, but in reality the average film purchased is probably the cheapest. In theory, the very finest-grained 35mm films have the equivalent of about 20 megapixels. So, 35mm film = 20 megapixels, right? Film advocates would like to stop there and say yes, but the reality is far different. The film advocates who think 35mm film is equivalent to 20MP apparently aren’t familiar with the Modulation Transfer Function (MTF). In layman’s terms, this refers to the cumulative effect of the lens, the lens + film combination, the scanner, the sharpening algorithms, etc. In short, the theoretical 20MP quickly gets cut in half to about 10MP. More detail about MTF can be found at the link below:

http://www.normankoren.com/Tutorials/MTF.html
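
The key point about MTF is that the components multiply: each stage can only remove contrast, never restore it. A sketch with invented component values shows how quickly the system response falls below that of any single component.

```python
# The system's MTF is (approximately) the product of each component's
# MTF at a given spatial frequency, so every stage in the chain erodes
# the effective resolution. These values are invented for illustration.

lens_mtf = 0.80      # lens response at some spatial frequency
film_mtf = 0.75      # film's response at that frequency
scanner_mtf = 0.70   # scanner's response when digitizing the negative

system_mtf = lens_mtf * film_mtf * scanner_mtf
print(f"system MTF: {system_mtf:.2f}")  # 0.42 -- well below any one part
```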

This is where the industry “consensus” comes in. The general consensus is that in terms of resolution only, 35mm film is close to 10MP on digital cameras. That would be fine if resolution were the only factor involved with image quality, but that’s not the case. The four major factors that impact image quality are resolution, noise/grain, dynamic range and color. Digital cameras have several times the signal to noise ratio of film cameras. Even among digital cameras, quality varies: digital SLRs have a much higher signal to noise ratio than “point and shoot” digital cameras because of their larger (and more expensive) internal sensors. Likewise, a 6MP digital SLR will produce better quality images than an 8MP point and shoot digital camera.

I’ve read many articles in magazines and many web based articles on the topic. I’ve come to the conclusion that most photographers are not scientists and likewise don’t really understand the dynamics that come into play when making such comparisons. While I encourage everyone to read as much information as they can on the topic, ultimately you’ll have to come to your own conclusion. There is one web site that I can recommend on this topic; it’s one of the few with real scientific credibility behind it.

http://www.clarkvision.com/imagedetail/film.vs.digital.summary1.html

Of particular interest is a graph which shows resolution on the vertical axis and ISO values on the horizontal axis. The quality of film photography quickly drops off across the ISO range. What’s the point of enlarging a photo if it’s going to look like crap? Just because film has potential resolution doesn’t mean the images will be visually appealing if they are grainy and noisy. I’ll present my own conclusions, a combination of my own observations and my research on the topic, in the conclusion section below.

[Image: Film Quality]

Also, when researching the topic, I’ve found that most of the articles are old and largely out of date. Film is relatively stagnant in terms of improvements, while digital imaging technology has been progressing at a very fast pace. For example, the first consumer level digital camera was the Apple QuickTake 100, released in February 1994. It would be a joke to compare that to film; clearly, film was far superior at the time. For more digital photography history, see the link below.

http://inventors.about.com/library/inventors/bldigitalcamera.htm

By 2001, 5MP cameras were coming onto the scene that posed a serious challenge to 35mm film capabilities. By 2003, there were heated debates on the topic as digital cameras from several vendors were exceeding 35mm film capabilities. As of this writing, December 2006, nobody with an ounce of credibility can deny that digital photography has long since eclipsed the capabilities of 35mm film based photography. In fact, higher end Canon DSLRs are producing images that match or rival medium format film cameras.

How much is enough?

In the printing business, the rule of thumb for high quality output is that you want to keep your images in the 200 – 300 ppi (pixels per inch) range. Essentially, based on printing technology, nothing above 300ppi can be realized in terms of quality, while anything under 200ppi is where you might start to notice a drop in quality. So, for the best quality images, how much resolution is needed? A 4” x 6” photo at 300ppi would require a 2.16MP image: (300 x 4) x (300 x 6) = 1,200 x 1,800 = 2,160,000 pixels, or 2.16MP.

Print Size   300ppi    200ppi
4×6          2.16MP    0.96MP
5×7          3.15MP    1.40MP
8×10         7.20MP    3.20MP

Etc…
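
The table above follows from simple arithmetic: pixels needed = (ppi × width) × (ppi × height). Here is a short sketch that reproduces it and extends it by one row (the 11×14 size is my own addition).

```python
# Megapixels needed for a given print size at a given print resolution.

def megapixels(width_in: float, height_in: float, ppi: int) -> float:
    return (ppi * width_in) * (ppi * height_in) / 1e6

for w, h in [(4, 6), (5, 7), (8, 10), (11, 14)]:
    print(f"{w}x{h}: {megapixels(w, h, 300):.2f}MP @300ppi, "
          f"{megapixels(w, h, 200):.2f}MP @200ppi")
```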

So, why would you want more resolution? More resolution allows you to crop a portion of your image and still make a reasonably sized print out of it. The resolution chart above also explains why digital photography really took off at the consumer level once affordable digital cameras were available in the 2 – 3 MP range: most amateur photographers rarely need anything larger than 4×6 prints.

Conclusions…

Based on my own observations and research into the many factors of image quality, I’ve come to the following conclusion: there is no exact correlation between 35mm film and any single digital megapixel count. Image quality varies based on the quality of the film used; the ISO setting for the picture taken; not just the number of megapixels, but also the size of the digital sensor; the signal to noise ratios; etc. That said, there are a few generalizations that can be made about the 35mm film and digital mediums.

1. With “regular” film, shooting at a standard ISO 200 on a film based point and shoot camera is equivalent to about a 5MP point and shoot digital camera.

2. At one impractical extreme, under the most ideal lighting, at ISO 50, with the best quality film possible, 35mm film can achieve quality somewhere in the 10+ MP digital equivalent range.

3. At the other impractical extreme, at ISO 1600 and with lesser quality film, 35mm film degrades to somewhere in the 2 – 4 MP digital equivalent range.

4. Overall, across a range of settings, a decent quality digital camera in the 6 – 8 MP range will almost always beat 35mm film based cameras in terms of the overall quality of image produced.

Since there are many factors which influence image quality, it’s not hard to understand why there is no single magic answer as to how many megapixels are equivalent to 35mm film. Regardless of how many people repeat the same “consensus”, understand that there is no single answer; any source that tries to peg 35mm film at a single, precise megapixel equivalent is simply incorrect.