Digital cameras have largely supplanted film in the last decade, and as they have grown in popularity and dropped in price, they have exploded in pixel count. The first really practical digital camera, in my estimation, was the Sony Mavica (that was back when Sony still cared about its customers and didn't do stupid things like install rootkits on their computers). I say "practical" in the sense that it was really the first camera that allowed unlimited photography. Other digital cameras of the day used proprietary memory cards which were outrageously expensive and negligible in capacity. The Mavica recorded on floppy disks at a whopping 640 by 480 pixel resolution, a whole 0.3 megapixels. Later Mavicas (Mavicae?) used mini-CDs. I had great experiences with mine, although some folks had awful luck. I had a few disk failures, but then again I had a few rolls of film come out badly, too. I literally wore one out, and then switched to other brands when SD cards came along. By that time, Sony had gone "screw you" and was using proprietary batteries.
So now I've got a number of digitals. My good camera is an 8-megapixel Canon, and my backup, for when I need something very portable or am concerned about damage, loss, or theft, is a 5-megapixel HP. No, I will not touch anything that uses proprietary cards, batteries, or anything else. You sold me the camera; thanks, and see you next time I need a new one. (There seems to be an emerging belief in some business circles that businesses are entitled to a continuing revenue stream every time a user makes use of a product. The latest is a proposal that developers get a fee every time a house is resold. We need to smash this trend like a cockroach.)
All images are inherently blurry. This is built into the wave nature of light and, to quote Scotty, "I cannae change the laws of physics." And believe me, people have tried. The payoffs would be huge.
The culprit is something called diffraction. Any time light passes an edge, be it the edge of a mirror, a lens, or merely a pinhole, secondary ripples emanate from the edge (A-B, below). These reinforce or cancel out the original light (C). Thus, a perfect image of a star consists of a bright central disk surrounded by alternating bright and dark rings. An astronomer who saw this in a telescope would be elated. It would mean both his instrument and the steadiness of the atmosphere were perfect.
From the two left diagrams below, it's clear that large apertures are less affected by diffraction. From the two on the right, it's clear that long focal lengths (longer distance from lens to image) are more affected than short ones. So both diameter and focal length are important. For many aspects of photography, the ratio of the two is critical. The ratio of focal length to diameter (or aperture) is called the f-ratio. Two lenses will experience the same diffraction if they have the same f-ratio.
When light passes through an opening, the diffraction pattern has the appearance below. The image of the star has a bright central disk, called the Airy disk, surrounded by bright and dark rings. (Lots of stuff in physics, optics, and astronomy bears the name "Airy" after George Biddell Airy, Astronomer Royal of England from 1835 to 1881.)
A profile through the image is below. 84 per cent of the light is in the Airy disk so most analyses of imaging focus (no pun intended) on the Airy disk and neglect the rings.
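The 84 per cent figure can be checked numerically. For a circular aperture, the fraction of the pattern's total energy inside dimensionless radius x is 1 − J0(x)² − J1(x)², where J0 and J1 are Bessel functions, and the edge of the Airy disk is the first zero of J1. A quick sketch using SciPy (an assumption on my part; nothing here is specific to any camera):

```python
# Check the ~84% encircled-energy figure for the Airy pattern.
# For a circular aperture, the fraction of total energy inside
# dimensionless radius x is E(x) = 1 - J0(x)^2 - J1(x)^2,
# where J0 and J1 are Bessel functions of the first kind.
from scipy.special import j0, j1, jn_zeros

x1 = jn_zeros(1, 1)[0]  # first zero of J1: the first dark ring (~3.8317)
encircled = 1 - j0(x1)**2 - j1(x1)**2
print(f"First dark ring at x = {x1:.4f}")
print(f"Energy inside the Airy disk: {encircled:.1%}")  # ~83.8%
```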
Star atlases generally show bright stars larger than faint stars, and people instinctively refer to bright stars and planets as "bigger." In a sense, they really are. A bright star will have a bigger Airy disk above the visual threshold than a faint one, even if the stars are far too tiny for us to see their actual sizes. In fact, it may even be bright enough to see the diffraction rings. Imperfections in our lenses and visual processing also cause bright objects to spread out more than faint ones. It happens with cameras, too, and for pretty much the same reasons.
Digital cameras record color by superimposing a color filter array over the pixels on the imaging chip. The most common pattern is a checkerboard called a Bayer RGB filter. Half the squares are green, a quarter red and a quarter blue, matching the spectrum of sunlight and the color sensitivity of the eye. A "pixel" in a digital camera is actually a square of four sensors, two green and one each blue and red.
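As a sketch of that layout, here is the standard 2x2 Bayer tile counted over a hypothetical sensor (the 640-by-480 frame size is arbitrary; any even dimensions give the same proportions):

```python
# A Bayer RGB filter is a repeating 2x2 tile:
#   G R
#   B G
# so half the sensor sites see green, a quarter red, a quarter blue.
def bayer_color(row, col):
    """Filter color over the sensor site at (row, col)."""
    if row % 2 == 0:
        return "G" if col % 2 == 0 else "R"
    return "B" if col % 2 == 0 else "G"

width, height = 640, 480  # illustrative only
counts = {"R": 0, "G": 0, "B": 0}
for r in range(height):
    for c in range(width):
        counts[bayer_color(r, c)] += 1

for color in "RGB":
    share = counts[color] / (width * height)
    print(f"{color}: {counts[color]:6d} sites ({share:.0%})")
```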
It seems pretty clear that making pixels far tinier than the Airy disk of your lens won't accomplish much. The image can't record any detail smaller than the Airy disk, so what's the point? The question is, does it do any harm? Some watchdog groups insist it does. I'm not so sure.
The radius of the Airy disk, out to the first dark ring, is 1.22wf, where w is wavelength and f is the f-ratio. If w = 500 nm or 0.5 microns, then the radius is about 0.6f microns. For f=8, a typical daylight exposure setting on a digital camera, the Airy disk radius is thus about 5 microns. For f=4 it's about 2.5 microns. Any pixels much smaller won't improve image quality.
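The arithmetic is simple enough to sketch (a 500-nm wavelength is assumed, as above; the function name is mine):

```python
# Airy disk radius at the focal plane (center to first dark ring):
#   r = 1.22 * wavelength * f_ratio
def airy_radius_microns(f_ratio, wavelength_microns=0.5):
    """Radius of the Airy disk in microns, assuming green light."""
    return 1.22 * wavelength_microns * f_ratio

for f in (4, 8, 16):
    print(f"f/{f}: Airy disk radius is about {airy_radius_microns(f):.1f} microns")
```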
Critics of multi-megapixel cameras point to the following objections:
Objection 1, as far as I can see, has no basis in physics. Every image consists of innumerable overlapping Airy disks. Also, for every light ray that hits a pixel dead center, there will be innumerable others that strike the edges and overlap neighboring pixels.
Also, 84 per cent of the light is in the Airy disk, meaning 16 per cent is not. In a typical scene, therefore, about one sixth of the light is in the diffraction rings and overlaps other Airy disks. One sixth of every picture is "stray" light. That's a lot.
Objection 2 has some validity, but considering that every Airy disk and its diffraction rings overlaps the Airy disks and diffraction rings of other light rays, I suspect it's not a major issue.
Objections 3 and 4, on the other hand, are legitimate and number 5 is dead on target.
The Airy disk analysis suggests that pixels smaller than a few microns don't contribute anything. How does that compare with film and the human eye?
High quality films typically have a resolution of about 100 line pairs per millimeter, that is, they can resolve 100 black lines separated by 100 white spaces. That corresponds to 200 pixels per millimeter (alternately black and white) or 5 microns per pixel.
On the other hand, nobody has ever complained that films offered too much resolution, and some microfilms have claimed up to 800 line pairs per millimeter. You might be able to use a very small f number and get Airy disks down to a micron or so, but a small f ratio would result in a very small depth of field, which would mean the edges of a document wouldn't be as sharp as the center. So whether 800 line pairs is useful might be debatable, but the point is, nobody ever claimed it was harmful.
I digitized my archive of 30,000 slides, and, since I didn't want to do it twice if better technology came along, I carefully compared the digitizing results with the actual slides under a microscope. I became convinced that a width of 2500 pixels captured just about everything that was physically on the slide. Now 100 line pairs per millimeter corresponds to 7200 pixels across the 36-mm width of a 35-mm slide. (The width runs along the film. The height of the slide uses only 24 of the 35 mm width of the film. Most of the rest is sprocket holes, a legacy of the days when 35-mm film was movie film.)
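The film-resolution arithmetic above is worth laying out explicitly:

```python
# 100 line pairs per millimeter means 200 pixels per millimeter
# (each pair = one black line plus one white space), i.e. a
# 5-micron pixel, and 7200 pixels across the 36-mm width of a
# 35-mm slide.
line_pairs_per_mm = 100
pixels_per_mm = 2 * line_pairs_per_mm           # 200
pixel_size_microns = 1000 / pixels_per_mm       # 5.0
frame_width_mm = 36                             # long dimension of the frame
pixels_across = pixels_per_mm * frame_width_mm  # 7200
print(pixels_per_mm, pixel_size_microns, pixels_across)
```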
So if film can record 7200 pixels, why be satisfied with only a third of that? I might achieve 7200 pixels - if - I always used a tripod, had absolutely perfect lenses, took pains to focus with extreme precision, never shot through windows or from moving vehicles, shot through absolutely clear air, shot subjects where no wind was blowing, and shot subjects that had 7200 pixels of detail to record. Considering how many "f/1.4 at next Thursday" pictures I've had to shoot, 2500 pixels is probably way overkill in many cases.
The spacing of cone (color) cells in the human eye is variable but averages about 5 microns. So evolution has not seen any pressure to supply pixels finer than that. Eagles have closer spacing, toward the 2.5-micron size. I suspect the fabled visual acuity of eagles has more to do with learning than actual wiring - if our survival depended on catching food from the air, we'd learn to see a mouse from far away, too. R. Shlaer, writing in "An eagle's eye: quality of the retinal image" (Science. 1972 May 26;176(4037):920-2), wrote: "The optical quality of a living eagle's eye was determined by an ophthalmoscopic method. The performance of the eye was substantially better than that reported for humans, but did not confirm some of the wilder claims made for such birds."
Two revealing statistics: HDTV (high-definition TV) delivers about 2 megapixels, and James Cameron shot Avatar at 2.2 megapixels. Although you can bet his pixels were pretty big.
I decided to do a comparison of my Canon against the HP. I used the "what can I throw together for a camera test" method, which consists of rummaging around for whatever I can find at the spur of the moment. For a target, I used a page of text, which I arranged carefully using gravitational positioning (tossing it on the floor). Then I took a picture from chest height using each camera, zoomed in on each image on my computer and clipped the same text for comparison. Both images were very good, allowing the text to be blown up much larger than actual size (and recall the pictures were taken from about 1.5 meters away) with perfect legibility. But to no particular surprise of mine, the Canon image was distinctly better: sharper and with better contrast and less noise. Oho, I said, the pixel size will tell all....
And....the chip specifications for both cameras turned out to be virtually identical!
Now the HP does a good job for most purposes, but at the limits I had already noted some color fringing in the corners of the frame. I am sure that's a lens issue and not a pixel issue. And for the record, there is no higher high tech than optics. What even disposable plastic cameras achieve with one-piece molded plastic lenses is amazing. Color photography imposes incredibly stringent demands on lenses, and some lenses are observably better than others. That's one reason cameras vary in price. Since one site critical of multi-megapixel cameras displays an image with conspicuous fringing, I suspect the image problems are optical as well as digital. Chip manufacturing and internal processing may be equally important.
Down to a certain level, more and smaller pixels probably do little if any harm. The main disadvantages are noisier data and bloated file sizes. If I ever have to go to a 12-megapixel camera, I'll keep my pictures around 2500-3000 pixels wide until I see serious reason to save bigger files. And I definitely won't pay more just to get more, but smaller, pixels. The hyping of more pixels is shady marketing, and the practice of counting each sub-pixel in a Bayer RGB array as a pixel crosses the line into deception.
I suspect the limit of pixel size may be dictated by quantum effects in the chip, electronic noise, and the physical inability to achieve useful resolution on a tiny chip. 2 megapixels means a square about 1400 pixels on a side. If 2.5 microns is the lower limit of useful pixel size, that means about 3.5 mm on a side. Since chips now are a bit under a centimeter wide, we can maybe go a bit smaller, but not a whole lot. I would like to have a camera I can mount undetectably on my glasses. A chip 3.5 mm (just over 1/8 inch) wide with a tiny lens and an inconspicuous wire to storage comes pretty close (a wireless feed would be even better).
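The chip-size estimate works out like this:

```python
import math

# A 2-megapixel square chip is about sqrt(2,000,000) ~ 1414 pixels
# on a side; at a 2.5-micron pixel pitch that comes to about
# 3.5 mm on a side.
pixels_per_side = math.sqrt(2_000_000)
pitch_microns = 2.5
side_mm = pixels_per_side * pitch_microns / 1000
print(f"about {pixels_per_side:.0f} pixels and {side_mm:.1f} mm on a side")
```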
The worst thing about digital cameras is their sluggishness. I could point and shoot a film camera and it would take the shot in a hundredth of a second. Digital cameras waste several seconds racking the lens into position because a retracted lens is more palatable to people who can't deal with cameras for grownups. Make the camera a lousy 3/4 inch bigger, keep the lens in position at all times, and use the most recent settings for exposure.
So the watchdogs warning about megapixel hype have their hearts in the right places, and claims of 10+ megapixels are overblown and unnecessary, but fraud? I don't quite think so.
Created 1 April 2010; Last Update 11 January, 2020
Not an official UW Green Bay site