High Dynamic Range - HDR
This is a discussion of how HDR works perceptually and how to use it effectively, not a "How to do HDR" tutorial. HDR is necessary because digital cameras can't record highlight and shadow detail at the same time outdoors when scenes have more contrast - difference in reflected brightness - than the sensor can handle. That's not a new problem. Color film has the same limitation. B&W film didn't, because the contrast of the negative could be manipulated with development, but that doesn't work with the three-layer color sandwich of color film.

When making the transition from B&W to color, photographers devised work-arounds to the problem, such as moving a portrait subject into the open shade where the lighting had less contrast, putting the subject's back to the sun and exposing for the shaded face (which is the same as shooting in open shade except the hair becomes a nuclear halo), or shooting with the subject's back to the sun and lighting the front side with flash, either poorly with a single eye-level flash or with the same lighting techniques that flatter a face indoors (see my outdoor flash tutorial).

None of that helps the outdoor sports or landscape shooter who shoots beyond the range of flash in lighting they can't control. But they also developed strategies to cope with the limited range of color film. A photographer can't move the Grand Canyon or second base, but they can move their position relative to the direction of the light so it doesn't put the important detail in shadows beyond the range of the sensor.

You may have read stories about landscape shooters who wait hours, days, or even months for the sun to be in the spot in the sky needed to produce the lighting for the shot they took. Look closely at the lighting in most outdoor sports shots and you'll see that the photographer positioned himself where the subject's face and eyes were in the light, or cropped in a way that a blown-out background isn't noticed.

Digital sensors have the same scene contrast limitations as color film. They have no problem handling the contrast of an overcast day, but in direct sun, exposing correctly for the highlights on the cheekbones will render the eyes shaded by the brow very dark. The limited range causes a loss of shadow detail when the exposure is set to retain the highlights, but that's really not the problem perceptually.

Perceptually what matters in a portrait, and most other photos, is that the mid-tone values like the skin highlights and shadows are rendered normally. What is the baseline for normal? What is perceived by eye. The crux of the photographic range dilemma is the fact that perception is based on expectations, and exposing for the highlights in a digital shot makes the MIDTONES darker than the viewer would expect, which makes them look abnormal.

The limited range is a function of how sensors work. A 14MP sensor is like a matrix of 14 million buckets, each with a finite capacity to capture photons. The bigger the buckets, the longer it takes to fill them in the highlights, leaving more time for the buckets in the shadows to absorb light and capture a greater amount of shadow detail before the highlight buckets are full and the metering closes the shutter. That's why a larger full-frame 14MP sensor with larger imaging sites will usually have a greater dynamic range than a 1.6x crop sensor with the same MP rating: bigger buckets.
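
If you like to see the arithmetic behind the bucket analogy, here is a minimal Python sketch; the electron counts are made-up illustrative numbers, not specs for any real camera.

```python
# Rough sketch of the "bucket" analogy: usable dynamic range in stops is
# roughly log2(full-well capacity / noise floor). The electron counts below
# are illustrative assumptions, not specs for any particular camera.
import math

def dynamic_range_stops(full_well_electrons, noise_floor_electrons):
    """Approximate dynamic range in stops (EV) for a single photosite."""
    return math.log2(full_well_electrons / noise_floor_electrons)

# Hypothetical full-frame photosite (bigger bucket) vs. crop-sensor photosite
print(round(dynamic_range_stops(60000, 8), 1))   # ~12.9 stops
print(round(dynamic_range_stops(25000, 8), 1))   # ~11.6 stops
```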

The sensor captures the same amount of light at ISO 100 as it does at ISO 12,000; it's just amplified more at higher ISOs, and the camera renders anything below a certain signal level as black to avoid amplifying that noise, the random speckling you see in underexposed shadows at high ISOs. There is no detail there, just amplified noise. When you start with one file you can't expand the range of detail the camera captured; you can only dig down into the detail it did capture above 0 and amplify it in a way that fools the eye perceptually. To grasp this visually, take any photo, open it in Levels and move the middle slider back and forth. The amount of detail in the photo doesn't change, but your perception of it will.
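
For the curious, here is a minimal sketch of what that Levels middle slider is doing, assuming an 8-bit grayscale image loaded as a NumPy array; the gamma values are arbitrary.

```python
# Sketch of the Levels middle slider: a gamma adjustment remaps midtones
# without adding any detail that wasn't captured. Assumes an 8-bit grayscale
# image as a NumPy array (e.g. loaded via Pillow); gamma values are arbitrary.
import numpy as np

def levels_midtone(image_8bit, gamma):
    """Apply a Levels-style midtone (gamma) adjustment to an 8-bit image."""
    normalized = image_8bit.astype(np.float64) / 255.0
    adjusted = normalized ** (1.0 / gamma)      # gamma > 1 lightens midtones
    return (adjusted * 255.0 + 0.5).astype(np.uint8)

# Example: a synthetic gradient stands in for a photo
img = np.tile(np.arange(256, dtype=np.uint8), (64, 1))
lighter = levels_midtone(img, 1.8)   # shadows/midtones look lighter,
darker = levels_midtone(img, 0.6)    # or darker -- same captured data
```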

Perception plays a big role in exposure, and the entire photographic process is really like a magic trick that fools the brain of the viewer into thinking 2D contrast patterns are real 3D objects. But the trick doesn't work well if the photograph doesn't render the contrast differences in ways the viewer recognizes as "normal" based on a lifetime of viewing similar things in person.

The ability to associate a 2D image with a 3D object isn't something we are born with; it's learned from infancy, from the moment we open our eyes. A baby, by playing with balls, blocks and other shapes, learns to associate their shape with the pattern of highlight and shadow contrast the light creates on them. It is not until about the age of one that a child begins to be able to associate with and react emotionally to a photograph of their parents or familiar toys.

In the mid-70s I worked at National Geographic making maps and doing reproduction photography: halftones, duo-tones, tri-tones and color separations. One of my jobs was making a halftone reproduction of the mountain relief for the maps, which was drawn by an artist with a pencil. The artist depicted the 3D shape by drawing just the shadows the mountain would cast in the afternoon when the light was at a 45° angle. When you see the shadows, the brain connects the contrast pattern with the shape.

Lighting in photos works the same way to create 2D contrast patterns the brain recognizes as 3D shapes. The brain can't detect shape when there aren't any shadows (flat lighting). It can detect overall shape when there are shadows without detail, but when the viewer EXPECTS to see detail and doesn't (e.g. brow-shaded eyes in a sunlit portrait) it doesn't look "normal". The most "natural" appearance of faces is perceived when they are illuminated from above and to the side, from about a 45° vertical and 45° horizontal angle FROM THE NOSE, because the clues about 3D shape come mostly from the shadows.

Content and context - the surroundings the important content is in - affect perception. I noticed that in these two shots I took for my high-speed-sync flash tutorial:

DR_FlatLight

DR_Backlight

Those two shots were taken a few minutes apart at 11 AM, the first facing west, the second east. Both were exposed, using the clipping warning, to keep the brightest area, the towel, below clipping. Both, technically, are correctly exposed for the highlights, yet the second looks underexposed. Why? Content and context affect perception.

In the first, the things you recognize and know what the tones should be are in the sun, within the range the sensor can capture accurately. In the second, the same things are in the shadows (light 3 stops less intense) and fall below the range of brightness the sensor could record accurately.

What would most photographers decide to do? Increase exposure in the second photo by 2 or more stops to expose the important stuff in the foreground correctly "perceptually". That will nuke the detail in the highlights on the towel, but most viewers wouldn't even notice since that content isn't as interesting or important. That would be even more the case if it were a backlit face being correctly exposed perceptually.

A digital camera can record a 6-8 stop range with detail. A simple test to find your camera's range is to take a fast lens and a gray card, open the aperture all the way, and adjust the shutter until the card image filling the frame has an eyedropper reading of 250, just below clipping. The big narrow spike the card creates on the histogram will be kissing the right edge. Then bracket in 1-stop increments with the aperture. The spike will march left across the histogram to the other side and the card image will get progressively darker. After closing the aperture about 6-8 stops (if you run out of f/stops on the lens, cut the shutter speed in half) the card image will read about 30 with the eyedropper, the level just above solid black at which you can still detect shadow detail.
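
Here is a quick back-of-the-envelope check of that test in Python, assuming a simple gamma-2.2 encoding stands in for the camera's tone curve (real cameras use their own curves, so treat the numbers as approximations):

```python
# Rough arithmetic behind the gray-card test: each stop halves the linear
# exposure, and an approximate gamma-2.2 encoding maps that back to 8-bit
# values. The gamma model is an assumption for illustration only.
GAMMA = 2.2

def eyedropper_after_stops(start_value_8bit, stops):
    """Predict the 8-bit reading after reducing exposure by N stops."""
    linear = (start_value_8bit / 255.0) ** GAMMA
    linear /= 2 ** stops
    return round(255.0 * linear ** (1.0 / GAMMA))

for stops in range(0, 9):
    print(stops, eyedropper_after_stops(250, stops))
# Starting near 250, the predicted reading drops to roughly 30 after about
# 6-7 stops, the point where shadow detail disappears into black.
```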

This is an exposure vs. tone road map I created for my 20D that way a few years ago; it told me I can expect about 6 stops of detail I can see in a scene.

HistogramTest

Why is knowing the camera sensor's range important? So you will know how much to overexpose with a second shot to capture the entire scene range WITH detail, or how much flash is needed if you aren't using a tripod and need to capture the full range with one exposure...

DR_FillHisto

With a hand-held 1° spot meter it is possible to precisely determine the scene range. I've been there and done that with the zone system 40 years ago, so I know an average flat-lit outdoor scene has about a 10-stop range. In a cross-lit scene with dark stuff in the shadows and light stuff in the sun, the range will be higher, around 12 stops. At the beach or with snow the range will be even higher. Any scene range longer than your camera's range means exposing for the highlights will cause a loss of shadow detail. It will also make the midtones darker than perceived by eye.

When I take a multi-exposure HDR shot I don't bother with metering. I put the camera on a tripod and first use the clipping warning to record my baseline shot with all the non-specular highlights below clipping. Then I bracket +1, +2, +3, +4 stops with the shutter, so DOF does not change, to record the detail that fell below the range of the sensor in the first shot. Back on the computer I look at the over-exposed shots, see which one has the shadow detail I saw with my eyes or want revealed in an exaggerated way, and use it for blending. Three, four, or five exposures are not really needed, just two that record detail in the entire range of the scene, and a +4 stop exposure does that for most scenes.
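
For those who blend digitally, here is a minimal sketch of that two-exposure idea, assuming the baseline and +4 stop frames are aligned 8-bit NumPy arrays (loaded however you like); the threshold and softness values are arbitrary starting points, not a recipe.

```python
# Minimal sketch of a two-exposure manual blend: keep the darker (baseline)
# exposure and reveal the lighter exposure only in the shadow areas via a
# smooth luminosity mask. Threshold/softness are illustrative assumptions.
import numpy as np

def blend_exposures(base_8bit, plus4_8bit, threshold=80.0, softness=60.0):
    """Blend a +4-stop frame into the shadows of the baseline frame."""
    base = base_8bit.astype(np.float64)
    plus = plus4_8bit.astype(np.float64)
    # Luminosity of the baseline decides where the lighter frame shows through
    luma = base.mean(axis=2) if base.ndim == 3 else base
    mask = np.clip((threshold - luma) / softness, 0.0, 1.0)
    if base.ndim == 3:
        mask = mask[..., None]
    blended = base * (1.0 - mask) + plus * mask
    return np.clip(blended + 0.5, 0, 255).astype(np.uint8)
```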

The genius behind the iconic B&W shots of Ansel Adams is how they reveal more detail than you'd typically see by eye in person from the same point of view. The eye, like a camera, has a limited range, and when a bright sky is in the field of view it will affect the ability to perceive detail in the shadows. But in viewing a scene in person the eye will adapt to the brightness of whatever is being focused on: the center 2° of the field of vision, about twice the width of a thumb held at arm's length.

Adams understood that adaptive trait of human perception and found a way to manipulate the negative/print reproduction process to simulate, within the tonal range of the print, what the brain of a viewer would PERCEIVE when it stitched together all the snippets of information gathered as the eyes dart around. The eyes see 140° and the brain filters and edits. A photographer does the same thing, editing, when selecting what part of their panoramic vision to include in the photo, but most don't understand the other perceptual dynamics that make what is perceived, and the emotional reaction it creates, different from what is actually there and what the camera records.

By recording the entire brightness range a person would typically see when scanning and adapting to brightness, Adams created in his photos a facsimile not just of what was there, but of what it FELT LIKE TO BE THERE IN PERSON. Beyond capturing the entire tonal range of the scene by controlling exposure and development of the negative, Adams manipulated the viewer's perception of the print image by using panchromatic film and red filters to artificially darken the sky. A red filter on the lens when shooting B&W will make anything blue darker than seen by eye and anything red, like the granite peaks in Yosemite, seem lighter than seen by eye. In his landscapes where the foliage seems vibrant he used a green filter. Because Adams understood perception he was able to find the perfect balance in which the viewer was sucked into the alternative reality of the photograph perceptually and emotionally. I taught myself the zone system from Adams' books in 1970-72 while in college, focusing only on the technical aspects of making it work without fully appreciating until years later why it worked.

Human perception adapts by finding the lightest and darkest tones in the field of view, then comparing everything else to them. That's why it's important to have a full range of tone in any photo. When a photograph doesn't look "right" or "normal", most often it is because the shadows are lighter and more washed out than the viewer would expect based on seeing similar things in similar context, or because the highlight detail is either blown out or too dark. In person the eyes will try to correctly expose the highlighted parts of the scene because in most scenes that is what contrasts the most with the background. In scenes where the background is predominantly light, the pupil of the eye will adapt to the bright background and lose the ability to discern detail in the shadows. So either way, light or dark background, the degree of CONTRAST WITH THE BACKGROUND affects perception.

In most photos the detail in the shadows really isn't very important. On a medium dark background the eye will quickly be drawn over the darker parts towards the contrasting lighter ones. On a large white background the eye will have trouble seeing the detail in objects darker than middle gray because the pupil of the eye adapts to the glare of the background; the same cause and effect as being blinded at night by the headlights of an oncoming car.

Every photo tells a story or seeks to evoke an emotional reaction from the viewer. In a photograph, or any other 2D artistic medium, it is the placement of objects in the frame and the contrast used to guide the viewer to them which creates both the timing of when things are revealed and the inflection: their relative importance to the story being told. There are many forms of contrast. Tone, color, shape, texture, relative size, and relative sharpness are all factors in how a viewer's eye will move around a photo.

Understanding how human perception works affects how I light and compose photos. I know from studying human physiology and psychology that the eyes have very shallow DOF and that the brain tunes out and ignores anything not in the center 2° of vision (where all the color-sensing cells are located) and is attracted by contrast of many types (tone, color, sharpness, shape, relative size), in part because the rest of the retina is covered with rods that only react to blue-green but are 3000x more sensitive to light. That's why the brain has a "hair trigger" for contrast. In person our eyes react strongly to any contrasting object in the 140° field of view, and the brain will redirect the eyes to go check out whatever contrasts the most. It's an ingrained evolutionary survival skill - if it moves (a form of contrast) it's either a potential food source or a potential threat. In a still photo nothing moves except the eyes of the viewer, and composition is all about controlling where they go and when they see things. Like a joke, a visual narrative is more effective when the "punchline" is at the end, and the skill in leading the audience to it is in the timing and the inflection, as well as the content.

I try to either put the focal point in the foreground and contrast it with shallow DOF and tone, or, if the focal point is in the middle ground or background as is often the case in a landscape, make sure the tonal contrast of the focal point is so strong against the background that the brain of the viewer, triggered by the contrast, will pull their eyes over everything less important in the photo to reach it, then, once there, stay focused on it because there's nothing of interest they haven't already seen. A photo with sky at the top of the frame will create strong contrast which can literally pull the eye path up and out of the top of the photo. Strong vertical lines - like people posed against a tree or wall at the side of the frame - do the same. To counter that tendency I don't make my portrait subjects hold up trees or buildings, and when there is a distracting sky in a photo I add a dark mat around it which, like the cushion on a billiards table, stops and bounces the eye back down into the photo when it scans the sky.

Photographs on white backgrounds, especially portraits, work better in color than B&W and with centered or broad lighting rather than short lighting. The reason? Contrast. Highlighted Caucasian skin is very close in tone to the white background, making the highlighted far side of an oblique view of a face disappear into it. But when an oblique view of a face is broad lit, the far side profiled against the background winds up darker than the near side of the face. That hides the wider near side of the head (perceptually) and makes it easier to see the front of the face and the shape of the face on the far side. Centered lighting from above the head with a full-face view frames the front of the face with darker shadows, making it appear less wide than a flat-lit view.

Those perceptual effects are more pronounced in B&W than in color, because in color there is another form of contrast attracting the eye: color. In a color portrait on white the things that will contrast the most are: 1) dark / colorful clothing (tone/color/size contrast); 2) dark hair (tonal contrast); 3) skin tone / light hair (color contrast). Guiding the viewer to the face and holding attention there on a white background is easy when the person is wearing white and the face is evenly lit with a low-contrast pattern, which pulls attention from the lighter background to the color contrast of the face and the tonal contrast of the eyes and mouth, which will automatically become the perceptual focal point.

What does all that perceptual mumbo-jumbo have to do with HDR? All the things I discuss above explain how to trick the brain of the viewer in ways that make shadow detail and HDR less important. Yes, a full tonal range is preferable in nearly every photo, but the practical reality of digital cameras is that the range of the sensors - recall the bucket analogy - can't yet deliver detail over the entire tonal scale of a cross-lit outdoor scene.

When you do use multiple-exposure HDR, an understanding of perception in person vs. a 2D rendering will help prevent your HDR shots from winding up as bland looking as a flat-lit scene on a foggy day, with the viewer struggling to find a contrasting focal point - the problem I see in a lot of HDR shots in which the photographer is more focused on the technique than on effective delivery of the message.

I prefer to blend manually from two exposures that cover the range of the scene because I usually don't want all the dark areas raised equally. That makes a photo a "sea of sameness". Instead I blend the tone and detail in a gradient, from dark at the edges to lighter around the focal point, to cue the brain of the viewer of that 2D rendering where to go, in a way similar to how their narrow-DOF vision and attraction to contrast would cause their brain to filter out and forget the less important stuff as they "tunnel in" perceptually on the areas which contrast with the background and attract their eyes.
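
Here is a sketch of that gradient idea, again assuming NumPy arrays; the focal-point coordinates and radius are hypothetical numbers you would pick per image.

```python
# Sketch of the gradient described above: instead of lifting all shadows
# equally, scale the shadow blend by a radial falloff centered on the focal
# point, so the edges stay darker and the area around the subject opens up.
# Focal-point location and radius are illustrative assumptions.
import numpy as np

def radial_weight(height, width, center_yx, radius):
    """1.0 at the focal point, falling smoothly to 0.0 at `radius` and beyond."""
    ys, xs = np.mgrid[0:height, 0:width]
    dist = np.hypot(ys - center_yx[0], xs - center_yx[1])
    return np.clip(1.0 - dist / radius, 0.0, 1.0)

# Combine with a shadow/luminosity mask (e.g. from the earlier blend sketch):
# final_mask = shadow_mask * radial_weight(h, w, (520, 880), 600)
# so the lighter exposure shows through mainly in shadows near the subject.
```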

I've been doing that for 40 years. First with the zone system in B&W, then working at National Geographic where I made full-range tri-tone reproductions from an original Edward Curtis portfolio. Later, managing USIA's printing plant, I had the pleasure of working with a dozen original Adams prints and making full-range double-impression (on press) duotones which reproduced the tonality and detail of the originals to the extent the halftone process could. To do that I used a process identical to HDR, with one plate exposed for the highlights and another for the shadow detail, the two blended together with the gradient of the screening process.

LilleyMotor

It is contrast gradients of various types which provide clues to the viewer's brain about where to send their eyes in a photo. Making the overall tonal range seem real requires creating a perceptual benchmark and frame of reference by retaining some solid black voids where the viewer would logically expect them, to anchor the shadow perception, and retaining the contrast between a specular reflection and the solid surface of a white object - both at the same time. That's easy to do indoors with flash: use fill to lift the shadows to where the camera can record detail, then overlap a key light and raise its power until the specular highlights on a white object like the towel just clip (255 eyedropper values) while the solid white surfaces don't (250 values). Flash can also be used that way outdoors by composing the main center of interest in the foreground within its range and matching the ambient light. But for landscapes the limited range of the sensor requires either two blended exposures, or some clever manipulation of the viewer's perception by making the 3/4 tones lighter and creating the illusion of more detail.

Today, with 14-bit capture and longer-range sensors, I find that in most cases with a single RAW exposure from my 50D and the controls in ACR I can lighten the recorded shadow detail in the 3/4 tones to the point it looks PERCEPTUALLY like there is more detail, as if I had blended two exposures. Not the same technically, but good enough to fool the brain of the viewer and much less work.
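
As an illustration of that kind of shadow lift, here is a simple curve with assumed numbers; it is not ACR's actual algorithm, just the general idea of raising the 3/4 tones while leaving midtones and highlights alone.

```python
# A simple shadow-only lift: raise the 3/4 tones while leaving midtones and
# highlights nearly untouched. The pivot and strength values are illustrative
# assumptions; ACR's real shadow controls are more sophisticated.
import numpy as np

def lift_shadows(image_8bit, strength=0.25, pivot=0.35):
    """Lift values below `pivot` (0-1 scale), feathered so there is no visible break."""
    x = image_8bit.astype(np.float64) / 255.0
    weight = np.clip((pivot - x) / pivot, 0.0, 1.0)  # 1.0 in deep shadows, 0.0 at the pivot
    lifted = x + strength * weight * (1.0 - x)       # never pushes a tone past white
    return np.clip(lifted * 255.0 + 0.5, 0, 255).astype(np.uint8)
```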

All facsimile reproduction is an optical illusion with the goal of making a viewer suspend the reality around them, focus on what is in the frame, and be tricked by lighting-created 2D contrast patterns into thinking the content is 3D. Creating an interesting visual narrative starts with a good "punchline", but the effectiveness in delivering it rests on the clues provided to help the viewer find it, how long they dwell on it, and whether or not they are distracted somewhere else and forget it. Those goals won't be met effectively by creating a sea of tonal sameness, which is what can happen in an over-the-top HDR shot, so use it judiciously with the broader goal of delivering your message and creating an emotional reaction in mind. Technique that draws attention to itself rather than to content and delivery becomes a gimmick, and gimmicks come and go because once seen they lose their novelty. Over the long haul, work like Adams' form of HDR with negative and print is effective because you aren't even aware it was used or that the photographic process had any technical limitations which needed to be overcome.

Holistic Concepts for Lighting
and Digital Photography

This tutorial is copyright © Charles E. Gardner.
It may be reproduced for personal use, and referenced by link, but please do not copy and post it to your site.

You can contact me at: Chuck Gardner

For other tutorials see the Tutorial Table of Contents