Dynamic Range Optimization and video

In a previous post, I wrote about low contrast (and ultra contrast) filters and their use in DSLR video to increase video quality. They do this by lifting low tones, which prevents the AVCHD encoder from causing macro blocking. What I did not realize at the time is that there is a built-in feature of Sony Alpha, Panasonic and other DSLRs that gives you pretty much the same thing for free: Dynamic Range Optimization.

Dynamic Range Optimization

Cameras have a more limited dynamic range than our eyes. If we photograph a bright sky looking into the sun (so there are shadows in our scene), then the camera can only expose for the sky or shadows, but not both, yet our eyes can see detail in both.

Most current cameras have a feature that attempts to emulate how our eyes see such a scene. They go under different names: iExposure, Active D-Lighting, Auto Lighting Optimizer, Shadow Adjustment Technology, and so on. Sony’s version is called Dynamic Range Optimization (DRO) and Panasonic’s is called Intelligent Dynamic (i.Dynamic).

Although some of these systems change exposure as part of their operation, most of them use a range compression algorithm that brightens shadows and adds texture to highlights to better approximate what the human eye would see.

Left: a photograph taken with DRO off. Right: the same photo, this time with DRO set to level 5 (maximum). Notice that both the shadows and highlights are brighter in the shot that uses DRO.

Although Sony are often cagey over what DRO is, it is simply Sony’s branding of their licensed use of Apical Iridix, as shown here. Other manufacturers use Iridix without trying to pretend it is proprietary (the name Iridix even appears on the box for my Olympus Stylus 1 as a feature!).  You can also read a technical interview with the Apical CEO here. Finally, you can see what Iridix actually does behind the scenes here.

Iridix is not a simple tone curve but a tone-mapping process. It works on a per-pixel basis, checking the brightness of each pixel against both its neighbors and the dynamic range of the overall photo. Iridix is actually very similar to the tone mapping used in HDR images, but with crucial differences:

  • Iridix is very fast computationally because it is implemented at a low level: either as dedicated on-chip signal processing or as part of the camera firmware.
  • Iridix is designed to keep the image looking realistic, so you don’t end up with typical HDR artifacts (halos, or images that start to look ‘painterly’).

It is important to realize that Iridix is not a simple tone curve, and you cannot replicate it completely by using a typical ‘S’ tone curve in Photoshop or Lightroom.
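
To make that distinction concrete, here is a toy Python sketch (assuming numpy and scipy are available) of the difference between a global curve and a per-pixel, neighborhood-aware lift. To be clear, this is not Apical’s actual algorithm, which is proprietary and far more sophisticated; it only illustrates why a local operator can open up shadows without flattening the rest of the image the way a global curve does.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def global_s_curve(img):
    """A typical global 'S' tone curve: every pixel gets the same mapping."""
    return np.clip(3 * img**2 - 2 * img**3, 0.0, 1.0)

def toy_local_tone_map(img, strength=0.6, radius=31):
    """Toy local tone mapper (NOT Iridix): the lift applied to each pixel
    depends on the brightness of its neighborhood, so dark regions are
    opened up while already-bright regions are left largely alone."""
    local_mean = uniform_filter(img, size=radius)  # neighborhood brightness
    gain = 1.0 + strength * (1.0 - local_mean)     # darker area -> bigger lift
    return np.clip(img * gain, 0.0, 1.0)

# 'img' is assumed to be a grayscale float image scaled to [0, 1].
```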

Disadvantages of Dynamic Range Optimization

There are two big disadvantages to Apical’s Iridix:

  • It is not applied to RAW. You have to be shooting JPEGs to be able to use it. For Sony Alpha, the camera does alter the RAW metadata so that the DRO settings are available to your RAW converter, but unless the application also licenses Iridix, the DRO settings will be ignored. Adobe don’t license it, so Photoshop/Lightroom ignore DRO, and DxO Optics also seems to ignore it. The Sony-specific RAW editing software (Sony Image Data Converter) does use the DRO settings, and it can be downloaded here.
  • As Iridix brightens shadows, it also increases noise visibility in the dark areas. If you are shooting at high ISO this can become problematic.

It’s also worth noting that Iridix does not increase the actual dynamic range. It is just a different way of rendering the image data, one that is closer to how our own eyes would perceive the same scene. In particular, Iridix does not add shadow detail; its effect is ‘perceptual’. It simply pulls the lows up so that the human eye perceives the existing detail better. But since AVCHD also uses a perceptual model to decide what to throw away when compressing video, Iridix and AVCHD actually work together well: DRO stops AVCHD throwing shadow detail away by making low tones perceptually more important.

Using DRO in video

Where Iridix really comes into its own is in video. Yep: DRO works in video! For Sony, you can see it by putting your camera in a video mode and then pressing the Fn button, selecting DRO/Auto HDR and setting it to Lvl1 through Lvl5 or AUTO. For the Panasonic GH2, press the MENU button and then go MOTION PICTURE ICON > Page 2 > I.DYNAMIC. Olympus possibly have the best implementation of all (you can select levels 1-5 for shadows and levels 1-5 for highlights separately), but since Olympus DSLR video doesn’t generally work in ASM modes, it isn’t much use for DSLR film, so I don’t consider it further.

The following example is from a Sony Alpha A77, so for the rest of the article, I refer to Iridix by its Sony name, DRO.

As noted in my previous video post, Tiffen low contrast and ultra contrast filters are useful with AVCHD video because they lift your blacks, and this prevents macro blocking. Good low contrast/ultra contrast filters are expensive though. That’s okay, because it turns out that DRO does exactly the same thing: it lifts the blacks! Better still, Iridix does this without losing you as much contrast as the Tiffen filters.

Top: video still with DRO set to Off. Bottom: the same video, this time shot with DRO level 5. Notice that the lows and highs are both brightened.

Again, it’s worth noting why DRO increases final video quality even though it does not increase the detail in your shadows. AVCHD encoding optimizes your video by removing data in the areas you will not notice. Its favorite place to do this is the low tones, on the basis that our eyes cannot see into shadow well. This means that as soon as you brighten your video, your shadows start to block up (‘macro block’) because the lack of detail becomes apparent.

By allowing DRO to brighten shadows before AVCHD video encoding, you reduce the encoder’s propensity to discard shadow information (because the brightened shadows are now treated as closer to mid-tones), and you therefore eliminate macro blocking. As macro blocking is the main bugbear of AVCHD (especially when you will be post-processing your video), using DRO is to be recommended.
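
You can see this logic in numbers with a crude model. In the sketch below, fake_encode is a stand-in for a perceptual encoder that quantizes shadows more coarsely than mid-tones; it is nothing like the real AVCHD quantizer, but it shows why tones lifted before encoding survive a later exposure boost with finer gradations.

```python
import numpy as np

def fake_encode(img):
    """Stand-in for a perceptual encoder: coarse quantization steps in the
    shadows, fine steps in the mid-tones. NOT the real AVCHD quantizer."""
    step = np.where(img < 0.25, 0.05, 0.01)
    return np.round(img / step) * step

shadows = np.linspace(0.0, 0.3, 8)           # a run of shadow tones

# 1) Encode as shot, then push up two stops in post:
pushed = fake_encode(shadows) * 4            # coarse steps become visible bands

# 2) Lift one stop first (a crude stand-in for DRO), then encode, then push:
lifted = fake_encode(np.clip(shadows * 2, 0, 1)) * 2

print(np.round(pushed, 2))  # repeated values = banding / macro blocking
print(np.round(lifted, 2))  # finer gradations survive the boost
```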

DRO is also useful even when you do not intend to post process your video. DRO models how our eyes see the scene, and this often makes the video look more natural.

The only downside I have encountered to using DRO in video is noise. As DRO brightens your shadows, it also makes noise more visible. At high ISOs the noise can be noticeable, especially because it only occurs in shadows, causing them to ‘shimmer’ relative to the mid and high tones. If you are above ISO 200, I would suggest turning DRO off or dropping it below level 5 (or putting it on AUTO, as this is a conservative setting).

Using DRO in RAW

You can’t use DRO in RAW (unless you use Sony Image Data Converter to apply it off-camera), but you can get pretty close optically with a low contrast filter.

Left: photo exposed for the center, as shot. Middle: exposed for the center with a Tiffen low contrast 3 filter, as shot. Right: the middle version edited in Lightroom.

The use of a low contrast filter in RAW lifts your blacks so that the camera assigns more information to them (all digital cameras assign more data to brighter areas of your photo to better represent how your eye sees: your eyes resolve more detail in bright areas and less in shadows). This lifting allows you to either expose up your dark areas without them banding, or to exposure-compensate by about -1/3 of a stop to protect your highlights (which you can do without clipping your shadows, because they have been lifted by the filter). Either way, you end up with your dynamic range pulled away from clipping, allowing you to push the file further in post. Adding a low contrast filter costs you the 1/3 stop of light lost to the filter and the slight distortion that extra glass in the light path usually causes… but if you are careful it will not cost you much in terms of noise. The image to the right actually has far less noise than you would see if you exposed the image on the left up to the same levels.
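
The ‘more data to brighter areas’ claim is easy to check with a little arithmetic. Assuming a simple gamma 1/2.2 encode into 8 bits (a rough stand-in for a real camera’s JPEG curve, which differs in the details), each successively darker stop receives fewer code values:

```python
# Count how many 8-bit code values each stop of exposure receives after
# a simple gamma 1/2.2 encode (a stand-in for a real camera JPEG curve).
for stop in range(8):
    hi = 2.0 ** -stop                # linear value at the top of this stop
    lo = hi / 2.0                    # one stop darker
    codes = round(255 * hi ** (1 / 2.2)) - round(255 * lo ** (1 / 2.2))
    print(f"stop {-stop:>2}: {codes:>3} code values")
```

The deepest stops get only a handful of code values each, which is exactly why shadow detail falls apart when pushed in post, and why lifting the blacks (optically with a filter, or perceptually with DRO) parks that detail somewhere safer before it is recorded.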

Using both DRO and a low contrast filter in video

So if DRO gives you better shadow encoding, and a low contrast filter gives you flatter footage that is more post-production friendly, using both together would be interesting. Here’s what you get:

Left: original photo, metered for center. Middle: same shot, this time with DRO level 5, otherwise as shot. Right: same as middle shot, this time with DRO and a low contrast 3 filter.

The effects of DRO and a low contrast filter used together are cumulative: you lift shadows by enabling DRO and lift them further by adding the filter. The costs, however, are also cumulative: you gain shadow noise by enabling DRO, and lose light (about 1/3 stop) by adding the filter. The advantage is clear though: the image on the right has much better looking light than the one on the left. Not only does the light look better, the brights have a more film-like roll-off, as there are no sharp digital transitions. Most importantly, in the image to the left the AVCHD encoder would have fun with the shadows, removing all information in your lows and preventing you from doing almost anything with them in post. The image to the right would cause the encoder to retain much more information in the lows, and this gives you more options in post.

If you are using a camera that creates very sharp video and/or video that cannot be shot flat, you should consider using a low contrast filter. I find this especially true of the Panasonic GHx cameras, as they otherwise create sharp video with lots of color/tone ‘baked in’. Without the filter, this ends up giving you footage that looks very ‘digital’ and can be difficult to work with in post (despite the higher/hacked bit-rates of the GHx series).

Update March 2014: see this post for a video example (shown below) of a low contrast filter and Apical Iridix being used together.

Shot with the Panasonic Lumix LX7, 28Mbit/s 50fps, conformed to 25fps with frame blending.

Conclusion

Macro blocking is the main issue with using AVCHD. It is caused when AVCHD removes shadow data during encoding, and you later try to expose up the footage. Using a high DRO setting can eliminate this because it brightens shadows so that AVCHD is forced to treat them more like mid-tones (and therefore hold on to the data). You can also do the same thing with a low contrast filter when DRO is not possible/not available. Using a low contrast filter also gives you a roll-off on highlights that is very reminiscent of film.

Notes

  1. All tests were performed with a Sony Alpha A77 using the 1.07 firmware.
  2. Altered the post March 8th 2014 to make it a bit less Sony specific (as all cameras I have seen have Iridix).

12 thoughts on “Dynamic Range Optimization and video”

  1. This was an interesting read, thank you! I will experiment with the DRO-in-video technique as suggested to improve shadow detail.
    Cheers!

  2. You write “For Sony, you can see it by putting your camera in A video mode and then pressing the Fn button, selecting DRO/Auto HDR and setting it to Lvl1 to Lvl5 or AUTO. You must be in Movie Mode other wise the setting will be applied to stills.” Am I missing something? When I try this in Movie Mode on the A77, it gets applied to stills as well. I don’t see any way to set, say, picture styles or DRO in one mode and not have it applied globally across all modes. Or is there a way of doing it of which I’m not aware?

  3. Hi James. I have been sat here for the last 10 minutes trying to replicate this and you are right: it’s the same setting for video and stills! I’ve changed the text; sorry for my mistake and the resulting confusion.
    Just looked at your landscape photography site, and was thinking ‘hey, his photographs are very cinematic already, there’s nothing he needs to learn from howgreenisyourgarden!’. Then I searched a bit more and it clicked 😉

    1. Thanks! Actually, the question about settings caused me to break out my “A77 For Dummies” book (for the first time ever after owning it for a couple of years 😉 ) where I found the should-have-been-obvious solution: save your preferred stills and movie settings each into one of the three memory blocks available for that purpose, then use the MR setting to recall whichever one you might need at the time.

      With that seemingly out of the way, and news that the A77 mkII is going to come with zebra display in the viewfinder for blown-out areas, I suddenly realized that there seems to be no way to display a histogram during video mode. If I were to set up the A77 as described at f3.5 and 1/50, and use a variable ND to control exposure, is there any way to determine whether I’m blowing out any highlights, other than to pray that such highlights would show up as obviously-pure white in the viewfinder display, while non-blown-out highlights would be a noticeable shade darker? Considering your advice to overexpose slightly while avoiding blown highlights, I’m wondering if there’s some sort of way I haven’t figured out yet on how to be sure none of my highlights are blown out, except for keeping my fingers crossed that the viewfinder accurately reflects what’s being recorded?

      1. Ah yes, the curse of the missing feature that we never use nor notice until a newer camera with said feature appears… and then the feature suddenly becomes important!

        Seriously though, I also never really noticed that the A77 does not show blown highlights in video live-view until I bought a GH2 (which has a histogram during video). Prior to that, I always set the ND filter in stills manual mode, then switched to video mode to record… and I *still* do that with the GH2.

        Which just all goes to show: it doesn’t really matter that the A77 doesn’t show zebra/histogram and the GH2/3/4 and A77 Mk 2 do, because once you start rolling, there’s nothing you can do about clipping anyway. You have to get the settings right before you start the shot, irrespective of the data you have on the live-view!

        So to answer your question, I don’t think the lack of clipping information even matters as there’s nothing you can do about it once you start rolling (and especially when neither of us have actually missed the lack of blinkies/zebra/histogram until we are forced to think about it!).

        Oh, and for what it’s worth, I don’t think the new A77 Mk2 will beat the cheaper hacked GH2 for on-board video, although we will have to see how well the Mk2 performs with the clean HDMI-out.

  4. Hi there,

    It’s true that the Sony implementation of DRO was originally licensed from Apical. The web sources you’ve listed confirm this, and more important, Sony confirmed it in writing in those early camera user manuals by specifically crediting Apical.

    However… any official mention of Apical technology licensing in Sony user manuals or literature disappeared in 2010 (the most recent manual containing that reference being the A390). This suggests that the current implementation MIGHT NOT have anything to do with Apical or with that early implementation. Maybe it’s the same, and maybe it isn’t. Maybe Sony got tired of paying Apical and came up with their own technology. And if that’s so, it would have to be significantly different from Apical’s design to avoid patent infringement.

    We cannot assume that Apical’s pre-2010 technology is still being used in recent and current cameras (including your A77, which was released in 2011) unless there has been some confirmation of this from Sony or from Apical. If you are aware of such a confirmation, I would appreciate seeing it.

    Thanks!

  5. Superb! Random question: on the LX7, if I’m recording AVCHD/PSH 60fps and want to render out a timeline at 24p, do you recommend a 1/50th shutter or the closest I can get to a 180-degree shutter (1/125) if I want to emulate cinematic motion blur? Thanks for all of your posts… I’ve learned an incredible amount, thank you!

  6. I shoot and edit in 50p/60p (so choose 1/100 or 1/125 shutter), then at final export, change the frame rate to 24fps. So I don’t conform to 24fps as soon as I import into Premiere or whatever, but I do it right at the end when I export the final completed video. For this to work you have to make sure you export with frame blending (there is a single checkbox for this on Premiere export). Also, don’t oversharpen, as this may mess up the blur.

    What you get from this is cleaner frames to edit with (so your selections will be better because the edges are cleaner) and more frames, so you can make finer edits. You also up the bitrate slightly, so when you downsample on frame rate, you blur out some of the AVCHD compression artefacts.

    Finally, you retain the ability to add effects (half-speed slow-mo being the most obvious, but the fact that you have cleaner frames lets you do much more in something like Twixtor).

    Having said all that, I actually conformed to 24fps straight away the first few times I edited so that I had a feel what 24fps should look like. I’d do that to start with, at least two or three times until you know what 24p ‘looks like’ and what 50p ‘looks like’. HTH.
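
    As an aside, if you ever want to conform outside your NLE, the basic pair-blend idea behind that Premiere checkbox can be sketched in a few lines of Python with OpenCV. This is a toy sketch of the general idea only (I don’t know Premiere’s internals, and the filenames here are hypothetical):

    ```python
    import cv2  # pip install opencv-python

    # Toy 50p -> 25p conform with frame blending: average each pair of
    # source frames into one output frame.
    cap = cv2.VideoCapture("clip_50p.mp4")              # hypothetical input
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter("clip_25p.mp4",
                          cv2.VideoWriter_fourcc(*"mp4v"), 25.0, (w, h))

    while True:
        ok_a, frame_a = cap.read()
        ok_b, frame_b = cap.read()
        if not (ok_a and ok_b):
            break
        out.write(cv2.addWeighted(frame_a, 0.5, frame_b, 0.5, 0))  # 50/50 blend

    cap.release()
    out.release()
    ```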

    1. I’m a huge gaming nerd and would like to believe I feel/see differences in frame rate decently. I use Vegas but have been thinking about switching over to Premiere, mainly because of the convenience of using After Effects with it. The first method I tried was downsampling 60fps by .400 to get 23.976, but I noticed minor ‘hiccup/skipping’ issues from dropped frames. Not sure Vegas is as refined with frame blending (will have to read more about it), but everything I have read says to avoid smart resampling / resampling within Vegas. Perhaps I can use Twixtor to interpolate/blend the frames to conform to 24p before the final render. Thank you for the wealth of information, sir!

  7. Yeah, Twixtor is magic! It’s also slow. Also try DaVinci Resolve, which has Twixtor-style frame blending as standard. You’ll have to repack your AVCHD using smartffmpeg (see ‘Avoid AVCHD compatibility problems’ in https://howgreenisyourgarden.wordpress.com/2013/11/16/sony-alpha-video-part-2-avchd/), but DaVinci Resolve is free and absolutely fantastic for color grading and correcting. I don’t use Premiere at all now: just Resolve with AVCHD for quick videos, and a Blackmagic Pocket for stuff where I want to spend some time creating a ‘look’, as that shoots raw video. I’d get used to video via AVCHD with something like the LX7 first though; the BMP is the least forgiving camera I have ever owned, and I am still struggling with it after 6 months!

    If nothing else, just use Resolve for your frame rate changes, as you get it for free and it does a decent frame interpolation rather than a blend (which is actually better than frame blending: it’s closer to what Twixtor does).
