DSLRs are stills cameras that happen to have video capability. Most DSLRs therefore lack any special features that make shooting video as easy as shooting stills. There are workarounds, but your camera manual will not tell you what they are because, like the DSLR itself, the manual is mostly concerned with stills shooting.
This blog post explains how to work around these issues with minimal additional kit: a variable ND fader (required) and a low contrast filter (optional, but recommended if you will be performing heavy post-processing).
When shooting stills, you have a lot of control over how you set exposure. You can vary shutter, aperture or ISO. If you were to take a series of photographs of (say) a bride and groom leaving a church, you or your camera would maintain correct exposure by varying these three values as the couple moved from the darker interior and out to the bright sunlight. None of this will work for video:
- You cannot easily change aperture or ISO midway through a take without it being obvious (i.e. it will look awful), so you are stuck with the values at the start of the take even though the lighting conditions may change midway through.
- You typically set the aperture fairly low in video (around f2-f4, with f3.5 being a good default), so your ability to control exposure with it is limited. As in stills, aperture is more of a stylistic control (used to set depth of field and sharpness) than an exposure control in any case.
- Although some cameras do change shutter to maintain exposure when shooting video on auto, this is never done in professional production. Too fast a shutter causes less smooth video and strobing. Instead, shutter is fixed relative to the frame rate: as a rule of thumb, you set the shutter denominator to twice the frame rate. So if you are shooting 24fps video, you set the shutter to 1/50s (the nearest available speed to 1/48s). If you want to shoot something moving fast, you do not increase shutter as you do in stills. Instead, you have to increase the fps, and then increase the shutter to match.
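The rule of thumb above is simple arithmetic. As a minimal sketch (the function name and the shutter-speed table are my own illustration, not any camera API):

```python
def filmic_shutter(fps: int) -> str:
    """Return the practical shutter speed for a frame rate, per the
    'shutter = twice the frame rate' rule of thumb."""
    denom = fps * 2  # e.g. 24fps -> 1/48s in theory
    # Cameras only offer standard shutter steps, so round to the nearest one.
    standard = [30, 50, 60, 100, 125, 250, 500]
    nearest = min(standard, key=lambda s: abs(s - denom))
    return f"1/{nearest}s"

for fps in (24, 25, 30, 50, 60):
    print(fps, "fps ->", filmic_shutter(fps))
```

This reproduces the pairing from the text: 24fps lands on 1/50s (the closest available speed to 1/48s), and faster frame rates pull the shutter up with them.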
So here’s the problem: If you are coming to video with a stills mentality there is no way to control exposure!
In video you have to control exposure via a variable Neutral Density (ND) fader.
So, the variable ND fader keeps you happy for a while. Your video starts to look smoother and less like it was taken with an iPhone. But then you realise that a lot of the cool stuff in professional films occurs in post-production, and you try your hand at that.
An example of macro blocking. The splotches on the black container and some of the mushiness in the background foliage are all signs of macro blocking. These would both become much more prominent if you sharpened and/or brightened the footage.
But weird stuff starts happening. If you try to change exposure too much you start to see either blockiness in what used to be shadows (‘macro blocking’) or color banding if you try to give your scene more punch. The above image is a good example of poorly shot footage: we have allowed the blacks to macro-block because of the underexposed dark tones.
As a stills shooter, you will know that if you want to do any significant post-processing, you have to shoot RAW. The option to shoot RAW video is generally not available on current DSLRs; instead you use a compressed format rather like JPEG, in that it looks good unless you try to edit it too much, whereupon it will break up and start to show quality issues (banding, compression artifacts, crushed shadows and blown highlights).
In video you have to shoot flat if you want to post process your footage.
‘Flat’ means shooting such that your Luma (brightness or ‘Y’) and Chroma (color or ‘C’) values are near the center of the available range so you end up with de-saturated, low contrast footage. All your YC values are well away from extreme values that would cause clipping so you can now push the values much further in post-production (recoloring, changing lighting digitally, changing exposure digitally).
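One way to make "flat" concrete is to measure it. The sketch below (my own illustration, not from any editing package) summarises an 8-bit luma frame by its percentile spread and by how many pixels sit near the clipping extremes; flat footage has a small spread and nothing pinned to either end:

```python
import numpy as np

def flatness_report(luma: np.ndarray) -> dict:
    """Summarise how 'flat' an 8-bit luma frame is: percentile spread plus
    the fraction of pixels at risk of clipping at either extreme."""
    lo, hi = np.percentile(luma, [2, 98])
    return {
        "spread": float(hi - lo),                  # smaller spread = flatter
        "near_black": float((luma < 16).mean()),   # shadow headroom already gone
        "near_white": float((luma > 239).mean()),  # highlight headroom already gone
    }

# A full-range gradient vs the same gradient squeezed toward mid-grey.
contrasty = np.tile(np.arange(256, dtype=float), (16, 1))
flat = contrasty * 0.5 + 64  # same detail, centred on the mid-tones

print(flatness_report(flat)["spread"] < flatness_report(contrasty)["spread"])  # True
print(flatness_report(flat)["near_black"], flatness_report(flat)["near_white"])  # 0.0 0.0
```

The squeezed version carries exactly the same detail, but every value is well away from 0 and 255, which is precisely the property that lets you push it around in post without clipping.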
Shooting flat seems like an easy step: you just set an in-camera video style with low contrast, low saturation and low sharpness values. Many DSLRs don't let you get particularly flat footage using the available digital controls though (and especially not the Panasonic GH2/GH3, which are otherwise excellent video DSLRs), so you may have to do it optically via a Low Contrast filter.
It is worth noting that if you want to create footage that you can post process heavily, you may use both a variable ND fader and a low contrast filter together. This raises light loss and optical aberration issues caused by the additional glass in your light path. However, most color-editing video software assumes you are using flat footage, and if you are not, you may have bigger problems.
For example, most plugins and applications come with lots of preset ‘film looks’, which sounds great until you try them with non-flattened footage: the preset result becomes too extreme to be usable, and if you mix them down, the effect becomes negligible. Not good!
In the next section I will show how both the variable ND fader and Low Contrast filter can be used to create well exposed and flatter footage. I am using a Panasonic GH2, but also retested using a Sony Alpha A77 to confirm the workflow on both cameras.
I will be shooting the same footage with and without the two filters. To make sure the footage is identical, I am moving the camera on a motorised slider (a Digislider).
I am shooting foliage because panning along foliage is actually a very good test of video: it generates massive amounts of data, as well as producing lots of varying highlight/dark areas. That, and the fact that the garden is where most enthusiast photographers test out most things (hence howgreenisyourgarden).
I am using a Panasonic GH2 with the ‘Flowmotion’ hack that allows me to shoot high bitrate video (100Mbit/s AVCHD). I set the GH2 to shoot 1080p 24fps video, which is as close as we can get to motion-picture-like film on the camera. To start the flattening process, I set the GH2 picture style (which the camera calls a ‘Film style’ when applied to video) as contrast -2, sharpness -2, saturation -2 and noise reduction 0. On a Sony Alpha, you would do the same, but set the sharpness to as low as it goes (-3) and forget the noise reduction (that option is not available as in their wisdom, Sony have simply limited the maximum video ISO!).
Low Contrast filter
A Low contrast filter is optional. You will not see any reason to use it when you start with video, but the need for one becomes more apparent if you get heavily into post-processing.
Using my setup, I took two identical pans, moving left to right along the hedge.
Top – a frame from the ‘without filter’ video. Bottom – a frame from the ‘with Low contrast filter’ video
The footage without the contrast filter (top) initially looks better, but it is more susceptible to issues if you try to change it. This becomes more apparent when we look at the underlying data as graphs.
The YC graph (as seen in most video editing software) has the image width as its x-axis and plots luma (cyan, or light blue-green) and chroma (blue) on the y-axis. Think of it as a much more useful version of the standard camera histogram. It's more useful because (a) it shows brightness and color separately but at the same time, in relation to each other, and (b) it is plotted against the image width, so you can tell where along the width of the image you have shadow clipping or highlight burn; with a histogram, you only know you have clipping/burn, not where in the image it is.
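The data behind a YC graph can be sketched in a few lines. This is a simplified illustration of my own (using Rec.709 luma weights and a crude chroma measure), not the exact maths any particular editor uses:

```python
import numpy as np

def yc_waveform(rgb: np.ndarray):
    """Per-column luma/chroma envelopes: roughly the data a YC graph plots.
    rgb is a float array of shape (height, width, 3) with values in 0..1."""
    # Rec.709 luma: perceptual brightness of each pixel.
    y = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    # Crude chroma magnitude: how far each pixel's colour sits from neutral grey.
    c = np.sqrt(((rgb - y[..., None]) ** 2).sum(axis=-1))
    # Collapse each column, so the x-axis becomes image width, as in the graph.
    return y.min(axis=0), y.max(axis=0), c.max(axis=0)

# A frame with a pure-white stripe in column 1: the luma envelope hits 1.0
# exactly at that x position, which is how the graph localises highlight burn.
frame = np.zeros((4, 3, 3))
frame[:, 1, :] = 1.0
ymin, ymax, cmax = yc_waveform(frame)
print(np.round(ymax, 3))  # [0. 1. 0.]
```

Because each column of the image maps to one x position on the trace, a burnt highlight shows up as the luma envelope touching the top of the scale exactly where the burn is in the frame, which is the advantage over a histogram described above.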
Left – ‘without filter’ footage. Right – ‘with low contrast filter’ footage
As you can see, the low contrast filter lifts the low part of the data. This corresponds to brightening shadows. It is important to realise that this is not giving you more information in the shadows: the filter is merely creating diffused local light and adding it to the shadows to give them a bit of lift. You will see many people on the internet telling you this means a low contrast filter is useless. This is not the case, because:
- One of the ways most video codecs optimise file size is by removing data in areas where we cannot see detail anyway. This can occur in several places, but one place it always occurs is in near-black shadows. By lifting (brightening) such shadows optically before the camera sensor sees them, we force the codec to encode more information in our shadows, thus giving us a more even amount of data across the lower tonal range.
- We often have to increase exposure in post-production. If we did this without the shadow lift that low contrast filters give us, the shadows would be encoded with very low data levels assigned to them. When we expose them up, we see this lack of data as macro-blocking (shadows become blocky and banded). This is especially true of AVCHD, and if your camera creates AVCHD, you need to be particularly careful when editing shadow areas. Further, on standard 24-28Mbit/s (unhacked) cameras, some busy scenes may even show macro blocking in the shadows before you edit. Without a Low Contrast filter in place, such footage may have to be retaken.
- With a contrast filter, shadows are often brighter than we need so we typically have to darken them. Doing this never causes macro-blocking, and because macro-blocking almost always occurs in shadows, it is eliminated in our edits.
There is a way of lifting shadows without using the filter: simply expose your footage a little to the right (about 1/3rd of a stop), and then underexpose by the same amount in post. That is fine, but it makes it more likely you will burn highlights. With the low contrast filter you have a better time, because you can now underexpose to protect highlights, knowing that you will not clip the shadows (the darks will be lifted by the low contrast filter), so you end up with flatter lows and flatter highs.
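In linear-light terms, the expose-right trick above is simple doubling arithmetic. This sketch is my own illustration and deliberately ignores the camera's gamma curve:

```python
# A stop is a doubling of light, so +1/3 stop multiplies linear values by
# 2**(1/3), roughly 1.26x. (Illustration only; real footage is gamma-encoded,
# and in-camera the lift happens before that encoding.)
third_stop = 2 ** (1 / 3)

shadow = 10                       # a deep shadow value on a linear 8-bit scale
captured = shadow * third_stop    # recorded ~26% brighter, so the codec
                                  # assigns it more data at encode time
restored = captured / third_stop  # pulled back down in post: brightness is
                                  # restored, the extra encoded detail remains
print(round(captured, 1), round(restored, 1))  # 12.6 10.0
```

The point is that the lift and the pull-down cancel exactly in brightness terms, but the codec made its quantisation decisions on the lifted values, which is where the benefit comes from (at the cost of 1/3 stop less highlight headroom, as noted above).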
There is one final issue to consider with Low contrast filters: low colors.
The RGB parade is another common video-editing graph, and it shows the color values (0-100%) across the frame for the three color components.
Left – ‘without filter’ footage. Right – ‘with low contrast filter’ footage
Notice that when we use the low contrast filter, the low color values are also lifted upwards towards the average. This means we are less likely to shoot footage with clipped color in the lowlights, and thus we can push our dark colors further in post-production. This is especially important when we look at skintones: we usually want to brighten skin overall (to make it the focus of attention) and that means exposing up skin that is in shadow. We are particularly sensitive to odd looking skin, so it is important that we don’t clip any skin color because we will almost certainly need to expose it up and clipped skin will then look odd.
So, the low contrast filter protects our shadows by optically adding some mid tone light (‘gamma’) to it. The filter also raises our darker color values towards the average. Both these changes reduce the chance of clipping and color change in post-production.
In the image below, the left side of the frame was shot with the low contrast filter, and then post-processed (using RGB curves and the Colorista 2 plugin, both via Adobe Premiere CC). The right half of the image is as-shot without the filter or any processing.
Edited footage shot with Low contrast filter (left side of image) vs normal footage (right side)
Notice that the left half of the image seems to have the healthiest leaves, and is therefore the nicest to look at. This of course was done via color correction: the low greens were significantly lifted (notice also that lifting the dark colors gives a much more subtle and realistic-looking edit than simply saturating the mid-greens). Further, notice that the darks on the left side never drop to true black; they are always slightly off-black and blend in better with the mid-tones, whereas the dark areas on the right side look like true black and consequently contain no detail (and very little information). Although the left side is actually the more processed of the two, it looks the more natural because of the more believable color variation and contrast between the lows and mids, and (somewhat counter-intuitively) this is down to using an optical Low Contrast filter on the camera.
It is worth noting that if we tried to bring up the blacks on the right-hand side so they looked as natural as those on the left, we would not be able to: there is not enough information in the shadows. As soon as we exposed up the shadows, we would start to see macro-blocking and banding, because the footage doesn't contain enough data in the darks for us to change them significantly. We would instead have to enhance the leaves using only gamma (the mid-colors), which tends to look more processed and artificial.
Another very important thing to see here is how subtle the color correction becomes if you are careful not to clip your blacks. The two halves of the image have not been blurred together at the join in any way: there is no alpha transition, so there is an immediate cutoff between the color-corrected (left) and uncorrected (right) versions. Look at any leaf on the center line: half of it is corrected and half is not. Can you see the join? No, me neither: it just looks like half the leaf is an anemic off-green and half is healthy. The reason professional color correctors fix the lows first is that this is where all the color is: shadows contain lots of hidden color. By increasing vibrancy and saturation in the low colors (assuming you have not clipped your shadows), your color editing becomes much more subtle than if you edit the mids, and your corrections become much more natural and believable. Further, if you need to enhance only one set of colors, editing only the lows makes your changes more realistic, and you often no longer even need to use masking.
Although as a photographer, you are taught to protect the highs from clipping (because that kills your high tones), in video, your shadows are often more important because they contain the low color. It may be hidden, but it is this color information that you will be boosting or diminishing in post to create atmosphere in your footage. This is the main reason that you use a low contrast filter in video: not to protect tone, but to protect color.
Poor shadow encoding is a common default failing of all DSLR footage (unless you are shooting RAW with a 5D, or using the BlackMagic/Red, but then you have a much more non-trivial post production process: see here if you want to see why at least one videographer actually prefers an AVCHD camera to a Red Scarlet… the resulting flame wars are also quite entertaining to read in the comments!).
There is a final advantage in using a Low contrast filter: it reduces the bandwidth requirement slightly. Using Bitrate Viewer on my footage, I find that the bandwidth is about 5% lower with the filter attached (probably because the footage is less contrasty and has smaller color transitions). It's not much of a difference, but it may matter if you are using a Sony A77 or another camera limited to the default AVCHD bitrates (24 to 28Mbit/s), especially since the filter also eliminates the shadow macro blocking that can occasionally appear before you even edit when shooting difficult lighting or busy scenes at those bitrates.
My Low contrast filter also sees some use in stills. Because it spreads out highlights, I see less highlight clipping and a more rounded roll-off. (NB: the photographer in me thinks this is a very cool thing about this filter, but the videographer in me says 'nah, it's the richness of color in your darks that makes this filter great!')
Left – Raw image imported into Lightroom. Notice the red dot in the middle of the candle flame, signifying highlight clipping. Middle – The same shot with the Low Contrast filter attached. Note that the clipping is much less pronounced and has a more subtle rolloff. Also note that the background is brighter because the filter is pushing some of the light on the background highlight (top left) into the ambient gamma. Right – The middle photo after post processing in Lightroom. Because of the better highlight rolloff, the light from the candle has more presence. If we wanted to recolor the flame (to, say, yellow), we could, because the highlight has not been clipped, and contains lots of data to work with.
It is also useful if you are shooting into the sun. The downside is that you usually have to significantly post-process any stills taken with a Low contrast filter to get your tones back: it's one to use only in tricky tonal lighting conditions, when you want a low contrast look (it is actually often used in stills to give a retro 70s look), or when you know you will be post-processing color significantly (as is usual in video production).
In terms of buying a low contrast filter, there is only really one option: buy Tiffen as they are the only ones who do them well. I use the Low Contrast 3 filter. Some videographers use an Ultra Contrast filter, but I find the Low contrast filter to be better with highlights (it gives a film like rolloff to highlights because it diffuses bright lights slightly). You can see a discussion of the differences between Low Contrast and Ultra Contrast on the Tiffen website.
Update January 2014: Since writing this post, I have found out that Sony Alpha cameras apply DRO (dynamic range optimization) to video. DRO works electronically by lifting shadows. Although this process does not increase detail, it may prevent macro blocking, making it a good alternative to using an expensive low contrast filter. See this post for further details.
The low contrast filter reduces the final exposure by about 1/3rd of a stop, but that is because of the way the filter works rather than intentional. If we actually want to control exposure, we have to use the next filter…
Variable Neutral Density fader
For shooting video that looks like film rather than live TV, a variable ND fader is not optional: you have to use one, because it is the only way you can control exposure. A variable ND fader is rotated to vary the light passing through it, and this (rather than shutter or aperture) controls the exposure of the final shot. Without one, shooting at wide apertures and a shutter of around 1/50s (which you need for the DSLR film look) will leave you with overexposed footage.
All ‘DSLR film’ (footage shot with a DSLR that has a filmic look to it) is shot using a variable ND fader, and using one is a given. There is only one real issue to consider: which one to buy. There are a few good video reviews online to help with that.
For the price of 1 expensive Tiffen variable ND fader, I bought the following:
- 6 Fotga variable ND faders, one for each lens diameter I use for video on my Sony Alpha A77 and Panasonic GH2 (82, 77, 72, 55, 52 and 49mm)
- 1 Polaroid ND fader at 37mm for my Panasonic LX7 advanced compact.
The Fotga gives 2 to 9 stops of exposure control, but only the first 2-7 stops are usable (you start to see vignette and uneven filtering after that, which is fine by me for the price: $10-30, depending on size and where you buy them). Sharpness is not really an issue at video resolutions (but certainly is at stills resolutions: buy Tiffen if you want to use your Variable NDs for stills as well as video).
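The stop arithmetic behind those numbers is worth seeing once. This is a sketch of my own (the function names are illustrative): each stop of ND halves the light, which is why a bright scene metered at, say, 1/800s needs about 4 stops of ND to shoot at the fixed 1/50s:

```python
import math

def nd_transmission(stops: float) -> float:
    """Fraction of light an ND filter of the given strength passes."""
    return 2.0 ** -stops

def nd_stops_needed(metered_shutter: float, video_shutter: float) -> float:
    """ND stops needed to keep shooting at video_shutter when metered_shutter
    gave correct exposure at the same aperture and ISO."""
    return math.log2(video_shutter / metered_shutter)

print(nd_transmission(4))                   # 0.0625: 4 stops passes 1/16 of the light
print(round(nd_stops_needed(1/800, 1/50)))  # 4
```

This also shows why a 2-7 stop usable range covers most daylight situations: 7 stops is a 128x light reduction, enough to hold 1/50s at f3.5 in full sun.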
The most important thing to watch out for with variable ND faders is that you are getting optically flat glass, not ordinary glass. You can check this by looking through the fader whilst turning it left and right in your hand. If the view through the fader seems to wobble, the refraction is not constant, which means you do not have optically flat glass (and probably need to throw the fader away). You can also check for sharpness by videoing something that generates moiré. Moiré is caused by capturing detail at the edge of what your sensor can resolve. If the moiré changes significantly with the fader on, the fader is bringing down the resolving power of your sensor, and you again probably need to bin it. It's worth noting that moiré may occur in stills but not in video, depending on how your camera sensor is set up, so check video footage only.
In terms of actual use, here’s a quick before and after of shooting a scene with and without a variable ND fader. I’m shooting at the sky with an aperture and shutter dictated by my frame rate and requirement to capture ‘filmic’ footage (f3.5, 1/50s, 24fps, 1080p).
Comparison without ND fader (L) vs with (R)
I think the before and after speak for themselves, and we don’t need justification via graphs!
Using Variable ND fader and Low Contrast filter together
Using an ND fader and Low contrast filter together is often necessary where you have a very high contrast scene, typically when you have to shoot into the sun, or when you have deep shadow and highlights in the same scene. If you are shooting outside in full sun, this may occur so often that it is actually the norm rather than an edge case.
Consider the following scene.
Using an ND fader and Low contrast filter together. See text below for information.
The top image shows our situation with no fader. We have a wall in semi-shade with the sun shining over it. If we add a variable ND fader, we can reduce the exposure so that we now have no blown-out sky highlights, but that leaves us with foliage that is too dark (middle), so we have clipping in the shadows (and probably macro-blocking if we try to expose the foliage up in post-production). If we now add the Low contrast filter, some of the ambient light is added to the shadows, and we have brighter foliage (and darks that we can lift without causing macro-blocking) and no blown highlights. Note that we have raised the darks optically during the shot, so we don't get the digital noise associated with raising shadows in post-processing. This is a significant difference.
Although this is all very good, the filters add some issues of their own. In particular, because the low contrast filter lifts the blacks based on the local gamma, its effect will vary as the local gamma varies. Thus we have halation at the border between our sky and foliage, which may be difficult to get rid of because it may not be linear, and it gets worse at more extreme light-dark borders (such as the jump straight from sun to dark shadow that we have here). However, the final image is better than either of the other two in that it contains the most information for post-processing... although you will have to be fairly experienced to edit it back to normality, so you should not attempt this until you have some experience of color correction.
Finally, consider the tree example we looked at earlier for the Variable ND fader.
Example of how variable ND faders and Low Contrast filters can help when shooting into the sun. See text below for information.
Although we get a correctly exposed sky when we add just the ND fader, we also get black foliage with little information in it, something that will easily macro-block in post-production. If we now add the Low contrast filter, not only do we now lift the blacks (so we reduce the chances of macro-blocking in the shadows), we also make the shot look more like true film.
You will notice in the top frame that I have tried to hide the sun behind the tree because I know it will cause blown highlights. With the Low contrast filter, the sun will now have a much more analog-film like rolloff, and will actually hardly clip at all, so I can stop hiding it.
The bottom frame is therefore much easier to work with in post-production. This is true even though lifting the blacks in the tree outline adds no extra information, because lifting the blacks before video encoding in-camera drastically reduces the chances of macro-blocking at any point after capture.
When shooting video with a DSLR, there are two filters you can use to make life easier.
A variable Neutral Density fader is a must-have because it is the standard way you control exposure.
A Low contrast filter is something to consider if you will be post-processing heavily. It does bring problems of its own (potentially unwanted halation), but it gives you flatter video in the low tones and allows you to shoot slightly underexposed without losing shadow data (so it effectively lets you flatten both lows and highs). This makes your footage much easier to handle in any post-production process.
- This was supposed to be part three of my ‘Sony Alpha Video’ series, and was going to be titled ‘Sony Alpha Video: Part 3: Filters for video’, but I didn’t want to alienate non-Sony AVCHD camera users, to whom this post equally applies.
- In the main text I say ‘shadows contain lots of hidden color’. Many photographers know this already: if you want to boost a color in Lightroom/Photoshop but keep the edit realistic, you need to darken it slightly rather than increase its saturation. Although we often don’t see dark colors, we do respond to them, making them a very good thing to know about especially when you want to keep your edits non-obvious.