During the summer, Blackmagic reduced the price of their Pocket Cinema Camera (BMPCC) by half.
This was a limited offer, so I ordered straight away. Remorse started sinking in when I read the reviews whilst waiting for delivery: it’s a difficult camera to work with (it is, but it’s also very rewarding), there’s a massive x2.88 crop (there is, but focal reducers are now very cheap and decent), and worst of all, it’s very difficult to grade the video.
Well, there are fixes for the first two, and as for the third, that’s a blatant lie: grading log footage is not difficult as long as you get your white balance right before you do any other grading or color correction. This point is crucial.
Assuming good white balance, all you have to do is push the saturation, add a LUT (look-up table), or even just use an auto color correction to get you in the right ballpark. Anyway, shown above is my very first attempt with the BMPCC. Notice the sky and ground are well exposed in every shot. That’s the power of high dynamic range plus a good codec (CinemaDNG or ProRes).
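If it helps to see that ‘ballpark’ step concretely, here is a minimal sketch of a first-pass grade, assuming an already white-balanced frame loaded as a float numpy array. The contrast and saturation numbers are illustrative guesses, not a calibrated LUT or anything my grading tools actually do.

```python
import numpy as np

def quick_grade(frame, contrast=1.3, saturation=1.4):
    """Very rough first-pass grade for flat/log footage that has already been
    white balanced. frame: float32 RGB array, values 0..1."""
    # Add contrast with a simple linear curve pivoted around middle grey
    graded = (frame - 0.5) * contrast + 0.5
    # Push saturation by scaling each pixel away from its own luma (Rec.709 weights)
    luma = (0.2126 * graded[..., 0] + 0.7152 * graded[..., 1]
            + 0.0722 * graded[..., 2])[..., None]
    graded = luma + (graded - luma) * saturation
    return np.clip(graded, 0.0, 1.0)
```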
There are loads of tutorials and reviews on adapting Canon or Nikon lenses to the BMPCC, but I could find nothing for Sony Alpha. Well, that’s about to change. Watch this space!
The video was shot handheld using only the Panasonic 14mm f2.5, wide open to get a de-sharpened, analog feel.
The source footage is ProResHQ (2.45GB for about 2.5 minutes of footage) and this was edited using only Premiere Pro with the Colorista 2 and Tiffen Dfx plugins. No LUTs or presets were used.
The slow motion effect was added in Premiere via Twixtor. The text motion graphics were created using the standard Premiere animation, masking and blur tools (After Effects was not used).
In a previous post, I discussed using a Tiffen Low Contrast filter when filming with an AVCHD enabled camera. I didn’t illustrate the point with any of my test footage.
Here it is.
Low contrast filters and video encoders
To recap, we use low contrast filters in AVCHD DSLR video because AVCHD compresses footage using a perceptual filter: what your eyes can’t perceive gets the chop in the quest for smaller file sizes. Our eyes cannot see into shadow, so AVCHD ignores (filters out and discards) most of the shadow data. AVCHD knows we can’t see the difference between small variations in color, so it removes such slight differences and replaces them with a single color.
That’s fine if you will not be editing the footage (because your eye will never see the difference), but if you do any post processing involving exposing up the footage, the missing information shows up through macro blocking or color banding. To fix this, we can do one of three things:
Use a low contrast filter. This works by taking ambient light and adding it to shadows, thus lifting the shadows up towards mid-tones and tricking AVCHD into leaving them alone. The low contrast filter thus gives us more information in shadow not because it adds more information in the shadows itself, but because it forces the AVCHD encoder to leave information that the encoder would otherwise discard.
Use Apical Iridix. This goes under different names (e.g. Dynamic Range Optimisation or DRO for Sony and i.Dynamic for Panasonic), but is available on most DSLRs and advanced compacts. It is a digital version of a low contrast filter (it’s actually a real-time tone mapping algorithm). It works by lightening blacks and preserving highlights. Although it again doesn’t add any new information of itself, Iridix is applied before the AVCHD encoder, so it can again force the encoder to leave shadow detail intact.
Use both a low contrast filter and Iridix together.
The video uses the third option.
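As a rough illustration of why the lift matters, here is a toy simulation of the problem. This is not the real AVCHD encoder – just coarse quantization standing in for its perceptual trimming of shadows – but it shows why shadows that were lifted before encoding survive a push in post, while untouched shadows band up.

```python
import numpy as np

def encode(frame, shadow_step=16, normal_step=4):
    """Toy 'codec': quantize 8-bit luma, with a much coarser step in deep shadow."""
    step = np.where(frame < 64, shadow_step, normal_step)
    return (frame // step) * step

ramp = np.arange(0, 256, dtype=np.float32)        # a smooth shadow-to-light gradient

# 1. Encode as-is, then brighten by a stop in post: coarse shadow steps become banding
banded = np.clip(encode(ramp) * 2.0, 0, 255)

# 2. Lift the shadows first (what the filter / Iridix does), then encode and push
lifted = np.clip(ramp + 48, 0, 255)               # optical/tonal lift before the encoder
clean = np.clip(encode(lifted) * 2.0, 0, 255)

print("distinct shadow levels without lift:", len(np.unique(banded[:64])))
print("distinct shadow levels with lift:   ", len(np.unique(clean[:64])))
```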
Deconstructing the video
The low contrast filter allows us to see detail… even though it is in full shadow
The video consists of three short scenes. They were taken with a Panasonic Lumix LX7, 28Mbit/s AVCHD at 1080p and 50fps (exported from Premiere as 25fps with frame blending), manual exposure, custom photo style (-1, -1, -1, 0). It was shot hand held and stabilized in post (Adobe Premiere warp stabilizer). The important settings were:
ISO set to ISO80 and
i.Dynamic set to HIGH.
I chose the lowest ISO so that I could set i.Dynamic high without it causing noise when it lifts the shadows.
The camera had a 37mm filter thread attached and a 37-52mm step-up ring, on which were attached a Tiffen low contrast filter and a variable neutral density filter. The reason I used two 52mm filters (i.e. bigger than the lens diameter) rather than two 37mm filters is that stacking filters can cause vignetting unless you step up as I have done.
Here are the three scenes. The left side is as-shot, the right side is after post processing. Click on the images to see larger versions.
Notice in this scene that the low contrast filter is keeping the blacks lifted. This prevents macro-blocking. Also note that the highlights have a film-like roll off. Again, this is caused by the low contrast filter. The Variable ND filter is also working hard in this scene: the little white disk in the top right is the sun, and it and the sky around it were too bright to look at!
Scene 2 is shot directly at the sun, and you would typically end up with a white sky and properly exposed rocks/tree, or a properly exposed sky and black rocks/tree. The low contrast filter and Iridix (i.Dynamic) give us enough properly exposed sky and shadow to enable us to fix it in post. Nevertheless, we are at the limits of what 28Mbit/s AVCHD can give us. The sky is beginning to macro block, and the branches are showing moiré. I shot it all this way to give us an extreme example, but would more normally shoot with the sun oblique to the camera rather than straight at it.
Scene 3 is a more typical shot. We are in full sun and there is a lot of shadow. The low contrast filter allows us to see detail in the far rock (top right) even though it is in full shadow. It also stops our blacks from clipping, which is important because near-black holds a lot of hidden color. For example, the large shift from straw to lush grass was not done by increasing green saturation in the mid-tones, but in the shadows. If you want to make large color changes, make them in the shadows, because making the same changes in the mid-tones looks far less natural (too vivid). Of course, if we didn’t use a low contrast filter to protect our blacks (and therefore the color they hold) from clipping, we would not have the option to raise shadow colors!
Shooting flat is something you should do if you will be post editing your video footage. Many cameras do not allow you to shoot flat enough, and to get around this, you can use either a Tiffen Low contrast filter, or the camera’s inbuilt Apical Iridix feature. To maximise the effect, you can use both, as illustrated in this example.
The main advantages of using a low contrast filter are:
Protects blacks from clipping, thus preventing shadows from macro-blocking and preserving dark color. The latter is important if you are going to make substantial color correction in post because raising shadow color usually results in much more natural edits.
Better highlight roll-off. The effect looks more like film than digital (digital sensors have a hard cut-off rather than a roll-off).
Lower contrast that looks like film. Although many people add lots of contrast (i.e. dark, blue blacks) to their footage, true film actually has very few true blacks. The low contrast filter gives this more realistic look.
Removes digital sharpness and ‘baked-in’ color. Many cameras cannot shoot as flat as we would like, and produce footage that is obviously digital because of its sharpness (especially true of the Panasonic GHx cameras). Adding a low contrast filter helps mitigate these issues.
The main disadvantages of using a low contrast filter/Apical Iridix are:
The filter loses you about 1/3 stop of light.
You usually have to use the low contrast filter along with a variable ND filter (which you need to control exposure). The two filters introduce optical defects beyond their intended function (possible vignetting from stacking filters, and some loss of sharpness). However, remember that you are shooting at a much lower resolution for video, so the sharpness effects will be much smaller than for stills. You can eliminate vignetting by using larger filters and a step-up ring.
Apical Iridix will increase shadow noise. Use it at maximum only with very low (typically base) ISO.
To answer a reader query about whether using a low contrast filter is a viable alternative to using log footage, I have added the following section below.
Comparison of Low contrast filter vs log footage.
The three pictures are frames from three movies shot with (top to bottom) rec709, rec709 plus low contrast 3 filter, and log.
All were shot on a camera capable of shooting rec709 and log (a BMPCC), using a high bitrate (ProRes HQ, which is about 200Mbit/s) so that there are no codec effects to confuse the issue. The lens was an SLR Magic 12mm, set wide open (T1.6, which is about f1.4), no ND.
The rec709 frame is what you would see on all DSLRs that do not support log.
Once you add the low contrast filter, you see the highlights spread out across the frame so that the darks (lows) and mids (gamma) are lifted, and because the highlights are spread out, they also dim slightly. The effect is, however, small, and depends on the light – if there are no bright highlights in your shot (i.e. no sky or bright ambient), the low contrast filter will not work.
Looking at the log frame, you see that all color is subdued, and this is always the case (it does not depend on you having highlights in the shot). There is hardly any green (or in fact, any color) at all. The image also loses a lot more contrast, and although you cannot see it, there is also no sharpening at all.
We can also have a look at this more objectively by taking a look at the YC waveforms:
The YC waveform shows you the brightness of the image left to right (why video enabled cameras don’t give it as an option over the photographer’s histogram I don’t know; the YC is much more useful because it shows you not only that your image is clipping, but where it is happening!).
You can see here that the filter has a slight effect in reducing contrast (the blacks are lifted slightly and the lights are dropped slightly), but the log footage has a much more pronounced effect. The height of the YC waveform represents contrast, and you can see immediately that log footage has much less contrast. Although this results in very subdued looking footage, it actually enables you to push your footage harder in post without it breaking up, and also gives you a much more neutral starting point for your post work, preventing your footage looking ‘similar to everyone else with the same camera’.
So, although a low contrast filter does reduce contrast by lifting the blacks and dropping the lights, it does not have as large an effect as log footage.
Log footage also gives you other things that make your footage more gradable in post, such as:
Reduces color saturation considerably, allowing you to boost color farther in post without it breaking up (banding).
Removes all color styling, allowing you more creative options to create your own styles to match your production.
Gives you access to better and more flexible third party look up tables (LUTs)
Removes all in-camera sharpening and other in-camera processing (and if you are shooting raw it also removes the effects of color temperature; color temperature is only saved as metadata and not as something that permanently affects your footage data). The lack of sharpening allows you to make better selections, and to add superior sharpening in post (a desktop generally has better sharpening algorithms than your camera – see the sketch after this list).
Allows you to learn proper grading, because you are no longer relying on a ‘nearly there’ rec709 output, but instead start with neutral log footage.
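The desktop sharpening mentioned above is usually some flavour of unsharp mask. Here is a minimal sketch of the idea, assuming scipy is available and the frame’s luma is a float array in 0..1; the radius and amount are arbitrary starting points, and real tools (Premiere’s Unsharp Mask effect, for instance) give you far more control.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(luma, radius=2.0, amount=0.8):
    """Basic desktop-style sharpening: add back the difference between the
    image and a blurred copy of itself. luma: float array, values 0..1."""
    blurred = gaussian_filter(luma, sigma=radius)
    return np.clip(luma + amount * (luma - blurred), 0.0, 1.0)
```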
I quickly edited the log footage towards a final look. Here’s where I got to after a couple of minutes with Premiere and Colorista II:
Here’s what I did:
Selected the background and reduced its clarity and made it colder (this is to visually separate it from the plant in the foreground and make the background less busy). Having unsharpened footage enabled me to quickly make the selection to do this (bear in mind that this is a selection on a moving image; sharpening gives you halos and all sorts of trouble as sharpened edges move!).
Boosted the greens and yellows of the plant, overall exposure, and highlights.
Sharpened the plant by increasing its clarity slightly.
Note that using log footage does increase your effective dynamic range but doesn’t stop you clipping; I did not use a variable ND filter in this test, and that caused clipping in all the footage! However, you can see that the dynamic range of the final edited image looks better than the rec709 footage you would get out of camera, and rec709 footage probably could not be pushed as far as the image above without causing banding on the deep greens and/or a nasty transition at the boosted leaf highlights.
In a previous post, I wrote about low contrast (and ultra-contrast) filters and their use in DSLR video to increase video quality. They do this by lifting low-tones, which prevents AVCHD encoding causing macro blocking. What I did not realize at the time is that there is a built in feature of Sony Alpha, Panasonic and other DSLRs that pretty much gives you the same thing for free: Dynamic Range Optimization.
Dynamic Range Optimization
Cameras have a more limited dynamic range than our eyes. If we photograph a bright sky looking into the sun (so there are shadows in our scene), then the camera can only expose for the sky or shadows, but not both, yet our eyes can see detail in both.
Most current cameras have a feature that attempts to emulate how our eyes see such a scene. They go under different names: iExposure, Active D-Lighting, Auto Lighting Optimizer, Shadow Adjustment Technology, and so on. Sony’s version is called Dynamic Range Optimization (DRO) and Panasonic’s is called Intelligent Dynamic (i.Dynamic).
Although some of these systems change exposure as part of their operation, most of them use a range compression algorithm that brightens shadows and adds texture to highlights to better approximate what the human eye would see.
Although Sony are often cagey over what DRO is, it is simply Sony’s branding of their licensed use of Apical Iridix, as shown here. Other manufacturers use Iridix without trying to pretend it is proprietary (the name Iridix even appears on the box for my Olympus Stylus 1 as a feature!). You can also read a technical interview with the Apical CEO here. Finally, you can see what Iridix actually does behind the scenes here.
Iridix is not a simple tone curve but tone mapping. It works on a per-pixel basis, checking the brightness of each pixel against both its neighbors and against the dynamic range within the overall photo. Iridix is actually very similar to the tone mapping used in HDR images, but has crucial differences:
Iridix is very fast computationally because it is implemented at a low level: either as dedicated on-chip signal processing or as part of the camera firmware.
Iridix is designed to keep the image looking realistic, so you don’t end up with any HDR effects (HDR halos and HDR that starts to look ‘painterly’).
It is important to realize that Iridix is not a simple tone curve, and you cannot replicate it completely by using a typical ‘S’ tone curve in Photoshop or Lightroom.
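To make the ‘per pixel, relative to its neighbors’ idea concrete, here is a very rough sketch of local tone mapping in that spirit. This is emphatically not Apical’s algorithm (which is proprietary); it only shows how a per-pixel lift driven by local brightness differs from a global S-curve. It assumes scipy and a luma array in 0..1.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_tone_map(luma, strength=0.6, radius=30):
    """Crude local tone mapping: pixels sitting in locally dark regions get lifted,
    pixels in already-bright regions are left largely alone."""
    local = gaussian_filter(luma, sigma=radius)   # estimate of each pixel's neighbourhood brightness
    gamma = 1.0 - strength * (1.0 - local)        # darker neighbourhood -> gamma below 1 -> stronger lift
    return np.clip(luma ** gamma, 0.0, 1.0)
```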
Disadvantages of Dynamic range optimization
There are two big disadvantages of Apical Iridix:
It is not applied to RAW. You have to be shooting JPEGs to be able to use it. For Sony Alpha, the camera does alter the RAW metadata so that the DRO settings are available to your RAW converter, but unless the application also licenses Iridix, the DRO settings will be ignored. Adobe don’t license it, so Photoshop/Lightroom ignore DRO, and DxO Optics also seems to ignore it. The Sony-specific RAW editing software (Sony Image Data Converter) does use the DRO settings, and it can be downloaded here.
As Iridix brightens shadows, it also increases noise visibility in the dark areas. If you are shooting at high ISO this can become problematic.
It’s also worth noting that Iridix does not increase the actual dynamic range. It’s just a different way of rendering the image data, but one that is closer to how our own eyes would perceive the same scene. In particular, using Iridix does not increase shadow detail; it is instead ‘perceptual’. Iridix simply pulls the lows up so that the human eye perceives the detail better. But since AVCHD also uses a perceptual system to decide what to throw away when optimising video, Iridix and AVCHD actually work together well: DRO stops AVCHD throwing shadow detail away by making low tones perceptually more important.
Using DRO in video
Where Iridix really comes into its own is in video. Yep: DRO works in video! For Sony, you can see it by putting your camera in A video mode and then pressing the Fn button, selecting DRO/Auto HDR and setting it to Lvl1 to Lvl5 or AUTO. For the Panasonic GH2, press the MENU button and then go MOTION PICTURE ICON > Page 2 > I.DYNAMIC. Olympus possibly have the best implementation of all (you can select level 1-5 for shadows and level 1-5 for highlights separately), but since Olympus DSLR video doesn’t generally work in ASM modes, it isn’t much use for DSLR film, so I don’t consider it further.
The following example is from a Sony Alpha A77, so for the rest of the article, I refer to Iridix by its Sony name, DRO.
As noted in my previous video post, a Tiffen low contrast or Tiffen ultra contrast filter is useful with AVCHD video because it lifts your blacks, and this prevents macro blocking. Good low contrast/ultra contrast filters are expensive, but that’s okay, because it turns out that DRO does exactly the same thing – it lifts the blacks! Better still, Iridix does this without losing you as much contrast as the Tiffen filters.
Again, it’s worth noting why DRO increases final video quality even though it does not increase the detail in your shadows. AVCHD encoding optimizes your video by removing data in the areas you will not notice. Its favorite place to do this is the low tones, on the basis that our eyes cannot see into shadow well. This means that as soon as you brighten your video, your shadows start to block up (‘macro block’) because the lack of detail starts to become apparent.
By allowing DRO to brighten shadows before AVCHD video encoding, you reduce the encoder’s propensity to reduce shadow information (because the brightened shadows are now taken to be closer to mid-tones) and you therefore eliminate macro blocking. As macro blocking is the main bugbear of using AVCHD (especially when you will be post processing your video), using DRO is to be recommended.
DRO is also useful even when you do not intend to post process your video. DRO models how our eyes see the scene, and this often makes the video look more natural.
The only downside I have encountered to using DRO in video is noise. As DRO brightens your shadows, it also makes noise more visible. At high ISOs the noise can be noticeable, especially because it only occurs in shadows, causing the shadows to seem to ‘shimmer’ relative to the mid and high tones. If you are above ISO 200, I would suggest turning DRO off or setting it lower than Lvl5 (or putting it on AUTO, as this is a conservative setting).
Using DRO in RAW
You can’t use DRO in RAW (unless you use Sony Image Data Converter to do it off-camera), but you can get pretty close optically with a low contrast filter.
The use of a low contrast filter in RAW lifts your blacks so that the camera records more information in them (digital cameras record raw data linearly, so far more code values are assigned to the brighter areas of your photo than to the shadows). This lifting allows you to either expose up your dark areas without them banding, or to apply about -1/3 stop of exposure compensation to protect your highlights (which you can do without clipping your shadows, because they have been lifted by the filter). Either way, you end up with your dynamic range pulled away from clipping, allowing you to push the file further in post. Adding a low contrast filter costs you the 1/3 stop light loss caused by the filter and the slight distortion that extra glass in the light path usually causes… but if you are careful it will not cost you much in terms of noise. The image to the right actually has far less noise than you would see if you exposed the image on the left up to the same levels.
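The ‘more data in the bright areas’ point is easy to check with back-of-envelope arithmetic. Assuming an idealised 12-bit linear raw file (a simplification – real sensors and raw formats vary), each stop down from clipping gets half the code values of the stop above it:

```python
levels = 4096                       # an idealised 12-bit linear raw file
for stop in range(1, 7):
    top = levels // (2 ** (stop - 1))
    bottom = levels // (2 ** stop)
    print(f"stop {stop} below clipping: {top - bottom} code values")
# stop 1: 2048 values ... stop 6: 64 values. The shadows get very few values,
# which is why lifting them optically (before capture) beats lifting them in post.
```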
Using both DRO and low contrast filter in video
So if DRO gives you better shadow encoding, and a low contrast filter gives you flatter footage that is more post production friendly, using both together would be interesting. Here’s what you get:
The effects of DRO and a low contrast filter used together are cumulative: you lift shadows by enabling DRO and lift them more by adding the filter. The downsides are also cumulative: you gain shadow noise by enabling DRO, and lose light (about 1/3 stop) by adding the filter. The advantage is clear though: the image on the right has much better looking light than the one on the left. Not only does the light look better, the brights have a more film-like roll-off as there are no sharp digital transitions. Most importantly, in the image to the left, the AVCHD encoder would have a field day with the shadows, totally removing the information in your lows and preventing you from being able to do almost anything in post. The image to the right would cause the encoder to retain much more information in the lows, and this gives you more options in post.
If you are using a camera that creates very sharp video and/or creates video that cannot be shot flat, you should consider using a low contrast filter. I find this especially true of the Panasonic GHx cameras, as they otherwise create sharp video with lots of color/tone ‘baked-in’. Without the filter, this ends up giving you footage that looks very ‘digital’ and can be difficult to work with in post (despite the higher/hacked bit-rates of the GHx series).
Update March 2014: see this post for a video example (shown below) of a low contrast filter and Apical Iridix being used together.
Shot with the Panasonic Lumix LX7, 28Mbit/s 50fps, conformed to 25fps with frame blending.
Macro blocking is the main issue with using AVCHD. It is caused when AVCHD removes shadow data during encoding, and you later try to expose up the footage. Using a high DRO setting can eliminate this because it brightens shadows so that AVCHD is forced to treat them more like mid-tones (and therefore hold on to the data). You can also do the same thing with a low contrast filter when DRO is not possible/not available. Using a low contrast filter also gives you a roll-off on highlights that is very reminiscent of film.
All tests were performed with a Sony Alpha A77 using the 1.07 firmware.
Altered the post March 8th 2014 to make it a bit less Sony specific (as all cameras I have seen have Iridix).
DSLRs are photographic devices that happen to have video capability. Most DSLRs therefore don’t have any special features that make shooting video as easy as shooting stills. There are workarounds, but your camera manual will not tell you what they are because, like your DSLR itself, the manual is mostly concerned with stills shooting.
This blog post explains how to get around the issues with the minimum additional kit: a variable ND fader (required) and a low contrast filter (optional but recommended if you will be performing heavy post processing).
If you are coming to video with a stills mentality there is no way to control exposure!
When shooting stills, you have a lot of control over how you set exposure. You can vary shutter, aperture or ISO. If you were to take a series of photographs of (say) a bride and groom leaving a church, you or your camera would maintain correct exposure by varying these three values as the couple moved from the darker interior and out to the bright sunlight. None of this will work for video:
You cannot easily change aperture or ISO midway through a take without it being obvious (i.e. it will look awful), so you are stuck with the values at the start of the take even though the lighting conditions may change midway through.
You typically set the aperture fairly low in video (around f2-f4, with f3.5 being a good default value), so your ability to control exposure with it is limited. Like in stills, aperture is more of a stylistic control (i.e. it is used to set depth of field and sharpness) rather than linked to exposure in any case.
Although some cameras do change shutter to maintain exposure when shooting video on auto, this is never done in professional production. Too high a shutter causes less smooth video and strobing. Instead, shutter is fixed to the frame rate. As a rule of thumb, you set the shutter to twice the frame rate. So if you are shooting 24fps video, you set the shutter to 1/50s. If you want to shoot something moving fast, you do not increase shutter as you do in stills. Instead, you have to increase fps, and then increase the shutter to match.
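For reference, the ‘twice the frame rate’ rule is just simple arithmetic – here is a trivial sketch. The returned value is the ideal shutter duration; in practice you pick the nearest speed your camera actually offers (hence 1/50s for 24/25fps).

```python
def ideal_shutter(fps):
    """180-degree shutter rule of thumb: shutter duration = 1 / (2 x frame rate)."""
    return 1.0 / (2 * fps)

for fps in (24, 25, 50, 60):
    print(f"{fps}fps -> ideal shutter 1/{2 * fps}s (pick the nearest speed the camera offers)")
```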
So here’s the problem: If you are coming to video with a stills mentality there is no way to control exposure!
In video you have to control exposure via a variable Neutral Density (ND) fader.
So, the variable ND fader keeps you happy for a while. Your video starts to look smoother and less like it was taken with an iPhone. But then you realise that a lot of the cool stuff in professional films occurs in post-production, and you try your hand at that.
But weird stuff starts happening. If you try to change exposure too much you start to see either blockiness in what used to be shadows (‘macro blocking’) or color banding if you try to give your scene more punch. The above image is a good example of poorly shot footage: we have allowed the blacks to macro-block because of the underexposed dark tones.
As a stills shooter, you will know that if you want to do any significant post-processing, you have to shoot RAW. The option to shoot RAW video is generally not available on current DSLRs; instead you use a compressed format that is rather like JPEG in that it looks good unless you try to edit it too much, whereupon it will break up and start to show quality issues (banding, compression artifacts, crushed shadows and blown highlights).
In video you have to shoot flat if you want to post process your footage.
‘Flat’ means shooting such that your Luma (brightness or ‘Y’) and Chroma (color or ‘C’) values are near the center of the available range so you end up with de-saturated, low contrast footage. All your YC values are well away from extreme values that would cause clipping so you can now push the values much further in post-production (recoloring, changing lighting digitally, changing exposure digitally).
Shooting flat seems like an easy step: you just set an in-camera video style with low contrast, low saturation and low sharpness values. Many DSLRs don’t allow you to get particularly flat footage using the available digital controls though (and especially not the Panasonic GH2/GH3, which are otherwise excellent video DSLRs), so you may have to do it optically via a Low Contrast filter.
It is worth noting that if you want to create footage that you can post process heavily, you may use both a variable ND fader and a low contrast filter together. This raises light loss and optical aberration issues caused by the additional glass in your light path. However, most color-editing video software assumes you are using flat footage, and if you are not, you may have bigger problems.
For example, most plugins and applications come with lots of preset ‘film looks’, which sounds great until you try them with non-flattened footage: the preset result becomes too extreme to be usable, and if you mix them down, the effect becomes negligible. Not good!
In the next section I will show how both the variable ND fader and Low Contrast filter can be used to create well exposed and flatter footage. I am using a Panasonic GH2, but also retested using a Sony Alpha A77 to confirm the workflow on both cameras.
I will be shooting the same footage with and without the two filters. To make sure the footage is identical, I am moving the camera on a motorised slider (a Digislider).
I am shooting foliage because panning along foliage is actually a very good test of video: it generates massive amounts of data, as well as producing lots of varying highlight/dark areas. That, and the fact that the garden is where most enthusiast photographers test out most things (hence howgreenisyourgarden).
I am using a Panasonic GH2 with the ‘Flowmotion’ hack that allows me to shoot high bitrate video (100Mbit/s AVCHD). I set the GH2 to shoot 1080p 24fps video, which is as close as we can get to motion-picture-like film on the camera. To start the flattening process, I set the GH2 picture style (which the camera calls a ‘Film style’ when applied to video) as contrast -2, sharpness -2, saturation -2 and noise reduction 0. On a Sony Alpha, you would do the same, but set the sharpness to as low as it goes (-3) and forget the noise reduction (that option is not available as in their wisdom, Sony have simply limited the maximum video ISO!).
Low Contrast filter
A Low contrast filter is optional. You will not see any reason to use it when you start out with video, but the need for it becomes more apparent if you get heavily into post-processing.
Using my setup, I took two identical pans, moving left to right along the hedge.
The footage without the contrast filter (top) initially looks better, but it is more susceptible to issues if you try to change it. This becomes more apparent when we look at the underlying data as graphs.
The YC graph (as seen in most video editing software) has the image width as its x-axis and plots luma (cyan or light blue-green) and chroma (blue) on the y-axis. Think of it as a much more useful version of the standard camera histogram. It’s more useful because (a) it shows brightness and color separately but at the same time in relation to each other, and (b) it is directly related to the image width, so you can tell where along the width of the image you have shadow clipping or highlight burn – with a histogram, you only know you have clipping/burn, but not where in the image.
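If you want to see how simple the idea behind the scope is, here is a minimal sketch that builds just the luma half of a waveform from an 8-bit RGB frame held in a numpy array. Real scopes also plot chroma and use broadcast-level scaling; this is only the concept.

```python
import numpy as np

def luma_waveform(frame):
    """Basic luma waveform: x-axis = image column, y-axis = luma level,
    value = how many pixels in that column sit at that level.
    frame: uint8 RGB array of shape (height, width, 3)."""
    luma = (0.2126 * frame[..., 0] + 0.7152 * frame[..., 1]
            + 0.0722 * frame[..., 2]).astype(np.uint8)
    h, w = luma.shape
    scope = np.zeros((256, w), dtype=np.uint32)
    for x in range(w):
        counts = np.bincount(luma[:, x], minlength=256)  # pixels at each luma level in this column
        scope[:, x] = counts[::-1]                       # flip so bright values sit at the top
    return scope
```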
As you can see, the low contrast filter lifts the low part of the data. This corresponds to brightening shadows. It is important to realise that this is not giving you more information in the shadows. The filter is merely creating diffused local light and then adding it to the shadows to give them a bit of lift. You will see many people on the internet telling you this means a low contrast filter is useless. This is not the case because:
One of the ways most video codecs optimise file size is by removing data in areas where we cannot see detail anyway. This can occur in several places, but one place it always occurs is in near-black shadows. By lifting (brightening) such shadows optically before the camera sensor sees them, we force the codec to encode more information in our shadows, thus giving us a more even amount of data across the lower tonal range.
We often have to increase exposure in post-production. If we did this without the shadow lift that low contrast filters give us, the shadows would be encoded with very low data levels assigned to them. When we expose them up, we see this lack of data as macro-blocking (shadows become blocky and banded). This is especially true of AVCHD, and if your camera creates AVCHD, then you need to be particularly careful about editing shadow areas. Further, on standard 24-28Mbit/s (unhacked) cameras, some busy scenes may even show macro blocking in the shadows before you edit. Without a Low Contrast filter in place, such footage may have to be retaken.
There is a way of lifting shadows without using the filter – simply expose your footage a little to the right (about 1/3 of a stop), and then pull it back down by the same amount in post. That is fine, but it makes it more likely you will burn highlights. If you use the low contrast filter you have a better time, because you can now underexpose to protect highlights, knowing that you will not clip the shadows (the darks will now be lifted by the low contrast filter), so you end up with flatter lows and flatter highs.
There is one final issue to consider with Low contrast filters: low colors.
The RGB parade is another common video-editing graph, and it shows the color values (0-100%) across the frame for the three color components.
Notice that when we use the low contrast filter, the low color values are also lifted upwards towards the average. This means we are less likely to shoot footage with clipped color in the lowlights, and thus we can push our dark colors further in post-production. This is especially important when we look at skintones: we usually want to brighten skin overall (to make it the focus of attention) and that means exposing up skin that is in shadow. We are particularly sensitive to odd looking skin, so it is important that we don’t clip any skin color because we will almost certainly need to expose it up and clipped skin will then look odd.
So, the low contrast filter protects our shadows by optically adding some mid tone light (‘gamma’) to it. The filter also raises our darker color values towards the average. Both these changes reduce the chance of clipping and color change in post-production.
In the image below, the left side of the frame was shot with the low contrast filter, and then post-processed (using RGB curves and the Colorista 2 plugin, both via Adobe Premiere CC). The right half of the image is as-shot without the filter or any processing.
Click the image to see a bigger version.
Notice that the left half of the image seems to have the healthiest leaves, and is therefore the nicest to look at. This of course was done via color correction – the low greens were significantly lifted upwards (notice also that lifting the dark colors gives a much more subtle and realistic looking edit than just saturating the mid-greens). Further, notice that the darks on the left side never drop down to true black – they are always slightly off-black and blend in better with the mid-tones, whereas the dark areas in the right hand side look like true black, and consequently contain no detail (and very little information). Although the left side is actually the more processed of the two, it looks the most natural because of the more believable color variation and contrast between the lows and mids, and (somewhat counter-intuitively) this is down to using an optical Low Contrast filter on the camera.
It is worth noting that if we tried to bring up the blacks in the right hand side so they looked as natural as the ones on the left, we would not be able to, because there is not enough information in the shadows to do this. As soon as we exposed up the shadows, we would start to see macro-blocking and banding because the footage doesn’t contain enough information in the darks for us to change them significantly, so we would have to enhance the leaves using only gamma (mid-colors), which tends to look more processed and artificial.
Another very important thing to see here is how subtle the color correction gets if you are careful not to clip your blacks. The two halves of the image have not been blurred together at the join in any way: there is no alpha transition. So there is an immediate cutoff between the color corrected (left) and non-corrected (right) versions. Look at any leaf on the center line: half of it is corrected and half is not. Can you see the join? No, me neither: it just looks like half the leaf is an anemic off-green and half is healthy. The reason professional color correctors fix the lows first is because that is where all the color is: shadows contain lots of hidden color. By increasing vibrancy and saturation in the low colors (assuming you have not clipped your shadows), your color editing becomes much more subtle than if you edit the mids, and your corrections become much more natural and believable. Further, if you need to enhance only one set of colors, editing only the lows makes your changes more realistic, and you often no longer even need to use masking.
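Here is a sketch of the ‘boost color in the lows’ idea, assuming a float RGB frame in 0..1. The shadow-weighting curve is my own choice for illustration, not what Colorista or any other tool does internally, but it captures the principle: the saturation gain falls off as you move up the tonal range.

```python
import numpy as np

def boost_shadow_saturation(frame, amount=0.5):
    """Increase saturation only where the image is dark, leaving mids and highs alone.
    frame: float RGB array, 0..1, with unclipped shadows (clipped blacks hold no color)."""
    luma = (0.2126 * frame[..., 0] + 0.7152 * frame[..., 1]
            + 0.0722 * frame[..., 2])[..., None]
    shadow_weight = (1.0 - luma) ** 2              # ~1 in deep shadow, ~0 in highlights
    gain = 1.0 + amount * shadow_weight            # per-pixel saturation gain
    return np.clip(luma + (frame - luma) * gain, 0.0, 1.0)
```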
Although as a photographer, you are taught to protect the highs from clipping (because that kills your high tones), in video, your shadows are often more important because they contain the low color. It may be hidden, but it is this color information that you will be boosting or diminishing in post to create atmosphere in your footage. This is the main reason that you use a low contrast filter in video: not to protect tone, but to protect color.
Poor shadow encoding is a common default failing of all DSLR footage (unless you are shooting RAW with a 5D, or using the BlackMagic/Red, but then you have a much more non-trivial post production process: see here if you want to see why at least one videographer actually prefers an AVCHD camera to a Red Scarlet… the resulting flame wars are also quite entertaining to read in the comments!).
There is a final advantage in using a Low contrast filter: it reduces the bandwidth requirements slightly. Using Bitrate Viewer on my footage, I find that the bandwidth is about 5% lower with the filter attached (probably because the footage is less contrasty and has smaller color transitions). It’s not much of a difference, but it is perhaps a use-case if you are using a Sony A77 or another camera with the default AVCHD bitrates (24 to 28Mbit/s), especially as it will also eliminate the shadow macro blocking that can occasionally appear before you even edit, when shooting difficult lighting or busy scenes at those default bitrates.
My Low contrast Filter also sees some use in stills. Because it spreads out highlights, I see less highlight clipping and a more rounded roll-off. (NB – the photographer in me thinks this is a very cool thing about this filter, but the videographer in me says ‘nah, it’s the richness of color in your darks that makes this filter great!’)
It is also useful if you are shooting into the sun. The downside is that you usually have to significantly post-process any stills taken with a Low contrast filter to get your tones back: it’s one to use only in tricky tonal lighting conditions, when you want to achieve a low contrast look (it is actually used often in stills to give a retro 70’s look), or when you know you will be post processing color significantly (usual in video production).
In terms of buying a low contrast filter, there is only really one option: buy Tiffen as they are the only ones who do them well. I use the Low Contrast 3 filter. Some videographers use an Ultra Contrast filter, but I find the Low contrast filter to be better with highlights (it gives a film like rolloff to highlights because it diffuses bright lights slightly). You can see a discussion of the differences between Low Contrast and Ultra Contrast on the Tiffen website.
Update January 2014: Since writing this post, I have found out that Sony Alpha cameras apply DRO (dynamic range optimization) to video. DRO works electronically by lifting shadows. Although this process does not increase detail, it may prevent macro blocking, making it a good alternative to using an expensive low contrast filter. See this post for further details.
The low contrast filter reduces the final exposure by about 1/3rd of a stop, but that is because of the way the filter works rather than intentional. If we actually want to control exposure, we have to use the next filter…
Variable Neutral Density fader
For shooting video that looks like film rather than live TV, using a variable ND fader is not optional: you have to use one, because it’s the only way you can control exposure. A variable ND filter is rotated to vary the light passing through the filter, and this (rather than shutter or aperture) controls the exposure of the final shot. Without a variable ND filter, shooting at wide apertures and a shutter of around 1/50s (which you need for the DSLR film look) will leave you with overexposed footage.
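Some rough exposure arithmetic shows why the ND is unavoidable, assuming the standard exposure-value formula and a bright-sun scene of about EV 15 at ISO 100 (the ‘sunny 16’ ballpark – the exact scene value is an assumption):

```python
import math

def ev100(aperture, shutter):
    """Exposure value at ISO 100 for a given f-number and shutter time (seconds)."""
    return math.log2(aperture ** 2 / shutter)

scene_ev = 15                                  # bright sunlight, roughly the 'sunny 16' ballpark
camera_ev = ev100(aperture=3.5, shutter=1/50)  # the fixed 'filmic' settings discussed above
print(f"ND needed: about {scene_ev - camera_ev:.1f} stops")   # roughly 5-6 stops of ND
```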
All ‘DSLR film’ (footage shot with a DSLR that has a filmic look to it) is shot using a variable ND fader, and using one is a given. There is only one real issue to consider: which one to buy. There are a few good video reviews:
Learning DSLR video have a good Variable ND fader shootout at http://www.learningdslrvideo.com/variable-nd-filter-shootout/ but it’s worth noting that the review considers stills resolution rather than video resolution in the testing, meaning that he’s testing at far too high a resolution, and probably discounting faders that are acceptable for video.
For the price of 1 expensive Tiffen variable ND fader, I bought the following:
6 Fotga variable ND faders, one for each lens diameter I use for video on my Sony Alpha A77 and Panasonic GH2 (82, 77, 72, 55, 52 and 49mm)
1 Polaroid ND fader at 37mm for my Panasonic LX7 advanced compact.
The Fotga gives 2 to 9 stops of exposure control, but only the first 2-7 stops are usable (you start to see vignette and uneven filtering after that, which is fine by me for the price: $10-30, depending on size and where you buy them). Sharpness is not really an issue at video resolutions (but certainly is at stills resolutions: buy Tiffen if you want to use your Variable NDs for stills as well as video).
The most important thing to watch out for with variable ND faders is that you are getting optically flat glass and not normal glass. You can check this by looking through the fader whilst turning it left-right in your hand. If the view through the fader seems to wobble, the light is being bent unevenly because the glass is not optically flat (and you probably need to throw the fader away). You can also check for sharpness by videoing something that generates moiré. Moiré is caused by capturing something that is on the edge of what your sensor can resolve. If the moiré changes significantly with the fader on, it is because the resolving power has changed significantly (i.e. the fader is bringing down the resolving power of your sensor), and you again probably need to bin the fader. It’s worth noting that moiré may occur in stills but not in video, depending on how your camera sensor is set up, so check in video footage only.
In terms of actual use, here’s a quick before and after of shooting a scene with and without a variable ND fader. I’m shooting at the sky with an aperture and shutter dictated by my frame rate and requirement to capture ‘filmic’ footage (f3.5, 1/50s, 24fps, 1080p).
I think the before and after speak for themselves, and we don’t need justification via graphs!
Using Variable ND fader and Low Contrast filter together
Using an ND fader and Low contrast filter together is often necessary where you have a very high contrast scene, typically when you have to shoot into the sun, or when you have deep shadow and highlights in the same scene. If you are shooting outside in full sun, this may occur so often that it is actually the norm rather than an edge case.
Consider the following scene.
The top image shows our situation with no fader. We have a wall in semi-shade with the sun shining over it. If we add a variable ND fader, we can reduce the exposure so that we now have no blown out sky highlights, but that leaves us with foliage that is too dark (middle), so we have clipping in the shadows (and probably macro-blocking if we try to expose the foliage up in post-production). If we now add the Low contrast filter, some of the ambient light is added to the shadows, and we now have brighter foliage (and darks that we can lift up without causing macro-blocking) and no blown highlights. Note that we have raised the darks optically during the shot, so we don’t get the digital noise associated with raising shadows in post processing. This is a significant difference.
Although this is all very good, the filters add some issues of their own. In particular, because the low contrast filter lifts the blacks based on the local gamma, its effects will vary as the local gamma varies. Thus, we have halation at the border between our sky and foliage, which may be difficult to get rid of because it may not be linear, and it gets worse with more extreme light-dark borders (such as sun straight to dark shadow, as we have here). However, the final image is better than either of the other two in that it is the one that contains the most information for post processing… although you will have to be fairly experienced to edit it back to normality, so should not attempt this until you have some experience of color correction.
Finally, consider the tree example we looked at earlier for the Variable ND fader.
Although we get a correctly exposed sky when we add just the ND fader, we also get black foliage with little information in it, something that will easily macro-block in post-production. If we now add the Low contrast filter, not only do we now lift the blacks (so we reduce the chances of macro-blocking in the shadows), we also make the shot look more like true film.
You will notice in the top frame that I have tried to hide the sun behind the tree because I know it will cause blown highlights. With the Low contrast filter, the sun will now have a much more analog-film like rolloff, and will actually hardly clip at all, so I can stop hiding it.
The bottom frame is therefore much easier to work with in post-production. This is true even though lifting the blacks in the tree outline adds no extra information, because lifting the blacks before video encoding in-camera drastically reduces the chances of macro-blocking at any point after capture.
Update April 2014: see this post for a video example (shown below) of a low contrast filter, ND filter and in-camera dynamic range optimization (Apical Iridix) being used together with 28Mbit/s AVCHD footage.
As you can see from the clip, this can produce some very cinematic footage!
When shooting video with a DSLR, there are two filters you can use to make life easier.
A variable Neutral Density fader is a must-have because it is the standard way you control exposure.
A Low contrast filter is something to consider if you will be heavily post-processing. It does bring problems of its own (potentially unwanted halation), but it does give you flatter video in the low tones and allows you to shoot slightly underexposed without losing shadow data (so it effectively allows you to flatten both lows and highs). This makes your footage much easier to handle in any post-production process.
This was supposed to be part three of my ‘Sony Alpha Video’ series, and was going to be titled ‘Sony Alpha Video: Part 3: Filters for video’, but I didn’t want to alienate non Sony AVCHD camera users, to whom this post equally applies.
In the main text I say ‘shadows contain lots of hidden color’. Many photographers know this already: if you want to boost a color in Lightroom/Photoshop but keep the edit realistic, you need to darken it slightly rather than increase its saturation. Although we often don’t see dark colors, we do respond to them, making them a very good thing to know about especially when you want to keep your edits non-obvious.
Color correction and grading are often used to promote a style, ambiance or ‘look’ rather than reflect reality. You want to meet the viewer’s ideal expectations, not boring reality.
After writing my last blog post, I realised there was no video showing my tips on AVCHD editing being used in anger. This quick post puts that right. You can see the associated video here or by viewing the video below (I recommend you watch it full screen at 1920×1080).
Note that the YouTube version is compressed from the original 28Mbit/s to 8Mbit/s, as are most web videos.
Note also that I don’t use a Sony Alpha A77 for the footage in this post: I use a Panasonic Lumix LX7, because I was traveling light and the LX7 is my ‘DSLR replacement’ camera of choice. Both cameras use the same video codec and bitrates, so there is not much difference when we come to post production, except that the Sony Alphas give shallower depth of field and are therefore more ‘film like’, whereas the LX7 will produce sharper video that is less ‘filmic’.
Changing the time of day with post production
My partner and I were recently walking on Bingley moor (which is in Yorkshire, England, close to Haworth and Halifax, places associated with Emily Brontë and Wuthering Heights).
It was about an hour before sunset, and I thought it would be nice to capture the setting sun in video.
Alas, we were too early, and the recordings looked nothing like what I wanted.
A couple of weeks later we were walking in the same place in the early morning. I took some footage of the nearby glen (glen – UK English: a deep narrow valley). So now I had some footage of the moor and glen in evening and morning sun, but no sunset footage. Not to worry: I could just add the sunset via post production.
If nothing else, it would make a good example of how AVCHD footage can be edited with large tone/color corrections without running into issues, as long as you follow the handy tips from the last post!
The original footage
As per the tips in the previous post, I did the following whilst shooting the original footage:
Shot the footage using a fixed shutter and aperture, and varied exposure using a variable Neutral Density Filter. Reason: as a general rule, shoot all footage at a fixed aperture (typically around f3.5, going wider if you want low depth of field, or narrower if your subject is in the distance), and fixed shutter (typically twice the frame rate of your footage). Control your exposure via a variable ND filter.
Set the camera to record desaturated, low contrast and un-sharpened footage. Reason: this gives your footage more latitude for change in post-production.
Exposed the footage slightly brighter than I needed, being mindful of burning highlights. Reason: AVCHD tends to break up or produce artifacts if you increase exposure, but never if you decrease exposure.
Color post production has two workflow areas
Color correction, or correction of faults in the footage. A bucket could be too red, or a sky might need to be more blue. Correction is done on a per clip basis, correcting color/tone issues or adding emphasis/de-emphasis to areas within the scene. Framing and stabilization are also performed on a per clip basis. As an aside, this is the reason why the left side of the footage seems to wobble more in the video: the right side has been stabilized with the inbuilt Adobe Premiere stabilization plugin.
Grading, or setting the look of the final film. Grading is applied equally to all clips and sets the final style.
Here’s a quick run through of the corrections:
Top image. As-shot.
Second Image. Added an emulated Tiffen Glimmerglass filter. This diffuses the sharp water highlights and softens the video a little (I would not have had to do this if I was shooting with my Sony Alpha A77, and you would not have to soften the video if you were using any other traditional Canon/Nikon DSLR as all of them produce soft video). I also added a Fast Color Corrector to fix a few color issues specific to the clip (white and black point, cast removal).
Third image. Added a warm red gradient to the foliage top left to bottom right. The shape and coverage of the gradient is shown in the inset (white is gradient, black is transparent).
Fourth image. Added a second gradient, this time a yellow one going from bottom to top. Again, the shape and coverage of the gradient is shown in the inset.
For this footage, I used an emulated film stock via Tiffen Dfx. The stock is Agfa Optima. I also added back a little bit of global saturation and sharpness using the default Adobe Premiere tools (Fast color corrector and unsharp mask).
Top Image. Corrected footage so far minus the two gradients
Middle image. Grading and global tweaks (Agfa Optima stock emulation plus global color tweaks and sharpness).
Bottom Image. Adding the two gradients back for the final output.
Merging color correction and grading
Combining the two color change tasks (grading and color correction) is a bit of a black art, and I do both together. Generally, I start by picking an existing film stock from Tiffen Dfx or Magic Bullet Looks as an adjustment layer. Then I start adding the clips, color correcting each in turn, and switching the grading adjustment layer in and out as I go. Finally, I add a new adjustment layer for sharpness and final global tweaks. I avoid adding noise reduction as it massively increases render time. Instead, I add a grain that hides noise as part of the grading.
Reality vs. Message
Color correction and grading are often used to promote a style, ambiance or ‘look’ rather than reflect reality. You want to meet the viewer’s ideal expectations, not boring reality.
The video includes this frame. The leaves in the water are red to signify the time of year (autumn/fall).
Real leaves in water lose their color quickly, becoming much more muddy in appearance. I enhanced the muddy leaves towards earth-reds because ‘muddy’ did not fit with my message, even though rotting grey leaves are closer to reality.
Here’s the timeline for the project.
I have my color adjustment and grading in as separate adjustment layers (V2/V3). The first half of the timeline is more or less identical to the second half, except that the second half has the unedited versions of the clips on layer V4. These clips have a Crop effect (Video Effects > Transform > Crop) on them with the Right value set to 50%. This is how I get the edited/unedited footage split-screens at the end of the video.
When adding backing sound, the music file is never exactly the same length as the video, so to make the two match, I often use this simple trick to edit the music so it is shorter:
Put the music clip on the timeline so that the start of the music lines up with the start of the footage, and
On a different sound layer, put another version of the same music on the layer below, such that this time the end of the music lines up with the end of the video.
Make the two sound clips overlap in the middle, and where they overlap, zoom into the waveforms and find and match the percussive sounds (generally the drums).
Fade between the two sounds on the overlap.
In the timeline section above, I have matched (lined up) the drum sounds on the A1 and A2 music clips (both of which are different sections of the same music file), then faded from layer A1 to A2 by tweening the volume level. This produces a smooth splice between the two music sections. If space allows, you should of course also match on the end of the bar (or ‘on the beat repeat’). For my timeline, you can see (via the previous ‘Project timeline’ screenshot) that I have spliced between four sections of the same file.
During color correction I kept an eye on the YC waveform scope, which is available in Premiere and most video editing applications.
The YC waveform shows both Luma (Y, or brightness) and Chroma (C, or color). Luma is cyan, and Chroma is dark blue on the scope.
The x-axis of the scope is the x-axis of the footage, so the y-axis shows the YC values at each point along the width of the footage itself. It sounds a bit complicated, but if you use the waveform on your own footage it becomes immediately obvious what it represents.
For the broadcast standard I am using (European, PAL), true black is 0.3V on the scale and true white is 1.0V (NTSC is very similar). The original footage is shown on the left side of the image, and the corresponding YC waveform is shown below it. The waveform shows that highlights in the sky area are clipping (we see pixels above 1.0V), and the darkest areas are not true black (the waveform doesn’t get down to 0.3V). The right side of the image shows the final footage, and we can see that we now have no clipping (in either brightness or color saturation) and our blacks are closer to true black.
Keeping an eye on the YC waveform is something I always do when editing color. You may think your eye is good enough, but your vision gets tired or so used to a color that it no longer recognizes casts; the scope never tires and never lies! Another useful scope for skintones is the Vectorscope. Something for another post…
This post shows the workflow I used to correct a small number of clips such that they could be added together into a single scene. The final movie shows a typical English autumn sunset (or at least, one where you can see the sun!) yet none of the clips were actually taken at this time of day nor under the lighting conditions of the final scene.
By manipulating the color of our footage via color correction and grading, we achieved our desired output despite the constraints of reality on the day!
Finally, by following additional steps and rules of thumb whilst shooting and editing the AVCHD footage, we have avoided running into its limitations. In fact, the only point in the video where you may see any artifacts is the only place where I did not follow my own advice: at about 0:30 the footage has its exposure increased slightly, and shows small artifacts in the shadows.
You can see all previous video related posts from this blog here.
The music in the video is Spc-Eco, Telling You. Spotify Link.
The YC graph is so much more useful than the histogram seen on most stills cameras that I often wonder why digital cameras don’t have the YC waveform instead! For example, the YC waveform not only tells you whether your image has clipped pixels, but unlike the Histogram, the YC tells you where along the width of the image those pixels are! You can still ‘shoot to the right’ using the YC (and it actually makes more sense) since brightness is Luma height. The YC also separates out brightness and color information, so you can see at a glance the tonality and color information within your photograph in a single visual. How’s that for useful!