Tag Archives: DSLR Video

Using standard DSLR lenses with the Black Magic Pocket Cinema Camera

The Black Magic Pocket Camera is without doubt the best quality video camera available on the cheap (i.e. under $1000, or under $500 if you were lucky and bought during the summer price reduction). Anything else is either outside the BMPCC's price bracket or simply not as good for video.

The BMPCC is designed to be disruptive: a cheap, small format camera with professional output. There is a problem with the BMPCC though: lenses! If you are not careful, you can end up spending a lot of cash trying to get around the BMPCC’s large crop factor (2.88).

Ideally, you should be re-purposing your existing DSLR lenses. After all, the BMPCC is designed to be cheap, and uses the micro43rds mount. Almost any lens can fit that mount with a cheap and dumb adapter.

Here’s the list of inexpensive or already-owned lenses I use.

Ultra wide

The BMPCC has a small sensor, approximating the super 16 format. This is a 2.88 crop on full frame, so any DSLR lens you use will end up effectively having x2.88 the focal length.
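
The arithmetic is trivial but worth internalising. Here is a throwaway Python sketch (the lens list is just illustrative) that converts real focal lengths to their full frame equivalents on the BMPCC:

    # Full frame equivalent focal length on the BMPCC (crop factor 2.88).
    BMPCC_CROP = 2.88

    def ff_equivalent(focal_mm, crop=BMPCC_CROP):
        """Return the full frame equivalent focal length in mm."""
        return focal_mm * crop

    # A few example focal lengths (mm) from lenses mentioned in this post.
    for focal in (8, 11, 16, 50, 90, 210):
        print(f"{focal:>3}mm acts like {ff_equivalent(focal):.0f}mm on full frame")

So an 8mm fisheye acts like a 23mm, and the long end of a 70-210 acts like a 600mm.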

This is generally very bad for the videographer, because it makes a superwide field of view (less than 24mm on full frame) hard to come by. A lack of superwide is not a big problem for a stills photographer (especially when superwide is seen as niche rather than mainstream), but for video it is much more essential: you need it for hand held and run-and-gun work.

Wide view, Opteka 6.5mm (8mm)

Superwide is often used for the establishing shot in many film sequences, not least because in cinematic language, wide angle is the ‘first person view’: it places the viewer into the scene.

Here’s the problem in a nutshell: to get a superwide field of view on a BMPCC, you need a 6-8mm lens, and such a short focal length doesn’t exist for DSLRs. There are several options:

  • A modern ~6mm 1” c-mount TV camera lens (expensive).
  • An old ~8mm 1” c-mount (not sharp, and no longer cheap because of the demand).
  • The Panasonic 7-14 micro43rds (slow and overpriced).
  • The Sigma 8-16 (even slower, but a better price, especially when you can also use it on APS-C and/or use a focal reducer to up the f-stop for low light).
  • A focal reducer plus a longer APS-C superwide such as the Tokina 11-16 (a reasonable option for Canon/Nikon users, especially as the Tokina is a video friendly constant f2.8, but not a superwide option for Sony Alpha, as there is no Sony Alpha to micro43rds focal reducer). The focal reducer arithmetic is sketched below.
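
Since several of these options involve a focal reducer, it is worth seeing what one actually does to the numbers. A minimal sketch, assuming a typical ~0.71x reducer (exact figures vary by model):

    import math

    def with_reducer(focal_mm, f_stop, reducer=0.71, crop=2.88):
        """Effect of a focal reducer on focal length, speed and crop."""
        new_focal = focal_mm * reducer        # shorter effective focal length
        new_fstop = f_stop * reducer          # faster: ~1 stop gain at 0.71x
        new_crop = crop * reducer             # effective crop factor drops
        gain = -2 * math.log2(reducer)        # light gain in stops
        return new_focal, new_fstop, new_crop, gain

    f, n, c, g = with_reducer(11, 2.8)        # Tokina 11-16 at 11mm f2.8
    print(f"11mm f2.8 + 0.71x reducer -> {f:.1f}mm f/{n:.1f}, "
          f"crop {c:.2f}, +{g:.1f} stops")

A 0.71x reducer therefore turns the BMPCC's 2.88 crop into an effective 2.04, and gains you about a stop of light into the bargain.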

I chose an option that is cheaper than all of the above: I used an APS-C fisheye lens that I already own. APS-C fisheyes are typically ~8mm, so they are wide even on the BMPCC. The fisheye effect is smaller on the BMPCC because you are cropping into the center of the lens, so you can ‘defish’ easily in post. The lens I use is the Opteka 6.5mm. It’s actually 8mm, and you have to modify it to accept an ND filter for video use (black duct tape and a step-up filter ring work wonders). To mount the Opteka on the BMPCC, I bought a Sony Alpha to micro43rds adapter with aperture control. It cost about $30 from Amazon. It has a big advantage over focal reducers in that it contains no glass, so there is no drop in lens quality.

If I had no existing lens, I’d probably have considered the Sigma 8-16, as it covers both the wide and standard range in a single lens.

Standard range

The standard range, corresponding to 16-50mm on APS-C, is fairly easy to achieve on the BMPCC via an APS-C superwide lens. I use the Tokina 11-16. Again, to fit the Tokina to the BMPCC, I used the same no-brand, no-glass $30 adapter.

Normal view, Tokina 11-16mm at 16mm

You might also consider the Panasonic 14mm f2.5, a much lighter and more portable lens that works well for mid-shots to close-up shots, and the ubiquitous ‘noddy-shot’.

Tele

Once you are beyond the normal range, the BMPCC crop factor starts to work in your favor, since most of your existing APS-C and full frame lenses will work somewhere in this range. Long tele lenses such as the cheap Minolta Beercan become a 600mm-plus f4 equivalent, perfect for the wildlife photographer wanting to try out video on the cheap. At the macro end, a macro such as the Tamron 90mm lets you get much further away from the subject on the BMPCC, allowing an easier setup than with a traditional DSLR.

Tele view, Minolta AF, 50mm f1.4

One thing you will need is an f1.4 or f1.7 lens in this range (your cheapest bet is a 50mm, which you most likely already own), because you will need at least one lens faster than f2 to get a suitably shallow depth of field with the BMPCC. The crop factor means you have to move back from the subject to get proper framing, and this larger distance from the subject of course increases depth of field.
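
If you want to put numbers on this, the standard hyperfocal approximation is enough. A rough sketch, assuming a circle of confusion of ~0.015mm for the BMPCC's Super 16-ish sensor (a commonly quoted figure, but an assumption here; treat the output as illustrative only):

    def total_dof_mm(focal, f_stop, subject_mm, coc=0.015):
        """Approximate total depth of field in mm (thin lens model)."""
        h = focal**2 / (f_stop * coc) + focal            # hyperfocal (mm)
        near = subject_mm * (h - focal) / (h + subject_mm - 2 * focal)
        far = (subject_mm * (h - focal) / (h - subject_mm)
               if subject_mm < h else float("inf"))
        return far - near

    # Matching a full frame framing means standing ~2.88x further back...
    print(total_dof_mm(50, 2.0, 2000))    # 50mm f2 at 2m: ~94mm of DoF
    print(total_dof_mm(50, 2.0, 5760))    # same lens at 5.76m: ~790mm
    # ...and a faster aperture claws some shallowness back:
    print(total_dof_mm(50, 1.4, 5760))    # 50mm f1.4 at 5.76m: ~550mm

Moving back to reframe roughly octuples the depth of field; opening up from f2 to f1.4 claws back about a third of it.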

Conclusion

There is a lot of noise on the internet about how the x2.88 crop of the BMPCC makes it a difficult camera to work with, and ‘You need exotic c-mount lenses such as the KOWA LM6HC’, or ‘The camera is useless without a Metabones speed booster’.

Neither of these claims is true. You may initially struggle a little with superwide if you are thinking in stills photography terms (where superwide is almost a graphical style, with extreme urban edges or panoramic natural vistas), but once you realise superwide in video really only means ‘just wide enough to imply the viewer is there by capturing the entire scene in one frame’, you begin to see you really only need 20-24mm rather than 14-20mm. 20-24mm full frame corresponds to about 7-8mm on the BMPCC, and a cheap APS-C fisheye or Sigma 8-16 (with no focal reducer for either) is really all you need for this.

Without a focal reducer, you will not get massively shallow depth of field, because of the crop factor of the BMPCC. This has made certain famous web videographers (whose style relies on very shallow depth of field) dismissive of the BMPCC, but that particular style is not true to Super 16. That said, it’s not difficult to get a shallow depth of field with the BMPCC: buy a cheap focal reducer (less than $100 on eBay), and/or make sure you have at least one fast lens.

For what it’s worth, getting a decent range of lenses for the BMPCC cost me less than $200, given that I already own an APS-C camera (a Sony Alpha A77) plus a full set of lenses for it. Most of that went on a Panasonic 14mm f2.5; the rest went on a $30 Sony Alpha to micro43rds adapter. Easy! Similar options exist for Canon and Nikon users (in fact, with the prevalence of cheap focal reducers for those mounts, you are spoilt for choice!).

Notes

  1. All lenses were shot wide open.
  2. Minimal grading was performed on the footage as we’re more concerned about the field of view here.

Time Slows: Black Magic Pocket Cinema camera

During the summer, Black Magic halved the price of their Black Magic Pocket Cinema Camera (BMPCC).

This was a limited offer, so I ordered straight away. Remorse started sinking in when I read the reviews whilst waiting for delivery: it’s a difficult camera to work with (it is, but it’s also very rewarding), there’s a massive x2.88 crop (there is, but focal reducers are now very cheap and decent), and worst of all, it’s very difficult to grade the video.

Ungraded BMPCC footage, ProResHQ using the ‘film’ (log/flat) color setting. Click image to see a full size version (1920×1080).

Well, there are fixes for the first two, and the third is a blatant lie: grading log footage is not difficult as long as you get your white balance right before you do any other grading or color correction. This point is crucial.

Assuming good white balance, all you have to do is push the saturation, add a LUT (look up table) or even just use an auto-color correction to get you in the right ballpark.
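
Mechanically, that first push is nothing exotic. Here is a toy numpy sketch of a saturation push plus a simple contrast curve applied to a white balanced, flat frame (a stand-in for what a grading tool does, not any particular plugin's implementation):

    import numpy as np

    def quick_grade(frame, saturation=1.3, contrast=1.2):
        """Crude grade for flat footage: frame is float RGB in [0, 1]."""
        luma = frame @ np.array([0.2126, 0.7152, 0.0722])   # Rec.709 luma
        # Saturation push: move each channel away from its own luma.
        out = luma[..., None] + saturation * (frame - luma[..., None])
        # Linear contrast expansion about mid grey (a crude 1D LUT).
        out = 0.5 + contrast * (out - 0.5)
        return np.clip(out, 0.0, 1.0)

    flat = np.random.rand(1080, 1920, 3) * 0.5 + 0.25   # stand-in for log footage
    punchy = quick_grade(flat)

A real LUT is just a pre-baked version of transforms like these, sampled into a table.
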
Anyway, shown above is my very first attempt with the BMPCC. Notice that the sky and ground are well exposed in every shot. That’s the power of a high dynamic range plus a good codec (CinemaDNG or ProRes).

There are loads of tutorials and reviews on adapting Canon or Nikon lenses to the BMPCC, but I could find nothing for Sony Alpha. Well, that’s about to change. Watch this space!

Notes

  1. The video was shot hand held using only the Panasonic 14mm f2.5, shot wide open to get a de-sharpened, analog feel.
  2. The source footage is ProResHQ (2.45GB for about 2.5 minutes of footage) and this was edited using only Premiere Pro with the Colorista 2 and Tiffen Dfx plugins. No LUTs or presets were used.
  3. The slow motion effect was added in Premiere via Twixtor. The text motion graphics were created using  the standard Premiere animation, masking and blur tools (After Effects was not used).
  4. The Soundtrack is Clockwork by The Silk Demise.

Using low contrast filters for video

In a previous post, I discussed using a Tiffen Low Contrast filter when filming with an AVCHD enabled camera. I didn’t illustrate the point with any of my test footage.

Here it is.

Low contrast filters and video encoders

To recap, we use low contrast filters with AVCHD DSLR video because AVCHD compresses footage using a perceptual filter: what your eyes can’t perceive gets the chop in the quest for smaller file sizes. Our eyes cannot see into shadow, so AVCHD ignores (filters out and discards) most of the shadow data. AVCHD knows we can’t see the difference between small variations in color, so it removes such slight differences and replaces them with a single color.

That’s fine if you will not be editing the footage (because your eye will never see the difference), but if you do any post processing that involves exposing up the footage, the missing information shows up as macro blocking or color banding. To fix this, we can do one of three things:

  1. Use a low contrast filter. This works by taking ambient light and adding it to shadows, thus lifting the shadows up towards mid-tones and tricking AVCHD into leaving them alone. The low contrast filter thus gives us more information in shadow not because it adds more information in the shadows itself, but because it forces the AVCHD encoder to leave information that the encoder would otherwise discard.
  2. Use Apical Iridix. This goes under different names (e.g. Dynamic Range Optimisation or DRO for Sony and i.Dynamic for Panasonic), but is available on most DSLRs and advanced compacts. It is a digital version of a low contrast filter (it’s actually a real-time tone mapping algorithm). It works by lightening blacks and preserving highlights. Although it again doesn’t add any new information of itself, Iridix is applied before the AVCHD encoder, so it too can force the encoder to leave shadow detail intact.
  3. Use both a low contrast filter and Iridix together.

The video uses the third option.
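
To see why lifting shadows before the encoder matters, here is a toy simulation. The ‘encoder’ below just quantizes shadows coarsely, which is a deliberately crude stand-in for AVCHD discarding detail the eye can’t see:

    import numpy as np

    def toy_encode(y):
        """Fake perceptual encoder: very few code levels below 0.2."""
        y = y.copy()
        shadows = y < 0.2
        y[shadows] = np.round(y[shadows] * 8) / 8       # coarse shadow levels
        y[~shadows] = np.round(y[~shadows] * 64) / 64   # finer elsewhere
        return y

    scene = np.array([0.02, 0.05, 0.08, 0.11])   # four distinct shadow tones
    print(toy_encode(scene))                # collapses to two values
    print(toy_encode(scene * 0.9 + 0.2))    # optical lift: all four survive

With the lift applied before encoding, all four tones come out distinct; without it, they collapse into two, which is exactly the information loss that shows up later as macro blocking.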

Deconstructing the video

The low contrast filter allows us to see detail… even though it is in full shadow

The video consists of three short scenes. They were taken with a Panasonic Lumix LX7: 28Mbit/s AVCHD at 1080p and 50fps (exported from Premiere as 25fps with frame blending), manual exposure, custom photo style (-1, -1, -1, 0). It was shot hand held and stabilized in post (Adobe Premiere warp stabilizer). The important settings were:

  • ISO set to ISO80, and
  • i.Dynamic set to HIGH.

I chose the lowest ISO so that I could set i.Dynamic to high without it causing noise when it lifts the shadows.

The camera had a 37mm filter thread attached and a 37-52mm step-up ring, on which were attached a Tiffen low contrast filter and a variable neutral density filter. The reason I used two 52mm filters (i.e. bigger than the lens diameter) rather than two 37mm ones is that stacking filters can cause vignette unless you step up as I have done.

Here are the three scenes. The left side is as-shot; the right side is after post processing. Click on the images to see larger versions.

Scene 1

Notice in this scene that the low contrast filter is keeping the blacks lifted. This prevents macro-blocking. Also note that the highlights have a film-like roll off. Again, this is caused by the low contrast filter. The Variable ND filter is also working hard in this scene: the little white disk in the top right is the sun, and it and the sky around it were too bright to look at!

Scene 2

Scene 2 is shot directly into the sun; you would typically end up with either a white sky and properly exposed rocks/tree, or a properly exposed sky and black rocks/tree. The low contrast filter and Iridix (i.Dynamic) give us enough properly exposed sky and shadow to enable us to fix it in post. Nevertheless, we are at the limits of what 28Mbit/s AVCHD can give us: the sky is beginning to macro block, and the branches are showing moiré. I shot it this way to give an extreme example, but would more normally shoot with the sun oblique to the camera rather than straight at it.

Scene 3

Scene 3 is a more typical shot. We are in full sun and there is a lot of shadow. The low contrast filter allows us to see detail in the far rock (top right) even though it is in full shadow. It also stops our blacks from clipping, which is important because near-black holds a lot of hidden color. For example, the large shift from straw to lush grass was not done by increasing green saturation in the mid-tones, but in the shadows. If you want to make large color changes, make them in the shadows, because making the same changes in the mid-tones looks far less natural (too vivid). Of course, if we didn’t use a low contrast filter to protect our blacks (and therefore the color they hold) from clipping, we would not have the option to raise shadow colors!

Conclusion

Shooting flat is something you should do if you will be post editing your video footage. Many cameras do not allow you to shoot flat enough; to get around this, you can use either a Tiffen Low contrast filter or the camera’s inbuilt Apical Iridix feature. To maximise the effect, you can use both, as illustrated in this example.

The main advantages of using a low contrast filter are:

  • Protects blacks from clipping, thus preventing shadows from macro-blocking and preserving dark color. The latter is important if you are going to make substantial color correction in post because raising shadow color usually results in much more natural edits.
  • Better highlight roll-off. The effect looks more like film than digital (digital sensors have a hard cut-off rather than a roll-off).
  • Lower contrast that looks like film. Although many people add lots of contrast (i.e. dark, blue blacks) to their footage, true film actually has very few true blacks. The low contrast filter gives this more realistic look.
  • Removes digital sharpness and ‘baked-in’ color. Many cameras cannot shoot as flat as we would like, and produce footage that is obviously digital because of its sharpness (especially true of the Panasonic GHx cameras). Adding a low contrast filter helps mitigate these issues.

The main disadvantages of using a low contrast filter/Apical Iridix are:

  • The filter loses you about 1/3 stop of light.
  • You usually have to use the low contrast filter along with a variable ND filter (which you need to control exposure). The two filters bring optical defects beyond their intended function (possible vignette from stacking filters, loss of sharpness). However, remember that you are shooting at a much lower resolution for video, so the sharpness effects will be much smaller than for stills. You can eliminate vignette by using larger filters and a step-up ring.
  • Apical Iridix will increase shadow noise. Use it at maximum only with very low (typically base) ISO.

Notes

None.

DSLR Video: Lens filters for video

DSLRs are stills devices that happen to have video capability. Therefore, most DSLRs don’t have any special features that make shooting video as easy as shooting stills. There are workarounds, but your camera manual will not tell you what they are because, like your DSLR itself, the manual is mostly concerned with stills shooting.

This blog post explains how to get around the issues with the minimum additional kit: a variable ND fader (required) and a low contrast filter (optional but recommended if you will be performing heavy post processing).

The problem

If you are coming to video with a stills mentality there is no way to control exposure!

When shooting stills, you have a lot of control over how you set exposure. You can vary shutter, aperture or ISO. If you were to take a series of photographs of (say) a bride and groom leaving a church, you or your camera would maintain correct exposure by varying these three values as the couple moved from the darker interior and out to the bright sunlight. None of this will work for video:

  • You cannot easily change aperture or ISO midway through a take without it being obvious (i.e. it will look awful), so you are stuck with the values at the start of the take even though the lighting conditions may change midway through.
  • You typically set the aperture fairly wide in video (around f2-f4, with f3.5 being a good default value), so your ability to control exposure with it is limited. As in stills, aperture is more of a stylistic control (i.e. it is used to set depth of field and sharpness) than an exposure control in any case.
  • Although some cameras do change shutter to maintain exposure when shooting video on auto, this is never done in professional production: too fast a shutter causes less smooth video and strobing. Instead, shutter is fixed to the frame rate. As a rule of thumb, you set the shutter to twice the frame rate (see the sketch after this list), so if you are shooting 24fps video, you set the shutter to 1/50s. If you want to shoot something moving fast, you do not increase shutter as you do in stills; instead, you have to increase fps, and then increase the shutter to match.
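
The rule is easy to wrap in a helper. A throwaway sketch (the list of shutter speeds is just an example; use whatever your camera actually offers):

    # 180-degree shutter rule of thumb: shutter ~= 1 / (2 * fps),
    # rounded to the nearest speed the camera can actually set.
    COMMON_SPEEDS = [1/30, 1/50, 1/60, 1/100, 1/125, 1/200, 1/250]

    def video_shutter(fps):
        target = 1.0 / (2 * fps)
        return min(COMMON_SPEEDS, key=lambda s: abs(s - target))

    for fps in (24, 25, 30, 50, 60):
        print(f"{fps}fps -> 1/{round(1 / video_shutter(fps))}s")

This reproduces the 24fps to 1/50s rule above (1/48s isn't a speed most cameras offer, so 1/50s is the closest match).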

So here’s the problem: If you are coming to video with a stills mentality there is no way to control exposure!

In video you have to control exposure via a variable Neutral Density (ND) fader.

So, the variable ND fader keeps you happy for a while. Your video starts to look smoother and less like it was taken with an iPhone. But then you realise that a lot of the cool stuff in professional films happens in post-production, and you try your hand at that.

An example of macro blocking. The splotches on the black container and some of the mushiness in the background foliage are all signs of macro blocking. These would both become much more prominent if you sharpened and/or brightened the footage.

But then weird stuff starts happening. If you change exposure too much, you start to see blockiness in what used to be shadows (‘macro blocking’), or color banding if you try to give your scene more punch. The image above is a good example of poorly shot footage: we have allowed the blacks to macro-block because of the underexposed dark tones.

As a stills shooter, you will know that if you want to do any significant post-processing, you have to shoot RAW. The option to shoot RAW video is generally not available on current DSLRs; you instead use a compressed format that is rather like JPEG in that it looks good unless you try to edit it too much, whereupon it will break up and start to show quality issues (banding, compression artifacts, crushed shadows and blown highlights).

In video you have to shoot flat if you want to post process your footage.

‘Flat’ means shooting such that your Luma (brightness or ‘Y’) and Chroma (color or ‘C’) values are near the center of the available range, so you end up with de-saturated, low contrast footage. All your YC values are well away from the extreme values that would cause clipping, so you can push the values much further in post-production (recoloring, changing lighting digitally, changing exposure digitally).

Shooting flat seems like an easy step: you just set an in-camera video style with low contrast, low saturation and low sharpness values. Many DSLRs don’t allow you to get particularly flat footage using the available digital controls though (especially the Panasonic GH2/GH3, which are otherwise excellent video DSLRs), so you may have to do it optically via a Low Contrast filter.

It is worth noting that if you want to create footage that you can post process heavily, you may use both a variable ND fader and a low contrast filter together. This raises light loss and optical aberration issues caused by the additional glass in your light path. However, most color-editing video software assumes you are using flat footage, and if you are not, you may have bigger problems.

For example, most plugins and applications come with lots of preset ‘film looks’, which sounds great until you try them with non-flattened footage: the preset result becomes too extreme to be usable, and if you mix them down, the effect becomes negligible. Not good!

In the next section I will show how both the variable ND fader and Low Contrast filter can be used to create well exposed and flatter footage. I am using a Panasonic GH2, but also retested using a Sony Alpha A77 to confirm the workflow on both cameras.

Setup

I will be shooting the same footage with and without the two filters. To make sure the footage is identical, I am moving the camera on a motorised slider (a Digislider).

Test setup

I am shooting foliage because panning along foliage is actually a very good test of video: it generates massive amounts of data, as well as producing lots of varying highlight/dark areas. That, and the fact that the garden is where most enthusiast photographers test out most things (hence howgreenisyourgarden).

I am using a Panasonic GH2 with the ‘Flowmotion’ hack that allows me to shoot high bitrate video (100Mbit/s AVCHD). I set the GH2 to shoot 1080p 24fps video, which is as close as we can get to motion-picture-like film on the camera. To start the flattening process, I set the GH2 picture style (which the camera calls a ‘Film style’ when applied to video) to contrast -2, sharpness -2, saturation -2 and noise reduction 0. On a Sony Alpha, you would do the same, but set the sharpness as low as it goes (-3) and forget the noise reduction (that option is not available, as in their wisdom, Sony have simply limited the maximum video ISO!).

Low Contrast filter

A Low contrast filter is optional. You will see little reason to use it when you start out with video, but the need for it becomes more apparent if you get heavily into post-processing.

Using my setup, I took two identical pans, moving left to right along the hedge.

Top – a frame from the ‘without filter’ video. Bottom – a frame from the ‘with Low contrast filter’ video

The footage without the contrast filter (top) initially looks better, but is more susceptible to issues if you try to change it. This becomes more apparent when we look at the underlying data as graphs.

The YC graph (as seen in most video editing software) has the image width as its x-axis and plots luma (cyan, or light blue-green) and chroma (blue) on the y-axis. Think of it as a much more useful version of the standard camera histogram. It’s more useful because (a) it shows brightness and color separately but at the same time and in relation to each other, and (b) it is directly related to the image width, so you can tell where along the width of the image you have shadow clipping or highlight burn (with a histogram, you only know you have clipping/burn, but not where in the image).
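
If you are curious what the scope is actually computing, it is this. A minimal numpy sketch of the luma half of a YC waveform (Rec.709 luma weights assumed):

    import numpy as np

    def luma_waveform(frame_rgb):
        """frame_rgb: H x W x 3 float RGB in [0, 1]."""
        luma = frame_rgb @ np.array([0.2126, 0.7152, 0.0722])
        h, w = luma.shape
        xs = np.repeat(np.arange(w), h)   # column index for every pixel
        ys = luma.T.reshape(-1)           # luma values, column by column
        return xs, ys                     # scatter xs vs ys to draw the scope

    frame = np.random.rand(1080, 1920, 3)   # stand-in for a video frame
    xs, ys = luma_waveform(frame)

Every pixel contributes one point: its column position on the x-axis and its brightness on the y-axis, which is why clipping shows up at the exact horizontal position it occurs in the frame.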

YC graph showing effects of Low Contrast filter
Left – ‘without filter’ footage. Right – ‘with low contrast filter’ footage

As you can see, the low contrast filter lifts the low part of the data. This corresponds to brightening shadows. It is important to realise that this is not giving you more information in the shadows. The filter is merely creating diffused local light and then adding it to the shadows to give them a bit of lift. You will see many people on the internet telling you this means a low contrast filter is useless. This is not the case, because:

  • One of the ways most video codecs optimise file size is by removing data in areas where we cannot see detail anyway. This can occur in several places, but one place it always occurs is in near-black shadows. By lifting (brightening) such shadows optically before the camera sensor sees them, we force the codec to encode more information in our shadows, thus giving us a more even amount of data across the lower tonal range.
  • We often have to increase exposure in post-production. If we did this without the shadow lift that low contrast filters give us, the shadows would have been encoded with very little data assigned to them. When we expose them up, we see this lack of data as macro-blocking (shadows become blocky and banded). This is especially true of AVCHD: if your camera creates AVCHD, you need to be particularly careful when editing shadow areas. Further, with standard 24-28Mbit/s (unhacked) cameras, some busy scenes may even show macro blocking in the shadows before you edit. Without a Low Contrast filter in place, such footage may have to be retaken.

There is a way of lifting shadows without using the filter: simply expose your footage a little to the right (about 1/3rd of a stop), and then underexpose by the same amount in post. That is fine, but it makes it more likely you will burn highlights. With the low contrast filter you have a better time of it, because you can now underexpose to protect highlights, knowing that you will not clip the shadows (the darks are lifted by the low contrast filter), so you end up with flatter lows and flatter highs.

There is one final issue to consider with Low contrast filters: low colors.

The RGB parade is another common video-editing graph, and it shows the color values (0-100%) across the frame for the three color components.

RGB Parade showing effects of Low Contrast filter
Left – ‘without filter’ footage. Right – ‘with low contrast filter’ footage

Notice that when we use the low contrast filter, the low color values are also lifted upwards towards the average. This means we are less likely to shoot footage with clipped color in the lowlights, and thus we can push our dark colors further in post-production. This is especially important when we look at skintones: we usually want to brighten skin overall (to make it the focus of attention) and that means exposing up skin that is in shadow. We are particularly sensitive to odd looking skin, so it is important that we don’t clip any skin color because we will almost certainly need to expose it up and clipped skin will then look odd.

So, the low contrast filter protects our shadows by optically adding some mid tone light (‘gamma’) to it. The filter also raises our darker color values towards the average. Both these changes reduce the chance of clipping and color change in post-production.

In the image below, the left side of the frame was shot with the low contrast filter, and then post-processed (using RGB curves and the Colorista 2 plugin, both via Adobe Premiere CC). The right half of the image is as-shot without the filter or any processing.

Edited footage shot with Low contrast filter (left side of image) vs normal footage (right side)

Click the image to see a bigger version.

Notice that the left half of the image seems to have the healthiest leaves, and is therefore the nicest to look at. This of course was done via color correction: the low greens were significantly lifted upwards (notice also that lifting the dark colors gives a much more subtle and realistic looking edit than just saturating the mid-greens). Further, notice that the darks on the left side never drop down to true black; they are always slightly off-black and blend in better with the mid-tones, whereas the dark areas on the right hand side look like true black, and consequently contain no detail (and very little information). Although the left side is actually the more processed of the two, it looks the most natural because of the more believable color variation and contrast between the lows and mids, and (somewhat counter-intuitively) this is down to using an optical Low Contrast filter on the camera.

It is worth noting that if we tried to bring up the blacks on the right hand side so they looked as natural as the ones on the left, we would not be able to, because there is not enough information in the shadows to do so. As soon as we exposed up the shadows, we would start to see macro-blocking and banding, because the footage doesn’t contain enough information in the darks for us to change them significantly; we would have to enhance the leaves using only gamma (mid-colors), which tends to look more processed and artificial.

Another very important thing to see here is how subtle the color correction gets if you are careful not to clip your blacks. The two halves of the image have not been blurred together at the join in any way: there is no alpha transition. So there is an immediate cutoff between the color corrected (left) and non-corrected (right) version. Look at any leaf on the center line: half of it is corrected and half is not. Can you see the join? No, me neither: it just looks like half the leaf is an anemic off-green and half is healthy. Professional color correctors fix the lows first because that is where all the color is: shadows contain lots of hidden color. By increasing vibrancy and saturation in the low colors (assuming you have not clipped your shadows), your color editing becomes much more subtle than if you edit the mids, and your corrections become much more natural and believable. Further, if you need to enhance only one set of colors, editing only the lows makes your changes more realistic, and you often no longer even need to use masking.

Although as a photographer, you are taught to protect the highs from clipping (because that kills your high tones), in video, your shadows are often more important because they contain the low color. It may be hidden, but it is this color information that you will be boosting or diminishing in post to create atmosphere in your footage. This is the main reason that you use a low contrast filter in video: not to protect tone, but to protect color.

Poor shadow encoding is a common failing of default DSLR footage (unless you are shooting RAW with a 5D, or using a Black Magic/Red camera, but then you have a much more involved post production process: see here if you want to know why at least one videographer actually prefers an AVCHD camera to a Red Scarlet… the resulting flame wars in the comments are also quite entertaining to read!).

There is a final advantage in using a Low contrast filter: it slightly reduces the bandwidth requirements. Using Bitrate Viewer on my footage, I find that the bandwidth is about 5% lower with the filter attached (probably because the footage is less contrasty and has smaller color transitions). It’s not much of a difference, but it is perhaps a use-case if you are using a Sony A77 or another camera with the default AVCHD bitrates (24 to 28Mbit/s), especially as it will also eliminate the shadow macro blocking that can occasionally occur before you edit in difficult lighting or busy scenes.

My Low contrast filter also sees some use in stills. Because it spreads out highlights, I see less highlight clipping and a more rounded roll off. (NB: the photographer in me thinks this is a very cool thing about this filter, but the videographer in me says ‘nah, it’s the richness of color in your darks that makes this filter great!’)

Candle without Low Contrast filter (L), with filter (M) and after processing
Left – Raw image imported into Lightroom. Notice the red dot in the middle of the candle flame, signifying highlight clipping. Middle – The same shot with the Low Contrast filter attached. Note that the clipping is much less pronounced and has a more subtle rolloff. Also note that the background is brighter because the filter is pushing some of the light on the background highlight (top left) into the ambient gamma. Right – The middle photo after post processing in Lightroom. Because of the better highlight rolloff, the light from the candle has more presence. If we wanted to recolor the flame (to, say, yellow), we could, because the highlight has not been clipped, and contains lots of data to work with.

It is also useful if you are shooting into the sun. The downside is that you usually have to significantly post-process any stills taken with a Low contrast filter to get your tones back: it’s one to use only in tricky tonal lighting conditions, when you want to achieve a low contrast look (it is actually often used in stills to give a retro ’70s look), or when you know you will be post processing color significantly (usual in video production).

In terms of buying a low contrast filter, there is only really one option: buy Tiffen as they are the only ones who do them well. I use the Low Contrast 3 filter. Some videographers use an Ultra Contrast filter, but I find the Low contrast filter to be better with highlights (it gives a film like rolloff to highlights because it diffuses bright lights slightly). You can see a discussion of the differences between Low Contrast and Ultra Contrast on the Tiffen website.

Update January 2014: Since writing this post, I have found out that Sony Alpha cameras apply DRO (dynamic range optimization) to video. DRO works electronically by lifting shadows. Although this process does not increase detail, it may prevent macro blocking, making it a good alternative to using an expensive low contrast filter.  See this post for further details.

The low contrast filter reduces the final exposure by about 1/3rd of a stop, but that is a side effect of how the filter works rather than intentional. If we actually want to control exposure, we have to use the next filter…

Variable Neutral Density fader

For shooting video that looks like film rather than live TV, a variable ND fader is not optional: you have to use one, because it’s the only way you can control exposure. A variable ND filter is rotated to vary the light passing through the filter, and this (rather than shutter or aperture) controls the exposure of the final shot. Without a variable ND filter, shooting at wide apertures and a shutter of around 1/50s (which you need for the DSLR film look) will leave you with overexposed footage.

All ‘DSLR film’ (footage shot with a DSLR that has a filmic look to it) is shot using a variable ND fader; using one is a given. There is only one real issue to consider: which one to buy.

For the price of one expensive Tiffen variable ND fader, I bought the following:

  • 6 Fotga variable ND faders, one for each lens diameter I use for video on my Sony Alpha A77 and Panasonic GH2 (82, 77, 72, 55, 52 and 49mm)
  • 1 Polaroid ND fader at 37mm for my Panasonic LX7 advanced compact.

The Fotga gives 2 to 9 stops of exposure control, but only the first 2-7 stops are usable (you start to see vignette and uneven filtering after that), which is fine by me for the price: $10-30, depending on size and where you buy them. Sharpness is not really an issue at video resolutions (but it certainly is at stills resolutions: buy Tiffen if you want to use your variable NDs for stills as well as video).
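
For perspective, the stop numbers translate to light transmission like this (a quick sketch of the standard arithmetic):

    import math

    # An ND fader set to N stops passes 1/2**N of the light; the optical
    # density quoted on fixed NDs is stops * log10(2), i.e. ~0.3 per stop.
    def nd_transmission(stops):
        return 2.0 ** -stops

    for stops in (2, 7, 9):
        print(f"{stops} stops -> {nd_transmission(stops):.2%} of light, "
              f"density {stops * math.log10(2):.1f}")

So the usable 2-7 stop range runs from 25% of the light down to well under 1%, which covers everything from overcast skies to shooting towards the sun.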

The most important thing to watch out for with variable ND faders is that you are getting optically flat glass and not normal glass. You can check this by looking through the fader whilst turning it left-right in your hand. If the view through the fader seems to wobble, then you are not getting constant refraction; this occurs because the glass is not optically flat (and you probably need to throw the fader away). You can also check for sharpness by videoing something that generates moiré. Moiré is caused by capturing detail that is at the edge of what your sensor can resolve. If the moiré changes significantly with the fader on, then the fader is bringing down the resolving power of your sensor, and you again probably need to bin it. It’s worth noting that moiré may occur in stills but not in video, depending on how your camera sensor is set up, so check in video footage only.

In terms of actual use, here’s a quick before and after of shooting a scene with and without a variable ND fader. I’m shooting at the sky with an aperture and shutter dictated by my frame rate and requirement to capture ‘filmic’ footage (f3.5, 1/50s, 24fps, 1080p).

Comparison without ND fader (L) vs with (R)

I think the before and after speak for themselves, and we don’t need justification via graphs!

Using Variable ND fader and Low Contrast filter together

Using an ND fader and Low contrast filter together is often necessary where you have a very high contrast scene, typically when you have to shoot into the sun, or when you have deep shadow and highlights in the same scene. If you are shooting outside in full sun, this may occur so often that it is actually the norm rather than an edge case.

Consider the following scene.

Using an ND fader and Low contrast filter together. See text below for information.

The top image shows our situation with no fader. We have a wall in semi-shade with the sun shining over it. If we add a variable ND fader, we can reduce the exposure so that we no longer have blown out sky highlights, but that leaves us with foliage that is too dark (middle), so we have clipping in the shadows (and probably macro-blocking if we try to expose the foliage up in post-production). If we now add the Low contrast filter, some of the ambient light is added to the shadows, and we have brighter foliage (and darks that we can lift without causing macro-blocking) and no blown highlights. Note that we have raised the darks optically during the shot, so we don’t get the digital noise associated with raising shadows in post processing. This is a significant difference.

Although this is all very good, the filters add some issues of their own. In particular, because the low contrast filter lifts the blacks based on the local gamma, its effects will vary as the local gamma varies. Thus, we have halation at the border between our sky and foliage, which may be difficult to get rid of because it may not be linear, and it gets worse with more extreme light-dark borders (such as the sun straight to dark shadow, as we have here). However, the final image is better than either of the other two in that it contains the most information for post processing… although you will have to be fairly experienced to edit it back to normality, so you should not attempt this until you have some experience of color correction.

Finally, consider the tree example we looked at earlier for the Variable ND fader.

Example of how variable ND faders and Low Contrast filters can help when shooting into the sun. See text below for information.

Although we get a correctly exposed sky when we add just the ND fader, we also get black foliage with little information in it, something that will easily macro-block in post-production. If we now add the Low contrast filter, not only do we now lift the blacks (so we reduce the chances of macro-blocking in the shadows), we also make the shot look more like true film.

You will notice in the top frame that I have tried to hide the sun behind the tree because I know it will cause blown highlights. With the Low contrast filter, the sun will now have a much more analog-film like rolloff, and will actually hardly clip at all, so I can stop hiding it.

The bottom frame is therefore much easier to work with in post-production. This is true even though lifting the blacks in the tree outline adds no extra information, because lifting the blacks before video encoding in-camera drastically reduces the chances of macro-blocking at any point after capture.

Update April 2014: see this post for a video example (shown below) of a low contrast filter, ND filter and in-camera dynamic range optimization (Apical Iridix) being used together with 28Mbit/s AVCHD footage.

As you can see from the clip, this can produce some very cinematic footage!

Conclusion

When shooting video with a DSLR, there are two filters you can use to make life easier.

A variable Neutral Density fader is a must-have because it is the standard way you control exposure.

A Low contrast filter is something to consider if you will be heavily post-processing. It does bring problems of its own (potentially unwanted halation), but it gives you flatter video in the low tones and allows you to shoot slightly underexposed without losing shadow data (so it effectively allows you to flatten both lows and highs). This makes your footage much easier to handle in any post-production process.

Notes

  1. This was supposed to be part three of my ‘Sony Alpha Video’ series, and was going to be titled ‘Sony Alpha Video: Part 3: Filters for video’, but I didn’t want to alienate non Sony AVCHD camera users, to whom this post equally applies.
  2. In the main text I say ‘shadows contain lots of hidden color’. Many photographers know this already: if you want to boost a color in Lightroom/Photoshop but keep the edit realistic, you need to darken it slightly rather than increase its saturation. Although we often don’t see dark colors, we do respond to them, making them a very good thing to know about especially when you want to keep your edits non-obvious.

The sunset that never was

Color correction and grading are often used to promote a style, ambiance or ‘look’ rather than reflect reality. You want to meet the viewer’s ideal expectations, not boring reality.

After writing my last blog post, I realised there was no video showing my tips on AVCHD editing being used in anger. This quick post puts that right. You can see the associated video here or by viewing the video below (I recommend you watch it full screen at 1920×1080).

Note that the YouTube version is compressed from the original 28Mbit/s to 8Mbit/s (as are most web videos).

Note also that I didn’t use a Sony Alpha A77 for the footage in this post: I used a Panasonic Lumix LX7, because I was traveling light and the LX7 is my ‘DSLR replacement’ camera of choice. Both cameras use the same video codec and bitrates, so there is not much difference when we come to post production, except that the Sony Alphas have shallower depth of field and are therefore more ‘film like’, whereas the LX7 will produce sharper video that is less ‘filmic’.

Changing the time of day with post production

My partner and I were recently walking on Bingley moor (which is in Yorkshire, England, close to Haworth and Halifax, places associated with Emily Brontë and Wuthering Heights).

The final footage. Color grading and correction via Tiffen Dfx running within Adobe Premiere. Click on the image to open the original frame (1920×1080).

It was about an hour before sunset, and I thought it would be nice to capture the setting sun in video.

The original raw footage. Captured with a Panasonic Lumix LX7 with attached Polaroid variable ND filter.

Alas, we were too early, and the recordings looked nothing like what I wanted.

A couple of weeks later we were walking in the same place in the early morning. I took some footage of the nearby glen (glen – UK English: a deep narrow valley). So now I had some footage of the moor and glen in evening and morning sun, but no sunset footage. Not to worry: I could just add the sunset via post production.

If nothing else, it would make a good example of how AVCHD footage can be edited with large tone/color corrections without coming up against its issues, provided you follow the handy tips from the last post!

The original footage

As per the tips in the previous post, I did the following whilst shooting the original footage:

  • Shot the footage using a fixed shutter and aperture, and varied exposure using a variable Neutral Density Filter. Reason: as a general rule, shoot all footage at a fixed aperture  (typically around f3.5, going wider if you want low depth of field, or narrower if your subject is in the distance), and fixed shutter (typically twice the frame rate of your footage). Control your exposure via a variable ND filter.
  • Set the camera to record desaturated, low contrast and un-sharpened footage. Reason: this gives your footage more latitude for change in post-production.
  • Exposed the footage slightly brighter than I needed, being mindful of burning highlights. Reason: AVCHD tends to break up or produce artifacts if you increase exposure, but never if you decrease exposure.

Workflow

Color post production has two workflow areas

  • Color correction, or correction of faults in the footage. A bucket could be too red, or a sky might need to be more blue. Correction is done on a per clip basis, correcting color/tone issues or adding emphasis/de-emphasis to areas within the scene. Framing and stabilization are also performed on a per clip basis. As an aside, this is why the left side of the footage seems to wobble more in the video: the right side has been stabilized with the inbuilt Adobe Premiere stabilization plugin.
  • Grading, or setting the look of the final film. Grading is applied equally to all clips and sets the final style.

Color correction

Here’s a quick run through of the corrections:

Color correction
  • Top image. As-shot.
  • Second image. Added an emulated Tiffen Glimmerglass filter. This diffuses the sharp water highlights and softens the video a little (I would not have had to do this if I was shooting with my Sony Alpha A77, and you would not have to soften the video if you were using any traditional Canon/Nikon DSLR, as all of them produce soft video). I also added a Fast Color Corrector to fix a few color issues specific to the clip (white and black point, cast removal).
  • Third image. Added a warm red gradient to the foliage top left to bottom right. The shape and coverage of the gradient is shown in the inset (white is gradient, black is transparent).
  • Fourth image. Added a second gradient, this time a yellow one going from bottom to top. Again, the shape and coverage of the gradient is shown in the inset.

Color Grading

For this footage, I used an emulated film stock via Tiffen Dfx. The stock is Agfa Optima. I also added back a little bit of global saturation and sharpness using the default Adobe Premiere tools (Fast color corrector and unsharp mask).

Color Grading
  • Top image. Corrected footage so far, minus the two gradients.
  • Middle image. Grading and global tweaks (Agfa Optima stock emulation plus global color tweaks and sharpness).
  • Bottom Image. Adding the two gradients back for the final output.

Merging color correction and grading

Combining the two color change tasks (grading and color correction) is a bit of a black art, and I do both together. Generally, I start by picking an existing film stock from Tiffen Dfx or Magic Bullet Looks as an adjustment layer. Then I start adding the clips, color correcting each in turn, and switching the grading adjustment layer in and out as I go. Finally, I add a new adjustment layer for sharpness and final global tweaks. I avoid adding noise reduction as it massively increases render time. Instead, I add a grain that hides noise as part of the grading.

Reality vs. Message

Color correction and grading are often used to promote a style, ambiance or ‘look’ rather than reflect reality. You want to meet the viewer’s ideal expectations, not boring reality.

Color corrected/graded scene

The video includes this frame. The leaves in the water are red to signify the time of year (autumn/fall).

Original scene (note also that the original scene is lighter than the final scene, as per my AVCHD shooting tips)

Real leaves in water lose their color quickly, becoming much more muddy in appearance. I enhanced the muddy leaves towards earth-reds because ‘muddy’ did not fit with my message, even though rotting grey leaves are closer to reality.

Timeline

Here’s the timeline for the project.

Project timeline (click on image to view full size version)

I have my color adjustment and grading in as separate adjustment layers (V2/V3). The first half of the timeline is more or less identical to the second half, except that the second half has the unedited versions of the clips on layer V4. These clips have a Crop effect (Video Effects > Transform > Crop) on them with the Right value set to 50%. This is how I get the edited/unedited footage split-screens at the end of the video.

When adding backing sound, the music file is never the same length as the video, so to make the two match I often use this simple trick to edit the music so it is shorter:

  • Put the music clip on the timeline so that the start of the music lines up with the start of the footage, and
  • On a different sound layer, put another version of the same music on the layer below, such that this time the end of the music lines up with the end of the video.
  • Make the two sound clips overlap in the middle, and where they overlap, zoom into the waveforms and find and match the percussive sounds (generally the drums).
  • Fade between the two sounds on the overlap.
Matching the sound sections

In the timeline section above, I have matched (lined up) the drum sounds on the A1 and A2 music clips (both of which are different sections of the same music file), then faded from layer A1 to A2 by tweening the volume level. This will produce a smooth splice between the two music sections. If space allows, you should of course also match on the end of the bar (or ‘on the beat repeat’). For my timeline, you can see (via the previous ‘Project timeline’ screenshot) that I have spliced between four sections of the same file.
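
The same splice can be expressed numerically. A minimal sketch of the linear crossfade (mono float samples; the sample data here is synthetic, and lining up the drum hits beforehand is still your job):

    import numpy as np

    def crossfade(head, tail, overlap):
        """Fade from the end of head into the start of tail over 'overlap' samples."""
        fade = np.linspace(1.0, 0.0, overlap)
        mixed = head[-overlap:] * fade + tail[:overlap] * (1.0 - fade)
        return np.concatenate([head[:-overlap], mixed, tail[overlap:]])

    sr = 48000
    head = np.random.randn(sr * 60) * 0.1    # stand-ins for the two sections
    tail = np.random.randn(sr * 90) * 0.1
    spliced = crossfade(head, tail, overlap=sr * 2)   # two second fade

This is essentially what the volume tween is doing: a sample-by-sample weighted mix across the overlap (editors often use a constant-power curve rather than a linear one, but the idea is the same).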

Tools

During color correction I kept an eye on the YC waveform scope, which is available in Premiere and most video editing applications.

Footage vs YC waveform

The YC waveform shows both Luma (Y, or brightness) and Chroma (C, or color). Luma is cyan and Chroma is dark blue on the scope.

The x axis of the scope is the x-axis of the footage, so points on the y-axis are the YC values along the width of the footage itself. Sounds a bit complicated, but if you use the waveform on your own footage it becomes immediately obvious what the waveform represents.

For the broadcast standard I am using (European PAL), true black is 0.3V on the scale, and true white is 1.0V (NTSC is very similar). The original footage is shown on the left side of the image, and the corresponding YC waveform is shown below it. The waveform shows that highlights in the sky area are clipping (we see pixels above 1.0V), and the darkest areas are not true black (the waveform doesn’t get down to 0.3V). The right side of the image shows the final footage: we now have no clipping (in either brightness or color saturation), and our blacks are closer to true black.
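
The mapping between code values and those voltages is simple. A sketch, assuming full range 0-255 luma (broadcast range 16-235 footage would need its own offsets):

    # PAL-style scope scale: 0.3V = black, 1.0V = white (700mV video swing).
    def luma_to_volts(code, max_code=255):
        return 0.3 + 0.7 * (code / max_code)

    for code in (0, 128, 235, 255):
        print(f"luma {code:>3} -> {luma_to_volts(code):.2f}V")

So a mid grey of 128 sits at about 0.65V, and anything the scope shows above 1.0V is luma the broadcast chain will clip.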

Keeping an eye on the YC waveform is something I always do when editing color. You may think your eye is good enough, but your vision gets tired, or becomes so used to a color that it no longer recognizes casts; the scope never tires and never lies! Another useful scope, for skintones, is the vectorscope. Something for another post…

Conclusion

This post shows the workflow I used to correct a small number of clips such that they could be added together into a single scene. The final movie shows a typical English autumn sunset (or at least, one where you can see the sun!) yet none of the clips were actually taken at this time of day nor under the lighting conditions of the final scene.

By manipulating the color of our footage via color correction and grading, we achieved our desired output despite the constraints of reality on the day!

Finally, by following a few additional steps and rules of thumb whilst shooting and editing the AVCHD footage, we have avoided coming up against its limitations. In fact, the only place in the video where you may see any artifacts is the one place where I did not follow my own advice: at about 0:30 the footage has its exposure increased slightly, and shows small artifacts in the shadows.

You can see all previous video related posts from this blog here.

Notes

  1. The music in the video is Spc-Eco, Telling You. Spotify Link.
  2. The YC graph is so much more useful than the histogram seen on most stills cameras that I often wonder why digital cameras don’t have the YC waveform instead! For example, the YC waveform not only tells you whether your image has clipped pixels, but unlike the histogram, it tells you where along the width of the image those pixels are! You can still ‘shoot to the right’ using the YC (and it actually makes more sense), since brightness is Luma height. The YC also separates out brightness and color information, so you can see at a glance the tonality and color information within your photograph in a single visual. How’s that for useful!