Using low contrast filters for video

In a previous post, I discussed using a Tiffen Low Contrast filter when filming with an AVCHD enabled camera. I didn’t illustrate the point with any of my test footage.

Here it is.

Low contrast filters and video encoders

To recap, we use low contrast filters in AVCHD DSLR video because AVCHD compresses footage using a perceptual filter: what your eyes can’t perceive gets the chop in the quest for smaller file sizes. Our eyes cannot resolve much detail in deep shadow, so AVCHD ignores (filters out and discards) most of the shadow data. AVCHD also knows we can’t see the difference between small variations in color, so it removes such slight differences and replaces them with a single color.

That’s fine if you will not be editing the footage (because your eye will never see the difference), but if you do any post processing that involves exposing up the footage, the missing information shows up as macro blocking or color banding. To fix this, we can do one of three things:

  1. Use a low contrast filter. This works by taking ambient light and adding it to shadows, thus lifting the shadows up towards mid-tones and tricking AVCHD into leaving them alone. The low contrast filter thus gives us more information in shadow not because it adds detail itself, but because it forces the AVCHD encoder to keep information it would otherwise discard.
  2. Use Apical Iridix. This goes under different names (e.g. Dynamic Range Optimisation or DRO for Sony and i.Dynamic for Panasonic), but is available on most DSLRs and advanced compacts. It is a digital version of a low contrast filter (it’s actually a real-time tone mapping algorithm). It works by lightening blacks and preserving highlights. Although it again doesn’t add any new information of itself, Iridix is applied before the AVCHD encoder, so it can again force the encoder to leave shadow detail intact.
  3. Use both a low contrast filter and Iridix together.

The video uses the third option.
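
If you like to think in numbers, the ‘lifting’ idea is easy to sketch. The Python snippet below is only my own illustration (not anything the camera or encoder actually runs): it applies a simple lift curve that pushes the deepest tones up towards the mid-tones while leaving everything above mid-grey alone, which is the shape of change a low contrast filter or Iridix produces before the encoder ever sees the frame.

```python
import numpy as np

def lift_shadows(y, lift=0.12):
    """Apply a simple shadow lift to luma values in [0, 1].

    The lift is strongest at black and fades to nothing by mid-grey,
    roughly mimicking what a low contrast filter (or Iridix) does:
    dark tones are pushed towards the mid-tones, highlights are
    left almost untouched.
    """
    y = np.clip(y, 0.0, 1.0)
    # weight: 1.0 at black, falling to 0.0 by mid-grey
    weight = np.clip(1.0 - y / 0.5, 0.0, 1.0)
    return y + lift * weight * (1.0 - y)

# Deep shadow values end up well clear of black, so the encoder can no
# longer treat them as 'invisible' detail and quietly throw them away.
shadows = np.array([0.00, 0.02, 0.05, 0.10, 0.50, 0.90])
print(lift_shadows(shadows).round(3))
```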

Deconstructing the video

The video consists of three short scenes. They were taken with a Panasonic Lumix LX7, 28Mbit/s AVCHD at 1080p and 50fps (exported from Premiere as 25fps with frame blending), manual-exposure, custom photo style (-1, -1, -1, 0). It was shot hand held and stabilized in post (Adobe Premiere warp stabilizer). The important settings were

  • ISO set to ISO 80, and
  • i.Dynamic set to HIGH.

I chose the lowest ISO so that I could set i.Dynamic high without it causing noise when it lifts the shadows.

The camera has a 37mm filter thread, to which I attached a 37-52mm step-up ring, and on that a Tiffen Low Contrast filter and a Variable Neutral Density filter. The reason I used two 52mm filters (i.e. bigger than the lens diameter) rather than two 37mm filters is that stacking filters can cause vignetting unless you step up as I have done.

Here are the three scenes. The left side is as-shot, the right side is after post processing. Click on the images to see larger versions.

Scene 1

Notice in this scene that the low contrast filter is keeping the blacks lifted. This prevents macro-blocking. Also note that the highlights have a film-like roll off. Again, this is caused by the low contrast filter. The Variable ND filter is also working hard in this scene: the little white disk in the top right is the sun, and it and the sky around it were too bright to look at!

Scene 2

Scene 2 is shot directly at the sun, and you would typically end up with a white sky and properly exposed rocks/tree, or a properly exposed sky and black rocks/tree. The low contrast filter and Iridix (i.Dynamic) give us enough properly exposed sky and shadow to enable us to fix it in post. Nevertheless, we are at the limits of what 28Mbit/s AVCHD can give us. The sky is beginning to macro block, and the branches are showing moiré. I shot it all this way to give us an extreme example, but would more normally shoot with the sun oblique to the camera rather than straight at it.

Scene 3

Scene 3 is a more typical shot. We are in full sun and there is a lot of shadow. The low contrast filter allows us to see detail in the far rock (top right) even though it is in full shadow. It also stops our blacks from clipping, which is important because near-black holds a lot of hidden color. For example, the large shift from straw to lush grass was not done by increasing green saturation in the mid-tones, but in the shadows. If you want to make large color changes, make them in the shadows, because making the same changes in the mid-tones looks far less natural (too vivid). Of course, if we didn’t use a low contrast filter to protect our blacks (and therefore the color they hold) from clipping, we would not have the option to raise shadow colors!

Conclusion

Shooting flat is something you should do if you will be post-processing your video footage. Many cameras do not allow you to shoot flat enough, and to get around this you can use either a Tiffen Low Contrast filter or the camera’s inbuilt Apical Iridix feature. To maximise the effect, you can use both, as illustrated in this example.

The main advantages of using a low contrast filter are:

  • Protects blacks from clipping, thus preventing shadows from macro-blocking and preserving dark color. The latter is important if you are going to make substantial color correction in post because raising shadow color usually results in much more natural edits.
  • Better highlight roll-off. The effect looks more like film than digital (digital sensors have a hard cut-off rather than a roll-off).
  • Lower contrast that looks like film. Although many people add lots of contrast (i.e. dark, blue blacks) to their footage, true film actually has very few true blacks. The low contrast filter gives this more realistic look.
  • Removes digital sharpness and ‘baked-in’ color. Many cameras cannot shoot as flat as we would like, and produce footage that is obviously digital because of its sharpness (especially true of the Panasonic GHx cameras). Adding a low contrast filter helps mitigate these issues.

The main disadvantages of using a low contrast filter/Apical Iridix are:

  • The filter loses you about 1/3 stop of light.
  • You usually have to use the low contrast filter along with a variable ND filter (which you need to control exposure). The two filters bring optical defects beyond their intended function (possible vignetting from stacking filters, some loss of sharpness). However, remember that you are shooting at a much lower resolution for video, so the sharpness effects will be much smaller than for stills. You can eliminate vignetting by using larger filters and a step-up ring.
  • Apical Iridix will increase shadow noise. Use it at maximum only with very low (typically base) ISO.

Notes

None.

DSLR Video: Lens filters for video

DSLRs are stills devices that happen to have video capability. Most DSLRs therefore don’t have any special features that make shooting video as easy as shooting stills. There are workarounds, but your camera manual will not tell you what they are because, like your DSLR itself, the manual is mostly concerned with stills shooting.

This blog post explains how to get around the issues with the minimum additional kit: a variable ND fader (required) and a low contrast filter (optional but recommended if you will be performing heavy post processing).

The problem

When shooting stills, you have a lot of control over how you set exposure. You can vary shutter, aperture or ISO. If you were to take a series of photographs of (say) a bride and groom leaving a church, you or your camera would maintain correct exposure by varying these three values as the couple moved from the darker interior and out to the bright sunlight. None of this will work for video:

  • You cannot easily change aperture or ISO midway through a take without it being obvious (i.e. it will look awful), so you are stuck with the values at the start of the take even though the lighting conditions may change midway through.
  • You typically set the aperture fairly low in video (around f2-f4, with f3.5 being a good default value), so your ability to control exposure with it is limited. As in stills, aperture is more of a stylistic control (it sets depth of field and sharpness) than an exposure control in any case.
  • Although some cameras do change shutter to maintain exposure when shooting video on auto, this is never done in professional production. Too high a shutter causes less smooth motion and strobing. Instead, shutter is fixed to the frame rate. As a rule of thumb, you set the shutter to twice the frame rate (the ‘180-degree rule’), as shown in the sketch below. So if you are shooting 24fps video, you set the shutter to 1/50s (the nearest common speed to 1/48s). If you want to shoot something moving fast, you do not increase shutter as you do in stills. Instead, you have to increase fps, and then increase the shutter to match.
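
For reference, here is that shutter rule of thumb as a tiny sketch; the frame rates are just examples.

```python
def film_look_shutter_denominator(fps):
    """'180-degree rule': shutter duration should be roughly 1/(2*fps) seconds."""
    return 2 * fps

for fps in (24, 25, 30, 50, 60):
    print(f"{fps}fps -> shutter of 1/{film_look_shutter_denominator(fps)}s "
          f"(or the nearest speed the camera actually offers)")
```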

So here’s the problem: If you are coming to video with a stills mentality there is no way to control exposure!

In video you have to control exposure via a variable Neutral Density (ND) fader.

So, the variable ND fader keeps you happy for a while. Your video starts to look smoother and less like it was taken with an iPhone. But then you realise that a lot of the cool stuff in professional films happens in post-production, and you try your hand at that.

An example of macro blocking. The splotches on the black container and some of the mushiness in the background foliage are all signs of macro blocking. These would both become much more prominent if you sharpened and/or brightened the footage.

But weird stuff starts happening. If you try to change exposure too much you start to see either blockiness in what used to be shadows (‘macro blocking’) or color banding if you try to give your scene more punch. The above image is a good example of poorly shot footage: we have allowed the blacks to macro-block because of the underexposed dark tones.

As a stills shooter, you will know that if you want to do any significant post-processing, you have to shoot RAW. The option to shoot RAW video is generally not available on current DSLRs; instead you use a compressed format rather like JPEG, in that it looks good unless you try to edit it too much, whereupon it will break up and start to show quality issues (banding, compression artifacts, crushed shadows and blown highlights).

In video you have to shoot flat if you want to post process your footage.

‘Flat’ means shooting such that your Luma (brightness or ‘Y’) and Chroma (color or ‘C’) values are near the center of the available range, so you end up with de-saturated, low contrast footage. All your YC values are well away from the extreme values that would cause clipping, so you can push the values much further in post-production (recoloring, changing lighting digitally, changing exposure digitally).

Shooting flat seems like an easy step: you just set an in-camera video style with low contrast, low saturation and low sharpness values. Many DSLRs don’t allow you to get particularly flat footage using the available digital controls though (and especially not the Panasonic GH2/GH3, which are otherwise excellent video DSLRs), so you may have to do it optically via a Low Contrast filter.
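
However you get there, it helps to be able to check how flat your footage actually is. The sketch below is my own quick check, not a standard tool: export a frame from your editor as a still (the file name is a placeholder) and look at where the luma percentiles sit, using Rec.709 luma weights.

```python
import numpy as np
from PIL import Image

frame = np.asarray(Image.open("frame.png").convert("RGB")) / 255.0

# Rec.709 luma from the RGB frame
luma = 0.2126 * frame[..., 0] + 0.7152 * frame[..., 1] + 0.0722 * frame[..., 2]

lo, hi = np.percentile(luma, [1, 99])
print(f"1st percentile luma: {lo:.2f}   99th percentile luma: {hi:.2f}")

# 'Flat' footage keeps almost everything well away from 0.0 and 1.0,
# leaving headroom for pushing exposure and color around in post.
if lo > 0.1 and hi < 0.9:
    print("Looks flat: plenty of room to grade.")
else:
    print("Close to clipping at one end: less latitude in post.")
```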

It is worth noting that if you want to create footage that you can post process heavily, you may use both a variable ND fader and a low contrast filter together. This raises light loss and optical aberration issues caused by the additional glass in your light path. However, most color-editing video software assumes you are using flat footage, and if you are not, you may have bigger problems.

For example, most plugins and applications come with lots of preset ‘film looks’, which sounds great until you try them with non-flattened footage: the preset result becomes too extreme to be usable, and if you mix them down, the effect becomes negligible. Not good!

In the next section I will show how both the variable ND fader and Low Contrast filter can be used to create well exposed and flatter footage. I am using a Panasonic GH2, but also retested using a Sony Alpha A77 to confirm the workflow on both cameras.

Setup

I will be shooting the same footage with and without the two filters. To make sure the footage is identical, I am moving the camera on a motorised slider (a Digislider).

Test setup

I am shooting foliage because panning along foliage is actually a very good test of video: it generates massive amounts of data, as well as producing lots of varying highlight/dark areas. That, and the fact that the garden is where most enthusiast photographers test out most things (hence howgreenisyourgarden).

I am using a Panasonic GH2 with the ‘Flowmotion’ hack that allows me to shoot high bitrate video (100Mbit/s AVCHD). I set the GH2 to shoot 1080p 24fps video, which is as close as we can get to motion-picture-like film on the camera. To start the flattening process, I set the GH2 picture style (which the camera calls a ‘Film style’ when applied to video) as contrast -2, sharpness -2, saturation -2 and noise reduction 0. On a Sony Alpha, you would do the same, but set the sharpness to as low as it goes (-3) and forget the noise reduction (that option is not available as in their wisdom, Sony have simply limited the maximum video ISO!).

Low Contrast filter

A Low contrast filter is optional. You will not see any reason to use it when you start with video, but the need for it becomes more apparent if you get heavily into post-processing.

Using my setup, I took two identical pans, moving left to right along the hedge.

Top – a frame from the ‘without filter’ video. Bottom – a frame from the ‘with Low contrast filter’ video

The footage without the contrast filter (top) initially looks better, but it is more susceptible to issues if you try to change it. This becomes more apparent when we look at the underlying data as graphs.

The YC graph (as seen in most video editing software) has the image width as its x-axis and plots luma (cyan or light blue-green) and chroma (blue) on the y-axis. Think of it as a much more useful version of the standard camera histogram. It’s more useful because (a) it shows brightness and color separately but at the same time in relation to each other, and (b) it is directly related to the image width, so you can tell where along the width of the image you have shadow clipping or highlight burn – with a histogram, you only know you have clipping/burn, not where it is in the image.
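
For the curious, the luma half of a YC graph is easy enough to approximate yourself. The sketch below is only a rough stand-in for what the editing software draws (assumed Rec.709 weights, placeholder file name): for every column of the frame it plots the spread of luma values in that column. Split the same idea into separate R, G and B plots and you have the RGB parade we meet later.

```python
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

frame = np.asarray(Image.open("frame.png").convert("RGB")) / 255.0
luma = 0.2126 * frame[..., 0] + 0.7152 * frame[..., 1] + 0.0722 * frame[..., 2]

# downsample a little so the scatter stays responsive on a 1080p frame
luma = luma[::4, ::2]
h, w = luma.shape

# one dot per pixel: x = column in the frame, y = that pixel's luma
xs = np.tile(np.arange(w), h)
ys = luma.ravel()

plt.figure(figsize=(8, 3))
plt.scatter(xs, ys, s=0.1, alpha=0.05, color="cyan")
plt.ylim(0, 1)
plt.xlabel("position across frame")
plt.ylabel("luma")
plt.title("Rough luma waveform")
plt.show()
```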

YC graph showing effects of Low Contrast filter

Left – ‘without filter’ footage. Right – ‘with low contrast filter’ footage

As you can see, the low contrast filter lifts the low part of the data. This corresponds to brightening shadows. It is important to realise that this is not giving you more information in the shadows. The filter is merely taking diffused local light and adding it to the shadows to give them a bit of lift. You will see many people on the internet telling you this means a low contrast filter is useless. This is not the case because:

  • One of the ways most video codecs optimise file size is by removing data in areas where we cannot see detail anyway. This can occur in several places, but one place it always occurs is in near-black shadows. By lifting (brightening) such shadows optically before the camera sensor sees them, we force the codec to encode more information in our shadows, giving us a more even amount of data across the lower tonal range.
  • We often have to increase exposure in post-production. If we did this without the shadow lift that low contrast filters give us, the shadows would have been encoded with very little data assigned to them. When we expose them up, we see this lack of data as macro-blocking (shadows become blocky and banded). This is especially true of AVCHD, and if your camera creates AVCHD, then you need to be particularly careful when editing shadow areas. Further, on standard 24-28Mbit/s (unhacked) cameras, some busy scenes may even show macro blocking in the shadows before you edit. Without a Low Contrast filter in place, such footage may have to be retaken.
  • With a contrast filter, shadows are often brighter than we need so we typically have to darken them. Doing this never causes macro-blocking, and because macro-blocking almost always occurs in shadows, it is eliminated in our edits.

There is a way of lifting shadows without using the filter – simply expose your footage a little to the right (about 1/3rd of a stop), and then underexpose by the same amount in post. That is fine, but it makes it more likely you will burn highlights. If you use the low contrast filter you have a better time, because you can now underexpose to protect highlights, knowing that you will not clip the shadows because the darks are lifted by the low contrast filter, so you end up with flatter lows and flatter highs.

There is one final issue to consider with Low contrast filters: low colors.

The RGB parade is another common video-editing graph, and it shows the color values (0-100%) across the frame for the three color components.

RGB Parade showing effects of Low Contrast filter

Left – ‘without filter’ footage. Right – ‘with low contrast filter’ footage

Notice that when we use the low contrast filter, the low color values are also lifted upwards towards the average. This means we are less likely to shoot footage with clipped color in the lowlights, and thus we can push our dark colors further in post-production. This is especially important when we look at skintones: we usually want to brighten skin overall (to make it the focus of attention) and that means exposing up skin that is in shadow. We are particularly sensitive to odd looking skin, so it is important that we don’t clip any skin color because we will almost certainly need to expose it up and clipped skin will then look odd.

So, the low contrast filter protects our shadows by optically adding some mid tone light (‘gamma’) to them. The filter also raises our darker color values towards the average. Both these changes reduce the chance of clipping and color change in post-production.

In the image below, the left side of the frame was shot with the low contrast filter, and then post-processed (using RGB curves and the Colorista 2 plugin, both via Adobe Premiere CC). The right half of the image is as-shot without the filter or any processing.

Edited footage shot with Low contrast filter (left side of image) vs normal footage (right side)

Click the image to see a bigger version.

Notice that the left half of the image seems to have the healthiest leaves, and is therefore the nicest to look at. This of course was done via color correction – the low greens were significantly lifted upwards (notice also that lifting the dark colors gives a much more subtle and realistic looking edit than just saturating the mid-greens). Further, notice that the darks on the left side never drop down to true black – they are always slightly off-black and blend in better with the mid-tones, whereas the dark areas on the right hand side look like true black, and consequently contain no detail (and very little information). Although the left side is actually the more processed of the two, it looks the more natural because of the more believable color variation and contrast between the lows and mids, and (somewhat counter-intuitively) this is down to using an optical Low Contrast filter on the camera.

It is worth noting that if we tried to bring up the blacks in the right hand side so they looked as natural as the ones on the left, we would not be able to, because there is not enough information in the shadows to do this. As soon as we exposed up the shadows, we would start to see macro-blocking and banding because the footage doesn’t contain enough information in the darks for us to change them significantly, so we would have to enhance the leaves using only gamma (mid-colors), which tends to look more processed and artificial.

Another very important thing to see here is how subtle the color correction gets if you are careful not to clip your blacks. The two halves of the image have not been blurred together at the join in any way: there is no alpha transition, so there is an immediate cutoff between the color corrected (left) and non-corrected (right) versions. Look at any leaf on the center line: half of it is corrected and half is not. Can you see the join? No, me neither: it just looks like half the leaf is an anemic off-green and half is healthy. The reason professional color correctors fix the lows first is because that is where all the color is: shadows contain lots of hidden color. By increasing vibrancy and saturation in the low colors (assuming you have not clipped your shadows), your color editing becomes much more subtle than if you edit the mids, and your corrections become much more natural and believable. Further, if you need to enhance only one set of colors, editing only the lows makes your changes more realistic, and you often no longer even need to use masking.

Although as a photographer, you are taught to protect the highs from clipping (because that kills your high tones), in video, your shadows are often more important because they contain the low color. It may be hidden, but it is this color information that you will be boosting or diminishing in post to create atmosphere in your footage. This is the main reason that you use a low contrast filter in video: not to protect tone, but to protect color.

Poor shadow encoding is a common failing of virtually all default DSLR footage (unless you are shooting RAW with a 5D, or using a BlackMagic/Red, but then you have a much more involved post production process: see here if you want to see why at least one videographer actually prefers an AVCHD camera to a Red Scarlet… the resulting flame wars in the comments are also quite entertaining to read!).

There is a final advantage in using a Low Contrast filter: it reduces the bandwidth requirements slightly. Using Bitrate Viewer on my footage, I find that the bandwidth is about 5% lower with the filter attached (probably because the footage is less contrasty and has smaller color transitions). It’s not much of a difference, but it is perhaps a use-case if you are using a Sony A77 or another camera with the default AVCHD bitrates (24 to 28Mbit/s), especially since it also helps eliminate the shadow macro blocking that can occasionally occur before you edit when shooting difficult lighting or busy scenes at those default bitrates.
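
If you don’t have Bitrate Viewer to hand, ffprobe (part of FFmpeg) reports a comparable average bitrate figure. The sketch below is simply how I would wrap that call in Python; the clip names are placeholders.

```python
import json
import subprocess

def average_bitrate_mbps(path):
    """Ask ffprobe for the container's average bit rate and return it in Mbit/s."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=bit_rate",
         "-of", "json", path],
        capture_output=True, text=True, check=True,
    )
    return int(json.loads(out.stdout)["format"]["bit_rate"]) / 1_000_000

for clip in ("with_filter.MTS", "without_filter.MTS"):
    print(clip, f"{average_bitrate_mbps(clip):.1f} Mbit/s")
```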

My Low Contrast filter also sees some use in stills. Because it spreads out highlights, I see less highlight clipping and a more rounded roll off. (NB – the photographer in me thinks this is a very cool thing about this filter, but the videographer in me says ‘nah, it’s the richness of color in your darks that makes this filter great!’)

candle without Low Contrast filter (L), with filter (M) and after processing

Left – Raw image imported into Lightroom. Notice the red dot in the middle of the candle flame, signifying highlight clipping. Middle – The same shot with the Low Contrast filter attached. Note that the clipping is much less pronounced and has a more subtle rolloff. Also note that the background is brighter because the filter is pushing some of the light on the background highlight (top left) into the ambient gamma. Right – The middle photo after post processing in Lightroom. Because of the better highlight rolloff, the light from the candle has more presence. If we wanted to recolor the flame (to, say, yellow), we could, because the highlight has not been clipped, and contains lots of data to work with.

It is also useful if you are shooting into the sun. The downside is that you usually have to significantly post-process any stills taken with a Low contrast filter to get your tones back: it’s one to use only in tricky tonal lighting conditions, when you want to achieve a low contrast look (it is actually used often in stills to give a retro 70’s look), or when you know you will be post processing color significantly (usual in video production).

In terms of buying a low contrast filter, there is only really one option: buy Tiffen as they are the only ones who do them well. I use the Low Contrast 3 filter. Some videographers use an Ultra Contrast filter, but I find the Low contrast filter to be better with highlights (it gives a film like rolloff to highlights because it diffuses bright lights slightly). You can see a discussion of the differences between Low Contrast and Ultra Contrast on the Tiffen website.

Update January 2014: Since writing this post, I have found out that Sony Alpha cameras apply DRO (dynamic range optimization) to video. DRO works electronically by lifting shadows. Although this process does not increase detail, it may prevent macro blocking, making it a good alternative to using an expensive low contrast filter.  See this post for further details.

The low contrast filter reduces the final exposure by about 1/3rd of a stop, but that is a side effect of how the filter works rather than its purpose. If we actually want to control exposure, we have to use the next filter…

Variable Neutral Density fader

For shooting video that looks like film rather than live TV, a variable ND fader is not optional: you have to use one, because it’s the only way you can control exposure. A variable ND filter is rotated to vary the light passing through it, and this (rather than shutter or aperture) controls the exposure of the final shot. Without a variable ND filter, shooting at wide apertures and a shutter of around 1/50s (which you need for the DSLR film look) will leave you with overexposed footage.
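
To get a feel for how much ND you actually need, here is a back-of-the-envelope sketch using the Sunny 16 rule as the baseline (the f3.5 / 1/50s / ISO 100 numbers are just example settings in the spirit of this post, not a recommendation for every camera):

```python
import math

def stops_of_nd_needed(aperture, shutter_s, iso):
    """Very rough Sunny 16 estimate of the ND needed in full sun.

    Sunny 16: at a given ISO, f/16 at 1/ISO seconds is correctly exposed.
    Anything brighter than that baseline has to be absorbed by the ND fader,
    because aperture and shutter are locked for the film look.
    """
    aperture_stops = 2 * math.log2(16 / aperture)   # wider aperture lets in more light
    shutter_stops = math.log2(shutter_s * iso)      # slower than 1/ISO lets in more light
    return aperture_stops + shutter_stops

# 'Filmic' settings: f3.5, 1/50s, ISO 100 -> roughly 5-6 stops of ND in full sun
print(f"{stops_of_nd_needed(3.5, 1/50, 100):.1f} stops of ND needed")
```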

All ‘DSLR film’ (footage shot with a DSLR that has a filmic look to it) is shot using a variable ND fader, and using one is a given. There is only one real issue to consider: which one to buy, and there are a few good video reviews online to help with that.

For the price of 1 expensive Tiffen variable ND fader, I bought the following:

  • 6 Fotga variable ND faders, one for each lens diameter I use for video on my Sony Alpha A77 and Panasonic GH2 (82, 77, 72, 55, 52 and 49mm)
  • 1 Polaroid ND fader at 37mm for my Panasonic LX7 advanced compact.

The Fotga gives 2 to 9 stops of exposure control, but only the 2-7 stop range is usable (you start to see vignetting and uneven filtering beyond that), which is fine by me for the price: $10-30, depending on size and where you buy them. Sharpness is not really an issue at video resolutions (but it certainly is at stills resolutions: buy Tiffen if you want to use your Variable NDs for stills as well as video).

The most important thing to watch out for with Variable ND faders is that you are getting optically flat glass and not normal glass. You can check this by looking through the fader whilst turning it left-right in your hand. If the view through the fader seems to wobble, the refraction is not constant, which means you do not have optically flat glass (and probably need to throw the fader away). You can also check for sharpness by videoing something that generates moiré. Moiré is caused by capturing something that is on the edge of what your sensor can resolve. If the moiré changes significantly with the fader on, then the resolving power has changed significantly (i.e. the fader is bringing down the resolving power of your sensor), and you again probably need to bin the fader. It’s worth noting that moiré may occur in stills but not in video, depending on how your camera sensor is set up, so check in video footage only.

In terms of actual use, here’s a quick before and after of shooting a scene with and without a variable ND fader. I’m shooting at the sky with an aperture and shutter dictated by my frame rate and requirement to capture ‘filmic’ footage (f3.5, 1/50s, 24fps, 1080p).

Comparison without ND fader (L) vs with (R)

I think the before and after speak for themselves, and we don’t need justification via graphs!

Using Variable ND fader and Low Contrast filter together

Using an ND fader and Low contrast filter together is often necessary where you have a very high contrast scene, typically when you have to shoot into the sun, or when you have deep shadow and highlights in the same scene. If you are shooting outside in full sun, this may occur so often that it is actually the norm rather than an edge case.

Consider the following scene.

Using an ND fader and Low contrast filter together. See text below for information.

The top image shows our situation with no fader. We have a wall in semi-shade with the sun shining over it. If we add a variable ND fader, we can reduce the exposure so that we have no blown out sky highlights, but that leaves us with foliage that is too dark (middle), so we have clipping in the shadows (and probably macro-blocking if we try to expose the foliage up in post-production). If we now add the Low Contrast filter, some of the ambient light is added to the shadows, and we have brighter foliage (and darks that we can lift without causing macro-blocking) and no blown highlights. Note that we have raised the darks optically during the shot, so we don’t get the digital noise associated with raising shadows in post processing. This is a significant difference.

Although this is all very good, the filters add some issues of their own. In particular, because the low contrast filter lifts the blacks based on the local gamma, its effect will vary as the local gamma varies. Thus, we have halation at the border between our sky and foliage, which may be difficult to get rid of because it may not be linear, and it gets worse with more extreme light-dark borders (such as sun straight to dark shadow, as we have here). However, the final image is better than either of the other two in that it contains the most information for post processing… although you will have to be fairly experienced to edit it back to normality, so you should not attempt this until you have some experience of color correction.

Finally, consider the tree example we looked at earlier for the Variable ND fader.

Example of how variable ND faders and Low Contrast filters can help when shooting into the sun. See text below for information.

Although we get a correctly exposed sky when we add just the ND fader, we also get black foliage with little information in it, something that will easily macro-block in post-production. If we now add the Low contrast filter, not only do we now lift the blacks (so we reduce the chances of macro-blocking in the shadows), we also make the shot look more like true film.

You will notice in the top frame that I have tried to hide the sun behind the tree because I know it will cause blown highlights. With the Low contrast filter, the sun will now have a much more analog-film like rolloff, and will actually hardly clip at all, so I can stop hiding it.

The bottom frame is therefore much easier to work with in post-production. This is true even though lifting the blacks in the tree outline adds no extra information, because lifting the blacks before video encoding in-camera drastically reduces the chances of macro-blocking at any point after capture.

Update April 2014: see this post for a video example (shown below) of a low contrast filter, ND filter and in-camera dynamic range optimization (Apical Iridix) being used together with 28Mbit/s AVCHD footage.

As you can see from the clip, this can produce some very cinematic footage!

Conclusion

When shooting video with a DSLR, there are two filters you can use to make life easier.

A variable Neutral Density fader is a must-have because it is the standard way you control exposure.

A Low contrast filter is something to consider if you will be heavily post-processing. It does bring problems of its own (potentially unwanted halation), but it does give you flatter video in the low tones and allows you to shoot slightly underexposed without losing shadow data (so it effectively allows you to flatten both lows and highs). This makes your footage much easier to handle in any post-production process.

Notes

  1. This was supposed to be part three of my ‘Sony Alpha Video’ series, and was going to be titled ‘Sony Alpha Video: Part 3: Filters for video’, but I didn’t want to alienate non-Sony AVCHD camera users, to whom this post equally applies.
  2. In the main text I say ‘shadows contain lots of hidden color’. Many photographers know this already: if you want to boost a color in Lightroom/Photoshop but keep the edit realistic, you need to darken it slightly rather than increase its saturation. Although we often don’t see dark colors, we do respond to them, making them a very good thing to know about especially when you want to keep your edits non-obvious.

The sunset that never was

After writing my last blog post, I realised there was no video showing my tips on AVCHD editing being used in anger. This quick post puts that right. You can see the associated video here or by viewing the video below (I recommend you watch it full screen at 1920×1080).

Note that the YouTube version is compressed from the original 28Mbit/s to 8Mbit/s, as are most web videos.

Note also that I didn’t use a Sony Alpha A77 for the footage in this post: I used a Panasonic Lumix LX7 because I was traveling light, and the LX7 is my ‘DSLR replacement’ camera of choice. Both cameras use the same video codec and bitrates, so there is not much difference when we come to post production, except that the Sony Alphas give shallower depth of field and are therefore more ‘film like’, whereas the LX7 will produce sharper video that is less ‘filmic’.

Changing the time of day with post production

My partner and I were recently walking on Bingley moor (which is in Yorkshire, England, close to Haworth and Halifax, places associated with Emily Bronte and Wuthering Heights).

The final footage. Color grading and correction via Tiffen Dfx running within Adobe Premiere. Click on the image to open the original frame (1920×1080).

It was about an hour before sunset, and I thought it would be nice to capture the setting sun in video.

The original raw footage. Captured with a Panasonic Lumix LX7 with attached Polaroid variable ND filter.

Alas, we were too early, and the recordings looked nothing like what I wanted.

A couple of weeks later we were walking in the same place in the early morning. I took some footage of the nearby glen (glen – UK English: a deep narrow valley). So now I had some footage of the moor and glen in evening and morning sun, but no sunset footage. Not to worry: I could just add the sunset via post production.

If nothing else, it would make a good example of how AVCHD footage can be edited whilst making large tone/color corrections without coming up against issues once you follow the handy tips from the last post!

The original footage

As per the tips in the previous post, I did the following whilst shooting the original footage:

  • Shot the footage using a fixed shutter and aperture, and varied exposure using a variable Neutral Density Filter. Reason: as a general rule, shoot all footage at a fixed aperture  (typically around f3.5, going wider if you want low depth of field, or narrower if your subject is in the distance), and fixed shutter (typically twice the frame rate of your footage). Control your exposure via a variable ND filter.
  • Set the camera to record desaturated, low contrast and un-sharpened footage. Reason: this gives your footage more latitude for change in post-production.
  • Exposed the footage slightly brighter than I needed, being mindful of burning highlights. Reason: AVCHD tends to break up or produce artifacts if you increase exposure, but never if you decrease exposure.

Workflow

Color post production has two workflow areas:

  • Color correction, or correction of faults in the footage. A bucket could be too red, or a sky may need to be more blue. Correction is done on a per clip basis, correcting color/tone issues or adding emphasis/de-emphasis to areas within the scene. Framing and stabilization are also performed on a per clip basis. As an aside, this is the reason why the left side of the footage seems to wobble more in the video: the right side has been stabilized with the inbuilt Adobe Premiere stabilization plugin.
  • Grading, or setting the look of the final film. Grading is applied equally to all clips and sets the final style.

Color correction

Here’s a quick run through of the corrections:

Color correction

  • Top image. As-shot.
  • Second image. Added an emulated Tiffen Glimmerglass filter. This diffuses the sharp water highlights and softens the video a little (I would not have had to do this if I was shooting with my Sony Alpha A77, and you would not have to soften the video if you were using any other traditional Canon/Nikon DSLR, as all of them produce soft video). I also added a Fast Color Corrector to fix a few color issues specific to the clip (white and black point, cast removal).
  • Third image. Added a warm red gradient to the foliage top left to bottom right. The shape and coverage of the gradient is shown in the inset (white is gradient, black is transparent).
  • Fourth image. Added a second gradient, this time a yellow one going from bottom to top. Again, the shape and coverage of the gradient is shown in the inset.

Color Grading

For this footage, I used an emulated film stock via Tiffen Dfx. The stock is Agfa Optima. I also added back a little bit of global saturation and sharpness using the default Adobe Premiere tools (Fast color corrector and unsharp mask).

The top image is the original footage. The middle image is the same frame after grading and global tweaks have been applied. For reference, the bottom image shows the frame after all color correction.

Color Grading

  • Top image. Corrected footage so far, minus the two gradients.
  • Middle image. Grading and global tweaks (Agfa Optima stock emulation plus global color tweaks and sharpness).
  • Bottom Image. Adding the two gradients back for the final output.

Merging color correction and grading

Combining the two color change tasks (grading and color correction) is a bit of a black art, and I do both together. Generally, I start by picking an existing film stock from Tiffen Dfx or Magic Bullet Looks as an adjustment layer. Then, I start adding the clips, color correcting each in turn, and switching the grading adjustment layer in/out as I go. Finally, I add a new adjustment layer for sharpness and final global tweaks. I avoid adding noise reduction as it massively increases render time. Instead, I add a grain that hides noise as part of the grading.

Reality vs. Message

Color correction and grading are often used to promote a style, ambiance or ‘look’ rather than reflect reality. You want to meet the viewer’s ideal expectations, not boring reality.

Color corrected/graded scene

The video includes this frame. The leaves in the water are red to signify the time of year (autumn/fall).

Original scene (note also that the original scene is lighter than the final scene, as per my AVCHD shooting tips)

Real leaves in water lose their color quickly, becoming much more muddy in appearance. I enhanced the muddy leaves towards earth-reds because ‘muddy’ did not fit with my message, even though rotting grey leaves are closer to reality.

Timeline

Here’s the timeline for the project.

Project timeline (click on image to view full size version)

I have my color adjustment and grading in as separate adjustment layers (V2/V3). The first half of the timeline is more or less identical to the second half, except that the second half has the unedited versions of the clips on layer V4. These clips have a Crop effect (Video Effects > Transform > Crop) on them with the Right value set to 50%. This is how I get the edited/unedited footage split-screens at the end of the video.

When adding backing sound, the music file is never as long as the video clip, so to make the two the same length, I often do this simple trick to edit the music so it is shorter:

  • Put the music clip on the timeline so that the start of the music lines up with the start of the footage, and
  • On a different sound layer, put another version of the same music on the layer below, such that this time the end of the music lines up with the end of the video.
  • Make the two sound clips overlap in the middle and, where they overlap, zoom into the waveforms and find and match the percussive sounds (generally the drums).
  • Fade between the two sounds on the overlap.

Matching the sound sections

In the timeline section above, I have matched (lined up) the drum sounds on the layer A1 and A2 music clips (both of which are different sections of the same music file), then faded from layer A1 to A2 by tweening the volume level. This produces a smooth splice between the two music sections. If space allows, you should of course also match on the end of the bar (or ‘on the beat repeat’). For my timeline, you can see (via the previous ‘Project timeline’ screenshot) that I have spliced between four sections of the same file.
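
If you prefer to see the splice as numbers rather than a timeline, the same overlap-and-fade idea looks like the sketch below (the sine waves stand in for the two already-aligned music sections; an equal-power fade is used so the level doesn’t dip at the join):

```python
import numpy as np

def crossfade(tail, head, fade_samples):
    """Splice two already-aligned audio sections with an equal-power crossfade.

    `tail` ends with the overlap region, `head` starts with it; the two
    overlaps are mixed with complementary fade curves so the join is smooth.
    """
    t = np.linspace(0.0, 1.0, fade_samples)
    fade_out = np.cos(t * np.pi / 2)   # 1 -> 0
    fade_in = np.sin(t * np.pi / 2)    # 0 -> 1
    mixed = tail[-fade_samples:] * fade_out + head[:fade_samples] * fade_in
    return np.concatenate([tail[:-fade_samples], mixed, head[fade_samples:]])

# Toy example: two one-second 'music sections' spliced over a 1000-sample overlap
sr = 48000
a = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)
b = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)
out = crossfade(a, b, 1000)
print(out.shape)  # (95000,) - two 48000-sample clips minus the 1000-sample overlap
```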

Tools

During color correction I kept an eye on the YC waveform scope, which is available in Premiere and most video editing applications.

Footage vs YC waveform

The YC waveform shows both Luma (Y or brightness) and Chroma (C or color). Luma is cyan and Chroma is dark blue on the scope.

The x-axis of the scope corresponds to the x-axis of the footage, so the points plotted on the y-axis are the YC values found at each position along the width of the footage itself. This sounds a bit complicated, but if you use the waveform on your own footage it becomes immediately obvious what it represents.

For the broadcast standard I am using (European, PAL), true black is 0.3V on the scale and true white is 1.0V (NTSC is very similar). The original footage is shown on the left side of the image, and the corresponding YC waveform is shown below it. The waveform shows that highlights in the sky area are clipping (we see pixels above 1.0V), and the darkest areas are not true black (the waveform doesn’t get down to 0.3V). The right side of the image shows the final footage, and we can see that we now have no clipping (in either brightness or color saturation), and our blacks are closer to true black.
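
If you want to relate the scope’s voltage scale back to the 8-bit numbers stored in the file, a rough mapping looks like this (assuming standard ‘studio swing’ levels, where code 16 is black and 235 is white; anything above 235 lands above 1.0V, which is exactly the clipping the scope is showing):

```python
def luma_code_to_volts(code, black_v=0.3, white_v=1.0):
    """Map an 8-bit studio-swing luma code (16 = black, 235 = white)
    onto the PAL-style voltage scale used on the YC waveform scope."""
    return black_v + (white_v - black_v) * (code - 16) / (235 - 16)

for code in (16, 64, 128, 235, 255):
    print(f"Y={code:3d} -> {luma_code_to_volts(code):.2f} V")
# Y=255 prints 1.06 V: a super-white, i.e. the 'pixels above 1.0V' on the scope
```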

Keeping an eye on the YC waveform is something I always do when editing color. You may think your eye is good enough, but your vision gets tired or becomes so used to a color that it no longer recognizes casts; the scope never tires and never lies! Another useful scope for skintones is the Vectorscope. Something for another post…

Conclusion

This post shows the workflow I used to correct a small number of clips such that they could be added together into a single scene. The final movie shows a typical English autumn sunset (or at least, one where you can see the sun!) yet none of the clips were actually taken at this time of day nor under the lighting conditions of the final scene.

By manipulating the color of our footage via color correction and grading, we achieved our desired output despite the constraints of reality on the day!

Finally, by following a few additional steps and rules of thumb whilst shooting and editing the AVCHD footage, we have avoided coming up against its limitations. In fact, the only time in the video where you may see any artifacts is the one place where I did not follow my own advice: at about 0:30 the footage has its exposure increased slightly, and shows small artifacts in the shadows.

You can see all previous video related posts from this blog here.

Notes

  1. The music in the video is Spc-Eco, Telling You. Spotify Link.
  2. The YC graph is so much more useful than the histogram seen on most stills cameras that I often wonder why digital cameras don’t have the YC waveform instead! For example, the YC waveform not only tells you whether your image has clipped pixels, but unlike the Histogram, the YC tells you where along the width of the image those pixels are! You can still ‘shoot to the right’ using the YC (and it actually makes more sense) since brightness is Luma height. The YC also separates out brightness and color information, so you can see at a glance the tonality and color information within your photograph in a single visual. How’s that for useful!

Sony Alpha Video Part 2: AVCHD

If you look on the internet, you will find all sorts of advice that the A77 and other Sony Alpha cameras are useless for video. The image doesn’t ‘pop’ because the bitrate is too low, and you can’t easily edit the video because AVCHD ‘breaks up’ if you tweak it too much.

Sony Alpha A77 Magic Bullets Colorista test by Hounddogpictures

But then you see something like the video above on youtube. The foreground is clearly separated from the background to give the ‘3d’ effect, and there’s tons of post processing in this footage to give it plenty of ‘pop’, and it all looks great! How was it done?

To create decent video with the Sony Alpha, you need a good understanding of the video file type used by Sony (and Panasonic) cameras: AVCHD, plus how to shoot and handle AVCHD to give you lots of latitude in post processing.

Core issues to know about are line skipping, bitrate and dynamic range. The Sony Alpha is average on line skipping and bitrate, and the king of dynamic range. As we will see, any video DSLR is a set of compromises, and getting the best out of any DSLR involves playing to the strengths of your particular model and codec, something that is very important with the A77.

My video cameras. L-R: Sony Alpha A77, Panasonic Lumix GH2, Panasonic Lumix LX7

I have noticed a lot of people converting from the Sony A77 to the Panasonic GH2 on the video DSLR forums. Clearly, a lot of people have been suckered into the cult of high bitrate and decided that the Sony Alphas are useless for video. I own a GH2 (complete with that magic hacked 100+ Mbit/s AVCHD) and will save you the trouble of GH2 envy by showing that bitrate is not always king in video. In fact, along with the GH2/GH3, there is another type of stills camera that gives sharper video than a typical FF/APS-C DSLR, and I will consider three types of AVCHD video DSLRs in this post:

  • Large sensor stills DSLRs with manual video. This includes all Full frame and APS-C cameras; here I consider the A77, which I of course own and which will be the focus of this post.
  • 35mm film sized DSLRs with manual video. Micro Four Thirds (MFT) has a sensor smaller than Full frame and APS-C, but very close in size to a 35mm motion picture film frame (that’s 35mm cine film, not 35mm stills film – a big difference that, incidentally, means all the Canon 5D people going on about their ‘filmic depth of field’ are overcooking it!). For this size I will look at the Panasonic GH2, a camera I own.
  • Small sensor advanced compact cameras with manual video. These cameras can shoot RAW and have most of the features of full frame DSLRs but have very small sensor sizes (i.e. 1/2.3”… the size of your smallest fingernail). These sensor sizes don’t suffer from soft video (which we will look at later). The perfect example of such a camera is the Panasonic LX7, which comes with a very video friendly 28-90mm f1.4-f2.3 lens that can take a 37mm ND filter, making it ideal for quality pocket-sized video. Oh, it also has an ND filter built in, so you often don’t even need a screw on filter unless in direct sun. Cool!

A lot of this post will concentrate on the A77, but because all three cameras shoot AVCHD footage, much of it is actually common to all three.

Before we get too far though, let’s address the elephant in the room…

How good is Sony DSLR video?

The Sony A77 can record at up to 28Mbit/s. The highest you need for web or DVDs is about 12Mbit/s, and it’s less than 8Mbit/s for YouTube/Vimeo. You lose the extra bandwidth as soon as you upload to YouTube/Vimeo or burn a DVD.

Sony and Panasonic chose 28Mbit/s for a reason – an advanced enthusiast will rarely need more bandwidth. 28Mbit/s is a good video quality for the prosumer, which is most readers of this post.

If you approach a terrestrial broadcaster (such as the BBC) with video work, they will specify 55Mbit/s minimum (and they will need about 40Mbit/s if they want to sell Blu-rays of your show), but they will also expect your camera to natively and continuously output a recognized intermediate codec that can be used to match your footage with other parts of the broadcast, unless you can justify otherwise. Particular action shots, reportage and other use-cases exist for DSLRs, but you have to be able to justify using a DSLR for them beyond just saying ‘it’s all I’ve got’.

No DSLR does the ‘native recognized intermediate codec’ bit (unless we count the Blackmagic Pocket Cinema as a DSLR, but then it isn’t primarily a stills camera), and instead produces output that is too ‘baked in’ to allow strong color corrections. The A77 can’t and neither can the 5D Mk 3 (at least, not continuously and not using a recognized format), nor the GH2/GH3. Yes, the 5D was used for an episode of House, but the extra cost of grading the footage meant there was no saving over using a proper video camera from the start.

The Sony A77 cannot be considered a pro video device. Neither can any other stills camera. This is a crucial point to consider when working with DSLR video. Yes, you can create pro results if you have the time to jump through a few hoops, but renting pro equipment when pro work appears may be cheaper overall and get you where you want to be quicker.

Finally, it’s important to realize that a hacked camera has drawbacks. I do not use the highest bitrate hack for my GH2 (as of this writing, the 140Mbit/s Driftwood hack), instead using the 100Mbit/s ‘FlowMotion’ hack. Driftwood drops the occasional frame. Not many, but enough for me to notice. That’s the problem with hacks: they are fine for bragging rights and going around saying you have ‘unlocked your camera’s full capabilities’, but the same hacks make your gear less reliable or produce more noise or other glitches.

So, the A77 is about equal to its peers for prosumer video. Some have better bitrate, some can be hacked, and some have better something else, but none of them can say they have moved into professional territory.

AVCHD and why it is ‘not a codec for editing’

Before moving on, it’s important to realize that a file format (such as .mts or .mp4) is NOT a codec. See notes 1 and 2 if you need more information.

Rather than use a low compression codec to maintain quality (as used in pro film cameras), Sony and Panasonic realized that using faster processing power to compress and decompress frames very efficiently might be a better idea. That way, you get low file size and high quality, and can then store your video on slow and small solid state devices such as the (then emerging) SD cards. The resulting video codec is a custom version of H.264 (itself a derivative of the popular but now old MPEG-4 codec) and is the codec that AVCHD (Advanced Video Coding High Definition) uses. The codec AVCHD uses is more efficient than older, less processor intensive codecs, and therefore provides better quality for the same file size.

So why is AVCHD good for burning wedding videos but not shooting Star Wars?

AVCHD is designed to faithfully compress and decompress your original footage so that you cannot tell the difference between the AVCHD version and the original frames when played back on a typical consumer 1920×1080 screen. So for all intents and purposes, the AVCHD format is visually lossless: you can’t tell the difference. What’s the catch?

  • It all works unless you edit the AVCHD footage. The compression optimizes for the original video and leaves no wriggle room for changing color or tone. Doing so will cause your video to show compression artefacts (usually blockiness or banding). Unfortunately, changing tone/exposure and color are two things you will want to do often in video editing!
  • Because AVCHD compression/decompression trades filesize for a cpu heavy codec, your computer may be unable to play AVCHD in real time during edit.

An example of macro blocking, caused by AVCHD removing some of the data in the low tone areas. It is not caused by ISO noise or other issues. The large dark grey splotches on the black container and the mushiness in the background foliage are all signs of macro blocking. All this can be avoided though, as we will see below…

There’s a lot of talk about AVCHD on the web and how it is not good enough. Whatever reason you are given, the core reason is derived from the two points above: in a nutshell, AVCHD was designed for playback and not to be edited, and is too cpu heavy to be edited in real time.

This was true 4 years ago for AVCHD, but hardware and software have caught up.

To play back 10 seconds of 28Mbit/s AVCHD, your computer has to take a roughly 35MB file and decompress it on the fly to the gigabyte-or-so of raw frames it actually represents, at the same time as displaying those frames with no lag. To edit and save an AVCHD file, your computer has to extract a frame from the file (which is non-trivial in itself, because the frame data is not in one place: some of the data is often held in previous frames), perform the edit, then recompress the frame and some of the frames around it. So editing one frame might actually change 10 frames. As 1 minute of video can be 8-9GB when uncompressed, that’s a lot of data!
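
The numbers are worth working through once. Here is the arithmetic as a tiny sketch, assuming 1080p frames decoded to 8-bit RGB (other pixel formats change the exact figure, but not the conclusion):

```python
fps = 24
width, height = 1920, 1080
bytes_per_pixel = 3                        # 8-bit RGB once decoded

uncompressed_per_minute = width * height * bytes_per_pixel * fps * 60
avchd_per_minute = 28_000_000 / 8 * 60     # 28Mbit/s stream

print(f"uncompressed: {uncompressed_per_minute / 1e9:.1f} GB per minute")
print(f"28Mbit/s AVCHD: {avchd_per_minute / 1e6:.0f} MB per minute")
# roughly a 40:1 squeeze, which is why decoding and re-encoding is so CPU heavy
```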

The good news is that a modern Intel i7 or equivalent computer can handle this, and lesser CPUs can also do it if the software knows how to farm off processing tasks to the graphics card or GPU (Premiere CS knows how to do this for certain NVidia GPUs, and Premiere CC knows how to do this for most NVidia and AMD GPUs, which is almost all the GPUs worth considering). I am currently editing in Premiere using only AVCHD files, and it is all working well.

There is no need to convert AVCHD into an intermediate codec because the reasons to do it are no longer issues. You may still see old arguments on the web: check the dates of the posts!

A reason still often given for re-encoding AVCHD to other formats (such as ProRes or Cineform which, incidentally, are examples of the pro intermediate formats we talked about earlier) is that it gives you more headroom in color manipulation. This is not valid anymore. As of this writing, Premiere up-scales ALL footage to 32 bit 4:4:4 internally (which in layman’s terms means ‘far better than your initial AVCHD’), so re-encoding AVCHD into anything else for better final quality will not do any such thing. At best it will replace your AVCHD source files with files ×10 larger for not much gain in quality (although an older computer may be better at keeping the edits working in real time if it can’t handle AVCHD). In fact, in a normal editing workflow, you will typically not convert your source AVCHD to anything. Just use it as source footage, and the final output is whatever your final production needs to be.

Another very good reason not to convert your input AVCHD to anything else is because re-encoding always either loses information or increases file size, and never increases quality of itself. If you re-encode AVCHD into any other codec, you either retain the same information in a larger file or lose some information when you re-compress the AVCHD data into a smaller or processor friendly format (MPEG4, AVI).

There is actually only one use-case where you must change your AVCHD files: when an application doesn’t recognize AVCHD. Even then we don’t re-encode, but instead re-wrap the AVCHD stream into a more universal file format (such as Quicktime). Don’t worry, I’ll show you how to do all this below…

Using AVCHD and working around its issues

So ‘AVCHD is ok, as long as I have a recent version of Premiere (or similar) and a decent mid-high range computer’, right? It is fine as long as you do certain things:

Don’t shorten your AVCHD files.

If you have an .mts file with a bad take early on and then a good take, don’t edit out the first take and save the shorter .mts file. AVCHD is what is known as a GOP format, which means it contains a complete frame only every few frames, with the frames in between holding just the differences (or ‘deltas’). Each set of frames is a GOP (Group Of Pictures), and if you split the group, you may cause problems. Better to keep the .mts file as it is. Never resave an .mts file as another .mts file, or as anything else for that matter: there are few reasons to do it, and you may lose information in doing so. In fact, even if you changed nothing and simply resaved an .mts file, you could still lose information, because you would inadvertently decompress and then recompress it. Note that there is one very good reason to change your AVCHD files: your authoring application does not recognize AVCHD. I go through how to fix this without changing the video data (a process called ‘re-wrapping’) below.
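
If you want to see the GOP structure of your own clips, ffprobe (which ships with the FFmpeg download used later in this post) can list every frame’s picture type. A minimal sketch in Python, assuming ffprobe is on your PATH; ‘clip.mts’ is a placeholder filename:

    import subprocess
    from collections import Counter

    # List the picture type (I/P/B) of every video frame in an AVCHD clip.
    # 'clip.mts' is a placeholder; point it at one of your own .mts files.
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "frame=pict_type",
         "-of", "default=noprint_wrappers=1:nokey=1", "clip.mts"],
        capture_output=True, text=True, check=True).stdout.split()

    # One 'I' frame starts each GOP; the 'P'/'B' frames in between only hold the deltas.
    print(Counter(out))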

Avoid AVCHD breaking up in post by shooting flat

Notwithstanding the issues caused by nested color editing, AVCHD can still ‘break up’ in post if you make extreme exposure and color changes. This is all to do with the fact that AVCHD was originally designed for playback and not for editing: it is visually lossless until you start to edit it. Each frame of an AVCHD video is split into lots of 16×16 pixel sections, and these are (amongst other things) compressed at different levels depending on the detail seen in them. Think of it as cutting up an image into separate JPEGs whose quality is varied to reflect how detailed each 16×16 area was to begin with.

So what the AVCHD compressor does is to take blocks with near solid color or shadow, and save them using less data (because they don’t actually need much data), and save the bandwidth for parts of the image that actually have lots of detail or movement. That’s fine, until you try to

  • brighten the shadows in post and see them come out as a pixelated, gunky mess
  • brighten a very subtle gradient, and watch it start to band
  • brighten skin highlights in flat-lit skin and watch it break up and pixelate.

All these edits cause AVCHD to break up because you are trying to bring out detail in areas that AVCHD has decided are not the main areas of detail (or change), and so the compressor has saved them with low bandwidth.

How to fix this?

  • Don’t overexpose in post, especially skin tones or graduated sky. It’s better to overexpose very slightly as-shot and then underexpose in post.
  • Don’t brighten dark areas in post by much more than ½ a stop. Again, it’s better to overexpose slightly as-shot and underexpose in post.
  • In general, it’s better to overexpose your footage slightly so you will be darkening rather than lightening shadows in post (but there is a balancing act between this and burning highlights!).
  • Use a Tiffen low-contrast filter (or similar diffusing filter) that optically flattens tone so that highlights are diffused and shadows are lifted by ambient light. This not only gives you a more neutral look (so that AVCHD doesn’t reduce detail in dark areas) but also allows you to expose up more before you are in danger of burning the highlights. See also this post for more information on using a low contrast filter for video.

…or

  • Simply shoot as close as possible to the final look, and don’t make big changes in post.

You may be thinking ‘ah, that’s fine, but I can’t control the sun or the shadows, and that Tiffen filter thing is expensive’. There is one other thing you can do: shoot flat digitally.

AVCHD and other ‘not for editing’ codecs don’t have many color or brightness levels, so by keeping both contrast and saturation low in the recording you are less likely to clip them or bake in errors that get amplified into banding in post. This subdued recording style is called ‘shooting flat’.


Before/after shots. The top one is the original ‘flat’ clip. This clip looks a little flat, but its lack of strong color and contrast means it has lots of latitude in post work. The bottom clip is the edited file. It has more contrast and color, and most importantly, is still sharp.

To shoot flat on the A77, you need to press Fn > Creative Style, then select an appropriate creative style (‘Neutral’ is a good general purpose one, but some people swear by ‘Sunset’ for banding prevention), then push the joystick to the right to access the contrast/saturation/sharpness settings. Ideally, you should set them all to something below zero (-1 to -3). The most important one is sharpness, which you should set to -3. Sharpness makes by far the biggest difference, and setting it to -3, then sharpening back up in post-production (where you are using a far more powerful cpu that can do a far better job) is the way to go.


Overexposing in as-shot. By overexposing (top) and darkening in post (bottom), your shadows have no chance of breaking up. If you did the reverse (overexposing in post), your shadows might start ‘macro-blocking’ or banding.

What if I had shot the clip using a vivid style instead of flat? Well, bits of the red watering can may have ended up a saturated red. That’s fine if I leave the footage as shot, but if I want to tweak (say) the yellows up in the scene I would now have a problem: I can’t do that easily without oversaturating the red, which would cause banding in the watering can.

Another happy outcome of shooting flat is that your video takes less bandwidth (but in my testing it’s hardly anything, so don’t count on it).


Sharpening in post: the top image is unsharpened out-of-camera, and the lower image is the same thing sharpened (and edited for contrast/color) in Premiere.

Finally, when color correcting, you need to keep an eye on your scopes. The one I use the most is the YC waveform. This shows Luminance (Y) and Chroma (C) across the frame.


The top image shows raw (left half) and edited footage (right half). The graph is the YC scope, showing luminance only. The right half of the footage looks better, but by checking the graph, we see that the luminance range is also technically better (no clipping off the scale, uses more of the available range, etc).

Using the YC waveform is a full tutorial in itself, so for brevity I’ll just link to one of the better existing tutorials here (YouTube: YC Waveform Graph in Premiere Pro). Note that the YC waveform lets you check your footage against broadcast limits for brightness and saturation. A PC screen can handle a larger range, so if you are creating content for the web, you might want to go a little further on range. I limit myself to broadcast limits anyway: to my eyes the footage just feels more balanced that way.

Don’t sharpen if you are close to macro blocking

If you end up with dark blacks, don’t sharpen and brighten them: this is the easiest way to create really ugly macro blocking!

Avoid AVCHD compatibility problems

Although many applications now work with AVCHD, there’s still a few applications that do not. One of the non-workers is DaVinci Resolve, a professional coloring application that I strongly recommend you have a go with because (a) it is a true Pro application (as used in Hollywood Blockbusters) and (b) it is free from http://www.blackmagicdesign.com/uk/products/davinciresolve/models.

Although you can re-encode (transcode) AVCHD to another format to get Resolve (and older versions of more common video editing applications such as Premiere) working with your footage, you don’t want to do that, because re-encoding either loses you quality or increases filesize for no reason. Instead you want to put the AVCHD stream into a more universal wrapper (QuickTime/MOV) to increase your footage’s compatibility without affecting quality. The way to do this is easy, quick and free (the following steps are for Windows):

  • Download SmartFFmpeg from http://freeware.satria.de/SmartFFmpeg/index.php?lang=EN
  • Download FFmpeg http://ffmpeg.zeranoe.com/builds/, selecting the 32 or 64 bit static version. Unzip the zip file, rename the unzipped folder to ffmpeg and put it somewhere you won’t delete it by accident (I placed it in my Program Files directory)
  • Run SmartFFmpeg and in the Options menu select the location of ffmpeg.exe (if you did the same as me, the path will be  C:\Program Files\ffmpeg\bin)

SmartFFmpeg: to rewrap AVCHD into QuickTime, set the red-arrowed options as shown, drag-drop your .mts AVCHD files into the top pane, and click the green RUN arrow (pointed at by the green arrow, top right)

Drag your AVCHD files into the window at the top of SmartFFmpeg, and use the settings in the image above. In particular, you need to (red arrows in image above)

  • Set Format to QuickTime/MOV.
  • Set the Threads value as high as it will go if you want the conversion to run as quickly as possible (although it won’t take long, as the process is really just moving the data into a new wrapper rather than transcoding).
  • Set Video Codec to COPY STREAM (important!)
  • Don’t enter or change any other setting

Then click the green RUN button at the top right. The .MOV files will be created in the same directory as the .mts files. You will notice the .MOV files are slightly smaller than the original .mts files, but if you look at the two files in Bitrate Viewer (more on Bitrate Viewer below), you will see that the bitrates are exactly the same: the actual video stream is unchanged in the .MOV files.
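
Under the hood, SmartFFmpeg is just building an FFmpeg command line for you, so if you prefer to skip the GUI you can script the same stream-copy rewrap yourself. A minimal sketch in Python, assuming ffmpeg is on your PATH; the folder and filenames are placeholders:

    import subprocess
    from pathlib import Path

    # Rewrap every .mts file in a folder into a QuickTime .MOV container.
    # '-c copy' is the command-line equivalent of the COPY STREAM setting above:
    # the video and audio streams are copied untouched, so nothing is re-encoded.
    for mts in Path("footage").glob("*.mts"):   # 'footage' is a placeholder folder
        subprocess.run(
            ["ffmpeg", "-i", str(mts), "-c", "copy", str(mts.with_suffix(".mov"))],
            check=True)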

As a .MOV file is generally more universal than a .mts file, you may want to consider always rewrapping your AVCHD files into QuickTime .MOV, especially if you will be sharing your footage with other users or using the footage in old versions of common video editing applications (e.g. Adobe CS4 or earlier).

Update: see this post for a worked AVCHD example using most of the above tips in its workflow.

A common issue cited with Sony Alpha video is maximum bitrate: the Sony range of cameras has a maximum bitrate of 28Mbit/s. My GH2 has a maximum bitrate of 100Mbit/s or higher depending on which hack I load. Surely the GH2 is far better?

Not by as much as you would expect…

What bitrate actually means in AVCHD

Let’s look at bitrate visually. Download Bitrate Viewer (Windows only) from http://www.winhoros.de/docs/bitrate-viewer/. If you are on a Mac, not to worry: I have screenshots.

You can use this to view the bitrates in your AVCHD files. Run it and click Load. In the ‘Files of type’ dropdown select All files, then select one of your Sony A77 files. The application menu is the little icon in the top left corner of the window title bar (not shown in the screenshots, which look at the graphs only). Click this icon to see the menu, and select GOP based.


Bitrate graph of 10s clip, Sony A77 @24Mbit/s, 1080p, 25fps

You can now see a bar chart of values. Each bar is one of the Groups of Pictures (GOPs) we talked about earlier. The first frame in each group is a whole frame, and subsequent frames consist of changes from that first whole frame. The height of each bar shows how much of the bandwidth the group used up. For a 24Mbit/s A77 24/25p AVCHD clip, the maximum height of a GOP is the bitrate, 24000.

Notice something strange? Yup, the A77 never actually uses the full 24Mbit/s in our clip!

The internet is full of people saying that A77 AVCHD breaks up because there is not enough bandwidth, but as you can see, the truth is that the A77 often doesn’t even use the full bandwidth available to it! (The reason for soft A77 video is actually nothing to do with bandwidth; more on this later.)
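
If you don’t have Bitrate Viewer to hand (or are on a Mac), you can get a rough average bitrate for a clip from the file size and the duration that ffprobe reports. A sketch in Python with a placeholder filename; it gives the whole-clip average rather than the per-GOP graph above:

    import os
    import subprocess

    clip = "clip.mts"   # placeholder: one of your own AVCHD files

    # Ask ffprobe for the clip duration in seconds.
    duration = float(subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", clip],
        capture_output=True, text=True, check=True).stdout)

    # Average bitrate = total bits / duration; compare it with the camera's stated maximum.
    avg_mbit = os.path.getsize(clip) * 8 / duration / 1e6
    print(f"average bitrate: {avg_mbit:.1f} Mbit/s")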

Want to see something even stranger? Here’s the exact same scene shot with my GH2 at 50Mbits/s and 100Mbit/s:


Bitrate graph of 10s clip, Panasonic Lumix GH2 @50Mbit/s, 1080p, 25fps


Bitrate graph of 10s clip, Panasonic Lumix GH2 @100Mbit/s, 1080p, 25fps

We see two graphs. The first is a 50Mbit/s AVCHD clip at 25p (Bitrate Viewer reports it as 50fps because the GH2 records 25p as a ‘pseudo 50i’), and the second is 100Mbit/s (1080p 24H ‘cinema’), both shot with the FlowMotion v2.2 GH2 hack. To shoot these clips, I ran the cameras along a motorized slider whilst pointing at foliage 12 inches away. So the three graphs so far are showing identical scenes.

Look at the A77 24Mbit/s (first graph) and GH2 50Mbit/s (second graph). The actual bandwidth used is far closer than the bitrates suggest: low 20s vs high 20s, and certainly not a 100% difference!

The stated AVCHD bitrate is a maximum, and it is only used if the scene requires it. Thus, if two cameras are shooting exactly the same scene but one has twice the bitrate of the other, that extra bitrate only translates into extra quality if the scene actually pushes both cameras to their maximum.

What the GH2 is doing that makes its quality better is that it is using smaller GOPs, but that is only an advantage because I am forcing a very fast pan, which is not typical of the ‘DSLR film’ style.

If I use a fast lens at a wide aperture (in my case the Minolta 50mm f1.4, shot at f1.4) on the GH2, the bandwidth rarely goes above 50Mbit/s even though the stated bitrate may be over 100Mbit/s, and the actual bitrate usually hovers around 30-35Mbit/s. As a fast lens used wide open is common in DSLR video, it’s worth noting that increasing bitrate may not give you much if you are shooting wide-aperture filmic footage (and if you are not shooting that with a DSLR, you are probably misusing your equipment!).

Utilized bandwidth is always low when using fast, wide apertures, so high-bandwidth AVCHD would be wasted: the 28Mbit/s available from most un-hacked cameras is pretty much all you will ever need in these cases.

But then we look at the LX7 footage of the exact same scene:


Bitrate graph of 10s clip, Panasonic Lumix LX7 @28Mbit/s, 1080p, 50fps

The LX7 is shooting at 50fps (it has no 24/25fps mode), which kind of explains why it is consistently using a higher bandwidth than the A77, but not why it is using so much more. In fact, the LX7 is using close to its limit of 28Mbit/s, and is the only one of the three cameras that is doing so. Why, when all three cameras are shooting the same scene? Because the LX7 has the smallest sensor, and is therefore capturing the sharpest input video. Sharp video looks less filmic though, so I often find myself having to soften the LX7 footage in post!

A small medium/low resolution sensor is better than a physically big, high resolution stills sensor for capturing sharp video, but you may find yourself having to soften this video in post to get a good ‘DSLR film’ look.

The LX7 has a small sensor. Such a sensor needs very little in the way of optical compensation (it is actually a very good infra-red photography camera because of its lack of sensor filters and micro lenses), and doesn’t need to do a lot of the things the A77 and GH2 need to do for video. In particular, it does not need to smooth or average the signals coming from the sensor pixels, something we will touch on in the next section.

I find that the LX7 is always more likely to be using the maximum bandwidth out of my three cameras. The fact that the GH2 can shoot at over 100Mbit/s does not mean it is always capturing three times the video quality, and the fact that the A77 has the biggest sensor does not mean it is capturing the most light because it throws most of that data away… which brings us to the real reason why A77 video is often seen as ‘soft’…

Conforming stills sized images to video frames

A sensor optimized for stills is not the same as a sensor optimized for video. A current large (full frame or APS-C) stills sensor is typically 24MPixels (6000×4000 pixels) or higher. To shoot video with the same camera, you have to discard a lot of that data to get down to 1920×1080. The way this reduction takes place makes a big difference to final video quality, and is more of an issue than bitrate, especially for the ‘average’ scenes you will most often shoot (i.e. ones that do not hit bitrate limits).
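
To get a feel for how much data is thrown away, here is a quick back-of-envelope calculation (my own numbers, using the 6000×4000 sensor example above):

    # How much of a 6000x4000 stills sensor actually ends up in a 1080p video frame.
    SENSOR_W, SENSOR_H = 6000, 4000   # example 24MP stills sensor from the text
    VIDEO_W, VIDEO_H = 1920, 1080

    kept_fraction = (VIDEO_W * VIDEO_H) / (SENSOR_W * SENSOR_H)   # ~8.6% of the photosites
    skip_factor = SENSOR_W / VIDEO_W                              # ~3.1: roughly every 3rd column
    print(f"{kept_fraction:.1%} of the photosites are kept; "
          f"roughly 1 in {skip_factor:.1f} columns survives if the camera simply line-skips")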

Some smallish-sensor cameras (such as the GH2/GH3) can (and do) average adjacent pixels, but large sensor cameras such as the A77 and all other APS-C/full frame DSLRs simply skip the extra data. The resulting video has artifacts caused by the gaps. To hide these effects, the camera intentionally softens detail that looks like it may be an artifact. You can see this clearly here: http://www.eoshd.com/content/6616/shootout-in-the-snow-sony-a65-vs-panasonic-gh2-vs-canon-600d. The A65 and Canon 60D are always softer than the GH2.

Although the Canons produce soft video, the A77 (and A99) is a little bit softer. This is probably because of processing overhead: the Canons only do 30fps, whereas the A77/A99 also do 50/60fps video, which means they do a more approximate line skipping/soften function to allow for the data throughput of the higher maximum frame rate.

The effects of the A77 line skipping do not show up at all on close or mid-distance detail, but they do show up on far detail (because to the smoothing algorithm, distant fine detail looks like it might have been caused by the line skipping). The smoothing turns distant foliage into a green mush, whereas something like the Panasonic GH2 keeps it all looking good.

I have seen this clearly when looking at A77 footage in Bitrate Viewer. If I shoot a close subject, the footage actually looks very detailed, and the bitrates are close to the maximum, implying the A77 is capturing lots of detail.

For footage that contains lots of mid-long distance detail (trees and foliage, complex clouds, distant architecture and detailed terrain), the detail disappears and the bitrate reflects this – the A77 is discarding the data by smoothing it and is not even attempting to add it to the video stream: the bitrate is usually well below 20Mbit/s.

What to do? Most often, I do nothing.

I actually like the A77 ‘look’ because it more closely resembles film stock. As long as you keep to low apertures and let the distant objects defocus naturally through a low depth of field, you have no issue, and often don’t even need more than 24/28Mbit/s. You would shoot at low apertures if you want to shoot ‘DSLR film’ anyway, so distant detail is rarely an issue. If you are shooting close subjects with lots of detail behind them, the A77’s way of doing things actually maximizes detail on the main subject, so it makes sense both stylistically and in terms of best use of limited bandwidth.


Shooting with low apertures (top) prevents Sony alpha smoothing affecting your scene, and is usually stylistically better anyway. Shooting flat and then sharpening in post (bottom) can also often fix the problem

There is a slight issue with the A77 and wide apertures though: if you use autofocus during video, the camera limits you to f3.5. There are videos on the internet telling you to get around this by jamming the lens aperture lever wide open: don’t do this! All you have to do is

  • autofocus, then..
  • hit the AF/MF toggle, which..
  • drops you into manual and wide open.
  • Your camera will still be in focus (and you can use focus peaking to make minor adjustments in any case).

Of course, you will not now be able to re-focus via autofocus for a moving object, but if you are wider than f3.5, your depth of field is so thin that you probably should not expect to keep fast moving objects in focus anyway!

I use this trick with the excellent and inexpensive Minolta 50mm f1.4 (full frame), the Tokina 11-16 f2.8 (crop frame), the 16-50 f2.8 A77 kit lens (crop frame) and Sony 50mm  f1.8 (crop frame), all of which give a very nice cinematic depth of field wide open, and give you a nice range of focal lengths for video (11-50mm then 80mm f1.4 from the Minolta, which is a nice portrait length and aperture for tight character shots)  for not much cash.

If I absolutely need a scene with sharp middle and distant video with minimum extra kit over the A77, I take the LX7.  You can now get hold of an LX7 for less than the cost of an A77 kit lens (the cost has gone down significantly because everyone wants a Sony RX100), and because of the LX7’s extremely small sensor, line skipping artifacts or smoothing just don’t occur in its AVCHD output. Plus the LX7 footage goes pretty well with the A77’s (as long as you know how to conform 50/60fps to 24/25 fps – the LX7 doesn’t do 24/25p).


Distant detail captured with the LX7. NB – the ‘look’ is different for this series of clips because I am using Tiffen DFX rather than Colorista for my post work, and not because I am using different cameras.

Best of all, like the A77, the LX7 requires almost nothing to shoot video.

All I needed was to add a 37mm screw thread (available from Panasonic but cheaper copies are available from eBay or Amazon) and a 37mm ND filter (Polaroid do a cheap one, again from eBay/Amazon).

The GH2 uses a pixel sampling pattern (or ‘pixel binning’) to reduce full sensor data to video resolution, so it averages data rather than just throwing it away. It also has an ‘Extended Tele’ mode where it just uses the center 1920×1080 pixels of the sensor, so it acts more like a small sensor camera such as the LX7 (and makes a 70-210 Minolta ‘beercan’ a telescope: 210mm x2 for full frame/MFT, then x4 for the tele mode, so over 1000mm effective with no loss of video quality!).

The GH2 can use its higher bandwidth to allow far more detail to be brought out in post, and this is one of the big advantages of its high bitrates. The hacks also give you constant recording beyond the 30-minute limit and the ability to move between NTSC frame rates (for the US and web) and PAL (for weird non-standard outposts such as here in England).

The GH2 looks really, really good on paper, but it has one big issue that means I still use the A77 more: the GH2 is a good camera, but the overall system is not. To get full features from a GH2, you need Panasonic MFT lenses, and they are awful: awfully slow and too long, or fast and awfully expensive. I therefore use my Sony Alpha lenses with the GH2, with manual focusing via an adapter. The GH2 has poor focus assist (no focus peaking), so it is not the fast run-and-gun one-man camera that the A77 (or LX7) is.

So, we’ve looked at the disadvantages of the a77 (and their fixes). What are the advantages?

Advantages of the a77 over the GH2/LX7, and every other DSLR

There are several big advantages my A77 has over a high bitrate camera (i.e my hacked GH2) or a sharper-video small sensor camera (LX7):

  1. Dynamic range. The A77 has far better dynamic range than any other DSLR. The GH2/LX7 will clip highlights like no tomorrow, but at low ISO (around ISO100-200), the A77 has better dynamic range than almost anything else out there, including the Canon 5D Mk3. You have far less leeway with the GH2, and one of the main reasons for it needing more bitrate is to help with the dynamic range. You don’t need this for the A77.
  2. Focusing. The a77 SLT enhanced video focusing has no hunting. Every other camera I have used either hunts or doesn’t do video focusing at all. This makes hand held video with the A77 very easy, and in fact, shooting any video with the A77 is far easier than with any other DSLR I own or have tried.
  3. Steadyshot.  Before trying other video DSLRs, I believed the given truth that in-lens stabilization is better than the in-body version that the A77 uses. Not true. The A77 has a far smoother video stabilization system than the Panasonic MFT in-lens OIS system. Handheld is smooth with the A77 unless I run. Handheld with the GH2 and Panasonic OIS lenses is smooth only if I stand still. On the one occasion I borrowed a Canon DSLR from a friend, the required stabilization rig marked me out as a target for robbery or ridicule. Don’t even go there.
  4. Minimum equipment required with the A77 for video is almost zero. You need a slightly better and bigger SD card than you do for stills and an ND filter, and perhaps a separate audio recorder (I use a Tascam DR07) and… um. That’s it. No additional monitor with focus peaking (comes with the camera). No shroud to put over the monitor (just use the EVF – it’s the same screen). No rig (SteadyShot is damped enough for walking whilst shooting without recourse to sufficient prosthetics to make you look like a cyborg), and the kit lens is the best video lens for the system, so no lens outlay unless you want to shoot really wide, long or in the dark (in which case get a Tokina 11-16, Minolta beercan or Minolta 50 f1.4, none of which are particularly expensive… and all of them do autofocus in video). In fact, dslrvideoshooter.com still recommends it as the best standalone DSLR for video here (it’s right at the bottom of the page).
  5. Reliable. It’s not until you stray away from the A77 that you begin to realize how reliable it actually is. Sony are very conservative, the thing never breaks, and post-1.05 firmware it is nippy and fast.
  6. ‘Film-like’ output. If you want film-like output, then soft video is not as much of an issue as half the internet seems to tell you. Further, if you are using the wide apertures associated with the ‘film look’, then high bandwidth is not that much of an issue either. Thus, although soft video and ‘only 28Mbit/s’ are both seen as disadvantages, once you actually start using the A77 for DSLR video you find that in practice neither matters, and you end up with pretty good footage if you want to create DSLR film. Of course, this advantage only holds if you slightly overexpose, otherwise that 28Mbit/s will come back and bite you when you come to expose-up the shadows!

Best Standalone DSLR for video (taken from http://dslrvideoshooter.com/best-dslr-for-video/, November 2013)

Conclusion

The A77 may have a low bitrate, but the bitrate is enough for all non-pro uses a DSLR would typically be used for. Other DSLRs may have better bitrate, but that is not enough for them to be considered more ‘pro’ because none of them are on the preferred cameras list of the top broadcasters.

If you are shooting DSLR video, your target audience is the web or personal events (such as weddings), and the AVCHD format Sony uses is all you need, assuming you take care to shoot flat. If you are really good, you might be able to squeeze out an Indie short, but you might be better off just learning on a DSLR and perhaps using it for some shots, then hiring pro gear for a week for the main shoot. Finally, the A77 is simply reliable: it rarely fails on focusing or dynamic range badly enough to kill a take.

The big advantage of the A77 is that you need very little to get started shooting video with it: no rig, no focus control, no monitor and none of the other video doohickeys apart from an ND filter and maybe a separate audio recorder. That 16-50 f2.8 you got with the A77 is FAR better than most other lenses for video, so think carefully before abandoning it.

The fact that the Sony Alpha shoots 28Mbit/s AVCHD is a non issue: higher bitrates are not as important as you think most of the time, and a camera shooting at double the bitrate may actually be recording almost the same quality video.

By far the most important thing in your setup is not actually the camera, but how you use it. You can get around the Sony soft-distant-video issue simply by shooting wide-aperture DSLR film. Canon users had the same issues with soft video, and if you look at default Canon video straight out of the camera, it looks a lot like default A77 video straight out of the camera. With all DSLRs, the trick is to shoot with wide apertures.

I have bought a GH2, and although it does output sharper video, that sharpness is not always a good thing. The GH2 with Panasonic lenses can actually be too sharp, and old DSLR lenses (I use Minolta AF or Rokkor) are often the way to go. They give a more filmic feel, which will bring you back to square one: it will look like default A77 footage!

My favorite setup is to use the Sony A77 and the LX7. Neither need much setup or additional kit for video, and the footage from them goes well together. They complement each other as a DSLR+DSLR replacement when shooting stills, and as a main and backup camera for video. I’m also glad I didn’t opt for the RX100 as my ‘DSLR replacement’ now that I know how good the LX7 is for video!

Videos associated with this post can be found at http://www.youtube.com/user/howgreenisyourvideo

NB – I had previously promised video cheat sheets for the a77 along with this post… but as this post has overrun, I will have to blog them later. Sorry!

Notes

  1. It’s important to differentiate between a file on your computer (such as a .mts AVCHD file, mp3 or mp4) and the codec. The file holds data and tags that tell an application what it needs to use that data. The tags hold information such as ‘this is the video data, and this is the audio data, and this is what you need to use to play them together’. The codec is the ‘what you need’ bit, and is part of the application that opens the file (although it may be provided by the operating system). Although the codec may be strongly associated with the file type (such as .mp4 files and the MPEG4 codec), that isn’t always the case: the filetype may be able to contain several slightly different codecs, or future computers may choose an updated and different codec. This distinction may not have been something you needed to consider when you were just playing video files, but now that we are editing and creating our own videos, an understanding of the underlying codecs we will be using is critical. In particular, to edit a given file, you need to know which codecs are required to decompress it, or have an authoring tool that knows for you. As of 2013, you are in luck, because almost all applications worth using recognize AVCHD fully, so you have nothing to worry about.
  2. Although most codecs are designed to reduce file size, some codecs have very little compression because they are written for editing rather than simply viewing, and place quality over everything else. They are sometimes referred to as ‘intermediate codecs’ because they are not designed for playback.  You will hear about ProRes, Cineform, Avid DNxHD, and other such editing formats on the internet, usually by someone telling you it’s not worth editing video unless you are using one of them. Unless you are getting uncompressed video from your camera via a raw format or clean HDMI out (if you are not sure what that means, you are not), then you don’t need to bother with intermediate codecs, and probably never need to change your file format during editing. If you really must use an intermediate codec for your video editing, have a look at Lagarith or Ut Video (PC) or Avid DNxHD (PC/MAC). All are free, so all you will be wasting is hard drive space! In a nutshell, pro codecs are only useful with pro cameras, and no DSLR is a pro video camera, so go with what you get out of the camera.
  3. Some GH2 hacks try to use high bit rates irrespective of whether the current scene requires it or not, such as Driftwood’s ‘Moon T7′.

Sony Alpha Video: Part 1

Six months ago, I upgraded to a Sony A77, my first DSLR with video. This series of blog posts will document my learning curve in shooting video with it. Hopefully it will help anyone else starting out with Sony Alpha for video.

In this post, I will show how to set up a Sony Alpha A77 for video and discuss tips, pitfalls and hacks in shooting DSLR video, specific to the Sony Alpha shooter. This is something that is missing on the web, and many assume video is broken on the A77. For example, many A77 shooters assume that SLT auto-focus video cannot be set up to work with a wide aperture and manual shutter (which is the preferred configuration for film-like DSLR video). I’ve got good news for you: it can work that way without pestering Sony for an A77 firmware fix, as we will see…

In later blogs I will look at major issues in more detail (lenses, audio, grading, color-correction, etc).


The howgreenisyourvideo YouTube channel

I have set up a YouTube video channel at www.youtube.com/user/howgreenisyourvideo to accompany my video related posts. As of this writing, I’ve put up the first two videos I shot. I will go through exactly what I did wrong (and what I learned from it) in a later post. As you will see, those two videos are full of mistakes and there is a lot to learn from them!

Note that all the camera specific instructions in this post refer to the Sony Alpha A77 (partly because the A77 is the best specced Sony Alpha APS-C camera for video as of this writing, but mostly because I own one). I may not be able to answer questions specific to other models because the only video-SLT I own is the A77.

Note also that the discussions below assume you know your way around a Sony Alpha  for stills photography (you should be able to shoot stills in full auto, and know AEL stands for ‘Auto Exposure Lock’ for example). Shooting video is easy on a Sony Alpha, but shooting good video on any DSLR requires a good understanding of  stops, aperture, shutter, and exposure. Unfortunately, this blog post is already very long, and I made the decision to cut all the beginner stills stuff.

Let’s get started…

Why use DSLRs when even your phone can shoot video?

When you shoot footage with most devices (phones, tablets,  compact cameras, camcorders), it looks like video. The versatility of DSLRs means that with a little tweaking, you can create footage that looks more like film. The cool thing about Sony Alphas is that they are perhaps the most accessible for video use. In typical Sony fashion though, the factory settings are not set to the optimum, and the changes you need to make to begin shooting quality footage are not obvious…

Creating the movie ‘look’ with a Sony A77

The differences between video and film are obvious: just turn on your TV and tune in to live news, then have a look at your favourite film DVD:

  1. Video is sharp, with everything in focus. Film has a much more dreamy look caused by a much reduced depth of focus.
  2. Video contains no motion blur. Film has lots of it.
  3. Video contains true to life color. Film color is usually either more vibrant or desaturated or has a creative or process-based recoloring that adds selective saturation or de-saturation (and usually both).

Choosing video format, frame rate, shutter and aperture

To address 1 and 2, I shoot at 25p (Menu > Film1 > Record settings > File Format > AVCHD 50i/50p and Menu > Film1 > Record settings > Record Settings 25p 24M(FX)), and then choose the Manual film mode, with shutter set to 1/50 and aperture set to somewhere between f2 and f4. If in doubt, pick f3.5.

The 1/50s shutter for 25fps video is a general rule of thumb: use twice the frame rate for the shutter speed, because that’s what most movie cameras do (they have a rotating shutter that lets light onto the film for 180 degrees of its 360-degree rotation, effectively giving a shutter speed of twice the frame rate).

Left to its own devices, the A77 (and most other DSLRs) will want to go a lot higher to maintain exposure, but don’t let it, because high shutter speeds create sharp frame images. When sharp images are placed together into motion, the eye can often discern strobing and jerkiness (and often also moire). Surprisingly, the human eye is less likely to see strobing or jerkiness if you take fewer frames that are exposed for longer, so that they blur into each other. This is clearly counterproductive for a stills shooter, so let me digress for a moment and explain.

In stills, you want a very high shutter speed to get clean capture of fast moving objects. In video, you do not increase the shutter speed on its own, because that will make the video jerky. If you want to capture a very fast moving object in video, you increase the frame rate and then make the shutter twice the frame rate. It is crucially important to realize that shutter speed for stills does not work the same way as shutter speed in video: the only reason you would increase shutter in video is because you have increased the frame rate. Video frame rate is equivalent to shutter speed in stills, but you often can’t change it, because changing frame rate also changes your video’s ‘look’. Further, because shutter is locked to frame rate, you often cannot change that either. You can’t change aperture, because changing that mid-shot looks awful (you get an immediate step change in exposure, plus lens shake). So to change exposure, you use a variable ND filter.
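
The twice-the-frame-rate rule is just the 180-degree shutter written as a formula. A tiny sketch (the shutter_angle default of 180 is the film convention described above; nothing here is camera-specific):

    # The 180-degree shutter rule as a formula: shutter duration = (angle/360) / frame rate.
    def shutter_speed(frame_rate: float, shutter_angle: float = 180.0) -> float:
        """Return the shutter duration in seconds for a given frame rate and shutter angle."""
        return (shutter_angle / 360.0) / frame_rate

    print(1 / shutter_speed(25))   # 50.0  -> shoot 25p at 1/50s
    print(1 / shutter_speed(50))   # 100.0 -> shoot 50p at 1/100s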

The only time I use 50p is when I want to

  • slow down my footage: 50p video played back at 25p (without removing any frames) makes very good slo-mo (see the sketch after this list), or
  • remove the effects of rolling shutter (commonly called the ‘jello effect’) when I am creating a fast pan and absolutely need to. I discuss rolling shutter effects below, but all you need to know for now is that the A77 is one of the few cameras that can almost eradicate it: just shoot at 50p and convert to 25p (by merging frames) in post production.
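
If you are doing the slow-mo conform outside Premiere, the same trick can be scripted with FFmpeg: stretch the timestamps to double length and write the result out at 25fps, keeping every original frame. A sketch with placeholder filenames (audio is dropped, since half-speed audio is rarely wanted):

    import subprocess

    # Turn a 50p clip into 25p half-speed slow motion: setpts=2.0*PTS doubles every
    # frame's timestamp, and '-r 25' outputs at 25fps with no frames removed.
    subprocess.run(
        ["ffmpeg", "-i", "clip50p.mov", "-filter:v", "setpts=2.0*PTS",
         "-r", "25", "-an", "slowmo25p.mov"],
        check=True)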

There’s one big gotcha though: don’t choose 50p over 25p because you think it will increase picture quality. Using 50p/60p over 24p/25p/30p will increase bitrate and sharpness, but may not increase the perceived quality for your audience, because their idea of quality is associated with the look and feel of film rather than bitrate, and that look and feel comes from 24p/25p.

It is also worth noting that 50p has its own issue: it removes rolling shutter but may add moire. Clearly, the choice of frame rate is something you have to get a feel for by trying it, but my advice is to stick with 25p unless you

  • are fast panning and there are no detailed textures that may cause moire
  • want to do slow-mo
  • need clean frames for an effect (such as Twixtor)

As an aside, there were lots of reviewers complaining about moire on A77 video when it first came out, which prompted the question ‘what did you expect with 50p video!’.

The f2-f4 needed for DSLR film comes from cinema: this range gives a depth of field that cinema uses often, so your audience will be used to it.

Note that you have the option to shoot 25i (interlaced) video. Without going into the details (Wikipedia is your friend if you want the history), progressive video is the one to choose when shooting video that will be edited in any way. If you require the final content to be 25i, just turn it into that at the end of the post processing.

Hacking auto-focus for film-like video

If you want to use Sony SLT auto focusing (which you do because it is one of the main reasons to use a SLT for DSLR video in the first place), then you initially look to be out of luck: auto-focusing doesn’t work in manual so you can’t auto focus at the same time as controlling shutter (and as we have seen above, keeping the shutter speed low is pretty much crucial for video that is easy on the eye). There is a sneaky fix though:  AEL lock works in video on an A77. So, to use auto-focusing and have enough control over aperture/shutter to get a nice film look, do the following:

  • Make sure you have Menu > Cog3 > Func of AEL button set to AEL toggle.
  • Put a variable ND filter on your lens. You do not need the $200 ones for video (because video resolutions are smaller than stills resolutions): a ten dollar one will do. However, the cheaper ND filters will affect sharpness, so be prepared to spend more if you have good glass and want to maintain picture quality. See https://www.youtube.com/watch?v=nECdBiu5Rrw for a good review of the price-quality tradeoff for variable ND filters. For those creating DSLR film on a budget, you may want to ignore the above review and go straight to a review of the cheapest available variable ND filter here: http://www.youtube.com/watch?v=jQ87WIn9Ulo
  • Select the Program Auto movie mode. The camera will pick an aperture of around f3.5 (assuming your lens can get there) which is close enough for what we need, and a shutter speed to give correct exposure (which will typically be too high).
  • Either cover the lens or point it at the sky until the shutter goes to 1/50 (you can also use exposure compensation, noting that you need to move exposure comp in the opposite direction to the one you want shutter to go in). Now press the AEL button. Shutter and aperture are now locked, yet you still have auto-focusing.
  • To control exposure, change the ND filter value.

Although this may seem a little complicated, it is far less complicated than what you have to do on most non-Sony DSLR cameras. On a Canon, you would have to control exposure with the ND and manually focus. Doing these two things at the same time is difficult enough that you need to rehearse the shot beforehand (and mark the focus points on the lens). You may even need two people to do it. Using an SLT, you control exposure with the ND, removing the need for an extra pair of hands to do the focusing (the ‘focus-puller’). This is important: with a Sony Alpha, you can shoot video hand held because you only need one person. You can also maintain focus with the camera on a Steadicam (or even strapped onto the side of a car).

Note that a variable ND filter is pretty much a requisite for outdoor video whatever you do: you are very likely to blow highlights on any DSLR camera without an ND because you are shooting wide with a low shutter.

Setting color

Getting back to our list, there are two ways to address point 3. The easiest is to simply make use of the creative styles (fn > Creative style, then pick a style that suits your movie). This has a big disadvantage though: you are very likely to saturate color, resulting in color banding (this looks very bad in video, and is to be avoided).


The footage to the top left is how the video looked out-of-camera via a ‘flat’ color profile. The footage top right is the final footage after post production. The initial flat state allows us to significantly change color without clipping.

The slightly more complicated but most often used method is to set a flat style in-camera (e.g. Fn > Creative Style > Neutral, dialing in negative values for contrast, saturation and sharpness, e.g. -2, -2, -3), then edit the flat video in post-production to get the final colors and ‘look’. Rather than using a flat style, many shooters choose the Sunset style, as it’s less likely to cause color banding than any other default style, and gives you a more natural looking starting point for post (especially for skin, which can be difficult to get right if you are starting with flat video).

The reason you don’t have to do this with stills is that stills can be shot in RAW. In video, you are effectively shooting JPEG, and you need to be careful not to clip color or exposure, because your raw footage will often require a lot of post-processing.

Cool SLT video hacks

If you intend to shoot in full manual (manual shutter, aperture and focusing), then there are a couple of useful A77 features that can really help out.

Firstly, when in manual focusing, you can force the camera to auto-focus before you start recording by pressing the AF/MF button. This is useful when you want to shoot live events: rather than having to quickly find focus before you start recording, just hit AF/MF. You can decide whether you want the AF/MF button to be a toggle or hold via Menu > Cog3 > AF/MF button (I have mine set to AF/MF Control Hold).

Another really cool feature of the A77 is the ability to drop into manual focusing with focus peaking. This is a very cool feature for stills as well as video. What does this do? For stills…

  • When you half press the stills button, the camera will auto-focus.
  • As soon as it finds focus it will drop to manual focusing and show you what is actually in focus via the peaking. You can tweak focus manually.
  • You can now take the picture. As you have changed priority to RELEASE, the camera is forced to take the photo even if it thinks you are no longer in focus (usually because you have manually changed focus onto something else).

For video, you do exactly as above so that when you are ready to start recording you know you are in focus (and if you are not, then you know what, if anything, is in focus via the peaking). How cool is that?

To set this up, turn focus peaking on (Menu > Cog2 > Peaking Level, setting it to anything other than OFF), then select Menu > Camera3 > AF-A setup and set it to DMF, and Menu > Camera3 > Priority setup to RELEASE. Finally, set the focusing mode dial (on an A77, it’s on the camera front, below the lens) to A.

Note that once you are shooting video, you cannot switch back to auto-focus if you are in A, S or M modes, but you can switch from auto to manual if you are shooting video in P mode, with the focus mode set to C (continuous).

If you want to enable the maximum auto functionality of the A77 whilst giving you enough control over aperture, shutter and manual focus to enable you to shoot film-like footage you need to:

  1. Shoot 25P full HD video.
  2. Select the P video mode, with focus mode C, using a lens that is fast enough to go down to at least f3.5, and with an attached ND filter.
  3. The camera will choose f3.5 or lower (f3.5 is good enough, but if you absolutely want to move aperture, you will need to use the trick in ‘Hacking auto-focus for film-like video’, last bullet point). The camera will also try to change shutter to maintain exposure (which you don’t want). To stop it changing shutter, use the AEL button trick to force the camera to choose 1/50s and stick with it.
  4. Then, when shooting video, control exposure with your variable ND filter. If you want to switch between manual and auto focusing during shooting, use the AF/MF button (setting it to be a toggle or control hold if you wish via Menu > Cog3 > AF/MF button).

This allows you to shoot film-like video with the option to switch between auto and manual focusing whilst the video is recording, something that no other camera can do (including an A77 with the stock settings!).

Okay, so now we have a camera set up for video, let’s have a look at a few issues and gotchas that you need to be aware of.

Transitioning from stills to video

A DSLR is optimized for stills, not video. Knowing the trade-offs this creates lets you circumvent them pretty easily. More subtly, your photography skills are also optimized for stills. You cannot assume that video is like taking a quick series of stills. You instead need to recognize the differences that shooting footage presents and make small changes to your assumptions…

Sharp focus is not as important in video as it is in stills

This is the single biggest problem I had to get my head around when transitioning from stills to film-like video. In stills, sharpness is crucially important because the subject has to be in focus.


Without actually seeing this video, you would not know what the subject is! It’s actually the out-of-focus figure, top right. Although it looks wrong in the still, it’s obvious in the video, because the figure is the only thing moving.

In video you often don’t care as much about sharpness because

  • Getting a film-like look often requires a soft (wide-open aperture) creaminess, and attempts to sharpen this (by stopping down the aperture) lose you the shallow depth of field. This results in video that looks like a live news stream rather than The Good, the Bad and the Ugly.
  • A lot of your footage’s ‘look’ is created by your choice of aperture and shutter. It is often more important to keep those consistent than to change them to keep things sharp.
  • In film, the convention is often for the subject to be the thing that is moving and not the thing that is sharpest in focus.
  • Many film treatments actually digitally de-focus to get their look. I have often found I have to add Gaussian blur to sharp footage to get the look I want!

Exposure stays the same in stills but changes in video

In stills, you only need to control your exposure for the instant you take your photograph. In video, you have to control the exposure across each take, and that can be problematic.

Consider taking landscape photographs on a hot sunny day. You might take a photograph of a field, then take another shot of the sky. A competent shooter can handle it by just stopping down for the sky shot (and perhaps adding a polarizer), so this presents no problem.

Now consider what happens if you want a video shot of the same thing, i.e. a pan from the field moving up to the sky. You are moving between two widely different exposure regimes but you can’t change shutter, because that would change the video’s ‘look’. You don’t want to change aperture, because that will change depth of focus and therefore might impinge on the ‘feel’ of the shot… and even if you are happy to accept that, lenses change aperture in steps so exposure will also change in discernible steps (which on video looks terrible).


In this video sequence, the camera is panning upwards to the sky, whilst at the same time a variable ND filter is being used to maintain exposure. Although the ND filter helps, it cannot easily be used if the exposure change is large… in the last image, the ND filter is keeping the sky from blowing out, but the foliage is now black: the change in exposure across the pan is too great.

Variable ND filters help out a lot here, but there are many outdoor shots where the exposure change is too great.

The trick is to take exposure into account when setting up your shots: if the change in exposure is too high, you have to split the shot out into two. This point escaped me for ages, and I instead wasted lots of time trying to fix the problem by editing in post. I now know that this was a workaround and not the solution!

Sensor Heating

With video, you are effectively taking 24 or 50 shots per second for minutes at a time. Your camera sensor will get hot! This limits the maximum time that you can shoot. Most cameras can capture a single take of about 7-10 minutes. The Sony A77 can go on for 30 minutes. Heating isn’t normally a problem unless you intend to shoot live events: you will then need two cameras. It may also be an issue if you do a lot of takes one after the other: you may have to wait a couple of minutes between takes (or accept that taking lots of video over a short time may heat up the sensor and give you more noise).

Electronic shutter

When taking stills, your camera uses a mechanical shutter that physically stops light reaching the sensor once the photograph has been taken. When you shoot DSLR video, that shutter is not fast enough to handle 24 or 50 movements per second, so it isn’t used at all. An electronic shutter is used instead.

A physical stills shutter closes when the photo has been exposed. The electronic shutter closes when the frame has been exposed and the sensor has been read.

So with video, if you suddenly move the camera while the sensor is halfway through a read, half the frame will show the movement, and half will not. A straight lamp post caught mid-frame in this fashion would appear to ‘wobble’ like jello, because different parts of the post are caught at different parts of the movement. Sony cameras can prevent this jello effect when it is caused by sudden camera movements (because of their SteadyShot sensor stabilization), but you will still see the full jello effect on a Sony if you are panning quickly.

You should avoid very fast panning if shooting with a DSLR (or at least do a few test shots and modify your panning speed accordingly to avoid ‘jello’).

Good Video Editing and Post production software can be expensive (but it can also be free)

If you are serious about video then you will need a good software package.


Premiere Pro CS6

I use Adobe Production Premium Pro, which includes Premiere, AfterEffects, SpeedGrade, and Audition. This can cost a lot of money. If you do not expect to be  creating commercial work for some time and you have friends or relatives in education, consider getting the student version. This is exactly the same as the full version: only the licence is different (you are not licensed to create commercial work). You can upgrade the licence if you ever start selling your videos.

If money is an issue, you can get Hollywood grade editing and color correction software for free via DaVinci Resolve and Lightworks. The ‘Hollywood grade’ bit is not an exaggeration either: both these applications are used in major Hollywood releases. See note one at the end of this blog for the links.

The AVCHD advice on the internet is out of date

All Sony Alphas except the A99 (as of this writing) output HD video in AVCHD only (the ’77 does .MOV, but not in full HD, and I’m not sure what codec the .MOV uses anyway). There are big advantages of using AVCHD (the video is highly compressed so you get a lot of footage on a single SD card). If you look on the internet, you will be advised to convert it to a non-compressed, high bit depth format (such as Cineform) before reworking (grading, coloring) your footage. There are three reasons for this.

  • When editing AVCHD, your computer has to decompress the video at every frame during edits or even during playback, and this can slow your computer or make playback choppy.
  • By converting to a format with higher bit depth, your color changes are more accurate, and you are less likely to see color or exposure clipping.
  • Some post production software doesn’t handle AVCHD well, resulting in choppy video.

These reasons are out of date if you are using Adobe Production Premium CS6 or later as your main editing suite:

  • Premiere Pro internally calculates each pixel in an effect or change at the maximum bit depth, and all final output can also be set to internally use the maximum bit depth available (usually 24 bit). There is no longer any advantage in color accuracy if you convert from AVCHD to another format for editing.
  • Most modern computers can now decompress AVCHD on the fly for playback, so there is no need to convert it to another format for editing. If you have a fast i5 or i7 based computer, you are probably covered.
  • Current versions of most editing software can now handle AVCHD smoothly (but see note 1 at the end of this blog entry).
  • Cineform and similar ‘intermediate-for-editing’ video formats are only lightly compressed, so the files are very large. Given that there is now little advantage in using them, all they will do is fill your hard drive really fast (see the rough size comparison sketched below).
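To put rough numbers on the hard drive point, here is a minimal sketch. The 28Mb/s figure is the camera’s own AVCHD rate; the intermediate rate is an assumption (roughly ten times larger, chosen purely for illustration, since intermediate codecs vary widely):

```python
# Back-of-envelope file sizes: camera AVCHD vs a higher bit rate editing
# intermediate. The 280Mb/s intermediate figure is an assumption for
# illustration; only the 28Mb/s AVCHD rate comes from the camera spec.

def gigabytes(bit_rate_mbps: float, minutes: float) -> float:
    """Approximate file size in GB for a given bit rate and duration."""
    bits = bit_rate_mbps * 1_000_000 * minutes * 60
    return bits / 8 / 1_000_000_000

MINUTES_OF_FOOTAGE = 60

print(f"AVCHD 28Mb/s, {MINUTES_OF_FOOTAGE} min: ~{gigabytes(28, MINUTES_OF_FOOTAGE):.0f} GB")
print(f"~280Mb/s intermediate, {MINUTES_OF_FOOTAGE} min: ~{gigabytes(280, MINUTES_OF_FOOTAGE):.0f} GB")
```

An hour of rushes goes from roughly 13GB straight out of the camera to well over 100GB once transcoded, for little practical gain.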

Finally, if you search on the web for A77 and AVCHD, you will find a lot of posts about having to convert your AVCHD. All of them are fake posts created by the software seller: they are trying to sell you software you most likely do not need!

Bit rates, broadcast quality, the web, and the A77

A Sony Alpha A77 can shoot HD video at 24Mb/s (24p) or 28Mb/s (50p). A fair question to ask is ‘how good is that?’.

Well, on the internet you will hear that ‘camera XXX can do 75Mb/s! Buy camera XXX!’ or ‘Camera YYY was used to shoot an episode of ‘House’, buy a YYY!’, and finally ‘Philip Bloom bought an A77 and sold it immediately: he still uses camera YYY, so it must be the one to go for…’.

Well, for starters, the footage for that episode of House needed significantly more post production than footage from a professional broadcast camera would have, so the cost benefit of using a DSLR was close to zero. Your camera footage is less than half the equation.

Secondly, file size is not a proxy for quality. You cannot compare a 2.5MB JPEG with a 25MB RAW file and conclude that the RAW file will be x10 sharper and x10 better quality because 25/2.5 = 10. The RAW will be better, but not by x10. The law of diminishing returns comes in pretty fast. In photographic terms, you are usually only 1/3 of a stop better off in RAW over JPEG, and the difference reduces to zero if you are simply printing the photographs. So much for x10 file size envy. And then there’s the fact that most DSLR video is shot pretty wide anyway: sharpness is important, but not as important as it is in stills. I’d take better dynamic range over better sharpness.

So a better question is ‘if an A77 is not good enough for broadcast off the bat, how far off the mark is it and what do I have to do to fix it?’. The most finicky broadcaster by far is the BBC. What they accept is decided on a case-by-case basis (live streams or current affairs can be much lower quality, and so on), but their general guideline for acceptable full HD shot in AVCHD is 56Mb/s or better.

So the A77 is capable of half the British HD broadcast bit rate. A hacked Panasonic GH won’t make it either, because the hack that increases bit rates also increases noise significantly… so if you’re waiting for an A77 firmware hack to give 60Mb/s, don’t hold your breath.
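If you want to see where the ‘half the broadcast quality’ arithmetic comes from, here is a crude per-pixel bit budget. It ignores how inter-frame compression actually spends those bits, so treat it as a rough comparison only:

```python
# Crude bit budget: average encoded bits available per pixel per frame.
# The frame sizes and rates below are the ones discussed in this post.

def bits_per_pixel(bit_rate_mbps: float, width: int, height: int, fps: float) -> float:
    """Average encoded bits per pixel per frame at a given bit rate."""
    return (bit_rate_mbps * 1_000_000) / (width * height * fps)

for label, rate_mbps, fps in [
    ("A77 28Mb/s at 50p", 28, 50),
    ("A77 24Mb/s at 24p", 24, 24),
    ("56Mb/s guideline at 25p", 56, 25),
]:
    print(f"{label:>24}: {bits_per_pixel(rate_mbps, 1920, 1080, fps):.2f} bits/pixel")
```

Note how the 50p mode has the thinnest per-frame budget of the three, which helps explain why heavily graded footage tends to show the strain there first.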

American broadcasters and many private broadcasters are far less snobby, and I’m guessing that most DSLR footage will be acceptable after tweaks. But that is really beside the point: if you want to create an indie DSLR film or a TV documentary with your DSLR, go for it, but be aware that the biggest issue is not how you get your footage but your post processing: be prepared to spend a lot of time in post production!

My advice is to forget bit rates and just go for it: any current DSLR can produce more than enough quality for the web (or, for that matter, weddings and other social events) using only stills lenses, a couple of ND filters, and a standard desktop. If that goes well and you stand out, then things will move forward on their own. Worst case: someone will hire you and tell you to get better footage, and you’ll just go out and rent a pro camera, or learn grading in a hurry and then spend a week in post.

Audio

Although video is pretty good on a DSLR, sound is less so. The inbuilt mic will pick up camera noise, especially if you do not have a silent HSM (Hyper Sonic Motor) lens. Even with an HSM lens, your mic will pick up button clicks, strap movement, and (because a camera mic is not very directional) even your breath. Poor audio can drag your video down, so I recommend you look at something other than the inbuilt camera microphone. As well as recording sound, you will often want a musical score accompanying your footage. We will look at sound in a later post.

Conclusion

Although Canon cameras are well covered by the DSLR video community, Sony-specific information is harder to find. After reading this post, you should now have the Sony-specific information needed to shoot video that looks like film.

After so much information, you will probably be thinking ‘woah, that was a lot to get through’. But if you think about it, it all boils down to this:

Shoot at anything close to 25p, with the shutter set at 1/50s and the aperture at f2-f4, and don’t go outside these settings. Control your exposure with a cheap variable ND filter so you don’t clip the highlights. Use a color profile that doesn’t sharpen and that desaturates your color a bit so you don’t clip color. Don’t let your camera sensor get too hot, and don’t pan so fast that you get the ‘jello’ effect.
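If you want the arithmetic behind the two headline numbers in that summary (the 1/50s shutter at 25p, and how strong an ND filter you need), here is a minimal sketch; the 1/4000s value is just an assumed bright-daylight stills shutter speed for illustration:

```python
import math

def video_shutter(frame_rate: float) -> float:
    """'180 degree' rule of thumb: expose each frame for half the frame interval."""
    return 1.0 / (2 * frame_rate)

def nd_stops_needed(stills_shutter_s: float, video_shutter_s: float) -> float:
    """Stops of light to cut so the slower video shutter keeps the same exposure."""
    return math.log2(video_shutter_s / stills_shutter_s)

shutter = video_shutter(25)                                   # 25p -> 1/50s
print(f"25p shutter: 1/{round(1 / shutter)}s")
print(f"ND needed vs an assumed 1/4000s daylight exposure: "
      f"{nd_stops_needed(1 / 4000, shutter):.1f} stops")
```

A bit over six stops, which is comfortably within the range of a typical variable ND filter.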

Apart from knowing how to trick your A77 into doing the above at the same time as auto-focus, nothing else really matters. Forget bit rate. Forget a high shutter or small aperture. Most of this blog post was about explaining why none of the other things you hear discussed on the internet really matter!

Hold tight for the next post. I promise I’ll try to make it shorter than this one!

Part 2 here.

Notes

  1. For post production, you will need an editor and color correcting software. Both can be had for free. First, get DaVinci Resolve from http://www.blackmagicdesign.com/products/davinciresolve and download the Lite version (Lite means full-featured, but you are ‘limited’ to full HD!). DaVinci Resolve is used in big Hollywood productions and, hard as it is to believe, it’s free if you are a DSLR video shooter. Then get Lightworks (http://www.lwks.com/), another piece of software used in Hollywood post production that is also free in a Lite version. As of this writing, the free download page is here: http://www.lwks.com/index.php?option=com_content&view=article&id=45&Itemid=184
  2. When moving from manual to auto-focus in video, the aperture and shutter will only stay the same while the video is running. If you try it when video is not running, the aperture and shutter will change in manual mode. Don’t worry about it, because it works once video is running. If you have an old A77 firmware revision (before v1.07), you may have to set the aperture and shutter in Manual stills mode to match your video auto AEL-locked values (sounds complicated, but it will make sense if you ever encounter the issue).
  3. If you use the trial version of Adobe Premiere Pro CS6 to edit your AVCHD SLT video, you will see choppy AVCHD playback. This is because the required software (AVCHD codecs) is not included in the trial (Adobe would have to pay Sony a royalty for each user, and they don’t want to pay for trial users). You need the full version of Premiere to prevent this issue. If you are using the ‘free’ Adobe CS2 download (which includes Premiere CS2), shoot in MP4 (Menu > film1 > File Format, changing it to MP4), as CS2 probably won’t know what AVCHD is.
  4. The single most important piece of hardware you need to transition from stills to video is a variable ND filter. Get one for each lens you intend to use for video. You don’t need the expensive ones (video is lower resolution than stills, so you won’t notice the difference in quality, and even 1080p video will not pick up the black ‘X’ that you see in high resolution stills), so go for the cheapest you can find.
  5. As an advanced-user aside, there are some situations where you might be shooting video and would like to switch instantaneously between two different exposure or shutter settings without stopping the video or changing them manually. You would, for example, want to move from a slow shutter to a fast shutter if you intended to use an effect such as Twixtor for part of your footage (such as the instant a point is scored in a sports shoot). You could also select two sets of exposure for ‘shadow’ and ‘bright’ if your scene had two very different exposure ranges (e.g. ‘into sun’ and ‘away from sun’) and you absolutely had to use one take (e.g. the scene you are shooting is live and can’t be stopped). Selecting a different exposure between manual stills mode and auto video mode with the AEL hack, then switching between the two exposures during rolling video with AF/MF, lets you do this with a single button press on the A77. It is such a useful hack that it should really be upgraded and called a ‘feature’!