During the summer, Blackmagic reduced the price of their Blackmagic Pocket Cinema Camera (BMPCC) by half.
This was a limited offer, so I ordered straight away. Remorse started sinking in when I read the reviews whilst waiting for delivery: it’s a difficult camera to work with (it is, but it’s also very rewarding), there’s a massive 2.88x crop (there is, but focal reducers are now cheap and decent), and worst of all, it’s very difficult to grade the video.
Well, there are fixes for the first two, and the third is a blatant lie: grading log footage is not difficult as long as you get your white balance right before you do any other grading or color correction. This point is crucial.
Assuming good white balance, all you have to do is push the saturation and add a LUT (lookup table), or even just use an auto color correction to get you in the right ballpark. Anyway, shown above is my very first attempt with the BMPCC. Notice the sky and ground are well exposed in every shot. That’s the power of high dynamic range plus a good codec (CinemaDNG or ProRes).
There are loads of tutorials and reviews on adapting Canon or Nikon lenses to the BMPCC, but I could find nothing for Sony Alpha. Well, that’s about to change. Watch this space!
The video was shot hand held using only the Panasonic 14mm f2.5, shot wide open to get a de-sharpened, analog feel.
The source footage is ProResHQ (2.45GB for about 2.5 minutes of footage) and this was edited using only Premiere Pro with the Colorista 2 and Tiffen Dfx plugins. No LUTs or presets were used.
The slow motion effect was added in Premiere via Twixtor. The text motion graphics were created using the standard Premiere animation, masking and blur tools (After Effects was not used).
In a previous post, I discussed using a Tiffen Low Contrast filter when filming with an AVCHD enabled camera. I didn’t illustrate the point with any of my test footage.
Here it is.
Low contrast filters and video encoders
To recap, we use low contrast filters in AVCHD DSLR video because AVCHD compresses footage using a perceptual filter: what your eyes can’t perceive gets the chop in the quest for smaller file size. Our eyes cannot see into shadow, so AVCHD ignores (filters out and discards) most of the shadow data. AVCHD knows we can’t see the difference between small variations in color, so it removes such slight differences and replaces them with a single color.
That’s fine if you will not be editing the footage (because your eye will never see the difference), but if you do any post processing involving exposing up the footage, the missing information shows up through macro blocking or color banding. To fix this, we can do one of three things:
Use a low contrast filter. This works by taking ambient light and adding it to shadows, thus lifting the shadows up towards mid-tones and tricking AVCHD into leaving them alone. The low contrast filter thus gives us more information in shadow not because it adds more information in the shadows itself, but because it forces the AVCHD encoder to leave information that the encoder would otherwise discard.
Use Apical Iridix. This goes under different names (e.g. Dynamic Range Optimisation or DRO for Sony and i.Dynamic for Panasonic), but is available on most DSLRs and advanced compacts. It is a digital version of a low contrast filter (it’s actually a real-time tone mapping algorithm). It works by lightening blacks and preserving highlights. Although it again doesn’t add any new information of itself, Iridix sits before the AVCHD encoder, so it too can force the encoder to leave shadow detail intact.
Use both a low contrast filter and Iridix together.
The video uses the third option.
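The shadow-crushing problem that the three fixes above address can be sketched numerically. Below is a minimal Python illustration, and only an illustration: real AVCHD quantizes in the DCT domain and is far more complex than a straight 8-bit round. The point survives the simplification, though: quantize a smooth shadow gradient first and lift it in post, and you are left with far fewer distinct tonal levels (banding) than if the shadows were lifted before encoding, which is effectively what the filter or Iridix does.

```python
import numpy as np

# A smooth shadow gradient in linear light (0..0.1, i.e. deep shadow).
gradient = np.linspace(0.0, 0.1, 1000)

# Case 1: encode to 8 bits as-is (the encoder "crushes" the shadows),
# then lift exposure by 3 stops (x8) in post.
encoded = np.round(gradient * 255).astype(np.uint8)
lifted_after = np.clip(encoded / 255.0 * 8.0, 0, 1)

# Case 2: lift the shadows BEFORE encoding (roughly what a low
# contrast filter or Iridix does), then encode to 8 bits.
lifted_before = np.round(np.clip(gradient * 8.0, 0, 1) * 255).astype(np.uint8)

# Count the distinct tonal levels that survive in each case: fewer
# levels after lifting in post means visible banding.
print(len(np.unique(lifted_after)))   # few levels -> banding
print(len(np.unique(lifted_before)))  # many more levels -> smooth
```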
Deconstructing the video
The low contrast filter allows us to see detail, even though it is in full shadow
The video consists of three short scenes. They were taken with a Panasonic Lumix LX7, 28Mbit/s AVCHD at 1080p and 50fps (exported from Premiere as 25fps with frame blending), manual exposure, custom photo style (-1, -1, -1, 0). It was shot hand held and stabilized in post (Adobe Premiere Warp Stabilizer). The important settings were:
ISO set to ISO80 and
i.Dynamic set to HIGH.
I chose the lowest ISO so that I could set i.Dynamic high without it causing noise when it lifts the shadows.
The camera had a 37mm filter thread attached and a 37-52mm step-up ring, on which were attached a Tiffen low contrast filter and a variable neutral density filter. The reason I used two 52mm filters (i.e. bigger than the lens diameter) rather than two 37mm filters is that stacking filters can cause vignetting unless you step up as I have done.
Here are the three scenes. The left side is as-shot, the right side is after post processing. Click on the images to see larger versions.
Notice in this scene that the low contrast filter is keeping the blacks lifted. This prevents macro-blocking. Also note that the highlights have a film-like roll off. Again, this is caused by the low contrast filter. The Variable ND filter is also working hard in this scene: the little white disk in the top right is the sun, and it and the sky around it were too bright to look at!
Scene 2 is shot directly at the sun, and you would typically end up with a white sky and properly exposed rocks/tree, or a properly exposed sky and black rocks/tree. The low contrast filter and Iridix (i.Dynamic) give us enough properly exposed sky and shadow to enable us to fix it in post. Nevertheless, we are at the limits of what 28Mbit/s AVCHD can give us. The sky is beginning to macro block, and the branches are showing moiré. I shot it all this way to give us an extreme example, but would more normally shoot with the sun oblique to the camera rather than straight at it.
Scene 3 is a more typical shot. We are in full sun and there is a lot of shadow. The low contrast filter allows us to see detail in the far rock (top right) even though it is in full shadow. It also stops our blacks from clipping, which is important because near-black holds a lot of hidden color. For example, the large shift from straw to lush grass was not done by increasing green saturation in the mid-tones, but in the shadows. If you want to make large color changes, make them in the shadows, because making the same changes in the mid-tones looks far less natural (too vivid). Of course, if we didn’t use a low contrast filter to protect our blacks (and therefore the color they hold) from clipping, we would not have the option to raise shadow colors!
Shooting flat is something you should do if you will be post editing your video footage. Many cameras do not allow you to shoot flat enough, and to get around this, you can use either a Tiffen Low contrast filter, or the camera’s inbuilt Apical Iridix feature. To maximise the effect, you can use both, as illustrated in this example.
The main advantages of using a low contrast filter are:
Protects blacks from clipping, thus preventing shadows from macro-blocking and preserving dark color. The latter is important if you are going to make substantial color correction in post because raising shadow color usually results in much more natural edits.
Better highlight roll-off. The effect looks more like film than digital (digital sensors have a hard cut-off rather than a roll-off).
Lower contrast that looks like film. Although many people add lots of contrast (i.e. dark, blue blacks) to their footage, true film actually has very few true blacks. The low contrast filter gives this more realistic look.
Removes digital sharpness and ‘baked-in’ color. Many cameras cannot shoot as flat as we would like, and produce footage that is obviously digital because of its sharpness (especially true of the Panasonic GHx cameras). Adding a low contrast filter helps mitigate these issues.
The main disadvantages of using a low contrast filter/Apical Iridix are:
The filter loses you about 1/3 stop of light.
You usually have to use the low contrast filter along with a variable ND filter (which you need to control exposure). Both filters introduce optical defects beyond their intended function (possible vignetting from stacking filters, some loss of sharpness). However, remember that video is shot at a much lower resolution than stills, so the sharpness effects will be much smaller. You can eliminate vignetting by using larger filters and a step-up ring.
Apical Iridix will increase shadow noise. Use it at maximum only with very low (typically base) ISO.
To answer a reader query about whether using a low contrast filter is a viable alternative to shooting log footage, I have added the following section.
Comparison of low contrast filter vs log footage
The three pictures are frames from three movies shot with (top to bottom) rec709, rec709 plus low contrast 3 filter, and log.
All were shot on a camera capable of shooting both rec709 and log (a BMPCC), using a high bitrate (ProRes HQ, which is about 200Mbit/s) so that there are no codec effects to confuse the issue. The lens was an SLR Magic 12mm, set wide open (T1.6, which is about f1.4), no ND.
The rec709 frame is what you would see on all DSLRs that do not support log.
Once you add the low contrast filter, you see the highlights spread out across the frame so that the darks (lows) and mids (gamma) are lifted, and because the highlights are spread out, they also dim slightly. The effect is, however, subtle, and depends on the light: if there are no bright highlights in your shot (i.e. no sky or bright ambient light), the low contrast filter will not work.
Looking at the log frame, you see that all color is subdued, and this is always the case (it does not depend on there being highlights in the shot). There is hardly any green (or in fact, any color) at all. The image also loses a lot more contrast, and although you cannot see it, there is also no sharpening at all.
We can also look at this more objectively by examining the YC waveforms:
The YC waveform shows you the brightness of the image from left to right (why video-enabled cameras don’t offer it as an option over the photographer’s histogram I don’t know; the YC is much more useful because it shows you not only that your image is clipping, but where it is happening!).
You can see here that the filter has a slight effect in reducing contrast (the blacks are lifted slightly and the lights are dropped slightly), but the log footage has a much more pronounced effect. The height of the YC waveform represents contrast, and you can see immediately that log footage has much less contrast. Although this results in very subdued looking footage, it actually enables you to push your footage harder in post without it breaking up, and also gives you a much more neutral starting point for your post work, preventing your footage from looking ‘similar to everyone else with the same camera’.
So, although a low contrast filter does reduce contrast by lifting the blacks and dropping the lights, it does not have as large an effect as log footage.
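The shadow-lifting difference can be made concrete by comparing transfer curves. Here is an illustrative Python sketch: the Rec. 709 OETF is the standard formula, but note that `generic_log` is a stand-in curve of my own choosing, not Blackmagic’s actual Film curve (which is proprietary). It is only there to show the shape of the effect.

```python
import math

def rec709_oetf(L):
    """Rec. 709 transfer function: scene-linear L (0..1) -> code value."""
    if L < 0.018:
        return 4.5 * L
    return 1.099 * L ** 0.45 - 0.099

def generic_log(L, a=250.0):
    """An illustrative log curve (NOT Blackmagic's proprietary Film
    curve) -- just to show how log encoding lifts the shadows."""
    return math.log(1 + a * L) / math.log(1 + a)

# A deep shadow at 1% scene brightness:
L = 0.01
print(round(rec709_oetf(L), 3))   # sits very close to black
print(round(generic_log(L), 3))   # lifted well above black
```

The log curve devotes far more code values to the shadows, which is exactly why log footage looks washed out straight from the camera yet grades so much further before banding.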
Log footage also gives you other things that make your footage more gradable in post, such as:
Reduces color saturation considerably, allowing you to boost color farther in post without it breaking up (banding).
Removes all color styling, allowing you more creative options to create your own styles to match your production.
Gives you access to better and more flexible third party look up tables (LUTs)
Removes all in-camera sharpening and other in-camera processing (and if you are shooting raw it also removes the effect of color temperature: the color temperature is saved only as metadata rather than permanently baked into your footage data). The lack of sharpening allows you to make better selections, and to add superior sharpening in post (a desktop generally has better sharpening algorithms than your camera).
Allows you to learn proper grading, because you are no longer relying on a ‘nearly there’ rec709 output, but instead start with neutral log footage.
I quickly edited the log footage towards a final look. Here’s where I got to after a couple of minutes with Premiere and Colorista II:
Here’s what I did:
Selected the background and reduced its clarity and made it colder (this is to visually separate it from the plant in the foreground and make the background less busy). Having unsharpened footage enabled me to quickly make the selection to do this (bear in mind that this is a selection on a moving image; sharpening gives you halos and all sorts of trouble as sharpened edges move!).
Boosted the greens and yellows of the plant, overall exposure, and highlights.
Sharpened the plant by increasing its clarity slightly.
Note that using log footage does increase your effective dynamic range but doesn’t stop you clipping; I did not use a variable ND filter in this test, and that caused clipping in all the footage! However, you can see that the dynamic range of the final edited image looks better than the rec709 footage you would get out of camera, and rec709 footage probably could not be pushed as far as the image above without causing banding on the deep greens and/or a nasty transition at the boosted leaf highlights.
After writing my last blog post, I realised there was no video showing my tips on AVCHD editing being used in anger. This quick post puts that right. You can see the associated video here or by viewing the video below (I recommend you watch it full screen at 1920×1080).
Note that the YouTube version is compressed from the original 28Mbit/s to 8Mbit/s (as are most web videos).
Note also that I don’t use a Sony Alpha A77 for the footage in this post: I use a Panasonic Lumix LX7 because I was traveling light, and the LX7 is my ‘DSLR replacement’ camera of choice. Both cameras use the same video codec and bitrates, so there is not much difference when we come to post production, except that the Sony Alphas give shallower depth of field and are therefore more ‘film like’, whereas the LX7 will produce sharper video that is less ‘filmic’.
Changing the time of day with post production
My partner and I were recently walking on Bingley moor (in Yorkshire, England, close to Haworth and Halifax, places associated with Emily Brontë and Wuthering Heights).
It was about an hour before sunset, and I thought it would be nice to capture the setting sun in video.
Alas, we were too early, and the recordings looked nothing like what I wanted.
A couple of weeks later we were walking in the same place in the early morning. I took some footage of the nearby glen (glen – UK English: a deep narrow valley). So now I had some footage of the moor and glen in evening and morning sun, but no sunset footage. Not to worry: I could just add the sunset via post production.
If nothing else, it would make a good example of how AVCHD footage can take large tone/color corrections without running into issues, provided you follow the handy tips from the last post!
The original footage
As per the tips in the previous post, I did the following whilst shooting the original footage:
Shot the footage using a fixed shutter and aperture, and varied exposure using a variable neutral density filter. Reason: as a general rule, shoot all footage at a fixed aperture (typically around f3.5, going wider if you want shallow depth of field, or narrower if your subject is in the distance) and a fixed shutter (typically 1/(2 × frame rate), e.g. 1/50s for 25fps footage). Control your exposure via a variable ND filter.
Set the camera to record desaturated, low contrast and un-sharpened footage. Reason: this gives your footage more latitude for change in post-production.
Exposed the footage slightly brighter than I needed, being mindful of burning highlights. Reason: AVCHD tends to break up or produce artifacts if you increase exposure in post, but not if you decrease it.
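The fixed-shutter rule and the ND strength it implies are simple arithmetic. Here is an illustrative sketch; the sunny-16 figure is only a rule-of-thumb stand-in for a real meter reading, so treat the numbers as a starting point rather than a recipe:

```python
import math

# Shutter rule: shutter speed = 1 / (2 x frame rate).
frame_rate = 50
shutter = 1 / (2 * frame_rate)            # 0.01 s, i.e. 1/100

# How many stops of ND are needed to hold f/3.5 and 1/100s in
# sunny-16 light at ISO 80?  EV = log2(N^2 / t); the EV difference
# between the scene and our fixed settings is the ND strength.
ev_scene  = math.log2(16**2 * 80)         # sunny 16 at ISO 80: f/16, 1/80s
ev_target = math.log2(3.5**2 / shutter)   # our fixed video settings
nd_stops  = ev_scene - ev_target
print(round(nd_stops, 1))                 # ~4 stops of ND
```

This is why a variable ND is essentially mandatory for daylight video at wide-ish apertures: several stops have to go somewhere, and the shutter and aperture are spoken for.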
Color post production has two workflow areas
Color correction, or correction of faults in the footage. A bucket could be too red, or a sky may need to be more blue. Correction is done on a per-clip basis, correcting color/tone issues or adding emphasis/de-emphasis to areas within the scene. Framing and stabilization are also performed per clip. As an aside, this is why the left side of the footage seems to wobble more in the video: the right side has been stabilized with the inbuilt Adobe Premiere stabilization plugin.
Grading, or setting the look of the final film. Grading is applied equally to all clips and sets the final style.
Here’s a quick run through of the corrections:
Top image. As-shot.
Second image. Added an emulated Tiffen Glimmerglass filter. This diffuses the sharp water highlights and softens the video a little (I would not have had to do this if I was shooting with my Sony Alpha A77, and you would not have to soften the video on a traditional Canon/Nikon DSLR either, as all of them produce soft video). I also added a Fast Color Corrector to fix a few color issues specific to the clip (white and black point, cast removal).
Third image. Added a warm red gradient to the foliage top left to bottom right. The shape and coverage of the gradient is shown in the inset (white is gradient, black is transparent).
Fourth image. Added a second gradient, this time a yellow one going from bottom to top. Again, the shape and coverage of the gradient is shown in the inset.
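A gradient correction of this kind amounts to blending a color shift through a mask. Here is a minimal NumPy sketch of the idea; the frame, the mask shape and the color offsets are all illustrative stand-ins, not the actual plugin settings used in the video:

```python
import numpy as np

h, w = 100, 200
frame = np.full((h, w, 3), 0.5)          # stand-in for a footage frame

# A diagonal mask: 0 (no effect) at top-left, ramping to 1 (full
# effect) at bottom-right, like the inset mask images in the post.
yy, xx = np.mgrid[0:h, 0:w]
mask = ((xx / w) + (yy / h)) / 2.0

# Warm the masked region: push red up and blue down, scaled by the mask.
warmed = frame.copy()
warmed[..., 0] += 0.15 * mask            # red channel
warmed[..., 2] -= 0.10 * mask            # blue channel
warmed = np.clip(warmed, 0.0, 1.0)
```

Because the adjustment is multiplied by the mask, the shift fades smoothly to nothing, which is what stops a graded region from showing an obvious edge.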
For this footage, I used an emulated film stock via Tiffen Dfx. The stock is Agfa Optima. I also added back a little bit of global saturation and sharpness using the default Adobe Premiere tools (Fast color corrector and unsharp mask).
Top Image. Corrected footage so far minus the two gradients
Middle image. Grading and global tweaks (Agfa Optima stock emulation plus global color tweaks and sharpness).
Bottom Image. Adding the two gradients back for the final output.
Merging color correction and grading
Combining the two color change tasks (grading and color correction) is a bit of a black art, and I do both together. Generally, I start by picking an existing film stock from Tiffen Dfx or Magic Bullet Looks as an adjustment layer. Then, I start adding the clips, color correcting each in turn, and switching the grading adjustment layer in/out as I go. Finally, I add a new adjustment layer for sharpness and final global tweaks. I avoid adding noise reduction as it massively increases render time. Instead, I add a grain that hides noise as part of the grading.
Reality vs. Message
Color correction and grading are often used to promote a style, ambiance or ‘look’ rather than reflect reality. You want to meet the viewer’s ideal expectations, not boring reality.
The video includes this frame. The leaves in the water are red to signify the time of year (autumn/fall).
Real leaves in water lose their color quickly, becoming much more muddy in appearance. I enhanced the muddy leaves towards earth-reds because ‘muddy’ did not fit with my message, even though rotting grey leaves are closer to reality.
Here’s the timeline for the project.
I have my color adjustment and grading as separate adjustment layers (V2/V3). The first half of the timeline is more or less identical to the second half, except that the second half has the unedited versions of the clips on layer V4. These clips have a Crop effect (Video Effects > Transform > Crop) applied with the Right value set to 50%. This is how I get the edited/unedited split-screens at the end of the video.
When adding backing sound, the music file is rarely the same length as the video, so to make the two match, I often use this simple trick to shorten the music:
Put the music clip on the timeline so that the start of the music lines up with the start of the footage, and
On a different sound layer below, put another copy of the same music, this time with the end of the music lined up with the end of the video.
Make the two sound clips overlap in the middle, and where they overlap, zoom into the waveforms and match the percussive sounds (generally the drums).
Fade between the two sounds on the overlap.
In the timeline section above, I have matched (lined up) the drum sounds on the A1 and A2 music clips (both of which are different sections of the same music file), then faded from A1 to A2 by tweening the volume level. This produces a smooth splice between the two music sections. If space allows, you should of course also match on the end of the bar (or ‘on the beat repeat’). For my timeline, you can see (via the previous ‘Project timeline’ screenshot) that I have spliced between four sections of the same file.
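The overlap-and-fade trick is, at heart, an equal-gain crossfade. Here is a minimal NumPy illustration; the constant-value arrays are stand-ins for the two music sections (a real splice would of course use the beat-matched audio):

```python
import numpy as np

sr = 48000                               # sample rate
overlap = sr * 2                         # 2-second crossfade region

# Stand-ins for the two timeline clips (10 seconds each).
clip_a = np.full(sr * 10, 0.5)           # clip that ends in the overlap
clip_b = np.full(sr * 10, 0.5)           # clip that starts in the overlap

# Equal-gain linear fade across the overlap: A fades out, B fades in.
fade = np.linspace(1.0, 0.0, overlap)
spliced = np.concatenate([
    clip_a[:-overlap],
    clip_a[-overlap:] * fade + clip_b[:overlap] * (1.0 - fade),
    clip_b[overlap:],
])

# The spliced result is shorter than the two clips laid end to end
# by exactly the overlap -- which is how the music is shortened.
print(len(spliced) == len(clip_a) + len(clip_b) - overlap)  # True
```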
During color correction I kept an eye on the YC waveform scope, which is available in Premiere and most video editing applications.
The YC waveform shows both luma (Y, or brightness) and chroma (C, or color). Luma is cyan and chroma is dark blue on the scope.
The x-axis of the scope is the x-axis of the footage, so the scope’s y-axis shows the YC values for each column across the width of the footage. It sounds a bit complicated, but if you use the waveform on your own footage it becomes immediately obvious what it represents.
For the broadcast standard I am using (European, PAL), true black is 0.3V on the scale and true white is 1.0V (NTSC is very similar). The original footage is shown on the left side of the image, with the corresponding YC waveform below it. The waveform shows that highlights in the sky area are clipping (we see pixels above 1.0V), and the darkest areas are not true black (the waveform doesn’t get down to 0.3V). The right side of the image shows the final footage: we now have no clipping (in either brightness or color saturation), and our blacks are closer to true black.
Keeping an eye on the YC waveform is something I always do when editing color. You may think your eye is good enough, but your vision tires or becomes so used to a color that it no longer recognises casts; the scope never tires and never lies! Another useful scope, for skin tones, is the vectorscope. Something for another post…
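The waveform’s key property, keeping the x position of every pixel where a histogram throws it away, is easy to sketch. Here is an illustrative NumPy example on a synthetic frame; it uses the standard Rec. 709 luma coefficients, and a per-column maximum as a crude stand-in for the full per-column distribution a real scope plots:

```python
import numpy as np

# A tiny synthetic "frame": dark ground on the left, bright sky on
# the right (values are normalised 0..1 RGB).
frame = np.zeros((100, 200, 3))
frame[:, :100] = 0.1                     # shadows in columns 0..99
frame[:, 100:] = 0.95                    # bright sky in columns 100..199

# Rec. 709 luma from RGB.
luma = (0.2126 * frame[..., 0] +
        0.7152 * frame[..., 1] +
        0.0722 * frame[..., 2])

# A waveform plots, for each image column (x), the luma values in
# that column.  Taking the per-column max shows WHERE near-clipped
# pixels are -- something a histogram cannot tell you.
column_max = luma.max(axis=0)
print(np.argwhere(column_max > 0.9).min())  # first "hot" column: 100
```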
This post shows the workflow I used to correct a small number of clips such that they could be added together into a single scene. The final movie shows a typical English autumn sunset (or at least, one where you can see the sun!) yet none of the clips were actually taken at this time of day nor under the lighting conditions of the final scene.
By manipulating the color of our footage via color correction and grading, we achieved our desired output despite the constraints of reality on the day!
Finally, by following a few additional steps and rules of thumb whilst shooting and editing the AVCHD footage, we have avoided coming up against its limitations. In fact, the only place in the video where you may see any artifacts is the one place where I did not follow my own advice: at about 0:30, the footage has its exposure increased slightly and shows small artifacts in the shadows.
You can see all previous video related posts from this blog here.
The music in the video is Spc-Eco, Telling You. Spotify Link.
The YC graph is so much more useful than the histogram seen on most stills cameras that I often wonder why digital cameras don’t have the YC waveform instead! For example, the YC waveform not only tells you whether your image has clipped pixels, but unlike the Histogram, the YC tells you where along the width of the image those pixels are! You can still ‘shoot to the right’ using the YC (and it actually makes more sense) since brightness is Luma height. The YC also separates out brightness and color information, so you can see at a glance the tonality and color information within your photograph in a single visual. How’s that for useful!