Magnolia: Minolta 50/f1.4

Magnolia 1 (click for larger image)

The Minolta 50mm f1.4 is one lens that every Sony Alpha and Micro Four Thirds videographer should consider getting. It’s cheap on eBay (it’s one of the most affordable f1.4 lenses on any system), and it produces typical 1980s Minolta color and bokeh, with a wonderfully shallow depth of field wide open.

Magnolia 2 (click for larger image)

It suffers from the usual issues of 1980s glass: chromatic aberration and veiling haze wide open, less contrast than modern equivalents, and a chance of fungus. The chromatic aberration is easily removed in Lightroom (or is a non-issue at video resolutions), and the lens becomes very sharp from f2.8 onwards. Most issues have disappeared entirely by f3.5.

The less adventurous APS-C user would be better off with the cheap crop-frame 50mm primes that Sony, Canon, Nikon and almost every other DSLR range offer (for Sony, it’s the 50mm f1.8 DT). Despite the issues of the Minolta f1.4 wide open, that’s exactly where I tend to use it, as in the photographs above.

Five ways to be better

This would have been a good shot for me 5 years ago. Now, it’s pretty much my minimum standard for handheld. This article explains how I am getting better at this.

We all do photography for various reasons, but there is one thing that we all want. We want to be better. Here’s how I intend to be better.

One: Finish Everything.

Whatever you do, finish it. The only thing you ever waste is uncompleted work.

If you have an idea that you don’t want to expand, at least share it as a complete concept that others can pick up. If you take a series of bad photographs, compare them to your best work and reach closure by knowing what went wrong.

Two: Leave Physical Marks.

Every year, select your single best photograph. Get it printed on a canvas. Sign it on the front and date it on the back.

We live in a digital, social world, and it is easy to believe that the online world is where your final work sits. It doesn’t.

Print your best photograph for posterity every year. Canvas prints don’t even cost much anymore (even archival canvas prints are now cheap). Over time, they will become your body of work: a display of your best stuff. More importantly, you will have this final medium in mind whenever you raise the camera, and will start thinking ‘what do I have to do to make this one my best of the year?’. Competing with yourself is the best game because there are no losers.

Even if you do not intend to earn a living or find fame through photography, those canvases will go to your family when you die: something for them to remember you by, through what you did. Forget Facebook and Instagram: those canvases will become the real social media.

Three: Set a Personal Standard

If you want to get better at photography, you have to set a standard.

Here’s mine:

  1. The main subject must be sharp unless there is a good reason for it to be otherwise.
  2. The main subject must be within 0.3 EV of correct exposure out-of-camera.
  3. I must be able to explain the composition to myself.
  4. I must be able to explain the artistic direction or theme to myself.

Here’s an example of this:

It may only be a hand-held, quick snap of my partner eating a sandwich, but it’s sharp, well exposed, and I could easily tell you why I’ve composed the main subject as I have and what I am trying to achieve. Panasonic Lumix LX7.

The left eye is the focus of interest and, unsurprisingly, it is in focus, well exposed and sharp, despite being in the shadow of the hat.

OK, my four rules are not much of a standard, but they do something crucial. They allow me to recognize an unsuccessful photograph through structured self-criticism.

Out of camera photograph, Olympus Stylus 1

Have a look at this photograph. Lots of mistakes here! A few years ago, I would have discarded it because I can see my thumb (bottom left), the scene is crooked and the lighting is a dull grey. Despite that, I now know it meets rather than breaks my standard: it’s a keeper.

I know I took this shot specifically because the lighting is grey (I can easily re-color or even replace it in post because it is so uniform), and the composition is sound despite the camera being crooked. The finished shot is the first photograph in this article, and it is exactly what I was thinking of when I took it. In fact, the big difference between me as a photographer now and five years ago is that I can now see the final, post-edited shot when I raise my camera (because points 3 and 4 of my standard are always at the back of my mind).

Four: Upgrade skills, not equipment

Upgrade your camera only when it is preventing you from completing a project. Don’t upgrade simply because there is a newer model.

Upgrading hardware is a fool’s errand: there is always something better out there. Instead, identify the skills you need, and buy the equipment that facilitates them. Besides, unless a client has high-spec deliverable requirements, a good photographer can get good results from almost any equipment, irrespective of the price tag and manufacturer badge.

My main DSLR is an APS-C camera (a Sony Alpha A77). Full-frame cameras are now cheap, so perhaps I should upgrade to one of those. Thinking about future work, though, I realize I am weak on lighting skills, so I have instead kept the A77 and bought a set of flashes and modifiers/stands, the cost of which would have got me a second-hand Canon 5D or Sony A7R. That full-frame camera would be obsolete in two years, but a new set of photography skills created by investing in lighting will never go out of fashion.

Lighting is key to this shot

Consider the photograph above. There’s direct sunlight coming in from the right, the subject is in shadow, and I only had a prehistoric HTC Desire cameraphone with very poor dynamic range and awful high-ISO performance. I made it work by reflecting sunlight into the shadow from my red t-shirt. I knew to do that because of practice with speedlights and modifiers, and the photo you see is practically as shot (I’ve increased contrast, but that’s about it).

Five: Create a journey

Photography is a journey. If you look on the internet, you would be forgiven for thinking it is a technical journey: that getting better at photography is a process of becoming comfortable with ever more advanced equipment. That’s the ‘upgrade path’ to becoming better, but there are alternative journeys:

Look at photographers you admire and see what they were doing when they were you.

Social media is a wonderful thing for learning: pick an up-and-coming photographer and they will have a 500px, Flickr or even Facebook page. Don’t look at their best photographs, though: look at their first few. You will find that they started out taking photographs just like the ones you started with, and then they got better.

Look at that transition from the early photographs to the final style and you can see the journey you would have to take if you want to end up at the same point as them. By breaking it down to the actual photographs, you get the literal path they took.

Invariably, you will find that everyone who becomes famous sets a theme or style early on, and then quickly gets good at it. That’s a big clue: you do not get better by simply taking lots of photographs: you get better by setting yourself a direction then defining and completing projects within that direction.

As a good example, I just Googled ‘500px’, and one of the top results was Elena Shumilova. Some nice shots of her kids growing up there, but look at the earliest five and work forward to see the progression of skills: it’s obvious the photographer started with solid photography skills but has seriously ramped up her Photoshop skills to get the final look. Look carefully and you can see the specific post-editing skills develop, because they start off obvious and get better with time.

Do what interests you

You’ll only ever complete anything if it interests you or you are getting paid for it. There always has to be a ‘more’ factor to make you care. For me it is an interest in places, stories and motion-graphics/video. Find what that ‘more’ is for you.

Conclusion

Everyone who excels sets themselves projects early on that fast track a skill-set. They finish what they start (or keep it all related and ongoing) to create a body of work that meets a standard they have set for themselves. To do the same, you have to do all the things I have already mentioned:

  • You have to define projects and finish them to a self-imposed standard.
  • You have to create a physical body of work because that is the end medium for photographers.
  • You have to define your equipment by the projects you envision yourself doing, and not define your projects by your equipment.
  • You have to enjoy and have a passion for what you do.


Using low contrast filters for video

In a previous post, I discussed using a Tiffen low contrast filter when filming with an AVCHD-enabled camera, but I didn’t illustrate the point with any of my test footage.

Here it is.

Low contrast filters and video encoders

To recap, we use low contrast filters for AVCHD DSLR video because AVCHD compresses footage using a perceptual filter: what your eyes can’t perceive gets the chop in the quest for smaller file sizes. Our eyes cannot see far into shadow, so AVCHD ignores (filters out and discards) most of the shadow data. AVCHD knows we can’t see the difference between small variations in color, so it removes such slight differences and replaces them with a single color.

That’s fine if you will not be editing the footage (because your eye will never see the difference), but if you do any post processing involving exposing up the footage, the missing information shows up through macro blocking or color banding. To fix this, we can do one of three things:

  1. Use a low contrast filter. This works by taking ambient light and adding it to the shadows, lifting them up towards the mid-tones and tricking AVCHD into leaving them alone. The filter doesn’t add any real shadow information itself; it simply forces the AVCHD encoder to keep information it would otherwise discard (a rough sketch of this appears a little further below).
  2. Use Apical Iridix. This goes under different names (e.g. Dynamic Range Optimisation, or DRO, for Sony and i.Dynamic for Panasonic), but it is available on most DSLRs and advanced compacts. It is a digital version of a low contrast filter (it’s actually a real-time tone-mapping algorithm) that works by lightening blacks and preserving highlights. Although it again doesn’t add any new information of itself, Iridix is applied before the AVCHD encoder, so it too can force the encoder to leave shadow detail intact.
  3. Use both a low contrast filter and Iridix together.

The video uses the third option.
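To give a feel for what the filter is doing to the tones, here is a toy NumPy sketch (my own simplified model, not the optics of the real Tiffen filter): the veiling flare is treated as a small fraction of the overall scene brightness spread evenly across the frame, which lifts the blacks a long way in relative terms while barely moving the highlights.

```python
import numpy as np

def low_contrast_filter(scene, flare=0.08):
    """Toy model of a low contrast filter: a fraction of the overall
    scene brightness is scattered evenly across the frame as flare.
    (Illustrative only - real filter behavior is more complex.)"""
    veil = flare * scene.mean()
    return np.clip(scene + veil, 0.0, 1.0)

rng = np.random.default_rng(1)
scene = rng.uniform(0.0, 1.0, (720, 1280)) ** 2.2   # frame with plenty of deep shadow

filtered = low_contrast_filter(scene)
print("black level:    %.3f -> %.3f" % (scene.min(), filtered.min()))
print("shadow mean:    %.3f -> %.3f" % (scene[scene < 0.1].mean(),
                                        filtered[scene < 0.1].mean()))
print("highlight mean: %.3f -> %.3f" % (scene[scene > 0.8].mean(),
                                        filtered[scene > 0.8].mean()))
```

The printed numbers show the black level and shadow mean jumping up towards the mid-tones while the highlight mean shifts by only a sliver, which is exactly the behavior that stops the AVCHD encoder writing the shadows off.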

Deconstructing the video

The video consists of three short scenes. They were taken with a Panasonic Lumix LX7, 28Mbit/s AVCHD at 1080p and 50fps (exported from Premiere as 25fps with frame blending), manual exposure, custom photo style (-1, -1, -1, 0). It was shot hand-held and stabilized in post (Adobe Premiere warp stabilizer). The important settings were:

  • ISO set to 80, and
  • i.Dynamic set to HIGH.

I chose the lowest ISO so that I could set i.Dynamic high without it causing noise when it lifts the shadows.

The camera had a 37mm filter thread attached and a 37-52mm step-up ring, onto which I attached a Tiffen low contrast filter and a variable neutral density filter. The reason I used two 52mm filters rather than two 37mm ones (i.e. filters bigger than the lens diameter) is that stacking filters can cause vignetting unless you step up as I have done.

Here are the three scenes. The left side is as shot; the right side is after post processing. Click on the images to see larger versions.

Scene 1

Notice in this scene that the low contrast filter is keeping the blacks lifted. This prevents macro-blocking. Also note that the highlights have a film-like roll-off; again, this is caused by the low contrast filter. The variable ND filter is also working hard in this scene: the little white disk in the top right is the sun, and it and the sky around it were too bright to look at!

Scene 2

Scene 2 is shot directly into the sun; you would typically end up with either a white sky and properly exposed rocks/tree, or a properly exposed sky and black rocks/tree. The low contrast filter and Iridix (i.Dynamic) give us enough properly exposed sky and shadow to fix it in post. Nevertheless, we are at the limits of what 28Mbit/s AVCHD can give us: the sky is beginning to macro-block, and the branches are showing moiré. I shot it this way to give an extreme example; I would more normally shoot with the sun oblique to the camera rather than straight at it.

Scene 3

Scene 3 is a more typical shot. We are in full sun and there is a lot of shadow. The low contrast filter allows us to see detail in the far rock (top right) even though it is in full shadow. It also stops our blacks from clipping, which is important because near-black holds a lot of hidden color. For example, the large shift from straw to lush grass was not done by increasing green saturation in the mid-tones, but in the shadows. If you want to make large color changes, make them in the shadows, because making the same changes in the mid-tones looks far less natural (too vivid). Of course, if we didn’t use a low contrast filter to protect our blacks (and therefore the color they hold) from clipping, we would not have the option to raise shadow colors!
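To show what ‘raising color in the shadows’ looks like outside the NLE, here is a small NumPy sketch (purely illustrative; it is not the grade I actually applied to this scene). It pushes the green channel, but weights the push by a luminance mask so that it only bites in the near-blacks.

```python
import numpy as np

def shadow_green_push(rgb, amount=0.35, knee=0.30):
    """Push greens only where the pixel is dark.
    rgb: float array in [0, 1], shape (..., 3)."""
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])   # Rec.709 luminance
    mask = np.clip(1.0 - luma / knee, 0.0, 1.0)       # 1 in deep shadow, 0 above the knee
    out = rgb.copy()
    out[..., 1] = np.clip(out[..., 1] + amount * mask, 0.0, 1.0)
    return out

# a dark 'straw' pixel and a bright one: only the dark one shifts towards lush green
straw_shadow = np.array([[0.12, 0.10, 0.05]])
straw_bright = np.array([[0.80, 0.70, 0.40]])
print(shadow_green_push(straw_shadow))
print(shadow_green_push(straw_bright))
```

Because the push fades out as the pixel gets brighter, the mid-tones and highlights keep their original color, which is why shadow-side color moves tend to look more natural than a global saturation boost.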

Conclusion

Shooting flat is something you should do if you will be post-editing your video footage. Many cameras do not allow you to shoot flat enough; to get around this, you can use either a Tiffen low contrast filter or the camera’s inbuilt Apical Iridix feature. To maximise the effect, you can use both, as illustrated in this example.

The main advantages of using a low contrast filter are:

  • Protects blacks from clipping, thus preventing shadows from macro-blocking and preserving dark color. The latter is important if you are going to make substantial color correction in post because raising shadow color usually results in much more natural edits.
  • Better highlight roll-off. The effect looks more like film than digital (digital sensors have a hard cut-off rather than a roll-off).
  • Lower contrast that looks like film. Although many people add lots of contrast (i.e. dark, blue blacks) to their footage, true film actually has very few true blacks. The low contrast filter gives this more realistic look.
  • Removes digital sharpness and ‘baked-in’ color. Many cameras cannot shoot as flat as we would like and produce footage that is obviously digital because of its sharpness (this is especially true of the Panasonic GHx cameras). Adding a low contrast filter helps mitigate these issues.

The main disadvantages of using a low contrast filter/Apical Iridix are:

  • The filter loses you about 1/3 stop of light.
  • You usually have to use the low contrast filter along with a variable ND filter (which you need to control exposure). The two filters introduce optical defects beyond their intended function (possible vignetting because you are stacking filters, and a loss of sharpness). However, remember that you are shooting at a much lower resolution for video, so the sharpness effects will be much smaller than for stills, and you can eliminate vignetting by using larger filters and a step-up ring.
  • Apical Iridix will increase shadow noise. Use it at maximum only with very low (typically base) ISO.

Notes

None.

Sunday

Sunday in God’s own County

Walking out on the Moors last weekend and trying out my new camera: Olympus Stylus 1.

Valentine

Valentine bench.

This is where we sat for our Valentine dinner. Chocolate whilst watching the clouds go by. Click image for larger view.

Camera: Panasonic Lumix LX7.

Dynamic Range Optimization and video

In a previous post, I wrote about low contrast (and ultra contrast) filters and their use in DSLR video to increase video quality. They do this by lifting the low tones, which prevents the AVCHD encoding from causing macro blocking. What I did not realize at the time is that there is a built-in feature of Sony Alpha, Panasonic and other DSLRs that gives you pretty much the same thing for free: Dynamic Range Optimization.

Dynamic Range Optimization

Cameras have a more limited dynamic range than our eyes. If we photograph a bright sky looking into the sun (so there are shadows in our scene), then the camera can only expose for the sky or shadows, but not both, yet our eyes can see detail in both.

Most current cameras have a feature that attempts to emulate how our eyes see such a scene. They go under different names: iExposure, Active D-Lighting, Auto Lighting Optimizer, Shadow Adjustment Technology, and so on. Sony’s version is called Dynamic Range Optimization (DRO) and Panasonic’s is called Intelligent Dynamic (i.Dynamic).

Although some of these systems change exposure as part of their operation, most of them use a range compression algorithm that brightens shadows and adds texture to highlights to better approximate what the human eye would see.

Left: a photograph taken with DRO off. Right: the same photo, this time with DRO set to level 5 (maximum). Notice that both the shadows and highlights are brighter in the shot that uses DRO.

Although Sony are often cagey over what DRO is, it is simply Sony’s branding of their licensed use of Apical Iridix, as shown here. Other manufacturers use Iridix without trying to pretend it is proprietary (the name Iridix even appears on the box for my Olympus Stylus 1 as a feature!).  You can also read a technical interview with the Apical CEO here. Finally, you can see what Iridix actually does behind the scenes here.

Iridix is not a simple tone curve but a tone-mapping operator. It works on a per-pixel basis, checking the brightness of each pixel against both its neighbors and the dynamic range of the overall photo. Iridix is actually very similar to the tone mapping used in HDR images, but with crucial differences:

  • Iridix is very fast computationally because it is implemented at a low level: either as dedicated on-chip signal processing or as part of the camera firmware.
  • Iridix is designed to keep the image looking realistic, so you don’t end up with typical HDR artifacts (halos, or images that start to look ‘painterly’).

It is important to realize that Iridix is not a simple tone curve, and you cannot replicate it completely by using a typical ‘S’ tone curve in Photoshop or Lightroom.
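For intuition only (the real Iridix algorithm is proprietary, so what follows is a generic Retinex-style stand-in rather than what the camera actually computes), here is a sketch of local tone mapping in Python: each pixel is gained according to how dark its blurred surround is, so two pixels with identical values can come out differently, something no global curve can do.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def global_s_curve(img, strength=4.0):
    """Global contrast curve: every pixel with the same input value
    gets the same output value, regardless of its neighbors."""
    return 1.0 / (1.0 + np.exp(-strength * (img - 0.5)))

def local_tone_map(img, sigma=10, mix=0.6):
    """Crude local operator (Retinex-like, NOT Iridix itself): gain each
    pixel by the inverse of its blurred surround, so dark neighborhoods
    are lifted and bright ones are compressed."""
    surround = gaussian_filter(img, sigma)
    mapped = np.clip(img * 0.25 / (surround + 0.05), 0.0, 1.0)
    return (1.0 - mix) * img + mix * mapped

# the same pixel value (0.2) in a dark surround and in a bright surround
img = np.full((100, 200), 0.8)
img[:, :100] = 0.1                     # left half: deep shadow
img[45:55, 45:55] = 0.2                # a detail inside the shadow
img[45:55, 145:155] = 0.2              # the same detail inside the highlights

print("global curve:", global_s_curve(np.array([0.2, 0.2])))   # identical outputs
out = local_tone_map(img)
print("local map:   ", out[50, 50], out[50, 150])               # different outputs
```

The exact numbers are meaningless; the point is that the copy of the detail sitting in shadow gets lifted while the identical copy sitting in a bright area does not, and that neighborhood-dependence is what a Photoshop or Lightroom ‘S’ curve cannot reproduce.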

Disadvantages of Dynamic range optimization

There are two big disadvantages of Apical Iridix:

  • It is not applied to RAW. You have to be shooting JPEGs to be able to use it. For Sony Alpha, the camera does alter the RAW metadata so that the DRO settings are available to your RAW converter, but unless the application also licenses Iridix, the DRO settings will be ignored. Adobe don’t license it, so Photoshop/Lightroom ignore DRO, and DxO Optics also seems to ignore it. The Sony-specific RAW editing software (Sony Image Data Converter) does use the DRO settings, and it can be downloaded here.
  • As Iridix brightens shadows, it also increases noise visibility in the dark areas. If you are shooting at high ISO this can become problematic.

It’s also worth noting that Iridix does not increase the actual dynamic range. It is just a different way of rendering the image data, one that is closer to how our own eyes would perceive the same scene. In other words, Iridix does not add shadow detail; it is ‘perceptual’. It simply pulls the lows up so that the human eye perceives the existing detail better. But since AVCHD also uses a perceptual system to decide what to throw away when optimising video, Iridix and AVCHD work together well: DRO stops AVCHD throwing shadow detail away by making the low tones perceptually more important.

Using DRO in video

Where Iridix really comes into its own is in video. Yep: DRO works in video! For Sony, you can see it by putting your camera in video mode and then pressing the Fn button, selecting DRO/Auto HDR and setting it to Lvl1 through Lvl5 or AUTO (you must be in Movie mode, otherwise the setting will be applied to stills). For the Panasonic GH2, press the MENU button and then go MOTION PICTURE ICON > Page 2 > I.DYNAMIC. Olympus possibly have the best implementation of all (you can select levels 1-5 for shadows and 1-5 for highlights separately), but since Olympus DSLR video doesn’t generally work in ASM modes, it isn’t much use for DSLR film, so I don’t consider it further.

The following example is from a Sony Alpha A77, so for the rest of the article, I refer to Iridix by its Sony name, DRO.

As noted in my previous video post, a Tiffen low contrast or ultra contrast filter is useful with AVCHD video because it lifts your blacks, and this prevents macro blocking. Good low contrast/ultra contrast filters are expensive, but that’s okay, because it turns out that DRO does exactly the same thing – it lifts the blacks! Better still, Iridix does this without losing you as much contrast as the Tiffen filters.

Top: video still with DRO set to Off. Bottom: the same video, this time shot with DRO level 5. Notice that the lows and highs are both brightened. Click image to see a larger version.

Again, it’s worth noting why DRO increases final video quality even though it does not increase the detail in your shadows. AVCHD encoding optimizes your video by removing data in the areas you will not notice. Its favorite place to do this is the low tones, on the basis that our eyes cannot see into shadow well. This means that as soon as you brighten your video in post, your shadows start to block up (‘macro block’) because the lack of detail becomes apparent.

By allowing DRO to brighten the shadows before AVCHD encoding, you reduce the encoder’s propensity to discard shadow information (because the brightened shadows are now treated as being closer to mid-tones), and you therefore largely eliminate macro blocking. As macro blocking is the main bugbear of AVCHD (especially when you will be post processing your video), using DRO is to be recommended.
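That ordering is the whole point, and a toy quantizer makes it concrete (this is my own crude model of a perceptual encoder, not the real AVCHD maths, and the DRO lift is exaggerated to make the effect obvious): brighten a smooth shadow gradient after coarse encoding and the banding is already baked in; let a DRO-style lift raise it before encoding and the gradient survives.

```python
import numpy as np

def toy_encoder(x, shadow_levels=8, mid_levels=64, knee=0.25):
    """Stand-in for a perceptual encoder: spend few code values on
    shadows, many on everything brighter (illustrative only)."""
    return np.where(x < knee,
                    np.round(x * shadow_levels) / shadow_levels,
                    np.round(x * mid_levels) / mid_levels)

def dro_lift(x, lift=0.25):
    """Toy DRO: raise the blacks before the encoder sees them.
    (The amount is exaggerated to make the effect obvious.)"""
    return lift + x * (1.0 - lift)

gradient = np.linspace(0.0, 0.2, 2000)               # smooth shadow gradient

brightened_after  = toy_encoder(gradient) + 0.25      # DRO off, graded up in post
brightened_before = toy_encoder(dro_lift(gradient))   # DRO on, lifted before encoding

print("distinct tones, graded after encoding: ", np.unique(brightened_after).size)
print("distinct tones, lifted before encoding:", np.unique(brightened_before).size)
```

The post-graded version collapses to a handful of bands, while the pre-lifted version keeps several times as many steps, which is why the shadows stop macro-blocking when DRO is switched on.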

DRO is also useful even when you do not intend to post process your video. DRO models how our eyes see the scene, and this often makes the video look more natural.

The only downside I have encountered to using DRO in video is noise. As DRO brightens your shadows, it also makes noise more visible. At high ISOs the noise can be noticeable, especially because it only occurs in the shadows, causing them to ‘shimmer’ relative to the mid and high tones. If you are above ISO 200, I would suggest turning DRO off, setting it lower than Lvl5, or putting it on AUTO (which is a conservative setting).

Using DRO in RAW

You can’t use DRO in RAW (unless you use Sony Image Data Converter to apply it off-camera), but you can get pretty close optically with a low contrast filter.

Left: photo exposed for the center, as shot. Middle: exposed for the center with a Tiffen low contrast 3 filter, as shot. Right: the middle version edited in Lightroom.

The use of a low contrast filter in RAW lifts your blacks so that the camera assigns more information to them (all digital cameras assign more data to the brighter areas of your photo; your eyes resolve more detail in bright areas and less in shadows). This lifting allows you either to expose up your dark areas without them banding, or to apply about -1/3 stop of exposure compensation to protect your highlights (which you can do without clipping your shadows, because they have been lifted by the filter). Either way, you end up with your dynamic range pulled away from clipping, allowing you to push the file further in post. Adding a low contrast filter costs you the 1/3 stop of light lost to the filter and the slight degradation that extra glass in the light path usually causes… but if you are careful it will not cost you much in terms of noise. The image on the right actually has far less noise than you would see if you exposed the image on the left up to the same levels.
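How unevenly a RAW file spreads its data is easy to see with a little arithmetic. Assuming a hypothetical 12-bit sensor that records light linearly, each stop down from clipping gets half the code values of the stop above it:

```python
# Code values available per stop in a hypothetical 12-bit linear RAW file.
# Stop 1 is the brightest stop (just below clipping); stop 6 is deep shadow.
total = 2 ** 12                                  # 4096 levels
for stop in range(1, 7):
    upper = total // (2 ** (stop - 1))
    lower = total // (2 ** stop)
    print(f"stop {stop}: {upper - lower:4d} code values")
```

The brightest stop gets 2048 levels while the sixth stop down gets only 64, so anything that lifts your shadow tones a stop or two up the scale (such as the low contrast filter) gives them far more levels to describe them, and that is why they band less when pushed in post.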

Using both DRO and low contrast filter in video

So if DRO gives you better shadow encoding, and a low contrast filter gives you flatter footage that is more post production friendly, using both together would be interesting. Here’s what you get:

Left: original photo, metered for center. Middle: same shot, this time with DRO level 5, otherwise as shot. Right: same as middle shot, this time with DRO and low contrast 3 filter.

The effects of DRO and a low contrast filter used together are cumulative: you lift the shadows by enabling DRO and lift them further by adding the filter. The costs are also cumulative: you gain shadow noise by enabling DRO, and lose light (about 1/3 stop) by adding the filter. The advantage is clear though: the image on the right has much better looking light than the one on the left. Not only does the light look better, the brights have a more film-like roll-off as there are no sharp digital transitions. Most importantly, in the image on the left the AVCHD encoder would strip out virtually all the information in the lows, preventing you from doing almost anything with them in post. The image on the right forces the encoder to retain much more information in the lows, and this gives you more options in post.

If you are using a camera that creates very sharp video and/or creates video that cannot be shot flat, you should consider using a low contrast filter. I find this especially true of the Panasonic GHx cameras, as they otherwise create sharp video with lots of color/tone ‘baked-in’. Without the filter, this ends up giving you footage that looks very ‘digital’ and can be difficult to work with in post (despite the higher/hacked bit-rates of the GHx series).

Update March 2014: see this post for a video example (shown below) of a low contrast filter and Apical Iridix being used together.

Shot with the Panasonic Lumix LX7, 28Mbit/s 50fps, conformed to 25fps with frame blending.

Conclusion

Macro blocking is the main issue with using AVCHD. It is caused when AVCHD removes shadow data during encoding, and you later try to expose up the footage. Using a high DRO setting can eliminate this because it brightens shadows so that AVCHD is forced to treat them more like mid-tones (and therefore hold on to the data). You can also do the same thing with a low contrast filter when DRO is not possible/not available. Using a low contrast filter also gives you a roll-off on highlights that is very reminiscent of film.

Notes

  1. All tests were performed with a Sony Alpha A77 using the 1.07 firmware.
  2. Altered the post March 8th 2014 to make it a bit less Sony specific (as all cameras I have seen have Iridix).

Photography: things I learned last year

Every new year, WordPress sends all blog owners a traffic summary report, including the option to publish it. Not for me: rather than share my blog stats, I’ll tell you what I learned about photography in 2013.

Beauty is never original

Beauty is defined by a set of common stereotypes and well-defined templates and attributes. If you are driven only by a sense of beauty in your photography, then by definition your work is not original, because you are following those same stereotypes and templates.

This hit me last year while looking at photographs someone else had taken of a location I had recently visited. His choice of photographic opportunities was not dissimilar to mine: we had taken the same photographs! Try it for yourself. Search on Flickr for a place you have been to and see how different the photographs are from yours. Usually not by much. What to do?

Modern art is often seen as ugly because it follows few of the standard/commercial ‘beauty stereotypes’, but that can make it much more original and cutting edge. So before you start following the same old set of templates (rule of thirds, only photographing photogenic people/landscapes, shooting only in good light and generally following accepted rules and practices), ask yourself ‘am I setting up my shots this way because I am trying to take something beautiful/commercial/safe (or worse, simply copying) when I should be aiming at something original?’.

Look at the photograph below. It’s my partner.

‘Chemotherapy’ (click to see larger image).
Photographed with Sony Alpha A500

It was taken during chemotherapy for cancer. She’d lost all her hair and was feeling very ill. She was sitting in a nightgown, (understandably) feeling down and claiming she looked awful. I said she looked beautiful and that I would prove it. I took about 30 photographs over about 5 minutes. This is the first one in the sequence.

There are lots of photo retouching tips and courses out there that define ‘good’ in terms of stereotypical beauty. Believe me, there is far more to beauty than that. Sometimes you have to look for it, but often it is staring you in the face.

A photograph is a one frame movie

This is a key point that video editing has taught me. Look at your best photographs. I bet they are the ones that evoke memories, tell a story, include a visual joke, illustrate a concept, assume a context (or subvert an assumed context) or visually show the relationship between its subjects.

In all cases, a single frame sets off a visual or emotive sequence of thought in the mind of the viewer. That is also a good definition of the best movie scenes you have seen, right? So a good movie scene and a good photograph are perhaps more similar than we assume.

By thinking as a cinematographer when you take your shots, you start to include so much more in your photography. Instead of ‘capturing snapshots’, you start to think about other things: movement, relationships, story, back-story, humor, context, emotion. Thinking about ‘story’ may seem a stretch too far for a single photograph, but bear with it, because as we will see, thinking about movement will always imply story.

‘Hair’ (click to see bigger photo)
Photographed with Panasonic Lumix LX7

The photograph above was taken some time after the chemotherapy had ended. Her hair has started growing back and is nearly long enough to be styled. My partner should be happy, right? Have a look at the photograph and tell me what you think she is saying about her hair. Is she as happy as she should be, and is she better than she was in the last photo? I was laughing when I took this photograph. Can you tell? Why?

I have clearly changed the color balance between the foreground and background. The background has a much colder color balance, and I have desaturated everything in the background except the reds. I have left the skin at the original, warmer balance. I have also used a very odd focal length: at 28mm, it is far wider than a typical portrait shot. What does this all add, and why have I taken it that way?

Finally, take a look at the photograph below.

‘Summer ice cream’ (click to see larger photo).
Photographed with Sony Alpha A77

This photograph follows almost no compositional rules, yet for people born in the UK it tells a very strong story: summer. During summer, we get ice cream vans stopping on every street to sell ice cream (the US version is the ‘ice cream truck’, although from what I am told, they tend to park near public events or busy areas rather than go street to street).

That the ice cream van is almost totally hidden is part of the story: the photo was taken at a child’s eye level. Just by looking at this photograph, I can imagine a much younger me running inside and pestering for money to buy an ice cream, and there’s the story and stream of images that our ‘one frame movie’ intends to instil in its target audience.

Learning about movement creates better stills

If your camera has a video capability, learn to use it as well as stills. The brain works in strange ways, and one of them is that learning two related skills makes you much better at both. Developing a good cinematography eye will make your composition eye better. Here are the things I have learned through video editing that I feel have made me a much better stills photographer:

  • Position your camera in the expectation of movement. If your subject is moving, you will already tend to leave free space in the direction of movement. By moving your camera like a video camera and anticipating future movement, you create a better composition. In fact, I now always move my camera as if I am taking video, and my photographs are now essentially key frames in the footage I would have taken if I was shooting video.
  • Create Tension. In script writing, the difference between a scene and ‘just people talking’ is the element of tension (which can be any kind of tension: comedy, suspense, suspension of disbelief, a growing call to action, etc.). For example, a scene where two old friends are reminiscing is just ‘two people talking’ and has little interest. If you change the context so that the viewer knows one character is hiding an unfulfilled romantic love for the other, you have introduced tension, making the conversation interesting. By creating an element of structural or emotional tension in your photographic composition, you make the photograph much more interesting in exactly the same way. If you think about it, this point is really just a rewording of ‘beauty is never original’. By setting up a beautiful or perfect scene and then adding an element of tension or opposition, you create an original twist that begs further investigation. If you were on a typical engagement shoot, where would you take the context if you gave the bride a gun as a prop? Would it take you in an original direction? You bet it would! Tension can also trigger a sequence of events or a story in the mind of the viewer, which brings us to the next point.
  • Color is a story shortcut. Color is routinely used to signify emotion or atmosphere in photography, and there’s nothing new in this: blue for cold or natural expanse, red for strong emotion or heat, green for balance or nature. In cinematography, color is routinely used in a different way: it can give a visual cue to where the story is about to go. Read ‘If It’s Purple, Someone’s Gonna Die’ or any good book on movie scripting. If you want to buy into the ‘a photograph is a one frame movie’ concept, then using color to imply story or the relationship between characters (or between characters and their environment) is key. Your ‘movie’ is only one frame long, so you have to be succinct, and you have to use color cues to imply story. Almost all my color edits in the photographs above are purely cinematic.

Conclusion

Take stories not photographs.

We are nothing without our internal stories. Without them we would only be instinctive animals with no sense of context, self, or history. The same goes for photographs. Don’t create visual icons or good-looking stereotypes: they are just not memorable because they have already been done to death. Instead, tell a story. One of the best ways to do this is think in terms of cinematography.

The easiest way to begin forcing yourself to do this is to make a resolution to press the video button on your DSLR more often. You will not only learn video. You may also start to think about stills differently.

Another easy way to think about your photograph is as a one frame movie. As well as thinking about photographic composition, it’s worth also thinking about cinematic scripting: what is the story, and how do your choices of framing, color, depth of field and focal length carry this story forward? Are there any elements that do not add anything to the story, and can they be removed?

Notes

  1. I just noticed in the second photograph that the background mask misses bits. Look at the top of the hand on the right. This is what comes of uploading photographs before you’ve really finished the edit!