November 25, 2013
Photographed with Panasonic Lumix LX7. Color correction in Lightroom 5.
Click image for larger view (1200×800).
Pictures of nice things that grow.
November 25, 2013
After writing my last blog post, I realised there was no video showing my tips on AVCHD editing being used in anger. This quick post puts that right. You can see the associated video here or by viewing the video below (I recommend you watch it full screen at 1920×1080).
Note that the youtube version is compressed from the original 28Mbit/s to 8Mbit/s, as are most web videos.
Note also that I don’t use a Sony Alpha A77 for the footage in this post: I use a Panasonic Lumix LX7 because I was traveling light, and the LX7 is my ‘DSLR replacement’ camera of choice. Both cameras use the same video codec and bitrates, so there is not much difference when we come to post production, except that the Sony Alphas have a shallower depth of field and are therefore more ‘film like’, whereas the LX7 will produce sharper video that is less ‘filmic’.
My partner and I were recently walking on Bingley Moor (which is in Yorkshire, England, close to Haworth and Halifax, places associated with Emily Brontë and Wuthering Heights).
It was about an hour before sunset, and I thought it would be nice to capture the setting sun in video.
Alas, we were too early, and the recordings looked nothing like what I wanted.
A couple of weeks later we were walking in the same place in the early morning. I took some footage of the nearby glen (glen – UK English: a deep narrow valley). So now I had some footage of the moor and glen in evening and morning sun, but no sunset footage. Not to worry: I could just add the sunset via post production.
If nothing else, it would make a good example of how AVCHD footage can be edited with large tone/color corrections without running into issues, so long as you follow the handy tips from the last post!
As per the tips in the previous post, I did the following whilst shooting the original footage:
Color post production has two workflow areas:
Here’s a quick run through of the corrections:
For this footage, I used an emulated film stock via Tiffen Dfx. The stock is Agfa Optima. I also added back a little bit of global saturation and sharpness using the default Adobe Premiere tools (Fast color corrector and unsharp mask).
Combining the two color change tasks (grading and color correction) is a bit of a black art, and I do both together. Generally, I start by picking an existing film stock from Tiffen Dfx or Magic Bullets Looks as an Adjustment layer. Then, I start adding the clips, color correcting each in turn, and switching in/out the grading adjustment layer as I go. Finally, I add a new adjustment layer for sharpness and final global tweaks. I avoid adding noise reduction as it massively increases render time. Instead, I add a grain that hides noise as part of the grading.
Here’s the timeline for the project.
I have my color adjustment and grading in as separate adjustment layers (V2/V3). The first half of the timeline is more or less identical to the second half, except that the second half has the unedited versions of the clips on layer V4. These clips have a Crop effect (Video Effects > Transform > Crop) on them with the Right value set to 50%. This is how I get the edited/unedited footage split-screens at the end of the video.
When adding backing sound, the music file is never as long as the video clip, so to make the two the same length, I often do this simple trick to edit the music so it is shorter:
In the timeline section above, I have matched (lined up) the drum sounds on the A1 and A2 music clips (both of which are different sections of the same music file), then faded from layer A1 to A2 by tweening the volume level. This will produce a smooth splice between the two music sections. If space allows, you should of course also match on the end of the bar (or ‘on the beat repeat’). For my timeline, you can see (via the previous ‘Project timeline’ screenshot) that I have spliced between four sections of the same file.
During color correction I kept an eye on the YC waveform scope, which is available in Premiere and most video editing applications.
The YC waveform shows both Luma (Y, or brightness) and Chroma (C, or color). Luma is cyan, and Chroma is dark blue on the scope.
The x-axis of the scope corresponds to the x-axis of the footage, so the y-axis shows the YC values found along the width of the footage itself. It sounds a bit complicated, but if you use the waveform on your own footage, it becomes immediately obvious what it represents.
For the broadcast standard I am using (European, PAL), true black is 0.3V on the scale, and true white is 1.0V (NTSC is very similar). The original footage is shown on the left side of the image, and the corresponding YC waveform is shown below it. The waveform shows highlights in the sky area are clipping (we see pixels above 1.0V), and the darkest areas are not true black (the waveform doesn’t get down to 0.3V). The right side of the image shows the final footage, and we can see that we now have no clipping (in either brightness or color saturation), and our blacks are closer to true black.
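To make the 0.3V/1.0V figures concrete, here is a tiny sketch of my own (assuming standard studio-swing 8-bit levels of 16 for black and 235 for white; the scope does this mapping for you, so this is purely illustrative):

```python
def luma_code_to_volts(code, black=16, white=235):
    """Map an 8-bit studio-swing luma code onto the PAL 0.3-1.0 V scale."""
    frac = (code - black) / (white - black)
    return 0.3 + 0.7 * frac

# True black sits at 0.3 V, true white at 1.0 V; codes above white land
# above 1.0 V, which is what 'clipping' looks like on the scope.
for code in (16, 235, 250):
    print(code, round(luma_code_to_volts(code), 3))
```

If your corrected footage never maps above 1.0V or below 0.3V, you are within broadcast limits.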
Keeping an eye on the YC waveform is something I always do when editing color. You may think your eye is good enough, but your vision gets tired, or gets so used to a color cast that it no longer recognizes it; the scope never tires and never lies! Another useful scope, particularly for skin tones, is the Vectorscope. Something for another post…
This post shows the workflow I used to correct a small number of clips such that they could be added together into a single scene. The final movie shows a typical English autumn sunset (or at least, one where you can see the sun!) yet none of the clips were actually taken at this time of day nor under the lighting conditions of the final scene.
By manipulating the color of our footage via color correction and grading, we achieved our desired output despite the constraints of reality on the day!
Finally, by following a few additional steps and rules of thumb whilst shooting and editing the AVCHD footage, we have avoided coming up against its limitations. In fact, the only place in the video where you may see any artifacts is the one place where I did not follow my own advice: at about 0:30, the footage has its exposure increased slightly, and shows small artifacts in the shadows.
You can see all previous video related posts from this blog here.
November 16, 2013
If you look on the internet, you will find all sorts of advice that the A77 and other Sony Alpha cameras are useless for video. The image doesn’t ‘pop’ because the bitrate is too low, and you can’t easily edit the video because AVCHD ‘breaks up’ if you tweak it too much.
But then you see something like the video above on youtube. The foreground is clearly separated from the background to give the ‘3d’ effect, and there’s tons of post processing in this footage to give it plenty of ‘pop’, and it all looks great! How was it done?
To create decent video with the Sony Alpha, you need a good understanding of the video file type used by Sony (and Panasonic) cameras: AVCHD, plus how to shoot and handle AVCHD to give you lots of latitude in post processing.
Core issues to know about are line skipping, bitrate and dynamic range. The Sony Alpha is average on line skipping and bitrate, and the king of dynamic range. As we will see, any video DSLR is a set of compromises, and getting the best out of any DSLR involves playing to the strengths of your particular model and codec, something that is very important with the A77.
I have noticed a lot of people converting from the Sony A77 to the Panasonic GH2 on the video DSLR forums. Clearly, a lot of people have got suckered into the cult of high bitrate and decided that the Sony Alphas are useless for video. I own a GH2 (complete with that magic hacked 100+ Mbit/s AVCHD) and will save you the trouble of GH2 envy by showing that bitrate is not always king in video. In fact, along with the GH2/GH3, there is another type of stills camera that gives sharper video than a typical FF/APS-C DSLR, and I will consider three types of AVCHD video DSLRs in this post:
A lot of this post will concentrate on the A77, but because all three cameras shoot AVCHD footage, much of it is actually common to all three.
Before we get too far though, let’s address the elephant in the room…
The Sony A77 can record at up to 28Mbit/s. The highest you need for web or DVD is about 12Mbit/s, and it’s less than 8Mbit/s for Youtube/Vimeo. You lose the extra bandwidth as soon as you upload to Youtube/Vimeo or burn a DVD.
Sony and Panasonic chose 28Mbit/s for a reason – an advanced enthusiast will rarely need more bandwidth. 28Mbit/s is a good video quality for the prosumer, which is most readers of this post.
If you approach a terrestrial broadcaster (such as the BBC) with video work, they will specify 55Mbit/s minimum (and about 40Mbit/s if they want to sell Blu-rays of your show), but they will also expect your camera to natively and continuously output a recognized intermediate codec that can be used to match your footage with other parts of the broadcast, unless you can justify otherwise. Particular action shots, reportage and other use-cases exist for DSLRs, but you have to be able to justify using a DSLR for them beyond just saying ‘it’s all I’ve got’.
No DSLR does the ‘native recognized intermediate codec’ bit (unless we count the Blackmagic Pocket Cinema Camera as a DSLR, but then it isn’t primarily a stills camera), and instead produces output that is too ‘baked in’ to allow strong color corrections. The A77 can’t, and neither can the 5D Mk 3 (at least, not continuously and not using a recognized format), nor the GH2/GH3. Yes, the 5D was used for an episode of House, but the extra cost of grading the footage meant there was no saving over using a proper video camera from the start.
The Sony A77 cannot be considered a pro video device. Neither can any other stills camera. This is a crucial point to consider when working with DSLR video. Yes, you can create pro results, but only if you have the time to jump through a few hoops; renting pro equipment when pro work appears may be cheaper overall and get you where you want to be quicker.
Finally, it’s important to realize that a hacked camera has drawbacks. I do not use the highest bitrate hack for my GH2 (as of this writing, the 140Mbit/s Driftwood hack), instead using the 100Mbit/s ‘FlowMotion’ hack. Driftwood drops the occasional frame. Not many, but enough for me to notice. That’s the problem with hacks: they are fine for bragging rights and going around saying you have ‘unlocked your camera’s full capabilities’, but the same hacks make your gear less reliable, or produce more noise or other glitches.
So, the A77 is about equal to its peers for prosumer video. Some have better bitrate, some can be hacked, and some have better something else, but none of them can say they have moved into professional territory.
Before moving on, it’s important to realize that a file format (such as .mts or .mp4) is NOT a codec. See notes 1 and 2 if you need more information.
Rather than use a low compression codec to maintain quality (as used in pro film cameras), Sony and Panasonic realized that using faster processing power to compress and decompress frames very efficiently might be a better idea. That way, you get low file size and higher quality, and can then store your video on slow and small solid state devices such as (then emerging) SD cards. The resulting video codec is a custom version of H.264 (which is itself a derivative of the popular but now old MPEG4 codec) and is the codec that AVCHD (Advanced Video Coding High Definition) uses. The custom codec AVCHD uses is more efficient than older, less processor intensive codecs, and therefore provides better quality for the same file size.
So why is AVCHD good for burning wedding videos but not shooting Star Wars?
AVCHD is designed to faithfully compress and decompress your original footage so that you cannot tell the difference between AVCHD vs full frame film when played back on a typical consumer 1920×1080 screen. So for all intents and purposes, the AVCHD format is lossless: you can’t tell the difference. What’s the catch?
There’s a lot of talk on the web about AVCHD not being good enough. Whatever reason you are given, the core reason derives from the two points above: in a nutshell, AVCHD was designed for playback, not for editing, and is too CPU-heavy to be edited in real time.
This was true 4 years ago for AVCHD, but hardware and software have caught up.
To play back 10 seconds of 28Mbit/s AVCHD, your computer has to take a 50-100MB file and decompress it on the fly into the multi-GB stream it actually is, at the same time as displaying the frames with no lag. To edit and save an AVCHD file, your computer has to extract a frame from the file (which is non-trivial in itself, because the frame data is not in one place: some of the data is often held in previous frames), perform the edit, then recompress the frame and some frames that precede it. So editing one frame might actually change 10 frames. As 1 minute of video can actually be 8GB when uncompressed, that’s a lot of data!
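The sizes involved are easy to sanity-check. Here is my own back-of-envelope arithmetic (assuming 1080p at 25fps and 3 bytes per pixel for the uncompressed frames; the exact figures depend on frame rate and pixel format):

```python
def avchd_size_mb(seconds, mbit_per_s=28):
    """Size on disk of an AVCHD recording at a given bitrate, in MB."""
    return seconds * mbit_per_s / 8

def uncompressed_size_mb(seconds, w=1920, h=1080, fps=25, bytes_per_px=3):
    """Size of the same footage held as raw, uncompressed frames, in MB."""
    return w * h * bytes_per_px * fps * seconds / 1e6

print(avchd_size_mb(10))                          # 35 MB for 10 s of AVCHD
print(round(uncompressed_size_mb(10)))            # ~1555 MB raw for those 10 s
print(round(uncompressed_size_mb(60) / 1000, 1))  # ~9 GB per raw minute
```

So the decoder is inflating the data by a factor of around 40 in real time, which is why AVCHD editing used to bring computers to their knees.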
The good news is that a modern Intel i7 or equivalent computer can handle this, and lesser CPUs can also do it if the software knows how to farm off processing tasks to the graphics card or GPU (Premiere CS knows how to do this for certain NVidia GPUs, and Premiere CC knows how to do this for most NVidia and AMD GPUs, which is almost all the GPUs worth considering). I am currently editing in Premiere using only AVCHD files, and it is all working well.
There is no need to convert AVCHD into an intermediate codec because the reasons to do it are no longer issues. You may still see old arguments on the web: check the dates of the posts!
A reason still often given for converting AVCHD to other formats (such as ProRes or Cineform which, incidentally, are examples of the pro intermediate formats we talked about earlier) is that it gives you more headroom for color manipulation. This is no longer valid. As of this writing, Premiere up-scales ALL footage to 32 bit 4:4:4 internally (which in layman’s terms means ‘far better than your initial AVCHD’) when processing footage, so turning AVCHD into anything else will not improve final quality. At best you will replace your AVCHD source files with files 10x larger for not much gain in quality (although an older computer that can’t handle AVCHD may be better at keeping the edits working in real time). In fact, in a normal editing workflow, you will typically not convert your source AVCHD to anything. Just use it as source footage; the final output is whatever your final production needs to be.
Another very good reason not to convert your input AVCHD to anything else is that conversion to another format always either loses information or increases file size, and never increases quality by itself. If you turn AVCHD into any other format, you either retain the same information in a larger file, or lose some information when you compress the AVCHD data into a smaller or more processor-friendly format (MPEG4, AVI).
So ‘AVCHD is ok, as long as I have a recent version of Premiere (or similar) and a decent mid-high range computer’, right? It is fine as long as you do certain things:
If you have an .mts file with a bad take early on and then a good take, don’t edit out the first take and save the shorter .mts file. AVCHD is what is known as a GOP format, which means it records a complete frame every few frames, with the frames in between containing only the differences (or ‘deltas’). Each set of frames is a GOP (Group of Pictures), and if you split the group, you may cause problems. Better to keep the .mts file as-is. Never resave an .mts file as another .mts file, or as anything else for that matter. There is no reason to do it, and you may lose information in doing so. In fact, even if you changed nothing and resaved an .mts file, you could still lose information, because you would inadvertently decompress and then recompress it.
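A toy model makes the GOP point clearer (this is a simplification of my own, not the real AVCHD bitstream, which also contains B-frames that reference frames in both directions):

```python
# One group of pictures: an 'I' (intra) frame that is complete in itself,
# followed by 'P' (predicted) frames that only store deltas against it.
gop = ["I", "P", "P", "P", "P", "P", "P", "P"]

def decodable(frames):
    """A cut is safe only if the remaining sequence starts with an I-frame."""
    return bool(frames) and frames[0] == "I"

print(decodable(gop))        # True: the cut lands on the key frame
print(decodable(gop[3:]))    # False: a mid-GOP cut leaves deltas with no anchor
```

This is why trimming and resaving an .mts file is risky: any cut that does not land on a group boundary forces the encoder to rebuild (and so recompress) frames.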
There’s lots of people on the internet complaining about AVCHD ‘breaking up’ in post. This is all to do with the fact that AVCHD was originally designed for playback and not for editing: it is virtually lossless unless you start to edit it. Each frame of an AVCHD video is split into lots of 16×16 pixel sections, and these are (amongst other things) compressed at different levels depending on the detail seen in them. Think of it as cutting up an image into separate JPEGs whose quality is varied to reflect how detailed each 16×16 area was to begin with.
So what the AVCHD compressor does is to take blocks with near solid color or shadow, and save them using less data (because they don’t actually need much data), and save the bandwidth for parts of the image that actually have lots of detail or movement. That’s fine, until you try to:
All these edits cause AVCHD to break up because you are trying to bring out detail in areas that AVCHD has decided are not the main areas of detail (or change), and so the compressor has saved them with low bandwidth.
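You can sketch that bit-allocation idea in a few lines (a toy model of my own; real H.264 rate control is far more sophisticated, but the principle is the same):

```python
import random
random.seed(42)  # reproducible 'foliage'

def block_detail(block):
    """Crude detail measure: spread between darkest and brightest pixel."""
    return max(block) - min(block)

def allocate_bits(blocks, budget):
    """Share the bit budget between 16x16 blocks in proportion to detail."""
    details = [block_detail(b) for b in blocks]
    total = sum(details) or 1
    return [budget * d / total for d in details]

flat_shadow = [20] * 256                                # near-solid block
busy_foliage = [random.randint(0, 255) for _ in range(256)]
print(allocate_bits([flat_shadow, busy_foliage], budget=1000))
# The shadow block gets essentially nothing, which is exactly why lifting
# shadows in post exposes compression artifacts.
```

When you later brighten that shadow block in post, you are amplifying an area the encoder spent almost no bits on, and the blockiness shows.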
How to fix this?
You may be thinking ‘ah, that’s fine, but I can’t control the sun or the shadows, and that Tiffen filter thing is expensive’. There is one other thing you can do: shoot flat digitally.
AVCHD and other ‘not for editing’ codecs don’t have many color or brightness levels, so by keeping the recorded values of both low you are less likely to clip them or cause errors that may be amplified in post to cause banding. This subdued recording style is called ‘shooting flat’.
To shoot flat on the A77, you need to press Fn > Creative Style, then select an appropriate creative style (‘Neutral’ is a good general purpose one, but some people swear by ‘Sunset’ for banding prevention), then push the joystick to the right to access the contrast/saturation/sharpness settings. Ideally, you should set them all to something below zero (-1 to -3). The most important one is sharpness, which you should set to -3. Sharpness makes by far the biggest difference, and setting it to -3, then sharpening back up in post-production (where you are using a far more powerful cpu that can do a far better job) is the way to go.
What if I shot the clip using a vivid style instead of flat? Well, bits of the red watering can may have ended up a saturated red. That’s fine if I leave the footage as shot, but if I want to tweak (say) the yellows up in the scene, I would now have a problem: I can’t do that easily without oversaturating the red, which would cause banding in the watering can.
Another happy outcome of shooting flat is that your video takes less bandwidth (though in my testing it’s hardly anything, so don’t count on it).
Finally, when color correcting, you need to keep an eye on your scopes. The one I use the most is the YC waveform. This shows Luminance (Y) and Chroma (C) across the frame.
Using the YC waveform is a full tutorial in itself and for brevity, I’ll just link to one of the better existing tutorials to get a handle on the YC waveform here (Youtube: YC Waveform Graph in Premiere Pro). Note that the YC waveform lets you check your footage against broadcast limits for brightness and saturation. A PC screen can handle a larger range, so if you are creating content for the web, you might want to go a little further on range. I limit myself to broadcast limits anyway: to my eyes the footage just feels more balanced that way.
Update: see this post for a worked AVCHD example using most of the above tips in its workflow.
A common issue cited with Sony Alpha video is maximum bitrate: the Sony range of cameras have a maximum bitrate of 28Mbit/s. My GH2 has a maximum bitrate of 100Mbit/s or higher, depending on which hack I load. Surely the GH2 is far better?
Not by as much as you would expect…
Let’s look at bitrate visually. Download Bitrate Viewer (Windows only) from http://www.winhoros.de/docs/bitrate-viewer/. If you are on a Mac, not to worry: I have screenshots.
You can use this to view the bitrates in your AVCHD files. Run it and click Load. In the ‘Files of type:’ dropdown, select ‘All files’ and pick one of your Sony A77 files. The application menu is the little icon in the top left corner of the window title bar (not shown in the screenshots, which look at the graphs only). Click this icon to see the menu, and select ‘GOP based’.
You can now see a bar chart of values. Each bar is one of the GOPs we talked about earlier. The first frame in each group is a whole frame, and subsequent frames consist of changes from that whole frame. The height of each bar shows how much bandwidth the group used. For 24Mbit/s A77 24/25p AVCHD, the maximum height of a GOP is the bitrate: 24,000kbit/s.
Notice something strange? Yup, the A77 never actually uses the full 24Mbit/s in our clip!
The internet is full of people saying that A77 AVCHD breaks up because there is not enough bandwidth, but as you can see, the truth is that the A77 often doesn’t even use the full bandwidth! (The reason for A77 soft video is actually nothing to do with bandwidth; more on this later.)
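As an aside, the bars Bitrate Viewer draws are just simple arithmetic, which you can reproduce yourself (a sketch with made-up frame sizes, not data parsed from a real .mts file):

```python
def gop_bitrate_kbps(frame_sizes_bytes, fps=25):
    """Average bitrate across one GOP: total bits over the GOP's duration."""
    total_bits = sum(frame_sizes_bytes) * 8
    duration_s = len(frame_sizes_bytes) / fps
    return total_bits / duration_s / 1000

# A 12-frame GOP: one big whole frame, then smaller delta frames.
gop_frames = [90_000] + [25_000] * 11
print(round(gop_bitrate_kbps(gop_frames)))  # ~6083 kbps, far below a
                                            # 24,000 kbps ceiling
```

A static, low-detail scene simply never fills the GOPs, which is why the measured bars sit so far below the headline bitrate.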
Want to see something even stranger? Here’s the exact same scene shot with my GH2 at 50Mbits/s and 100Mbit/s:
We see two graphs. The first is a 50Mbit/s AVCHD clip at 25p (Bitrate Viewer reports 50fps because the GH2 records 25p as a ‘pseudo 50i’), and the second is 100Mbit/s (1080p 24H ‘cinema’), both shot with the FlowMotion v2.2 GH2 hack. To shoot these clips, I ran the cameras along a motorized slider whilst pointing at foliage 12 inches away. So the three graphs so far show identical scenes.
Look at the A77 24Mbit/s (first graph) and GH2 50Mbit/s (second graph). The actual bandwidth used is far closer than the bitrates suggest: low 20s vs high 20s, and certainly not 100% difference!
The stated AVCHD bitrate is a maximum, and is only used if the scene requires it. Thus, if two cameras are shooting exactly the same scene, but one has twice the bitrate of the other, one will only record twice the data of the other if the scene pushes both cameras to their maximum bitrate.
What the GH2 does to get its better quality is use smaller GOPs, but that is only an advantage because I am forcing a very fast pan, which is not typical of the ‘DSLR film’ style.
But then we look at the LX7 footage of the exact same scene:
The LX7 is shooting at 50fps (it has no 24/25fps mode), which kind of explains why it is consistently using a higher bandwidth than the A77, but not why it is using so much more. In fact, the LX7 is using close to its limit of 28Mbit/s, and is the only one of the three cameras that is doing so. Why, when all three cameras are shooting the same scene? Because the LX7 has the smallest sensor, and is therefore capturing the sharpest input video.
A small medium/low resolution sensor is better than a physically big, high resolution stills sensor for capturing sharp video.
The LX7 has a small sensor. Such a sensor needs very little in the way of optical compensation (the LX7 actually makes a very good infra-red photography camera because of its lack of sensor filters and micro lenses), and doesn’t need to do a lot of the things the A77 and GH2 need to do for video. In particular, it does not need to smooth or average the signals coming from the sensor pixels, something we will touch on in the next section.
I find that the LX7 is always more likely to be using the maximum bandwidth out of my three cameras. The fact that the GH2 can shoot at over 100Mbit/s does not mean it is always capturing three times the video quality, and the fact that the A77 has the biggest sensor does not mean it is capturing the most light because it throws most of that data away… which brings us to the real reason why A77 video is often seen as ‘soft’…
A sensor optimized for stills is not the same as a sensor optimized for video. A current large (Full frame or APS-C) stills sensor is typically 25MPixels (6000×4000 pixels) or higher. To shoot video with the same camera, you have to discard a lot of that data to get down to 1920×1080. The way this reduction takes place makes a big difference on final video quality, and is more of an issue than bitrate, especially for most ‘average’ scenes you will take (i.e. ones that do not hit bitrate limits).
Some smallish sensor cameras (such as the GH2/GH3) can (and do) average adjacent pixels, but large sensor cameras such as the A77 and all other APS-C/Full frame DSLRs simply skip the extra data. The resulting video has artifacts caused by the gaps. To hide these effects, the camera intentionally softens detail that looks like it may be an artifact. You can see this clearly here: http://www.eoshd.com/content/6616/shootout-in-the-snow-sony-a65-vs-panasonic-gh2-vs-canon-600d. The A65 and Canon 600D are always softer than the GH2.
Although the Canons produce soft video, the A77 (and A99) is a little bit softer. This is probably down to processing overhead: the Canons only do 30fps, whereas the A77/A99 also do 50/60fps video, which means they use a more approximate line-skipping/softening function to cope with the data throughput of the higher maximum frame rate.
The effects of the A77’s line skipping do not show up at all on close or mid-distance detail, but they do show up on far detail (because to the smoothing algorithm, distant detail looks like it might be a line-skipping artifact). The smoother turns distant foliage into a green mush, whereas something like the Panasonic GH2 keeps it all looking good.
I have seen this clearly when looking at A77 footage in Bitrate viewer. If I shoot a close subject, the footage actually looks very detailed, and the bitrates are close to the maximum, implying the A77 is capturing lots of detail.
For footage that contains lots of mid-long distance detail (trees and foliage, complex clouds, distant architecture and detailed terrain), the detail disappears and the bitrate reflects this – the A77 is discarding the data by smoothing it and is not even attempting to add it to the video stream: the bitrate is usually well below 20Mbit/s.
What to do? Most often, I do nothing.
I actually like the A77 ‘look’ because it more closely resembles film stock. As long as you keep to wide apertures and let the distant objects defocus naturally through a shallow depth of field, you have no issue, and often don’t even need more than 24/28Mbit/s. You would shoot at wide apertures if you want to shoot ‘DSLR film’ anyway, so distant detail is rarely an issue. If you are shooting close subjects with lots of detail behind them, the A77’s way of doing things actually maximizes detail on the main subject, so it makes sense both stylistically and in terms of best use of limited bandwidth.
If I absolutely need a scene with sharp middle and distant video with minimum extra kit over the A77, I take the LX7. You can now get hold of an LX7 for less than the cost of an A77 kit lens (the cost has gone down significantly because everyone wants a Sony RX100), and because of the LX7’s extremely small sensor, line skipping artifacts or smoothing just don’t occur in its AVCHD output. Plus the LX7 footage goes pretty well with the A77’s (as long as you know how to conform 50/60fps to 24/25 fps – the LX7 doesn’t do 24/25p).
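For the curious, ‘conforming’ 50fps footage to a 25fps timeline boils down to one of two choices (a frame-list sketch of my own; in practice your NLE does this when you interpret the footage):

```python
frames_50 = list(range(100))        # 100 frames = 2 s of 50 fps footage

# Option 1: drop every other frame -> real-time playback at 25 fps.
realtime_25 = frames_50[::2]

# Option 2: keep every frame but reinterpret the rate -> 50% slow motion.
slowmo_25 = frames_50

print(len(realtime_25) / 25)  # 2.0 s: same duration, normal speed
print(len(slowmo_25) / 25)    # 4.0 s: double duration, half speed
```

Option 2 is the bonus of shooting 50fps: every clip is also a free half-speed slow-motion clip on a 25fps timeline.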
Best of all, like the A77, the LX7 requires almost nothing to shoot video.
All I needed was to add a 37mm screw thread (available from Panasonic but cheaper copies are available from eBay or Amazon) and a 37mm ND filter (Polaroid do a cheap one, again from eBay/Amazon).
The GH2 uses a pixel sampling pattern (or ‘pixel binning’) to reduce full sensor data to video resolution, so it averages data rather than just throwing it away. It also has an ‘Extended Tele’ mode where it uses just the center 1920×1080 pixels of the sensor, so it acts more like a small sensor camera such as the LX7 (and makes a 70-210 Minolta ‘beercan’ a telescope: 210mm x2 for full frame/MFT, then x4 for the tele mode, so over 1000mm effective with no loss of video quality!).
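The focal-length arithmetic behind that ‘telescope’ claim, using the crop factors quoted above (treat the x4 Extended Tele factor as the article’s figure, not a measured one):

```python
lens_mm = 210        # Minolta 70-210 'beercan' at the long end
mft_crop = 2         # Micro Four Thirds crop vs full frame
etc_crop = 4         # Extended Tele factor quoted in the text

effective_mm = lens_mm * mft_crop * etc_crop
print(effective_mm)  # 1680, i.e. 'over 1000mm effective'
```
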
The GH2 can use its higher bandwidth to allow far more detail to be brought out in post, and this is one of the big advantages of its high bitrates. The hacks also give you >30 minute continuous recording and the ability to move between NTSC frame rates (for the US and web) and PAL (for weird non-standard outposts such as here in England).
The GH2 looks really really good on paper, but it has one big issue that means I still use the A77 more: the GH2 is a good camera, but the overall system is not. To get full features from a GH2, you need Panasonic MFT lenses, and they are awful: awfully slow and too long, or fast and awfully expensive. I therefore use my Sony Alpha lenses with the GH2, with manual focusing via an adapter. The GH2 has a poor focus assist (no focus peaking) so it is not the fast run-and-gun one-man camera that the a77 (or LX7) is.
So, we’ve looked at the disadvantages of the a77 (and their fixes). What are the advantages?
There are several big advantages my A77 has over a high bitrate camera (i.e my hacked GH2) or a sharper-video small sensor camera (LX7):
The A77 may have a low bitrate, but the bitrate is enough for all non-pro uses a DSLR would typically be used for. Other DSLRs may have better bitrate, but that is not enough for them to be considered more ‘pro’ because none of them are on the preferred cameras list of the top broadcasters.
If you are shooting DSLR video, your target audience is the web or personal events (such as weddings), and the AVCHD format Sony uses is all you need, assuming you take care to shoot flat. If you are really good, you might be able to squeeze out an Indie short, but you might be better off learning on a DSLR and perhaps using it for some shots, then hiring pro gear for a week for the main shoot. Finally, the A77 is simply reliable: it rarely fails on focusing or dynamic range badly enough to kill a take.
The big advantage of the A77 is that you need very little to get started shooting video with it: no rig, no focus control, no monitor and none of the other video doohickeys apart from an ND filter and maybe a separate audio recorder. That 16-50 f2.8 you got with the A77 is FAR better than most other lenses for video, so think carefully before abandoning it.
The fact that the Sony Alpha shoots 28Mbit/s AVCHD is a non issue: higher bitrates are not as important as you think most of the time, and a camera shooting at double the bitrate may actually be recording almost the same quality video.
By far the most important thing in your setup is not actually the camera, but how you use it. You can get around the Sony soft-distant-video simply by shooting wide aperture DSLR film. Canon users had the same issues with soft video, and if you look at default Canon video straight out of the camera, it looks a lot like default A77 video straight out the camera. With all DSLRs, the trick is to shoot with wide apertures.
I have bought a GH2, and although it does output sharper video, that sharpness is not always a good thing. The GH2 with Panasonic lenses can actually be too sharp, so old DSLR lenses (I use Minolta AF or Rokkor) are often the way to go, as they give a more filmic feel; that brings you back to square one, because it will look like default A77 footage!
My favorite setup is to use the Sony A77 and the LX7. Neither need much setup or additional kit for video, and the footage from them goes well together. They complement each other as a DSLR+DSLR replacement when shooting stills, and as a main and backup camera for video. I’m also glad I didn’t opt for the RX100 as my ‘DSLR replacement’ now that I know how good the LX7 is for video!
Videos associated with this post can be found at http://www.youtube.com/user/howgreenisyourvideo
NB – I had previously promised video cheat sheets for the a77 along with this post… but as this post has overrun, I will have to blog them later. Sorry!
September 7, 2013 2 Comments
The classic Minolta ‘beercan’ lenses date from the 1980s. There is a lot of conflicting advice on their suitability with modern cameras.
On the one hand, the 70-210 is seen by many as a classic: Minolta color, built like a tank and fast autofocusing on modern DSLRs. Although it's a constant f4 rather than f2.8, that's only a stop's difference, and it makes a good poor-man's long telephoto. You can pick up a beercan for peanuts on eBay.
On the other hand, we have all the issues associated with 1980s film camera optics: it is poor on chromatic aberration and flare. There’s also the question of resolution. 1980s lenses may have seemed good back in the day when your final output came as print, but the old stuff may not hack it against modern glass when you go pixel peeping from a modern 24MP+ DSLR.
Finally, there is the question of age. These lenses are 30 years old and you have to be careful about lens mold. Many lenses of that age have it and if you keep them stored with your existing lenses, your whole collection may become infected!
Without further ado, let us forget the specs and science, and get straight to the photography.
The photograph above was shot with a large beercan on a Sony A77 APS-C 24MP camera in Program mode, f7.1, 1/400s, ISO125. It was shot hand held from a distance of about 40-50 metres.
The statue is made of spun wire rather than stone. You can see this in the per-pixel close-up above.
Let’s just recap: this is a 24MP image shot hand held at ISO125 from some distance away. Of course, there’s post production here, but this pretty much blows out the resolution question: sharp at 24MP. The color is also good.
Best of all, I am shooting on an APS-C body, which means the 70-210 converts to an effective 105-315mm, with anti-shake (it comes as standard in all Sony Alpha camera bodies). I'm stupid enough to expect to be able to shoot at 315mm/f7.1 hand held… and it worked: no blur! This is not a one-off either: all my shots with this lens came out just as sharp. This would just not happen with a more commonly used super-zoom (such as my Sony 18-250), whose maximum aperture at the long end is f6.3, so f7.1 would be barely stopped down and still a bit soft.
Incidentally, it's worth noting that old film lenses are analog devices, not digital. An old lens does not 'shoot at a lower resolution': it gives reduced contrast. As long as that contrast can be brought back to normal levels in post production without removing detail, there is no problem. I have read internet posts where someone rejects an old lens because it 'doesn't have enough resolution' or 'resolving power' for a given modern camera. Resolution is not something a lens has by itself: it is not about the smallest dot a lens can resolve, but how sharp that dot appears in the final output (whether through lens contrast or modern digital convolution filters and micro-contrast enhancements applied in post). Don't worry about the numbers: judge by the contrast and detail in your final photograph, as we have done with the sculpture above.
Another issue with older glass is optical aberration: distortion, flare and chromatic aberration.
At tele distances, everything will be flat, so we should not be concerned with distortion. The beercan flares like mad, but that's fine as there's no point taking a shot like this with the sun in front of you. As you can see by the boy's shadow, the sun is almost exactly at 90 degrees to my right, and that's probably as close to central as you would want a hot summer sun unless you are also using flash and ND filters.
The beercan also gives lots of purple chromatic aberration wide open, so I'm stopped down quite hard for 24MP: f7.1. The original image still gave me a little CA on the boy's highlights. We're looking per-pixel at 24MP here, so this will never show up in print.
Nevertheless, Lightroom 5 easily got rid of the fringing and satisfied any pixel peeping urges I might have. This works well because the CA tends to be pure purple, making it easy to remove.
I use the beercan on a Sony Alpha A77, and it works very well there: quick autofocus (note that the lower-end Sony Alphas have a weaker focusing motor, so autofocus may be slower on those models), and despite its size, the lens actually balances very well on the camera.
The nearest modern alternative is the Sony 70-200 f2.8. That goes for $2000.00, so although it has better optical characteristics, it only gives you one more stop of speed over f4. That stop may be important for professional work, but for the happy enthusiast, it probably is not worth the 10x price hike! This is especially true when you consider that long, fast tele is probably an edge case for most shooters outside sports or wildlife.
Physically, the lens is 100% metal apart from the lens hood and rubber grip area. It is a very shiny black (almost piano black). The lack of markings (compared to current lenses), constant diameter and coloring actually makes the lens look modern because of its minimalism. It certainly stands out against my drab grey-black modern lenses!
Optical extras include the fact that the lens is a ‘true zoom’ or parfocal, meaning that it maintains focus as you change focal length. This makes the lens very easy to use as it doesn’t call attention to itself as you compose your shot. It would also make the lens useful if you ever needed long tele with video (but note that the lens is noisy on focus). There is also macro at 210mm, probably 2:1, but I haven’t really tried it (as I have a dedicated 1:1 macro lens in my set).
Perhaps the best optical feature of the beercan is its color and contrast out-of-camera, as well as its colourful bokeh.
As you can see here, these three features can conspire to make even the most mundane photographic subject better! You can also see the chromatic aberration here (highlights at top of post), but as mentioned earlier, this is easy to remove in post-production, or by shooting stopped down (the photo was shot wide open to show depth of field at f4, but going above f5.6 would have fixed the CA).
When photographed in ideal conditions (not into the sun), the contrast and color out of camera is so good that you would assume polariser filters or post work had been used.
Have a look at the blue sky in this image, and the contrast between the sky and tree. There is not a hint of CA in this photograph either as we are away from wide open. Wonderful!
Once the sun is directly into the lens though, the issues start.
We now lose a lot of the contrast (although we do get a nice graduation in the sun highlight, something that does not occur on a typical kit lens, or even some more expensive current optics, and is a feat from the Sony A77 as I am shooting at ISO64!).
What we do see, though, is difficult-to-remove flare. It is several shades of purple, so it cannot be removed without cloning it out. If this was a paid-for shot, you would be in trouble, because the beercan's flare is not pretty enough to be passed off as artistic intent or styling.
So if you buy a beercan: colors, contrast and bokeh are to die for; chromatic aberration is strong but can be removed in post; and flare is your worst enemy.
The lens is not quiet by any measure, although that may not be a problem at long tele, as the subject will probably be too far away to notice!
I got the lens from eBay. The seller sent over the original carrying case, the instruction book, and even threw in a free small beercan (35-70mm f4 constant), also with its original case. All well and good, but the 70-210 had mold in the front lens assembly. That is not fatal, and a quick look at a disassembly guide on the web enabled me to take the affected element out and clean it off. Nevertheless, I store my old lenses separately from my new ones. Not much of a constraint (they go into the same camera bag when I go out shooting), and a cheap way to build up some classic mid-speed glass.
If you are buying 1980s lenses for a Sony Alpha camera, Minolta AF lenses from that period will work fully, right off the bat, because modern Alphas maintain backwards compatibility with them. Third party lenses from the same period will most likely only work in manual unless they have been upgraded for modern autofocus (which will cost more than the lens is worth, given that the market is flooded with working Minoltas). Be wary of buying 1980s Sigma and other non-supported brands.
The most important issue with old glass (if you believe half the internet) is ‘lack of resolution’ or ‘lack of resolving power’. As noted earlier, this is a non-issue. See the notes section at the end of this post if resolution is a bugbear for you (or you have heard otherwise so often that you need proof).
Optically, the biggest issue you will get with old glass generally is the lack of modern coatings. This presents itself with a greater loss of contrast and more flare when shooting into the sun. It occurs because old lenses are bad at controlling internal reflection between lens elements (modern lenses absorb the stray light through their coatings). You need to be aware of this when shooting with older glass, but in practice it is not a big constraint as you rarely need to take such a shot, and when you do, the resulting aberrations can often be used artistically (who needs Instagram when you have the original glass that causes the effect…). Another issue you may find at the low end is greater optical distortion. Old lenses were designed without the benefit of current computer simulation power, but that does not have to be an issue when the modern photographer has the benefit of modern computing power in post production, and most optical distortion can be corrected to current lens standards in Lightroom.
The issue with lens age and mold is just a part of the game. You will spend less money with old glass, but the downside is having to occasionally dismantle a lens or bin it completely if you got sold a dud. If nothing else, learning to work with old glass means you are forced to open the odd one up, and get a better understanding of what a lens actually is. The important thing is to actively look out for mold, and either fix it or bin the lens when you do find it (and don’t pay so much on an old lens that you cannot afford to bin it).
My default kitbag includes the following:
Although there are 5 lenses here, 3 and 4 are tiny, so we're only really talking 3 large lenses. I've got the most commonly used focal lengths at a constant f2.8. I also have a couple of fast primes at 50 and 75mm, for low light and shallow depth of field. Finally, I have the long end covered up to 315mm at a constant f4.
I have nothing between 55-105mm except a 75mm prime, but that's ok by me. I could cover it via my small beercan (50-112mm APS-C equivalent, f4 constant), but that stays home as that range isn't really interesting to my shooting style except for portrait (and that is covered by the Minolta 75mm, especially when it can open up to f1.4 for flattering depth-of-field shots or indoor low light).
The takeaway from this lens set is that two of them are 1980s Minolta glass, both bought from eBay at a fraction of the price of equivalent modern glass. Yes, they have issues when shooting wide open into the sun, but to be honest, doing that doesn't often lead to keepers with any glass (unless you are shooting with off-camera flash, and that is another ball game for a later post). I'm happy to put up with fixing the CA in post for the Minoltas, because the famed Minolta color (deep color and good contrast out of camera) means I save that time by not having to sort out other issues in post.
The initial photo from this post is a good example where ‘Minolta color’ becomes useful. The separation between the boy, foliage and sculpture is so large that it almost looks like the boy was composited in! In fact, the separation out-of-camera was actually larger: the sculpture was darker and the background foliage was lighter. The high contrast between the image elements gave me a lot of help in making mask selections and more than made up for the time lost (10 seconds) fixing the chromatic aberration.
An issue that crops up with old lenses is 'resolution' or 'resolving power'. The argument goes along the lines of 'old lenses were designed for far lower resolution than current cameras, so image quality suffers'. What that argument doesn't tell you is what that loss actually looks like in practice.
Let’s look at the problem with a hypothetical simple sensor…
Consider a digital camera sensor with only three photo sites. Each site can detect black or white. If we try to take a picture with this three pixel sensor, such that only the centre pixel is lit, we see an image such as i).
We will get a high voltage for the center pixel and a low voltage for the outer pixels. The sensor digitiser will convert these to the signal ii), which as a digital bitstream is '010', and that is what our RAW file will contain. When we view the RAW file as an image, we see our row of three pixels as per i): black, white, black.
Now, suppose we put a lens on the front of this sensor that is unable to resolve correctly. What would happen? As a waveform, we would see something like iii) coming out from the sensor. The centre voltage has spread out so it is no longer a definite high voltage anymore, and the two low values have also degraded. How does the digitiser handle this? Well it has a trigger level half way between the high and low voltages. If the voltage is higher than this level, we see a ‘1’ in our RAW file, and a ‘0’ for anything else.
The digitiser will still see the center pixel as a ‘1’ because it is still more than the trigger level, and it will still see the outer pixels as ‘0’ because they are still below the level. Our blacks are still black, and our whites are still white, despite the fact that the input signal to the digitiser is significantly degraded!
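That thresholding behaviour is easy to simulate. Here is a minimal sketch of the three-pixel digitiser described above; the voltage values are made-up illustrative numbers, not real sensor readings.

```python
# Sketch of the hypothetical three-pixel sensor's digitiser.
# Voltages are normalised 0.0-1.0 for illustration only.

def digitise(voltages, trigger=0.5):
    """Convert analog pixel voltages to a digital bitstream.

    Any voltage above the trigger level reads as a 1 (white),
    anything at or below it reads as a 0 (black).
    """
    return [1 if v > trigger else 0 for v in voltages]

# A perfect lens: hard black, white, black.
ideal_sensor = [0.0, 1.0, 0.0]

# A soft old lens: the 'white' pixel has bled into its neighbours,
# so all three voltages are degraded.
degraded_sensor = [0.2, 0.7, 0.25]

print(digitise(ideal_sensor))     # [0, 1, 0]
print(digitise(degraded_sensor))  # [0, 1, 0] - the same RAW bitstream
```

Both inputs digitise to the same '010' bitstream: as long as the lens's degradation keeps each voltage on the correct side of the trigger level, the RAW file is unchanged.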
As an aside, this feature of digital systems is actually the only reason why we started encoding analog values digitally for both storage and transmission: as long as the noise introduced to our digital signal is less than half the difference between a logic '0' and a '1', we get no noise, because a '0' is still a '0' and a '1' is still a '1'. In this case, our noise is the lack of analog resolving power before the digitising stage, but we do not see it because its introduced error is less than half our sensor's bit accuracy, and is therefore rejected.
So, unless the lens is good for less than half the maximum resolving power of the camera sensor, you do not need to worry, because the digitising process rejects the noise introduced by the lens. Put another way, if your lens is only good for 14MP (which it typically is), then you do not need to worry on a 24MP camera, because 14 > 24/2.
Ok, so now you are thinking 'yeah, but most sensors are not just detecting 0 or 1: they detect values from 000000000000 to 111111111111 (plus they use separate photo sites for the red, green and blue components of each pixel), so what you would actually see with three adjacent pixels from a real sensor is dark grey-light grey-dark grey (as per the image above) instead of black-white-black as per i)'. You might even be thinking 'the resolving power of a lens varies with focal length: the more you zoom, the more the light travels through less of the glass area, which amplifies errors and changes resolving power for the worse… so at some point, a long tele lens like the beercan will be causing resolving errors big enough to worry about'.
Yes, exactly that will happen, and there is a name for this process. It is not called something scary like loss of resolution, loss of resolving power, or wasting your sensor resolution by being a cheapskate. It is simply called losing contrast. Blacks turn to grey and whites become less bright. That is not anything to worry about because you can quantify it: you can see it physically just by looking at your photograph. An old lens that is focusing correctly but resolving to a lower level than your camera will just lose contrast. That's not really any surprise, because you know all about this already: a good lens that you artificially make worse by rubbing a greasy thumb all over it will do exactly the same thing. The grease scatters the light and causes the same resolving issue.
You can correct all this easily: just increase the contrast in post. Better still, you can realise that the issue is really micro contrast rather than contrast, and increase clarity (except of course on skin, where the loss of contrast is potentially a good thing). Either way, all it takes is a small tweak on a single Lightroom slider (and perhaps a mask to avoid changing skin contrast).
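The 'increase the contrast' fix is essentially a linear remap of pixel values, which is what a Lightroom contrast slider is doing behind the scenes in rough terms. A minimal sketch with NumPy (the pixel values are invented for illustration; real raw processors work in more sophisticated tone spaces):

```python
import numpy as np

def stretch_contrast(pixels):
    """Linearly remap pixel values so the darkest value becomes 0.0
    (pure black) and the brightest becomes 1.0 (pure white)."""
    pixels = np.asarray(pixels, dtype=float)
    lo, hi = pixels.min(), pixels.max()
    return (pixels - lo) / (hi - lo)

# A low-contrast strip from an old lens: greys where there should be
# true black and white (the greasy-thumb effect described above).
flat = [0.2, 0.7, 0.2]
restored = stretch_contrast(flat)
print(restored)  # [0. 1. 0.] - full black-to-white range recovered
```

The detail (the relative differences between pixels) was there all along; the stretch just restores the range the lens failed to deliver, which is why a lower-contrast old lens is not the same thing as a lower-resolution one.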
Alternatively, you can just use a brand of old lens well known for high contrast so that loss of contrast is less of an issue. Well, ‘Minolta color’ means many things, but one thing it means is really good contrast, so if you are using Sony NEX or Alpha, buy Minolta and don’t worry!
August 17, 2013 Leave a comment
A few days ago, myself and my partner visited York Cathedral (for US readers, that’s the original York, not the New one). Although an atheist, I enjoy exploring great buildings, and cathedrals are some of the better sort in terms of photogenic opportunities.
Although I don’t believe in God, I can think of no valid reason that my lack of belief invalidates either the faith of others, or invalidates the need for things like churches. In fact, there are several very good arguments I can think of that validate religion without shaking my belief that there is no god.
Most people believe that medicine is a good thing. It cures your illness and makes you better. But what about Aspirin, Paracetamol, and other pain relief? These are not medicines in that sense, because they don't actually cure anything: they can't make us better. In fact, they can keep us ill for longer, because they fool us into acting as if we are well when we should be resting and taking it easy.
So why use painkillers at all? Because painkillers are not there to cure us and make us better people: they are to help us make do with what we have to endure now.
We live in a world where some people have everything, and many have far less. Those that have less will be born poor and probably die the same way. Religion is their pain relief, and without it they will have nothing for the pain. Why would you deny them even this? In war, they say ‘You’ll never find an atheist in a foxhole’. I bet you’ll not find many in a slum either.
Meanwhile in the more affluent world, we have people going through grief and loss. Why not allow them pain relief through belief? To do otherwise would not be humane, especially when belief has more efficacy than competing ideas such as the hard truth.
Finally, it is worth pointing out that as well as making us face up to our situation, Religion has a good record for curing addictive and mental behaviour. Anyone who has been to Alcoholics Anonymous will know all about this.
Science and psychology are very bad at curing mental conditions to the point where medication is no longer necessary. At best, they can only manage them until the brain chooses to cure itself with time. One of the best ways for this to happen is to allow the brain to socialise with other brains. We know this because people who make good recoveries from mental illness are generally those with strong social networks.
The social aspect of organised religion seems to create exactly those social networks (name a religious event that is not social: birth, mass, marriage, death, and everything in between are all designed to be social events with God as witness), and is good at keeping us sane through social contact, a social safety net, a lack of fear of the unknown, and a sense of belonging. 'Primitive' tribal groups tend to be very religious/social, and also seem to lack many of the mental conditions the developed world has, despite living a much more dangerous existence.
It is said that religion only works when everyone has the same one. Otherwise we get intolerance on a mass scale: ethnic cleansing, purges, holy wars. Well, maybe, but there seems to be a lot of political force driving it all. Were the Cathars heretics or an opposing political force? Were Catholics and Protestants in Northern Ireland fighting for their religion or over the asymmetric share-out of land, housing and wealth (caused by the British favouring one side early on)? And if that asymmetry had not occurred, would the two sides have just co-existed despite their differing religions?
Looking at modern history, are Sunnis and Shias fighting for different religious beliefs or because they are ethnically, politically and economically separate groups (which in the Islamic world generally means ‘from different tribes’)? Were Jews targeted by the Nazis because they ‘killed Jesus’ or because some prominent Jews came in after World War 1, bought at rock bottom prices, and by the late ’20s were pretty well off and, through no fault of their own, already looking like a handy political target when everyone else was being burdened by war reparations and recession? If either was only religiously motivated, then you would also see forced conversion because religion would be the only point of contention (and conversion takes the contention away), but you don’t because it isn’t.
I am in no doubt that this is a contentious argument and apologise in advance. For those of faith it is often hard to believe that it was not their faith that was causing their troubles, but that they were simply being targeted as a social or economic group, being forced to move away from their land to make way for others, or were simply being set up as a scapegoat for other harder to fix and deeper problems (economic woes being the favorite reason). Unfortunately history and hindsight shows this is what often happens.
In engineering and science we have different ways to say ‘not correct’. We have known error (or ‘accuracy’), and approximation as well as plain wrong. Classical Newtonian Physics is wrong as soon as we consider General Relativity and Special Relativity, yet we build aircraft and bridges based on Newtonian Physics. Why? Although Newtonian Physics is not the whole story (and is therefore technically wrong), it is quicker to use, and gives us such a small error that it is a good approximation, and our accuracy is very good. So, despite the theory being technically wrong, it works in the real world (except of course when we are near a black hole or travelling near the speed of light, neither of which occurs often for me, YMMV).
So what? Well, what if ‘god’ is not just another word for ‘conscience’ like many non believers assume. What if ‘god’ is just an approximation of everything we don’t know and can never act on in the real world: human nature and unforeseen natural events. By calling all that stuff ‘god’ and ‘acts of god’, maybe we lose our fear of it, and we get to live better, happier lives. Like Newtonian Physics, a belief in god would be technically wrong, but it leads to such a small error in how we live our lives that it is a good enough approximation in the real world, and actually makes us better at doing the right things without overcomplicating it. If approximation is ok for one of the core (and most used) theories in Physics, it must be fine for religion, right?
One of the biggest issues levelled against religion is that religion is always anti-progress.
Science is a questioning system that increases knowledge, whereas Religion constrains it. Blind faith prevents a questioning attitude, and if we all did that, we’d still be stuck in the Middle ages.
History says otherwise: religion has historically played a major role in science.
Take genetics. Ever wondered why scientific notation is always Latin? ‘Oh, that’s because Latin was the language of learning during the renaissance and onwards, so learned people readily adopted it’. No. Latin is the Language of God.
If you were ill, you didn’t go to a doctor: you went to a priest. Or Shaman, or Witch doctor: people of religion, because they invented medicine. And in Europe, they wrote in Latin. The clergy were the only people with enough time to look in detail at the theory of how to breed animals, or get more honey from bees, or ferment better beer, and they wrote it all up in Latin. Go back far enough, and the clergy are the giants that the Theory of Evolution rests upon. And you don’t even have to go back that far.
Scientists, Medics and Engineers as we know them did not appear until very late in our history, and even then, all of them were typically either religious or deeply religious until as late as the mid 1900s. We forget this important fact at our peril. Religion has not prevented scientific discovery. We know of people like Galileo, but they were the exception rather than the rule. Most scientists during the first big surge of technology that led to the world we now live in (17th, 18th and 19th centuries) were generally religious as well has having a critical, scientific mind. The two are not mutually exclusive.
If we go back to the start, life was hard in the 6th century, and the only people who could take time from just trying to stay alive and start building up scientific theory were the clergy. We know this because universities came from cathedral schools and monasteries. Put another way, how many sciences would conflict with the belief that there is an unknowable force out there that cannot be measured, cannot be seen, and does not interfere with physical processes because it is outside the material world? Almost none: probably only evolution and the centrality of humans in creation. That makes sense, because historically the church has not only set up higher education, it has also repressed almost no scientific theory (or done such a bad job of it that we were able to put a man on the moon regardless).
Christianity is not alone in this. When you are writing numbers and formulas, you are not writing in the language of pure mathematics: you are writing in the language of Allah. Numbers are Arabic, and we are implicitly using the Islamic concept that god must only be shown symbolically when we consider reality. The medieval Christian mind saw god in the natural world through purpose: creation seemed to be laid out for us. Modern scientists with a religious background generally see god through the complex ordering of simplicity within reality that points to a natural perfection (i.e. when we try to represent reality symbolically, it is actually very ordered and follows rules, hence science), and that is a very Islamic thought, carried on from the Logical Tradition but supercharged with Arabic Mathematics. This is what gives theory the ability to predict, and without it, science just becomes an Ancient Greek theoretical talking shop.
Finally, there is the belief that accelerating scientific discovery is occurring because religious dogma is giving way to it. Perhaps, but more likely it is due to the increasing complexity of finance. If you had just discovered a new metal alloy in 1013AD, you would have to approach a member of the Royalty and agree to make cannons for them. In 2013AD, you just use AIM or other investment systems to raise capital. Fast track tech investment makes technology advance faster, not atheism.
Let me lay this out so there is no doubt in my atheism:
Nevertheless, religion is one of the best examples of Richard Dawkins’ ‘meme’: an idea that is naturally selected to remain popular (in fact, religion is perhaps one of the best examples of an evolving meme because it has lasted longer than any other big idea).
Let us hope that the major religions will evolve from these few remaining anachronisms and accept change.
I don’t believe in god. I don’t belittle those who do, because faith works in the real world.
Plus religion can be very photogenic.
July 14, 2013 2 Comments
There’s a Lensbaby review on Amazon that ends with words to the effect of ‘this is nowhere near as sharp as my Canon L Lenses, and I think I’ll stick with the L Lens thank you’.
Let me tell you the alternative side of the story. You may or may not like Picasso’s paintings, but look up his earliest works. The guy could really paint! Picasso turned to cubism and other primitive styles because he was at the top of his game technically and had nowhere else to go. So it is with Lensbaby: if you know your camera very well, and want to mix it up a bit, Lensbaby is a direction to take. For some, that may be a step backwards technically, but it can occasionally be a bigger step forward creatively.
This review won’t go into what a Lensbaby is and what it looks like, but instead I’ll go through what it does and doesn’t do, and what I think it is best used for.
First though, a little history
Lensbaby was released in 2004 with modest expectations. It was launched at the Wedding and Portrait Photographers International trade show in Las Vegas. The creators, Craig Strong and Sam Pardue, sold out on the first day, and spent the remainder of the show working nights in their hotel room building more Lensbabies, all of which sold out the next day. Lensbabies are now a mass produced, international product.
A modern Lensbaby consists of a primitive optic (such as a single, uncoated lens, a plastic lens, or even just a pinhole). This optic has very little in the way of advanced features, so you can expect large variability, chromatic aberration, vignette, low contrast, and all the other things photographers usually pay good money not to have in a lens.
The whole point of the Lensbaby system is that you embrace all those aberrations and use them creatively. So, stick with the Canon L Lens (or Nikon ED, or Sony Carl Zeiss) when you want optical quality, but consider Lensbaby when you want to trade sharpness and quality for something more edgy, dreamy or totally leftfield.
I’ll let you look up the different types of Lensbaby and how you physically use them at lensbaby.com, and dive straight into the things you really need to know when considering owning a Lensbaby…
Lensbaby starts off fairly cheap, but all the accessories you need before you have a system you can use creatively soon add up.
Maybe the Lensbaby is worth the money. Well, there’s a number of ways to work out the true value of a given lens, but for me the best indication is resale value. Look on eBay, and you will see that Lensbabies can easily go on auction for significantly less than retail price. In my opinion, Lensbabies don’t hold their value because lots of people just don’t understand them or didn’t realize what they were buying into (See note 1 below), and the Lensbaby immediately ends up for sale as ‘opened but practically unused’ on eBay.
So that is your first big tip: If you want to try a Lensbaby, buy it second hand on eBay.
I bought a Lensbaby Muse, the Lensbaby tool, the double lens optic, a three-optic starter set, a Lensbaby book, and a custom Lensbaby carrying case. All as a single lot, hardly used and fully boxed, for a bit more than the cost of a camera battery. A great deal for me (because it is a good, feature-rich set to start exploring with), but I would have been furious if I had been the one selling, because the original owner must have paid something closer to the price of that battery plus the entry-level camera it fits.
Now, don’t get me wrong here. I’m not saying that Lensbaby is low quality stuff. It is actually surprisingly high quality (a lot of the bits I assumed would be plastic are machined metal for example). What I am saying is that lots of photographers buy Lensbaby but don’t like it or get bored quickly and dump on eBay, and that’s what pulls the resale price down.
Everything about the Lensbaby is not just simple but downright primitive. You could hand a Lensbaby to a photographer from the 1800s Wild West and they would totally understand the technology. Simple uncoated lenses, pinholes, Holga-quality toy lenses. You don’t even get aperture blades: you have to swap out metal disks with the aperture holes cut out. And of course, the Lensbaby is completely manual. It doesn’t have any electrical connections at all. Your digital camera won’t even detect a lens is attached, so you have to know how to force the camera to fire even if it doesn’t detect a lens (on my Sony Alpha A77 it’s Menu Button > Cog 1 > Release w/o Lens set to ENABLE).
Focusing is done with your fingers: you manually change the shape of the lens body, and that takes a lot of practice. It is easy to take a Lensbaby photograph where everything is dreamy and blurry, but difficult to take a technically good Lensbaby photograph (where the main subject is typically in sharp focus).
This difficulty is hidden by the name. You might be thinking ‘Lensbaby: ahhh! It’s a cute baby lens so it must be easy to use!’.
The reality is actually ‘Lensbaby: it’s the most primitive lens you can put on your camera, so you have to know your camera inside-out, because you will be the one sorting out the focus, depth of field, contrast, keeping aberrations at bay, and changing nappies. You will typically be doing most of that not only manually, but directly with your fingers, so you’ve got to be prepared to get your hands dirty’.
So second tip: don’t buy a Lensbaby unless you understand how to use your camera in either Aperture Priority or Full Manual, because those are the default modes you will be using with a Lensbaby.
If you don’t want to get your hands dirty, cheap traditional lenses and some Photoshop filters/blurs may be a better bet.
The images above were all taken using 1980s lenses (the classic Minolta ‘large beercan’ and the Minolta 50mm f1.4, both of which can be had on eBay for cheap, and both of which work perfectly in full automatic on my Sony A77), whilst I was practicing traditional photography skills such as use of traditional lens filters (such as polarizers, old-school on-lens graduated filters), and lights and light modifiers (natural light reflectors, softboxes, etc). The photographs are as seen out of camera.
It is worth considering whether practicing that traditional stuff with inexpensive old-school optics will, for the same money, make you a better creative photographer than going off on a tangent with Lensbaby.
There is a very good, but also very subtle reason for using Flash with Lensbabies, and it was staring me in the face from the moment I unpacked my Lensbaby Muse. It was the photograph on the product packaging. It looks like this:
That’s a really nice photograph. But if you try to get that same controlled depth of focus, you also end up with low contrast, which makes the image look flat in tone and washed out in color. You can fix it in Photoshop, but then your photograph starts to look like the depth of field was done with a Photoshop camera blur.
Look at the face to see how it was done: there’s a big directional fill flash. That’s what is bringing the contrast back into the subject. You can even see its direction if you look at the shadow cast by the goggles.
Third tip: if you are using Lensbaby professionally, you typically need sharpness and contrast in the area of focus, and you use Flash extensively (or natural light with reflectors) to give you the contrast.
This will come as no surprise to wedding and portrait photographers, but may be a surprise to the rest of us. Knowing how to set up a Flash that doesn’t look too obvious is often an important part of taking good Lensbaby photographs. That typically means you know how to put your Flash off-camera and how to use light modifiers, both of which are advanced topics.
The aperture on a Lensbaby is a perfect circle cutout, so your bokeh will be perfect circles rather than polygons. Most Lensbaby optics have poor or no coatings and zero flare resistance. If you want bokeh and flare, Lensbaby is where it is at.
Fourth tip: Lensbabies allow you to add all sorts of optical aberrations if abstract or transformed graphics are your thing.
All of my Lensbaby photographs here except the bokeh one are shown as they came out of the camera. That is a big advantage of Lensbabies: they take far less of your time in post-processing, and you often don’t need to do much in post.
The flipside to this is that everything a Lensbaby does can be emulated in Photoshop or Lightroom. The plastic lens is just a big surface blur, the glows can be done with Gaussian blur, and the streak effects are motion and zoom blurs.
Emulating Lensbaby in Photoshop certainly gives you more control, but it doesn’t always give you the movement and atmosphere that a Lensbaby gives.
The above images show how I emulated the Lensbaby effect in one of my own shots. The entire process took about 3 minutes to do, plus another 3–5 minutes of tweaking.
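For the curious, the selective-focus part of that emulation can be sketched in a few lines of pure Python. This is only an illustration of the idea, not what Photoshop actually does: a simple box blur stands in for the Gaussian blur, and a radial mask blends the sharp original (at the centre) into the blurred copy (toward the edges). The function names are made up for this example, and the image is just a 2D list of grayscale values.

```python
import math

def box_blur(img, radius):
    """Naive box blur on a 2D list of grayscale values (a stand-in for
    the Gaussian blur a real image editor would use)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, n = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        total += img[yy][xx]
                        n += 1
            out[y][x] = total / n
    return out

def lensbaby_style(img, radius=3):
    """Blend the sharp image with a blurred copy using a radial mask:
    fully sharp at the centre, fully blurred at the corners."""
    h, w = len(img), len(img[0])
    blurred = box_blur(img, radius)
    cy, cx = (h - 1) / 2, (w - 1) / 2
    max_d = math.hypot(cy, cx)
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # t runs from 0 at the centre to 1 at the corners
            t = math.hypot(y - cy, x - cx) / max_d
            out[y][x] = (1 - t) * img[y][x] + t * blurred[y][x]
    return out
```

A real emulation would also nudge contrast and add a slight glow, but the blur-plus-radial-mask blend above is the core of the ‘sweet spot’ look.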
Fifth tip: Lensbaby provides graphical effects optically, so you don’t have to do it in post… but if you are good with post, you may not need a Lensbaby.
So what exactly do you use Lensbaby for? Lensbabies simplify your subject until you almost end up with a graphic rather than a photograph.
That is why wedding photographers use them so much: Lensbaby shots move the story of the day along with strong graphics that focus on only one thing: here’s the wedding cake on its own, here’s the shoes and dress the day before. In each case, anything extraneous is lifted out of the photograph via the selective focus.
Sometimes you need to tell the story not by a sharp image, but a feeling of something: the blur of the bride’s bouquet being thrown, or zooming into the happy father, lifted out from the clutter of the congregation because he is the only person in focus.
Sixth tip: A Lensbaby is good if you want to tell a story or imply a feeling through photography, because it is a good way of paring the photograph down to the bare story, graphic, or emotional elements.
So, a Lensbaby is certainly not for purists: you may prefer to spend your money on a 50mm f1.4 or cheap 1.7, and shoot wide open. That and a bit of post processing will get you to almost the same place as a Lensbaby. Doing your own post takes up time though, and because it is much more of a controlled process, doesn’t give you the edgy, primitive effects that Lensbaby can give you.
Although Lensbabies are primitive, you need a lot of skill to use them well: manual control and a good understanding of off-camera Flash or natural lighting are important if you want to use a Lensbaby professionally.
There is a very strong ‘Lensbaby effect’ and, like most strong effects, it may become old quickly if you use it often.
A Lensbaby is something you will typically take out when you have taken all your money shots, and have time to go a little leftfield. A Lensbaby is not a replacement for good standard lenses.
Lensbabies don’t hold their value, so you might want to consider buying second hand. eBay is your friend.
February 6, 2013 Leave a comment
Just a quick post for anyone looking at the new Sony a77 firmware and deliberating over whether to update or not.
Although the update omits features some would have liked (a JPEG engine updated in line with the A99, or the tethering support the A700 had), Sony have a track record of adding undocumented enhancements.
The 1.07 firmware is no different: start-up is now very fast. If I turn on the a77 and immediately try to take a photograph, the delay is about half a second, the back LCD is displaying live-view in about half a second, and the full interface is up in about a second. The sluggishness the camera had on release is now well and truly gone!
Changing from EVF to the LCD is now just over half a second.
Shutdown is immediate, and you no longer get the extended ticks and click that you got on release: you now get the noise of the lens retracting and the normal sensor clean ‘shake’ that all Alphas do. Gary Friedman has previously tested real shutdown times for the a77 (via current drain caused by battery use), and notes that the camera shutdown time hasn’t reduced much since initial release: in later firmware releases, it just plays dead while it shuts down. Gary may well be right there, but it doesn’t really matter because shutting down the A77 and then immediately turning back on again, then trying to take a photo doesn’t degrade start up time. Whatever it is doing during shutdown, the A77 now stops it and restarts without additional delay.
The Flash seems to work much better than in 1.06 (I tried both the camera’s built-in flash and a Sony HVL-F43). The A77 still seems to overexpose a little in some situations (especially with the external flash), but is much more consistent with the built-in flash. Anyway, Gary Friedman thinks it’s fixed enough for prime time now.
So, no new stuff, and no changes to the JPEG engine, but you will get a faster interface, and may find the flash overexposure more manageable.
Whenever you update the A77, you always lose the AF micro adjustments, so the update has to be worth the half hour it takes to note the adjustments, flash the firmware, then laboriously attach each lens and enter the adjustment again.
Although the official change list tells us that the only addition is support for lenses I don’t own, the update turned out to be worth the effort for me, because I finally have the nippy and quick camera I had with my non-SLT Alphas.
I know there are many studio and JPEG shooters who are disappointed with the omissions, but along with the video hacks I detail here, the A77 is, for me, a much better camera than the one I initially bought.