Edit, Then Edit Again, And Then Again . . .

Ferry Operator on the Brahmaputra River

Ferry Operator

Photographs © George A. Jardine

When I’m traveling with a workshop group, I almost always have to download and edit each evening, and pick out some of my better photos to use during class discussions and presentations. And for better or for worse, those in-the-field picks generally end up in my final gallery that you see here on the blog.

And so I’m making my initial corrections on a laptop screen, which is far from optimal, while I’m tired and in a hurry. To make matters worse, I might be inclined to pump things up a bit more than normal to post on Facebook, or whatever. It’s only once I’m home that I have time to start rethinking some of those picks and corrections.

Once I’m sitting in front of a much better display, and with the time and patience to really look carefully at the entire shoot as a complete body of work, it’s time to begin finessing things. This is a shot that I loved from the beginning, but I had a really hard time cropping it because I didn’t want to end up with the boat operator right in the middle of the frame, horizontally.

The Majuli Ferry

There were certain elements on the right side of the frame that I felt I wanted to keep in the shot, and I guess I was probably using those elements as a justification for my original crop, which was cutting the bicycle in half, and putting the boat operator much farther to the left. Those compromises left a giant hole right in the middle of the composition, when the boat operator was really the center of interest. Nevertheless, that was the way I originally posted it on this blog. Now that I’ve looked at the photograph a few more times, and had time to really explore the processing, I feel this is a much better crop.

The pied kingfisher was another shot that I was not really sure about until I was home with time to really make a good correction of it.

Pied Kingfisher

This is an extreme crop: only about 1700 pixels across out of the 5D MKIII’s native 22MP (reduced here to 750 pixels across). And despite the fact that it was shot hand-held with a 2x converter at 400mm, it is a very sharp capture. But my original processing didn’t account for the fairly extreme chromatic aberration and fringing, which I have finally corrected here.

Most of Lightroom’s incredibly strong Lens Corrections are seldom used, frequently misunderstood, and generally not taught very well. But they’re not rocket science. They just take a little time and attention to detail, which simply spitting photos out to social media doesn’t require… or reward.

The idea of going back and looking hard at your edits and corrections over and over and over again is one of the most difficult things to convey during the brief time you’re together in a workshop. It’s sort of like library organization: it really only starts to work for you once you’ve spent a bit of time on it. But in the end, it’s that extra effort that makes it all worthwhile.

Then… there’s content, which is an entirely different conversation. When I’m leading a workshop I always try to encourage the group to think about how they are going to tell the story of their experience with their pictures. Not an easy subject, but one that revolves around basic storytelling techniques, and making sure you are getting the small detail shots, along with the obvious, larger landscape (establishing) shots. Basically I try to encourage the students to shoot everything, because when you’re back home piecing it all together, you’ll find it’s those little detail shots that really help you recreate the texture of the place.

Tea Picker

Here’s an example. Near the end of the India workshop, we boarded a slightly larger ferry with a crossing on the Kamalabari – Neamati line. That particular day was much hotter than our first ferry ride, and everything just seemed washed out to me. I couldn’t see pictures anywhere. But as we were crossing, a guy walked around the boat and handed everyone a pass. I stuffed mine into a pocket thinking I might keep it with my travel stubs and other memorabilia once home.

Ferry Pass

I kept this crumpled paper around on my desk for a few weeks while I was editing pictures, and finally found a way to photograph it and get it into the final gallery. Many of the photos have been updated with better processing, a few have been removed, and this one final detail shot helps me remember that crossing of the Brahmaputra.

The updated gallery can be seen here.

For those of you who are prepared for a real travel adventure, we still have a few seats available for this extraordinary photo workshop coming up again in late November 2015. Click here for details, and remember to get your India visa right away!

A Workflow Essay On DAM That Never Saw The Light Of Day . . .

Door Knocker

Venice 2014

Photographs © George A. Jardine

I was recently asked to write a piece for an organization on Digital Asset Management, and one of the requirements was that it be 400 words. I didn’t push back on this requirement, or even ask why. But it reminded me of negotiating with one of the fast-food style video education aggregators, when they told me that my videos should be no longer than 5 – 7 minutes, because “no one will watch a video longer than 5 minutes.” Which of course is utterly ridiculous. You get what you ask for… meaning, if you pander to an audience that isn’t looking for comprehensive video education, then you will get what you deserve: an audience that wants fast-food video, pre-digested, and then spit out with a barnyard sound.

Despite that limitation I thought a bit about what I wanted to write for this particular org, and decided it was time to share a little project I’ve been working on. It’s a bash script that automates the process of downloading photos, synchronizing camera clocks, adjusting timestamps, renaming files, and making backups. It was a fun and interesting project because doing projects like this forces me to formalize my thinking… or my process, meaning… organize it in such a way that it can be scripted, and then, taught.

Not an easy thing to describe in 400 words!

Anyway, that’s what I wanted to write about, so that’s what I wrote. And by cutting every corner imaginable—while trying to preserve at least the core of the idea—I ended up at 1100 words. Which didn’t fly.

OK, that’s understandable. I would have rejected it too. It’s vague, out-of-context, and incomplete. But I still feel that it should be published, so here is (almost word-for-word) what was submitted.

Workflow is hard.

Well, let me take that back. My workflow is not hard. It’s easy. In fact, it’s so easy that it’s essentially finished at the push of a button.

But let’s back up a little, just to be sure we’re all talking about the same thing. In this article, when I talk about workflow, I’m talking about your basic import workflow. How you get your pictures downloaded, organized, renamed, stamped with basic metadata and keywords, and backed up. In this article, I’m NOT talking about how you identify your best shots, or process them, or export them for your clients. That becomes a much more subjective process that is very difficult to formalize.

What I’m talking about here is the part that you can formalize. And that part consists of the repetitive steps you take (or should be taking) each and every time you sit down to download new photos. How do you go about formalizing a workflow? I admit, this is not the easiest part of the process, because it requires working backwards. You have to start at the end, and think about what you need to do to get to the finish line.

Venice 2014

This process takes time. You don’t just come to it one morning, and decide you can work out the steps. Workflows evolve, and you only come to a series of discrete workflow steps after doing it a bunch of times, and after making a lot of mistakes. To make things even more complicated, formalizing an import workflow requires taking into account a lot of disconnected, but interrelated pieces.

To start that process, I began by thinking about my goals: what are my requirements for the end game? After working on my own Library organization—as well as teaching it for a number of years—I know that I want my entire photo library organized chronologically, with a very specific folder naming routine. The thinking behind why I feel chronological organization works is way out of scope for this article, but I’ve written extensively about it here and here.

Venice 2014

I also have very well-defined ideas about what’s important in file names, which I’ve written about here and here. So basically, the timestamp (including YYYY MM DD HH MM SS + GMT offset) ends up as part of my filename sequence numbers. (After all, why dream up my own subjective sequence numbering system, when we already have a universal one?) Further, whenever I’m traveling I always record a GPS tracklog that I store with the photos, and I use that tracklog to geotag every photo that I shoot on location, which I’ve written a bit about here. I’m a bit fanatical about making sure my geotags are right on the money, and I also always shoot with at least two, and sometimes three or four camera bodies when traveling. Sorting chronologically by the one truly universal piece of metadata across ALL operating systems and file formats (the file name!) only works if all of those camera clocks are accurate and perfectly synchronized.
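
Just to illustrate the mechanics of that geotagging step (this isn’t necessarily the exact tool or command I use, and the paths are placeholders), exiftool is one common way to match a GPX tracklog against capture times. The tracklog itself is recorded in UTC, which is exactly why the camera clocks have to be dead-on:

    # hypothetical example: geotag a folder of photos from a tracklog;
    # the +00:00 tells exiftool the EXIF capture times are already UTC
    exiftool -geotag track.gpx '-Geotime<${DateTimeOriginal}+00:00' /path/to/photos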

The difficult thing about camera clocks is that they all drift a little bit, and to make matters worse, they all drift at slightly different rates. Further, I absolutely hate trying to get them all synchronized to the second (almost impossible) for the geotagging and file naming before every trip. All of this has led me to a system where I never set my camera clocks for local time, but always leave all of them set to UTC. My system of making sure each camera’s clock is perfectly synchronized meant that I was adjusting the timestamps for every single outing anyway, so I might as well correct for the local time zone at the same step in the workflow, which I’ve written a bit about here.
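
To make that correction step concrete, here’s the flavor of it, with exiftool standing in for my script (the six-hour value is just an example, for Mountain Daylight Time):

    # shift a card's UTC capture times back 6 hours to get MDT local time
    exiftool "-AllDates-=6:00" /path/to/download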

Venice 2014

Whew! Still with me? OK, that’s not an exhaustive list of my end game requirements, but I think it probably gives you the idea. To make it all work, I developed a system of creating and applying time zone offsets and clock corrections that can be made more or less “automatically” by simply photographing a synchronized clock on my phone or computer screen at the end of every camera card that I shoot.
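
Here’s a quick, hypothetical example of the arithmetic involved, using the BSD date command that ships with OS X (the times are made up):

    exif_time="2013-08-13 19:59:21"     # what the UTC-set camera wrote into EXIF
    clock_photo="2013-08-13 13:59:14"   # local time visible in the photographed clock
    offset=$(( $(date -j -f "%Y-%m-%d %H:%M:%S" "$clock_photo" +%s) \
             - $(date -j -f "%Y-%m-%d %H:%M:%S" "$exif_time" +%s) ))
    echo "apply ${offset} seconds to this card"  # -21607 = -6 hours for MDT, minus 7 seconds of drift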

The final piece of the puzzle was formalizing the series of steps required for import and backup as a bash (UNIX) script. I plug in a camera card and fire up the script in a terminal window. The script opens the last photo on the card so that I can see the image, and asks me for two things: 1) the local time that I see in the photo, and 2) a folder name and destination for the final photos. The script then uses my system clock to work out where I am, and computes the offset for that card: the local time zone correction plus or minus that camera’s clock drift, for each individual card download. It makes backup copies using rsync (with checksum verification), as well as calculating and storing checksums from the actual camera card during download. It renames the files, and puts them all where they need to be.
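
The full script is too long to print here, but the skeleton looks something like this (the paths and names are hypothetical, and the clock arithmetic from above is omitted for brevity):

    #!/bin/bash
    # Import-and-backup skeleton: copy one camera card, checksum it, and back it up.
    card="/Volumes/EOS_DIGITAL/DCIM"
    read -p "Folder name for this download: " folder
    dest="$HOME/Pictures/Library/$folder"
    mkdir -p "$dest"
    # copy from the card, verifying every file by checksum
    rsync -av --checksum "$card/" "$dest/"
    # calculate and store checksums directly from the camera card
    ( cd "$card" && find . -type f -exec shasum -a 256 {} + ) > "$dest/card-checksums.sha256"
    # make a second, verified backup copy
    rsync -av --checksum "$dest/" "/Volumes/Backup/$folder/"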

Venice 2014

Capturing and storing the checksums directly from the camera card allows me to go back for verification at any time in the future, even after the files have migrated from drive to drive across multiple backups. This helps me detect bitrot and a host of other potential causes of corruption for the entire raw file, not just the raw image data, which is all that DNG validation checks. And the checksums are captured at the one point in the workflow when I will visually inspect each and every frame shot, which is the one time you are most likely to see corruption and be able to identify its genesis.
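
With that manifest stored alongside the photos, re-verifying a copy years later, on whatever drive it has migrated to, is a one-liner (the names follow the hypothetical sketch above):

    cd "/Volumes/SomeNewerDrive/Library/$folder"
    shasum -a 256 -c card-checksums.sha256 | grep -v ': OK$'   # print only the failures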

It’s all wrapped up in one simple bash script, but writing the script wasn’t the hard part. The hard part was looking at what I do each and every time I sit down to download photos. Formalizing the exact steps and sequence required to take me from A to B ensures that it happens in precisely the same way every time, and it forced me to streamline, eliminating or correcting any flawed pieces of the process.

Venice 2014

A Few Pictures From Venice . . .

GPS: 45°26′16″ N 12°20′7″ E

Venice at Night

Photograph © George A. Jardine

We’re back from our bi-annual ICP workshop in Venice. This year we had a record number of students, and they were—by far—my best group ever. We were very lucky to have such a fabulous range of talent and personalities. Thanks to our students, this workshop was incredibly rewarding.

Venice was easier for me this year, mostly due to two factors. First, the superb organizational efforts of The International Center of Photography, our incredible lead guide Sara Verlicchi, and her parent organization, Experience Plus. Second, the weather finally cooperated with this workshop. We had a day or two of rain, but mostly mild temperatures and great skies.

Click here to see the photo gallery.

Getting Smartmontools Up And Running On Mavericks . . .

When I teach workshops or consult with a photographer, the question invariably comes up: “What hard drive should I buy?” I always get stuck on this one, because even if you know a lot about an individual’s workflow and storage needs, there are no simple answers. In the long run I will probably use this article as a starting point for an extended series on the more general subject of photographers’ storage and backup. But until then, one of the very first things I think you should learn is how to test and monitor your existing devices.

Dried Corn

Photographs © George A. Jardine

(Please note that Smartmontools is available for Windows too, but I have not installed or tested it on any Windows machines. If there is enough interest I might try that and write about it. What follows is for Macintosh users.)

You’ve probably noticed in the Macintosh Disk Utility that most external hard drives show up as “Not Supported” under S.M.A.R.T. Status. At least until recently. Apple has heard the call and is finally including native SMART status support for external Thunderbolt drives in Mavericks. I’m not sure yet if all flavors of Thunderbolt support SMART status, but the ones I’ve tested (Pegasus, G-Tech and OWC) do. On both Mountain Lion and Mavericks you can also get SMART status working on most external USB and Thunderbolt drives by installing a bit of software called the SATSMARTDriver, which can be found here: https://github.com/kasbert/OS-X-SAT-SMART-Driver

Please read the README on that page, and pay attention to any warnings or incompatibilities!

(If you are using Mavericks 10.9.2 or later, and you only have Thunderbolt drives, you don’t need to install the SATSMARTDriver to get the full benefit of Smartmontools.)

If you want to add this driver to your system to help it see supported USB drives (or both Thunderbolt and USB drives under Mountain Lion), locate the latest release build, which is currently the SATSMARTDriver-0.8.dmg. Click that link, and on the next page, clicking the View Raw link will download the Mac installer disk image.

Ridge Vineyard Composite

(Be sure to dismount your external drives before you install the SATSMARTDriver, and restart your Mac after installation.)

Once you’ve installed the SATSMARTDriver, what does that give you? Well, not much, if the only thing you’re using is Apple Disk Utility to check your drive’s SMART status. For USB and Thunderbolt drives that do support SMART status, the only bit of information you will get in Disk Utility is a “Passed” or “Failing” message. This is not very helpful, because by the time Disk Utility begins reporting that a drive is “Failing”, it is already pretty far gone.

So the next step is to get the complete report of your drive’s health directly from the drive itself. Which does give you a lot more information. There are at least a couple of apps for the Mac that can do this for you, but under the covers they are simply polling the drive with utilities from Smartmontools anyway, and presenting the report to you using some sort of GUI. You see, the actual report that comes from the drive is a bit obscure, and all these apps really do is filter and format it in various ways in an attempt to make it more intelligible. But none of them are very good, and so using Smartmontools directly from the command line seems like the way to go.

Smartmontools can be a bit tricky to get installed and running, but I think it’s worth it. I also want you to know right up front that Smartmontools is a command line utility, and if you don’t have any experience with the UNIX command line, it’s probably not the answer for you. But if you are familiar with a few UNIX basics and the command line, it’s not terribly difficult. I’ve installed Smartmontools on computers running various versions of OS X going back to Snow Leopard without any crashes or problems. Having said that, I also have to add here that attempting to install and run any command line utility means you are traveling at your own risk. Please don’t even start if you don’t have a fresh, bootable backup of your system (that you’ve tested), or if you don’t feel comfortable using the command line in the Terminal app.

So first, please back up your system. If you don’t know how to create and test a bootable backup of your OS, I would not proceed. Time Machine backups are good, but don’t exactly give you the tools to roll back an installation like this, should you need to.

To get started, you first have to have the OS X “Command Line Tools” installed. Getting the command line tools installed is pretty seamless in 10.9.2, but it wasn’t all that easy in 10.9.0 or 10.9.1, and was completely different in Mountain Lion. So if you’re wanting to install Smartmontools on 10.8.x systems, maybe drop a comment here or write me and I’ll outline the differences for you. On 10.9.2 you can just open a Terminal window and type in anything that requires a command line tool, and the Terminal will bounce back a message asking you if you want to install them.

In this case I simply typed in “gcc” (without the quotes) and that did the trick.

Installing Command Line Tools

(GCC stands for GNU Compiler Collection, and typing this in is just a way to see if it’s there. This installation process is also documented in more detail here: http://railsapps.github.io/xcode-command-line-tools.html)

Clicking the Install button should do the trick. Once you read and click to agree to the EULA, the installer does everything for you. When that’s finished you can leave the Terminal window open. You’re going to need it again in just a minute anyway.

Next you want to download and install the Smartmontools utilities. Go to http://sourceforge.net/projects/smartmontools/files/ and click the text link that says “Looking for the latest version? Download smartmontools-6.2.tar.gz (791.4 kB)”. (It was 6.2 at the time of this writing. The actual version number and text might have been updated by the time you read this.) Once the smartmontools-6.2.tar.gz file has downloaded, double-clicking it will decompress a folder named smartmontools-6.2. You can do all this right in your Downloads folder if you wish, or decompress the tar file somewhere else. It doesn’t matter where the smartmontools-6.2 folder is for installation.
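
(If you’d rather stay in the Terminal for this step, decompressing the tarball by hand does exactly the same thing:)

    cd ~/Downloads
    tar -xzf smartmontools-6.2.tar.gz   # leaves a smartmontools-6.2 folder next to the archive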

Now would be a really good time to read the README in the smartmontools-6.2 folder, and the INSTALL file, especially section G on MacOS/Darwin installation.

Now, it’s back to that Terminal window you left open. To install Smartmontools, you first have to “cd” (change directory) into the smartmontools-6.2 folder. Just leave the Finder window open (probably your Downloads folder window) where the un-zipped smartmontools-6.2 folder is. Then switch to the Terminal and type “cd ” (that’s just cd, with no quotes, plus a space). Don’t forget the space after the cd command. Then without switching to the Finder window, just roll your mouse over the smartmontools-6.2 folder and drag it into the Terminal window.

Installing Smartmontools

When you let go of the mouse, the Terminal builds the path for you. Then with the Terminal window active, press Return or Enter. If the smartmontools-6.2 folder was in your Downloads folder, that path might look something like this:

Installing Smartmontools

Next type in the following: “./configure” (That’s a period, a forward slash, and the word configure. No spaces, and no quotes. Just ./configure.) Then press Return. At this point you should see a lot of text going by. Just wait, and when it’s done you’ll be back to your UNIX prompt, which is probably your username followed by a $ (like georgej$) or just a simple $ dollar sign.

When you see your prompt again, type “make”. (just make, no quotes.) Then press Return.

Again, lots of text, and a few moments later you should be back at your $ prompt. When you see the $ prompt, this time type in “sudo make install” (again… no quotes), and press Return. sudo is a command that gives you “super-user” authority, and requires your computer password. So this time you’ll see a “Password:” prompt. Type in your password, but don’t let it throw you that you don’t see any bullets or any text while you’re typing; the Terminal does not give you any feedback at this point. Just type in your password, and hit Return.
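
(So in recap, once the Command Line Tools are in place, the whole build is just the classic UNIX three-step, run from inside the smartmontools-6.2 folder:)

    ./configure          # work out the build settings for your system
    make                 # compile the tools
    sudo make install    # copy them into /usr/local/sbin (asks for your password)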

Once all the text flashes by, and you’re back at your $ prompt, you’re almost ready. One last item. At this point, if you type in a smartctl command your Terminal will likely report back to you: “smartctl: command not found”. And that’s because the Smartmontools are not in your path… so to speak. So the last item is to update your path. At your $ prompt, type this, and then hit return: “export PATH=/usr/local/sbin:$PATH” (no quotes).

Now you should be able to use the smartctl command from Smartmontools. Unfortunately, making the path permanent is a bit tricky. You see, by adding /usr/local/sbin to your path with the export command, you’re only adding it for the current Terminal session, and making it work every time you need it requires one more step. I’ve added export PATH=/usr/local/sbin:$PATH to my .profile file using TextEdit, and that process is outlined here.

Or, you can just update the PATH variable each time you need to use Smartmontools in a new Terminal window by typing export PATH=/usr/local/sbin:$PATH first. But getting it into your .profile file is pretty easy, and a good idea.

(If you don’t want to use a UNIX text editor, a quick and easy Terminal command to open the .profile file is “open -a TextEdit .profile”. But that won’t work until you have a .profile file! So that’s what the tech-recipes.com article is good for. Anyway, once you’re in mucking around with your .profile file, you might as well add this line too: alias profile='open -a TextEdit ~/.profile'. That line makes it easy to open your profile and edit it anytime by simply opening a Terminal window, and typing the word profile.)
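
When you’re done, the relevant lines in your .profile should look something like this:

    # ~/.profile
    export PATH=/usr/local/sbin:$PATH            # lets every new Terminal session find smartctl
    alias profile='open -a TextEdit ~/.profile'  # open this file for editing by typing: profile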

If you want more information on how the PATH variable works, try this page.

To see what disks are attached to your system, type “diskutil list” (no quotes). Your disks will be shown as /dev/disk0 (that’s usually the boot disk…), /dev/disk1, /dev/disk2, and so on.

To get the SMART status from /dev/disk2, type in “smartctl -a /dev/disk2”.

Now it’s up to you to learn how to interpret the data! I think the best way to do that is to spend a little time on the Wikipedia page for S.M.A.R.T. status: http://en.wikipedia.org/wiki/S.M.A.R.T. Use the smartctl command on a few different brands of drive mechanism, and you’ll find that each manufacturer supports a slightly different list of SMART attributes. Down near the bottom of a SMART report you’ll find a block of text for “Vendor Specific SMART Attributes with Thresholds”, and that’s where the real beef is. The ones to watch are usually 5 (Reallocated_Sector_Ct), 10 (Spin_Retry_Count), 196, 197, and 198 (covering sector reallocation events, current pending sectors, and offline uncorrectable errors). Each vendor may have different names for these attributes.

Reading the Wikipedia page will give you a pretty good hint at which numbers are meaningful. And remember, each drive manufacturer uses different methods of displaying the values and thresholds, so if you spot anything other than a “0” in the RAW_VALUE column for any of the attributes listed above, then Googling that attribute along with “Seagate”, for instance, will usually help you find out what the numbers mean for your specific brand of drive.
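
A quick way to pull just those rows out of a report is to filter the attribute table (the -A flag prints only the vendor attributes; adjust the pattern to whatever names your drive uses):

    smartctl -A /dev/disk2 | egrep -i 'reallocated|spin_retry|pending|uncorrectable|crc'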

Thai Monk

Why is it worthwhile installing Smartmontools and then digging into what the reports tell you? Well, because there’s a lot of useful information there! When a drive starts reporting that more than a few sectors have been reallocated, or when you start to see any uncorrectable UDMA or CRC errors, it’s probably time to replace that mechanism. And trust me… that will be long before Disk Utility reports the drive as “Failing”.

Dear George … or, “How I learned To Love GMT, One More Time.”

Fisherman On The Andaman Ocean

Photographs © George A. Jardine

Yesterday I was presenting Lightroom at the APA Workshop for assistants here in town, and we had a pretty lively discussion about date stamps, time zones, and filenames. If you’ve read some of my previous postings on filenames or on time stamps, you already know that I have some pretty strange ideas on the subject.

As time has passed, I’ve become more convinced than ever that the one natural, already-existing, best all-around sequence number in the universe is right there in front of us. Why invent a new sequence number, when the actual moment a photograph was taken is an easy-to-access and perfectly valid universal sequence number?

Collections of photos assembled to tell a story have their own logical sequence. Which is obvious, and needs to be preserved so that the collection can tell its story. That’s why we have collections. But collections aside, no matter how I choose to search or view the photos in my library, by text search, by keyword, by folder, or whatever, I always—and I do mean always—want to see the results of that search in chronological order. A text or keyword search is a way to narrow down a library into a manageable chunk, and after that your search becomes visual. In a visual search, shoot chronology gives a meaningful context to the results. And my file naming gives me that context—both in the catalog, and in the file system.

That “in the file system” part of the criteria is important for legacy and archival reasons that many photographers don’t get when they ask ‘Why not just sort by capture time, in the catalog?’

So, I have a bunch of reasons why the actual time stamp is the basis for the filename sequencing in my photo library. The only fly in the ointment is if you’re shooting while actually crossing time zones.

When I returned home after the workshop, I found this in my mailbox:

Dear George,
What do you do with the capture time when you cross ‘backwards’ in time to a new zone. If the time on the camera is changed to the new time zone and the files are named using the capture time they will not appear in the sequence in which they were taken on that day. For example, we travel from Australia to Hawaii every year for holidays. When we go there we cross the International Date Line and arrive there at an earlier time on the same day that we left Australia. Obviously if the date in the camera is changed to US Pacific time, the early photos taken on arrival in Honolulu will sort ahead of those taken just before we left Sydney.

Do you have a technique for dealing with this?

Since I made my New Year’s Resolution last year to always keep my cameras set to GMT, that’s what I’ve been doing, and it seems to be working out pretty well so far. (I haven’t lost weight, but at least I’ve been sticking to that one resolution. Admittedly, I also have not been traveling as much, so that one resolution probably hasn’t been tested well enough in the long term to prove bomb-proof. And that little thing about ‘what happens when you cross time zones while shooting’ has been in the back of my mind too, but so far, it hasn’t come back to bite me in the butt.)

Long story short, this e-mail question from a customer caused me to start thinking a bit harder about the problem, and I began writing back a long thing about how ‘what you really want is something that sorts by YOUR internal time clock… not the local time!’… or some crap like that, essentially dodging the question. It’s when I got to that point that I realized he was right! What’s truly needed is a way to sort by the actual moment the photos were taken.

Why not encode GMT into the filenames, rather than the “corrected” local time? So the solution turned out to be pretty easy. It’s back to GMT again.

Weighing A Valuable Idea

When I made my New Year’s Resolution, I started keeping all my cameras set to GMT. Then, when I got home from a shoot, I would first adjust the time stamp to the local time, and then create the new filenames using the adjusted time stamp.

That process gave me filenames that look like this:

20130813-135921-MDT-5DM30017.CR2

(I also encode the time zone abbreviation into the filenames, because without that, the actual time stamp is meaningless… something I wish the EXIF committee would wake up to. The original filename is also appended to the new filename for legacy reasons, and… to give me an accurate sorting sequence when I’m shooting more than one frame per second with a motor drive.)

The solution is to create the filenames using the GMT time stamp, and only after that change the time stamps in the catalog. This workflow gives me the best of both worlds: a filename that sorts chronologically no matter how or where I view it—just as before—and… a time stamp in EXIF that allows me to view and sort by that, if I need to (at least within the catalog…).

This process gives me filenames that look like this:

20130813-195921-GMT-5DM30017.CR2

Which is exactly the same, only with the six-hour difference between GMT and MDT added back in. If I need to see a time stamp reflecting local time, I look at the Info Overlay or the EXIF anyway. The filename only serves as a sorting mechanism, so it’s all the same. And the next time I cross a time zone shooting pictures, the photos will still sort “chronologically”, at least according to my clock. 🙂
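
For what it’s worth, the renaming itself doesn’t have to be exotic. My script handles it, but as a sketch of the pattern (not my exact command), exiftool can build these names in one pass, as long as it runs before the time stamps are shifted to local time, while EXIF still holds GMT:

    # rename to YYYYMMDD-HHMMSS-GMT-<original name>.<ext>, e.g. 20130813-195921-GMT-5DM30017.CR2
    exiftool -d "%Y%m%d-%H%M%S-GMT-%%f.%%e" "-FileName<DateTimeOriginal" /path/to/photos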

So thanks to Geoff for sending in that question. It’s that kind of carefully thought-out and crafted question that sometimes causes me to rethink things, and maybe even come up with a better solution for a problem. And in the end, questions like that also show me which customers out there are truly paying attention, and they make it all worthwhile.