Edit, Then Edit Again, And Then Again . . .

Ferry Operator on the Brahmaputra River

Ferry Operator

Photographs © George A. Jardine

When I’m traveling with a workshop group, I almost always have to download and edit each evening, and pick out some of my better photos to use during class discussions and presentations. And for better or for worse, those in-the-field picks generally end up in my final gallery that you see here on the blog.

And so I’m making my initial corrections on a laptop screen—which is far from optimal—while I’m tired and in a hurry. To make matters worse, I might be inclined to pump things up a bit more than normal to post on Facebook, or whatever. It’s only once I’m home that I have time to start rethinking some of those picks and corrections.

Once I’m sitting in front of a much better display, and with the time and patience to really look carefully at the entire shoot as a complete body of work, it’s time to begin finessing things. This is a shot that I loved from the beginning, but I had a really hard time cropping it because I didn’t want to end up with the boat operator right in the middle of the frame, horizontally.

The Majuli Ferry

There were certain elements on the right side of the frame that I felt I wanted to keep in the shot, and I guess I was probably using those elements as a justification for my original crop, which was cutting the bicycle in half, and putting the boat operator much farther to the left. Those compromises left a giant hole right in the middle of the composition, when the boat operator was really the center of interest. Nevertheless, that was the way I originally posted it on this blog. Now that I’ve looked at the photograph a few more times, and had time to really explore the processing, I feel this is a much better crop.

The pied kingfisher was another shot that I was not really sure about until I was home with time to really make a good correction of it.

Pied Kingfisher

This is an extreme crop—only about 1700 pixels across out of the 5D MkIII’s native 22MP (reduced here to 750 pixels across). And despite the fact that it was shot hand-held with a 2x converter at 400mm, it is a very sharp capture. But my original processing didn’t account for the fairly extreme chromatic aberration and fringing, which I have finally corrected here.

Most of Lightroom’s incredibly strong Lens Corrections are seldom used, frequently misunderstood, and generally not taught very well. But they’re not rocket science—they just take a little time and attention to detail, that simply spitting out to social media doesn’t require… or reward.

The idea of going back and looking hard at your edits and corrections over and over and over again is one of the most difficult things to convey during the brief time you’re together in a workshop. It’s sort of like library organization—it really only starts to work for you once you’ve spent a bit of time on it. But in the end, it’s that extra effort that makes it all worthwhile.

Then… there’s content, which is an entirely different conversation. When I’m leading a workshop I always try to encourage the group to think about how they are going to tell the story of their experience with their pictures. Not an easy subject, but one that revolves around basic storytelling techniques, and making sure you are getting the small detail shots, along with the obvious, larger landscape (establishing) shots. Basically I try to encourage the students to shoot everything, because when you’re back home piecing it all together, you’ll find it’s those little detail shots that really help you recreate the texture of the place.

Tea Picker

Here’s an example. Near the end of the India workshop, we boarded a slightly larger ferry for a crossing on the Kamalabari – Neamati line. That particular day was much hotter than our first ferry ride, and everything just seemed washed out to me. I couldn’t see pictures anywhere. But as we were crossing, a guy walked around the boat and handed everyone a pass. I stuffed mine into a pocket thinking I might keep it with my travel stubs and other memorabilia once home.

Ferry Pass

I kept this crumpled paper around on my desk for a few weeks while I was editing pictures, and finally found a way to photograph it and get it into the final gallery. Many of the photos have been updated with better processing, a few have been removed, and this one final detail shot helps me remember that crossing of the Brahmaputra.

The updated gallery can be seen here.

For those of you who are prepared for a real travel adventure, we still have a few seats available for this extraordinary photo workshop coming up again in late November 2015. Click here for details, and remember to get your India visa right away!

A Workflow Essay On DAM That Never Saw The Light Of Day . . .

Door Knocker

Venice 2014

Photographs © George A. Jardine

I was recently asked to write a piece for an organization on Digital Asset Management, and one of the requirements was that it be 400 words. I didn’t push back on this requirement, or even ask why. But it reminded me of negotiating with one of the fast-food style video education aggregators, when they told me that my videos should be no longer than 5 – 7 minutes, because “no one will watch a video longer than 5 minutes.” Which of course is utterly ridiculous. You get what you ask for… meaning, if you pander to an audience that isn’t looking for comprehensive video education, then you will get what you deserve: an audience that wants fast-food video, pre-digested, and then spit out with a barnyard sound.

Despite that limitation I thought a bit about what I wanted to write for this particular org, and decided it was time to share a little project I’ve been working on. It’s a bash script that automates the process of downloading photos, synchronizing camera clocks, adjusting timestamps, renaming files, and making backups. It was a fun and interesting project because doing projects like this forces me to formalize my thinking… or my process, meaning… organize it in such a way that it can be scripted, and then, taught.

Not an easy thing to describe in 400 words!

Anyway, that’s what I wanted to write about, so that’s what I wrote. And by cutting every corner imaginable—while trying to preserve at least the core of the idea—I ended up at 1100 words. Which didn’t fly.

OK, that’s understandable. I would have rejected it too. It’s vague, out-of-context, and incomplete. But I still feel that it should be published, so here is (almost word-for-word) what was submitted.

Workflow is hard.

Well, let me take that back. My workflow is not hard. It’s easy. In fact, it’s so easy that it’s essentially finished at the push of a button.

But let’s back up a little, just to be sure we’re all talking about the same thing. In this article, when I talk about workflow, I’m talking about your basic import workflow. How you get your pictures downloaded, organized, renamed, stamped with basic metadata and keywords, and backed up. In this article, I’m NOT talking about how you identify your best shots, or process them, or export them for your clients. That becomes a much more subjective process that is very difficult to formalize.

What I’m talking about here is the part that you can formalize. And that part consists of the repetitive steps you take (or should be taking) each and every time you sit down to download new photos. How do you go about formalizing a workflow? I admit, this is not the easiest part of the process, because it requires working backwards. You have to start at the end, and think about what you need to do to get to the finish line.

Venice 2014

This process takes time. You don’t just come to it one morning, and decide you can work out the steps. Workflows evolve, and you only come to a series of discrete workflow steps after doing it a bunch of times, and after making a lot of mistakes. To make things even more complicated, formalizing an import workflow requires taking into account a lot of disconnected, but interrelated pieces.

To start that process, I began by thinking about my goals: what are my requirements for the end game? After working on my own Library organization—as well as teaching it for a number of years—I know that I want my entire photo library organized chronologically, with a very specific folder naming routine. The thinking behind why I feel chronological organization works is way out of scope for this article, but I’ve written extensively about it here and here.

Venice 2014

I also have very well-defined ideas about what’s important in file names, which I’ve written about here and here. So basically, the timestamp (including YYYY MM DD HH MM SS + GMT offset) ends up as part of my filename sequence numbers. (After all, why dream up my own subjective sequence numbering system, when we already have a universal one?) Further, whenever I’m traveling I always record a GPS tracklog that I store with the photos, and I use that tracklog to geotag every photo that I shoot on location, which I’ve written a bit about here. I’m a bit fanatical about making sure my geotags are right on the money, and I also always shoot with at least 2, and sometimes 3 or 4 camera bodies when traveling. Shooting with more than one camera and expecting your photos to always sort chronologically by the one truly universal piece of metadata across ALL operating systems and file formats (the file name!) requires that all my camera clocks are accurate, as well as perfectly synchronized.

The difficult thing about camera clocks is that they all drift a little bit, and to make matters worse, they all drift at slightly different rates. Further, I absolutely hate trying to get them all synchronized to the second (almost impossible) for the geotagging and file naming before every trip. All of this has led me to a system where I never set my camera clocks for local time, but always leave all of them set to UTC. My system of making sure each camera’s clock is perfectly synchronized meant that I was adjusting the timestamps for every single outing anyway, so I might as well correct for the local time zone at the same step in the workflow, which I’ve written a bit about here.
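To make the arithmetic concrete, here’s a minimal sketch in pure bash (the numbers are invented for illustration) of the correction that falls out of keeping the cameras on UTC: the difference between what the camera recorded and the true local time bundles the zone offset and the clock drift into a single number.

```shell
# Convert HH:MM:SS to seconds since midnight.
hms_to_sec() {
  IFS=: read -r h m s <<< "$1"
  echo $(( 10#$h * 3600 + 10#$m * 60 + 10#$s ))
}

# One correction covers both the zone offset and the drift:
# (true local time) minus (what the camera's UTC clock recorded).
clock_correction() {
  echo $(( $(hms_to_sec "$2") - $(hms_to_sec "$1") ))
}

# A camera left on UTC reads 18:30:07 when the local (MDT) clock says
# 12:30:02 -- a -6 hour zone shift, minus 5 seconds of drift:
clock_correction 18:30:07 12:30:02   # prints -21605
```

That one number is all a script needs to shift every timestamp on the card.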

Venice 2014

Whew! Still with me? OK, that’s not an exhaustive list of my end game requirements, but I think it probably gives you the idea. To make it all work, I developed a system of creating and applying time zone offsets and clock corrections that can be made more or less “automatically” by simply photographing a synchronized clock on my phone or computer screen at the end of every camera card that I shoot.

The final piece of the puzzle was formalizing the series of steps required for import and backup as a bash (UNIX) script. I plug in a camera card, and fire up the script in a terminal window. The script opens the last photo on the card so that I can see the image, and asks me for two things: 1) the local time that I see in the photo, and 2) a folder name and destination for the final photos. The script then uses my system clock to work out where I am, and computes a single offset for each card download that combines the local time zone correction with that camera’s clock drift. It makes backup copies using rsync (with checksum verification), as well as calculating and storing checksums from the actual camera card during download. It renames the files, and puts them all where they need to be.
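The steps above can be sketched as a runnable miniature. This is not my actual script—cp stands in for the rsync backup step, the interactive questions become parameters so you can see the shape of it, and every name here is hypothetical:

```shell
# Import one card's worth of files into a destination folder,
# capturing a whole-file checksum manifest as they land.
import_card() {
  local card=$1 dest=$2
  mkdir -p "$dest"
  cp "$card"/* "$dest"/
  # Record checksums at download time, stored alongside the photos.
  ( cd "$dest" && cksum * > checksums.txt )
  echo "imported to $dest"
}

# Simulate a card with two frames, then import it.
card=$(mktemp -d)
dest=$(mktemp -d)/20140612_venice
printf 'raw1' > "$card/IMG_0001.CR2"
printf 'raw2' > "$card/IMG_0002.CR2"
import_card "$card" "$dest"
```

The real version would add the timestamp correction and the renaming pass before the files land in their final folder.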

Venice 2014

Capturing and storing the checksums directly from the camera card allows me to go back for verification at any time in the future, even after the files have migrated from drive-to-drive across multiple backups. This helps me detect bitrot and a host of other potential causes of corruption, for the entire raw file, not just the raw data, as DNG validation does. And, the checksums are captured at the one point in the workflow when I will visually inspect each and every frame shot, which is the one time you are most likely to see corruption and be able to identify its genesis.
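The capture-and-verify idea can be sketched with the POSIX cksum utility (my actual script uses rsync plus stored checksums; the file names here are invented):

```shell
# Work in a scratch folder standing in for a download destination.
demo=$(mktemp -d); cd "$demo"
printf 'raw-bytes-1' > IMG_0001.CR2
printf 'raw-bytes-2' > IMG_0002.CR2

# At download time: store a manifest of whole-file checksums with the photos.
cksum IMG_*.CR2 > checksums.txt

# Any time later, after the files have migrated drive-to-drive:
# re-run and compare. Any flipped bit changes the checksum.
if cksum IMG_*.CR2 | diff - checksums.txt >/dev/null; then
  echo "all files verify"
else
  echo "corruption detected"
fi
```

Because the manifest travels with the photos, the comparison works no matter which backup drive the folder eventually lives on.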

It’s all wrapped up in one simple bash script, but writing the script wasn’t the hard part. The hard part was looking at what I do each and every time I sit down to download photos. Formalizing the exact steps and sequence required to take me from A to B ensures that it happens in precisely the same way every time, and the exercise forced me to streamline, eliminating or correcting any flawed pieces of the process.

Venice 2014

Getting Smartmontools Up And Running On Mavericks . . .

When I teach workshops or consult with a photographer, the question invariably comes up: “What hard drive should I buy?” I always get stuck on this one, because even if you know a lot about an individual’s workflow and storage needs, there are no simple answers. In the long run I will probably use this article as a starting point for an extended series on the more general subject of photographers’ storage and backup. But until then, one of the very first things I think you should learn is how to test and monitor your existing devices.

Dried Corn

Photographs © George A. Jardine

(Please note that Smartmontools is available for Windows too, but I have not installed or tested it on any Windows machines. If there is enough interest I might try that and write about it. What follows is for Macintosh users.)

You’ve probably noticed in the Macintosh Disk Utility that most external hard drives show up as “Not Supported” under S.M.A.R.T. Status. At least until recently. Apple has heard the call, and is finally including native SMART status support for external Thunderbolt drives in Mavericks. I’m not sure yet if all flavors of Thunderbolt support SMART status, but the ones I’ve tested (Pegasus, G-Tech and OWC) do. On both Mountain Lion and Mavericks you can also get SMART status working on most external USB and Thunderbolt drives by installing a bit of software called the SATSMARTDriver, which can be found here: https://github.com/kasbert/OS-X-SAT-SMART-Driver

Please read the README on that page, and pay attention to any warnings or incompatibilities!

(If you are using Mavericks 10.9.2 or later, and you only have Thunderbolt drives, you don’t need to install the SATSMARTDriver to get the full benefit of Smartmontools.)

If you want to add this driver to your system to help it see supported USB drives (or both Thunderbolt and USB drives under Mountain Lion), locate the latest release build, which is currently the SATSMARTDriver-0.8.dmg. Click that link, and on the next page, clicking the View Raw link will download the Mac installer disk image.

Ridge Vineyard Composite

(Be sure to unmount your external drives before you install the SATSMARTDriver, and restart your Mac after installation.)

Once you’ve installed the SATSMARTDriver, what does that give you? Well, not much, if the only thing you’re using is Apple Disk Utility to check your drive’s SMART status. For USB and Thunderbolt drives that do support SMART status, the only bit of information you will get in Disk Utility is a “Passed”, or “Failing” message. This is not very helpful because by the time Disk Utility begins reporting that a drive is “Failing”, it is already pretty far gone.

So the next step is to get the complete report of your drive’s health directly from the drive itself. Which does give you a lot more information. There are at least a couple of apps for the Mac that can do this for you, but under the covers they are simply polling the drive with utilities from Smartmontools anyway, and presenting the report to you using some sort of GUI. You see, the actual report that comes from the drive is a bit obscure, and all these apps really do is filter and format it in various ways in an attempt to make it more intelligible. But none of them are very good, and so using Smartmontools directly from the command line seems like the way to go.

Smartmontools can be a bit tricky to get installed and running, but I think it’s worth it. I also want you to know right up front that Smartmontools is a command line utility, and if you don’t have any experience with the UNIX command line, it’s probably not the answer for you. But if you are familiar with a few UNIX basics and the command line, it’s not terribly difficult. I’ve installed Smartmontools on computers running various versions of OS X going back to Snow Leopard without any crashes or problems. Having said that, I also have to add here that attempting to install and run any command line utility means you are traveling at your own risk. Please don’t even start if you don’t have a fresh, bootable backup of your system (that you’ve tested), or if you don’t feel comfortable using the command line in the Terminal app.

So first, please back up your system. If you don’t know how to create and test a bootable backup of your OS, I would not proceed. Time Machine backups are good, but don’t exactly give you the tools to roll back an installation like this should you need to.

To get started, you first have to have the OS X “Command Line Tools” installed. Getting the command line tools installed is pretty seamless in 10.9.2, but it wasn’t all that easy in 10.9.0 or 10.9.1, and was completely different in Mountain Lion. So if you’re wanting to install Smartmontools on 10.8.x systems, maybe drop a comment here or write me and I’ll outline the differences for you. On 10.9.2 you can just open a Terminal window and type in anything that requires a command line tool, and the Terminal will bounce back a message asking you if you want to install them.

In this case I simply typed in “gcc” (without the quotes) and that did the trick.

Installing Command Line Tools

(GCC stands for GNU Compiler Collection, and typing this in is just a way to see if it’s there. This installation process is also documented in more detail here: https://railsapps.github.io/xcode-command-line-tools.html)

Clicking the Install button should do the trick. Once you read and click to agree to the EULA, the installer does everything for you. When that’s finished you can leave the Terminal window open. You’re going to need it again in just a minute anyway.

Next you want to download and install the Smartmontools utilities. Go to https://sourceforge.net/projects/smartmontools/files/ and click the text link that says “Looking for the latest version? Download smartmontools-6.2.tar.gz (791.4 kB)”. (It was 6.2 at the time of this writing. The actual version number and text might have been updated by the time you read this.) Once the smartmontools-6.2.tar.gz file has downloaded, double-clicking it will decompress a folder named smartmontools-6.2. You can do all this right in your Downloads folder if you wish, or decompress the tar file somewhere else. It doesn’t matter where the smartmontools-6.2 folder is for installation.

Now would be a really good time to read the README in the smartmontools-6.2 folder, and the INSTALL file, especially section G on MacOS/Darwin installation.

Now, it’s back to that Terminal window you left open. To install Smartmontools, you first have to “cd” (change directory) into the smartmontools-6.2 folder. Just leave the Finder window open (probably your Downloads folder window) where the un-zipped smartmontools-6.2 folder is. Then switch to the Terminal and type “cd ” (that’s just cd, with no quotes, plus a space). Don’t forget the space after the cd command. Then without switching to the Finder window, just roll your mouse over the smartmontools-6.2 folder and drag it into the Terminal window.

Installing Smartmontools

When you let go of the mouse, the Terminal builds the path for you. Then with the Terminal window active, press Return or Enter. If the smartmontools-6.2 folder was in your Downloads folder, that path might look something like this:

Installing Smartmontools

Next type in the following: “./configure” (That’s a period, a forward slash, and the word configure. No spaces, and no quotes. Just ./configure.) Then press Return. At this point you should see a lot of text going by. Just wait, and when it’s done you’ll be back to your UNIX prompt, which is probably your username followed by a $ (like georgej$) or just a simple $ dollar sign.

When you see your prompt again, type “make”. (just make, no quotes.) Then press Return.

Again, lots of text, and a few moments later you should be back at your $ prompt again. When you see the $ prompt, this time type in “sudo make install” (again…. no quotes), and press Return. sudo is a command that gives you “super-user” authority, and requires your computer password. So this time you’ll see a “Password:” prompt. Type in your password, but don’t let it throw you that you don’t see any bullets or any text when you’re typing, the Terminal does not give you any feedback at this point. Just type in your password, and hit Return.

Once all the text flashes by, and you’re back at your $ prompt, you’re almost ready. One last item. At this point, if you type in a smartctl command your Terminal will likely report back to you: “smartctl: command not found”. And that’s because the Smartmontools are not in your path… so to speak. So the last item is to update your path. At your $ prompt, type this, and then hit return: “export PATH=/usr/local/sbin:$PATH” (no quotes).

Now you should be able to use smartctl commands in Smartmontools. Unfortunately, making the path permanent is a bit tricky. You see, by adding /usr/local/sbin to your path with the export command, you’re only adding it to the current Terminal session, and to make it work every time that you need it requires one more step. I’ve added export PATH=/usr/local/sbin:$PATH to my profile file using TextEdit, and that process is outlined here.

Or, you can just update the PATH variable each time you need to use Smartmontools in a new Terminal window by typing export PATH=/usr/local/sbin:$PATH first. But getting it into your .profile file is pretty easy, and a good idea.

(If you don’t want to use a UNIX text editor, a quick and easy Terminal command to open the .profile file is “open -a TextEdit .profile”. But that won’t work until you have a .profile file! So that’s what the tech-recipes.com article is good for. Anyway, once you’re in mucking around with your .profile file, you might as well add this line too: alias profile='open -a TextEdit ~/.profile'. That line makes it easy to open your profile and edit it anytime by simply opening a Terminal window, and typing the word profile.)
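Putting those two pieces together, the one-time setup can be done entirely from the Terminal (this assumes your login shell is bash, which reads ~/.profile; the echo lines will create the file if it doesn’t exist yet):

```shell
# Make the Smartmontools path permanent for every new Terminal session.
echo 'export PATH=/usr/local/sbin:$PATH' >> ~/.profile

# The convenience alias for opening .profile in TextEdit.
echo "alias profile='open -a TextEdit ~/.profile'" >> ~/.profile
```

Open a new Terminal window afterwards and both the path and the alias will be in effect.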

If you want more information on how the PATH variable works, try this page.

To see what disks are attached to your system, type “diskutil list” (no quotes). Your disks will be shown as /dev/disk0 (that’s usually the boot disk…), /dev/disk1, /dev/disk2, and so on.

To get the SMART status from /dev/disk2, type in “smartctl -a /dev/disk2”

Now it’s up to you to learn how to interpret the data! I think the best way to do that is to spend a little time on the Wikipedia page for S.M.A.R.T. Status: https://en.wikipedia.org/wiki/S.M.A.R.T. Use the smartctl command on a few different brands of drive mechanism, and you’ll find that each manufacturer supports a slightly different list of SMART attributes. Down near the bottom of a SMART report you’ll find a block of text for “Vendor Specific SMART Attributes with Thresholds”, and that’s where the real beef is. The ones to watch are usually 5 (Reallocated_Sector_Ct), 10 (Spin_Retry_Count), 196 (Reallocated_Event_Count), 197 (Current_Pending_Sector), and 198 (Offline_Uncorrectable). Each vendor may have different names for these attributes.

Reading the Wikipedia page will give you a pretty good hint at which numbers are meaningful. And remember, each drive manufacturer uses different methods of displaying the values and thresholds, so if you spot anything other than a “0” in the RAW_VALUE column for any of the attributes listed above, then Googling that attribute along with “Seagate”, for instance, will usually help you find out what the numbers mean for your specific brand of drive.
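Once you’re comfortable reading the reports, a tiny wrapper saves some retyping. This is just a sketch—adjust the attribute list and the disk names to match what your drives actually report:

```shell
# Print only the attributes worth watching, for each disk given.
scan_smart() {
  for disk in "$@"; do
    echo "== $disk =="
    smartctl -A "$disk" 2>/dev/null | \
      grep -E 'Reallocated_Sector|Spin_Retry|Pending_Sector|Offline_Uncorrectable|CRC_Error'
  done
}

scan_smart /dev/disk1 /dev/disk2
```

Run it once a month or so, and a creeping RAW_VALUE will stand out long before Disk Utility says a word.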

Thai Monk

Why is it worthwhile installing Smartmontools and then digging into what the reports tell you? Well, because there’s a lot of useful information there! By monitoring when a drive starts reporting that more than a few sectors have been reallocated, or when you start to see any uncorrectable UDMA or CRC errors, it’s probably time to replace that mechanism. And trust me… that will be long before Disk Utility reports the drive as “Failing”.

Dear George … or, “How I learned To Love GMT, One More Time.”

Fisherman On The Andaman Ocean

Fisherman on the Andaman Ocean

Photographs © George A. Jardine

Yesterday I was presenting Lightroom at the APA Workshop for assistants here in town, and we had a pretty lively discussion about date stamps, time zones, and filenames. If you’ve read some of my previous postings on filenames or on time stamps, you already know that I have some pretty strange ideas on the subject.

As time has passed, I’ve become more convinced than ever that the best all-around sequence number in the universe is already there, right in front of us. Why invent a new sequence number, when the actual moment a photograph was taken is an easy-to-access, perfectly valid, universal one?

Collections of photos assembled to tell a story have their own logical sequence. Which is obvious, and needs to be preserved so that the collection can tell its story. That’s why we have collections. But collections aside, no matter how I choose to search or view the photos in my library, by text search, by keyword, by folder, or whatever, I always—and I do mean always—want to see the results of that search in chronological order. A text or keyword search is a way to narrow down a library into a manageable chunk, and after that your search becomes visual. In a visual search, shoot chronology gives a meaningful context to the results. And my file naming gives me that context—both in the catalog, and in the file system.

That “in the file system” part of the criteria is important for legacy and archival reasons that many photographers don’t get when they ask, “Why not just sort by capture time, in the catalog?”

So, I have a bunch of reasons why the actual time stamp is the basis for the filename sequencing in my photo library. The only fly in the ointment is if you’re shooting while actually crossing time zones.

When I returned home after the workshop, I found this in my mailbox:

Dear George,
What do you do with the capture time when you cross ‘backwards’ in time to a new zone. If the time on the camera is changed to the new time zone and the files are named using the capture time they will not appear in the sequence in which they were taken on that day. For example, we travel from Australia to Hawaii every year for holidays. When we go there we cross the International Date Line and arrive there at an earlier time on the same day that we left Australia. Obviously if the date in the camera is changed to US Pacific time, the early photos taken on arrival in Honolulu will sort ahead of those taken just before we left Sydney.

Do you have a technique for dealing with this?

Since I made my New Year’s Resolution last year to always keep my cameras set to GMT, that’s what I’ve been doing, and it seems to be working out pretty well so far. (I haven’t lost weight, but at least I’ve been sticking to that one resolution.) Admittedly, I also have not been traveling as much either, so that one resolution probably hasn’t been tested well enough in the long term to prove bomb-proof. And that little thing about ‘what happens when you cross time zones while shooting’… has been in the back of my mind too, but so far, it hasn’t come back to bite me in the butt.

Long story short, this e-mail question from a customer caused me to start thinking a bit harder about the problem, and I began writing back a long thing about how ‘what you really want is something that sorts by YOUR internal time clock… not the local time!’… or some crap like that—essentially dodging the question. It was when I got there… that I realized he was right! What’s truly needed is a way to sort by the actual moment the photos were taken.

Why not encode GMT into the filenames, rather than the “corrected” local time? The solution turned out to be pretty easy: it’s back to GMT again.

Weighing A Valuable Idea

When I made my New Year’s Resolution, I started keeping all my cameras set to GMT. When I got home from a shoot, I would first adjust the time stamp to the local time, and then create the new filenames using the adjusted time stamp.

That process gave me filenames that look like this:


(I also encode the time zone abbreviation into the filenames, because without that, the actual time stamp is meaningless… something I wish the EXIF committee would wake up to. The original filename is also appended to the new filename for legacy reasons, and… to give me an accurate sorting sequence when I’m shooting more than one frame-per-second with a motor drive.)
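In bash terms, the naming scheme amounts to nothing more than string concatenation. A hypothetical frame (none of these values are from my actual library, and your separator characters may vary) would come out like this:

```shell
# timestamp + zone abbreviation + original camera filename
ts="20140612-183007"   # YYYYMMDD-HHMMSS, pulled from the EXIF capture time
zone="MDT"
orig="IMG_4312.CR2"

newname="${ts}-${zone}-${orig}"
echo "$newname"   # 20140612-183007-MDT-IMG_4312.CR2
```

Because the timestamp leads, a plain alphabetical sort of these names is a chronological sort, in the Finder and in the catalog alike.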

The solution is to create the filenames using the GMT time stamp, and only after that change the time stamps in the catalog. This workflow gives me the best of both worlds: a filename that sorts chronologically no matter how or where I view it—just as before—and… a time stamp in EXIF that allows me to view and sort by that, if I need to (at least within the catalog…).

This process gives me filenames that look like this:


Which is exactly the same, only with the 6-hour difference between GMT and MDT added back in. If I need to see a time stamp reflecting local time, I look at the Info Overlay or the EXIF anyway. This filename only serves as a sorting mechanism, so it’s all the same. And next time I cross a time zone shooting pictures, they will still sort “chronologically”, at least according to my clock. 🙂

So thanks to Geoff for sending in that question. It’s that kind of carefully thought out and crafted question that sometimes causes me to rethink things, and maybe even come up with a better solution for a problem. And in the end, questions like these also show me which customers out there are truly paying attention, and make it all worthwhile.

Misdirection For Using GPS Tracklogs And Lightroom’s Time Zone Offset Feature…

GPS: 46°2’42” N 9°21’23” E

Looking Into Switzerland

Photographs © George A. Jardine

Getting comfortable using GPS when I travel and shoot has been creeping up on me for several years now. It is a slightly peculiar beast, and putting your finger on the exact purpose GPS serves in photography can be a bit tricky. Combine that with the somewhat obscure nature of how timestamps work, and you’ve got a subject you can sink your teeth into. (And just as quickly, you can get it wrong. Which, unfortunately, is what I found today in a tutorial on the esteemed AdobeTV.)

Ever since the early versions of Lightroom, we’ve had an innovative feature that linked GPS metadata to Google Maps, and it was that feature that first prompted me to purchase a GPS unit and start working with it. But you still had to encode your images with GPS metadata before you could do anything with it. At the time, geocoding photos was not nearly as easy as it is today. Looking around for a little help, one of the engineers on the Lightroom team recommended that I try a program from Houdah Software, and that got me moving in the right direction. (HoudahGeo is still my favorite way to pull .gpx tracklogs from my Garmin GPS unit, which, surprisingly, Lightroom does not yet do.)

GPS: 39°41’49” N 104°58’9″ W

Early Frost

Early Frost

With Lightroom 4 we now have the Map module, which adds its own special twist to the mix. The Map module lets you drag photos directly onto the map, embed GPS metadata, reverse geocode location information, filter and select photos within any visible map area, and all sorts of other cool tricks. And in general, they’ve made it all pretty easy to use.

But there is one small fly in the ointment. When I first started sniffing down this path, I was having a bit of trouble figuring out how to use Lightroom’s Time Zone Offset feature. And it goes back to that timestamp thing. The heart of the problem is that a timestamp is just a timestamp, and can only tell you what time a photo was shot, in some local time. Most cameras do not store the time zone they were in at the moment of exposure. And if they do, it’s stored in a proprietary metadata tag that is not readable by most software. So the timestamp is completely ambiguous.


From what I understand, this incredibly inelegant situation is not the fault of Adobe software engineers, or of Japanese camera manufacturers, or anyone else you might think to point your finger at. But rather, timestamps are what they are because the EXIF specification simply does not allow for a local time zone entry!

Some cameras do allow you to set a time zone on the menus, but really that’s just a red herring. There is still no context recorded in camera metadata to give the timestamp meaning. (Nothing in EXIF, anyway, that is used by Lightroom. See Manfred’s comment below.)

GPS units take a slightly different tack on the problem. They ignore the crucial idea of context too, but they do that because they can. GPS units simply record all timestamps as if they were in one time zone that never changes: UTC. (UTC = Coordinated Universal Time. Same as GMT or Zulu time, whatever you want to call it.) Sure, it’s true that you can set a time zone on your GPS unit, but that’s just so that you can read a local time on the GPS display that makes sense. This time zone setting has absolutely nothing to do with how the GPS understands where you are, or how it records that position into the tracklog. It simply records everything in UTC. Which makes sense. After all, it is a global system.
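To make that concrete, here’s a small Python sketch (the timestamp and time zone are made up purely for illustration). A naive EXIF timestamp means nothing on its own; only once you attach the camera’s time zone can it be expressed on the UTC scale that the GPS tracklog uses.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical example: a photo shot at 14:32 local time in a UTC+8 zone.
# The EXIF timestamp records only the local wall-clock time, with no zone.
exif_timestamp = datetime(2012, 4, 15, 14, 32, 10)  # naive, ambiguous

# Only once we supply the missing context (the camera's zone, which is
# NOT in the EXIF) can we express the moment in UTC, the scale the GPS
# tracklog is recorded in.
camera_zone = timezone(timedelta(hours=8))  # assumed, user-supplied
moment_utc = exif_timestamp.replace(tzinfo=camera_zone).astimezone(timezone.utc)

print(moment_utc)  # 2012-04-15 06:32:10+00:00
```

Two different naive timestamps in two different zones can describe the same instant, which is exactly why software can’t match photos to a tracklog without being told the zone.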

GPS: 42°56’45” N 122°10’9” W

Crater Lake

Crater Lake, OR

Given all of that, the problem should be coming into a bit better focus now. In order to match up the timestamps created by your camera to the timestamps recorded by your GPS unit, your computer needs more information. It needs to know what the timestamps in your photos mean, or… put another way, it needs to have the context that comes from the time zone.

HoudahGeo has a very straightforward way of obtaining that bit of info. The moment you try to load any photos into it for geocoding, it pops up a small dialog and simply asks for it. It also provides a starting point (an assumption), by looking at your computer clock, and pulling the local time zone from there. And it puts the assumption right in front of you for your approval. The text in the dialog says “Camera time zone:”, and there’s a pop-up menu, conveniently set to your local zone. Or whatever time zone your computer is currently set to.

GPS: 36°53’7” N 104°25’59” W


Not My Motorcycle!

So it’s unmistakable. HoudahGeo is asking you to verify the time zone of the photos you’re loading, the moment you try to do anything with it. There’s no escaping it. If you’re sitting in your hotel room in China, and you’ve been diligent enough to set your computer clock to the local time zone, (and… you’ve set your camera’s clock to the correct local time, before you started shooting) then geocoding is a slam dunk. You just load the photos, then load up the .gpx tracklog from your GPS device, and HoudahGeo matches them up for you, calculating the offset from the unit’s GMT-based timestamps to your photos’ ambiguous EXIF timestamps.
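That matching step is simple enough to sketch in a few lines of Python. This is not HoudahGeo’s actual code, of course, just a toy illustration with made-up coordinates and timestamps: convert the photo’s timestamp to UTC using the zone you supplied, then find the tracklog point whose timestamp is nearest.

```python
from datetime import datetime, timedelta, timezone

# Toy tracklog: (UTC timestamp, latitude, longitude) tuples, as a GPS
# unit would record them. Coordinates are invented for illustration.
tracklog = [
    (datetime(2012, 4, 15, 6, 30, tzinfo=timezone.utc), 39.6969, -104.9692),
    (datetime(2012, 4, 15, 6, 33, tzinfo=timezone.utc), 39.6971, -104.9688),
    (datetime(2012, 4, 15, 6, 36, tzinfo=timezone.utc), 39.6975, -104.9680),
]

def geotag(exif_local: datetime, camera_zone_hours: int):
    """Match a naive EXIF timestamp to the nearest tracklog point,
    after the user supplies the camera's time zone offset from UTC."""
    shot_utc = exif_local.replace(
        tzinfo=timezone(timedelta(hours=camera_zone_hours))
    ).astimezone(timezone.utc)
    return min(tracklog, key=lambda point: abs(point[0] - shot_utc))

# Photo shot at 14:32:10 camera time, camera running on UTC+8:
_, lat, lon = geotag(datetime(2012, 4, 15, 14, 32, 10), 8)
print(lat, lon)  # 39.6971 -104.9688  (the 6:33 UTC point is nearest)
```

Everything hinges on that one user-supplied zone; get it wrong by an hour and the photo lands on the wrong tracklog point.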

But then we come to Lightroom. Once you’re in the Map module and you’ve loaded up your tracklog, if you’re still in the same time zone that the photos’ timestamps are in, Lightroom’s Auto-Tag feature will work perfectly for you. My trouble with it is that there’s not a hint anywhere that Lightroom is making the same assumption as HoudahGeo, but that’s exactly what it does.

Danger, Will Robinson

Now, I agree that the software has to start somewhere. But there are two major problems here. First, Lightroom doesn’t give you a clue that it’s making an assumption about the time zone, or that you might need to match these two things up. (Especially for beginners, that’s a problem. HoudahGeo asks you to verify the camera time zone every time you use it.) Second is that, in general, when I’m traveling and shooting, I don’t want to be thinking about GPS metadata and time zones. I want to think about those things once I’m back home.

After doing a bit of thinking on the subject of timestamps, and after polling dozens of my friends and customers who regularly shoot with GPS, I found that I was not the only one. Most photographers would rather worry about this stuff when they get home. Which means that Lightroom’s assumption about where to get a useful time zone will nearly always be wrong.

This means that if you want Lightroom to auto-tag your photos to a tracklog, you’re going to have to tell Lightroom where the photos were taken relative to the computer’s current time zone. Thus, the Time Zone Offset feature, which is not exactly self-explanatory. (Also confirmed by my poll.)
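In plain arithmetic, here’s a hedged sketch of how I understand the offset: the difference, in hours, between the zone the photos were shot in and the zone your computer is currently set to. The zone numbers below are hypothetical examples, so double-check your own zones (and the sign convention, in the video linked at the end) before relying on this.

```python
# Sketch of the arithmetic behind the Time Zone Offset, as I understand
# it: hours between the photos' zone and the computer's current zone.
# Zone values here are hypothetical examples, not a recommendation.
def time_zone_offset(camera_utc_offset: float, computer_utc_offset: float) -> float:
    return camera_utc_offset - computer_utc_offset

# Photos shot in China (UTC+8), computer back home in Denver (UTC-7, MST):
print(time_zone_offset(8, -7))   # 15
# Photos shot in London (UTC+0), computer in New York (UTC-5, EST):
print(time_zone_offset(0, -5))   # 5
```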

Now, I’m not writing this to point fingers at the UI designer or the engineers. There are lots of other aspects of Lightroom that will ensure my job security as an educator. But I’m writing it because just this morning I watched one too many video tutorials that got it so utterly wrong, I couldn’t stop myself.

GPS: 38°48’51” N 115°17’46” W

Desert Love

Desert Love

This feature is not that complicated, and it deserves to be understood because it’s incredibly useful. It’s just been hindered by a really bad user interface, and further obscured by tutorial jockeys who won’t take the time to do a little research. The easy way to think about it is that you have to tell Lightroom how many hours apart the time zone of the photos is (or was) from the time zone your computer clock is currently set to, before auto-tagging will work. And rather than show you a step-by-step here on the blog, I’ve taken six minutes out of tutorial #7 from my Catalog Management series, which shows you exactly how to do it.

It’s a free video, and you can watch it by clicking here.