Making the Desert Glow

Get the 5DayDeal: Save 96% on this amazing bundle of tutorials from top instructors. And when you purchase using the link on this post, you’ll also get a free copy of my new Norway: From Start to Finish course (enrollment in the bonus course will be complete by Oct 22 after the 5DayDeal has ended). Learn more about the bundle here.


Some of my favorite shots come from the most unexpected moments, like this long-exposure image taken long after sunset. Often, if you’re patient, you’ll get a second chance at color: the original sunset fades, and then there is a late burst of color. It may be vibrant or somewhat subtle, but it often creates some of my favorite soft images.

When I realized this was going to happen, I raced to set up another shot with a 2-minute exposure to help ensure smooth clouds. The RAW file looked a bit flat and the color was weak, but with the right processing, dodging & burning, and a few other little tricks, I was able to extract an image that lived up to the awe of the moment, taking in the last light of that gorgeous day.

 

I rarely endorse other products and only do so when I think you would thoroughly enjoy them. When you purchase through my link, you will receive my Norway: From Start to Finish course and support me with an affiliate commission at no cost to you, as well as helping fund some great charities.

Don’t let this hidden setting RUIN your RAW smart objects

Camera RAW Smart Objects are hands-down one of the best features of Photoshop. If you don’t know why, first check out my previous tutorials: 3 Kinds of Smart Objects and 3 Common Misconceptions. The beauty of these special smart objects is that they always give you access to alter the RAW processing while keeping the highest-possible quality… unless you overlook this one critical (but hidden) setting.

If you have a RAW Smart Object embedded in an image set to 16-bit and ProPhoto RGB, you’d assume that’s what the RAW would give you. After all, you can output any RAW file with those settings. But that’s not the whole story. It is true that you will get a 16-bit, ProPhoto RGB layer rendered from your RAW Smart Object. That’s true even if you were in another color space or bit depth, as the layer is reprocessed as needed.

The problem is that Camera RAW Smart Objects contain their own color space and bit depth. Without them, you wouldn’t get a proper preview, RGB readings, or accurate histograms and clipping warnings inside ACR. More importantly, these internal settings are applied no matter what the settings are in your document: your ACR settings are applied first, and then the layer is converted to your document’s settings externally. So if your RAW is set to, say, sRGB and 8-bit, the layer will be rendered with those settings FIRST and then converted to, say, the 16-bit ProPhoto RGB of your document. So yes, you will technically have the requested settings, but it’s just a conversion from a smaller color space and bit depth to a larger one. You’ve already lost a LOT of quality.
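To make the cost concrete, here’s a toy numeric sketch (mine, not Photoshop’s actual pipeline) of why rendering through 8-bit first throws away tonal data, even if the layer is later converted up to 16-bit:

```python
# Toy sketch: a full 16-bit tonal ramp (0..65535)
gradient_16 = list(range(65536))

# Rendering through 8-bit first collapses the ramp to 256 levels...
via_8bit = [v // 257 for v in gradient_16]   # 16-bit -> 8-bit (65535 / 255 = 257)

# ...and converting back up to 16-bit cannot restore the lost tones
back_to_16 = [v * 257 for v in via_8bit]     # 8-bit -> 16-bit

print(len(set(gradient_16)))  # 65536 distinct tones in the original
print(len(set(back_to_16)))   # only 256 distinct tones survive the round trip
```

The same logic applies to the color space: converting from a smaller gamut like sRGB up to ProPhoto RGB cannot recover colors that were already clipped.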

To preserve full quality, your Camera RAW Smart Object should use settings which are as good as or better than the document’s color space and bit depth. Ideally these would be the same settings, but you probably won’t see much difference if your RAW is set to ProPhoto inside an Adobe RGB image.

To avoid problems, there are some settings to check and update in both Photoshop and Lightroom. It’s important to check both, as you can get different results with different workflows. If you use Lightroom’s Edit in / Edit as a Smart Object workflow, you’ll open an image using the settings from Lightroom. If you instead open the image directly in Photoshop (basically any other method which invokes the ACR interface when opening the image), then the Photoshop settings will be applied.

Lightroom settings:

  • Go to Preferences / External editing
  • Change the color space to either Adobe RGB or ProPhoto RGB
  • Change the bit depth to 16 bits

Photoshop settings:

  • Go to Preferences / File Handling / Camera RAW Preferences / Workflow
  • Change the color space to either Adobe RGB or ProPhoto RGB **
  • Change the bit depth to 16 bits

** Note that Photoshop (unlike Lightroom, unfortunately) lets you choose any color space to render your Smart Object. So you could use something like Beta RGB or Rec. 2020 if you prefer. However, you should be aware of a limitation with color spaces in ACR: they are not embedded, just referenced by name. So if you choose something non-standard and open the image on another computer where that ICC profile is not installed, you could run into an issue. The image will be fine initially. However, if you double-click into the Smart Object to edit it and its previous profile is not available, ACR will substitute your default Photoshop preference without warning (this applies to the color space, not bit depth). So simply opening ACR and clicking OK could convert from Beta RGB to Adobe RGB (or whatever is set as the default on that computer).

When you open an image, whichever defaults above apply (depending on how you open it) will be set inside the Smart Object as well as on the document. So checking your document settings at import is a quick way to confirm that things worked internally as expected. You can change those settings afterward, but they will match when the RAW is opened for the first time.

The good news is that if you didn’t know about this before, you can still fix your existing work (as long as you haven’t rasterized the Smart Objects). If you have the wrong settings inside the Smart Object, you can update them anytime. For example, if the Smart Object was internally set to 8-bit, you can switch it internally to 16-bit and get back the lost data. Just double-click the Smart Object, click the text link at the bottom showing these details, update as desired, and click OK to save the Smart Object.

How much can you improve an old edit?

If you want to improve your photography, one great way to do that is to review your old images and reprocess them. Starting from scratch can reinforce important lessons as you compare the new version to the old; you might be surprised how hard it can be to recreate some of your best work. Or you can take the old image and try to improve it from where it is. I find this latter approach very powerful, as it gives me a way to keep the look and feel of an image as I improve it. This allows me to easily improve an image for print while still delivering a result that’s fully consistent with the client’s expectations. It also helps me learn better ways to correct problems after the fact, which can save time later instead of redoing an image.

I processed this image 7 years ago, so naturally there are a number of details I would process differently now as I’ve grown more skilled as an artist. I still enjoy the image, but upon close inspection feel that the following could be improved:

  1. The sky shows a fair bit of haze, which I’d like to minimize.
  2. The flower highlights are a bit blown out. In keeping with the very high contrast ratios here, I don’t need to restore everything, but I do think some improvement is warranted.
  3. The reflection of the flowers shows HDR artifacts. I processed this back when I was still relying on HDR much more than exposure blending with luminosity masks, and HDR often shows such “ghosted” results when water creates movement from one exposure to the next. There are also some artifacts around the same reflected flowers from my D800, which show as lines extending from some of the highlights in the water. That wasn’t a common issue for me with the D800, but it certainly didn’t handle dark skies nearly as well as my D850.
  4. Some of the trees lit with bright yellow behind the flower are a bit too hot. The color separation is good, but the light source isn’t as obvious as the neon flowers and I think they could stand to be slightly dimmer so as not to compete with the main subject.
  5. The ambient lighting under the trees to the far left and right was constantly changing colors and not synchronized. There are two better options here. I could make the trees on the left purple to match the colors on the right, which would emphasize the character of the long walk through this park. Or I could make the trees on the right green so as to further emphasize the main subject. I think they’re both great approaches, but I’m going to go with the latter.

Here’s the approach I used to address each of the issues:

  1. I already had a darker exposure in my old image, so I can just mask more of it into the sky. I created a luminosity selection specific to the sky area to help me paint white onto the existing mask. The selection needed to target dark areas of the sky, so I used the Quick Selection tool to target the sky roughly, clicked D for a darks luminosity preview, and then feathered the quick selection slightly when clicking “Sel” to help ensure a smooth transition at the edges.
  2. The flowers needed some exposure blending with a new exposure I imported from my original shoot. This is pretty consistent with other blends I’ve demonstrated, but in this case there’s a twist: my processed image is a few pixels smaller than the new source material and I’m importing a Smart Object, so I had to manually align the new source. When adding the new layers with “PreBlend”, I just checked “check alignment (difference)” to put the layers into a blend mode that would make it easy to align. Just activate the move tool and press the arrow keys to nudge the layer pixel by pixel until the result looks as dark as possible (difference blend mode generally shows very dark when things are perfectly aligned). Then just create a lights luminosity selection and start painting on the mask to reveal the improved flower details.
  3. The reflection is a bit different. It’s more of a local replacement than a blend because the working image has those artifacts (HDR ghosting and the lines in the RAW). I grabbed a source image which had both good details and a shutter speed that rendered the water in an ideal way. In addition to processing for details, I also needed to remove the camera artifacts. By using strong noise reduction in ACR and negative texture coupled with a boost in clarity, I was able to nearly eliminate the D800 artifacts and generate a great-looking reflection. As the water in both versions matches and lacks detail, I simply used a soft white brush to reveal the better layer.
  4. The background trees are also a little different. In this case, I wanted to darken the yellow trees without darkening the green/magenta/white flower, so I just clicked “Color” to make a selection based on yellows and brushed through it to reveal the darker trees. I added a BlendIf to target the lightest pixels. I don’t typically use BlendIf for blending, but it works just fine in this case. I then reduced opacity so that the result was a subtle correction.
  5. The tree color is a bit tricky. You could use an HSL adjustment, but I found better results by just replacing the color. I added a solid fill layer with the desired hue and saturation and set it to color blend mode. I then used “Color” to create a Blue/Magenta mask and then added an additional mask to target only the blue/magenta colors in the trees on the right side of the image.

Photoshop channel math: add, subtract, intersect

Once you’re getting the hang of luminosity masks, there are several ways you can combine them to make even more powerful masks and selections. Specifically, I’m referring to adding, subtracting, and intersecting selections, masks, or channels in Photoshop. In this tutorial, we’ll cover why you should use them and demystify how they work.

All of them work on the same principle: start from your current selection/mask/channel and then modify it with another in order to produce a more targeted result. To keep the discussion simple, I’m simply going to refer to “selections” for the rest of the article, but the concepts apply equally to masks and channels. 0% selected is the same as black (0.000) in a mask/channel and 100% selected is the same as white (1.000) in a mask/channel.

 

Subtracted selections

These allow you to remove something from your current selection in a proportional way (ie, the pixels targeted by this selection but NOT that one). This is a particularly powerful tool for enhancing shadow detail. If you take a brightness adjustment layer and start painting a mask through a D3 selection, you will definitely lighten the shadows. But you will be brightening the blackest detail more than the other shadow values, which will result in a muddy, low-contrast mess. What you really want to do is leave pure black alone and instead brighten only the slightly brighter dark tones. You can subtract a more restrictive darks selection such as D5, which will exclude the pure blacks and give you exactly what you need. Now when you paint through it, the shadow detail will be brightened, but without reducing contrast by lightening the pure blacks.

The concepts here are pretty simple when you’re dealing with areas which are fully selected or protected, but much less obvious when you start to consider all the partial values in between, which is the nature of all luminosity masks. If you subtract a 100% selection from anything, the result will be 0% selected (or black in the channel/mask). What confuses people is that this isn’t just a simple subtraction of two numbers. If you subtract a 40% selection from a 50% selection, the result is not 10% selected, it’s actually 30%. The way Photoshop thinks about this is that you are starting from a 50% selection and then going 40% of the way from there to 0% (black).

To show you the actual math, let’s first define a few terms:

  • currentVal = the current value of your selection, mask, or channel
  • modifierVal = the value of the selection you wish to add to, subtract from, or intersect with your current one
  • The math is based on scalars: values ranging from 0.000 (black) to 1.000 (white), which work like percentages (ie, 0.535 means 53.5%). If you are reviewing the Info panel, you will only see the correct values if you switch the display to show grayscale (K) values as 32-bit values.
  • The math is done directly on these grayscale values. It does not matter which color space, working space, or bit depth your image uses; this is the correct way to view the mask/channel values if you want to understand how add, subtract, and intersect work (ie, Photoshop internally uses conceptually similar values, regardless of bit depth or how you choose to view the numbers).

Subtract => currentVal – modifierVal * (currentVal – black) => currentVal – modifierVal * (currentVal – 0.000)
Which can be simplified to: Subtract => currentVal – modifierVal * currentVal
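If it helps to see this as code, here’s a minimal Python sketch of the subtract formula (the function name is mine; Photoshop exposes no such API):

```python
def subtract_selection(current_val, modifier_val):
    """Move modifier_val of the way from current_val toward black (0.0)."""
    return current_val - modifier_val * current_val

# Subtracting a 40% selection from a 50% selection yields 30%, not 10%
result = subtract_selection(0.50, 0.40)
print(round(result, 3))  # 0.3
```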

To learn more about subtracted selections, be sure to check out my previous tutorials.

Added selections

This allows you to increase the areas targeted by your selection (ie, the pixels in this selection OR that one). Most likely, you’d be using this to load two alpha channels for different subjects in order to work on both at the same time. For example, you may have selected each building in a cityscape individually for control but need to work on all of them together at some point. Generally speaking, you’re much less likely to use addition for luminosity selections than the other methods. But you might occasionally want to do something like target both zones 4 and 5 at the same time.

The way Photoshop thinks about this is that you are going from the current value some percentage of the way towards white. So adding a 40% selection to a 50% one would take you 40% of the way from 50% to 100% (white), which is 70%.

Add => currentVal + modifierVal * (white – currentVal)
Add => currentVal + modifierVal * (1 – currentVal)
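Here’s the same idea as a Python sketch (again, the helper name is mine, purely for illustration):

```python
def add_selection(current_val, modifier_val):
    """Move modifier_val of the way from current_val toward white (1.0)."""
    return current_val + modifier_val * (1.0 - current_val)

# Adding a 40% selection to a 50% selection yields 70%
result = add_selection(0.50, 0.40)
print(round(result, 3))  # 0.7
```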

 

Intersected selections

This allows you to restrict the areas targeted by both selections (ie, the pixels common to both this selection AND that one). This is particularly helpful for refining based on two criteria, such as pixels which are both L2 and yellow, so as to separate the bright yellow building from the bright blue sky. Or for choosing pixels which are both D4 and inside the general area you just targeted with a lasso selection. This is conceptually similar to working with group masks, but lets you do everything in a single step when that’s your preference.

The math here is a much simpler multiplication. So intersecting a 40% selection with a 50% one gives you 40% of 50% or 20%.

Intersect => currentVal * modifierVal
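And as a Python sketch (hypothetical helper name once more):

```python
def intersect_selection(current_val, modifier_val):
    """Keep only what both selections target: a straight multiply."""
    return current_val * modifier_val

# Intersecting a 40% selection with a 50% one gives 20%
print(intersect_selection(0.50, 0.40))  # 0.2
```

This is the same math as the Multiply blend mode, which is why intersecting with a fully white selection changes nothing and intersecting with black gives you nothing.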

 

How to Blend Exposures for Interiors

Now through June 30: Use discount code SUMMERSALE for 25% off my Master Courses (or bundles with Lumenzia) to get the software and training to help master your own exposure blends.

 

I’m trying something new this week. I get a lot of requests for longer or more detailed tutorials than I can typically squeeze into a 15-minute YouTube video. So I’ve built out a more complete tutorial in which you’ll learn:

  • How to blend multiple exposures in dynamic light
  • How to customize and use some of the most advanced luminosity selection techniques involving color, subtraction, or restricting to specific areas such as windows
  • How to dodge and burn with your exposures
  • How to use the new four-sided blur borders with Web Sharp Pro

The images for this tutorial were provided by the talented Garey Gomez. He’s also an instructor, so be sure to check out his tutorials on the art and business of real estate photography.

Greg Benz Photography