Don’t let this hidden setting RUIN your RAW smart objects

Camera RAW Smart Objects are hands-down one of the best features of Photoshop. If you don’t know why, first check out my previous tutorials: 3 Kinds of Smart Objects and 3 Common Misconceptions. The beauty of these special smart objects is that they always give you access to alter the RAW processing while keeping the highest-possible quality… unless you overlook this one critical (but hidden) setting.

If you have a RAW Smart Object embedded in a 16-bit, ProPhoto RGB image, you’d assume that’s what the RAW would give you. After all, you can output any RAW file with those settings. But that’s not the whole story. It is true that you will have a 16-bit, ProPhoto RGB layer rendered from your RAW Smart Object. That’s true even if you were previously in another color space or bit depth, as the layer is reprocessed as needed.

The problem is that Camera RAW Smart Objects contain their own color space and bit depth. Without them, you wouldn’t get a proper preview, RGB readings, or accurate histograms and clipping warnings inside ACR. More importantly, these internal settings are always applied, no matter what the settings are in your document. Your ACR settings are applied first, and then the layer is converted to your document’s settings externally. So if your RAW is set to, say, sRGB and 8-bit, the layer is rendered with those settings FIRST and then converted to, say, the 16-bit ProPhoto RGB of your document. Yes, you will technically end up with the requested settings, but it’s just a conversion from a smaller color space and bit depth to a larger one. You’ve already lost a LOT of quality.
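To see why that intermediate conversion is destructive, here’s a minimal numeric sketch (my own illustration, not Photoshop’s actual pipeline): a tonal value routed through an 8-bit intermediate keeps only 256 levels, no matter what bit depth it’s converted “up” to afterward.

```python
# Quantization sketch: round a 0.0-1.0 tonal value to the nearest
# of `levels` evenly spaced steps (256 for 8-bit, 65536 for 16-bit).
def quantize(value, levels):
    return round(value * (levels - 1)) / (levels - 1)

tone = 0.123456  # some smooth gradient value from the RAW

direct_16bit = quantize(tone, 65536)                  # rendered straight to 16-bit
via_8bit = quantize(quantize(tone, 256), 65536)       # through an 8-bit intermediate

# The 8-bit detour snaps the value to the nearest 1/255 step, and
# converting to 16-bit afterward cannot restore the lost precision.
print(abs(direct_16bit - tone) < abs(via_8bit - tone))
```

The same principle applies to the color space: once the gamut is clipped to sRGB, converting the layer to ProPhoto RGB cannot bring the clipped colors back.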

To preserve full quality, your Camera RAW Smart Object should use settings which are as good as or better than the document’s color space and bit depth. Ideally these would be the same settings, but you probably won’t see much difference if your RAW is set to ProPhoto inside an Adobe RGB image.

To avoid problems, there are some settings to check and update in both Photoshop and Lightroom. It’s important to check both, as you can get different results with different workflows. If you use Lightroom’s Edit in / Edit as a Smart Object workflow, you’ll open an image using the settings from Lightroom. If you instead open the image directly in Photoshop (basically any other method which invokes the ACR interface when opening the image), then the Photoshop settings will be applied.

Lightroom settings:

  • Go to Preferences / External editing
  • Change the color space to either Adobe RGB or ProPhoto RGB
  • Change the bit depth to 16 bits

Photoshop settings:

  • Go to Preferences / File Handling / Camera RAW Preferences / Workflow
  • Change the color space to either Adobe RGB or ProPhoto RGB **
  • Change the bit depth to 16 bits

** Note that Photoshop (unlike Lightroom, unfortunately) lets you choose any color space to render your Smart Object. So you could use something like Beta RGB or REC2020 if you prefer. However, you should be aware of a limitation with color spaces in ACR. They are not embedded, just referenced by name. So if you choose something non-standard and open the image on another computer where that ICC profile is not installed, you could run into an issue. The image will be fine initially. However, if you double-click into the Smart Object to edit it and its previous profile is not available, ACR will silently fall back to your default Photoshop preference (this applies to the color space, not bit depth). So simply opening ACR and clicking OK could convert from BetaRGB to Adobe RGB (or whatever is set as the default on that computer).

When you open an image, whichever defaults above apply (depending on how you open it) will be set inside the Smart Object as well as on the document. So checking your document settings at import is a quick way to confirm that things worked internally as expected. You can change the document settings afterward, but the two will match when the RAW is first opened.

The good news is that if you didn’t know about this before, you can still fix your existing work (as long as you haven’t rasterized the Smart Objects). If you have the wrong settings inside the Smart Object, you can update them anytime. For example, if the Smart Object was internally set to 8-bit, you can switch it to 16-bit and the RAW will be reprocessed to recover the lost data. Just double-click the Smart Object, click the text link at the bottom showing these details, update as desired, and click OK to save the Smart Object.

How much can you improve an old edit?

If you want to improve your photography, one great way to do that is to review your old images and reprocess them. Starting from scratch can reinforce important lessons as you compare the new version to the old. You might be surprised how hard it can be to recreate some of your best work. Or you can take the old image and try to improve it from where it is. I find this latter approach to be very powerful, as it gives me a way to keep the look and feel of an image as I improve it. This allows me to easily improve an image for print, while still delivering a result that’s fully consistent with the client’s expectations. It also helps me to learn better ways to correct problems after the fact, which can save time later instead of redoing an image.

I processed this image 7 years ago, so naturally there are a number of details I would process differently now as I’ve grown more skilled as an artist. I still enjoy the image, but upon close inspection feel that the following could be improved:

  1. The sky shows a fair bit of haze, which I’d like to minimize.
  2. The flower highlights are a bit blown out. In keeping with the very high contrast ratios here, I don’t need to restore everything, but I do think some improvement is warranted.
  3. The reflection of the flowers shows HDR artifacts. I processed this back when I was still relying on HDR much more than exposure blending with luminosity masks, and HDR often shows such “ghosted” results when water moves from one exposure to the next. There are also some artifacts around the same reflected flowers from my D800, which show as lines extending from some of the highlights in the water. That wasn’t a common issue for me with the D800, but it certainly didn’t handle dark skies nearly as well as my D850.
  4. Some of the trees lit with bright yellow behind the flower are a bit too hot. The color separation is good, but the light source isn’t as obvious as the neon flowers and I think they could stand to be slightly dimmer so as not to compete with the main subject.
  5. The ambient lighting under the trees to the far left and right was constantly changing colors and not synchronized. There are two better options here. I could make the trees on the left purple to match the colors on the right, which would emphasize the character of the long walk through this park. Or I could make the trees on the right green so as to further emphasize the main subject. I think they’re both great approaches, but I’m going to go with the latter.

Here’s the approach I used to address each of the issues:

  1. I already had a darker exposure in my old image, so I can just mask more of it into the sky. I created a luminosity selection specific to the sky area to help me paint white onto the existing mask. The selection needed to target dark areas of the sky, so I used the Quick Selection tool to target the sky roughly, clicked D for a darks luminosity preview, and then feathered the quick selection slightly when clicking “Sel” to help ensure a smooth transition at the edges.
  2. The flowers needed some exposure blending with a new exposure I imported from my original shoot. This is pretty consistent with other blends I’ve demonstrated, but in this case there’s a twist. My processed image is a few pixels smaller than the new source material and I’m importing a Smart Object, so I had to manually align the new source. When adding the new layers with “PreBlend”, I just checked “check alignment (difference)” to put the layers into a blend mode that would make it easy to align. Just activate the move tool and press the arrow keys to nudge the layer pixel by pixel until the result looks as dark as possible (difference blend mode generally shows very dark when things are perfectly aligned). Then just create a lights luminosity selection and start painting on the mask to reveal the improved flower details.
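The idea behind that difference-mode alignment can be sketched numerically (my own NumPy illustration, not how Photoshop or Lumenzia implement it): the absolute difference between two layers is darkest, approaching pure black, at the correct offset.

```python
# Simulate nudging a misaligned layer and scoring each offset by the
# mean absolute difference (what "difference" blend mode visualizes).
import numpy as np

rng = np.random.default_rng(0)
base = rng.random((50, 50))        # stand-in for the processed image
layer = np.roll(base, 3, axis=1)   # new source material, misaligned by 3 px

def misalignment_score(offset):
    """Darkness of the difference view after nudging the layer back by `offset` px."""
    return np.abs(base - np.roll(layer, -offset, axis=1)).mean()

scores = {off: misalignment_score(off) for off in range(6)}
best = min(scores, key=scores.get)
print(best)  # 3 -- the nudge that makes the difference view darkest
```

In Photoshop you do this by eye rather than by score, but the principle is the same: keep nudging until the difference view is as dark as possible.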
  3. The reflection is a bit different. It’s more of a local replacement than a blend because the working image has those artifacts (HDR ghosting and the lines in the RAW). I grabbed a source image which had both good details and a shutter speed that rendered the water in an ideal way. In addition to processing for details, I also needed to remove the camera artifacts. By using strong noise reduction in ACR and negative texture coupled with a boost in clarity, I was able to nearly eliminate the D800 artifacts and generate a great-looking reflection. As the water in both versions matches and lacks detail, I simply used a soft white brush to reveal the better layer.
  4. The background trees are also a little different. In this case, I wanted to darken the yellow trees without darkening the green/magenta/white flower, so I just clicked “Color” to make a selection based on yellows and brushed through it to reveal the darker trees. I added a BlendIf to target the lightest pixels. I don’t typically use BlendIf for blending, but it works just fine in this case. I then reduced opacity so that the result was a subtle correction.
  5. The tree color is a bit tricky. You could use an HSL adjustment, but I found better results by just replacing the color. I added a solid fill layer with the desired hue and saturation and set it to color blend mode. I then used “Color” to create a Blue/Magenta mask and then added an additional mask to target only the blue/magenta colors in the trees on the right side of the image.

Photoshop channel math: add subtract intersect

Once you’re getting the hang of luminosity masks, there are several ways you can combine them to make even more powerful masks and selections. Specifically, I’m referring to adding, subtracting, and intersecting selections, masks, or channels in Photoshop. In this tutorial, we’ll cover why you should use them and demystify how they work.

All of them work from a principle of starting from your current selection/mask/channel and then modifying it with another in order to produce a more targeted result. To keep the discussion simple, I’m simply going to refer to “selections” for the rest of the article, but the concepts apply equally to masks and channels. 0% selected is the same as black (0.000) in a mask/channel and 100% selected is the same as white (1.000) in a mask/channel.


Subtracted selections

These allow you to remove something from your current selection in a proportional way (ie, the pixels targeted by this selection but NOT that one). This is a particularly powerful tool for enhancing shadow detail. If you take a brightness adjustment layer and start painting a mask through a D3 selection, you will definitely lighten the shadows. But you will be brightening the blackest detail more than the other shadow values. This will result in a muddy, low-contrast mess. What you really want to do is to leave pure black alone and instead only brighten the slightly brighter dark tones. You can subtract a more restrictive darks selection such as D5, which will exclude the pure blacks and give you exactly what you need. Now when you paint through it, the shadow detail will be brightened, but without reducing contrast by lightening the pure blacks.

The concepts here are pretty simple when you’re dealing with areas which are fully selected or protected, but much less obvious when you start to consider all the partial values in between, which is the nature of all luminosity masks. If you remove a 100% selection from anything, the result will be 0% selected (or black in the channel/mask). What confuses people is that this isn’t just a simple subtraction of two numbers. If you subtract a 40% selection from a 50% selection, the result is not 10% selected, it’s actually 30%. The way Photoshop thinks about this is that you are starting from a 50% selection and then going 40% of the way from there to 0% (black).

To show you the actual math, let’s first define a few terms:

  • currentVal = the current value of your selection, mask, or channel
  • modifierVal = the value of the selection you wish to add to, subtract from, or intersect with your current one
  • The math is based on working with scalars, which are values ranging from 0.000 (black) to 1.000 (white) and work like percentages (ie, 0.535 would mean 53.5%). If you are reviewing the Info panel, you will only see the correct values if you switch the display to show grayscale (K) values as 32-bit values.
  • The math is done directly on these grayscale values. It does not matter which color space, working space, or bit depth your image uses; this is the correct way to view the mask/channel values if you want to understand how add, subtract, and intersect work (ie, Photoshop internally uses conceptually similar values, regardless of bit depth or how you choose to view the numbers).

Subtract => currentVal – modifierVal * (currentVal – black) => currentVal – modifierVal * (currentVal – 0.000)
Which can be simplified to: Subtract => currentVal – modifierVal * (currentVal)
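Here is that formula as a tiny helper (the function name is my own, for illustration) so you can verify the 50% minus 40% example from above:

```python
# Subtract: move the current value toward black (0.0) by the
# modifier's fraction of the distance from currentVal to 0.
def subtract_selection(current, modifier):
    return current - modifier * (current - 0.0)

print(subtract_selection(0.50, 0.40))  # 0.3, not the naive 0.1
print(subtract_selection(0.50, 1.00))  # 0.0: a 100% modifier removes everything
```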

To learn more about subtracted selections, be sure to check out my previous tutorials on the topic.

Added selections

This allows you to increase the areas targeted by your selection (ie, the pixels in this selection OR that one). Most likely, you’d be using this to load two alpha channels for different subjects in order to work on both at the same time. For example, you may have selected each building in a cityscape individually for control but need to work on all of them together at some point. Generally speaking, you’re much less likely to use addition for luminosity selections than the other methods. But you might occasionally want to do something like target both zones 4 and 5 at the same time.

The way Photoshop thinks about this is that you are going from the current value some percentage of the way towards white. So adding a 40% selection to a 50% one would take you 40% of the way from 50% to 100% (white), which is 70%.

Add => currentVal + modifierVal * (white – currentVal)
Add => currentVal + modifierVal * (1 – currentVal)
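The same formula in code form (again, a hypothetical helper name of my own), confirming the 40% plus 50% example:

```python
# Add: move the current value toward white (1.0) by the
# modifier's fraction of the remaining distance.
def add_selection(current, modifier):
    return current + modifier * (1.0 - current)

print(add_selection(0.50, 0.40))  # 0.7: 40% of the way from 50% to white
print(add_selection(0.50, 1.00))  # 1.0: adding a 100% modifier selects everything
```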


Intersected selections

This allows you to restrict the areas targeted by both selections (ie, the pixels common to both this selection AND that one). This is particularly helpful for refining based on two criteria, such as pixels which are both L2 and yellow, so as to separate the bright yellow building from the bright blue sky. Or for choosing pixels which are both D4 and inside the general area you just targeted with a lasso selection. This is conceptually similar to working with group masks, but lets you do everything in a single step when that’s your preference.

The math here is a much simpler multiplication. So intersecting a 40% selection with a 50% one gives you 40% of 50% or 20%.

Intersect => currentVal * modifierVal
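And the intersect formula, completing the set (helper name is mine):

```python
# Intersect: keep only the portion common to both selections,
# a simple multiplication of the two scalar values.
def intersect_selection(current, modifier):
    return current * modifier

print(intersect_selection(0.50, 0.40))  # 0.2: 40% of 50%
print(intersect_selection(0.50, 1.00))  # 0.5: a 100% modifier changes nothing
```

Note that intersect is symmetric: unlike add and subtract, swapping the two selections gives the same result.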


How to Blend Exposures for Interiors

Now through June 30: Use discount code SUMMERSALE for 25% off my Master Courses (or bundles with Lumenzia) to get the software and training to help master your own exposure blends.


I’m trying something new this week. I get a lot of requests for longer or more detailed tutorials than I can typically squeeze into a 15-minute YouTube video. So I’ve built out a more complete tutorial in which you’ll learn:

  • How to blend multiple exposures in dynamic light
  • How to customize and use some of the most advanced luminosity selection techniques involving color, subtraction, or restricting to specific areas such as windows
  • How to dodge and burn with your exposures
  • How to use the new four-sided blur borders with Web Sharp Pro

The images for this tutorial were provided by the talented Garey Gomez. He’s also an instructor, so be sure to check out his tutorials on the art and business of real estate photography.

How to stitch challenging panoramas

I’ve previously posted several tutorials on how to create panoramas with Lightroom, including details on adaptive wide angle and how to stitch HDR panoramas in one step. Lightroom does a beautiful job in many situations, but panoramas aren’t its main focus and it can’t handle every job. That’s where a dedicated panorama program becomes a critical tool. These programs look daunting, but don’t let that stop you. If you know the few controls that really matter for most tough jobs, it’s actually quite easy, and in this tutorial you’ll learn how to quickly get the job done.

Important tools for shooting and processing panoramas:

While you don’t need all the fancy tools I use to capture panoramas, they do make your life easier by helping to capture the best quality source images. They can help ensure you get images that will stitch, simplify the stitching workflow (avoid more complex fixes), and help you keep more of the scene (by avoiding cropping due to gaps or slanted results).

The tools I used for this image edit include:

  • PTGui: The dedicated panorama stitching software I use for more challenging panoramas like this one. The “pro” version is not required for the type of work I showed here. The key benefits of the more expensive version are dedicated support for stitching multiple exposures (“HDR”) and batch processing if you plan to do a lot of this.
  • Arca Swiss “Cube”: This is a very expensive geared head, but built to last forever and it adds tremendous value if you shoot panoramas or architecture. If you need to shoot on a budget, this is definitely optional. Ball heads are fine and there are cheaper geared heads (but try them and make sure you don’t get one that feels loose when the camera is attached).
  • Really Right Stuff series-3 leveling base with hook: Whatever head you swivel on top of your tripod, it should be perfectly level, and a tool like this makes the job so much easier than fiddling with the legs. There are different options for different models; call B&H if you have questions. I like the version with a hook so that I can hang my bag from the tripod for extra stability.
  • Really Right Stuff slider rail: This lets you offset the camera a little so that it can rotate around the “nodal” point, which avoids parallax that can make stitching very difficult in scenes where the foreground is within 10 feet or so of the camera.
  • Really Right Stuff pano head: If you wish to shoot multi-row panos like I used in this demo or anything where the camera doesn’t stay perfectly level, you’ll also want a head designed for vertical movement. If you only plan to shoot single-row panos, want to watch your budget, or travel as light as possible – just get the slider rail.
  • Luminar: For the light rays in this demo. It has nothing to do with panorama specifically, it’s just a nice filter effect for the final image.
  • Lumenzia: My luminosity masking panel, for more control of Luminar. It is also unrelated to the pano itself.

If you are on a budget, the most important tools here are PTGui, a slider rail for the nodal point, and ideally a leveling base to make life easy. If you do not have the time or tools to shoot on a leveled tripod with adjustment for the nodal point, you’ll want to capture images with a much higher degree of overlap from frame to frame. These extra images will help compensate for lower-quality source images.


How to stitch using PTGui:

I’ve been using PTGui for probably 10-15 years. It used to be rather complicated to use, with a lot of complex options. These days, it just looks complicated. It’s quite simple now if you follow these steps:

  1. Open the Project Assistant. This should be the first thing you see, but you can also navigate there via Tools / Main Window and clicking at the top of the left-hand column.
  2. Drag and drop your images or click “load images” to provide your source images. I recommend using TIF images you have processed with your desired white balance, etc. PTGui will accept RAW files, but there’s no real benefit in my opinion since it will not allow you to export the final image as a DNG (this is the primary advantage Lightroom offers over PTGui, and I’d love to see support added here too).
  3. Click “align images“, which should take you to the panorama editor window.
  4. In the panorama editor, try the different projections to find the best starting point. Use the cylinder for single-row panos and the sphere for multi-row. Be sure to try the “rectilinear” option in the drop down.
  5. Drag the horizontal and vertical sliders to help zoom the image area to your content.
  6. Click and drag the image to set the center.
  7. If the image needs rotation, click the popout options at top right, then numerical transform, and try + or – 0.1 degrees for roll. Click apply to update.
  8. Repeat steps 5-7 until you have the composition you like with either no gaps or ones you can fill via content-aware in Photoshop.
  9. Close the panorama editor or go back to the main window via Tools / Main Window and look for the “create panorama” section in the left hand column. You can skip all the other stuff.
  10. If this is your first time, select TIFF, 16-bits, LDR blended panorama (or HDR blended panorama if you are providing multiple exposures), select AdobeRGB or your preferred colorspace and check “use source image color space if possible“. Then go to File / Make Default to save these settings as the default for the future. Be very careful to ensure you are using 16-bit output.
  11. Click “create panorama” and look for the final file to be placed next to your source images. Open it in Photoshop to finish editing.

Of course, feel free to explore those advanced options as you need, but they are overkill for most properly shot sequences of images.

Note that I deliberately allowed the converging verticals (keystoning) here. I had the camera pointed up a considerable amount and no amount of correction would straighten everything without tradeoffs. Personally, I like the look in this case. It’s not slightly off, it’s clearly a choice. And it helps things flow visually toward the light source. If you wanted more correction, you could try the following steps to get the alternative version below.

  1. Adjust the pitch in PTGui’s panorama editor in the numerical transforms (in the same popout where you can rotate the image). By placing the centerpoint low in the image, the top gets stretched out to keep the columns more vertical.
  2. Try playing with the projection settings for horizontal and vertical compression in the projection (same popout on the right)
  3. Zoom out to keep critical edges. This will leave enormous areas of blank pixels, which will need to be cropped later.
  4. In Photoshop, use the Adaptive Wide Angle filter (ideally on a Smart Object created from the output of PTGui). The capabilities here were a bit limited because of my use of a tilt shift lens, but using the “polygon constraint tool” on the ceiling helped further straighten bowing lines in the ceiling.
  5. Crop.

Here you can see the results. Personally, I do not like the dramatic change in column thickness nor the distortion in the ceiling. There is no free lunch when you shoot ultra-wide looking up; tradeoffs are part of the game. I could have shot head-on from the second floor, but then I’d lose the scale in the stairs. I prefer just embracing the scene for what it is. It makes you feel a little small relative to this incredible architecture.


How to add light rays using Luminar and Lumenzia:

To add light rays using Luminar 4:

  1. Right-click your image in Photoshop and convert the panorama to a Smart Object. This will allow you to change the filtering later as needed.
  2. Go to Filter / Skylum Software / Luminar 4.
  3. Click on the creative tab (the icon looks like a painter’s palette) and then Sunrays.
  4. Click “place sun center” and then click and drag the white dot to where the light should come from in the image.
  5. Increase the amount to a high value so you can see it, adjust the various other settings to optimize the look, and then finally decrease the amount as desired.
  6. When you are done, click “apply” and then you may wish to paint with a soft black brush on the Smart Filter mask to reduce or remove the effect in some parts of the image.

To enhance the light rays using Lumenzia:

  1. Click “Dodge” to add a transparent or gray dodge and burn layer.
  2. Click Diff(+/-) and choose the “lighter” option. This will help target pixels which are brighter than those around them.
  3. Click and drag the slider to the right to use a larger radius of comparison. Small values will target edges; you’ll need larger values like 200-500px to target the light rays.
  4. Click “Sel” to load the Diff preview as a selection.
  5. Paint on the dodge layer with a white brush at low flow through the selection to enhance the rays.


Disclosure: This article contains affiliate links. See my ethics statement for more information.

Greg Benz Photography