Make lightning glow with Lumenzia

I often get questions on how to use luminosity masks to enhance subtle detail, such as the tiny branches of lightning in this image submitted by Earl Gravois. My approach here is to lighten those areas, but also make the main branches more prominent so that there is still a clear structure. Whether you photograph lightning or not, the edit behind this image is a great example of how you might use luminosity masks with Lumenzia to target subtle detail. The minor branches, main bolts, and glow around the lightning all need different approaches to create the right luminosity mask.

The key challenge here is that both the subject (the lightning) and background (the sky) contain a broad range of pixel values, ranging from dark blue midtones to bright white. There is no range of color or luminosity that clearly separates one from the other, as there is tremendous overlap. So masks based on luminosity, color, or even a combination of both won't work to isolate the lightning.

What really separates them is that the lightning is always brighter than the surrounding sky, which makes this a perfect candidate for Lumenzia's "Diff" preview. It lets you target pixels which are lighter or darker than their surroundings, and you can use the slider to determine how broad a comparison you'd like to make, which is helpful for optimizing for smaller or larger details such as the minor and major branches of the lightning here.

However, a standard lights mask also works well in this image, for a different purpose. L3 works great here to select both the lightning and a bit of the sky around it in order to make the main bolt glow. The lightning is already mostly white, so a glow is a great way to make the main branches more prominent. Alternatively, this image could be edited as an HDR image to let the lightning get brighter than standard white (it looks amazing edited as HDR), but I wanted to stick with a standard edit here given Earl's goals.
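Many luminosity-masking workflows approximate increasingly restrictive lights masks by intersecting the basic lights selection with itself, which amounts to raising luminance to a power. This sketch illustrates that general idea (Lumenzia's actual masks are generated differently, so treat the math as illustrative only):

```python
import numpy as np

def lights_mask(luminosity, level):
    """Approximate an increasingly restrictive "Lights" mask by raising
    luminance to a power (equivalent to repeated self-intersection of
    the basic lights mask). Illustrative only; not Lumenzia's actual
    mask generation."""
    return np.power(luminosity, level)

# Dark sky vs bright bolt: an L3-style mask nearly excludes the sky
# while keeping the lightning and its immediate glow well selected.
lum = np.array([0.2, 0.6, 0.9, 1.0])
masked = lights_mask(lum, 3)  # values: 0.008, 0.216, 0.729, 1.0
```

This is why a higher-numbered lights mask restricts the selection to the brightest values while still feathering gently into the surrounding glow.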

How to use the new LR / ACR “Point Color” tool

Adobe Lightroom (LR) and Camera RAW (ACR) recently added a new “Point Color” tool. It adds a completely new level of control for making precise color changes in your images.

There are now several ways to target and adjust color in ACR (and LR, I’ll just stop mentioning both now). Why did Adobe add yet another color tool? The key is the extra precision to select the exact hue / saturation / luminosity / tolerance you need to adjust. It’s quite a bit different from the previous tools and you’ll see how they compare in the video.

All these various tools are useful in different ways, and often complement each other very well. They offer various degrees of simplicity, control, and adjustment.

Here’s a quick comparison of the main color-specific tools in ACR:

  • Point color:
    • targeting:
      • a specific HSL combination (with custom tolerance)
      • global or local (may invert, subtract/intersect/add with masks)
      • based on the adjusted image, so it is sensitive to other edits: point color should be used LAST.
    • adjustment options: HSL
    • Ideal for: precise targeting for HSL adjustments.
  • Mixer:
    • targeting:
      • fixed hue ranges (no saturation / luminance / tolerance control)
      • global only
      • based on the global image adjustments (ignores color grade / local adjustments)
    • adjustment options: HSL
    • Ideal for: simple global HSL adjustments
  • Color range:
    • targeting:
      • a point or rectangular sample of “color” (no control over HSL tolerance)
      • based on the source image (ignores all global and local adjustments)
      • local (may invert, subtract/intersect/add with masks – and you can treat it like a global tool if needed by putting a linear gradient off the edge of the image)
    • adjustment options: same wide range of options available for any local mask. Most useful options typically include exposure, hue, and saturation (combining with point color is confusing and not typically beneficial)
    • Ideal for: local adjustments beyond HSL (such as clarity), predictable targeting (not sensitive to edits like Point Color).
The point color interface can be a little overwhelming at first. It includes (from top to bottom):
  • A color sampler (for new samples) and existing samples
    • You may add as many as you like, where each has a different target and adjustment.
    • Note that once you sample, there is no way to change the primary target without deleting the swatch.
  • a 2D plot for hue/saturation and a vertical luminosity strip:
    • a little dot shows the source color you clicked (this cannot be moved)
    • a larger dot shows the adjustment to make in H, S, or L
    • hold <alt/option> while clicking and dragging for smaller changes
    • hold <shift> to only change saturation
    • hold <ctrl/cmd> to only change hue
  • A solid 2-color bar showing the sampled color on the left, and the new adjusted output on the right (this of course shows both as the same color until you adjust the next sliders to make a change).
  • Hue, Sat, and Lum sliders. These are your actual adjustments; everything else just refines the targeting.
  • A small mask icon.
    • Clicking this will visualize the targeting by leaving only those pixels in color.
    • Note that this can be a little confusing with local adjustments, as areas outside your mask also remain in color. For example, if you target reds inside a radial gradient, you will still see other colors outside the gradient.
  • A range control (next to the mask icon). This offers a quick way to adjust the H, S, and L targeting tolerance all at once.
  • If you click the disclosure triangle (to the right of "Refine"), you'll see sliders for more precise control of H, S, and L targeting (these correspond directly to the HSL targeting boxes above, but offer tolerance control here).
    • The little dot in each bar matches the corresponding sampled value above.
    • The central bar is fully targeted, and the lines show the extent to which each slider is feathered.

Instagram now supports HDR photos!

Instagram (IG) and Threads just added support for HDR (High Dynamic Range) photos! This means we are starting to see the first steps toward mainstream use of these enhanced images. HDR photos look more true to life by retaining the full color and dynamic range captured by the camera sensor.

I’m still testing to see what works best, but have posted a few HDR test images to my IG account (note that I have yet to find a way to get a final post that fully matches the complex edits I’m doing on my computer, which isn’t surprising as support for sharing phone captures is the natural first step for a mobile-first platform).


How can you view HDR photos on Instagram / Threads?

IG supports HDR on all major platforms: iPhone, Android, and the web (with a supporting browser like Chrome). Most smartphones support HDR, as do most Apple laptops sold in the last several years (the 14-16″ M1 or later MacBook Pros look especially stunning). The majority of your audience will likely be able to see your HDR images, as IG is primarily used on mobile phones.

Support is automatic for most users. There’s nothing you need to do on a phone other than ensure you’re up to date on your operating system and the Instagram app. For a computer, see my tests to confirm that your display supports HDR.

Photos will show as HDR in the feed and when viewed large (such as by clicking an image in the grid on a web browser). They currently show as SDR (standard dynamic range) when viewing a profile grid on a mobile device until you click to view a specific image (the grid is HDR on the web). The images are shared as gain maps, which is a standard intended to help optimize display on any monitor (regardless of whether it supports HDR or just SDR).
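For the curious, the core idea behind a gain map can be sketched in a few lines: the file stores an SDR base image plus a per-pixel boost, and the viewer applies as much of that boost as the display's headroom allows. This is a simplified illustration with assumed parameter names; the actual specs used by Adobe and Google include additional parameters (min/max boost, gamma, offsets):

```python
import numpy as np

def apply_gain_map(sdr, gain, headroom_stops):
    """Reconstruct an HDR rendition from an SDR base plus a gain map.

    sdr:   SDR pixel values (linear, 0-1)
    gain:  per-pixel gain map in 0-1 (0 = no boost, 1 = full boost)
    headroom_stops: how much boost the current display can show
                    (0 on an SDR display, more on an HDR display)
    """
    return sdr * np.power(2.0, gain * headroom_stops)

sdr = np.array([0.25, 0.5, 1.0])
gain = np.array([0.0, 0.5, 1.0])          # brightest areas get the most boost
sdr_view = apply_gain_map(sdr, gain, 0)   # SDR display: unchanged
hdr_view = apply_gain_map(sdr, gain, 2)   # 2 stops of headroom: 0.25, 1.0, 4.0
```

This is why the same file can look correct on both SDR and HDR screens: the SDR rendition is always there as the base, and the boost scales smoothly with available headroom.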

Support appears to be starting to roll out across other Meta-owned platforms as well. The Threads iOS app now supports HDR. And if you share an IG or Threads post to Facebook, it will show as HDR when viewed on a supporting web browser. Support seems to be expanding/evolving quickly.


How do you capture and upload HDR photos to your account?

Initial support appears focused on sharing images from smart phone cameras (which makes sense as the primary way most people interact with IG). Just use the native iOS / Android camera app and you should have HDR support. I have tested JPG, HEIC, and RAW on the iPhone and all those formats are supported when captured with the native camera. On Android, I have found both the native Android app and IG’s built-in camera work.

You can also upload images exported from Web Sharp Pro, Lightroom, and ACR as an HDR AVIF encoded in the P3 color space – at least when uploading through the iOS IG app. This is not an optimal solution as it does not include a gain map for optimal SDR rendering, but that’s less important here because most images on Instagram will be viewed on HDR-capable mobile phones. The following video provides more context and shows the ACR pathway through Web Sharp Pro (which offers an “enhance SDR to HDR” option to easily upgrade any image to HDR, Instagram size / cropping templates, converting to P3, optimized sharpening, optional borders, etc).


That’s an incredible start, but what about other options? It would be amazing to have support for every camera app, custom gain maps, etc right now – but we have to start somewhere. I haven’t even seen support documentation on these capabilities, so we are probably in beta territory at this point. Instagram has done an amazing job with this initial support and deserves a lot of credit for this achievement (along with the other technologies involved here from Google, Apple, Samsung, etc).


What about Facebook?

There is actually some very limited support on FB now. If you share a link to one of your images from IG or Threads, it will show up as HDR when viewed in a supporting web browser – but the FB app will not support it on any device. Still, that means true HDR images for computers like the MacBook Pro when using Chrome, etc. And everyone else will get your fallback SDR, so the results will look great when properly encoded (which is a bit tricky due to the limit noted below). You can see an example on my FB page.


What isn’t supported?

I have found the following scenarios do NOT currently support HDR (this is not an exhaustive list, and limitations here may be resolved by the time you read this):

  • Full HDR capacity. There are bugs in some transcoding pathways which cause the HDR to be a bit darker than the source. I’d expect this to get resolved soon enough.
  • Custom gain maps. The full gain map spec used by Adobe is not yet supported on IG, so we can’t provide HDR images which are also optimized for SDR fallback. I’m ok with the tradeoff for now, as HDR is already the norm for mobile devices.
  • Some iOS native camera captures do not include the required gain map. This includes pano images, as well as portrait mode shots if you shoot too quickly (before text like “natural light” shows in yellow) or if you use either of the mono (black and white) modes.
  • Third-party camera apps may not capture or share gain maps.
    • I tested ProCamera and found that JPG / HEIF images did not include gain maps and RAW / ProRAW captures did not generate gain maps when uploaded (unlike RAW files from the native iOS camera).
    • The IG camera on iOS did not support HDR in my testing (but direct captures within IG work great on Android / S24).
  • Filters work great for HDR in the Android app. However, selecting any IG filter in the iOS app during upload will produce an SDR result.
  • Uploading through a web browser on a computer will fail to preserve HDR in most cases.
  • Uploading iOS captures on Android or vice versa. Apple and Google use different gain map formats.

None of these limitations for advanced HDR photography surprise me for initial support. There are different gain map specs and implementations at this point which will likely require further work from multiple companies to allow full support across platforms. Focusing on sharing smart phone images is the right first step and the results are gorgeous and very easy to obtain (automatic actually). I had assumed we probably wouldn’t see social media support for HDR any sooner than the end of 2024, so this is a very exciting development.

==> Keep an eye on my newsletter and Instagram account for updates as HDR support continues to evolve on social media. I’ve been testing various approaches to upload edited images and am getting pretty close with my latest post.

What the HDR histogram can teach you about your RAW images

Lightroom (LR) and Adobe Camera RAW (ACR) don’t offer a RAW histogram, but there is a new way to get more information about your RAW images: the HDR histogram. And you can use it even if you do not have an HDR monitor or do not intend to process your image as HDR. Just click the “HDR” button in the develop module and watch how the right side of the histogram changes. Detail which is bunched up towards the right side of the SDR histogram can now be properly rendered as it was in the original scene. It is not a RAW histogram, but it will show you much more about the highlight data in your RAW image. **


And this full histogram helps illustrate a couple lessons:

We don’t “recover” highlights, we just process them so they fit into the capabilities of our monitors:

Truly “blown” pixels cannot be recovered with RAW processing. RAW processing tools like LR and ACR do not invent highlight detail (though I hope someone invents an AI tool for that to help manage slightly blown skies, that would be quite useful). We can clip pixels in our SDR processing, but if you are able to extract highlight detail from RAW, it was always there.

The HDR vs SDR histogram helps show the difference between truly blown pixels in the RAW vs “highlight rolloff” (compression of the highlights to squeeze them into the SDR range). You can also visualize this information on an SDR display simply by reducing the exposure slider. Everything is now too dark, but it does show what’s possible with your image.

As a side note for those of you who haven’t experienced HDR yet, you can also use this information to help determine if an HDR display would improve your image. For example, if you see color and detail restored when you set exposure to -2, then you know that 2 stops of HDR headroom would be sufficient to see those pixels properly on an HDR display. Given that many displays offer 4 or even 5 stops of headroom (such as the M1 or later MacBook Pro), you would easily be able to see detail which becomes visible when you slide exposure down to -4.
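If you like numbers, headroom in stops is just the base-2 log of how far a display's peak brightness sits above SDR reference white. The 100-nit SDR reference and the example peak values below are common illustrative assumptions, not measurements of any specific display:

```python
import math

def hdr_headroom_stops(peak_nits, sdr_white_nits=100):
    """HDR headroom in stops: how far a display's peak brightness sits
    above SDR reference white. Each stop doubles luminance."""
    return math.log2(peak_nits / sdr_white_nits)

print(hdr_headroom_stops(400))   # a 400-nit peak gives 2.0 stops
print(hdr_headroom_stops(1600))  # a 1600-nit peak gives 4.0 stops
```

So a display whose peak is 16x brighter than SDR white offers the 4 stops of headroom mentioned above.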


If the untouched RAW doesn’t use much of the HDR range, it is probably under-exposed:

If you are properly exposing to the right (ETTR), your RAW images should frequently look too bright at first (i.e. require that you set exposure slightly negative when you process them). The reason for this is that brighter exposures have less noise (a better signal to noise ratio).

There isn’t an exact point where the HDR histogram tells you your camera would have clipped – it varies by ISO and probably by camera. However, if you consistently see little or no use of the HDR range in the histogram, you are probably frequently under-exposing your images. Your RAW at base ISO can probably offer 3 or more stops of HDR support. So you might have an image that looks properly exposed for SDR, but the HDR histogram might be telling you it could have safely been exposed 3 stops brighter. In other words, your ISO 100 image might have the noise quality of an ISO 400 – 800 image and you could have avoided that with a longer shutter speed.
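The underlying math is simple: in the shot-noise-limited regime, SNR scales with the square root of captured light, so each extra stop of exposure is worth roughly 3 dB of SNR. A simplified model (ignoring read noise):

```python
import math

def snr_gain_db(extra_stops):
    """Approximate shot-noise SNR improvement from exposing brighter:
    each stop doubles the captured light, and shot-noise SNR scales
    with the square root of light, so each stop is worth sqrt(2)
    (~3 dB). Simplified; ignores read noise and other sensor effects."""
    return 20 * math.log10(math.sqrt(2) ** extra_stops)

print(round(snr_gain_db(3), 1))  # 3 stops more light → about 9.0 dB better SNR
```

In other words, the 3 unused stops in the histogram represent a substantial, measurable noise penalty.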

Note that you will likely see the histogram shift more than the amount you set on the exposure slider; not everything here is based on precise physics (this is still just an HDR histogram based on Adobe processing, not a RAW histogram). This is just a visual way to help understand ETTR principles. You can also watch for predicted clipping when bumping the exposure slider in SDR mode, you just won’t see the same details in the histogram.

Of course, sometimes slower shutters or wider apertures aren’t ideal given other considerations for movement or detail in the image. But if you know the limits, you can reduce noise when possible.


** To see an actual RAW histogram:

The HDR histogram helps you see more of your image data in LR / ACR. A true RAW histogram would show the mosaic sensor data, typically including two green channels. It wouldn’t have color data and it wouldn’t respond to any RAW processing choices (since it would be measured from the original data). This can be helpful for better understanding your camera and shooting decisions.

If you want to go a step further and see a true RAW histogram, you might want to check out RAW Digger. I have not personally used it, but have heard great things both about the software and the developer’s excellent support of it.

Real estate exposure blending

Many of you have asked me for more real estate editing tutorials, especially for exposure blending of windows using the Lumenzia luminosity masking panel. New Zealand-based photographer Anthony Turnham has a great YouTube channel with several outstanding tutorials on that very topic (plus many other great post-processing videos). So I’m creating this post to share a collection of his videos to help answer your questions. The videos are set to play at the time point where Lumenzia is used, but I highly recommend watching the full videos as they are packed with great information on the complete edit.

His videos are also a great complement to my Exposure Blending Master Course, which goes into great depth on the fundamentals of blending – but doesn’t cover interiors with the depth Anthony does.




If you’re looking for even more videos showing how others use Lumenzia, be sure to check out this playlist on my channel as well.

Greg Benz Photography