ASUS PA32UCDM: The best HDR OLED monitor for photographers?

SALE: for a limited time at launch, you can save $200 off the cost of the ASUS PA32UCDM.

 

ASUS just launched a very exciting new HDR monitor: the ASUS PA32UCDM. This level of performance has never before been offered at this price point.

This monitor offers:

  • 1000-nits peak brightness to support up to 3.3 stops of additional dynamic range over standard monitors (certified DisplayHDR™ True Black 400)
  • OLED for perfect blacks
  • ∆E < 1 and hardware calibration for high accuracy (this is critical for HDR, as there is no standard yet for creating ICC profiles for HDR displays)
  • 99% P3 coverage
  • 4k resolution
  • Connect your laptop with a single cable: 96W USB-C power delivery and three downstream ports (Thunderbolt 4, USB 3.2 Gen 2 Type-C, and USB 3.2 Gen 2 Type-A)
  • Dolby Vision, HDR10, and HLG support
  • It also offers 240Hz and 0.1ms for those of you interested in gaming.
  • All for only $1,699-$1,899 (comparable alternatives to this monitor are typically $2-$3k).

There is nothing else like this currently on the market. Other OLED monitors either lack the brightness or the color accuracy for HDR. As an OLED, it will naturally offer superior shadow detail compared to mini-LED alternatives. And this is the lowest price point I’ve seen for a monitor offering such a great level of HDR support.

[Disclosure: This post contains affiliate links. I rarely endorse other products and only do when I think you would thoroughly enjoy them. By purchasing through these links, you are helping to support the creation of my tutorials at no cost to you.]

 

Conclusions: Who should buy this OLED monitor and what are the best alternatives?

This article goes into significant detail on performance, setup, calibration, and comparisons. To help make the bottom line clear, let’s start with the conclusions.

The ASUS PA32UCDM is an excellent monitor. It fills an important role as a high quality HDR OLED monitor at a price point that bridges the gap between budget HDR and professional mini-LED. It features the ability to calibrate in HDR mode, 1000-nits (up to 3.3 stops HDR headroom), 4k resolution, and overall high quality. Setup is important to achieve these results, so be sure to review the details below if you purchase this monitor.

As with almost any current OLED monitor, the peak nits should not be compared directly to a mini-LED. This display is rated at DisplayHDR 400 because it can only achieve 1000 nits over a small percentage of the screen at once. It’s an excellent display, but a 1000-nit mini-LED will have an advantage for working in a bright room or displaying images with a lot of bright pixels. On the flip side, shadow detail is outstanding, with perfect blacks that are not possible with today’s mini-LEDs. So this is a monitor which will naturally excel in a darker environment.

The ideal HDR monitor for you depends on your working environment and budget. Here are my recommendations:

  • If you work in an environment with controlled lighting or have a ~$1,700 budget: this OLED ASUS PA32UCDM is an excellent choice (budget $200-$300 more if you need to purchase a colorimeter).
  • If you work in a bright environment or want a full 4 stops of HDR and have a ~$3k budget: ASUS PA32UCXR mini-LED is ideal (see my review).
  • If you use MacOS and money is no object ($6k new), the Pro Display XDR is ideal for its simple setup, better customer support, 6k resolution, and absolute silence (no fans). Unless you can get one used, the first two options offer much better value for most photographers.
  • If you have a more limited budget, there are many good alternatives:
    • If you are willing to use a 42″ TV as a monitor, the LG C4 is an excellent option for $900 (while inventory lasts, the C5 has no advantages here but the price has increased).
    • If you are interested in a 32″ HDR monitor for ~$1k: ASUS PA32UCR-K mini-LED (see my review for caveats and tips on local dimming with this older model, the PA32UCDM is well worth the extra cost for its better image quality).
    • The Gigabyte AORUS FO32U2 appears very promising as a 4K, 1000-nits OLED monitor for $800 (I have not had the chance to test it, and it does not support HDR calibration).
    • If you have a very tight budget (~$350), the Xiaomi G Pro 27i offers a 1000-nit, 27″, 2K resolution mini-LED with moderate color accuracy (see my review).
    • An external monitor is just one option. All of the 14-16″ M1 and later MacBook Pros include an outstanding 1600-nit mini-LED display. See my review of the M4 MacBook Pro. Highly recommended!
  • See my full list of recommended HDR monitors for much more detail on the key specs to consider and other great options.

 

Image Quality

It is important to remember that the peak brightness of an OLED is not directly comparable to mini-LED. While 1000 nits on this display is getting very close to the 1600 nits of an Apple laptop for example, that only applies when a modest percentage of the image pixels are very bright. Apple’s XDR mini-LED displays can light the entire display at 1000 nits, while an OLED like this probably only supports 2-10% of the display at peak values before it needs to dim a bit. With a properly edited HDR, you should have only a small number of pixels hitting such high values. So an OLED should work very well for editing HDR under proper ambient lighting conditions (your SDR brightness should ideally be 80-120 nits even for print work).

The biggest downside of this peak brightness limitation probably applies to photographers who are new to HDR and prone to pushing brightness too high; the dimming might lead you to edit the content too bright. You can easily check your work on another display (such as your phone), Web Sharp Pro offers HDR soft proofing features to warn you about excessively bright edits, and you can test your image with overlays or measurement tools in Lightroom, ACR, and Photoshop.

Like any OLED, it offers superior shadow detail compared to even the best mini-LED displays. I wouldn’t put too much weight on this, as high-end mini-LED displays are excellent at the level needed for editing photos. You can see significantly better detail in extremely dark shadows. However, this doesn’t affect many photos and you wouldn’t be able to rely on your audience to see the benefit at this level (either because many may lack OLED or may not be viewing in a suitably dark environment to appreciate it). So it’s beautiful and very nice to have, but I place more emphasis on headroom, sustained brightness, and color accuracy than on the risk of blooming in shadows for photography.

On MacOS, HDR headroom ranges from 1.9 to 3.3 stops, depending on how high you set the (SDR) brightness slider in System Settings. You’d achieve the maximum when you set SDR brightness to 100 nits. This means that 3 stops is very reasonable in a darker room, but you should only expect 2 stops of support in brighter environments.
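If you’d like to sanity-check how the SDR brightness slider trades off against headroom, the relationship is just a log2 ratio of peak brightness to SDR white. Here is a minimal Python sketch (it assumes the monitor’s rated 1000-nit peak; the actual peak may be lower when ABL kicks in, as discussed below):

```python
import math

def hdr_headroom_stops(peak_nits: float, sdr_white_nits: float) -> float:
    """Headroom in stops = log2(peak luminance / SDR white luminance)."""
    return math.log2(peak_nits / sdr_white_nits)

# PA32UCDM at its 1000-nit limit:
print(round(hdr_headroom_stops(1000, 100), 2))  # 3.32 stops with SDR white at 100 nits
print(round(hdr_headroom_stops(1000, 270), 2))  # ~1.9 stops in a brighter room
# If you cap the monitor at 400 nits for better accuracy (see below):
print(round(hdr_headroom_stops(400, 100), 2))   # 2.0 stops
```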

One of the key advantages of the ASUS ProArt lineup is its ability to calibrate in the hardware. There is no other option for HDR at this time, as there is no ICC standard for HDR. If you try to use a conventional custom profiling approach, the HDR content will clip (for any monitor on Mac or Windows). Unlike the other ASUS monitors I have reviewed, no colorimeter is included with the monitor, so budget an extra $300 for one if that’s important to you and you don’t already own or can’t borrow one. I did not find an official list on ASUS’s website, but their calibration software supports the Calibrite Display Plus HL (which I own and used for my testing), Calibrite ColorChecker Display Pro, or Calibrite ColorChecker Display Plus.

With its 4k resolution, text is easy to read. I prefer the higher clarity of 6k on my Pro Display XDR, but only slightly. I find this 4k works quite well and have no concerns. Note that I set MacOS scaling to the 2560×1440 equivalent for both monitors.

The display has a glossy finish. There is no matte option for those of you who might work in an environment which may cause reflections.

 

How does this OLED compare to mini-LED?

Actual monitor results are what counts, but there are some key differences between OLED and mini-LED which tend to hold true. In general, OLED excels in dark content and mini-LED excels at bright content. Which is best for you depends heavily on how bright your ambient light is, as well as your goals for HDR headroom.

 

OLED can struggle to achieve higher brightness. This can obviously affect peak nits, and therefore your HDR headroom (especially in a brighter room, where using a higher SDR brightness may leave you with little HDR headroom). What’s much less obvious is that the overall brightness of the content will often cause dimming of the display. An OLED often only achieves its peak nit values for a 2-10% “window” (i.e. percentage of pixels on screen). So if your image has only a few pixels at 3 stops, they might properly show around 1000 nits on the display. But if 25% of the image is that bright, then the same pixel values might be displayed at something like 400 nits. This is caused by ABL, or “automatic brightness limiting”.

ABL is common to nearly all OLEDs and may be used to prevent burn-in, protect the electronics, meet energy efficiency targets, etc. It causes a degree of tone mapping in the display when the overall screen is too bright. Even just a large amount of white background on this web page is enough that an HDR image would be dimmed somewhat. The result is still a good HDR image, but the image won’t have quite the same punch and is not as accurate as what you’d see without it.
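To make the “window” idea concrete, here is a toy sketch of how an ABL-style limiter might cap peak luminance as more of the screen becomes bright. This is purely illustrative: the breakpoints are made up and this is not ASUS’s actual (undocumented) algorithm.

```python
# Toy model of ABL: peak luminance vs. the fraction of the screen that is bright.
# The breakpoints below are invented for illustration only -- real panels apply
# their own power and heat limits, which manufacturers do not publish.
def illustrative_peak_nits(bright_window_pct: float) -> float:
    if bright_window_pct <= 2:    # a few specular highlights
        return 1000.0
    if bright_window_pct <= 10:   # small bright regions
        return 700.0
    if bright_window_pct <= 25:   # large bright areas start to dim
        return 400.0
    return 270.0                  # mostly bright screen: heavy limiting

for pct in (1, 5, 25, 60):
    print(f"{pct:>3}% bright window -> ~{illustrative_peak_nits(pct):.0f} nits")
```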

Mini-LED can easily achieve high brightness. It is not immune to some degree of ABL, but you will generally see it far less often and to a much lesser degree. That means not just higher peak values (and therefore greater HDR headroom), but also greater sustained brightness. For example, mini-LED Apple XDR displays can show 1000 nits across the entire display and up to 1600 nits in smaller areas.

If you were to compare my ABL test (#15) on a mini-LED vs this OLED, you would see that the HDR images get brighter when the page background is set to black. This is because the surrounding white (even though it is just diffuse/SDR white) creates enough brightness to trigger the ABL. The result is that the images lose a bit of punch against the white background on the OLED, while a mini-LED would show the images the same way with either background.

To put all this in context: the PA32UCDM is able to achieve high levels of accuracy (deltaE < 1), and so is its mini-LED sibling. But remember that calibration tests the monitor’s ability to render a small rectangle at the correct color and brightness; it does not guarantee that the same pixel values will be as accurate when you show an image and other content across the entire display. If you were to test with 100% patches covering the entire monitor, you would find that the mini-LED gets much closer to the expected values, whereas the OLED would be much darker (i.e. poor gray tracking). That’s not a terribly realistic test either, as we rarely cover the whole screen in bright content. The most common scenario for that would be MS Word or a light-theme web page where much of the screen uses SDR white – but HDR pixels are even brighter.

There’s no standard for an objective test here, but there are a few things you can look to for confidence. This display is certified to DisplayHDR 400 True Black, which should give you a sense of the comparative performance. And it has a “typical” (aka “sustained”) brightness of 250 nits. That’s helpful to know, since it’s much higher than the roughly 100 nits you should use for SDR content like the background of this web page. Overall, the performance of this OLED is great, but a mini-LED would be better able to show the true full brightness under more conditions. If you’re concerned with accuracy, you could set this monitor to the 400-nit limit and have confidence that you’re seeing 2 stops of HDR with greater accuracy. Or you could allow the full 1000 nits with 3.3 stops of headroom, but less certainty that you’re achieving the target brightness. I think most people would most appreciate the full 1000 nits; just check a few images elsewhere (such as your phone) if you’re new to HDR to make sure you aren’t editing the images too bright (a monitor that darkens makes it harder for you to see your mistake).

 

OLED offers perfect blacks. That means no blooming and better shadow detail, which is often an advantage for dark content when viewed in a dark room. Mini-LED, by contrast, cannot achieve perfect blacks. Unlike OLED (where each pixel is individually lit), mini-LED uses a shared backlight which causes blacks to show a bit of grey around lit pixels (known as “blooming”). While older mini-LEDs (with low zone counts, limited backlight control, and more basic processing) tend to be prone to this issue, it’s much more controlled on newer models. OLED will show better detail in deep shadow areas when viewed under suitably dim ambient light. In very dark content with a suitably dark room, this OLED will clearly show much better detail than even the best mini-LEDs. The classic demo for this is a video showing a moving star field (the blacks will get much darker on OLED). It’s a nice benefit for watching content, but not a must-have for editing photos. The advantages of OLED are modest here for most photographers, especially as much of your audience is unlikely to be viewing your work on OLED in dark conditions that would allow them to see it. Video may benefit more, as that content is more often viewed in a dark environment.

 

OLED also offers very fast pixel response times. This primarily matters for gamers. Support for 120Hz is beneficial for photographers to see smooth results when panning and zooming images, as well as scrolling text for general use. If you can only achieve 60-90Hz, that’s ok, but 120Hz will be appreciated by most users. Gamers will often appreciate much higher refresh rates, but photography won’t benefit from exceeding 120Hz. Nearly all OLEDs will hit this target, but mini-LED varies.

Today’s best mini-LED tend to be the optimal choice for most photographers. They can be used in brighter rooms, achieve higher HDR headroom, and will retain brightness more accurately even in images with more HDR pixels. However, OLED is an excellent choice, especially if you work under controlled lighting.

The PA32UCDM fits these patterns:

  • It offers excellent image quality, but is best appreciated in a room where you can control the ambient light.
    • If you need to raise it to maximum brightness in your room, you’ll have 2 stops of HDR headroom.
    • When your screen contains a lot of bright content (even just the white background of this web page), you will likely see HDR images dimmed by the ABL.
    • If you want to achieve 4 stops of headroom and consistent rendering of HDR, you should look to the ASUS PA32UCXR mini-LED (review).
  • The clarity of shadow detail is incredible.
    • You can clearly read text set to a grey value of 0.0001 in 32-bit Photoshop (this is the lowest level you can directly set in Photoshop colors).
    • On the Pro Display XDR, the text is very legible at values 10x brighter, yet shows significant blooming. The ASUS mini-LED (PA32UCXR) does a clearly better job, but this OLED clearly outperforms both. It’s not even close in the most extreme dark shadows.
    • Like any OLED, the black shadow detail is much better than mini-LED. This will allow you to appreciate extremely deep shadow detail on your display, but it is a modest benefit for content creation, as much of your audience will either lack OLED or view it in surroundings too bright to appreciate the same detail.

On the whole, the image quality is amazing, but mini-LED still has an edge in monitors. OLED TVs have gotten much brighter in 2025, and hopefully OLED monitors follow suit over the next two or three years and close much of that gap.

 

How is the hardware quality?

The PA32UCDM has a nice metal exterior and stand. It looks very attractive. It also feels like a product built to last, though I cannot evaluate quality by testing a single unit for a week. The stand feels solid and offers smooth and precise control over height and tilt (there is no swivel, but you can easily just turn the whole stand to achieve the same result). The ports on the back are clearly marked and much easier to read than on previous ASUS models. The controls for the on screen display are easy to find and reach from the front.

Connection options are good. It has 96W charging and passthrough Thunderbolt 4, so you can connect the display and a downstream hub via a single cable. It also has one downstream USB-C port and one USB-A port, which might meet your needs without a dock. For video input, I recommend using the Thunderbolt input, but it also supports HDMI 2.1.

This monitor does have a fan. It is fairly quiet, but definitely audible. Using the Decibel X app on iPhone, I measure 26 dB sound in front of the monitor when turned on and 23 dB when the monitor is off. I hear it if the room is otherwise dead quiet, but cannot hear it at all if I turn on music on my laptop at even a relatively low volume. The fan will often run continuously once the monitor is warm.

The sound quality of the speakers is very basic (like most monitors). The max volume is modest and the sound is limited, but it does offer useful sound. Most people will probably prefer to use external speakers or their laptop audio.

There is no web cam built into the monitor.

 

What could be better?

If you want greater maximum headroom, a brighter display for a bright room, or 6k resolution – that’s another class of premium monitor and you should expect to pay more at this point. The PA32UCDM hardware performs very well for its price point. Some may be concerned that this model does not include a colorimeter, though I tend to prefer an external colorimeter and you may be able to borrow one or use it with multiple displays (such as your laptop or another future monitor). I believe the overall product offers good value and have no major concerns with the hardware.

However, that doesn’t mean there aren’t areas where ASUS could and should do better. In particular, I find that ease of use could be significantly improved, along with customer service.

There are several areas where the menus, software, or documentation could be easily improved to make this product easier to use:

  • Bad HDR defaults:
    • This monitor is promoted for HDR use (1000 nits) and should default to offering a good HDR experience.
    • However, the default setup caps HDR brightness to 250 nits, which is a very poor HDR experience. The EDID firmware bug at this setting then makes the problem significantly worse, as the computer reports full HDR support and content simply clips.
  • Firmware bugs:
    • The EDID (“Extended Display Identification Data”) appears to self-report a 1015 nits limit when the HDR display brightness limit is set to 250 nits (per Advanced Display info in Windows, as well as obvious clipping in MacOS at this limit). This is a serious issue as it causes significant clipping in the default mode (your web browser will not properly tone map HDR video and photos in this scenario). Thankfully, this won’t affect you when you configure the display per my recommendations. The higher limits are fine (when set to 400 nits, it reports 445 nits, and max reports 1015 nits).
  • Calibration software:
    • There is a very long list of colorimeters which show as supported under Windows but are not available under MacOS (only 3 listed). Perhaps this is due to a lack of MacOS drivers from the vendors (though that seems unlikely), or to the latest Windows ASUS software being 2 months newer than the MacOS version (v4.1.6.4 vs 4.1.7.3).
    • The lack of clear error messages in the ASUS software should be addressed ASAP. You should be in great shape if you follow my guidance below, but I believe many people would be quite frustrated without better guidance and this almost certainly increases ASUS costs for technical support and returns. Here are a few examples where the ASUS calibration software could be improved:
      • There are several settings which the user is required to set in the monitor. This should be completely automated by ASUS during calibration.
      • Failure to turn off HDR mode (which is not intuitive) results in “error 4156”. A clear description of the most likely steps to resolve this common issue should be shown.
      • If the colorimeter is not pointing at the screen without the diffuser, you see “error 1038”. A clear description of the most likely steps to resolve this common issue should be shown.
    • The ASUS calibration software does not auto-detect the connected colorimeter, or even remember the last option you used.
    • The monitor should be able to self report how long it has been on so that you can skip the warm up screen whenever possible.
    • The ability to create a custom name for the user modes is great, but the names are hard to read as you cannot use underscore, dot, or space (all of which are used in the included modes)
    • On the last screen before you start calibration, the middle grey background makes it very hard to see your mouse so you can click the button to start. A black background would be much easier.
    • The color calibration tab (where you start calibration) does not show the date/time a given mode was last calibrated, or if it was ever calibrated at all. You can infer this from the history, but it is cumbersome.
    • When running multiple calibrations at once, the final screen shows only one result rather than acknowledging it ran multiple (you can find these in history, it just isn’t intuitive and it is unclear if clicking “apply” will program any failed calibrations).
    • Calibration targets in K are not clear to those who know that D65 is more precise than “6500K”. This could at least be noted in the written manual.
    • The “embedded calibrator calibration” is neither grayed out nor warns you if you click on it with a monitor like this which does not have the feature. It’s confusing if you don’t realize this only applies to a product you don’t own.
    • If the calibration is highly successful (achieves <1 deltaE), there should just be an option to apply it automatically. As it stands, the monitor stays fully lit until you accept the result and then you wait for it to apply; if you let it run unattended, it would be faster if this were already done by the time you return.
    • The MacOS color profile was incorrectly switched to sRGB after calibration (this may be a bug in MacOS itself, unclear).
  • Display Widget:
    • The concept of the display widget (to let you control the monitor from the operating system) is great, but the implementation is too incomplete for it to be useful at this time.
    • There is no MacOS app.
    • The Windows app is insufficient for setting up the monitor.
      • The app supports importing settings and I would love to give you a file to set everything up as below, but Display Widget simply lacks key options.
      • There is no option to control “Uniform Brightness” or the brightness limit for HDR to properly enable HDR. You cannot control the HDR preview (required for calibration, though ideally the calibration app would take care of this for you).
    • The options are laid out differently in this app vs the monitor and it creates confusion, as does some poor naming.
      • For example, the app toggle under “system settings” for “Display HDR” seems to suggest HDR will be disabled when laptop power is low. The naming is a little vague and it is not clear that this is a setting which is only possible when driven from the computer (the monitor lacks this option as it does not know battery state or have control of the operating system HDR mode).
      • Further down the same display is an option for “Power Saving”. This is an option on the monitor, but mixing it with other controls that are unique to the app creates confusion and ambiguity.
      • Ideally, the software would list options controllable on the monitor up top and then clearly break out options which are specific to the Display Widget (such as “HotKey” and “App Tweaker” sections).
      • Custom user modes do not show any custom name you set in the calibration software.
    • The active mode can be confusing. There are so many modes that none of the default HDR modes will show in this app until you click to the right. Either shrinking the UI to show all or switching to a dropdown would be much more intuitive.
  • Written manual:
    • The written manual does not describe screen saver options clearly (it does not mention, next to the description of the options, that the Proximity Sensor is disabled in HDR mode; what happens under the three levels of Panel Protection and Image Protection? What is “ISP”?)
  • Ease of use in monitor settings:
    • When you use an Apple monitor, things are incredibly easy. It’s nearly impossible to get it wrong. You do not have to think about it. The ASUS setup (like that of nearly all monitors other than Apple’s) is very confusing. I often have questions, and I have much more experience with this than the average photographer.
    • I would love to see ASUS eliminate or hide options which are not high value.
      • For example, the presets should probably include just P3 (100 nits SDR) and P3 PQ. That’s all the typical user needs. Anyone who needs more can easily achieve it with custom modes, which are probably better anyhow (to achieve the ideal SDR brightness for either reference viewing or to suit their environment).
      • Why is Dolby Vision a setting, rather than just being used automatically when it is present in the input?
    • It would be ideal if the HDR display options were shown at the top of the list of presets, given this is a monitor clearly built for HDR.

Ultimately, the issues above (other than limited colorimeter support on MacOS) should be no problem if you read this whole post and follow the guidance below. But the changes above would be helpful for all, and are probably critical to a good HDR experience for many who may not read the guidance below. Hopefully the MacOS colorimeter list is expanded with a software update in the near future.

As a side note, the Display Widget software (currently Windows only) from ASUS is a step in the right direction for ease of use. The level of simplicity for HDR on MacOS is so far ahead of anything I’ve seen under Windows. With a Pro Display XDR, you plug it in and it works with zero setup (even calibration is completely optional). Much of what makes that possible is Apple’s tight integration between the operating system and software (and unsurprisingly you can’t use that same monitor under Windows). Microsoft would do well to consider an open standard for display makers to better integrate into Windows (something offering great performance by default and an ability to control the monitor settings entirely in the operating system).

While I have not needed to contact ASUS technical support for any questions related to this monitor, I did last year when reviewing an older model and I found the support experience rather frustrating. The phone support experience was particularly frustrating for me. The phone system randomly disconnected on multiple calls and the call quality often had audible crackling and low quality that made conversations more difficult. The email support was higher in quality, though responses typically took 24 hours and there were several times when the response did not sound like the person had bothered to read my concern carefully and/or lacked sufficient expertise to resolve the issue. I don’t expect you’ll need technical support if you follow the guidance on this page carefully, but if you do, use the email option and be patient. Sadly, experiences like this have been fairly common for me with many similar electronics companies (Apple being a notable exception where technical support is usually fast and high quality).

Side note: MacOS System Info reports 5120×2880 resolution (but does not offer it as a choice in System Settings / Display). This monitor has 3840 x 2160 resolution. I suspect this error is a MacOS bug (rather than another EDID bug) as I don’t see this 5K resolution when connected to a Windows computer.

 

How to configure it for HDR photography:

Here are the most important things to know for setup:

  • Use the 96W USB-C (Thunderbolt 4) connection to connect the computer.
    • The other USB ports are for downstream connections like a dock or keyboard.
    • You may use the HDMI port and will get the same image quality, but you’re going to need a USB connection to do calibration or use the optional Display Widget software.
  • Turn on the monitor with a long press on the round button on the back bottom of the display near the middle (just right of the joystick control when you’re facing the front of the monitor and can’t see the back)
  • If the monitor does not offer auto-detect, manually select the Thunderbolt input
  • Once connected, you must enable HDR mode in System Settings / Display
    • This is written out as “high dynamic range” in MacOS
    • You will likely need to reduce from the peak refresh rate to enable HDR.
    • If you cannot enable HDR mode, reduce the refresh rate.
      • On MacOS under Thunderbolt 4, I found I could use HDR mode at up to 240Hz with “larger text” (1920×1080) scaling, but the maximum allowed dropped to 60Hz at the middle scaling option and stayed there up to the maximum 4K scaling. I find the 2K scaling (2560×1440) looks best and supports 100Hz. Note that this is just scaling of the MacOS user interface; your images are always displayed at the maximum resolution with no interpolation at any multiple of 100% zoom in LR, PS, etc.
      • On MacOS over HDMI 2.1, I see greater support. This is odd, as Thunderbolt 4 should support higher bandwidth and I’m using a certified cable. Under HDMI, I was able to get 120Hz at 4K HDR, 100 Hz at 2560×1440, and the full 240 Hz HDR at 1080p scaling. I find that pattern odd (lowest refresh rate support at middle resolutions), unless somehow the M4 Max cannot internally handle its UI scaling in that scenario (and this does not improve if I only drive the ASUS display with no others active). I have reported this as a potential Apple bug via Feedback Assistant.
      • On Windows (over Thunderbolt), there is no option for 240Hz, but you can use 120Hz all the way up to 4K (3840×2160).
      • On Windows over HDMI 2.1, I still had 120Hz HDR support, but the color bit depth drops from 10-bit to 8-bit. So that’s one more reason to prefer the TB connection.
      • I am unclear whether these differences in supported frame rate are an issue caused by the operating system or the monitor. The ability to select 120Hz with 4K scaling on MacOS would be nice for scrolling text. This is a surprising result given that I’m testing with a powerful M4 MacBook Pro, and my cheap Windows laptop achieves 120Hz at the same scaling.
      • I have no particular concern about the 120Hz limit I see on Windows. The lack of 240Hz support on Windows is quite possibly related to my testing with a relatively cheap PC laptop that may not be capable of driving the display to its limits.
    • If you cannot enable the toggle in Windows, make sure you are not using the monitor in a mirrored configuration (extended only) and check that your refresh rate is not set too high.
    • The monitor should automatically switch to an HDR mode, but you can check that HDR_PQ DCI is set as noted in the setting section below.
    • When properly connected, you should see headroom >0 in my test #1 (MacOS users can also try the scripted headroom check shown just after this list).
  • System Settings / Display should show the color profile as “PA32UCDM“. That’s likely the case at first, but I found that MacOS was reset to sRGB during the calibration process, resulting in over-saturated colors after calibration. Simply putting it back to the correct value then showed correct color on the monitor.
  • Once you’ve enabled HDR, you should set the SDR brightness slider in System Settings.
    • MacOS shows a brightness slider in the display settings when HDR mode is enabled.
    • Windows has an “SDR content brightness” slider hidden under the > icon on the far right of the HDR mode toggle.
    • Set the SDR brightness at a level which would be comfortable for reading text in a browser. Do not try to set it darker to improve headroom; that will induce eye strain and result in editing images which are too bright.
    • If you are measuring the SDR white level (“diffuse white”), it should be set to 80-120 nits under controlled lighting (note that 1 nit is the same as 1 cd/m^2). If you need to set it brighter, your room is too bright and you should dim lights or use window shades. This will help ensure both better prints and HDR.
  • At default settings, the monitor will unfortunately limit the display to 250 nits (shown as “brightness” at the top right of the on screen display).
    • Combined with the bug noted below (the EDID falsely reports 1015 nits support in this scenario), this means HDR content will clip very badly, because software like browsers will not be able to tone map to the correct limit. You’ll also see the Lightroom histogram and my headroom test incorrectly report 2 stops more headroom than you actually have. So this is a serious problem at default settings, but easily avoided with the correct setup.
    • The solution is to use the monitor settings below to raise the brightness limit to “MAX” (1000 nits). But you can only choose this when Settings > Uniform Brightness is turned off.
    • (Note: each time you change brightness, the monitor will briefly go black and exit the OSD; you’ll need to wait for the “HDR mode” notice to clear before you can go back in to step from 250 to 400 to 1000. This is tedious, but the only way to do it.)
  • DO NOT use any custom ICC profiling tool such as Spyder / X-Rite at this time.
    • If you use a typical profiling approach, you will cause all HDR content to clip to SDR. There is no HDR standard for custom ICC profiles.
    • If you have done this, you may see that my test #1 reports you have HDR headroom, but the content looks clipped.
    • If you’ve already done this, revert to the factory profile
      • in MacOS go to ColorSync Utility / Devices tab / Display, then select your display and click ⌄ by “current profile” and choose “set to factory”.
  • Other watchouts (for Windows, these generally aren’t issues on MacOS)
    • Do not use the Windows HDR calibration utility, it will only cause problems.
    • If your computer comes with 3rd-party software which affects the display, you may need to uninstall it. In rare cases, I’ve seen such software cause conflicts.
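As a quick way to verify the result of this setup on MacOS, you can ask the operating system how much HDR headroom it is currently granting the display. Below is a minimal Python sketch; it assumes you have pyobjc installed. SDR white is reported as 1.0, so headroom in stops is log2 of the EDR value, and you can watch it change as you move the SDR brightness slider.

```python
import math
from AppKit import NSScreen  # requires pyobjc (pip install pyobjc-framework-Cocoa)

for screen in NSScreen.screens():
    name = screen.localizedName()
    edr = screen.maximumExtendedDynamicRangeColorComponentValue()
    if edr > 1.0:
        print(f"{name}: EDR {edr:.2f}x = {math.log2(edr):.2f} stops of HDR headroom")
    else:
        print(f"{name}: no HDR headroom available right now")
```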

 

I recommend the following settings in the monitor:

  • If you’ve customized and are unsure of the defaults, you may reset to factory settings via Settings > All Reset. This will clear any calibration as well.
  • Preset tab
    • Select HDR_PQ DCI.
    • Choose PQ Clip for best accuracy. You may use PQ Optimized for some highlight rolloff (slightly less accurate, but allows you to see more highlight detail). Skip the Basic option (low accuracy).
    • While the HDR_PQ BT2020 may sound appealing for wider gamut, it has been my experience that accuracy is reduced. You can edit in Adobe software with ProPhoto or Rec2020 to retain this color and be ready for it in the future, but I would not try to display it yet on consumer monitors. You will make much better editing decisions with accurate color than a slightly wider gamut.
    • Do not use the HLG or HDR_DolbyVision options unless you are using an unusual source such as a DVD player or your camera over HDMI. The PQ options are best for a computer.
    • The User modes are available for advanced users who wish to create additional custom setups.
  • Palette
    • Brightness must be set to MAX. [ VERY IMPORTANT ]
      • This is a very confusing label. This is not the SDR brightness (as you would typically control in an SDR mode on the monitor); it is really the peak brightness allowed. So when you set it to “MAX”, what you are really doing is setting the maximum to the maximum (i.e. setting the HDR clipping limit to the full 1000 nits).
      • However, you must first turn off Uniform Brightness in the Settings tab, or you will be unable to set the brightness above 250 nits.
      • If you do not increase brightness to “MAX”, the monitor will show significant clipping in HDR mode (2 stops of clipping as the default will limit output to 250 nits without any tone mapping)
  • Settings tab:
    • enable HDR preview (this is required for ASUS calibration).
    • Uniform Brightness must be disabled or you will be unable to use HDR without seeing significant clipping. [ VERY IMPORTANT ]
    • Update the language as necessary.
    • It would be best to disable the sound, you’ll get better results from external speakers or your laptop.
    • It is best to turn off the Light Sync options and focus instead on ensuring your ambient light is fixed if possible. Eliminating variability will help you achieve the most consistent results for editing photos.
    • You may update the OSD (on screen display) timeout if you wish to change how long it stays on screen after using the monitor buttons.
    • You may refine the proximity sensor settings if needed for your use. This feature dims the display if it fails to detect someone near the monitor (to help improve lifespan of the monitor).
    • Screen Saver
      • All of these options are used to control features meant to help prevent burn-in with an OLED monitor. They are not well documented, but what is clear is that your options here will not limit peak brightness or HDR display immediately. They will cause dimming over time or occasional display blackouts.
      • Proximity Sensor:
        • This option is disabled in HDR mode, but fine to enable for SDR mode if you won’t be viewing from far away.
      • Panel Protection:
        • This includes occasional tricks to help avoid monitor burn in, including dimming over time if there is no change on screen. This dimming is much slower than image protection and ok to use.
        • If you have 100 nits SDR white and panel protection is “max”, you will see no dimming until the display has been static for 50s. At that point, it will dim slowly and continuously until it reaches 5 nits at about 2 min 50s.
        • If you have 100 nits SDR white and panel protection is in the middle position, you will see no dimming until the display has been static for about 1 min 50s. At that point, it will dim slowly and continuously until it reaches 25 nits at about 6 min.
        • The “off sensing” is always enabled and runs a 6 minute blackout period, which won’t be nice in the middle of your work. Min runs “off sensing” every 12 hours, the middle is 8, and the top/max position will be disruptive with blackouts every 4 hours.
        • I would pick the middle setting to allow dimming if you walk away, but without monitor blackouts every 4 hours.
      • Image Protection:
        • It is meant to help avoid potential burn-in from things like a continuously visible Windows task bar.
        • I cannot comment on the long term risk of burn in here, but I would personally disable image protection.
          • It will darken the display and therefore reduce accuracy.
          • If you adjust your SDR brightness in HDR mode to 100 nits and then switch from image protection off to max, the diffuse white will drop to 80 nits.
          • OLED longevity has improved significantly in recent years and I would be comfortable taking the risk. The tests I see generally indicate you need to abuse the display pretty heavily with continuous bright content. I would consider any risk here an investment in my art. In other words, I would prefer to take what I would consider a small risk that might replace the monitor in five years (when there are even better and cheaper options) than try to create great content on a dimmer and less accurate display.
          • You can take steps to minimize static content, including letting your task bar / dock auto-hide, and using a darker theme in Photoshop.
  • Avoid changing anything else
    • There is no need to alter anything in the Image, PIP, or QuickFit tabs.
    • In HDR mode, brightness is controlled in the operating system Display Settings (not on the monitor; see key setup above).

 

Once you’ve done the above setup, you may then proceed to use the ASUS calibration software to improve the accuracy of the display. You only need to calibrate the HDR mode you are using. There should generally be no need to use other modes or disable HDR. If you wish to optimize an SDR mode for print work, you might wish to calibrate that as well and set the custom buttons on the monitor to help toggle between the two modes (though ideally the monitor will just switch between your last SDR and HDR mode as you toggle in the operating system).

 

Calibration process:

  1. Install and launch the ASUS Calibration software
  2. Check that the monitor is set up as noted in the prompt in the calibration software:
    1. HDR mode in MacOS / Windows must be off
    2. Monitor Settings / HDR Preview must be on (this seems to be how the monitor converts the SDR signal to HDR during calibration)
    3. You can ignore the “ambient effect” setting, as it does not apply to this monitor from ASUS
    4. Make sure “true tone” and “night shift” are off in MacOS, as these will significantly distort the calibration
  3. Connect a supported colorimeter to one of the downstream USB ports on the monitor. The following colorimeters are supported on both MacOS and Windows (be sure to scroll down, may not be obvious there are more than 3 choices):
    1. Calibrite Display Plus HL (can handle up to 10,000 nits displays)
    2. Calibrite Display Pro HL (rated to 3000 nits, which is probably plenty; it will be a long time before you need 4000+ in a monitor, though some TVs are there now).
    3. Calibrite ColorChecker Display Pro
    4. Calibrite ColorChecker Display Plus
    5. Datacolor Spyder X, Datacolor Spyder X2 Elite, Datacolor Spyder 5, Datacolor Spyder X2 Ultra, X-Rite i1 Display Pro, X-Rite i1 Display Pro Plus
    6. spectrophotometers (most accurate, ridiculously expensive): CR1-100, CR-250, Klein K-10, X-Rite i1 Display i1 Pro 2, X-Rite i1 Display i1 Pro 3
  4. Check the modes you wish to calibrate, that should include at least HDR_PQ DCI
  5. Click “start calibration” and follow the prompts. If you see an error, check the steps above and the troubleshooting info below.
  6. If you created a custom user mode, go to the “device” tab and click the icon right of the name to set your own custom name (this will show in the on screen display of the monitor)
Note on an issue I found. After the ASUS calibration software completed, the MacOS display settings showed the monitor’s profile had been updated to sRGB (rather than the factory PA32UCDM option). As a result, the post calibration results were strongly over-saturated until I found that issue and corrected it. It is unclear if that is a bug in the ASUS calibration software or MacOS itself, but you should check to make sure this isn’t an issue after you complete calibration.

Here are some helpful links:

  • User Manual
  • ASUS ProArt Calibration software and drivers
    • Choose your operating system, and download the calibration software (Windows users might wish to install the driver, no need on MacOS)
    • Click the Bios and firmware tab to check for any updates (see the version installed on your monitor via Settings > Information).
      • If you need to update, be sure to follow the PDF instructions in the download very carefully (guaranteed pain if you don’t pay attention).
      • Be sure to format a memory stick with FAT32, copy the bin to it, and insert the memory stick in the correct port on the monitor.
    • You may wish to click “see all downloads” at the bottom of the list, but there’s probably nothing you need there.
  • ASUS Display Widget lets you control many aspects of the monitor directly through the computer for convenience (currently only available for Windows).
  • ASUS support article for calibration.
  • Tech support. Note that it has been my experience that the phone system is flaky (poor connection quality). While online support is slower, I’ve had better results. If you follow the steps I’ve outlined here, you shouldn’t need it.

 

Troubleshooting:

  • Cannot select HDR mode: make sure the refresh rate is not set too high
  • You are able to pass test #1 to confirm you have HDR headroom, but the display looks clipped:
    • make sure you set the monitor as noted above (brightness must be “MAX”, which is only an option after you turn off “Uniform Brightness”).
    • do not use any custom ICC profile. This will break HDR. Switch back to the factory profile. You should only use the ASUS software (or CalMAN / ColourSpace) to calibrate in the monitor itself.
  • If you fail test #1 (0 headroom) and/or see zero headroom in Lightroom but have HDR mode enabled in Windows:
    • I saw this immediately after calibrating. The overall display looked bright and clipped in this mode.
    • Simply toggling HDR mode in Windows system settings and the monitor did not help. But after I toggled HDR support on the internal laptop display and subsequently toggled it for the monitor (both in Windows settings), then it worked properly. I don’t know if it is a bug in the monitor or Windows, but I suspect it’s a Windows issue (given the involvement of other controls in the OS and no similar issues under MacOS).
  • Color looks over-saturated after calibration:
    • There appears to be a bug in either MacOS or the ASUS software which may set the monitor profile to sRGB after calibration. Just go to System Settings / Display and change the profile back to PA32UCDM.
  • Display looks wildly incorrect:
    • Make sure both the computer and display are in HDR mode (or both in SDR mode if that is your intent).
  • An error occurred during calibration (may show as error 4156):
    • Make sure HDR mode is OFF in System Settings / Display in the computer. This is true even when calibrating for HDR.
    • Make sure Setup / HDR Preview is ON in the monitor settings (the calibration software appears to send a known SDR signal for the monitor to display as HDR during calibration).
    • Make sure night shift and true tone are off (may not error, but will certainly invalidate the results).
  • Calibration quits with an error:
    • Make sure MacOS / Windows HDR mode is off, even when calibrating an HDR mode (may show as error 4156)
    • Make sure the colorimeter is right over the target and the sensor is facing the screen with the diffuser cover out of the way (may show as error 1038)
  • If the display starts cycling full screen blue, red, white, green, black (in that order, over and over):
    • I ran into this by trying to connect both the TB4 and HDMI cables, also while using the ASUS calibration software.
    • This is clearly an edge case, but one you should avoid as it makes the monitor appear like a broken product (does not respond to the power button, you have to physically unplug and replug it).
    • I also found that MacOS showed two monitors (which is expected), but then would keep showing one even after both were disconnected. I had to reboot MacOS.
    • So my sense is that both Apple and ASUS may show a bug when you connect both inputs from a Mac to the monitor.
    • Avoid the issue by only connecting one cable or the other. If you prefer HDMI, you may consider using a dock instead of trying to use the TB4 for downstream use (including the colorimeter, which should connect to the computer instead here).
    • If you run into this, power cycle the monitor and reboot the computer and all should be fine. It’s just a strange use case probably not fully considered / tested by the developers at one or both companies.
  • See my main HDR troubleshooting page for additional things to check.

 

I’d like to extend a special thanks to B&H Photo for helping to facilitate access to a unit for testing for this review.

Which file formats to use for photography?

Choosing the right file format for photography is critical if you want high quality, but it’s also very confusing. There are a number of file formats, file details like color space and bit depth, hidden details, HDR considerations, and an endless number of considerations for the software or hardware others might use to view your images. In this tutorial, you’ll learn the best options to ensure your work looks great and is easy to share.

This topic is so deep that no one person really understands everything under all possible scenarios, so it probably helps to understand my background and how I’ve reached the conclusions below. I’ve been working with digital photography, digital printing, the web, and Adobe software for over 25 years. I teach photography to a wide range of people using a large variety of software, hardware, and web services to share their work. And as a software developer, I have spent years testing and optimizing image exports for the web, including encoding custom gain maps to optimize images for the new HDR display technology. I’ve processed and shared thousands of images, learned from countless mistakes, and received feedback from a large number of photographers who care deeply about the quality of their work. But I have not done rigorous, scientific study of all these topics (that would be a full time job on its own).

The discussion below will cover many critical details, but let’s start with a high-level overview of the most important information.

TLDR: Shoot RAW, edit PSB in a wide gamut, and share on the web as JPG (with a gain map if HDR). In about a year, AVIF should replace your use of JPG.

I recommend photographers use the following file formats and settings:

  • Capture RAW images with the highest bit depth available.
    • (exception: if you do not intend to process your images, capturing in a lower quality format like JPG or HEIF may be ideal to support faster workflows)
  • Save your layered working files as PSB
    • Use 16 or 32-bit (never 8-bit)
    • Use a wide gamut colorspace (Rec2020 is ideal, but P3, Adobe RGB, and ProPhoto RGB are all great)
    • TIF or PSD are fine and may be better supported outside Photoshop/Lightroom, but PSB is preferable because the file size is effectively unlimited, which is increasingly important with modern cameras and non-destructive workflows.
  • JXL is a great format for sharing directly with other professionals (assuming they use supporting software, such as LR / ACR).
    • JXL would be an ideal way to send images to your lab for print if they support it (higher quality than JPG, much smaller file size than TIF, and no size limit like AVIF).
  • Sharing on the web:
    • Sharing your standard (SDR) images on the web today depends on the service you use:
      • JPG with sRGB is safe to use anywhere
      • JPG with P3 is ideal, if you check to confirm that the color profile will not be stripped when your image is uploaded (which would cause your image to look desaturated)
    • Share HDR images on the web using JPG with a gain map
      • HDR images lacking a gain map will render inconsistently and often at very low quality
      • JPG with a gain map is currently the only HDR format which is 100% safe on all browsers
    • In the near future, AVIF will be an even better option than JPG
      • AVIF offers higher quality and smaller sizes than JPG, as well as support for even wider gamuts (Rec2020).
      • AVIF should be a good option by early 2026 (it is already very well supported by all modern browsers for SDR, most support HDR AVIF, and gain map support is nearly ready).
    • File formats you should not share on the web at this time:
      • Do not share any HDR image which does not include a gain map; this will result in a significant loss of quality that you may not notice on your own display (but it will affect many people with displays less capable than yours). See “Great HDR requires a gain map” for more info. I have seen numerous people make this mistake trying to share AVIF or JXL to Instagram – you should avoid those formats until there is proper support for uploading them with gain maps (you can upload them now on some devices, but your gain map will be discarded and the image will degrade significantly).
      • JXL is only supported by 14% of web browsers (even if Chrome and FireFox added support it would take years for adoption rates to get into the high 90s). This is unfortunate because as great as AVIF is, JXL has some advantages including: lossless re-compression of JPG images on existing websites, faster encoding speed, and better support for progressive rendering over slow internet connections.
      • HEIF (aka HEIC) is only supported by 14% of browsers, and this is not likely to grow significantly.
      • I would skip webp. AVIF has nearly as much support (94% vs 97%), but offers higher image quality (10-bit), support for HDR (webp cannot support gain maps), and smaller file sizes. The primary benefit of webp over AVIF is that it is faster to encode, so it may appeal to some large-scale content providers.
      • You can arguably skip PNG now. AVIF has nearly as much support (94% vs 97%), with transparency and nearly lossless quality at much smaller file sizes. However, PNG offers true lossless compression and has slightly higher browser support today.
  • The best format for sharing on your own personal devices is generally the same as what you’d use on the web
    • Using the same format ensures you can easily share from your phone/tablet
    • For HDR, iOS and Android have great support for JPG gain maps (you should not share AVIF / JXL yet; they generally will not adapt optimally when headroom is limited, though they would use less space on your device).

Before we get into the specifics of the various file formats, let’s dive into a few technical details which apply across the various formats.

 

Bit depth

Image data gets stored as bits (1s and 0s). Bit depth describes the number of bits used for each red, green, and blue value in your pixels. With more bits, you can properly encode a greater number of colors or shades of grey. If you don’t have enough bits, then there may be a visible jump from one value to the next (this rounding error is known as “quantization error”). Gradients (such as blue skies) require subtlety and are where you will most likely see problems (banding).

When you use more bits, the data is more accurate but the file size is larger. So the ideal bit depth is the point where adding more does not improve the perceived quality of the image. There are a number of factors which may affect the requirements, but they all come down to how much you might stretch the data, as that makes the jumps bigger. So you may need more bits to protect for further editing, for HDR (which has a much wider range of possible luminance), or very wide gamuts (though not likely).
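To see why stretching the data makes quantization visible, here is a short Python sketch (using numpy) that quantizes a smooth dark gradient at several bit depths and counts how many distinct levels remain after a strong exposure push; the exact numbers are just illustrative:

```python
import numpy as np

def quantize(values: np.ndarray, bits: int) -> np.ndarray:
    """Round a 0-1 gradient to the nearest code value at a given bit depth."""
    levels = 2 ** bits - 1
    return np.round(values * levels) / levels

# A smooth dark gradient occupying the bottom 10% of the tonal range,
# then pushed 3 stops (8x) brighter in editing.
gradient = np.linspace(0, 0.1, 10_000)
for bits in (8, 10, 16):
    pushed = np.clip(quantize(gradient, bits) * 8, 0, 1)
    print(f"{bits:>2}-bit: {len(np.unique(pushed))} distinct levels after a 3-stop push")
# 8-bit leaves only ~26 steps across that range (visible banding);
# 16-bit leaves thousands (smooth).
```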

As a general rule, you need:

  • Final content (for sharing, no further editing)
    • 8 bits for SDR (“standard dynamic range”) content, though 10-bits can be useful in some rare cases with smooth gradients.
    • 10 bits for HDR (“high dynamic range”) content, though 12-bits can be useful in some rare cases (and you can get away with 8-bits in many cases for images lacking gradients – I’ve seen some great JPGs encoded as a simple 8-bit image with no gain map).
  • Working files (when further editing may be required)
    • 16-bits for SDR working files
    • 32-bits for HDR working files (though you could get away with 16-bit in many cases)

For examples and more information, see “8, 12, 14 vs 16-bit depth: What do you really need?“.

 

HDR Gain maps

Modern displays offer HDR (“high dynamic range”). Support is already the norm for Instagram and Threads, and in time will be widespread. When these images are viewed on less capable displays, they must be adapted. That can occur in one of two ways: automatically (tone mapping) or with your input as the artist (using a gain map). Exporting your HDR image with a gain map will significantly improve image quality on any display less capable than yours. Never share your image without a gain map unless you place a much higher priority on image size over image quality.

There is a lot of confusion around gain maps. They are often misperceived as some sort of “hack” to allow HDR support in JPG (which only supports 8-bits). It is true that they allow JPG to show HDR without banding (a JPG with a gain map is actually two 8-bit images, which is not the same as 16-bit quality – but it is much higher quality than just 8-bits and looks great for HDR). But the most important benefit of gain maps is that they allow proper adaptation of HDR to less capable displays, and this applies to any file format. Even higher bit depth formats like AVIF or JXL need a gain map to ensure high quality when sharing HDR.

Note that when saving HDR images in Adobe Lightroom / Camera RAW, the “maximize compatibility” checkbox is how you request a gain map. I believe this name is a little misleading, as HDR images without a gain map can still be viewed on SDR displays (they are compatible), just at much lower quality. A more accurate name for this checkbox would be “ensure high quality on SDR / limited-HDR displays”, but that’s a mouthful.

Gain maps are well supported now for JPG, and there is limited support already for gain maps for AVIF, JXL, and TIF. Support is coming for most common formats which are capable of encoding a secondary image (including PNG – but I don’t believe webp is possible or likely).
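
For those who want to see the concept in code, here is a heavily simplified sketch of the idea (Python; my own illustration, not the actual Adobe / ISO gain map math, which adds offsets, min/max content boost, and optional per-channel maps). The map records how many stops brighter each pixel is in the HDR edit, and the renderer applies only as much of that boost as the display’s headroom allows:

    import numpy as np

    def render_with_gain_map(sdr_linear, gain_stops, display_headroom_stops, map_max_stops=3.0):
        # Apply only the fraction of the HDR boost that this display can actually show
        weight = np.clip(display_headroom_stops / map_max_stops, 0.0, 1.0)
        return sdr_linear * (2.0 ** (gain_stops * weight))

    sdr_pixel = 0.6    # the artist's SDR base value for this pixel (linear, 0-1)
    gain = 2.5         # this pixel is 2.5 stops brighter in the HDR edit
    for headroom in (0.0, 1.0, 3.0):
        print(headroom, "stops of headroom ->", round(float(render_with_gain_map(sdr_pixel, gain, headroom)), 3))

    # 0 stops (an SDR display) -> exactly the artist's SDR base image.
    # 3+ stops -> the full HDR edit. Anything in between lands proportionally.

This is why one file can look right everywhere: an SDR display gets exactly the SDR base the artist created, and more capable displays simply add back more of the boost.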

 

Color space (primaries, aka “gamut”)

Whereas bit depth affects the precision of color, the color space affects the range of color (ie “gamut”). These maximum red, green, and blue values are technically known as “primaries”. A wider gamut allows us to show more vibrant colors. The benefit of a wider gamut isn’t just being more colorful, it also helps to avoid clipping color gradients. For example, the details of a red rose petal may be lost without a wide gamut, or a colorful light shining on a wall may show a strange artifact where the color clips at the limit of the gamut.

There are many factors to consider in choosing a colorspace. Ideally, the gamut is wide enough to avoid clipping any color in your image. Colors found in nature would generally be contained within “Pointer’s gamut” (see here for comparisons to common colorspaces). ProPhoto includes all of Pointer’s gamut, but also many values outside the range of human vision. Rec2020 is an excellent match for Pointer’s gamut (technically missing a tiny bit of blue, but you won’t find a monitor that covers it anyway). AdobeRGB does a decent job, but lacks some colors supported on monitors. P3 does a decent job, but lacks some colors supported by printers and by monitors which cover more of the Rec2020 gamut.

sRGB is a very limited colorspace based on the limits of monitors from 30 years ago. However, it remains relevant for one annoying reason. Many websites strip the color information from your photo when you upload it (to save a few bytes). When this happens, anything other than sRGB encoding will show up as a desaturated image. This is unfortunate, as wide gamut is safe to use in all browsers and supported by most monitors. This practice of stripping color should hopefully go away as websites move to newer file formats like AVIF (which use a 4-byte CICP for color instead of 500-1000 bytes for an ICC profile in a JPG).

As a general rule:

  • sRGB is always a safe choice and may be required to avoid desaturation on some web services.
  • P3 (or Rec2020) is ideal for sharing higher quality images when you know the color profile will not be stripped (just check that it does not look desaturated after upload).

See “How to soft proof easily in Photoshop” for more on how clipping color gamut affects your image.

 

We tend to think of “color space” as just referring to the range of potential color (which is defined by the primaries). In reality, the color space includes other information such as the white point and the transfer characteristics (often called “gamma”, though there are many options which are not simple gamma curves, such as sRGB or PQ). You generally do not need to think about these, as the best values are typically assumed or set for you. For example, when you choose sRGB, you get the sRGB primaries (same as rec709), the sRGB transfer function (somewhat different from rec709), and a D65 white point. ProPhoto, by contrast, uses a D50 white point and gamma 1.8. But when you convert a flattened image from one to the other, the image will look identical (unless your ProPhoto image contains colors beyond the sRGB primaries, in which case those colors will clip).

To be clear, the “gamma” here is the encoding of the data and does not affect the display (which has its own “gamma”, and that definitely does matter). The only time the gamma encoding of the image would matter is if you used an inefficient gamma with low bit depth (for example, saving linear data at 8-bits would be a very poor choice). If you did have such a mismatch in the extremes, you might see banding (because not enough data would be allocated to the shadow values that we are more sensitive to).
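
If you are curious what a transfer function actually looks like, here is a short Python sketch comparing the standard sRGB encoding curve (the published piecewise formula) with a plain gamma 2.2 curve; the sample values are just for illustration:

    def srgb_encode(linear):
        # The sRGB transfer function: a small linear toe, then a power segment
        if linear <= 0.0031308:
            return 12.92 * linear
        return 1.055 * (linear ** (1 / 2.4)) - 0.055

    def gamma22_encode(linear):
        return linear ** (1 / 2.2)

    for value in (0.001, 0.18, 0.5, 1.0):
        print(value, round(srgb_encode(value), 4), round(gamma22_encode(value), 4))

    # The two curves nearly overlap in the midtones but differ in the deep shadows,
    # where sRGB uses the linear segment instead of a pure power curve. This is why
    # "sRGB" and "gamma 2.2" are often (loosely) treated as the same thing.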

And going down the rabbit hole here for those of you exploring HDR… gain maps are a totally different animal. Many people incorrectly assume that these images are encoded with PQ or HLG. That is almost never the case today, because the base image in a JPG is encoded as SDR. It then carries a gain map, which has either 1 or 3 gammas of its own, as the map itself is not an image – it is just mathematical data to adjust from the base image. So a JPG gain map image could contain up to 4 different gamma values! When we start using gain maps in formats like AVIF that support 10-bit depth or more, we may see PQ or HLG used there (if the base image is encoded for HDR, which is unlikely initially in order to ensure the greatest compatibility until support is very widespread).

 

Compression

Compression is a measure of the file size required to achieve a given level of quality. An image that is identical to the original is considered lossless. An image which looks the same to a human is known as “visually lossless”. And compression which produces a visibly inferior image is referred to as “lossy”. Any of those standards for quality may be appropriate, depending on your needs. Most photographers should aim for something which is visually lossless when viewed at 100%, or perhaps when zoomed in if that is important to you. True lossless encoding is rarely beneficial for images shared on the web, and is best avoided there to optimize file size.

 

Progressive rendering

When viewing images on a slow connection, progressive rendering allows the viewer to see some preview before the image is fully downloaded. This is most important for viewing on mobile devices in areas with slow service or if you are sharing very high resolution images.

Depending on the file standard and encoding, you may see different rendering experiences:

  • Sequential shows the image slowly revealed in full resolution from top to bottom. JPG may do this.
  • Progressive shows a low resolution preview which increases in resolution as the data comes in.
    • Progressive rendering in JPG may show as greyscale or false color initially and then show correct color
    • JPEG XL will show proper color even at the lowest quality preview.
  • No preview: nothing will show until the base image is ready to show in full resolution. AVIF does this.
  • Note that gain maps will render the base image in the manner typically used for the file type, and then the gain map will be applied later when it is ready.
    • So in a slow situation, a JPG gain map (which has an SDR base) may show progressive loading in SDR and then suddenly jump to HDR later (if the display supports HDR).
    • An AVIF gain map with an SDR base would show the SDR base all at once when ready, and then could jump to HDR when the map is ready (if the display supports HDR).
    • An AVIF gain map with an HDR base would show the HDR base all at once when ready. If the display does not support the full HDR, then automatic tone mapping would be used until the map was loaded and then the image quality would be improved using the map to derive the best results.
    • JXL gain maps are not supported in any browser, but their superior progressive rendering would allow the best results.
    • If you are concerned with progressive rendering with gain maps, the best solution today is to avoid excessively high resolution. Formats like AVIF are smaller and will help improve time to full render. In the future when we can share images encoded with an HDR base, that would allow the initial preview to render at the correct brightness (even if any required tone mapping might be sub-optimal until the map loads). If we could share JXL with an HDR base, that would be the ideal scenario as it would show a very usable preview almost right away.

See Theo’s demonstration of progressive vs sequential rendering.

 

Other key considerations

There are several other factors which may determine the best format to use:

  • Support for the format. The best image format in the world is useless if your viewer can’t see it.
  • Layers. Obviously, you’ll want support for saving your working files for non-destructive workflows. And of course you wouldn’t use these for the web as layers make the image much larger.
  • Max pixel dimensions or file size. This isn’t much of an issue for images to be shared on the web or shown on a screen, but it definitely affects layered images and those prepared for print.
  • Color sub-sampling (444 vs 422 vs 420).
    • You generally won’t have a choice and don’t need to think about this (the defaults are typically great).
    • If you’re curious, 444 means that the color data is stored at full resolution (higher quality but larger files). As humans don’t see color in high resolution, sub-sampling is used to share color data with neighboring pixels to reduce image size (see the sketch after this list).
  • Transparency. This doesn’t matter to most photographers, but it is helpful for showing products or other images where you’d like to see the background around the subject.
  • Encoding speed/efficiency. This doesn’t matter for most photographers, but is a consideration for large web services converting millions of images.
  • Decoding speed/efficiency. This isn’t a concern for most photographers, but may matter if you wish to share images with absurdly high resolution or wish to optimize for battery life on mobile devices.
  • Animation. This isn’t a concern for most photographers, but you might use it in lieu of video in some niche application.
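
As referenced in the sub-sampling bullet above, here is a quick back-of-the-envelope sketch (Python, purely illustrative) of how much raw data each scheme stores per pixel before compression: one full-resolution luma sample plus two chroma samples at reduced resolution.

    def samples_per_pixel(scheme):
        # Fraction of pixels that carry their own chroma (color) samples
        chroma_fraction = {"444": 1.0, "422": 0.5, "420": 0.25}[scheme]
        return 1.0 + 2 * chroma_fraction   # 1 luma sample + 2 chroma samples

    for scheme in ("444", "422", "420"):
        print(scheme, "->", samples_per_pixel(scheme), "samples per pixel")

    # 444 -> 3.0, 422 -> 2.0, 420 -> 1.5. In other words, 4:2:0 stores half the
    # raw data of 4:4:4, with little visible cost for photos because we don't
    # perceive color at full resolution.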

 

Wild cards: encoding, transcoding, decoding, and display

The topics on this page are enormously complex, and it is easy to get confused. Unfortunately, there are other factors which are important to consider if you are troubleshooting or wish to make comparisons. Each of the following can affect your experience:

  • Encoding details.
    • For example, Lightroom exports HDR AVIF at 10 bits and JXL at 16-bits.
    • So these aren’t directly comparable at a given resolution, as the AVIF may be smaller and faster to export just because of the change in bit depth – but the JXL would be much more suitable for further editing.
  • Transcoding.
    • Transcoding is reprocessing of an image which occurs on most websites. It may be done to reduce file size (causing a loss of quality), reduce pixel dimensions, ensure the image is safe from any security risks, etc.
    • This is the cause of enormous confusion on the web, especially for uploading HDR gain maps. Your source image may be perfectly fine, but then it stops working after upload because the image on the web is NOT the same one you created.
  • Decoding
    • As you can see, there are not just multiple file types, but many very detailed variations within them. For example, there are four different ways to encode a gain map in just the JPG format – and I didn’t even mention that above. File format support is a very nuanced topic.
    • Every piece of software you use may use a different decoder. Support in one piece of software or part of an operating system does not mean you’ll have the same result elsewhere. For example, MacOS Preview supports HDR gain maps, but Finder won’t show HDR thumbnails for gain maps (even though TIF and EXR will).
  • Display
    • Of course, even if the decoder knows how to properly process an image, different displays may show different results.
    • The most common differences will be in gamut, HDR, and level of shadow detail.
    • This is all unrelated to the file format and encoding, but it’s important to remember that it may affect your experience.

If you are testing to evaluate different options, the most important rules are:

  • eliminate as much variability as possible. If you change more than one factor at a time (such as comparing 10-bit AVIF to 16-bit JXL), it’s very hard to draw meaningful conclusions.
  • always test your results after uploading to a website, as transcoding may change the result.
  • remember that others may have a significantly different display than you, this is particularly important for HDR.

 

Now that we’ve covered the key technologies, let’s take a look at the various file formats.

 

DNG

Your files may be in a camera-specific RAW format (NEF, CR2, ARW, etc) or in Adobe’s open format (DNG). I recommend you just leave your image in the RAW format offered by your camera. Select the higher bit depth if that’s an option, keep full resolution, and ideally use lossless compression (lossy is generally fine; uncompressed just creates large files you can avoid).

Some would suggest using DNG to ensure your image will always be supported in the future. There is theoretical merit to this, but history has shown that camera RAW formats remain well supported over time. I believe the risk is low, and you could convert files to DNG in the future if some update dropped support for legacy formats.

You may use the latest lossy DNG compression from Adobe to compress your images by 90%. The loss of quality here is so small that it would not show in a print. These are excellent images which are visually lossless. The potential risk here is that some future RAW software feature requires the original mosaic data. That was the case for some AI denoise, but support for the linear DNG formats seems to be expanding. If you don’t care about file size, you can skip this to keep your options open. But if you want a massive reduction in file size, this is a great option.

 

PSB / PSD / TIF

For your working files, it is critical to use a file format which supports layers. That means PSB (“Photoshop Big”), PSD (“Photoshop Document”), or TIF (“tagged image file format”). All three support the same image quality and features. The only difference is in maximum file size and support for the format.

Use PSB if possible. It is supported by Photoshop, Lightroom, Bridge, Affinity (importing), Photomator, and more. The file size is effectively unlimited, and that is very helpful for exposure blending, 32-bit HDR, large prints, etc. You can resave the image as TIF if necessary (assuming it fits in the 4GB limit).

If you use software which does not support PSB, then TIF is a great option. It is limited to 4GB (which is much better than the 2GB limit of PSD).

 

JPG

The main benefit of JPG is that it is universally supported. It offers high quality standard images at a reasonable file size, but there are several better file formats we’ll get to below. Once those formats gain enough support, we will likely see JPG usage decrease significantly. It’s had a great 33-year run, but it will soon be time to start replacing JPG.

JPG may support HDR by using a gain map. This makes it an ideal option for HDR today as it offers 100% compatibility. The viewer will always see a great result on any browser, and it adapts in an ideal way to the capabilities of the monitor. But compatibility is again the key benefit, and another format with gain map support will likely replace it in the future.

JPG has several limitations which are overcome by other file formats:

  • Modest compression. Many newer file formats are smaller.
  • Limited bit depth, putting the image at risk of banding (in rare cases) and making them unsuitable for further editing.
  • No support for CICP encoding of color, which further increases file size and increases the risk that a web service will strip the profile and therefore make wide gamut color unusable.
  • No support for transparency.

 

 

AVIF (“AV1 Image File Format”)

AVIF is the most promising next generation file format. It is already supported on all modern browsers.

It offers numerous benefits, including:

  • Outstanding compression, often reducing file size by 30% compared to JPG.
  • Up to 12-bit depth.
  • Supported on the latest versions of all major browsers (for SDR)
  • HDR (both natively at high bit depth and more importantly with gain maps to adapt optimally to any display)
  • Transparency
  • Animation

AVIF offers lossless compression, but the files are rather large.

AVIF is reported to offer higher quality than JXL at extreme compression levels, but that has not been my experience with Adobe software. It might be true for other encoders.

Limitations of AVIF:

  • Resolution is limited to 8K for a single tile (up to 65,536 pixels is possible with tiling, but it may show artifacts at tile edges). This is fine for the web, but could limit your print size to roughly 14×26″ (see the arithmetic after this list).
  • Maximum 12-bit depth, making it unsuitable for further editing in many cases.
  • No progressive rendering.
  • Slow encode speeds / higher energy consumption.
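
Here is the rough arithmetic behind that print-size limit (a small Python sketch; the 300 ppi print resolution and the 7680×4320 “8K UHD” pixel count are my assumptions for illustration):

    # Approximate maximum print size for a single-tile 8K AVIF
    width_px, height_px = 7680, 4320   # 8K UHD (assumed)
    ppi = 300                          # a common standard for high quality prints (assumed)
    print(round(width_px / ppi, 1), "x", round(height_px / ppi, 1), "inches")
    # -> 25.6 x 14.4 inches, i.e. roughly the 14x26" mentioned above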

Real-world support is already at 94%, and the main gap is simply the time it takes for people to upgrade their browsers. All modern browsers support SDR rendering of AVIF, and all browsers supporting HDR will show HDR AVIF. Additionally, AVIF gain map support is already built into Chrome and Apple software. Taken together, that means you should be able to share AVIF on the web in 2026. And if you’re willing to provide backup JPGs or wish to ignore a few people using very old browsers, you could start using it now for SDR images (wait for gain map support if you create HDR). AVIF is the key format to watch and should start to gain significant use soon.

Given outstanding image quality, small file sizes, and excellent support, AVIF is an excellent candidate to replace JPG. The JXL file format has some additional benefits, but a lack of support makes AVIF a much more viable option for the foreseeable future.

 

JXL (JPEG XL)

JPEG XL (file extension “JXL”) is a next generation file format. Don’t let the name fool you, JXL is not a JPG and compatibility is very low (which is the key reason it isn’t used much currently).

It offers numerous benefits, including:

  • Outstanding compression, often reducing file size by 30% compared to JPG.
  • Lossless compression.
  • Up to 32-bit depth (or as low as 8, depending on your needs).
  • HDR (both natively at high bit depth and more importantly with gain maps to adapt optimally to any display)
  • Better progressive rendering – a low-res preview can show almost immediately (whereas JPG loads sequentially and AVIF won’t show anything until the full base image is downloaded)
  • Faster encoding than AVIF
  • Existing JPGs may be converted to JXL with no further loss of image quality (making this ideal for migrating existing web content)
  • Unlimited file size – supports over 1 billion pixels in each dimension
  • Even fewer artifacts than AVIF (though AVIF is outstanding)
  • Transparency
  • Animation

If JXL were widely supported, this would be my file format of choice. It offers a better experience on slow internet connections (progressive rendering), has faster encoding speeds, better decode speed / power efficiency, and is a much better choice for sending flat images for printing or further editing (due to higher bit depth and unlimited dimensions).

However, browser support is currently very limited. That would likely change if Chrome offered support, but it would take some time for users to upgrade even if all browsers supported it. So even if Google changed its policy now, AVIF remains the most promising candidate to replace JPG in the near term.

JXL is the basis of Adobe’s new lossy DNG compression, so you may already be using it without realizing it. Aside from that, JXL is an ideal format for sharing high quality images with other creatives for further editing. It’s also an ideal file format to send to your print lab, as the file size is much smaller than TIF and the quality is much higher than JPG.

 

 

HEIF

The key benefits of HEIF:

  • Outstanding compression, often reducing file size by 30% compared to JPG.
  • Up to 16-bit depth (though many decoders only support 8)

The future of HEIF is unclear, and there appears to be little momentum for other browsers to add support. It’s well supported by Apple, but Apple devices support AVIF (including with gain maps) and JXL. So if you are exporting images for the web or for Apple devices, AVIF is a better choice. The only place where photographers should consider using HEIF currently is when capturing images on an Apple device (as the quality is higher than JPG), but any subsequent exports of those images would ideally be done as AVIF.

 

 

WebP

The webp file format was created in 2010. It has some good uses as a JPG alternative, but has seen limited use as the benefits have not been compelling enough.

The key benefits of webp:

  • Better compression than JPG (though not as good as AVIF, JXL, or HEIF)
  • Transparency
  • Animation

Further growth of WebP seems very unlikely. It has several deficiencies compared to other newer file formats.

The key limitations of webp are:

  • Limited to 8-bits
  • No support for HDR (no gain maps)
  • Support is not universal due to older browsers (96%)
  • Compression and quality are lower than AVIF / JXL

 

PNG

The main benefit of PNG is broad support in a format capable of supporting both transparency and lossless encoding. Transparency is the key benefit for the web, and lossless encoding is mostly important for archiving (especially for governments and museums, which may not yet support next generation formats). A gain map spec for PNG is coming, but it is likely only to be used for archival work (and perhaps not even that by the time it is ready). Like JPG, PNG should soon be replaced by better formats on the web.

HDR: gain maps vs tone mapping

One of the greatest innovations to help improve the quality of HDR (“high dynamic range”) images is a technology known as a “gain map”. Their value is often misunderstood or overlooked, so in this article we’ll dive into what gain maps are and why they are vastly better than the alternative.

TLDR summary: Always share HDR images with a gain map. This is easy to do and ensures that viewers who don’t have the same great screen as you will still see a beautiful image. Avoid sharing AVIF or JXL until gain maps are well supported in those formats. JPG gain maps offer excellent quality and are the best option in 2025. 

Backstory: what happens when you share HDR images?

When sharing photos online, there has always been some variability in the experience (even for SDR or “standard dynamic range” images). The viewer may have different brightness, black point, wide gamut support, or differences in how their software renders the image. There are ways to manage it, but it’s just a reality.

In the world of HDR photography, the differences in experience are a bit bigger. HDR is a huge step forward in image quality through new technology, so naturally older SDR monitors cannot show an image as HDR. Instead, HDR images are adapted to the capabilities of an SDR display (rather than clipping, which would obviously be bad). The result is that the best possible image is shown, based on the capabilities of the display. When done properly, the worst case scenario is the same SDR image you would have shared anyway, with a vastly better HDR experience where possible.

HDR images may also be adapted to less-capable HDR displays. The number of stops of additional dynamic range a monitor offers over SDR is known as “HDR headroom”. A great HDR display (such as those on a modern MacBook Pro) offers up to 4 stops of headroom. A less capable display might offer only 1 or 2 stops. An SDR-only display would have 0 stops of headroom. These capabilities are dynamic: when you make the display brighter, the headroom available for HDR decreases. So an HDR image will often need adaptation even on an HDR display.
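
Headroom is easy to compute: it is simply the base-2 log of how much brighter the display’s peak is than its SDR white. A quick Python sketch (the 100-nit SDR reference is a common convention; the actual value depends on your brightness setting):

    import math

    def hdr_headroom_stops(peak_nits, sdr_white_nits=100):
        # Stops of additional dynamic range available above SDR white
        return math.log2(peak_nits / sdr_white_nits)

    print(round(hdr_headroom_stops(1600), 1))   # 4.0 stops (e.g. a 1,600-nit MacBook Pro)
    print(round(hdr_headroom_stops(1000), 1))   # ~3.3 stops (e.g. a 1,000-nit OLED)
    print(round(hdr_headroom_stops(200), 1))    # 1.0 stop (a modest HDR display)

    # Raising the SDR brightness raises sdr_white_nits, which is why headroom
    # shrinks as you turn the display up (or as a phone brightens in sunlight).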

While the gaps will shrink over time, we will probably never avoid the need for adaptation. This is particularly common for mobile devices, which will have reduced headroom when used in bright ambient light (or in low power mode). And we are also likely to see the best displays continue to improve for at least the next decade. The current best HDR displays support 1,600 nits of peak brightness. However, the full HDR standard allows up to 10,000 nits and there are already multiple TVs which support this level of capability. We won’t see that in computers anytime soon, but 5-6 stops of HDR headroom may be in our future (6 is likely the upper limit for benefit).

To sum things up, your audience will either see:

  1. Your HDR image as you edited (if they have enough HDR headroom)
  2. An adapted version of your image (the best possible version for their SDR or limited HDR display)

HDR adaptation: gain maps vs tone mapping

In scenario #1, your HDR image is shown as you edited it. Any valid HDR encoding will support that. However, it is easy to overlook scenario #2, which applies whenever a display is less capable than yours (or is used in brighter ambient light, etc).

In scenario #2, there are various ways the image can be adapted. There are two options today:

  • Let the computer automatically generate the SDR version of the image. This is known as “tone mapping“.
  • The artist can provide the SDR version of the image. This is done with a “gain map“. You can think of this like embedding both the SDR and HDR version in the same file (in reality, one version is stored normally along with a map to derive the other – this helps minimize file size)

If you share an image without a gain map, then tone mapping is used. It’s just the default when you don’t provide a better way to adapt the image.

You can create a gain map using Web Sharp Pro v6, which gives you 100% control over the base SDR image (as well as enabling you to share optimal HDR images on Instagram and Threads). You can also create a gain map using Lightroom or Adobe Camera RAW, which offer some artistic control over the SDR. Photomator also supports gain map exports, but offers no artistic control over the base SDR (and is therefore the same as letting MacOS tone map the image).

Image comparison (versions referenced as #1–7 below):

(1) HDR (gain map)
(2) HDR (simple AVIF, no gain map)
(3) SDR reference (Greg's original)
(4) SDR (from gain map, any browser)
(5) SDR (from simple AVIF in Chrome v133)
(6) SDR (from simple AVIF in Safari v18)
(7) SDR (from simple AVIF in FireFox v135)


Your HDR headroom and browser will affect rendering for the HDR images (#1 and #2).

Only the gain map (#1) is able to achieve a perfect match to both my original HDR and SDR edits (as created in Photoshop).

The HDR AVIF (#2) will appear close when viewed with 3+ stops of HDR headroom, but shows a minor color shift in the sky in Chrome. When it is viewed on a less capable display, it will degrade significantly (as shown in the SDR screen captures in #5-7)


#3 is a direct export of my SDR edit (for reference to show the version I intended to share). The remaining images are screenshots from a browser showing one of the above HDR images on an SDR display (text changed to label the image).

The gain map (#4) is a perfect match to the desired SDR result. The tone map versions based on the simple HDR AVIF (#5-7) vary by browser and all are significantly degraded from the desired SDR result.



Tone mapping has several downsides:

  •  The SDR experience is almost always visibly inferior to the results using a gain map created by a skilled photographer.
    • An HDR-only image offers no creative control over how the image is adapted to less capable displays.
    • Compare #3 (a direct export of my SDR image for reference) with #4-7. The gain map is the only version that looks the way I intended. 
    • HDR eliminates many of the creative tradeoffs affecting SDR. The optimal edit often involves very specific local changes that a global tone mapper will not address.
    • For example, an SDR edit might show water in the foreground nearly as bright as the sunset sky (simply because we need to see water detail and the sky can’t be too bright if we wish to retain color in SDR). But in HDR, we can properly show the sky as brighter than the water. There is no global relationship between the SDR and HDR pixels, it depends on the image content.
    • An AI-based tone mapper might help close the gap, but that would only further exacerbate the next concern (variability from one browser to the next).
  • Moderate HDR displays also degrade
    • Any HDR display with less than the full headroom will use some tone mapping. The full HDR encoding above requires 3 stops of headroom. If your HDR display only supports 2 stops, you will see that the HDR AVIF (#2) is already significantly degraded compared to the gain map (#1): in the gain map version, the clouds have better detail/color and the water looks better.
    • So the SDR tone map examples are the worst case, but a significant number of HDR displays would show a loss of quality as well.
  • There is no standard!
    • Compare versions #5-6 above. Chrome and Safari do not match.
    • Every browser does it differently and is subject to change over time (Safari v17 was different than v18, next year might be something else, etc).
    • Chrome (and the Chromium-based Edge/Brave/Opera) has the best tone mapping. It may be acceptable for some uses, but is still often inferior to a gain map.
    • Safari 18 introduced updated tone mapping. It is not nearly as good as Chrome, but is much closer than v17.
    • Safari 17 added tone mapping, but the results are truly awful.
    • These results are subject to change over time (and they have changed many times in the past couple years).
    • Other software (such as the Windows File Explorer, MacOS Finder, etc) may all use their own versions of tone mapping, or none at all.
    • There appear to be no serious efforts to standardize tone mapping for HDR photos.
  • Lack of support on old browsers. Tone mapping may not be supported, resulting in images that fail to render (or show as nearly black).
    • The current version of FireFox will show HDR AVIF as a very dark image.
    • Many older browsers are still in use and will simply refuse to show an HDR-only image.
    • These issues apply to about 5% of browsers in early 2025. In time, these browsers will be updated/replaced to something which at least supports tone mapping.
  • It creates more disparity between the printed and HDR versions of the same image. Many photographers want the HDR image to feel consistent with the print. This is easily achievable with a gain map, but not with tone mapping.

Gain maps offer several advantages:

  • Artists can create a much higher quality SDR rendition for less capable displays (including limited HDR displays).
  • It is possible to have 100% creative control of the image on any display, rather than only for the most capable displays (which are not the most common).
  • The base SDR image is consistent with any print and can be printed. This makes the format vastly more attractive to the most skilled content creators.
  • Renders consistently across browsers.
  • Widely supported under an ISO standard.

What are the downsides to using a gain map?

  • There’s really only one: a gain map will increase file size by 15-30% (30% is typical for high quality). It is nowhere near doubling because a gain map uses a very creative approach to derive one version of the image from the other (rather than truly embedding two images in the same file).
  • There are a few niche scenarios today where a piece of software might support HDR encoding but not a gain map (in which case the SDR version is shown). This is very limited and quickly moving towards being a non-issue. Software supporting HDR image formats is generally adding gain map support, and we will soon be encoding gain maps in formats such as AVIF or HEIC (which can encode the base image as HDR, with the map used to generate the SDR).

There are many misperceptions about gain maps, because understanding them requires a solid grasp of both the art and the software development. Here are a few key points to know about gain maps:

  • Gain maps are not limited to JPG (or just intended for 8-bit formats).
    • There are already implementations or proposals to use gain maps with HEIC, AVIF, JXL, PNG, TIF, and DNG.
  • A JPG gain map is not an 8-bit HDR:
    • A gain map is two images embedded in the same file (ie two 8-bit images when using JPG).
    • That means an HDR JPG is actually based on 16 bits of data (though the quality is not directly comparable to a 16-bit native format, there is more than enough data to avoid quantization / banding concerns when the image is properly encoded).
  • Gain maps are NOT a “hack” to provide HDR in an 8-bit format, they are also critical to quality at much higher bit depths
    • It is true that gain maps help overcome banding issues when encoding HDR in a JPG, but they serve a much more important role.
    • Higher bit depths do not help your HDR image adapt to an SDR display.
    • The key reason to use JPG gain maps today is because they are 100% safe (even if you have a monitor and browser from the 1990s, you’ll still see a great SDR image).
  • Newer formats like AVIF / HEIC / JXL will be preferable to JPG – but only once we can safely use them with gain maps.
    • That will offer smaller file sizes (typically 30% smaller) and even higher image quality (less risk of banding, less visible artifacts).
    • SDR AVIF is already well supported, so AVIF with base image encoded as SDR is the most likely next major format (and likely to be a great option in 2026).
    • Longer term, encoding a format like AVIF with a base HDR image will likely be the optimal choice. The base image is always shown at full quality, and you can therefore more aggressively compress a gain map with a base HDR image (because loss of quality is less of a concern in the less important SDR rendering). In the long run, these formats should allow high quality HDR gain maps which are smaller than the SDR-only JPGs we share today (though SDR will of course get much smaller with these formats as well).
  • In some cases, a simple HDR image that you share will be transcoded (converted) to a gain map for you. This eliminates browser variability, but the results are almost universally terrible. Do not let a computer create your art; this is hardly better than automatic tone mapping. See: great HDR requires a great SDR in the gain map.
  • It is important that transcoding software support both gain maps and high bit depth formats (such as AVIF). Both approaches will be widely used (ideally together), and therefore should be supported to ensure the output is comparable to the source.

 

Are there any alternatives?: ICC tone maps

Not every image needs maximum quality. This creates a scenario where neither of the above approaches is ideal: the benefit of a gain map is lower, so its extra file size becomes more of a concern, while browser variability in tone mapping remains a problem. And that’s why there are ongoing efforts to create a third option: ICC tone maps.

There are ongoing efforts to create a standardized way to embed a tone map into an image using an ICC profile with a LUT (lookup table). This does add to file size, but less than a gain map. Storing the tone map in a profile has a few benefits. First, it can eliminate browser-by-browser variability (at least once widely supported). And second, it allows the tone mapping to be more customized to a specific image. For example, your phone might analyze the image or consider your use of portrait mode to pick the best ICC tone map to embed in the image.

This is an attractive option for mobile capture and sharing, where images are rarely edited by humans and file size is a frequent concern for bandwidth and power on a phone. It is unlikely to be a suitable replacement for gain maps for skilled artists editing their images. And it would not support the potential for better automated approaches where an AI might create the SDR rendition of a gain map.

 

Conclusions: gain maps vs tone mapping

HDR images must be adapted to less capable displays. Even as we move towards a future when all displays are updated to HDR, we will still have situations where some are less capable and require adaptation. This is a long term concern, but is easily managed.

The key points to remember are:

  • Gain maps offer superior image quality over tone mapping on most displays (anything less than the most capable HDR displays under optimal conditions).
  • It is easy to create gain maps through Web Sharp Pro (which supports Photoshop) and Lightroom.
  • Avoid sharing JXL or AVIF on the web until gain maps are well supported (they’ll be great later). JPG gain maps work very well and are the ideal format to use in 2025.
  • ICC tone maps may be a good solution for automatically derived images (such as direct sharing of smartphone photos) to avoid browser variability, while minimizing file size impact.
  • Tone maps offer the lowest quality for display on anything less than premium HDR displays, but may be the best choice when file size is much more important than image quality.

Remove tourists in just 1-click

Adobe Camera RAW v14.2 just added a new AI-based removal tool for “distracting people”. It’s pretty amazing and I highly recommend checking it out. It is much more than just a way to automate the existing AI remove brush. It can achieve better results and is smart enough to understand and keep an intentional main subject (rather than just removing everyone).

To use it:

  • Update to ACR v14.2 and go to its prefs / tech previews and turn on “new AI features and settings panel”
  • Open a RAW image and go to the remove tab (keyboard shortcut: B)
  • Open the “people” section under Distraction Removal.
  • It will highlight detected people in red. You cannot add or remove the red with a brush, but you can select a pin and delete it if there is an area you wish not to fix (such as a secondary person you want to keep in the image).
  • Click the blue “remove” button

How are the results? It often does a great job, especially for the kind of casual shots where you aren’t likely to wait for people to leave or use a tripod for multiple exposures or a long shutter. Like other recent AI tools, the resolution is a bit lower, but good enough for a lot of social media. You can always do more manual repair or cloning after, and this will often save you considerable time. Overall, it’s very impressive for an initial release of a tech preview.

 

How to create an HDR timelapse

I’m very excited to see that the popular LRTimelapse app now supports the ability to create an HDR time lapse video from your HDR photos in Lightroom. Many of you have been asking me how to create HDR time lapses, or how to create HDR video from your photos. Both of these are great ways to share your work. And this also gives you more options for sharing your HDR work (such as sharing HDR on Facebook, which does not yet support HDR photos but does support HDR video).

I haven’t had a chance to try it myself yet, but I certainly will soon and look forward to sharing my experience when I can! But given how many of you have specifically asked me how to do this, I wanted to make sure you were aware of this new capability in a widely loved program for timelapse. The current capability supports timelapse and isn’t really designed to make slideshows (ie show the same image for more than a single frame), but perhaps we’ll see that added in the future.

If you’ve tried it, please share your thoughts below!

Disclosure: This article contains affiliate links. See my ethics statement for more information. When you purchase through such links, you pay the same price and help support the content on this site.

 

 

 
