Do you need to calibrate Apple XDR monitors?

Photographers need accurate monitors. If you are viewing a display with inaccurate color temperature, crushed shadows, etc., you are likely to be frustrated with your prints and your online audience probably won’t see your image as you intend. This is why we prefer high quality monitors and profiling tools. Things get a bit more complicated with HDR photography, as we do not yet have an ICC standard for profiling in HDR mode (if you create a profile, the HDR values will clip to SDR white).

As I’ve written on my recommended HDR monitors page, you only have two good options for HDR now: calibrate in hardware or buy a monitor which is accurate out of the box. There is no standard yet for ICC profiling in HDR mode. The only monitors on the market which support HDR calibration are ASUS ProArt monitors and Apple XDR displays (which include the MacBook Pro, iPhones, and many iPads). It’s great to have the option, but are these premium displays accurate enough out of the box?

I do extensive monitor testing, so I decided to get lab-grade test equipment (the CR-250-RH spectrophotometer) to calibrate and test five Apple displays with the highest possible accuracy. In this post we’ll take a deep look at a critical question: how accurate are Apple displays without calibration / profiling?

TLDR: Apple displays with the “XDR” branding are outstanding. They are extremely accurate out of the box, and even several years after purchase. Unless you are a professional who demands extremely high levels of color accuracy (such as a Hollywood colorist), you do not need to calibrate XDR displays. The results are so good that most people would struggle to notice any difference between the factory results vs after custom calibration. If you want to ensure the highest accuracy as the display ages, I recommend using the Calibrite Display Pro HL to measure reference white and use MacOS’s built in “fine tune” calibration (see how below). To understand the basis of my conclusions and learn much more about how calibration/profiling works, keep reading…

[Disclosure: This post contains affiliate links. I rarely endorse other products and only do when I think you would thoroughly enjoy them. By purchasing through my links on this post, you are helping to support the creation of my tutorials at no cost to you.]

What does an accurate display mean for photographers?

Everyone’s level of tolerance for error will vary, but there are some fairly clear targets and expectations for photographers.

The most important targets to ensure accuracy for photography are:

  • Color accuracy.
    • Overall, a color deltaE (ΔE) of 2 or less is ideal. If you are above 5, you have too much error for photography (and high error is common for gaming monitors).
    • Accuracy in neutral gray values is most important (it’s not only the most easily noticed error; gray makes up most of the user interface surrounding your image and therefore biases your decisions while editing).
    • The target white point for photography is D65. This is a specific white (measured as x = 0.3127, y = 0.3290).
    • Your measurement software may report a “correlated color temperature” (CCT) such as 6500K. This is not a specific white: D65 is a 6500K value, but there are a wide range of 6500K values which are not D65. CCT specifies only the blue/yellow balance (not magenta/green).
  • Gray tracking (aka tone response / EOTF, the Electro-Optical Transfer Function).
    • Overall, a gray deltaE of 1 or less is ideal (with a 10% test window – peak luminance will vary for HDR monitors as noted below, so we just do our best).
    • This ensures proper shadow detail, contrast, etc.
    • For SDR, your target is gamma 2.2. For HDR, the signal to the monitor is PQ (“perceptual quantizer”). However, the effective EOTF target for HDR is undocumented / unclear unless you are using an XDR monitor in a reference mode. Neither Windows nor MacOS specifies how it is trying to drive the display when you use brightness sliders. You can test an external monitor with a pattern generator, but that would only confirm good calibration in the hardware; it would not tell you how the operating system is trying to adapt shadow values, etc.
    • Apple XDR displays uniquely offer several reference modes and the ability to create custom user presets (including control over the EOTF in the SDR range).
  • Peak luminance
    • This is the most critical metric for HDR performance, as it determines how many stops of headroom you have at a given brightness.
    • This is not a fixed value in a monitor. Peak brightness depends on several factors – most commonly how bright the display is overall. OLEDs (other than Tandem Stack OLED iPads from Apple) are far more likely to be subject to dimming than mini-LED.
    • It is ideal to have a display offering 1000+ nits peak for great HDR. It is also ideal to have a sustained / full screen capability of 400+ nits (as this ensures accuracy is retained even while viewing bright content).
  • Uniformity
    • This means consistency across the entire display. Lower quality displays may often show less accurate results near the edges of the display.
    • Some solutions (such as ProArt calibration) offer ways to improve uniformity, but this is most commonly something you cannot improve. An ICC profile affects all pixels equally; it has no mechanism to correct the edges of your display.
  • Wide gamut
    • Real world color is much more vibrant than sRGB. A wide gamut monitor doesn’t just show more vibrant color – it shows more detail. A limited gamut won’t show the full texture of sunset clouds. A flower petal may look flat when the gradient of colors gets clamped. A wall lit by a colored light may even look like an artifact or blown pixel when the colors get clipped.
    • A wide gamut display will let you enjoy much more beautiful images, and give you an edge in creating them (and there is no downside in editing with wide gamuts like P3 – preemptively clamping the colors in your edit won’t produce a better final result on less capable displays and you can export sRGB from any source).
  • Black levels
    • This refers to the deepest black the monitor can produce and is also critical for HDR to ensure shadow detail and avoid halos.
    • This is mostly a function of monitor hardware, but may be influenced by OSD options (such as for backlight and black level). An ICC profile cannot make the deepest black any darker.
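To make the CCT caveat above concrete, here is a small Python sketch using McCamy’s approximation (an illustrative formula; real measurement software uses more precise methods). The two chromaticities below report essentially the same ~6500K CCT, yet only the first is actually D65 – the second is visibly greener:

```python
# McCamy's approximation: correlated color temperature (CCT) from CIE 1931 xy.
# Illustrative only -- CCT collapses a 2D chromaticity down to a single number.

def mccamy_cct(x, y):
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

print(round(mccamy_cct(0.3127, 0.3290)))  # ~6505 -- the actual D65 white point
print(round(mccamy_cct(0.3108, 0.3430)))  # ~6506 -- same CCT, shifted toward green
```

This is why a report of “6500K” alone cannot confirm your white point is D65 – you need the x/y chromaticity values.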

This is only a partial list of the most important monitor capabilities. Other factors like anti-reflective coatings, zero fan noise, and simple operation are often also important (Apple performs extremely well on these other considerations too).
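One relationship from the list above is worth making explicit: HDR headroom in stops is simply the log2 ratio of peak luminance to your SDR reference white. A quick sketch:

```python
import math

def hdr_headroom_stops(peak_nits, sdr_white_nits):
    # One stop of headroom = one doubling of luminance above SDR reference white.
    return math.log2(peak_nits / sdr_white_nits)

print(hdr_headroom_stops(1600, 100))  # 4.0 stops (eg a 1600 nit peak at 100 nit SDR white)
print(hdr_headroom_stops(1600, 400))  # 2.0 stops -- raising SDR white eats into headroom
```

This is why the same panel offers less HDR headroom when you turn overall brightness up for a bright room.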

Performance for some of these goals (such as color accuracy) may be improved if you are able to “calibrate” or “profile” your display (we’ll discuss what those terms mean below). Your ability to do either will depend on your monitor, budget (for test equipment/software), technical skills, and support.
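For reference, the PQ signal used for HDR (SMPTE ST 2084) maps code values to absolute luminance, unlike gamma 2.2 which is relative to whatever SDR white you choose. A minimal Python sketch of both (the PQ constants are from the published standard; treating the 0.58 code value as roughly HDR reference white follows common HDR video practice):

```python
import math

# SMPTE ST 2084 (PQ) EOTF constants, as published in the standard.
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_to_nits(signal):
    """Absolute luminance (cd/m2) for a PQ code value in [0, 1]."""
    p = signal ** (1 / M2)
    return 10000.0 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)

def gamma22_to_nits(signal, sdr_white=100.0):
    """Relative SDR gamma 2.2, scaled to a chosen white luminance."""
    return sdr_white * signal ** 2.2

print(round(pq_to_nits(1.0)))       # 10000 -- PQ code 1.0 is an absolute 10,000 nits
print(round(pq_to_nits(0.58)))      # ~202 -- roughly the HDR reference white region
print(round(gamma22_to_nits(0.5)))  # ~22 -- SDR values depend on your chosen white
```

The key contrast: a PQ code value always means the same number of nits, while a gamma 2.2 code value scales with display brightness.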

What’s the difference between profiling and calibration?

These terms get thrown around very loosely. Oftentimes photographers will say they “calibrated” their monitor when in fact they profiled it.

Calibration is a process of making the monitor itself more accurate. In other words, when you ask for a specific RGB value you get something very close to it. This may be done in some monitors by changing settings in the OSD (such as RGB gain values) or by using special software tools to write lookup tables in the monitor (such as with ASUS ProArt).

Profiling is a process of making the overall system response more accurate by hacking the signal that goes to the monitor. If your computer knows that requesting a red value of 230 actually produces the result you would expect for 227, then the computer may send a request for something like 232 instead in order to get a result closer to the desired value. This is very well supported for SDR mode thanks to ICC profiles, but not yet for HDR (though Apple’s “fine tune” options for XDR offer a limited option for a basic white point correction).
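The per-channel correction idea can be sketched in a few lines of Python. The measurements below are made up for illustration – real ICC profiling measures many color patches and involves considerably more math:

```python
# Hypothetical single-channel measurements: (requested code value, code value
# the display actually behaved as if it received). Made-up numbers.
measured = [(0, 0), (64, 60), (128, 125), (192, 190), (230, 227), (255, 255)]

def corrected_request(target, points):
    """Find the request that historically produced `target` (linear interpolation).
    This is the core of a 1D correction lookup table."""
    for (req0, got0), (req1, got1) in zip(points, points[1:]):
        if got0 <= target <= got1:
            t = (target - got0) / (got1 - got0)
            return req0 + t * (req1 - req0)
    raise ValueError("target out of measured range")

# The display treats a request of 230 like 227, so to *see* a true 230 the
# profiled system sends a slightly higher value instead:
print(round(corrected_request(230, measured)))  # ~233
```

The operating system (or a color-managed application) applies this kind of lookup to the signal, which is why profiling can improve accuracy without touching the monitor itself.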

If you do both, calibration must be done first (otherwise the profile is based on the wrong assumptions about how the monitor will behave). The ideal scenario is a high quality monitor with both a good calibration and profile.

How accurate are Apple XDR displays?

I tested my five XDR displays: the Pro Display XDR, an M1 MacBook Pro, an M4 Max MacBook Pro, an M4 iPad Pro, and an iPhone 17 Pro. Using a spectro for these tests has given me a level of confidence I’ve not previously had (as the quality of color matching functions has always been a question for me).

While these are all XDR displays, they vary in both technology and options for profiling / calibration. As you can see in the SPD below, the panel technology in the older Pro Display XDR and M1 is extremely similar if not potentially identical in many ways other than size. The M4 MBP improved by changing from a red KSF phosphor film to QD (quantum dot) mini-LED. The M4 iPad uses a “Tandem Stack” OLED and is therefore inherently different (note that the peaks are closer together in the iPad). The iPhone uses a different OLED, showing more of a blue spike.

Apple offers a few mechanisms to improve display accuracy (all found under System Settings / Displays / Preset dropdown at the bottom):

  • “full” calibration
    • Calibrates the white point, primaries, luminance, and gamma response.
    • This option requires a supported spectrophotometer (the CR-250-RH I bought is the least costly – note that “RH” means “rubber hood” and is the one you would want). See the last section below for more information on why spectros are more accurate than a colorimeter.
    • The test is very simple to use. You point the spectro at a target on the screen and click a button to run. There are no options to configure (nor any final report when done). It’s very simple and effective.
    • The test runs for just under 90 minutes. If you have multiple monitors, you can watch progress in the dialog box by leaving it on a display other than the one being tested. It will show “performing stabilization” during warmup and then show progress through 96 measurements.
    • When you calibrate the Pro Display XDR, the calibration is stored in the monitor itself and will therefore benefit other Apple computers you later connect to that display.
  • “fine tune” calibration
    • This provides a minor correction based on a single measurement (x, y, and luminance values of a known white). You can do this manually with any colorimeter or spectro.
    • This is a great option for those who don’t have a spectro to run full calibration. And even if you do, this offers a much faster way to test and tweak performance on a regular basis after full calibration.
    • You may only initiate this while in one of the system preset XDR modes, so use “HDR video (P3-ST 2084)” to measure its D65 reference white.
    • On MacOS, it affects all XDR presets (even ones where you cannot start fine tune).
    • On an iPad, fine tune only shows benefit while reference mode is enabled.
  • “visual fine tune” calibration
    • Do not use this; you are more likely to reduce accuracy than improve it.
  • XDR presets:
    • These presets don’t make the display more accurate – they give you more control over the target (such as a specific SDR luminance or EOTF).
    • For XDR displays under MacOS:
      • You can choose from a list of system presets. “HDR video (P3-ST 2084)” offers predictable HDR results.
      • Or better yet, you can easily create a custom preset with SDR brightness adapted for your ambient light (80–120 nits is ideal under controlled lighting), gamma 2.2 for the SDR range (ideal for print work), 1600 nits HDR peak (to use the full capabilities of the display), and P3 primaries.
    • For XDR iPads:
      • You may only choose “reference mode” (under System Settings > Display & Brightness > Advanced). When enabled, this is the same as “HDR video (P3-ST 2084)” on the computer – it provides a predictable/fixed response and disables options which reduce accuracy (such as true tone and night shift).
      • It would be nice for home use if there were a way to add a toggle for reference mode to Control Center, so that you could switch between accurate viewing in controlled lighting and otherwise normal use (brightness adaptation is key when outdoors or near a window, and many may like to use true tone / night shift). Apple’s rationale for the iPad is likely just so that pros in Hollywood can review “dailies” on an iPad with sufficiently high accuracy under reasonably controlled ambient light – toggling this setting probably doesn’t appeal to that audience.
    • For the XDR iPhone (ie iPhone 11 and later):
      • Offers no reference mode nor options for any calibration. You cannot improve on the factory default results. The iPhone is too small for normal Hollywood use, and I wouldn’t see much value for home use without the toggle I mentioned (try turning off brightness adaptation on your phone and using it for a day; it’ll be unusable at some parts of the day or night – we need adaptation here).
      • (Third party tools support profiling, but only in their apps and not systemwide – which makes them of very limited value).

I tested my various XDR displays under several conditions:

  • Factory settings (ie “out of the box” performance). This will show the minimum performance you should expect.
  • Full calibration only. This shows the benefit of Apple’s advanced calibration (ASUS ProArt is the only other display offering a similar capability).
  • Fine Tune calibration only. This reflects the only calibration most photographers can perform (ie the best possible result until there is a standard for ICC profiling).
  • Full calibration + Fine Tune. This may be expected to show the best possible results.
  • [ Note that I did not use any ICC profiling with any of these displays, as profiles are not supported in HDR mode, nor systemwide on iPad / iPhone. There would be nothing further to gain above the excellent results I’ve achieved. ]

For MacOS testing, I used XDR set to HDR Video mode (which is the only option to measure HDR with a predictable EOTF), a CR-250-RH, CalMAN targeting P3 PQ, and MacOS Patterns using full range encoding (HDR10 enabled in CalMAN). For iPad, I used reference mode. For iPhone, I could only measure white point (the EOTF is undocumented and hard to reliably control with a brightness slider).

Note: I manually evaluated actual data points, but the average and max deltaE values below are skewed high as CalMAN automatically includes several test patches which are clipped (ie it over-weights the brightest value, which is often one of the least accurate). So actual performance is better than the average values below suggest, but I didn’t bother to manually exclude extra readings to re-calculate the average.
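For context on the deltaE numbers that follow: deltaE is a distance between the target color and the measured color. The photography default, ΔE00, adds perceptual weighting that is too involved to reproduce here, but the original ΔE76 (a plain Euclidean distance in CIELAB) shows the core idea, using hypothetical readings:

```python
import math

def delta_e76(lab1, lab2):
    # 1976 deltaE: straight-line distance between two (L*, a*, b*) colors.
    return math.dist(lab1, lab2)

target   = (50.0, 10.0, 10.0)  # hypothetical reference patch
measured = (50.5, 10.5, 9.0)   # hypothetical display reading

print(round(delta_e76(target, measured), 2))  # 1.22 -- under 2, fine for photography
```

The reported “average” and “max” values are then simply statistics over many such patch distances.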

What can you reasonably expect after calibration / profiling?

While calibration and profiling are important tools which can improve the accuracy of your display, they are not magic solutions that can fix everything – in fact, far from it, especially when working with HDR. These issues come up in SDR editing too (finding that your prints are dark even after you’ve profiled your monitor is a very common example – your display can be both perfectly accurate and still the wrong brightness for your working conditions).

To set expectations, here are a few things to know about the limits of accuracy with an HDR display even if you get great results from calibration and profiling:

  • Calibration and profiling cannot improve capabilities
    • If your monitor can only hit 1200 nits or only has 97% P3 gamut, that’s the best you can get.
    • In fact, your capability will probably decrease (very slightly) after calibration and profiling because the least accurate values are at the extremes and they will be eliminated.
    • For example, peak nits are likely to drop after calibration and profiling. The only way to correct white balance issues for the brightest values is to turn down the maximum red, green, or blue sub-pixels a bit until we find the brightest white where the three channels add up to an accurate white.
  • Calibration does not mean that two different displays will match perfectly!
    • Different displays have inherently different SPD (“spectral power distribution”), which is discussed below. As a result there are inherent limits to how closely two different panel types can produce any given color – and matching all colors across your gamut is nearly impossible.
    • To achieve the widest modern gamuts, many monitors have very tight ranges of wavelengths emitted for red, green, and blue (measured as FWHM or “full width at half maximum”). These more precise colors for the sub-pixels allow creation of very saturated colors – but they also increase the risk of “observer metamerism” (ie different people may perceive some colors from the display slightly differently).
    • You don’t need to worry about metamerism, but it may come up when a display with very high coverage of the Rec2020 gamut is involved (ie with newer technologies such as RGB mini-LED and laser projectors). This could result in two people disagreeing about whether a calibration looks “neutral”, whites appearing slightly greener or redder to different people, skin tones differing subtly between viewers, or blue highlights varying more than expected.
    • The only time you should expect a very close match is when both displays use the same panel technology (ie backlight, phosphors, etc), are in good condition, and are warmed up. So if you use multiple displays, it is beneficial to use the same model for color matching. Apple does a very nice job even across different technologies.
  • HDR luminance is dynamic and impossible to fully characterize or control.
    • No monitor (other than $30k reference monitors) offers the ability to hit peak brightness across all pixels at the same time. Due to power consumption, thermal design limits, monitor burn-in risks, etc, your pixels may dim significantly – sometimes even with just SDR content. This dimming is typically known as ABL (automatic brightness limiting).
    • As a result, calibration and profiling are typically done with a 10% window (ie covering 10% of the pixels in the center of the display). If you were to run your tests with a 2% or 50% window, you would see very different results!
    • For example, an OLED might achieve 1000 nits in a 2% window, but only 200 nits in a 50% window (Apple “tandem stack” OLED iPads uniquely avoid this and are able to offer full screen or “sustained” values of 1000 nits – and consumer phones are less prone to this).
    • For this reason, today’s mini-LED displays are generally more accurate than OLED in real world use. You may well see a great test result for an OLED (based on that 10% window) – but when you start viewing real photographs, you are likely to find that the OLED has dimmed and is therefore less accurate (potentially causing you to edit the image in a way which will look too bright on a display which does not suffer as much from ABL).
  • deltaE only tells you how well the display performs against a target value – it says nothing about whether those targets are suitable for photography!
    • The display needs to not only be accurate, it also needs to be set for the brightness appropriate for the level of ambient light in the room. The accuracy of your laptop does not change when you turn the lights in the room on or off, but you’ll certainly struggle to get good results if you don’t adapt the brightness when the ambient light changes.
    • Your display may also be configured to target different EOTFs (“electro-optical transfer functions”). In SDR, gamma 2.2 is the correct standard for photography. For HDR, there isn’t a clear standard (you can’t even tell what the operating system is trying to do – other than when using an XDR display in a reference mode).
    • As an example of the impact of EOTF: When viewing my dark shadow detail test in a dark room, I can see down to the 0.1% level when I have the display set to the HDR Video Preset or a custom preset using gamma 2.2 and 100 nits for the SDR range. But when using the variable brightness preset with brightness set to the same 100 nits SDR white, I can only see down to 0.25%. I assume these are all accurate (Apple doesn’t publish their target for the default variable brightness preset, but unless there is a bug in MacOS it would be expected to leverage the same calibration data). These are just different EOTF targets and it affects the shadow detail.
  • Low deltaE values may not tell the whole story:
    • What does it mean when a manufacturer claims “deltaE <1”? Did they test HDR or just the SDR range? Which gamut did they test?
    • A deltaE claim of <1 suggests the monitor should be decent, but take these claims with a pinch of salt.
    • Note that there are also different deltaE metrics. In photography, we typically mean ΔE00 (CIE2000) when we simply say “deltaE”, but there are others. For example, CalMAN can optionally report ΔEITP, which is based on the ICtCp space and is designed to better reflect human perception in the HDR range, better handle wider gamuts, and help separate color error from luminance error.
  • Performance varies across the screen:
    • Corners are often darker and less accurate than the center (ASUS ProArt offers some way to compensate, but performance here is often just based on the quality of your hardware).
    • With mini-LED, each pixel is dependent on its neighbors due to the shared backlight. For example, this may cause halos visible in dark areas next to bright content.
  • Performance varies across time:
    • A display may vary a fair bit in the first 30 minutes after it is turned on, or if temperature varies in your environment (which is why it is recommended to let the monitor “warm up” before testing).
    • It may also change as the display ages (as you’ll see in my M1 results below).
  • Your monitor may include several important settings outside the scope of calibration
    • Many OLED monitors include a setting to limit peak brightness, and it may be enabled by default (such as in the ASUS PA32UCDM). This likely won’t change test results, as a monitor which is accurate at 1000 nits will likely be just as accurate when forced to never exceed 400 nits. And while the typical 10% test patch won’t trigger ABL during the test, limiting HDR may improve EOTF accuracy with real world images which trigger ABL.
    • Some mini-LED displays include options to control local dimming. This creates complex behaviors where pixel-level accuracy varies with neighboring content and changes over time.
    • There may be additional controls for sharpening or other factors outside the scope of our testing.
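As a toy illustration of the peak-luminance tradeoff described earlier (trimming the maximum red, green, or blue output until white balances), here is a sketch with made-up channel measurements; the luminance shares used are the standard Rec.2020 luma weights:

```python
# Made-up peak luminance contribution of each sub-pixel channel (nits), and the
# share of white's total luminance each channel must supply (Rec.2020 weights).
channel_max = {"R": 260.0, "G": 720.0, "B": 70.0}
white_share = {"R": 0.2627, "G": 0.6780, "B": 0.0593}

# The brightest *accurate* white is limited by whichever channel runs out of
# headroom first; pushing brightness past this point would tint the highlights.
peak_accurate_white = min(channel_max[c] / white_share[c] for c in channel_max)
print(round(peak_accurate_white))  # 990 -- red is the limiting channel here
```

With these made-up numbers the panel might flash a slightly brighter native white, but only by letting the highlights drift away from D65 – which is why calibrated peak nits often land a touch lower.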

These considerations are an important part of the reason why Hollywood professionals often pay $35,000 for a “reference monitor” (which more or less means one which is as accurate as possible vs the intended standard, such as mastering content for 4000-nits P3 D65 – though it also typically includes support for special features like built-in vectorscopes or SDI input ports).

Apple has done an outstanding job addressing these concerns with their XDR displays:

  • They are all held to a very high standard. There is not a single “XDR” branded display which is not outstanding.
  • Sustained luminance values are very high (even in the OLED XDR), so ABL is not a problem affecting accuracy.
  • Everything works great by default, and there are easy to use controls in MacOS for experts who wish to customize performance.

Summary of key findings for the XDR displays:

  • My 5-year old M1 MBP achieved excellent results after fine tune calibration, but was slightly out of spec when relying only on factory performance.
    • With the factory calibration, deltaE was 2.8 average (max 4.9). That isn’t tragic, but falls below expectations. Color was accurate, but the display was about 15% dimmer than expected across the range (tested peak of 835 nits vs expected 1000).
    • I consider this factory result good enough for most photographers (who would work in the default variable brightness mode and would have compensated by increasing brightness one tick). However, a Hollywood colorist would not accept the factory results. The most color critical users should test to validate accuracy rather than assuming full + fine tune achieves target on aging hardware.
    • Using fine tune calibration only (with a 1,000 nit D65 test patch), results were excellent. Peak brightness overshot at 1066 vs 1000 target, but color was dead on. Average deltaE 0.4 (max 1.0). CCT measured 6407 at peak.
    • Using full + fine tune calibration, deltaE only improved to 1.2 average (max 1.2). Peak brightness was near perfect (998 vs 1000 target) and RGB balance was great across the range (red drifted lower in bright values). CCT measured 6627K at peak.
    • We’ll consider this result in greater depth further below.
  • Excellent deltaE scores for color and gray tracking even with factory settings. These results would meet the expectations of the vast majority of photographers.
    • M4 Max achieves outstanding scores with the factory calibration:
      • deltaE 0.7 average / max 1.7 (peak error near brightest whites – achieving 968 nits vs 1000 nits target).
      • Running CalMAN’s ColorChecker test against P3 targets with the factory calibration showed average 0.5 deltaE (max 0.8). Color was excellent across the range.
      • 99.8% P3 gamut coverage (68.1% Rec2020 coverage)
    • My ~5 year old Pro Display XDR still achieved very good scores with just the factory calibration:
      • 1.3 average deltaE / max 2.6. Color balance was very good across the range. Peak white was 987 vs 1000 target.
      • Running CalMAN’s ColorChecker test against P3 targets with the factory calibration showed average 0.6 deltaE (max 1.4). Color was excellent across the range.
  • Accuracy could be further improved using Full calibration and/or Fine Tune.
    • M4 after full calibration
      • deltaE 0.4 average (max 1.2)
      • This is a measurable improvement, but at a level so trivial that most photographers wouldn’t even notice if you could view them side by side.
    • XDR using only fine-tune calibration:
      • When tuning from a 1,000 nit sample results improved to 0.8 deltaE (max 2.3). White balance was notably better across the range and peak luminance hit 996 vs 1000 target.
      • When tuning from a 100 nit sample, results showed 1.2 average deltaE (max 2.6). White balance was better across the SDR range but not over HDR. Peak luminance hit 1007 vs 1000 target.
      • Comparing the two, the 100 nit sample unsurprisingly showed improved accuracy over the SDR range (but worse HDR), while the 1000 nit sample showed nearly perfect HDR measurements and failed to benefit SDR.
      • Compared to factory calibration, fine tuning at 100 offered the most benefit to the SDR range (but degraded HDR), while fine tuning at 1000 improved HDR significantly and very slightly improved SDR.
    • XDR after full calibration:
      • 0.8 average deltaE / max 1.8. Peak 994 vs 1000 target.
    • These results would meet the expectations of even the most demanding photographers, but are optional and full calibration is likely not an option unless you know someone or can hire a TV calibrator to use their spectro with your display.
    • Fine tune is not worth the effort if you are not using a reference mode / custom XDR preset (ie if you wish to use brightness controls on your display).
  • Excellent gamut, with the M4 Max showing 99.9% coverage of P3 in HDR mode (bright HDR colors can be difficult to achieve, so this is outstanding).
  • Excellent uniformity on all displays (including that M1).
  • Halos are well controlled for the mini-LEDs but still visible; the OLED iPad is clearly superior in extremely dark detail.
  • EOTF tracking remains excellent even in a very large test window due to 1000 nits sustained performance. This is a huge advantage for photographers working with HDR, as it means that the actual performance with real images will track these test results closely (unlike most non-Apple OLED displays, which will likely dim quite a bit with larger test windows).

Aside from the accuracy, perhaps the most notable benefit is that XDR displays are extremely easy to use. Great results are the default, you literally don’t have to do a thing. HDR works automatically, SDR content looks great, and the display is very accurate (there is also zero fan noise). XDR-branded displays from Apple are excellent and are clearly the way to go for serious photography.

The Pro Display XDR is perhaps the best investment I’ve ever made in photography gear (the Nikon D850 is right up there too). It gave me capabilities far beyond anything I could previously do and was far more affordable than you would expect. Yes, it costs $6000 (plus tax) new for the monitor with stand. However, I bought mine used in like new condition with warranty remaining for only $3000. I could likely sell it today for a net $1000 – $3000, so I’ve risked almost nothing to pick up one of my favorite products of all time.

For reference, I also tested an M1 MacBook Air. It does not have an XDR-branded display, which means it offers no options for calibration or fine tuning. It also has very little HDR capability (400 nits peak and no local dimming to ensure deep blacks). The out of the box color was notably less accurate at 7185K, though acceptable for a general audience. As this device is primarily intended for SDR use, a custom ICC profile would be a good option and should easily get much closer to the ideal 6500K white point. 

My M1 MBP results:

As noted above, my M1 MBP fails to achieve ideal results when relying entirely on the factory calibration. Specifically, it is off by less than 1/3rd of a stop at the peak (15% dimmer in nits, but linear values overstate how this would be perceived by humans). This is still very good for most photographers. If you use the default P3-1600 nits mode (ie you use the keys to adjust brightness), you would never know the difference. Furthermore, the results were excellent after fine tune calibration (which can be done with a standard colorimeter, no fancy spectro required).

I have no serious concerns here for several reasons:

  • If you are using the default (variable brightness) preset, there is no loss of capability. You would just turn up the brightness and see nearly the same results even without calibration. 
  • Fine tune calibration can be easily performed to bring it back into the target range.
  • Even without calibration, it outperforms nearly every HDR monitor on the market. The ASUS P16 laptop is the only other one supporting calibration and 1600 nits. The rest of the options out there will have lower accuracy, less HDR headroom, and in all probability higher rates of quality issues with less generous support.
  • As a mini-LED, it will likely outperform most OLED in real use by avoiding ABL.

The best way to profile / ensure accuracy for your XDR monitor:

As you can see, there is no need to calibrate / profile XDR displays, though it may be helpful in some cases (outliers / old monitors) and you should definitely disable options which alter white balance. Here is a quick summary of the best way to ensure accuracy with your Apple XDR displays:

  • For any Apple display: Disable true tone / night shift (both are found under Settings / Display in iOS / iPadOS / MacOS)
  • iPhone:
    • You cannot improve further. There is no reference mode nor any option for calibration or profiling on iPhone (third party apps can offer this within their own app only, which makes it fairly pointless).
  • iPad
    • Accuracy may be improved by using reference mode (same as the HDR Video preset for XDR computer displays). It is found under System Settings > Display & Brightness > Advanced.
    • In the same area is “fine tune” calibration.
    • Only use reference mode when working under controlled ambient light (as it fixes SDR white to 100 nits, which will make the display too dim in bright ambient light).
  • Computer XDR:
    • Accuracy may be improved by using fine tune calibration.
  • To perform fine tune (for MacOS or iPad):
    • Enable reference mode in the target display (required at least temporarily to get a predictable SDR white to test, as well as to use the fine tune calibration):
      • For an XDR computer, switch to the “HDR Video (P3-ST 2084)” preset for MacOS (via the preset dropdown under System Settings / Display).
      • For iPad, just turn on the reference mode toggle as noted above (there are no other options for iPad).
    • If you have previously set a fine tune calibration, reset it before proceeding to test the display.
    • Connect a colorimeter to your computer
      • I recommend Calibrite Display Pro HL colorimeter (or Plus HL if you wish to future proof for >3000 nit displays). These may be used with the included PROFILER software.
      • When testing an iPad, you will connect the colorimeter to your computer (required to use the colorimeter), but will point the colorimeter at your iPad instead of the computer’s display.
    • Use Calibrite PROFILER to measure SDR white
      • Go to Utilities / Monitor Quick Check
        • select your Calibrite colorimeter
        • set the dropdown to OLED (M4+ iPad Pro) or mini-LED (all Apple computer displays and gen 5-6 iPad Pro)
        • click “next”.
      • Run the test:
        • At this point, you should see a screen with a target saying to “position calibrator in the circle”. If that’s on the display you are testing, just click next.
        • Otherwise, if you are running PROFILER on a display other than the one you are testing (such as for an iPad), you need to first show an SDR white test patch on the target display. You may open this page on that display and click the “show SDR white test patch” button below to show an SDR white test patch. This is the same target you’d see from the PROFILER software, but gives you a handy way to test a screen that isn’t running that software. Note that I have not bothered creating a 10% window here, as all XDR displays offer full screen brightness vastly exceeding SDR white without dimming (800 nits for iPhone 16e, and 1000 nits for other XDR displays).
      • The luminance and xy values shown here are what you need to enter manually into the Apple Fine-Tune dialog.
    • Open Apple’s fine tune calibration to enter the values you just measured:
      • in MacOS, go to System Settings / Display / Preset dropdown / Calibrate / Fine-tune calibration.
      • on an iPad, you’ll find it next to reference mode.
      • for the left hand target values, type in the values for 100 nits D65 white, which are:
        • luminance: 100 nits
        • x: 0.3127 (click to copy, then cmd-V to paste)
        • y: 0.3290
      • enter the values you measured with PROFILER in the right hand column and click enter.
    • Once you are done,
      • your MacOS computer will be more accurate in all modes – even if you switch back to the default “Pro Display XDR (P3-1600 nits)” preset where you can use the keys to change brightness.
      • your iPad will now be more accurate whenever reference mode is enabled (but it won’t help when reference mode is off)
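If you’d like a quick sanity check on the numbers before typing them into the Fine-Tune dialog, the comparison against the target is simple arithmetic. Here is a minimal sketch (the measured values are hypothetical; Apple’s dialog applies the actual correction):

```python
import math

# Fine-tune target: 100 nits D65 white (x = 0.3127, y = 0.3290)
TARGET_NITS = 100.0
D65_X, D65_Y = 0.3127, 0.3290

def fine_tune_report(measured_nits, measured_x, measured_y):
    """Return luminance error (%) and xy distance vs the D65 / 100 nit target."""
    lum_error_pct = (measured_nits - TARGET_NITS) / TARGET_NITS * 100.0
    dxy = math.hypot(measured_x - D65_X, measured_y - D65_Y)
    return lum_error_pct, dxy

# Hypothetical PROFILER reading from a slightly aged panel:
# a few percent dim with a tiny chromaticity shift.
lum_err, dxy = fine_tune_report(96.5, 0.3131, 0.3286)
```

A small negative luminance error with a tiny xy distance is typical aging; large deviations suggest re-checking the measurement setup before entering values.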

What could be improved with XDR displays?

Apple has the best HDR displays on the market. They are very accurate and easy to use. Yet there is always room for improvement and I’d love to see the following:

  • The Pro Display XDR is excellent, but beyond the budget of most photographers. The key opportunity would be some kind of prosumer-oriented HDR monitor (ie a 27″ Studio Display updated for serious HDR use). Specs similar to the MacBook Pro (ie 1600 nits / 1000 sustained) would be ideal so there is no compromise when docked.
  • I’d gladly welcome a refreshed Pro Display XDR. It’s still arguably the best HDR monitor for photography, but after six years it would be ideal to push the boundaries even further and add features commonly found on premium displays. A webcam, 120Hz refresh rates for smoother scrolling / panning, pass-through Thunderbolt for single-cable laptop connections, increased rec2020 coverage, and a higher zone count / dual-layer LCD / OLED to reduce or eliminate local dimming halos would all be great updates. I’d also love to see 3,200 nits peak to support 5 stops HDR (and get close to 4000 nits video mastering), but I’m not holding my breath there.
  • Better mirroring. Even when you connect two identical XDR displays, the one which is not selected as “optimized for” will show degraded results with reduced HDR headroom. This problem affects any display under MacOS, but XDR displays should be much easier to support here.
  • An option in custom XDR presets to allow 1600 nits with clipping rather than ABL would be nice. The current design forces you to choose between accuracy and use of the full range. In practice, most content hitting 1600 nits won’t be bright enough across the whole screen to require dimming. It would be ideal to simply clip to something between 1000 (the full screen limit) and 1600 (the peak limit) as required when the display limit is reached for bright content. This would satisfy hardware requirements with less impact to practical capabilities for real use.
  • It would be very helpful if the fine tune input had an option to just choose D65 values. These will almost always be the target, and typing manually is both cumbersome and creates risk of user error.
  • For those with supported spectros, offer an automated option to measure / implement fine-tune calibration. The manual test setup and typing could be eliminated.
  • For pros using “full” calibration, it would be ideal to get a summary of results (ie deltaE values). This is important for any calibration/profiling to confirm success or failure, and eliminates the need for a complex validation tool like DisplayCAL (or CalMAN, which is costly and requires another computer running Windows or a virtual machine).
  • Simpler custom XDR preset management. You cannot hide any of the system presets and likely don’t need many of them. There is no simple way to edit an existing preset (or even open it to confirm the settings). And deleting presets is very slow (the screen blacks out for ~15 seconds when you delete any preset, even if it isn’t the one in use).

While some photographers would probably love to see a reference mode on the iPhone, I’m not sure it’s needed nor practical (as phones need to adapt brightness constantly throughout the day).

There will certainly also be photographers who’d love to see the full calibration support extended to colorimeters. However, that’s not a trivial effort given the need for color matching functions for each colorimeter + display pairing. As the potential gains are small, this doesn’t seem like a great use of resources now. This seems like a task better suited to vendors like Calibrite when we have an ICC standard for profiling HDR. I’d rather see Apple invest in things like HDR support on AppleTV, iMessage, and iCloud.

How do these results compare to other options?

Apple has a very solid lead in HDR display capabilities, ease of use, and accuracy. But everyone’s needs are different. Here are some thoughts on the alternatives:

  • External monitors:
    • You can also achieve great HDR results with an ASUS ProArt monitor (or a 42″ TV), but setup is required, calibration is likely required (out of the box accuracy is not as high), and there are no reference modes – though ASUS is the only option for Windows, is much cheaper than a Pro Display XDR, and has some other advantages such as pass-through USB support.
    • For SDR-only work, there are great and very popular alternatives to the Studio Display from Eizo and BenQ. But once you dive into HDR, there’s really no going back to SDR – everything else looks flat by comparison.
  • Laptops:
    • There are a few very good HDR PC laptops now. But in general, most fall well short of the MacBook Pro’s display. Windows support is also lacking: there are no reference modes, the HDR content brightness slider can cause clipping issues, bugs are more common, and pre-installed 3rd-party tools may cause incorrect results.
    • I did limited testing on the Lenovo Yoga, which had very good reference white color balance at 6250K. As CalMAN Client3 cannot generate HDR test patterns for a PC laptop and Windows does not document its EOTF target (which varies under its SDR content brightness slider), I cannot comment on how well it tracks a target EOTF – but I believe most photographers would be very happy with this display’s factory results.
    • The only PC laptop supporting calibration is the new ASUS ProArt P16.
  • Tablets: I have yet to see any tablet offering HDR performance remotely similar to the Tandem Stack OLED offered by Apple. The M4+ iPad Pro has the best consumer computing display I have ever seen.
  • Phones are generally very accurate and well supported for HDR:
    • The Pixel 7 Pro and later are outstanding. My Pixel 8 Pro measures 6530K at around 100 nits – very accurate white balance.
    • The Samsung S23+ has excellent HDR hardware, but is not supported by the Android version of Adobe Lightroom and has historically had some strange HDR software quirks.

Conclusions

Apple has a commanding lead in HDR with its XDR-branded displays. There is simply nothing else like it for editing and viewing HDR photos on a computer display or tablet (Android phones do well). Both the level of HDR capability and accuracy are unsurpassed. The monitors and laptops run dead silent in normal use. And perhaps most critically, they are so easy to use. With zero effort, you get great HDR (and SDR) results by default.

Key points:

  • There is no need to calibrate / profile these XDR displays for photography.
    • My results met or exceeded typical photography standards with or without full / fine-tune calibration.
    • Even my worst case M1 MBP was within good ranges after fine tune – and approaching excellent if you use the variable brightness mode (or create your own custom XDR preset with an offset to the SDR luminance). It still outperforms all HDR displays other than the ASUS ProArt (thanks to its support for hardware calibration). I also suspect this is an outlier and that most XDR displays probably show even less aging.
  • If you want to ensure the highest accuracy (especially as your display ages), occasional use of the fine tune calibration is all you need.
  • Full calibration is unnecessary for photographers. It probably only improves results by 0.5 deltaE (an undetectable change for most photographers).

Side note – what’s the difference between a colorimeter and spectrophotometer?

For those who want to better understand why I’m using such an expensive piece of lab equipment for this testing…

Visible light is a mix of wavelengths. Humans can see from roughly 380 nanometers (violet) to 750 nanometers (red). Unless you are viewing a laser, the color you see is almost always a broad mix of wavelengths. Human vision is based on sensitivity to short, medium and long wavelengths. This isn’t the same as seeing red, green, and blue – but that’s not terribly far off the truth (and is the reason we have RGB monitors).

Each monitor has a unique SPD (“spectral power distribution”). This refers to the overall mix and intensity of specific wavelengths of light the monitor emits to make white. You can see examples below which show how red subpixels in the M1 and M4 MacBook Pros are extremely different. Though a human mostly won’t notice, these differences in SPD from one display to the next are the reason that you will almost never get a perfect match between two different monitors  – even if they are both perfectly calibrated and profiled. You can get close, but it is simply not possible to get every possible color to match when the underlying RGB spectra are not the same.

The graphs above show the wavelengths of light emitted for 100 nits D65 white from the Pro Display XDR (which has a red KSF phosphor) vs the M4 MacBook Pro (which changed to quantum dot). Both look like very similar whites to a human, but there is a limit to just how well you can get these two displays to match each other even after calibration or profiling.

We can turn up or down the level of blue, green, or red sub-pixels by changing the voltage. That’s the nature of calibration / profiling – altering the mix of red, green, and blue used to produce a given color.

However, the characteristic shapes are just scaled as we change those voltages. We can make the red sub-pixels in the Pro Display XDR brighter or darker, but they will always have that odd triple peak. The SPD is an inherent property of the hardware. Calibration and profiling can’t change the fact that the Pro Display XDR has a super-peaky red or that the M4’s blue is shifted about 10nm towards wavelengths we see as cyan – but our tools try to do the best they can.

These shapes are the foundation of the monitor’s gamut too – having narrower peaks means more pure colors to increase gamut.

There are several underlying technical reasons for these differences in SPD, such as light source (backlight, OLED emitters), phosphors, color filters, quantum dot films, etc. The results may be a function of cost considerations and technical targets. An ideal SPD probably has well defined peaks with minimal overlap for red, green, and blue (to produce wider gamut), controls the blue spike (to minimize eye strain), etc.

To measure and improve the accuracy of monitors, we have two fundamental types of measurement tools:

  • Spectrophotometers
    • These are very expensive devices (typically $2,000 – $50,000), so they are typically only used by serious professionals with a massive budget, certified TV calibrators, scientists, or display manufacturers.
    • These are very accurate and can measure specific wavelengths (with precision down to 1-4 nanometers being common). In other words, they can see the actual SPD as shown in the graphs above.
    • My CR-250-RH has a spectral bandwidth of 4 nm (covering the 380 – 780 nm range). This is the minimum performance needed for Apple’s full calibration (for example, you cannot use the $2k i1 Pro 3, as it only measures down to 10 nm precision).
    • This is not a device any photographer needs. I purchased one for some very specific reasons. I wanted to educate myself (factory results vs best possible, better understand various HDR display technologies, etc) and to facilitate better reviews of HDR monitors (as I test them frequently and often find color matching functions for colorimeters are missing or unclear even in CalMAN for the display I wish to test). As you’ll see below, it is very easy to use and does improve the accuracy of even Apple’s XDR displays. So if money is truly no object, go for it – but the gains are modest.
  • Colorimeters
    • This is a relatively low cost device (often $100-500 for consumer / prosumer grade devices).
    • They most commonly have 3 colored filters to detect light (though they might use 2-10).
    • This makes them relatively analogous to human vision, but their sensitivity to specific wavelengths is different and varies quite a lot from one device to the next.
    • They cannot tell you the SPD of the display – they don’t have nearly that level of precision. My colorimeter collects 3 data points per color (while my spectro collects 100).
    • As a result, they are only useful when you make some assumptions about the SPD of the display you are testing. That is why you would typically be asked which type of display you are measuring (mini-LED, LCD, OLED, etc).
    • That gets you in a reasonable ballpark, but limits accuracy. My spectro automatically knows the difference between the SPDs for the monitors above and anything else I might test – but my colorimeter can’t tell the difference and the only hint it gets is me choosing the generic “mini-LED” to interpret the data it collects.
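To make the difference concrete, here is a heavily simplified sketch of what a spectro does with its data: integrate the measured SPD against color matching functions to get XYZ, then chromaticity. All numbers here are toy values (real CIE 1931 tables cover 380–780 nm at fine wavelength steps); a colorimeter never sees this per-wavelength data at all, only three filtered totals.

```python
def spd_to_xy(spd, xbar, ybar, zbar):
    """Integrate an SPD against color matching functions to get xy chromaticity."""
    X = sum(s * cx for s, cx in zip(spd, xbar))
    Y = sum(s * cy for s, cy in zip(spd, ybar))
    Z = sum(s * cz for s, cz in zip(spd, zbar))
    total = X + Y + Z
    return X / total, Y / total

# Toy example: a flat (equal-energy) SPD with idealized, non-overlapping
# matching functions lands at the equal-energy white point, x = y = 1/3.
x, y = spd_to_xy([1.0, 1.0, 1.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0])
```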

You certainly don’t need the accuracy of a spectro for photography, but it’s helpful for making more definitive evaluations of displays like this. Apple’s XDR displays hold up very well under scrutiny with lab-grade measurement tools.

The best of HDR photography in 2025

HDR photography is rapidly moving towards being a mainstream technology. It’s supported by billions of devices, nearly all web browsers, popular editing software like Lightroom and Photoshop, social media sites like Instagram, and more. 2025 saw several important milestones, including numerous open source developments which may finally address challenges in distribution and allow HDR adoption to accelerate significantly in 2026.

Last year, I summarized key achievements in 2024. Let’s take a look at what’s happened since then, and what’s coming next in the year ahead.


There have been numerous developments in the past ~18 months which give photographers the opportunity to make serious use of HDR. The collective impact of these updates is that we now have excellent support to edit and export HDR images which can be viewed as HDR by a large audience (particularly on Instagram) and can be safely shared anywhere (even on devices lacking HDR support).

Notable HDR photography improvements in the past year include:

  • January: Web Sharp Pro v6 added significant new support for creating and sharing HDR gain maps, including support for Instagram and Threads.
  • February: LRTimelapse added the ability to create HDR time lapse videos from your photos in Lightroom.
  • February: The budget iPhone SE was updated with OLED. You cannot buy any phone or computer from Apple without HDR now.
  • March: ASUS PA32UCDM introduced the first 1,000-nit OLED with high color accuracy for under $1800.
  • March: Safari Tech Preview 215 added support for HDR images (including native PQ encoding in JPG, AVIF, and JXL).
  • April: Google Photos app v7.24 added support for Ultra HDR JPG
  • April: iOS 18.4 / MacOS 15.4 added support for Android XMP gain map encoding, which helps to support old Android images (those captured before the ISO standard).
  • June: Adobe’s Project Indigo app adds powerful controls for capturing HDR images on a mobile phone.
  • June: Halide Mark III beta adds “Process Zero” HDR support for natural HDR
  • June: MacOS Tahoe adds options in the Digital Color Meter app to measure HDR pixel values on screen (EDR and EDR Linear).
  • June: AVIF with ISO gain map support is now enabled by default in Chrome, Edge, Brave, Opera, and Safari.
  • June: Android adds support for encoding HDR images as HEIF gain maps (on premium phones).
  • June: PNG 3rd-edition specification adds official HDR support.
  • Q2: The Lenovo Yoga Aura became the first laptop to offer DisplayHDR TrueBlack 1000 performance. This marked the first time a PC offered a good HDR experience.
  • Q2: 4th-gen Samsung 5-layer tandem QD OLED enabled the launch of several bright OLED monitors in the $900-$1200 price range.
  • August: the Hasselblad X2D II became the world’s first camera with end to end HDR support, boasting a 1400 nits HDR display, 15.3 stops of dynamic range, support for encoding excellent-quality gain maps, and a resounding endorsement from the world’s strongest camera brand that HDR is the future of photography.
  • September: Safari added HDR support to MacOS Tahoe, as well as WebKit for iPadOS / iOS 26 (which means not only Safari, but all browsers on mobile such as Firefox now support HDR photos).
  • September: the open source library libvips merged gain map support (pull request 4645). This is important on its own, and helps power critical developer tools like SharpJS and WordPress Media Experiments.
  • October: Photoshop improved 32-bit support in curves, histogram, and color picker.
  • October: Lightroom adds HDR-specific options for “edit in” (Preferences / External Editing), including support to choose the Rec 2020 colorspace for SDR or HDR; HDR Limit now offers more granular control as a slider instead of a dropdown.
  • November: Krita began adding HDR support for Linux.
  • November: Black Friday prices hit new all time lows for several high-quality HDR monitors, making affordability better than ever.
  • December: The ASUS ProArt P16 laptop launched with a 1600 nits OLED display boasting Display HDR 1000 True Black and PANTONE certification, making it the first PC laptop on par with the Apple MacBook Pro for HDR performance and accuracy – while offering the perfect blacks of an OLED.
  • December: The open source library Sharp JS is adding support for gain maps (via libvips). This wildly popular developer library gets 22 million downloads per week via NPM and is rapidly growing. It’s supported by Gatsby and is the default engine when using Next.js’ built-in <Image> component (or its image optimization pipeline), which means it is enabling support in other downstream developer tools. This could enable many websites to easily adopt HDR support.
  • December 15: LR Android now supports HDR display and editing on all Samsung S24 and S25 models. As with all HDR I have seen on Android devices, headroom appears limited to 2.3 stops in spite of displays supporting up to 2600 nits (iPhone offers 3 stops with only 1600 nits and Apple MBP laptops support 4 stops with 1600 nits).
  • December: I added an AI chatbot to my HDR monitors, support, and info pages to help newcomers get set up.
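Several entries above quote headroom in “stops” – that figure is just log2 of peak HDR luminance over SDR reference white (100 nits in these comparisons). A minimal sketch:

```python
import math

def hdr_headroom_stops(peak_nits, sdr_white_nits=100.0):
    """HDR headroom in stops: each stop doubles luminance above SDR white."""
    return math.log2(peak_nits / sdr_white_nits)

# 1600 nits over a 100 nit SDR white is 4 stops (the MacBook Pro figure above).
# Note that software may cap headroom below the hardware limit, as with
# Android's ~2.3 stop cap despite panels capable of far more.
mbp_stops = hdr_headroom_stops(1600)
```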


Where is HDR photography headed in 2026?

There are many HDR improvements which seem likely to arrive in 2026:

  • HDR Monitor options should continue to improve significantly
    • Multiple YouTube channels with access to Samsung at CES mentioned the company set its sights on 4k and 5k OLED monitor panels for 2025, with higher peak brightness being a key objective for 2026. If they deliver, this would likely translate into several new HDR monitors by May (as this is the side of their business that sells panels to other monitor makers). We should find out very soon, with CES 2026 in early January.
    • Expanded PC laptop support for HDR. We finally got a couple of great HDR Windows laptops this year, and that trend is likely to expand in 2026.
    • Prices should continue to decrease year over year (while options grow).
    • To give you a rough sense of where things are going in terms of computer monitors which can keep up with capabilities we already have in phones and TVs… If you ask ChatGPT “what is likely CAGR for monitors supporting 1000+ nits between now and 2030?”, it estimates a roughly 23% compound annual growth rate (citing several sources). That feels very plausible to me. Options are rapidly expanding now, but from a small installed base (Apple MacBook Pro has performed at that level for five years, but we only just got the first comparable PC laptop in mid-2025).
  • Distribution support expands, helping to address the key barrier to adoption.
    • Given the recent support added to open source libraries (libvips and Sharp JS noted above), there is a good chance we start to see a nice expansion in the number of websites which support and preserve HDR when you upload photos. If you are requesting support from your favorite web platform, please be sure to note that I’ve linked many important HDR resources for developers at http://gregbenzphotography.com/hdr#developers.
    • The WordPress Media Experiments dev team is exploring adding libultrahdr to support HDR gain maps. This could significantly expand the ability to share HDR photos, as WordPress powers >40% of all websites. It is possible to workaround now (by using the “full” size option), but this would make it vastly simpler and add import support for derivative assets such as thumbnails.
    • Google’s open-source libultrahdr is expected to add AVIF & HEIF gain map support. This should help make it easier to transcode from iPhone captures (HEIC sources) and to encode much smaller and higher quality images (AVIF).
    • I anticipate adding AVIF gain map support to Web Sharp Pro.
  • AVIF begins to replace JPG
    • As discussed in which file formats to use for photography, AVIF is poised to replace JPG. It offers higher image quality at half the size. It’s a great format for SDR, and for HDR promises to give us gain maps smaller than today’s JPGs which is great news for faster loading of websites or sending smaller files.
    • 95% of internet traffic is now on a browser supporting AVIF, and all browsers which support HDR also support AVIF gain maps. While JPG gain maps are 100% safe, we are getting very close to the point where AVIF can start to replace it. The long tail takes a while, so this is likely to be a long process that starts late in the year. For those of you who are pioneers, this is a great opportunity to get your website to load much more quickly.
  • Wider gamut TVs. RGB mini LEDs are coming from Sony and TCL. Outside of laser projectors, this will be the first time consumer displays show nearly full coverage of the Rec2020 gamut (which is the ultimate target for HDR).

Beyond this, there are various rumored developments – but who knows what the probabilities are. So I’ll focus instead on my “wish list” of developments which would be high impact for HDR photography should they come to pass…


Wish list: what else do we need for HDR photography?

I have no idea if any of the following are projects which may be in development or under consideration, but they would help significantly accelerate adoption of HDR photography and increase its value (in no particular order):

  • Support for JPG gain maps in popular sharing platforms, especially Adobe Portfolio, SquareSpace, Wix, and Facebook.
    • These are the platforms I hear requested most often from my audience to get HDR support.
    • Please click on each above for a link to request support.
  • Simpler / better HDR in Windows.
    • You can get great results under Windows, but it isn’t nearly as simple as MacOS (where HDR is generally enabled by default and great results are either the default or much easier to obtain).
    • For example, there are 3 different brightness sliders: “brightness” for laptops, “HDR content brightness” is a secondary control for laptops, and “SDR content brightness” is a secondary control for external monitors. These sliders are not well described and hard to control. The HDR content brightness slider is especially terrible – I find it leads to clipping of HDR highlights on great laptop displays if you don’t adjust this slider, and it affects photos and videos differently. This is unnecessary – MacOS offers a single brightness slider regardless of which HDR display you are using – it’s simple, intuitive, and works very well.
    • HDR is not enabled by default even on great HDR laptops where the display is well known. This undermines the value of premium laptops and puts the burden on the user to be aware of these controls and how to optimize their display.
    • The Windows HDR Calibration app is more likely to degrade than improve your results (and seems to occasionally produce some absurd HDR headroom numbers).
  • Expanded support in the Apple ecosystem:
    • Support for HDR photos via AirPlay on the AppleTV. This would enable a very simple way to get content onto the massive HDR display nearly everyone has at home.
    • Support for ISO-encoded HDR JPG / AVIF in iMessage. Apple already supports their own JPG gain map encoding (ie images captured with the phone), and being able to easily share images edited with Adobe software would significantly help share the highest quality images edited by artists.
    • Support to retain ISO gain maps for images synced via Apple iCloud.
    • An updated Apple Studio Display monitor with 1000+ nits support would bring easy-to-use, high-quality HDR to a much larger number of creators. Apple has been a consistent pioneer and champion of HDR since 2018, and a monitor priced for a large audience would help cement their already considerable lead in HDR display hardware.
    • Full utilization of ISO gain maps. The current support is welcome, but not fully optimal / consistent with the ISO standard. This can result in loss of brilliance for images encoded for >3 stops of headroom and lower quality when adapting to displays with < 2 stops of headroom (such as iMac). I recommend Chrome or related browsers like Brave for those of you using MacOS. It’s great that we have HDR support in Safari 26 now, but we need further improvement to get the most out of computer XDR displays and to support legacy HDR displays with limited headroom.
  • Unlock full potential of Android displays
    • Android displays are currently limited to 2.3 stops of headroom in software. With the latest Samsung and Pixel phones offering 2600-3300 nits, the display hardware is capable of supporting 4 stops of headroom in controlled lighting (ie viewing indoors). By comparison, Apple iPhone offers up to 3 stops with 1600 nits peak HDR.
    • It would be ideal to see Android allow at least 3 stops of headroom. The hardware supports it, many computers already support 4 stops, and up to 6.6 stops of headroom is beneficial (which is already supported by some high end TVs which support the full 10,000 nit PQ spec – though there is no indication monitors or mobile devices will pursue > 4 stops anytime soon).
    • It would be ideal if the mobile operating system could also intelligently limit headroom when ambient light is very low and the brightness slider is set to low values (same for Apple, not an Android-specific concern for me). This would help mitigate the “HDR is too bright” concern voiced by those who tend to scroll a tiny, bright display in a dark bedroom (while still allowing full HDR in more favorable conditions). This was on my wish list last year for both iOS and Android and remains something I believe would be very beneficial for the HDR ecosystem to ensure users and apps do not resort to draconian measures like disabling HDR entirely. This needs to be addressed at the operating system level to ensure consistency, avoid redundant or problematic solutions by developers who probably do not often have sufficient expertise in these complex human factors concerns, and potentially to protect user privacy as leaking ambient light data may be a concern for “fingerprinting” where an individual is tracked by comparing several otherwise innocent pieces of data about the device.
  • Support in Adobe Bridge
    • Adobe has done an incredible job adding support to ACR and all 5 versions of LR. However, if you use ACR (but not LR) there is no simple way to browse and manage your work as HDR.
    • Please vote for HDR support in Bridge (if you have comments, please write in your own voice – not copy/paste).
  • Greater support for accurate color in monitors:
    • Manufacturers should ideally all offer a mode designed to offer the highest accuracy possible with the factory calibration. This is the default for smart phones, but computer monitors are all over the board in terms of design target and accuracy.
    • Apple and ASUS ProArt offer great color by default and BenQ has an optional “Display HDR” mode which brings factory results reasonably close. But beyond that, most HDR monitors are optimized for gaming and showing off lots of color in the showroom. They tend to have terrible color accuracy and could offer a better starting point by adding a display mode intended for accuracy. I would love to see Eizo get into the game, as they would surely do an outstanding job.
    • There is also an HDR working group at the ICC, and a standard for profiling monitors would be of immense benefit. However, such efforts are likely to take time (creation of a standard and then adoption in operating systems as well as 3rd-party profiling software).

And beyond this, we will certainly see many other announcements or enhancements for HDR throughout the year. There is massive support across a wide range of hardware vendors, developers, and individuals. We are clearly headed towards a time where HDR is a mainstream technology with widespread support.

 

How did we do on last year’s wish list?

If you review my wish list from 2024, we saw the following:

  • WebKit / Safari support: yes! As noted above, further enhancements are needed – but this is an excellent start and significantly expands support for HDR.
  • Transcoding support: As noted above, there was tremendous progress in open source libraries (libvips, SharpJS) and that moves us much closer to general support for sharing HDR on the web.
  • ICC profiling: We do not yet have a standard, but I feel good about where things are going. This is probably the most complex piece of the puzzle (tough scientific questions requiring broad industry agreement) – it will take some time.
  • More support for showing HDR photos on a TV: I’m disappointed we do not yet have support from major set top boxes such as AppleTV to easily share HDR photos on the great, large HDR display we all have at home. Hopefully we see movement here in 2026.
  • Solutions for the “too bright” concern at the operating system level: I have not observed improvement here, but continue to maintain that it would be ideal for HDR headroom to be limited when viewing in a very dark room at low brightness (especially on phones). This simple tweak would significantly help address concerns from those who scroll phones in bed, while retaining the benefit of HDR elsewhere (ie avoiding draconian solutions like disabling HDR generally).

So overall, we saw significant progress on most of these goals! The HDR ecosystem is expanding and improving at a rapid clip. We are at a point where transcoding (preservation of HDR when sharing images) is the key barrier to adoption. Once we have more outlets to easily share HDR, we should see a rapid uptick in adoption.

 

How to edit as PSB with Lightroom v15.1

Adobe just added an incredibly helpful update to Lightroom Classic (LrC) v15.1…

Lightroom has supported the ability to preview and manage existing PSB files for years now, but it was always missing a critical piece of the puzzle – the ability to create new layered edits as PSB files. With the old workflow, you could only choose TIF or PSD formats, which are limited to 4GB and 2GB respectively. That’s very limiting if you use smart objects for completely non-destructive workflows, exposure blending, make massive prints, have a 100 megapixel camera, etc.

Things get messy when you run into that limit. Photoshop will simply throw an error when you cross that threshold. Your only choice at that point is to either simplify the image (such as flattening layers) or to do a new save as PSB (in which case you’ll have to import it to Lightroom and will probably want to hunt down the old TIF to delete it).

Now with LrC v15.1, you no longer have to worry about this problem. Simply choose PSB for all your new edits (under Prefs > External Editing). When you right-click your RAW file and choose to “edit in” Photoshop, you’ll be saving a PSB file. No more file size limits.

And if you are ok saving files which are ~2x larger, you can also increase the speed of your file saves by 10-20x! Compression is typically required when managing the limits of TIF / PSD, but truly optional when saving as PSB. That image that takes 60s to save? You may only need 3 seconds now (because the bottleneck wasn’t writing to the drive, it was the compression calculations). In Photoshop, go to Prefs > File Handling and check “disable compression of PSD and PSB files”.

The PSB file format (also known as “large document format” in PS) works exactly like a TIF inside Adobe software, keeping all your layers and complex information – but it has no file size limit! Ok, there is a limit of 300,000 pixels per side (good enough for an 83′ print) or 4 exabytes (ie 1 billion times larger than the limits of TIF). We should be good for a while.
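For the curious, the limits quoted above are easy to verify with a little arithmetic. A quick Python sketch of my own (not anything from Adobe), using the figures mentioned above:

```python
# PSB limits vs TIF, using the figures quoted above
max_side_px = 300_000   # PSB maximum pixels per side
print_dpi = 300         # typical high-quality print resolution

# 300,000 px / 300 dpi = 1,000 inches = ~83 feet per side
max_print_feet = max_side_px / print_dpi / 12
print(f"Max print side at {print_dpi} dpi: {max_print_feet:.0f} feet")

tif_limit_bytes = 4 * 1024**3   # TIF: 4 GB
psb_limit_bytes = 4 * 1024**6   # PSB: 4 exabytes
ratio = psb_limit_bytes // tif_limit_bytes
print(f"PSB limit is {ratio:,}x the TIF limit")  # ~1 billion times larger
```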

Are there any downsides to PSB? You may find 3rd-party software does not support it as well, or that sometimes your preview does not look correct in a file explorer. That’s about it. File sizes are very similar in my experience when comparing TIF and PSB (including with “maximize compatibility”, which is required to see the preview in LrC). So if you only do basic editing in Photoshop and browse your images in Explorer/Finder often, you might not want to adopt this change. But this is a completely safe thing to do, even if you’re worried about other software (Affinity supports it just fine). You can always re-save a PSB as a TIF if you need to use the image elsewhere (you might need to flatten layers to fit within the limits of TIF, but you would have had to do that anyway, and now you keep a better PSB file for editing as needed). I have been using PSB for well over five years and never once had an issue – but I had many headaches back when I started files as TIF and then hit the limit.

 

Other updates in LrC v15.1:

This update contains a mix of other nice quality of life improvements including:

  • improved quality for previews in the import dialog
  • expanded support for new cameras, lens profiles, and tethering support for more Leica cameras
  • batch renaming may be undone via ctrl/cmd-Z
  • bug fixes and more

Compress your RAW files by 90% or more with lossy DNG (visually lossless)

Last year I shared a tutorial about an option in Lightroom which can shrink your RAW files by 90% or more. In this updated tutorial, I want to share a much simpler workflow for compressing the images, as well as updates on what I’ve learned after using lossy compression for over a year.

This is an incredibly powerful tool to help get the most out of a laptop with a small drive, save money on external storage, etc. After working with lossy compression for a year, I feel very confident using it on a wide range of images. I just compressed multiple years of images, which freed up 3 TB of storage on my RAID drive. That has helped me personally avoid probably $1000 worth of hardware costs (as I would soon have needed to replace six hard drives and my external SSD backup without freeing up this space). Most of you won’t be in that situation, but you may well offset much or all of the cost of your Adobe subscription by minimizing the cost of laptops, external drives, and backups.

In my original tutorial, I shared a workflow to use the export dialog to create the files. That is still the only option if you wish to create lower resolution DNG files (and can shrink the images by up to 98%, though obviously this degrades image quality when you change the resolution). However, if your goal is simply to save space with an image which is visually lossless (even for making large prints), there is a simpler option I had previously overlooked.

CAUTION: This is provided for your education only – incorrect use of this information (or potential software bugs) may result in loss of data. For example, failure to filter to raw files could flatten TIF or PSB files. Bulk conversion of images increases the risks. Do limited testing with duplicate images to understand these options and results before you try any mass conversion of your images. You are responsible for your own actions.

To replace your existing RAW / DNG with a “lossy” (visually lossless) DNG, this is the simplest workflow:

  • Filter for valid images
    • Only real images, not virtual copies (LR has a bug that shows an error for virtual copies, when it should simply ignore them since they are not real files)
    • Ideally, filter to file type = “Raw” or “Digital Negative / Lossless” (you should never convert layered files like TIF or finished exports like JPG, though the setting below should protect you even if you don’t filter by image type).
    • You may optionally filter for images with zero stars if you have any concern about converting your best images. You’d get most of the compression benefit without altering your best images.
    • If you have used LR’s merge to HDR / pano, filtering by file type will also avoid re-compressing those outputs, which are already lossy DNGs (see note below).
  • Go to Library > Convert Photo to DNG and select the following options:
    • “Only convert RAW files” must be checked (you do not want to flatten TIFs, etc)
    • “Delete originals after successful conversion” should be checked (otherwise you won’t save space, or would have extra cleanup work)
    • Compatibility = “Camera Raw 16.0 and later” (this uses the new high quality JXL compression; older versions are not as compact nor as high quality)
    • JPG preview = “medium size”
    • “Embed Fast Load Data” should be checked
    • “Use Lossy Compression” must be checked to get the size reduction.
    • Do NOT check “Embed Original Raw File” or you will not save any space.

Note that once you have set these options in the dialog, they are sticky and become the default going forward. If you forget where the command lives, just search for “DNG” in the Help menu and run it with your previous settings.

Also note that if you have used LR’s merge to HDR / pano, the output is already a lossy DNG but will be compressed again if you select it (even when choosing “only convert RAW files”). The new result is nearly the same size and visually not degraded – but it would still be best to filter file types as noted above as this would avoid double compression of a lossy DNG.

 

Creating lossy DNG during import:

It would be ideal if Adobe offered this as a simple checkbox option during import. This would avoid the risk of picking the wrong settings, avoid re-compressing images merged to HDR / pano, make for a simpler workflow, save time, and enable users with very limited space to import more images (as you would need about 11x more free working space to import normally and then compress than if you imported directly as a compressed image).

Please vote for an option to import visually lossless DNG if you would like to have such a simple option (if you add any comment, please type your own thoughts – copy / paste never sounds sincere). Please note that the proposal is to create an option and have it off by default. I fully appreciate that most people will not feel comfortable using lossy (certainly if you have not spent significant time with it to see that the results are excellent) – so the proposal is to leave the default exactly as it is (ie, always lossless unless you consciously choose another option to save space).

 

Lossy DNG is a great choice if:

  • You would benefit from significantly smaller files to save money on hard drives, work with limited storage on a laptop etc.
  • And the caveats below do not affect your images (ie when the lossy file would not cause a visible loss of quality). Most images will show no visible loss of quality even for large prints from lossy files.
  • You do not expect to enlarge your image more than 4x native resolution, or to use exotic de-mosaicing options like DXO DeepPRIME (I would have used this on all the weddings I used to photograph – it would have probably saved 50-100 GB per wedding).
  • You are archiving old work (especially high volume events such as sports or a wedding).

 

You should avoid lossy DNG if:

  • You need to use the files with versions of LR or ACR which are more than 2 years old (this is unlikely as you cannot install them anymore and any current subscriber can install the latest).
  • If you use advanced de-mosaicing or wish to protect for the option in the future.
    • De-mosaicing is the process of converting the sensor inputs (usually a Bayer pattern: 1 red, 2 greens, and a blue) into a regular RGB pixel. A lossy DNG has already done this conversion, which results in excellent quality using the best conversion available today – but theoretically future conversion tools may do even better (for pixel-level details such as color / luminance noise reduction, moire, and capture sharpening).
    • This includes tools like DXO DeepPrime which only support the mosaic data.
  • Avoid compressing high ISO images before you use AI Denoise.
    • While Adobe now supports it even on lossy files, there may be color errors. For example, in the deepest green shadows of trees shot at ISO 6400 in a night shot, I see patches where the trees show magenta.
    • However, if I run AI Denoise before converting to lossy DNG, the lossy DNG is very good and highly consistent with using AI Denoise on the original RAW.
    • In one example test image, the original NEF was 59.1MB, the lossy was 26.8 MB if denoise was applied before converting to lossy, and the lossy was 21.3 MB if lossy conversion was done before any AI Denoise. So the lossy DNG is different if AI Denoise is active at the time the lossy DNG is created (though the amount had no impact: anything from 1-100 gave the same result, and it could be changed after lossy conversion).
  • Avoid compressing if you use Adobe’s super resolution.
    • There are artifacts when using super resolution on the lossy DNG which would be visible in a large print.
    • If you turn on super resolution before converting to lossy, there is no concern. However, there is no compelling size advantage – the lossy DNG is huge (200 MB if lossy is created with super resolution active vs 21 MB without it). Applying super resolution to the NEF results in 238MB of data (as a 179MB .acr file is created).
    • However, I see no such issue when using Topaz Gigapixel on the lossy DNG. The results are very clean. When comparing a 46MP image, I would be able to print at 100″ x 67″ (300 dpi) and see no difference. At double that size (200 x 134″), I see differences due to AI artifact, but I would not consider either version superior, just different.
    • Bottom line: lossy DNG is fine for massive prints if you use Topaz Gigapixel, but I would skip lossy if you rely on Adobe’s super resolution (of course, it may improve in time to work better with the lossy data like Topaz does).

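To put those file sizes in perspective, here is a trivial Python sketch of my own (not part of any Adobe tool) that computes the space savings from the figures in the test image above:

```python
def percent_savings(original_mb: float, compressed_mb: float) -> float:
    """Return space saved as a percentage of the original file size."""
    return (1 - compressed_mb / original_mb) * 100

# Sizes in MB from the example test image above
nef = 59.1
lossy_after_denoise = 26.8   # AI Denoise applied before lossy conversion
lossy_before_denoise = 21.3  # lossy conversion done before AI Denoise

print(f"Denoise first: {percent_savings(nef, lossy_after_denoise):.0f}% saved")   # ~55%
print(f"Lossy first:   {percent_savings(nef, lossy_before_denoise):.0f}% saved")  # ~64%
```

Either order still cuts the file roughly in half or better; the 90%+ savings mentioned earlier come from reduced-resolution exports, which this workflow does not do.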
 

An alternative option to create a compressed image and retain the mosaic data:

Stan N (per comments below) contacted me to suggest another way to create compressed DNG files and retain the mosaic data.

You first need to install the free Adobe DNG Converter (v18 is latest currently). Then you can run it via command line interface with some optional flags.
Under macOS, do the following in Terminal:
  • “/Applications/Adobe DNG Converter.app/Contents/MacOS/Adobe DNG Converter” -lossyMosaicJXL -mp -fl -d {{destinationFolder}} {{sourceImage}}
  • Where you should leave a space after “-d” and then drag a destination folder from Finder to Terminal, add a space, then drag and drop your source image(s).
  • -lossyMosaicJXL creates the desired format.
  • -fl keeps the “fast load” data.
  • -mp allows it to process multiple files in parallel (ie much faster)
  • -d indicates the output folder
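If you convert in batches often, the command above can be assembled in a script. This is a sketch of my own (the converter path and flags are exactly those listed above, but the `dry_run` option and the example folder paths are hypothetical – test carefully on duplicate images first):

```python
import subprocess
from pathlib import Path

# Path to the free Adobe DNG Converter app (v16+ required for -lossyMosaicJXL)
DNG_CONVERTER = "/Applications/Adobe DNG Converter.app/Contents/MacOS/Adobe DNG Converter"

def build_command(dest: str, sources: list[str]) -> list[str]:
    """Assemble the converter command with the flags described above."""
    return [DNG_CONVERTER, "-lossyMosaicJXL", "-mp", "-fl", "-d", dest, *sources]

def convert(dest: str, sources: list[str], dry_run: bool = True) -> None:
    cmd = build_command(dest, sources)
    if dry_run:
        print(" ".join(cmd))  # inspect the command before actually running it
        return
    subprocess.run(cmd, check=True)  # raises if the converter reports an error

# Example: convert every NEF in a (hypothetical) folder, dry run by default
raws = [str(p) for p in Path("~/Pictures/test").expanduser().glob("*.NEF")]
convert("/tmp/dng-out", raws)
```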

This may offer additional value for you, but you should test carefully. My ISO 6400 image shows some changes in very dark regions. And while the output works fine in LrC, I was unable to use it with DXO DeepPRIME (v5), as they apparently do not support this encoding. It opens as a very pink image and outputs a mangled, unusable result. Hopefully support for this lossy yet mosaic format will grow in the future to offer the best of both worlds (I sent a bug report to DXO support).

Great deals for Black Friday 2025

During Black Friday, you can save 25% off all my software and courses (Lumenzia, Web Sharp Pro, my new Non-destructive Smart Objects course, etc) with discount code BF2025 (through Dec 1). Even better, you can stack that discount on top of the bundles. For example, you can get the Ultimate 7-Course bundle for $317 off the cost of buying all my courses individually – and you’ll get an 8th bonus blending course as well as bonus source images from several of my most popular YouTube tutorials!

 

There are also some great deals elsewhere this week for Black Friday on some of my favorite tools for photography (be sure to check back, as I will continue to update this page as deals are likely to get even better over the coming week):

  • HDR Monitors:
    • $900 off the ASUS PA32UCXR
      • This is the best price I have ever seen, and one of the best mini-LED HDR monitors on the market, offering: 4 stops more dynamic range than a standard monitor and a built-in colorimeter for hardware calibration.
      • See my full review for details.
    • $400 off the ASUS PA32UCDM
      • This is the best price I’ve ever seen and an outstanding OLED HDR monitor, offering 3.3 stops of headroom and support for calibration with a supported colorimeter such as the Display Plus HL below.
      • See my full review for details.
    • ==> See recommended HDR monitors for many other great HDR deals and details on the best options.
    • $80 off the Calibrite Display Plus HL colorimeter (it supports ASUS ProArt calibration software and SDR / print profiling now, and is future-proofed for profiling HDR up to 10,000 nits when an ICC standard is available).
  • Computer / hardware:
    • Up to $800 off Nikon cameras and lenses
    • Up to $400+ off the M4 / M5 MacBook Pro
    • If you get any laptop with limited internal SSD, you can optimize storage by using visually lossless DNG compression and getting a fast external SSD. I recommend one of the following (all have been very fast and reliable for me, and all work with a single cable providing both data and power):
      • Vectotech (1-16TB): I have three of these and have used them for a long time without issue.
      • Glyph (1-16TB): Any 16TB SSD is pricy, but I can take a copy of all my work with me (I have 20TB of data in total, much of which is normally on a very bulky RAID drive). It comes with a ruggedized rubber grip and requires only a single cable for both data and power.
      • Sandisk Extreme Portable (1-8TB): Very compact. I find this is a great option for backing up the computer, or adding more storage if you don’t have enough internal storage on the laptop (always be sure to back up your drives).
      • Samsung T5 EVO (2-8TB): I have the least experience with this drive, but it is working great and the price is very attractive.
  • Software:
    • $30 off DXO Nik Collection 8, including amazing tools like Nik Color Efex Pro (see my tutorial).
    • $30 off DXO PureRAW, which I love for cleaning up shadow details in foregrounds of high ISO photos (see my tutorial).
    • 20% off PeakTo from CYME. This AI-based organizer lets you easily find any image in your catalog (see my tutorial).

Disclosure: This article contains affiliate links. See my ethics statement for more information. When you purchase through such links, you pay the same price and help support the content on this site.

Greg Benz Photography