NEW: RAW adjustment layers in Photoshop

Before we get to the tutorial below, I want to share that there is a new Photoshop Virtual Summit (aimed at beginners). I’m not teaching at this event, but I had an incredible time at last year’s LR Summit and highly recommend it. The all-star list of PS instructors includes Blake Rudis, Aaron Nace, Dave Cross, Matt K, Glyn Dewis, Colin Smith, and several other top educators.


(affiliate link)
 

Camera Raw Smart Objects are one of the most important pillars of non-destructive editing in Photoshop. They let you go back and change anything in your layered image, all the way back to the original RAW edit. Yet they have two pain points: they can slow down your workflow, and you can’t see how your layer will blend into the image while making edits. The new adjustment layers in Photoshop beta (v27.4) help address those shortcomings by putting some of the most popular raw adjustments directly into native Photoshop.

Photoshop Raw adjustment layers now include:

  • color and vibrance – this includes temp, tint, saturation, and vibrance
  • clarity and dehaze
  • grain

These adjustments work exactly like making global adjustments to your image via Filter / Camera Raw Filter. This layer-based approach offers several benefits:

  • better local adjustments:
    • vibrance is not available as a local raw adjustment.
    • global versions of temp, tint, and saturation are better than the local raw adjustments (as shown in “not all RAW edits are the same“).
  • higher quality masking: 
    • PS offers much more precise control for advanced masking techniques like luminosity masks.
    • PS masks can use feather / density / opacity for easy refinement.
  • much faster editing workflow than going in and out of ACR
  • preview the layered image while making adjustments.
    • If your filtered layer is below other layers, the final result in Photoshop will often look very different from the view you see inside ACR.
  • significantly smaller files:
    • When using the raw filter, you often need to select one or more layers and “convert to smart object” to work non-destructively or create a new mask.
    • For a 46 megapixel camera like the Nikon D850 or Z7 II, each use of “convert to smart object” increases the size of a compressed PSB by about 0.5 GB!
  • blend modes are simple to apply (vs being cumbersome or unusable for many photographers when working with the raw filter).

Note that grain adjustments always use a random seed. If you apply the raw filter to two copies of the same image with identical settings, the grain pattern will differ (though it looks aesthetically very similar). The same goes for grain adjustment layers when you create a new one (a duplicated layer, however, keeps the same seed). So if you try to compare these directly, you’ll see they are technically different. That doesn’t matter, and it’s no different from how grain has always worked in the raw filter: it’s a little random to ensure each image is unique (just like grain in film).
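That seed behavior can be illustrated with a toy model (this is just a Python analogy for the concept — it is not how Photoshop actually implements grain):

```python
import random

def make_grain_layer(seed=None):
    """A new grain layer draws a fresh random seed; a duplicate reuses one."""
    if seed is None:
        seed = random.randrange(2**32)
    rng = random.Random(seed)
    # a tiny stand-in for a noise pattern
    return {"seed": seed, "noise": [rng.random() for _ in range(5)]}

original = make_grain_layer()
duplicate = make_grain_layer(seed=original["seed"])  # same seed -> same grain
fresh = make_grain_layer()                           # new seed -> new grain

print(original["noise"] == duplicate["noise"])  # True
print(original["noise"] == fresh["noise"])      # False (with near certainty)
```

Same settings, different seed: statistically similar grain, technically different pixels.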

 

Of course, these are just some of the raw adjustments, and they operate on the rendered layer rather than your raw data (working from raw data offers the best results where possible). These new layers work best in combination with Raw Smart Objects (get the speed where you can with the layers, and use the raw filter otherwise). Here are several tutorials to help you get the most out of RAW smart objects and the camera raw filter:

 

Where might this go next? It would be great to see PS add adjustment layers for:

  • texture – this would naturally belong with the new clarity and dehaze adjustments.
  • whites / highlights / shadows / blacks
    • you can do this now by adding luminosity masks on a curve, levels, or brightness / contrast layer
    • but it would be even simpler if you could create one of those adjustments and then add a luminosity mask as needed to refine it further (the masks will always be more powerful, but a more targeted adjustment makes for a better starting point).
  • curves
    • It does not make sense to change the existing curves layer or add another type of curve (that would be confusing and unnecessary).
    • However, it would be great to see the “refine saturation” slider make its way into the PS curves. You can use “normal” or “luminosity” blend mode now, but that’s rather crude, and a slider would offer much better results to get just the right color adjustment.

The latest HDR displays at CES 2026

I just got back from a couple of exciting days at CES 2026. If you aren’t familiar with the Consumer Electronics Show, it is one of the largest trade shows in the world (148K attendees) and is where new monitors and TVs are often announced. I’ve attended the last three years to see the latest HDR (high dynamic range) displays on the market.

The trends continue to look very good for the future of HDR photography. There were several notable trends, especially significant expansion of 1,000+ nit monitors, much greater choice for 27″ HDR, numerous TVs (and even a monitor) offering 100% Rec 2020, and brighter displays (up to the full 10,000 nit HDR target). That ultimately means a larger range of outstanding HDR displays, likely price competition, and a clear signal that there is significant ongoing investment in the space. With these changes and recent price drops, I’ve updated my recommended HDR monitors page.

Before we get to the details, it’s probably helpful to provide some background. Here are a few key things to know:

  • HDR here refers to new display technology, not the old “HDR” many of you know (which was just a software trick to offer a different result on limited “SDR” or standard dynamic range monitors). SDR displays offer only 8 stops of dynamic range, while HDR displays offer up to 12 stops (and the latest premium TVs offer just over 14 stops). That compares very well with the 14 stops of dynamic range humans can see in static conditions, and the 14+ stops of dynamic range our cameras have long offered. One of the key reasons we typically say “the picture doesn’t do it justice — you had to be there” is that we’ve been using terrible monitors. That is quickly changing.
  • OLED is a popular category of HDR technology. Each pixel emits its own light, which means it offers perfect blacks (no haloing). However, it’s very hard to make a single pixel very bright, but the latest generation of OLED displays are quickly improving on that front.
  • Mini-LED is the other popular category of HDR technology. Each pixel is created by an LCD layer blocking light from a shared backlight. It’s very easy to make this larger backlight bright, but it makes it more challenging to ensure perfect blacks in pixels near brighter ones. Modern mini-LEDs tend to do an excellent job managing halos (thanks to high zone counts and advanced software/dimming to control them).
  • Rec 2020 (aka BT.2020) is the ultimate color gamut target for HDR. It includes all of the P3 and Adobe RGB colors. It also includes a significant range of printable colors not offered by either of those gamuts, offering the potential for stunning electronic display and more accurate soft proofing of prints (especially in the green/cyan/blue/magenta color range).

 

1,000+ nit monitors (to provide 3+ stops of HDR headroom):

There are several considerations when buying an HDR monitor, but the most critical is peak brightness. That’s what gives these displays excellent contrast, highlight detail, and color, and it translates into “HDR headroom” (a measure of how much HDR capability the display offers). At 1,000 nits or more, you have 3+ stops of HDR headroom and a display that offers significant wow factor. Numerous companies announced 1,000+ nit displays around this year’s CES.
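To make “HDR headroom” concrete: it is simply how many times peak brightness doubles above SDR reference white. A quick sketch (the ~100 nit SDR reference is an assumption typical for controlled lighting; your OS may use a different value):

```python
import math

def hdr_headroom_stops(peak_nits, sdr_white_nits=100):
    """Stops of headroom = doublings of peak brightness above SDR white."""
    return math.log2(peak_nits / sdr_white_nits)

print(round(hdr_headroom_stops(1000), 2))   # 3.32 -> the "3+ stops" above
print(round(hdr_headroom_stops(1600), 2))   # 4.0  -> Apple XDR peak
print(round(hdr_headroom_stops(10000), 2))  # 6.64 -> full PQ target
```

This is why 1,000 nits is the threshold for serious HDR: it buys you a bit more than 3 full stops above SDR white.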

The biggest trend was bright OLED monitors, but there were also several new mini-LED displays. There was even the first RGB mini-LED monitor, which offers 100% coverage of Rec2020.

ASUS:

ASUS is always a leader, especially with their unique ProArt lineup (the only monitors supporting HDR calibration other than Apple).

  • P16 laptop (the premium model: H7606WX):
    • 1,600 nits peak (DisplayHDR 1000 True Black)
    • This is the first PC which I would consider a true Windows alternative to the excellent MacBook Pro (Lenovo also has an excellent HDR display, but options to upgrade specs beyond the display are limited)
    • PANTONE Validated, Anti-reflection display, 120Hz refresh rate, 100% DCI-P3
    • Touch screen with stylus support
    • Unlike the ProArt external monitors, I am told the laptops do not offer similar software calibration.
  • Zenbook Duo 2026 (UX8407)
    • This unique laptop offers TWO 1,000 nit displays
    • Each is a 14-inch 3K OLED touchscreen, with a very small gap between them (this year’s hinge mechanism is smaller than the 2025 model’s).
  • PA32UCDMR-K and PA27UCDMR (OLED)
    • These are similar 32″ and 27″ 4K displays.
    • 1,000 nits (DisplayHDR 400)
    • ΔE < 1
    • 240Hz makes these potentially very interesting for gamers who also demand serious color for photography, and offers better scrolling/panning than the Pro Display XDR (which is limited to 60Hz).
    • The “-K” suffix on the 32″ model means it also includes an external colorimeter, in this case their new ProArt CaliContrO MCA02. It can measure up to 10k nits, control the on-screen display of the ASUS monitor, and be used to control apps (including Adobe software).
  • PA279CDV
    • 1,000 nits (not yet specified, but I expect DisplayHDR 400 True Black)
    • 27″ 4K
    • 120 Hz
    • KVM (lets you share the display between multiple computers by switching keyboard/video/mouse)
    • ΔE < 2
  • Gaming displays:
    • PG27UCWM
      • 27″ 4K RGB stripe tandem OLED
      • 1,000 nits (DisplayHDR 400)
      • 240Hz (or 480 at half)
      • 99% P3
      • ΔE < 2
      • This is one I hope to try in person. The specs are very interesting; however, I was unable to show my own content on it at CES to evaluate it for photography.
    • ROG Swift PG34WCDM
      • DisplayHDR 400, 1300 nits peak
      • 34″ curved display with 3440 x 1440 resolution. This is not an optimal form factor for photography.
      • 240Hz
      • Uses Samsung’s next-generation (5th gen) QD-OLED: an RGB stripe layout (clearer text) with no white sub-pixel (better color volume)

 

The rest of these displays are generally gaming oriented. Many of them may be excellent choices for HDR photographers on more of a budget, but they won’t offer the same level of color accuracy for photography as an ASUS or Apple display.

LG:

  • LG 27GM950B 5K
    • 27” 5K mini-LED with 2304 zones
    • 1250 nits (DisplayHDR 1000)
  • 32GX870B
    • First 32″ 4K Tandem WOLED Monitor
    • 240Hz (up to 480 at 1080p)
    • VESA DisplayHDR™ True Black 500
    • 4K, 240Hz

TCL:

  • TCL 27R94 ($800 at Best Buy)
    • DisplayHDR 1400
    • 27″ 4K
  • TCL 32R84 ($750 at Best Buy)
    • 1,500 nits (DisplayHDR 1400 w/ 1400 zones)
    • 32″ 4K, 165Hz, deltaE<2

Note on optimal settings for these TCL monitors: reduce game mode’s “dark part brightening” (otherwise blacks are too bright), and go to color / user for RGB gain controls to get a more neutral white balance (or use the standard mode).

 

Samsung:

Samsung had no monitor announcements at CES. However, their displays power numerous 1,000 nit OLEDs and it is very likely they’ll have something interesting to show later in the year.

 

This isn’t even a complete list, with other offerings from AOC, the RGB mini-LED detailed below, and more. With all these announcements in just the first month, 2026 is shaping up to be an excellent year for HDR monitors.

 

100% Rec2020 (via RGB mini-LED and “super quantum dots”)

A very clear theme at this year’s show was expansion to cover “100%” of the Rec2020 gamut. I have that in quotes as truly covering every little corner probably requires the display to emit pure spectral colors. A laser projector can do that, but these new displays aren’t quite that precise. Regardless, these displays clearly have a much wider gamut than almost anything out there, and 100% is not a crazy marketing claim (if there’s a gap, it’s likely trivial). We also need to see how they hold up to closer inspection with controlled tests (to ensure we won’t have concerns with metamerism), but each and every one looked fantastic to me in person with the demo content at the show. This is a very exciting development and the entire industry seems to agree.

The main technology behind this is RGB mini-LED (confusingly marketed by some companies as “micro RGB”, but these still use a shared backlight and are not comparable to the exotic and super expensive micro LED displays we might see several years from now). This is similar to mini-LED, but now each zone has separate red, green, and blue backlights (instead of white). This allows much purer color for a much wider gamut. This approach will be used by all of the top 5 TV brands. There was also a computer monitor using it, which bodes well for this technology trickling down to computers.

TCL (the 3rd most popular brand globally) is taking a two-track approach. They are launching an RGB mini-LED, and additionally offering an even higher-tier option with what they call SQD mini-LED (“super quantum dots”). This uses a mini-LED backlight where each zone emits a single color, which is then converted to the final red, green, and blue sub-pixels via quantum dots. Like RGB mini-LED, it offers a massive increase in gamut, along with some additional benefits. It avoids the risk of “color cross-talk” (with RGB mini-LED, any halos have a potential color bias, though I saw very little issue in the RGB mini-LEDs at the show). It also likely enables higher peak brightness, as it’s easier to create brighter backlights and high zone counts when the backlight only needs to generate one color instead of three. This is the most impressive technology I’ve seen in a TV you can buy now (I will be giving serious consideration to buying the TCL X11L).

RGB mini-LED monitors:

One monitor has already adopted RGB mini-LED: the HKC Monitor M10 offers:

  • mini-LED with 1596 local dimming zones (4,788 RGB dimming zones)
  • 1600 nits peak (DisplayHDR 1400)
  • 32″ 4K
  • 100% Rec2020
  • 165Hz refresh rate (330 at 1080p)
  • Availability outside China is TBD, but I think we should anticipate Chinese companies helping to push the envelope on both pricing and capability globally in the years to come.

 

RGB mini-LED TVs:

  • Samsung “Micro RGB” will be offered in 55, 65, 75, 85, 100 and 115-inch models.
  • LG MRGB95: expected to offer 3,000+ nits peak brightness, 1,000+ luminance dimming zones (sizes: 75, 86, 100 inches).
  • TCL Q10M Ultra offers ~9,000 nits, 2,912 luminance dimming zones (8,736 RGB dimming zones), Dolby Vision 2 (sizes: 85, 98, and 115 inches).
  • TCL Q9M offers ~2,000 nits, 960 luminance zones (2,160 RGB zones).
  • Hisense UR9 & UR8 offer Dolby Vision 2 (sizes: 55, 65, 75, 85, 98, and 100 inches).
  • Sony announced last year that they will offer an RGB mini-LED this year (likely announced March–April). This is expected to be branded as the Bravia 10 and positioned as their top TV (above OLED). They may call it “True RGB” (a term they’ve trademarked).

Manufacturers have not shared all specs, so I’m only listing what I’ve been able to find.

Note that zone counts vary by display size, so I tried to use the number for 85″ models. 

 

SQD mini-LED:

The TCL X11L stole the show for me with an impressive list of specs:

  • 10,000 nit peak brightness
  • 20,736 luminance zones (these are true zones; this is not an RGB backlight where you might divide the number by 3).
  • While it has 100% Rec2020 coverage like the RGB mini-LEDs above, there is no color bias to the backlight and therefore no color cross-talk.

This means it can in theory cover the entire HDR target specification: 10,000 nits (PQ) and Rec2020. With its incredible zone count, it should show almost no detectable halos, for very OLED-like performance. In reality, we’ll have to see how quickly ABL (automatic brightness limiting) kicks in with real content (these peak values are achieved on a 2% window). As a mini-LED, it should hold up well, and I suspect it will remain quite accurate with real content. In a well-edited image, only a minority of pixels should exceed SDR white, and even fewer should reach the brightest levels. That means it may be practical for accurate display of HDR images with up to 6.6 stops of headroom (I need to test with real content edited properly). While we won’t see computer monitors do that in the foreseeable future, I’m personally excited to try it and get first-hand experience of the value of HDR photos going beyond the 4-stop limit of XDR displays (I’m not even sure MacOS will allow this on an external display, but Windows and Adobe software will).

Putting aside HDR photography (where we can provide the content), there are some limits to the benefits here in the near term. Some Hollywood movies are mastered in HDR with Rec 2020 and 10,000 nits, but many of those may not use the full color volume, or only in very limited scenes (which is artistically ideal; you only want to use the extremes when called for). More practically, a lot of HDR content is mastered for P3 at 1,000 or 4,000 nits. That means the extra color gamut may not show until more movies are remastered. However, the full 10k nits should be very useful even for 1k and 4k masters, as the content may be brightened via Dolby Vision 2 when ambient light levels call for it. These movies are graded in a dark cave – so if you watch in a room with windows, you may well need to boost the brightness to preserve as much of the original look and feel as possible.

These brightness levels may sound absurd if you are used to setting your SDR display to 100 nits (which is ideal in a controlled room). 10k nits is roughly the brightness of a fluorescent tube. The Dolby research behind the 10,000 nits PQ curve is substantial and based on both theory and validation with human studies. It showed benefit up to 10k nits when assuming 100 nits SDR and a very dark ambient surrounding (5 nits). Keep in mind brightness is perceived by humans in stops (powers of 2), so the jump from 4k to 10k is actually smaller than going from 1k to 4k, etc. There is real value at this level. If you see HDR which looks too bright, then either the display is not properly adapted for the ambient light (SDR brightness too high) or the content itself is too bright (the brightest values should only be used in a small percentage of pixels).
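Since each stop is a doubling of luminance, the gap between two brightness levels in stops is just a log ratio. A quick check of the claim above:

```python
import math

def stops_between(low_nits, high_nits):
    """Perceived brightness difference in stops (each stop doubles luminance)."""
    return math.log2(high_nits / low_nits)

print(round(stops_between(1000, 4000), 2))   # 2.0 stops
print(round(stops_between(4000, 10000), 2))  # 1.32 stops -> the smaller jump
```

So while 10,000 nits sounds like a huge leap over 4,000, perceptually it is a smaller step than 1,000 to 4,000 was.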

 

Frames to display HDR as wall art:

With the success of the Frame TV, there seems to be growing interest in offering TVs meant to look like art on the wall when not being used for video. I’ve seen Sony show their Bravia 9 (4000 nits mini-LED) in wooden frames, which would be a great use for that set. We don’t yet have an easy option to show the photos on a TV without a computer, but hopefully a set top box like the AppleTV may soon be updated to enable it (the hardware is clearly capable). There were a couple of notable sets for displaying art at CES.

 

One of the true darlings of CES this year was the LG W6, offering:

  • Only 9mm thick! That’s thinner than your finger.
  • Excellent HDR: “3.9× brighter than conventional OLED”, but no detailed specs (speculation is 2500-3000 nits peak).
  • It looks stunning in person. It has only a power cable. You send the video signal wirelessly via the LG Zero Connect Box.

The Samsung S95H was similarly interesting and features:

  • includes a metal frame to look like wall art
  • 2700 nits (10% window)
  • It can display SDR photos from their Art Store or via USB stick (I see no reason they could not support HDR with a firmware update – and you could show HDR photos over HDMI now as with other TVs).

 

Other HDR news out of CES:

Dolby Labs have long been associated with the highest quality HDR experiences for movies and TV. Their DolbyVision already offers superior adaptation of HDR content over HDR10 or HDR10+, and they’ve just announced huge updates with DolbyVision 2. It features “Content Intelligence” to adapt based on the content and viewing environment, “Precision Black” for better shadow detail, “bi-directional tone mapping” where both the content and display participate in determining the optimal way to render the content, “Authentic Motion” to control motion smoothing on a scene by scene basis, and more. In a nutshell, it’s designed to adapt to any display in the most optimal way in order to offer the experience most closely matching what the content creator envisioned, regardless of your display or ambient lighting.

TVs with an ambient light sensor can utilize the premium Dolby Vision 2 Max tier, which will help optimize the brightness of the display across day and night viewing conditions. Both versions require new hardware, so this isn’t something you’ll be able to add to an older TV via a firmware update.

TCL, Hisense, and Philips have already announced support for Dolby Vision 2 (and we’ll soon see if Sony jumps on board this year, as they hold a separate event outside CES to launch new products). NBC will support streaming live sports with Dolby Vision 2 on Peacock.

Samsung TVs will use their newly announced HDR10+ Advanced. It offers many similar sounding promises to Dolby Vision 2, but appears to rely more on AI and less on the content creator to determine results. As an open standard (no royalty costs) promoted by the #1 TV brand, it could certainly dominate in the long run, but Dolby Vision likely remains the leader for at least the next several years.

See my CES 2025 recap if you’d like to learn more about the history of this show.

Do you need to calibrate Apple XDR monitors?

Photographers need accurate monitors. If you are viewing a display with inaccurate color temperature, crushed shadows, etc, you are likely to be frustrated with your prints and your online audience probably won’t see your image as you intend. This is why we prefer high quality monitors and profiling tools. Things get a bit more complicated with HDR photography, as we do not yet have an ICC standard for profiling in HDR mode (if you create a profile, the HDR values will clip to SDR white).

As I’ve written on my recommended HDR monitors page, you only have two good options for HDR now: calibrate in hardware or buy a monitor which is accurate out of the box. There is no standard yet for ICC profiling in HDR mode. The only monitors on the market which support HDR calibration are ASUS ProArt monitors or Apple XDR displays (which includes MacBook Pro, iPhones, and many iPads). It’s great to have the option, but are these premium displays accurate enough out of the box?

I do extensive monitor testing, so I decided to get lab-grade test equipment (the CR-250-RH spectrophotometer) to calibrate and test five Apple displays with the highest possible accuracy. In this post we’ll take a deep look at a critical question: how accurate are Apple displays without calibration / profiling?

TLDR: Apple displays with the “XDR” branding are outstanding. They are extremely accurate out of the box, and even several years after purchase. Unless you are a professional who demands extremely high levels of color accuracy (such as a Hollywood colorist), you do not need to calibrate XDR displays. The results are so good that most people would struggle to notice any difference between the factory results and custom calibration. If you want to ensure the highest accuracy as the display ages, I recommend using the Calibrite Display Pro HL to measure reference white and using MacOS’s built-in “fine tune” calibration (see how below). To understand the basis of my conclusions and learn much more about how calibration/profiling works, keep reading…

[Disclosure: This post contains affiliate links. I rarely endorse other products and only do when I think you would thoroughly enjoy them. By purchasing through my links on this post, you are helping to support the creation of my tutorials at no cost to you.]

What does an accurate display mean for photographers?

Everyone’s level of tolerance for error will vary, but there are some fairly clear targets and expectations for photographers.

The most important targets to ensure accuracy for photography are:

  • Color accuracy.
    • Overall, a color deltaE (ΔE) of 2 or less is ideal. If you are above 5, there is too much error for photography (and high error is common for gaming monitors).
    • Accuracy in neutral gray values is most important (it’s not only the most easily noticed; gray makes up most of the user interface surrounding your image and therefore biases your decisions while editing).
    • The target white point for photography is D65. This is a specific white (measured as 0.3127x, 0.3290y).
    • Your measurement software may report a “correlated color temperature” (CCT) such as 6500K. This is not a specific white: D65 is a 6500K value, but there are a wide range of 6500K values which are not D65. CCT specifies only the blue/yellow balance (not magenta/green).
  • Gray tracking (aka tone response / EOTF, the Electro-Optical Transfer Function).
    • Overall, a gray deltaE of 1 or less is ideal (with a 10% test window – peak luminance will vary for HDR monitors as noted below, so we just do our best).
    • This ensures proper shadow detail, contrast, etc.
    • For SDR, your target is gamma 2.2. For HDR, the signal to the monitor is PQ (“perceptual quantizer”). However, the effective EOTF target for HDR is undocumented / unclear unless you are using an XDR monitor in a reference mode. Neither Windows nor MacOS specifies how it tries to drive the display when you use brightness sliders. You can test an external monitor with a pattern generator, but that would only confirm good calibration in the hardware; it would not tell you how the operating system is trying to adapt shadow values, etc.
    • Apple XDR displays uniquely offer several reference modes and the ability to create custom user presets (including control over the EOTF in the SDR range).
  • Peak luminance
    • This is the most critical metric for HDR performance, as it determines how many stops of headroom you have at a given brightness.
    • This is not a fixed value in a monitor. Peak brightness depends on several factors – most commonly how bright the display is overall. OLEDs (other than Tandem Stack OLED iPads from Apple) are far more likely to be subject to dimming than mini-LED.
    • It is ideal to have a display offering 1000+ nits peak for great HDR. It is also ideal to have a sustained / full screen capability of 400+ nits (as this ensures accuracy is retained even while viewing bright content).
  •  Uniformity
    • This means consistency across the entire display. Lower quality displays may often show less accurate results near the edges of the display.
    • Some solutions (such as ProArt calibration) offer ways to improve uniformity, but this is most commonly something you cannot improve. An ICC profile affects all pixels equally, it has no mechanism to correct the edges of your display.
  • Wide gamut
    • Real world color is much more vibrant than sRGB. A wide gamut monitor doesn’t just show more vibrant color – it shows more detail. A limited gamut won’t show the full texture of sunset clouds. A flower petal may look flat when the gradient of colors gets clamped. A wall lit by a colored light may even look like an artifact or blown pixel when the colors get clipped.
    • A wide gamut display will let you enjoy much more beautiful images, and give you an edge in creating them (and there is no downside in editing with wide gamuts like P3 – preemptively clamping the colors in your edit won’t produce a better final result on less capable displays and you can export sRGB from any source).
  • Black levels
    • This refers to the deepest black the monitor can produce and is also critical for HDR to ensure shadow detail and avoid halos.
    • This is mostly a function of monitor hardware, but may be influenced by OSD options (such as for backlight and black level). An ICC profile cannot make the deepest black any darker.

This is only a partial list of the most important monitor capabilities. Other factors like anti-reflective coatings, zero fan noise, and simple operation are often also important (Apple performs extremely well on these other considerations too).

Performance for some of these goals (such as color accuracy) may be improved if you are able to “calibrate” or “profile” your display (we’ll discuss what those terms mean below). Your ability to do either will depend on your monitor, budget (for test equipment/software), technical skills, and support.

What’s the difference between profiling and calibration?

These terms get thrown around very loosely. Photographers will often say they “calibrated” their monitor when, in fact, they profiled it.

Calibration is a process of making the monitor itself more accurate. In other words, when you ask for a specific RGB value you get something very close to it. This may be done in some monitors by changing settings in the OSD (such as RGB gain values) or by using special software tools to write lookup tables in the monitor (such as with ASUS ProArt).

Profiling is a process of making the overall system response more accurate by hacking the signal that goes to the monitor. If your computer knows that requesting a red value of 230 actually produces the result you would expect for 227, then the computer may send a request for something like 232 instead in order to get a result closer to the desired value. This is very well supported for SDR mode thanks to ICC profiles, but not yet for HDR (though Apple’s “fine tune” options for XDR offer a limited option for a basic white point correction).
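A toy sketch of that correction idea (the measured values here are hypothetical, and real ICC profiles apply per-channel corrections across the full tone curve):

```python
# Hypothetical measurements: what the display actually produces per request
requests = [200, 210, 220, 230, 240, 250]
produced = [196, 207, 218, 227, 239, 250]

def corrected_request(target):
    """Interpolate the request needed so the display lands on `target`."""
    for i in range(len(produced) - 1):
        lo, hi = produced[i], produced[i + 1]
        if lo <= target <= hi:
            frac = (target - lo) / (hi - lo)
            return round(requests[i] + frac * (requests[i + 1] - requests[i]))
    return target  # outside the measured range: pass through unchanged

print(corrected_request(230))  # 232: to *see* 230, the system asks for ~232
```

The display itself is unchanged; only the signal sent to it is adjusted, which is why profiling must be redone if the monitor’s behavior drifts.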

If you do both, calibration must be done first (otherwise the profile is based on the wrong assumptions about how the monitor will behave). The ideal scenario is a high quality monitor with both a good calibration and profile.

How accurate are Apple XDR displays?

I tested my five XDR displays: the Pro Display XDR, an M1 MacBook Pro, an M4 Max MacBook Pro, an M4 iPad Pro, and an iPhone 17 Pro. Using a spectro for these tests has given me a level of confidence I’ve not previously had (as the quality of color matching functions has always been a question for me).

While these are all XDR displays, they vary in both technology and options for profiling / calibration. As you can see in the SPD below, the panel technology in the older Pro Display XDR and M1 is extremely similar if not potentially identical in many ways other than size. The M4 MBP improved by changing from a red KSF phosphor film to QD (quantum dot) mini-LED. The M4 iPad uses a “Tandem Stack” OLED and is therefore inherently different (note that the peaks are closer together in the iPad). The iPhone uses a different OLED, showing more of a blue spike.

Apple offers a few mechanisms to improve display accuracy (all found under System Settings / Displays / Preset dropdown at the bottom):

  • “full” calibration
    • Calibrates the white point, primaries, luminance, and gamma response.
    • This option requires a supported spectrophotometer (the CR-250-RH I bought is the least costly – note that “RH” means “rubber hood” and is the one you would want). See the last section below for more information on why spectros are more accurate than a colorimeter.
    • The test is very simple to use. You point the spectro at a target on the screen and click a button to run. There are no options to configure (nor any final report when done). It’s very simple and effective.
    • The test runs for just under 90 minutes. If you have multiple monitors, you can watch progress in the dialog box (as long as you didn’t leave it on the display being tested). It will show “performing stabilization” during warmup and then progress through 96 measurements.
    • When you calibrate the Pro Display XDR, the calibration is stored in the monitor itself and will therefore benefit other Apple computers you later connect to that display.
  • “fine tune” calibration
    • This provides a minor correction based on a single measurement (x, y, and luminance values of a known white). You can do this manually with any colorimeter or spectro.
    • This is a great option for those who don’t have a spectro to run full calibration. And even if you do, this offers a much faster way to test and tweak performance on a regular basis after full calibration.
    • You may only initiate this while in one of the system preset XDR modes, so use “HDR video (P3-ST 2084)” to measure its D65 reference white.
    • On MacOS, the correction affects all XDR presets (even ones from which you cannot start fine tune).
    • On an iPad, fine tune only shows benefit while reference mode is enabled.
  • “visual fine tune” calibration
    • Do not use this; you are more likely to reduce accuracy than to improve it.
  • XDR presets:
    • These presets don’t make the display more accurate – they give you more control over the target (such as a specific SDR luminance or EOTF).
    • For XDR displays under MacOS:
      • You can choose from a list of system presets. “HDR video (P3-ST 2084)” offers predictable HDR results.
      • Or better yet, you can easily create a custom preset with SDR brightness adapted for your ambient light (80-120 nits is ideal under controlled lighting), gamma 2.2 for the SDR range (ideal for print work), 1600 nits HDR peak (to use the full capabilities of the display), and P3 primaries.
    • For XDR iPads:
      • You may only choose “reference mode” (under System Settings > Display & Brightness > Advanced). When enabled, this is the same as “HDR video (P3-ST 2084)” on the computer – it provides a predictable/fixed response and disables options which reduce accuracy (such as true tone and night shift).
      • It would be nice for home use if there were a way to add a toggle for reference mode to Control Center, so that you could switch between accurate viewing in controlled lighting and otherwise normal use (brightness adaptation is key when outdoors or near a window, and many may like to use true tone / night shift). Apple’s rationale for the iPad is likely that pros in Hollywood can review “dailies” on an iPad with sufficiently high accuracy under reasonably controlled ambient light – toggling this setting probably doesn’t appeal to that audience.
    • The XDR iPhone (ie 11+):
      • Offers no reference mode nor options for any calibration; you cannot improve on the factory default results. The iPhone is too small for normal Hollywood use, and I wouldn’t see much value for home use without the toggle I mentioned (try turning off brightness adaptation on your phone and using it for a day – it’ll be unusable at some times of day or night; we need adaptation here).
      • (Third party tools support profiling, but only in their apps and not systemwide – which makes them of very limited value).

I tested my various XDR displays under several conditions:

  • Factory settings (ie “out of the box” performance). This will show the minimum performance you should expect.
  • Full calibration only. This shows the benefit of Apple’s advanced calibration (ASUS ProArt is the only other display offering a similar capability).
  • Fine Tune calibration only. This reflects the only calibration most photographers can perform (ie the best possible result until there is a standard for ICC profiling).
  • Full calibration + Fine Tune. This may be expected to show the best possible results.
  • [ Note that I did not use any ICC profiling with any of these displays, as ICC profiles are not supported in HDR mode on MacOS, nor systemwide on iPad / iPhone. There would be nothing further to gain above the excellent results I’ve achieved. ]

For MacOS testing, I used XDR set to HDR Video mode (which is the only option to measure HDR with a predictable EOTF), a CR-250-RH, CalMAN targeting P3 PQ, and MacOS Patterns using full range encoding (HDR10 enabled in CalMAN). For iPad, I used reference mode. For iPhone, I could only measure white point (the EOTF is undocumented and hard to reliably control with a brightness slider).

Note: I manually evaluated actual data points, but the average and max deltaE values below are skewed high as CalMAN automatically includes several test patches which are clipped (ie it over-weights the brightest value, which is often one of the least accurate). So actual performance is better than the average values below suggest, but I didn’t bother to manually exclude extra readings to re-calculate the average.
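To illustrate how a few clipped patches drag the average up, here is a minimal Python sketch. The readings below are hypothetical numbers for illustration only, not my actual CalMAN data:

```python
# Sketch: why clipped test patches skew the average deltaE upward.
# These readings are hypothetical, not measured CalMAN results.
readings = [
    {"nits_target": 100,  "deltaE": 0.4, "clipped": False},
    {"nits_target": 400,  "deltaE": 0.6, "clipped": False},
    {"nits_target": 800,  "deltaE": 0.9, "clipped": False},
    {"nits_target": 1200, "deltaE": 3.8, "clipped": True},  # above panel peak
    {"nits_target": 1600, "deltaE": 6.5, "clipped": True},  # above panel peak
]

def mean_deltaE(rows):
    return sum(r["deltaE"] for r in rows) / len(rows)

all_mean = mean_deltaE(readings)
valid_mean = mean_deltaE([r for r in readings if not r["clipped"]])
print(f"all patches: {all_mean:.2f}, excluding clipped: {valid_mean:.2f}")
# all patches: 2.44, excluding clipped: 0.63
```

The two clipped patches more than triple the reported average even though the panel is accurate everywhere it can actually reach.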

What can you reasonably expect after calibration / profiling?

While calibration and profiling are important tools which can improve the accuracy of your display, they are not magic solutions that can fix everything. In fact, far from it – especially when working with HDR. These issues come up in SDR editing too (finding that your prints are dark even after you’ve profiled your monitor is a very common example – as your display can be both perfectly accurate and still the wrong brightness for your working conditions).

To set expectations, here are a few things to know about the limits of accuracy with an HDR display even if you get great results from calibration and profiling:

  • Calibration and profiling cannot improve capabilities
    • If your monitor can only hit 1200 nits or only has 97% P3 gamut, that’s the best you can get.
    • In fact, your capability will probably decrease (very slightly) after calibration and profiling because the least accurate values are at the extremes and they will be eliminated.
    • For example, peak nits are likely to drop after calibration and profiling. The only way to correct white balance issues for the brightest values is to turn down the maximum red, green, or blue sub-pixels a bit until we find the brightest white where the three channels add up to an accurate white.
  • Calibration does not mean that two different displays will match perfectly!
    • Different displays have inherently different SPD (“spectral power distribution”), which is discussed below. As a result there are inherent limits to how closely two different panel types can produce any given color – and matching all colors across your gamut is nearly impossible.
    • To achieve the widest modern gamuts, many monitors have very tight ranges of wavelengths emitted for red, green, and blue (measured as FWHM or “full width at half maximum”). These more precise colors for the sub-pixels allow creation of very saturated colors – but they also increase the risk of “observer metamerism” (ie different people may perceive some colors from the display slightly differently).
    • You don’t need to worry about metamerism, but it may come up when a display with very high coverage of the Rec2020 gamut is involved (ie with newer technologies such as RGB mini-LED and laser projectors). This could result in two people disagreeing about whether a calibration looks “neutral”, whites appearing slightly greener or redder to different people, skin tones differing subtly between viewers, or blue highlights varying more than expected.
    • The only time you should expect a very close match is when both displays use the same panel technology (ie backlight, phosphors, etc), are in good condition, and warmed up. So if you use multiple displays, it is beneficial to use the same model for color matching. Apple does a very nice job even across different technologies.
  • HDR luminance is dynamic and impossible to fully characterize or control.
    • No monitor (other than $30k reference monitors) offers the ability to hit peak brightness across all pixels at the same time. Due to power consumption, thermal design limits, monitor burn-in risks, etc., your pixels may dim significantly – sometimes even with just SDR content. This dimming is typically known as ABL (automatic brightness limiting).
    • As a result, calibration and profiling are typically done with a 10% window (ie covering 10% of the pixels in the center of the display). If you were to run your tests with a 2% or 50% window, you would see very different results!
    • For example, an OLED might achieve 1000 nits in a 2% window, but only 200 nits in a 50% window (Apple “tandem stack” OLED iPads uniquely avoid this and are able to offer full screen or “sustained” values of 1000 nits – and consumer phones are less prone to this).
    • For this reason, today’s mini-LED displays are generally more accurate than OLED in real world use. You may well see a great test result for an OLED (based on that 10% window) – but when you start viewing real photographs, you are likely to find that the OLED has dimmed and is therefore less accurate (potentially causing you to edit the image in a way which will look too bright on a display which does not suffer as much from ABL).
  • deltaE only tells you how well the display performs against a target value – it says nothing about whether those targets are suitable for photography!
    • The display needs to not only be accurate, it also needs to be set for the brightness appropriate for the level of ambient light in the room. The accuracy of your laptop does not change when you turn the lights in the room on or off, but you’ll certainly struggle to get good results if you don’t adapt the brightness when the ambient light changes.
    • Your display may also be configured to target different EOTFs (“electro-optical transfer function”). In SDR, gamma 2.2 is the correct standard for photography. For HDR, there isn’t a clear standard (you can’t even tell what the operating system is trying to do – other than when using an XDR display in a reference mode).
    • As an example of the impact of EOTF: When viewing my dark shadow detail test in a dark room, I can see down to the 0.1% level when I have the display set to the HDR Video preset or a custom preset using gamma 2.2 and 100 nits for the SDR range. But when using the variable brightness preset with brightness set to the same 100 nits SDR white, I can only see down to 0.25%. I assume these are all accurate (Apple doesn’t publish their target for the default variable brightness preset, but unless there is a bug in MacOS it would be expected to leverage the same calibration data). These are just different EOTF targets, and the choice affects shadow detail.
  • Low deltaE values may not tell the whole story:
    • What does it mean when a manufacturer claims “deltaE <1”? Did they test HDR or just the SDR range? Which gamut did they test?
    • A deltaE claim of <1 suggests the monitor should be decent, but take these claims with a pinch of salt.
    • Note that there are also different deltaE metrics. In photography, we typically mean ΔE00 (CIE2000) when we simply say “deltaE”, but there are others. For example, CalMAN can optionally report ΔEITP, which is based on the ICtCp space and is designed to better reflect human perception in the HDR range, better handle wide gamuts, and separate color error from luminance error.
  • Performance varies across the screen:
    • Corners are often darker and less accurate than the center (ASUS ProArt offers some way to compensate, but performance here is often just based on the quality of your hardware).
    • With mini-LED, each pixel is dependent on its neighbors due to the shared backlight. For example, this may cause halos visible in dark areas next to bright content.
  • Performance varies across time:
    • A display may vary a fair bit in the first 30 minutes after it is turned on, or if temperature varies in your environment (which is why it is recommended to let the monitor “warm up” before testing).
    • It may also change as the display ages (as you’ll see in my M1 results below).
  • Your monitor may include several important settings outside the scope of calibration
    • Many OLED monitors include a setting to limit peak brightness, and it may be enabled by default (such as in the ASUS PA32UCDM). This likely won’t change test results, as a monitor which is accurate at 1000 nits will likely be just as accurate when forced to never exceed 400 nits. And while the typical 10% test patch won’t trigger ABL during the test, limiting HDR may improve EOTF accuracy with real world images which trigger ABL.
    • Some mini-LED displays include options to control local dimming. This creates complex behaviors where pixel-level accuracy varies with neighboring content and changes over time.
    • There may be additional controls for sharpening or other factors outside the scope of our testing.
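To make the EOTF point above concrete: PQ (SMPTE ST 2084) maps a signal value to an absolute luminance in nits, while gamma 2.2 is relative to whatever SDR white luminance you choose. Here is a minimal Python sketch using the published ST 2084 constants – this is the standard’s math, not Apple’s internal implementation:

```python
import math

# SMPTE ST 2084 (PQ) constants, as published in the standard
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_eotf(signal: float) -> float:
    """Map a normalized PQ signal [0, 1] to absolute luminance in nits."""
    e = signal ** (1 / M2)
    return 10000 * (max(e - C1, 0) / (C2 - C3 * e)) ** (1 / M1)

def gamma22_eotf(signal: float, sdr_white_nits: float = 100.0) -> float:
    """Map a normalized SDR signal [0, 1] to nits, relative to SDR white."""
    return sdr_white_nits * signal ** 2.2

# PQ is absolute: the same signal always means the same nits.
print(f"{pq_eotf(0.508):.0f} nits")    # ~100 nits
print(f"{pq_eotf(0.7518):.0f} nits")   # ~1000 nits
# Gamma 2.2 is relative: the result scales with the chosen SDR white.
print(f"{gamma22_eotf(0.5):.1f} nits")  # ~21.8 nits at a 100-nit white
```

This is why an HDR display in a PQ reference mode behaves so predictably: signal values pin to absolute luminance, independent of any brightness slider.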

These considerations are an important part of the reason why Hollywood professionals often pay $35,000 for a “reference monitor” (which more or less means one which is as accurate as possible vs the intended standard, such as mastering content for 4000-nit P3 D65 – though it also typically includes support for special features like built-in vectorscopes or SDI input ports).

Apple has done an outstanding job addressing these concerns with their XDR displays:

  • They are all held to a very high standard. There is not a single “XDR” branded display which is not outstanding.
  • Sustained luminance values are very high (even in the OLED XDR), so ABL is not a problem affecting accuracy.
  • Everything works great by default, and there are easy to use controls in MacOS for experts who wish to customize performance.

Summary of key findings for the XDR displays:

  • My 5-year old M1 MBP achieved excellent results after fine tune calibration, but was slightly out of spec when relying only on factory performance.
    • With the factory calibration, deltaE was 2.8 average (max 4.9). That isn’t tragic, but falls below expectations. Color was accurate, but the display was about 15% dimmer than expected across the range (tested peak of 835 nits vs expected 1000).
    • I consider this factory result good enough for most photographers (who would work in the default variable brightness mode and would have compensated by increasing brightness one tick). However, a Hollywood colorist would not accept the factory results. The most color critical users should test to validate accuracy rather than assuming full + fine tune achieves target on aging hardware.
    • Using fine tune calibration only (tuned from a 1,000 nit D65 test), results were excellent. Peak brightness overshot at 1066 vs the 1000 target, and color was dead on. Average deltaE 0.4 (max 1.0). CCT measured 6407K at peak.
    • Using full + fine tune calibration, deltaE only improved to 1.2 average (max 1.2). Peak brightness was near perfect (998 vs 1000 target) and RGB balance was great across the range (red drifted lower in bright values). CCT measured 6627K at peak.
    • We’ll consider this result in greater depth further below.
  • Excellent deltaE scores for color and gray tracking even with factory settings. These results would meet the expectations of the vast majority of photographers.
    • M4 Max achieves outstanding scores with the factory calibration:
      • deltaE 0.7 average / max 1.7 (peak error near brightest whites – achieving 968 nits vs 1000 nits target).
      • Running CalMAN’s ColorChecker test against P3 targets with the factory calibration showed average 0.5 deltaE (max 0.8). Color was excellent across the range.
      • 99.8% P3 gamut coverage (68.1% Rec2020 coverage)
    • My ~5 year old Pro Display XDR still achieved very good scores with just the factory calibration:
      • 1.3 average deltaE / max 2.6. Color balance was very good across the range. Peak white was 987 vs 1000 target.
      • Running CalMAN’s ColorChecker test against P3 targets with the factory calibration showed average 0.6 deltaE (max 1.4). Color was excellent across the range.
  • Accuracy could be further improved using Full calibration and/or Fine Tune.
    • M4 after full calibration
      • deltaE 0.4 average (max 1.2)
      • This is a measurable improvement, but at a level so trivial that most photographers wouldn’t even notice if you could view them side by side.
    • XDR using only fine-tune calibration:
      • When tuning from a 1,000 nit sample, results improved to 0.8 deltaE (max 2.3). White balance was notably better across the range and peak luminance hit 996 vs the 1000 target.
      • When tuning from a 100 nit sample, results showed 1.2 average deltaE (max 2.6). White balance was better across the SDR range but not over HDR. Peak luminance hit 1007 vs 1000 target.
      • Comparing the two, the 100 nit sample unsurprisingly showed improved accuracy over the SDR range (but worse HDR), while the 1000 nit sample showed nearly perfect HDR measurements and failed to benefit SDR.
      • Compared to factory calibration, fine tuning at 100 offered the most benefit to the SDR range (but degraded HDR), while fine tuning at 1000 improved HDR significantly and very slightly improved SDR.
    • XDR after full calibration:
      • 0.8 average deltaE / max 1.8. Peak 994 vs 1000 target.
    • These results would meet the expectations of even the most demanding photographers, but are optional – and full calibration is likely not an option unless you know someone with a spectro or can hire a TV calibrator to use theirs with your display.
    • Fine tune is not worth the effort if you are not using a reference mode / custom XDR preset (ie if you wish to use brightness controls on your display).
  • Excellent gamut, with the M4 Max showing 99.9% coverage of P3 in HDR mode (bright HDR colors can be difficult to achieve, so this is outstanding).
  • Excellent uniformity on all displays (including that M1).
  • Halos are well controlled on the mini-LEDs but still visible; the OLED iPad is clearly superior for extremely dark detail.
  • EOTF tracking remains excellent even in a very large test window due to 1000 nits sustained performance. This is a huge advantage for photographers working with HDR, as it means that the actual performance with real images will track these test results closely (unlike most non-Apple OLED displays, which will likely dim quite a bit with larger test windows).

Aside from the accuracy, perhaps the most notable benefit is that XDR displays are extremely easy to use. Great results are the default; you literally don’t have to do a thing. HDR works automatically, SDR content looks great, and the display is very accurate (there is also zero fan noise). XDR-branded displays from Apple are excellent and are clearly the way to go for serious photography.

The Pro Display XDR is perhaps the best investment I’ve ever made in photography gear (the Nikon D850 is right up there too). It gave me capabilities far beyond anything I could previously do and was far more affordable than you would expect. Yes, it costs $6000 (plus tax) new for the monitor with stand. However, I bought mine used in like new condition with warranty remaining for only $3000. I could likely sell it today for a net $1000 – $3000, so I’ve risked almost nothing to pick up one of my favorite products of all time.

For reference, I also tested an M1 MacBook Air. It does not have an XDR-branded display, which means it offers no options for calibration or fine tuning. It also has very little HDR capability (400 nits peak and no local dimming to ensure deep blacks). The out of the box color was notably less accurate at 7185K, though acceptable for a general audience. As this device is primarily intended for SDR use, a custom ICC profile would be a good option and should easily get much closer to the ideal 6500K white point. 

My M1 MBP results:

As noted above, my M1 MBP fails to achieve ideal results when relying entirely on the factory calibration. Specifically, it is off by less than 1/3rd of a stop at the peak (15% dimmer in nits, but linear values overstate how this would be perceived by humans). This is still very good for most photographers. If you use the default P3-1600 nits mode (ie you use the keys to adjust brightness), you would never know the difference. Furthermore, the results were excellent after fine tune calibration (which can be done with a standard colorimeter – no fancy spectro required).
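The stop math here is easy to verify from the measured 835 nits vs the expected 1000:

```python
import math

# Convert the M1 MBP's measured peak shortfall into photographic stops.
measured, expected = 835, 1000
stops = math.log2(expected / measured)
percent = (expected - measured) / expected * 100
print(f"{percent:.1f}% dimmer = {stops:.2f} stops")  # 16.5% dimmer = 0.26 stops
```

So while the linear shortfall sounds significant, it is only about a quarter stop – below what most viewers would notice without a side-by-side comparison.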

I have no serious concerns here for several reasons:

  • If you are using the default (variable brightness) preset, there is no loss of capability. You would just turn up the brightness and see nearly the same results even without calibration. 
  • Fine tune calibration can be easily performed to bring it back into the target range.
  • Even without calibration, it outperforms nearly every HDR monitor on the market. The ASUS ProArt P16 laptop is the only other one supporting calibration and 1600 nits. The rest of the options out there will have lower accuracy, less HDR headroom, and in all probability higher rates of quality issues with less generous support.
  • As a mini-LED, it will likely outperform most OLED in real use by avoiding ABL.

The best way to profile / ensure accuracy for your XDR monitor:

As you can see, there is no need to calibrate / profile XDR displays, though it may be helpful in some cases (outliers / old monitors) and you should definitely disable options which alter white balance. Here is a quick summary of the best way to ensure accuracy with your Apple XDR displays:

  • For any Apple display: Disable true tone / night shift (both are found under Settings / Display in iOS / iPadOS / MacOS)
  • iPhone:
    • You cannot improve further. There is no reference mode nor options for any calibration or profiling on iPhone (third party apps can offer this within their own app only, which makes it fairly pointless).
  • iPad
    • Accuracy may be improved by using reference mode (same as the HDR Video preset for XDR computer displays). It is found under System Settings > Display & Brightness > Advanced.
    • In the same area is “fine tune” calibration.
    • Only use reference mode when working under controlled ambient light (as it fixes SDR white to 100 nits, which will make the display too dim in bright ambient light).
  • Computer XDR:
    • Accuracy may be improved by using fine tune calibration.
    • You may also run the full calibration if you have a supported spectrophotometer (as described earlier).
  • To perform fine tune (for MacOS or iPad):
    • Enable reference mode in the target display (required at least temporarily to get a predictable SDR white to test, as well as to use the fine tune calibration):
      • For an XDR computer, switch to the “HDR Video (P3-ST 2084)” preset for MacOS (via the preset dropdown under System Settings / Display).
      • For iPad, just turn on the reference mode toggle as noted above (there are no other options for iPad).
    • If you have previously set a fine tune calibration, reset it before proceeding to test the display.
    • Connect a colorimeter to your computer
      • I recommend Calibrite Display Pro HL colorimeter (or Plus HL if you wish to future proof for >3000 nit displays). These may be used with the included PROFILER software.
      • When testing an iPad, you will connect the colorimeter to your computer (required to use the colorimeter), but will point the colorimeter at your iPad instead of the computer’s display.
    • Use Calibrite PROFILER to measure SDR white
      • Go to Utilities / Monitor Quick Check
        • select your Calibrite colorimeter
        • set the dropdown to OLED (M4+ iPad Pro) or mini-LED (all Apple computer displays and gen 5-6 iPad Pro)
        • click “next”.
      • Run the test:
        • At this point, you should see a screen with a target saying to “position calibrator in the circle”. If that’s on the display you are testing, just click next.
        • Otherwise, if you are running PROFILER on a display other than the one you are testing (such as for an iPad), you need to first show an SDR white test patch on the target display. You may open this page on that display and click the “show SDR white test patch” button below to show an SDR white test patch. This is the same target you’d see from the PROFILER software, but gives you a handy way to test a screen that isn’t running that software. Note that I have not bothered creating a 10% window here, as all XDR displays offer full screen brightness vastly exceeding SDR white without dimming (800 nits for iPhone 16e, and 1000 nits for other XDR displays).
      • The luminance and xy values shown here are what you need to enter manually into the Apple Fine-Tune dialog.
    • Open Apple’s fine tune calibration to enter the values you just measured:
      • in MacOS, go to System Settings / Display / Preset dropdown / Calibrate / Fine-tune calibration.
      • on an iPad, you’ll find it next to reference mode.
      • for the left hand target values, type in the values for 100 nits D65 white, which are:
        • luminance: 100 nits
        • x: 0.3127 (click to copy, then cmd-V to paste)
        • y: 0.3290
      • enter the values you measured with PROFILER in the right hand column and click enter.
    • Once you are done,
      • your MacOS computer will be more accurate in all modes – even if you switch back to the default “Pro Display XDR (P3-1600 nits)” preset where you can use the keys to change brightness.
      • your iPad will now be more accurate whenever reference mode is enabled (but it won’t help when reference mode is off)
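For those curious what the fine-tune numbers actually represent: they are CIE xyY coordinates (chromaticity x and y, plus luminance Y in nits). The sketch below converts both the D65 target and a measured white to CIE XYZ using the standard xyY-to-XYZ formulas. The measured values are made up for illustration, and this is not Apple’s actual correction math (which is unpublished) – it just shows what the two columns in the dialog describe:

```python
# The fine-tune dialog's left column: the 100-nit D65 target in xyY.
D65_TARGET = {"Y": 100.0, "x": 0.3127, "y": 0.3290}

def xyY_to_XYZ(x: float, y: float, Y: float) -> tuple:
    """Convert chromaticity (x, y) plus luminance Y (nits) to CIE XYZ."""
    X = x * Y / y
    Z = (1 - x - y) * Y / y
    return X, Y, Z

# Hypothetical right-column measurement from Monitor Quick Check:
measured = {"Y": 96.2, "x": 0.3101, "y": 0.3255}

target_XYZ = xyY_to_XYZ(D65_TARGET["x"], D65_TARGET["y"], D65_TARGET["Y"])
measured_XYZ = xyY_to_XYZ(measured["x"], measured["y"], measured["Y"])
print("target XYZ:  ", tuple(round(v, 1) for v in target_XYZ))
print("measured XYZ:", tuple(round(v, 1) for v in measured_XYZ))
```

The OS can then compute a correction from the gap between those two whites, which is why a single accurate measurement is enough for fine tune.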

What could be improved with XDR displays?

Apple has the best HDR displays on the market. They are very accurate and easy to use. Yet there is always room for improvement and I’d love to see the following:

  • The Pro Display XDR is excellent, but beyond the budget of most photographers. The key opportunity would be some kind of prosumer-oriented HDR monitor (ie a 27″ Studio Display updated for serious HDR use). Specs similar to the MacBook Pro (ie 1600 nits / 1000 sustained) would be ideal so there is no compromise when docked.
  • I’d gladly welcome a refreshed Pro Display XDR. It’s still arguably the best HDR monitor for photography, but after six years it would be ideal to push the boundaries even further and add features commonly found on premium displays. A webcam, 120Hz refresh rates for smoother scrolling / panning, pass-through Thunderbolt for single-cable laptop connections, increased rec2020 coverage, and a higher zone count / dual-layer LCD / OLED to reduce or eliminate local dimming halos would all be great updates. I’d also love to see 3,200 nits peak to support 5 stops HDR (and get close to 4000 nits video mastering), but I’m not holding my breath there.
  • Better mirroring. Even when you connect two identical XDR displays, the one not selected as “optimized for” will show degraded results with reduced HDR headroom. This problem affects any display under MacOS, but XDR displays should be much easier to support here.
  • An option in custom XDR presets to allow 1600 nits with clipping rather than ABL would be nice. The current design forces you to choose between accuracy and use of the full range. In practice, most content hitting 1600 nits won’t be bright enough across the whole screen to require dimming. It would be ideal to simply clip to something between 1000 (the full screen limit) and 1600 (the peak limit) as required when the display limit is reached for bright content. This would satisfy hardware requirements with less impact to practical capabilities for real use.
  • It would be very helpful if the fine tune input had an option to just choose D65 values. These will almost always be the target, and typing manually is both cumbersome and creates risk of user error.
  • For those with supported spectros, offer an automated option to measure / implement fine-tune calibration. The manual test setup and typing could be eliminated.
  • For pros using “full” calibration, it would be ideal to get a summary of results (ie deltaE values). This is important for any calibration/profiling to confirm success or failure, and eliminates the need for a complex validation tool like DisplayCAL (or CalMAN, which is costly and requires another computer running Windows or a virtual machine).
  • Simpler custom XDR preset management. You cannot hide any of the system presets and likely don’t need many of them. There is no simple way to edit an existing preset (or even open it to confirm the settings). And deleting presets is very slow (the screen blacks out for ~15 seconds when you delete any preset, even if it isn’t the one in use).

While some photographers would probably love to see a reference mode on the iPhone, I’m not sure it’s needed nor practical (as phones need to adapt brightness constantly throughout the day).

There will certainly also be photographers who’d love to see the full calibration support extended to colorimeters. However, that’s not a trivial effort given the need for color matching functions for each colorimeter + display pairing. As the potential gains are small, this doesn’t seem like a great use of resources now. This seems like a task better suited to vendors like Calibrite when we have an ICC standard for profiling HDR. I’d rather see Apple invest in things like HDR support on AppleTV, iMessage, and iCloud.

How do these results compare to other options?

Apple has a very solid lead in HDR display capabilities, ease of use, and accuracy. But everyone’s needs are different. Here are some thoughts on the alternatives:

  • External monitors:
    • You can also achieve great HDR results with an ASUS ProArt monitor (or a 42″ TV), but setup is required, calibration is likely required (out of the box accuracy is not as high), and there are no reference modes – though ASUS is the only option for Windows, is much cheaper than a Pro Display XDR, and has some other advantages such as pass-through USB support.
    • For SDR-only work, there are great and very popular alternatives to the Studio Display from Eizo and BenQ. But once you dive into HDR, there’s really no going back to SDR – everything else looks flat by comparison.
  • Laptops:
    • There are a few very good HDR PC laptops now. But in general, most fall well short of the MacBook Pro’s display. Windows support is also lacking: there are no reference modes, the HDR content brightness slider can cause clipping issues, bugs are more common, and pre-installed 3rd-party tools may cause incorrect results.
    • I did limited testing on the Lenovo Yoga, which had very good reference white color balance at 6250K. As CalMAN Client3 cannot generate HDR test patterns for a PC laptop and Windows does not document its EOTF target (which varies under its SDR content brightness slider), I cannot comment on how well it tracks a target EOTF – but I believe most photographers would be very happy with this display’s factory results.
    • The only PC laptop supporting calibration is the new ASUS ProArt P16.
  • Tablets: I have yet to see any tablet offering HDR performance remotely similar to the Tandem Stack OLED offered by Apple. The M4+ iPad Pro has the best consumer computing display I have ever seen.
  • Phones are generally very accurate and well supported for HDR:
    • The Pixel 7 Pro and later are outstanding. My Pixel 8 Pro measures 6530K at around 100 nits – very accurate white balance.
    • The Samsung S23+ has excellent HDR hardware, but is not supported by the Android version of Adobe Lightroom and has historically had some strange HDR software quirks.

Conclusions

Apple has a commanding lead in HDR with its XDR-branded displays. There is simply nothing else like it for editing and viewing HDR photos on a computer display or tablet (Android phones do well). Both the level of HDR capability and accuracy are unsurpassed. The monitors and laptops run dead silent in normal use. And perhaps most critically, they are so easy to use. With zero effort, you get great HDR (and SDR) results by default.

Key points:

  • There is no need to calibrate / profile these XDR displays for photography.
    • My results met or exceeded typical photography standards with or without full / fine-tune calibration.
    • Even my worst case M1 MBP was within good ranges after fine-tune calibration – and approaching excellent if you use the variable brightness mode (or create your own custom XDR preset with an offset to the SDR luminance). It still outperforms all HDR displays other than the ASUS ProArt (thanks to its support for hardware calibration). I also suspect this is an outlier and that most XDR displays probably show even less aging.
  • If you want to ensure the highest accuracy (especially as your display ages), occasional use of the fine tune calibration is all you need.
  • Full calibration is unnecessary for photographers. It probably only improves results by 0.5 deltaE (an undetectable change for most photographers).
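For context on that deltaE figure: deltaE is simply a distance between two colors in the perceptual CIELAB space. Here is a minimal Python sketch using the simple CIE76 formula (modern calibration tools typically report the more elaborate deltaE 2000, but the scale is comparable):

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two CIELAB colors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Two grays that differ by 0.5 in lightness (L*) only:
print(delta_e_76((50.0, 0.0, 0.0), (50.5, 0.0, 0.0)))  # 0.5
```

A deltaE of roughly 1 is commonly cited as the threshold of a just-noticeable difference, which is why a 0.5 improvement is effectively invisible in real images.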

Side note – what’s the difference between a colorimeter and spectrophotometer?

For those who want to better understand why I’m using such an expensive piece of lab equipment for this testing…

Visible light is a mix of wavelengths. Humans can see from roughly 380 nanometers (violet) to 750 nanometers (red). Unless you are viewing a laser, the color you see is almost always a broad mix of wavelengths. Human vision is based on sensitivity to short, medium and long wavelengths. This isn’t the same as seeing red, green, and blue – but that’s not terribly far off the truth (and is the reason we have RGB monitors).

Each monitor has a unique SPD (“spectral power distribution”). This refers to the overall mix and intensity of specific wavelengths of light the monitor emits to make white. You can see examples below which show how red subpixels in the M1 and M4 MacBook Pros are extremely different. Though a human mostly won’t notice, these differences in SPD from one display to the next are the reason that you will almost never get a perfect match between two different monitors  – even if they are both perfectly calibrated and profiled. You can get close, but it is simply not possible to get every possible color to match when the underlying RGB spectra are not the same.

The graphs above show the wavelengths of light emitted for 100 nits D65 white from the Pro Display XDR (which has a red KSF phosphor) vs the M4 MacBook Pro (which changed to quantum dot). Both look like very similar whites to a human, but there is a limit to just how well you can get these two displays to match each other even after calibration or profiling.

We can turn up or down the level of blue, green, or red sub-pixels by changing the voltage. That’s the nature of calibration / profiling – altering the mix of red, green, and blue used to produce a given color.

However, the characteristic shapes are just scaled as we change those voltages. We can make the red sub-pixels in the Pro Display XDR brighter or darker, but they will always have that odd triple peak. The SPD is an inherent property of the hardware. Calibration and profiling can’t change the fact that the Pro Display XDR has a super-peaky red or that the M4’s blue is shifted about 10nm towards wavelengths we see as cyan – but our tools try to do the best they can.

These shapes are the foundation of the monitor’s gamut too – having narrower peaks means more pure colors to increase gamut.

There are several underlying technical reasons for these differences in SPD, such as the light source (backlight, OLED emitters), phosphors, color filters, quantum dot films, etc. The results may be a function of cost considerations and technical targets. An ideal SPD probably has well defined peaks with minimal overlap between red, green, and blue (to produce wider gamut), a controlled blue spike (to minimize eye strain), etc.

To measure and improve the accuracy of monitors, we have two fundamental types of measurement tools:

  • Spectrophotometers
    • These are very expensive devices (typically $2,000 – $50,000), so they are typically only used by serious professionals with a massive budget, certified TV calibrators, scientists, or display manufacturers.
    • These are very accurate and can measure specific wavelengths (with precision down to 1-4 nanometers being common). In other words, they can see the actual SPD as shown in the graphs above.
    • My CR-250-RH has a spectral bandwidth of 4 nm (covering the 380 – 780 nm range). This is the minimum performance needed for Apple’s full calibration (for example you cannot use the $2k i1 Pro 3, as it only measures down to 10 nm precision).
    • This is not a device any photographer needs. I purchased one for some very specific reasons. I wanted to educate myself (factory results vs best possible, better understand various HDR display technologies, etc) and to facilitate better reviews of HDR monitors (as I test them frequently and often find color matching functions for colorimeters are missing or unclear even in CalMAN for the display I wish to test). As you’ll see below, it is very easy to use and does improve the accuracy of even Apple’s XDR displays. So if money is truly no object, go for it – but the gains are modest.
  • Colorimeters
    • These are relatively low cost devices (often $100-500 for consumer / prosumer grade).
    • They most commonly have 3 colored filters to detect light (though they might use 2-10).
    • This makes them relatively analogous to human vision, but their sensitivity to specific wavelengths is different and varies quite a lot from one device to the next.
    • They cannot tell you the SPD of the display; they don’t have nearly that level of precision. My colorimeter collects 3 data points per color (while my spectro collects 100).
    • As a result, they are only useful when you make some assumptions about the SPD of the display you are testing. That is why you would typically be asked which type of display you are measuring (mini-LED, LCD, OLED, etc).
    • That gets you in a reasonable ballpark, but limits accuracy. My spectro automatically knows the difference between the SPDs for the monitors above and anything else I might test – but my colorimeter can’t tell the difference and the only hint it gets is me choosing the generic “mini-LED” to interpret the data it collects.

You certainly don’t need the accuracy of a spectro for photography, but it’s helpful for making more definitive evaluations of displays like this. Apple’s XDR displays hold up very well under scrutiny with lab-grade measurement tools.
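To make the distinction concrete, here is a toy Python sketch of how any measurement device reduces a spectrum to color: the SPD is integrated against three color matching functions. The Gaussian curves below are simplified stand-ins for the real (tabulated) CIE 1931 functions, and the two example spectra are hypothetical, so the numbers are purely illustrative:

```python
import numpy as np

# Wavelength grid: 380-780 nm in 4 nm steps (matching the CR-250's bandwidth)
wl = np.arange(380.0, 781.0, 4.0)
STEP = 4.0  # nm

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Simplified Gaussian stand-ins for the CIE 1931 color matching functions
# (the real curves are tabulated; the real x-bar also has a small blue lobe).
x_bar = gaussian(wl, 600, 35)
y_bar = gaussian(wl, 555, 40)
z_bar = gaussian(wl, 450, 25)

def spd_to_xy(spd):
    """Integrate an SPD against the matching functions, return (x, y) chromaticity."""
    X = float((spd * x_bar).sum() * STEP)
    Y = float((spd * y_bar).sum() * STEP)
    Z = float((spd * z_bar).sum() * STEP)
    s = X + Y + Z
    return X / s, Y / s

# A hypothetical white made from narrow "peaky" primaries (like a KSF red)
# vs one made from broader primaries (like quantum dot):
peaky = gaussian(wl, 450, 10) + gaussian(wl, 540, 12) + gaussian(wl, 630, 8)
broad = gaussian(wl, 455, 20) + gaussian(wl, 545, 25) + gaussian(wl, 625, 18)

print(spd_to_xy(peaky))
print(spd_to_xy(broad))
```

A colorimeter effectively performs three such integrals in hardware with fixed filters, so it never sees the spectrum itself; a spectrophotometer records the full spectrum (100 points for my CR-250) and can apply exact matching functions to any display technology.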

The best of HDR photography in 2025

HDR photography is rapidly moving towards being a mainstream technology. It’s supported by billions of devices, nearly all web browsers, popular editing software like Lightroom and Photoshop, social media sites like Instagram, and more. 2025 saw several important milestones, including numerous open source developments which may finally address challenges in distribution and allow HDR adoption to accelerate significantly in 2026.

Last year, I summarized key achievements in 2024. Let’s take a look at what’s happened since then, and what’s coming next in the year ahead.

 


There have been numerous developments in the past ~18 months which give photographers the opportunity to make serious use of HDR. The collective impact of these updates is that we now have excellent support to edit and export HDR images which can be viewed as HDR by a large audience (particularly on Instagram) and safely shared anywhere (even on devices lacking HDR support).

Notable HDR photography improvements in the past year include:

  • January: Web Sharp Pro v6 added significant new support for creating and sharing HDR gain maps, including support for Instagram and Threads.
  • February: LRTimelapse added the ability to create HDR time lapse videos from your photos in Lightroom.
  • February: The budget iPhone SE was updated with OLED. You cannot buy any phone or computer from Apple without HDR now.
  • March: ASUS PA32UCDM introduced the first 1,000-nit OLED with high color accuracy for under $1800.
  • March: Safari Tech Preview 215 added support for HDR images (including native PQ encoding in JPG, AVIF, and JXL).
  • April: Google Photos app v7.24 added support for Ultra HDR JPG
  • April: iOS 18.4 / MacOS 15.4 added support for Android XMP gain map encoding, which helps to support old Android images (those captured before the ISO standard).
  • June: Adobe’s Project Indigo app adds powerful controls for capturing HDR images on a mobile phone.
  • June: Halide Mark III beta adds “Process Zero” HDR support for natural HDR
  • June: MacOS Tahoe adds options in the Digital Color Meter app to measure HDR pixel values on screen (EDR and EDR Linear).
  • June: AVIF with ISO gain map support is now enabled by default in Chrome, Edge, Brave, Opera, and Safari.
  • June: Android adds support for encoding HDR images as HEIF gain maps (on premium phones).
  • June: PNG 3rd-edition specification adds official HDR support.
  • Q2: The Lenovo Yoga Aura became the first laptop to offer DisplayHDR TrueBlack 1000 performance. This marked the first time a PC offered a good HDR experience.
  • Q2: 4th-gen Samsung 5-layer tandem QD OLED enabled the launch of several bright OLED monitors in the $900-$1200 price range.
  • August: the Hasselblad X2D II became the world’s first camera with end to end HDR support, boasting a 1400 nits HDR display, 15.3 stops of dynamic range, support for encoding excellent-quality gain maps, and a resounding endorsement from the world’s strongest camera brand that HDR is the future of photography.
  • September: Safari added HDR support to MacOS Tahoe, as well as WebKit for iPadOS / iOS 26 (which means not only Safari, but all browsers on mobile such as Firefox now support HDR photos).
  • September: the open source library libvips added support for HDR gain maps (pull request 4645). This is important on its own, and helps power critical developer tools like SharpJS and WordPress Media Experiments.
  • October: Photoshop improved 32-bit support in curves, histogram, and color picker.
  • October: Lightroom adds HDR-specific options for “edit in” (Preferences / External Editing), including support to choose the Rec 2020 colorspace for SDR or HDR; the HDR Limit now offers more granular control as a slider instead of a dropdown.
  • November: Krita began adding HDR support for Linux.
  • November: Black Friday prices hit new all time lows for several high-quality HDR monitors, making affordability better than ever.
  • December: The ASUS ProArt P16 laptop launched with a 1600 nits OLED display boasting Display HDR 1000 True Black and PANTONE certification, making it the first PC laptop on par with the Apple MacBook Pro for HDR performance and accuracy – while offering the perfect blacks of an OLED.
  • December: The open source library Sharp JS is adding support for gain maps (via libvips). This wildly popular developer library gets 22 million downloads per week via NPM and is rapidly growing. It’s supported by Gatsby and is the default engine when using Next.js’ built-in <Image> component (or its image optimization pipeline), which means it is enabling support in other downstream developer tools. This could enable many websites to easily adopt HDR support.
  • December 15: LR Android now supports HDR display and editing on all Samsung S24 and S25 models. As with all HDR I have seen on Android devices, headroom appears limited to 2.3 stops in spite of displays supporting up to 2600 nits (iPhone offers 3 stops with only 1600 nits and Apple MBP laptops support 4 stops with 1600 nits).
  • December: I added an AI chatbot to my HDR monitors, support, and info pages to help newcomers get set up.
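A note on the “stops of headroom” figures above: headroom is simply log2 of peak HDR luminance over reference (SDR) white. A quick sketch, assuming a ~200-nit reference white (consistent with the iPhone numbers quoted above):

```python
import math

def hdr_headroom_stops(peak_nits, reference_white_nits=200.0):
    """HDR headroom in stops: each stop is a doubling above reference white."""
    return math.log2(peak_nits / reference_white_nits)

print(round(hdr_headroom_stops(1600), 2))  # 3.0 (iPhone-class display)
print(round(hdr_headroom_stops(2600), 2))  # 3.7 (what 2600 nits could support)
```

The actual reference white varies with the brightness slider and operating system, so treat this as an approximation rather than a spec.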

 

 

Where is HDR photography headed in 2026?

There are many HDR improvements which seem likely to arrive in 2026:

  • HDR Monitor options should continue to improve significantly
    • Multiple YouTube channels with access to Samsung at CES mentioned the company set its sights on 4k and 5k OLED monitor panels for 2025, with higher peak brightness being a key objective for 2026. If they deliver, this would likely translate into several new HDR monitors by May (as this is the side of their business that sells panels to other monitor makers). We should find out very soon, with CES 2026 in early January.
    • Expanded PC laptop support for HDR. We finally got a couple of great HDR Windows laptops this year, and that trend is likely to expand in 2026.
    • Prices should continue to decrease year over year (while options grow).
    • To give you a rough sense of where things are going in terms of computer monitors which can keep up with capabilities we already have in phones and TVs… If you ask ChatGPT “what is likely CAGR for monitors supporting 1000+ nits between now and 2030?”, it estimates a roughly 23% compound annual growth rate (citing several sources). That feels very plausible to me. Options are rapidly expanding now, but from a small installed base (Apple MacBook Pro has performed at that level for five years, but we only just got the first comparable PC laptop in mid-2025).
  • Distribution support expands, helping to address the key barrier to adoption.
    • With the recent support added to open source libraries (libvips and Sharp JS, noted above), there is a good chance we start to see a nice expansion in the number of websites which support and preserve HDR when you upload photos. If you are requesting support from your favorite web platform, please note that I’ve linked many important HDR resources for developers at http://gregbenzphotography.com/hdr#developers.
    • The WordPress Media Experiments dev team is exploring adding libultrahdr to support HDR gain maps. This could significantly expand the ability to share HDR photos, as WordPress powers >40% of all websites. It is possible to work around this now (by using the “full” size option), but this would make things vastly simpler and add support for derivative assets such as thumbnails.
    • Google’s open-source libultrahdr is expected to add AVIF & HEIF gain map support. This should help make it easier to transcode from iPhone captures (HEIC sources) and to encode much smaller and higher quality images (AVIF).
    • I anticipate adding AVIF gain map support to Web Sharp Pro.
  • AVIF begins to replace JPG
    • As discussed in which file formats to use for photography, AVIF is poised to replace JPG. It offers higher image quality at half the size. It’s a great format for SDR, and for HDR promises to give us gain maps smaller than today’s JPGs which is great news for faster loading of websites or sending smaller files.
    • 95% of internet traffic is now on a browser supporting AVIF, and all browsers which support HDR also support AVIF gain maps. While JPG gain maps are 100% safe, we are getting very close to the point where AVIF can start to replace it. The long tail takes a while, so this is likely to be a long process that starts late in the year. For those of you who are pioneers, this is a great opportunity to get your website to load much more quickly.
  • Wider gamut TVs. RGB mini LEDs are coming from Sony and TCL. Outside of laser projectors, this will be the first time consumer displays show nearly full coverage of the Rec2020 gamut (which is the ultimate target for HDR).

Beyond this, there are various rumored developments – but who knows what the probabilities are. So I’ll focus instead on my “wish list” of developments that would be high impact for HDR photography should they come to pass…

 

Wish list: what else do we need for HDR photography?

I have no idea whether any of the following are in development or under consideration, but they would significantly accelerate adoption of HDR photography and increase its value (in no particular order):

  • Support for JPG gain maps in popular sharing platforms, especially Adobe Portfolio, SquareSpace, Wix, and Facebook.
    • These are the platforms I hear requested most from my audience for HDR support.
    • Please click on each above for a link to request support.
  • Simpler / better HDR in Windows.
    • You can get great results under Windows, but it isn’t nearly as simple as MacOS (where HDR is generally enabled by default and great results are either the default or much easier to obtain).
    • For example, there are 3 different brightness sliders: “brightness” for laptops, “HDR content brightness” is a secondary control for laptops, and “SDR content brightness” is a secondary control for external monitors. These sliders are not well described and hard to control. The HDR content brightness slider is especially terrible – I find it leads to clipping of HDR highlights on great laptop displays if you don’t adjust this slider, and it affects photos and videos differently. This is unnecessary – MacOS offers a single brightness slider regardless of which HDR display you are using – it’s simple, intuitive, and works very well.
    • HDR is not enabled by default even on great HDR laptops where the display is well known. This undermines the value of premium laptops and puts the burden on the user to be aware of these controls and how to optimize their display.
    • The Windows HDR Calibration app is more likely to degrade than improve your results (and seems to occasionally produce some absurd HDR headroom numbers).
  • Expanded support in the Apple ecosystem:
    • Support for HDR photos via AirPlay on the AppleTV. This would enable a very simple way to get content onto the massive HDR display nearly everyone has at home.
    • Support for ISO-encoded HDR JPG / AVIF in iMessage. Apple already supports their own JPG gain map encoding (ie images captured with the phone), and being able to easily share images edited with Adobe software would significantly help share the highest quality images edited by artists.
    • Support to retain ISO gain maps for images synced via Apple iCloud.
    • An updated Apple Studio Display monitor with 1000+ nits support would bring easy-to-use, high-quality HDR to a much larger number of creators. Apple has been a consistent pioneer and champion of HDR since 2018, and a monitor priced for a large audience would help cement their already considerable lead in HDR display hardware.
    • Full utilization of ISO gain maps. The current support is welcome, but not fully optimal / consistent with the ISO standard. This can result in loss of brilliance for images encoded for >3 stops of headroom and lower quality when adapting to displays with < 2 stops of headroom (such as iMac). I recommend Chrome or related browsers like Brave for those of you using MacOS. It’s great that we have HDR support in Safari 26 now, but we need further improvement to get the most out of computer XDR displays and to support legacy HDR displays with limited headroom.
  • Unlock full potential of Android displays
    • Android displays are currently limited to 2.3 stops of headroom in software. With the latest Samsung and Pixel phones offering 2600-3300 nits, the display hardware is capable of supporting 4 stops of headroom in controlled lighting (ie viewing indoors). By comparison, Apple iPhone offers up to 3 stops with 1600 nits peak HDR.
    • It would be ideal to see Android allow at least 3 stops of headroom. The hardware supports it, many computers already support 4 stops, and up to 6.6 stops of headroom is beneficial (which is already supported by some high end TVs which support the full 10,000 nit PQ spec – though there is no indication monitors or mobile devices will pursue > 4 stops anytime soon).
    • It would be ideal if the mobile operating system could also intelligently limit headroom when ambient light is very low and the brightness slider is set to low values (same for Apple, this is not an Android-specific concern). This would help mitigate the “HDR is too bright” concern voiced by those who tend to scroll a tiny, bright display in a dark bedroom (while still allowing full HDR in more favorable conditions). This was on my wish list last year for both iOS and Android, and remains something I believe would be very beneficial for the HDR ecosystem to ensure users and apps do not resort to draconian measures like disabling HDR entirely. This needs to be addressed at the operating system level to ensure consistency, to avoid redundant or problematic solutions from developers who likely lack expertise in these complex human factors concerns, and potentially to protect user privacy (leaking ambient light data may enable “fingerprinting”, where an individual is tracked by combining several otherwise innocent pieces of data about the device).
  • Support in Adobe Bridge
    • Adobe has done an incredible job adding support to ACR and all 5 versions of LR. However, if you use ACR (but not LR) there is no simple way to browse and manage your work as HDR.
    • Please vote for HDR support in Bridge (if you have comments, please write in your own voice – not copy/paste).
  • Greater support for accurate color in monitors:
    • Manufacturers should ideally all offer a mode designed to offer the highest accuracy possible with the factory calibration. This is the default for smart phones, but computer monitors are all over the board in terms of design target and accuracy.
    • Apple and ASUS ProArt offer great color by default and BenQ has an optional “Display HDR” mode which brings factory results reasonably close. But beyond that, most HDR monitors are optimized for gaming and showing off lots of color in the showroom. They tend to have terrible color accuracy and could offer a better starting point by adding a display mode intended for accuracy. I would love to see Eizo get into the game, as they would surely do an outstanding job.
    • There is also an HDR working group at the ICC, and a standard for profiling monitors would be of immense benefit. However, such efforts are likely to take time (creation of a standard and then adoption in operating systems as well as 3rd-party profiling software).

And beyond this, we will certainly see many other announcements or enhancements for HDR throughout the year. There is massive support across a wide range of hardware vendors, developers, and individuals. We are clearly headed towards a time where HDR is a mainstream technology with widespread support.

 

How did we do on last year’s wish list?

If you review my wish list from 2024, we saw the following:

  • WebKit / Safari support: yes! As noted above, further enhancements are needed – but this is an excellent start and significantly expands support for HDR.
  • Transcoding support: As noted above, there was tremendous progress in open source libraries (libvips, SharpJS) and that moves us much closer to general support for sharing HDR on the web.
  • ICC profiling: We do not yet have a standard, but I feel good about where things are going. This is probably the most complex piece of the puzzle (tough scientific questions requiring broad industry agreement) – it will take some time.
  • More support for showing HDR photos on a TV: I’m disappointed we do not yet have support from major set top boxes such as AppleTV to easily share HDR photos on the great, large HDR display we all have at home. Hopefully we see movement here in 2026.
  • Solutions for the “too bright” concern at the operating system level: I have not observed improvement here, but continue to maintain that it would be ideal for HDR headroom to be limited when viewing in a very dark room at low brightness (especially on phones). This simple tweak would significantly help address concerns from those who scroll phones in bed, while retaining the benefit of HDR elsewhere (ie avoiding draconian solutions like disabling HDR generally).

So overall, we saw significant progress on most of these goals! The HDR ecosystem is expanding and improving at a rapid clip. We are at a point where transcoding (preservation of HDR when sharing images) is the key barrier to adoption. Once we have more outlets to easily share HDR, we should see a rapid uptick in adoption.

 

How to edit as PSB with Lightroom v15.1

Adobe just added an incredibly helpful update to Lightroom Classic (LrC) v15.1…

Lightroom has supported the ability to preview and manage existing PSB files for years now, but it was always missing a critical piece of the puzzle – the ability to create new layered edits as PSB files. With the old workflow, you could only choose TIF or PSD formats, which are limited to 4GB and 2GB respectively. That’s very limiting if you use smart objects for completely non-destructive workflows, exposure blending, make massive prints, have a 100 megapixel camera, etc.

Things get messy when you run into that limit. Photoshop will throw an error when you cross that threshold. Your only choice at that point is to either simplify the image (such as flattening layers) or to do a new save as PSB (in which case you’ll have to import it to Lightroom and will probably want to hunt down the old TIF to delete it).

Now in LrC v15.1, you no longer have to worry about this problem. Simply choose PSB for all your new edits (under Prefs > External Editing). When you right-click your RAW file and choose to “edit in” Photoshop, you’ll be saving a PSB file. No more file size limits.

And if you are ok saving files which are ~2x larger, you can also increase the speed of your file saves by 10-20x! Compression is typically required when managing the limits of TIF / PSD, but is truly optional when saving as PSB. That image that takes 60s to save? It may only need 3s now (because the bottleneck wasn’t writing to the drive, it was the compression calculations). In Photoshop, go to Prefs > File Handling and check “disable compression of PSD and PSB files“.

The PSB file format (also known as “large document format” in PS) works exactly like a TIF inside Adobe software, keeping all your layers and complex information – but it has no file size limit! Ok, there is a limit of 300,000 pixels per side (good enough for an 83′ print) or 4 exabytes (ie 1 billion times larger than the limits of TIF). We should be good for a while.
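As a quick sanity check on those limits (assuming a 300 ppi print, and treating the 4 GB / 4 EB figures as binary units):

```python
# Longest printable side at the 300,000-pixel PSB limit, assuming 300 ppi
max_side_px = 300_000
ppi = 300
feet_per_side = max_side_px / ppi / 12  # pixels -> inches -> feet
print(feet_per_side)  # ~83.3 feet

# How much larger is the 4 EB PSB ceiling than the 4 GB TIF ceiling?
tif_limit_bytes = 4 * 2**30  # 4 GiB
psb_limit_bytes = 4 * 2**60  # 4 EiB
print(psb_limit_bytes // tif_limit_bytes)  # 1073741824 (~1 billion)
```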

Are there any downsides to PSB? You may find that 3rd-party software does not support it as well, or that your preview sometimes does not look correct in a file explorer. That’s about it. File sizes are very similar in my experience when comparing TIF and PSB (including with “maximize compatibility”, which is required to see the preview in LrC). So if you only do basic editing in Photoshop and browse your images in Explorer/Finder often, you might not want to adopt this change. But this is a completely safe thing to do, even if you’re worried about other software (Affinity supports it just fine). You can always re-save a PSB as a TIF if you need to use the image elsewhere (you might need to flatten layers given the limits of TIF, but you would have done that anyway and now you have a better PSB file for editing as needed). I have been using PSB for well over five years and never once had an issue – but I had many headaches when I started working in TIF and then hit the limit.

 

Other updates in LrC v15.1:

This update contains a mix of other nice quality of life improvements including:

  • improved quality for previews in the import dialog
  • expanded support for new cameras, lens profiles, and tethering support for more Leica cameras
  • batch renaming may be undone via ctrl/cmd-Z
  • bug fixes and more
Greg Benz Photography