New in Adobe Camera Raw 17: “Adobe Adaptive” profiles, non-destructive Denoise, and Generative Expand

Adobe Camera Raw (ACR) v17 just added some very interesting new AI features:

  • NEW: “Adobe Adaptive” profiles.
  • NEW: “Generative Expand”.
  • Updated: AI Denoise, Raw Details, and Super Resolution can all be applied non-destructively on even more RAW files (details below).

These features make it easier to get great results and can significantly simplify your workflow. Let’s dive into each of them.

 

What is the new “Adobe Adaptive” profile and how do I use it?

The various profiles we’ve had in the past (Adobe Standard, Adobe Color, etc.) are fixed starting points. The new AI-based “Adobe Adaptive” is meant to provide a better starting point by analyzing the image to generate a custom profile. Its effect is somewhat like adjusting the sliders for increased shadows and decreased highlights: it compresses the tonal range. The greatest benefit seems to be in large areas of shadow (common in landscapes) or small areas of nearly blown highlights (such as city lights).

In ACR, just click the profile dropdown and select “Adobe Adaptive (beta)”. That’s really it. You’ll immediately see changes and likely some very impressive results. Any existing sliders or local edits will remain as they were. That’s often fine, but you’ll probably want to make some further tweaks to get the most out of it.

Aside from the profile itself, there is an “amount” slider available when the adaptive profile is active. If you drag the amount slider down to 0%, you’ll get the same result as the “Adobe Standard” profile. This lets you easily back off from the AI if its effect is too strong, which is often the case (the default can look a bit like the results from older tone-mapping software, where shadows are too light). Conversely, you can increase the amount up to 200% to really lean into the effect it has on the image.

For more info, see Adobe’s post on adaptive profiles.

 

When should you choose Adobe Adaptive?

Without more experience, it’s hard to predict what the best uses for such a complex new feature may be. It will likely appeal to many novice users who are unclear how to get the kind of incredible results that are typically the norm when shooting with a smartphone. If you’ve struggled to get the best results out of your fancy camera, you’ll probably love this new feature.

My experience so far suggests that a wide range of images may benefit, even for skilled editors. I have seen great improvements in areas like nearly blown highlights. It appears to be safe to use on a wide range of images (including those which have already been edited or use HDR). The quality of results will probably surprise many advanced users.

 

When should you avoid Adobe Adaptive?

As incredible as this new feature is, there are some scenarios where you may wish to skip the adaptive profile or exercise caution:

  • First, keep in mind this is a beta. There may well be bugs, and performance may change over time (i.e., re-editing later might produce a different result).
  • You will not be able to use this feature when working with the Camera Raw filter. At this time, you can only use the adaptive profile when opening RAW images or editing RAW Smart Objects.
  • There is likely a long learning curve to optimize results. Some types of images or workflows may work best with the adaptive profile, while others may be better served by the regular profiles.
  • Do not use the adaptive profile in addition to the “auto” button. ACR will explicitly warn you against this, as the auto feature is not currently optimized to work with adaptive. There will likely be many requests for that, as it would be a very handy combination in the future for those seeking very quick and simple edits.
  • Those who prefer to work in Lightroom should probably wait, as support is currently limited to ACR. You can of course use RAW Smart Objects and view your edited TIF in LR, but you should do the entire edit in PS / ACR if you’re going to use the adaptive profile.
  • Be careful if you enable the adaptive profile for images you have already edited, as your sliders may need some tweaking with the new profile. That said, I’ve seen some images which benefit nicely.

 

What is “generative expand” and how do I use it?

Photoshop’s cropping tool has had “generative expand” for a while. It allows you to “outcrop” or expand the image area and use AI to create new pixels at the edges. This is great for things like adding more sky when exporting your image for social media. However, it is a destructive workflow: the new pixels will probably be useless if you change the original edit.

With ACR 17, generative expand can now be done directly in the RAW file. This has a couple of important advantages:

  • Non-destructive. You can make any changes you wish to RAW settings and will not have to recreate the new pixels.
  • Avoid cropping when making geometry corrections. For example, if you need to tilt or rotate the image to straighten some lines, you can now simply fill in the gaps in the corners rather than cropping out parts of your original image.

To use generative expand:

  • Enable the technology preview. In Photoshop, go to preferences / File Handling / Camera Raw Preferences / Technology Previews, and check the option there.
  • Go to the crop tab (near the top right). This now includes geometry adjustments (aka “transform” in Lightroom).
  • Expand the crop and / or make geometry adjustments as desired. If you are cropping, be sure to check “enable expand”.
  • Click “generative expand”.
  • Note that the results outside your currently filled area won’t be optimal, so if you need to expand further later, you will likely need to re-run generative expand.

This is a very exciting feature which targets an important need. However, it is definitely a technology preview, and the results are sometimes not great. It seems to work best in areas of simple detail or texture, such as expanding the sky, so be sure to check your results for quality. While it isn’t perfect, it’s an exciting start and should continue to improve from here.

 

What are the benefits of the new “non-destructive” AI enhancements?

Adobe has generated a lot of buzz around several AI features for RAW images, including:

  • “AI Denoise”, which offers incredible improvements on images at any ISO.
  • “Raw Details”, which enhances detail within the native resolution of the image.
  • “Super Resolution”, which doubles the linear resolution of the image (i.e. 4x the total pixel count).

What’s new is that you no longer need to generate a new image, and a much wider range of RAW files is supported. You simply enable the feature in ACR and your existing RAW image will be enhanced. This has some important benefits:

  • Less file clutter, as you aren’t generating a new DNG and no longer need to consider whether you should retain the original RAW (just in case).
  • You can upgrade existing edits. For example, if you used RAW Smart Objects for your work, you can simply turn on AI Denoise to improve the final result without having to redo the edit.
  • You can work with many more RAW source files, including: HDR and panorama DNG files, Apple ProRAW DNG, Samsung Galaxy Expert Raw DNG, etc.
    • Nearly any RAW file should now work (other than exotic sensors such as Foveon).
    • Not currently supported: raster images (such as TIF or any use of ACR as a filter) and Lightroom.

To use this feature, you must enable the technology preview (as above, go to PS preferences / File Handling / Camera Raw Preferences / Technology Previews).

 

Conclusions:

The vision for these tools is amazing, and I hope to see ongoing improvement to address a few opportunities (which is to be expected for any “tech preview”).

The overall picture in ACR v17.0 is:

  • Non-destructive denoise / raw details: Amazing and great to use now. It works like before, but is just much easier and supports more RAW files.
  • Adaptive profile: Very helpful for some images. Great for enhancing shadow detail, as well as taming some bright highlights. (Be sure to adjust the “amount” slider to optimize results).
  • Generative Expand is on the right track. It can be useful for some social media edits, but needs work to be useful for high-quality output such as large prints.

Collectively, these show great vision to bring useful AI capabilities into RAW editing, where they can provide the most benefit by allowing you to work in a fully non-destructive manner. This is a great update, and it will be just as exciting to see these capabilities expand and mature over time.

Is this 1000 nit HDR monitor for $330 too good to be true?

New course! I just released my new Milky Way: from Start to Finish course. This covers a complete edit including RAW processing, noise reduction, blending and halo correction, dodging & burning with luminosity masks, supporting both print and HDR, and more. The course is available for purchase now, but if you purchase any of the 5DayDeal bundles (on sale Oct 1-11), I’ll give you the Milky Way course as a free bonus (note that only purchases through my affiliate link on this site or my newsletters qualify for the free bonus).

 

HDR monitors tend to cost $1,000 or more, so Xiaomi’s launch of a 1000-nit HDR monitor for only $330 immediately raises questions. Is this the best value ever offered for an HDR monitor? Is it a piece of junk? Good enough for photographers?

I bought the Xiaomi Mini LED Gaming Monitor G Pro 27i (model P27QBA-RGPGL) and it’s, well, pretty interesting. If you’re looking for a low-cost way to get into HDR photography, it’s definitely worth a look, with some caveats covered below.

This monitor features:

  • A very attractive $330 price point
  • 1000 nits mini-LED with 1152 dimming zones (DisplayHDR 1000 certified)
  • 99% DCI-P3
  • 180Hz refresh rate
  • 27″ size with 2K QHD resolution (2560×1440)
  • Included stand supports a wide range of adjustments

Learn more about these terms and how to shop for an HDR monitor here.

HDR specs like these are simply unheard of at this price point. Even if you only got 1-2 years of service out of it, it would probably pay for itself as the price and availability of HDR monitors continue to improve.

 

How’s the image quality?

The key headline here is the DisplayHDR 1000 certification. The high peak brightness and numerous dimming zones offer a very compelling HDR experience. Actual peak brightness somewhat exceeds the 1000-nit spec. You should be able to achieve 3-3.5 stops of HDR headroom in ambient lighting conditions typical for photography work, which is highly capable for a wide range of HDR work.
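To put those “stops of headroom” in context: headroom is commonly expressed as the base-2 log of peak brightness over your chosen SDR white point. A quick sketch of that arithmetic (the ~1310 nit peak is my measurement from the local dimming section below; the 100-160 nit SDR white points are assumed typical values for indoor editing):

```javascript
// HDR headroom (in stops) = log2(peak brightness / SDR white point).
function hdrHeadroomStops(peakNits, sdrWhiteNits) {
  return Math.log2(peakNits / sdrWhiteNits);
}

// At the ~1310 nit measured peak, with SDR white set to values
// typical for indoor photo editing:
console.log(hdrHeadroomStops(1310, 100).toFixed(1)); // "3.7" stops
console.log(hdrHeadroomStops(1310, 160).toFixed(1)); // "3.0" stops
```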

Equally important is the ability to control black levels to ensure high dynamic range. Haloing is inherent to any mini-LED design, and this monitor performs quite well, though the story is a bit complicated. It is simultaneously better and worse than the Pro Display XDR (that it can keep up at all with a monitor costing nearly 20x more is incredible). The mixed performance is due to sensitivity around viewing angle. When you look directly at the monitor, haloing is very minimal. However, it is much more pronounced when you view from an angle. So when you are sitting normally in front of the monitor, the Xiaomi offers better blacks than the XDR in the center of the display but clearly inferior blacks near the edges (unless you move your head). It’s not going to win any awards in a star field test. As you get into midtones against HDR highlights, a test pattern such as a +3 stop red over a mid gray will clearly look better on the XDR than the Xiaomi. But when I test real photos (such as a sunset behind a building or detail around a window), I see no issue. There is also some increased variability around the zones, which can manifest as a cursor which appears to flicker a bit if you move it across a black screen. In the rare case where you might want a more accurate look at very dark shadows, you could always move your head to look straight on. Overall, the local dimming is sufficient or very good for almost all practical photography. It exceeds my expectations for this price point and should be very suitable for anyone who doesn’t make a living from photography.

The monitor’s control for local dimming (advanced / local dimming) is unusual. It’s more like a brightness control than something which impacts halos. When set to low, peak brightness measured ~1150 nits. When set to medium (default) or high, it measured ~1310 nits. I would leave it at the default medium value and adjust brightness from display settings in Windows or MacOS (Sequoia has a new slider for 3rd-party monitors). The low setting seems to help somewhat with the highlight rolloff mentioned below, so you could try playing with a combination of the monitor’s dimming controls and the operating system brightness slider if you’re comfortable with these controls.

The monitor spec says it should support 180Hz refresh rates, but I am seeing a maximum of 144Hz on both my Mac and PC. I have never tried using either with a higher refresh rate, and I suspect the computers themselves may be the limit here. In either case, photographers need only 60Hz for a decent experience zooming and panning images, and 120Hz is ideal (primarily for smoother text while scrolling). So 144Hz is more than enough, and I’m not convinced that a higher refresh rate would be all that meaningful at 2K resolution anyhow.

What about downsides? This is a budget-oriented monitor which offers 2K resolution at 27″. That’s a great option which will meet the needs of many photographers, but it isn’t 4K at 32″. I find the difference most noticeable when reading text (compared to my 6K Pro Display XDR, letters are clearly a bit jagged). I used a 2K monitor happily for many years, but I certainly appreciate what I have now.

Highlight details appear slightly clipped / compressed. If that bothers you, it would be easy to work around in editing by treating it as slightly less headroom. It’s pretty hard to find examples where this affects the viewing experience in a browser. If you look at the bright right side of the arch in this image, there is some loss of detail, but you’d have to be pretty picky to notice or care for the majority of images.

This is also a gaming-oriented monitor. Many products aimed at gamers have rather terrible color accuracy, so I was fairly pleased with the results here. Neutrals such as the Photoshop UI are visibly shifted red, and warm highlight colors in photographs tend to be somewhat over-saturated. My tests in CalMAN corroborate this (though I’m not sharing detail as I don’t have a spectrometer to properly profile the C6 for this monitor). Gray EOTF tracking is generally a bit dark and shifts notably when the monitor’s local dimming is set to low (as a result, you might lose ~0.5 stops of headroom after boosting brightness in the operating system to compensate and achieve a comfortable level for productivity work). The uncalibrated color is not going to do well in numeric tests, but is close enough that a large number of photographers won’t care. If you don’t tend to get deep into calibration, you’ll probably be very happy with the HDR color. That’s important, as there is no standard for ICC profiling HDR at this time and this monitor does not have hardware controls for calibration. At the same time, you can always switch to SDR mode and calibrate that for your print work if needed. Just be aware that HDR will not work while that profile is active, and switching profiles on MacOS is a little cumbersome (you might want to look into using BetterDisplay, which has an option to change color profiles when HDR mode is toggled).

Overall, this isn’t a monitor on par with Apple XDR displays or ASUS ProArts (which offer calibration in the hardware), but the fact that it can even be compared to displays which cost 8-18x more than this Xiaomi is a testament to the value it offers. The image quality is surprisingly good and has exceeded my (low) expectations.

 

How’s the overall product quality?

The monitor and stand both physically feel like a quality product. The on-screen menus are easy to navigate. There’s little to configure, though I recommend going to System / Backstrip Lighting and turning it off if you don’t like the glow from the back of the monitor.

Inputs are limited to two HDMI 2.0, two DisplayPort 1.4, and a 3.5 mm audio jack. There is no USB / Thunderbolt support, so you won’t have power for a laptop nor the ability to use the monitor as a docking station. This also implies there’s no way for a user to upgrade firmware (so don’t expect fixes for things like the highlight rolloff I noted above).

I was unable to get audio to work over HDMI or the 3.5mm jack, but I spent minimal time trying. I assume it is not impressive, as most monitor speakers sound awful.

But I did experience challenges with HDMI. Neither my MacBook Pro nor PC recognized the display as supporting HDR when using an HDMI cable I have successfully used with several other monitors. When I switched to my FIBBR fiber-optic HDMI cable, things went smoothly (I’m a big fan of that cable, but just be aware that it is directional and the side marked “1” must be plugged into the computer to work properly). Refresh rate did not matter (144Hz was fine when HDR was working and dropping to lower settings did not help). Perhaps the HDMI ports on this unit are right at the quality threshold where a better cable is needed. It’s entirely possible my unit performs better or worse than others. As they say, your mileage may vary. DisplayPort (using the included cable) worked flawlessly for me.

I contacted support and found the options are a bit mixed. There is no phone support in the US, and the Facebook support page uses a bot which was completely unhelpful. However, their email support ([email protected]) was a somewhat more positive experience. I received a very thoughtful reply in less than 24 hours on a weekend. However, things went downhill from there. The support team provided information which seemed inaccurate. They did not seem to understand the difference between technical specs like HDR10 (i.e. information you can use to troubleshoot) and DisplayHDR 1000 certification (i.e. tests which just prove marketing claims). After they repeated the same information three times (I thought I was routed to a bot), they ultimately said the only information the support team has is on their website. I wouldn’t expect any vendor other than Apple to offer great service for understanding and setting up HDR at this point, so you should look for that kind of technical support elsewhere (I’ve tried to share a lot of that setup and troubleshooting information on my HDR page / e-book).

I cannot comment on the long-term quality of this product or how other units might vary. My sense is that you should assume a slightly higher risk of quality issues given the price point, my HDMI experience, and some reports online of problems (that said, all monitors run into dead or stuck pixels, and people with a bad experience are vastly more likely to report issues than those who have a good one). I have not seen any such issues myself, and Walmart shows an average rating of 4.7 out of 5 with nearly 1000 ratings submitted. My sense is that this will likely be a popular product that offers a great experience at a unique price point, but I’ve only had one unit for a short period of time.

 

Should you buy it?

This is a product I can recommend for those seeking to get into HDR at a low price point, with the caveat that you may need to try other cables or be ready to return the unit if you have problems. HDR support is excellent, image quality and HDR color should meet most photographers’ expectations, and SDR color accuracy can be improved with profiling if needed (but the profile needs to be toggled off to see HDR, as with any profiling today). It’s also exciting to see budget entries arriving in the HDR space, as this should help make it more accessible to a larger audience.

The only real alternatives for quality HDR at minimal cost would be an M1 or later Apple MacBook Pro (which has an amazing HDR display, but only at 14-16″) or using your TV as a monitor (or picking up a used OLED TV). There are many TVs which offer great HDR (especially in darker rooms) and can be calibrated. If you’re willing to spend a bit more and manage the minor downsides of using a 42″ TV as a monitor, the LG C4 OLED is an excellent option.

If you’re willing to spend a good bit more, the ASUS PA32UCX offers 4K resolution and hardware-based calibration. And if you have a ~$2600 budget, I highly recommend the ASUS PA32UCXR.

For more reviews and details on how to evaluate options, see my recommended HDR monitors.

 

Disclosure: This article contains affiliate links. See my ethics statement for more information. When you purchase through such links, you pay the same price and help support the content on this site.

What do Apple’s latest updates mean for HDR photography?


 

Apple recently released MacOS v15 (“Sequoia”) and iOS / iPadOS v18. There are several key changes which are significantly helpful for HDR photography, including:

  • support for a critical HDR file standard which should make it much easier to share images
  • greater support for HDR photos in native Apple apps
  • better support for 3rd-party HDR monitors
  • improved tone mapping to support SDR or less capable HDR displays
  • support in Keynote

 

ISO 21496-1 gain maps:

At WWDC, Apple announced support for developer APIs to read and write gain maps using the upcoming ISO 21496-1 standard. Apple refers to this as “Adaptive HDR”, which is a great term for gain maps, as they allow your photo to optimally adapt to any display. This is the key technology for HDR, as it allows us to support both HDR and SDR displays: everyone gets the best possible image on their screen. And this concept may be used with all important file types (JPG, AVIF, HEIC, JXL, DNG, etc).
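To illustrate the “adapt to any display” idea, here is a deliberately simplified sketch of how a viewer might apply a gain map. This is conceptual only – the actual ISO 21496-1 and Adobe encodings store the gain differently and include additional metadata – and the function and parameter names are mine, not from any spec:

```javascript
// Simplified sketch of the gain map concept (not the exact ISO 21496-1
// encoding): each pixel of the SDR base image carries a gain value, and
// the viewer applies only as much of that gain as the display can show.
function displayPixel(sdrLinear, gainStops, displayHeadroomStops, gainMapMaxStops) {
  // Scale the artist-specified boost to the available headroom, so an
  // SDR screen (0 stops) shows the SDR base image unchanged and a
  // capable HDR screen shows the full intended boost.
  const weight = Math.min(1, displayHeadroomStops / gainMapMaxStops);
  return sdrLinear * Math.pow(2, gainStops * weight);
}

// A pixel the artist boosted by 2 stops, shown on displays with
// 0, 1, and 3+ stops of headroom:
console.log(displayPixel(0.25, 2, 0, 3)); // 0.25  (pure SDR)
console.log(displayPixel(0.25, 2, 1, 3)); // ~0.40 (partial boost)
console.log(displayPixel(0.25, 2, 3, 3)); // 1.0   (full HDR intent)
```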

There are currently three different implementations of gain maps. The approaches used by Google (Android) and Adobe are quite similar to each other, but not identical in practice. Apple’s own encoding differs more significantly, though it is conceptually similar. Ultimately, the files are not interchangeable, and this creates confusing scenarios where HDR support is lost. For example, you can capture and upload gain maps to Instagram from an iPhone or Android, so long as you use the same type of device for both. But you cannot currently upload the iPhone image from an Android and vice versa (thankfully, once uploaded, it works anywhere HDR is supported). This mix of gain map formats has slowed adoption, as it creates significant cost, complexity, and work for developers. And that’s why the ISO standard is so important.

The ISO 21496-1 standard recently reached the “draft international standard” phase, which should mean that it is close to official approval. It will take a little time before it makes an impact, but it should significantly accelerate support for HDR on the web in 2025. Having a single standard will avoid failed uploads and should make it much easier for small developers to add support as standard software libraries adopt it.

This is a natural starting point, as it’s important to get viewing support before the images are worth creating and sharing (I am unaware of any software which currently supports creating images in the ISO format). So while Apple’s ISO support has no immediate impact today, it is a huge step forward that prepares Apple for the future of HDR and should help generate increased interest in the format.

 

HDR gain map support in native Apple apps:

Apple is implementing support for the ISO standard in several key areas:

  • iMessage
  • Photos app
  • Preview
  • Quick Look

This includes support for all four apps in MacOS, iOS, and iPadOS. These apps all support the older Apple format, which is still used when capturing HEIC / JPG photos with an iPhone / iPad.

The support in iMessage is particularly important, as you can now simply text your HDR photos to friends and family.

Support for the Adobe / Google gain map spec has not been added. That’s probably not too important, as ISO should quickly become the standard used by everyone. But it does mean that you won’t be able to share your existing HDR iPhone images with a friend using Android, at least not without re-saving the file through software like Lightroom.

 

Brightness slider for 3rd-party HDR monitors / TVs:

Prior to MacOS 15, you could not control the brightness (SDR white point) of a 3rd-party HDR monitor or TV. Only Apple’s monitors and laptops had this ability. For many people, this meant that their display was often too bright for productivity work like reading email and HDR headroom was limited. This ability to change brightness was the one area where Windows had a clear advantage with its SDR / HDR Content Brightness slider, but that is no longer the case.

MacOS 15 now shows a brightness slider under System Settings / Display for any monitor in HDR mode. It’s incredibly simple: you just slide it to adjust the brightness of the monitor so it is comfortable to view under your ambient lighting conditions. This very simple change has several important benefits:

  • You can adapt the display to be more comfortable to use and more appropriate for editing prints, without turning off HDR mode. The old default was 203 nits, which is at least twice as bright as ideal in most rooms without strong window light.
  • You’ll have more HDR headroom if you are able to dim the display for your room, which will often be the case.
  • You can more effectively use a wide range of HDR monitors and TVs.
    • Prior to this update, the only optimal way to view HDR under MacOS was with a 14-16″ laptop screen or Apple’s $5,000+ Pro Display XDR. Those are still best in class displays with many unique benefits, but it meant a loss of HDR headroom when using 27-32″ HDR displays at price points which are affordable to most photographers. That’s no longer a concern.
    • This is particularly helpful for working with OLED monitors, which excel in dark ambient light. You can now get incredible HDR results from a 42″ TV that costs only ~$800 or less.
    • See recommended HDR monitors list for more info.

With this change, MacOS now clearly offers the best overall HDR experience in terms of both quality and ease of use. Kudos to Apple for opening up their system for better 3rd-party support here; this adds tremendous potential value for a large number of Apple computer users.

 

Improved global tone mapper:

Gain maps absolutely offer the best image quality for sharing HDR images. They adapt in an ideal way to any display and leave you, as the creator, in complete control of the result. By contrast, if you view a simple HDR image without a gain map on an SDR display, it will be automatically adapted using a process known as “tone mapping”. In nearly all cases, a properly encoded gain map will offer more consistently high quality than tone mapping can achieve with an HDR image (as it includes input from the artist to create an optimal SDR, allows for local adaptation pixel by pixel, and does not vary from one browser to the next). That said, some images may be shared as a simple HDR due to sub-optimal workflows, automation, or a desire to reduce file size. So you will encounter simple HDR images without a gain map on the web, and the quality of tone mapping is still important.
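To make “tone mapping” concrete, here is a toy global operator – a Reinhard-style curve, emphatically not Apple’s actual algorithm – that compresses HDR values into the SDR range with a soft highlight rolloff rather than a hard clip (the whitePoint parameter is an assumed tuning value):

```javascript
// Toy global tone mapper (extended Reinhard) for illustration only.
// It maps HDR linear values (which can exceed 1.0) into 0-1 SDR range,
// rolling off highlights smoothly instead of clipping them.
function toneMapReinhard(hdrLinear, whitePoint) {
  const w2 = whitePoint * whitePoint;
  return (hdrLinear * (1 + hdrLinear / w2)) / (1 + hdrLinear);
}

// A +2 stop highlight (linear 4.0) with an 8x white point:
console.log(toneMapReinhard(4.0, 8).toFixed(2)); // "0.85" -- detail retained
console.log(Math.min(4.0, 1.0));                 // 1     -- hard clip loses it
```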

The quality of Apple tone mapping prior to the recent release was, quite frankly, very poor. Highlights often lacked detail and color often looked cartoonish. Thankfully, the latest updates to MacOS, iOS, iPadOS, tvOS, watchOS, and visionOS (including Safari) all include a new global tone mapper which is extremely good. Highlight clipping is reduced and color has been significantly improved. I would still say that the quality of tone mapping in Google Chrome is clearly better, but only by a small margin at this point. Most people would likely say the new Apple tone mapping is a huge improvement, but probably would not notice any difference from the results in Chrome without a side-by-side comparison.

 

Support in Keynote

If you are updated to Apple Keynote v14.2 and have updated MacOS to Sequoia, you will be able to use HDR photos in your presentations. This may be a very handy way to show HDR photos in a slideshow on your TV.

 

What’s still missing?

Apple has established an outstanding ecosystem for capturing, editing, and sharing high-quality HDR photos. They have clearly been leading the way for HDR photography along with Adobe and Google. However, HDR photography is still fairly new and there are of course some opportunities for improvement across the ecosystem.

The key gap for Apple is Safari / WebKit, which does not support HDR photos in any format. While you can (and should) use Chrome, Brave, Edge or Opera for a great HDR experience on MacOS – that simply is not an option for mobile devices. On iOS / iPadOS, all browsers (including Chrome, etc) rely on WebKit. As a result, you cannot view HDR photos in a browser on your iPhone or iPad. You can see HDR photos in apps like Instagram or Lightroom, but lack of support for browsing HDR photos at this point is rather shocking. No one has put more energy or cost into creating great HDR displays than Apple, so hopefully we’ll soon get the browser software to properly use the incredible HDR displays they’ve been shipping in phones since late 2020 (iPhone 12).

There are a handful of important but less critical opportunities which remain to be addressed:

  • No support in tvOS or AirPlay 2 for sharing HDR photos on an Apple TV.
    • This would be incredibly helpful for sharing our HDR photos on the large HDR TVs most photographers already own.
    • There’s no technical limitation here (as far as I can see): we can already stream Dolby Vision movies at 60 frames per second over AirPlay, we just don’t have the software to send a still image.
  • Increased headroom on iPhone.
    • The new M4 iPad Pro’s XDR display is 1600 nits and supports up to 4 stops of headroom. Similarly, the MacBook Pro’s XDR display is 1600 nits and offers up to 4 stops (5 if you manually set the SDR limit down to 50 nits, though that is not recommended).
    • However, there is only support for 3 stops of headroom on iPhone, even though it also has a 1600-nit OLED.
    • Below ~80% brightness, the headroom is limited to 3 stops because the peak 1600 nits are not allowed. Ideally, we’d have access to the full brightness until the phone gets to much dimmer values (it probably should be limited near the bottom of the brightness range).
    • Aside from any software concerns, competing phone displays have gotten much brighter. The Samsung S24 offers 2,600 nits. This isn’t a huge gain (as even 3,200 nits would only be one stop brighter than the iPhone), but it definitely offers a visible improvement in the impact of HDR photos.
  • No support for HDR gain map thumbnails in Finder (Quick Look / Preview do support them). This is just nice to have, as many photographers will likely rely on software such as Lightroom to navigate their images.
  • No support for profiling HDR displays in order to improve color accuracy.
    • This is not an Apple-specific issue. There is no ICC standard for HDR profiling, so it isn’t an option on Windows either. If you create a custom profile under MacOS or Windows, you will lose HDR support. Your only options currently are to use a display which is very accurate out of the box, use one which supports calibration in the display hardware (such as an ASUS ProArt or most TVs), or accept limited accuracy / extra steps (you can toggle to SDR mode for an accurate display for printing).
    • Apple displays are so good that it’s quite optional, but that’s not the case for many 3rd-party displays – and many photographers have very high standards for accuracy.

Can I migrate my plugins from Photoshop to Affinity?

From time to time I get questions from customers about whether they will still be able to use my Photoshop plugins (Lumenzia and Web Sharp Pro) if they switch to Affinity. It’s a complex issue, but the short answer is that my plugins will not run under Affinity. This is by no means unique to me or my software, so I thought it may help to write a more detailed explanation to understand why some (many) Photoshop plugins cannot or will not work in Affinity.

A significant amount of confusion surrounds the term “plugin”, which is a generic reference that doesn’t really tell you its capabilities. By way of analogy, knowing you have a “car” doesn’t tell you if it can run on gas (vs needing electricity) – you need to know what kind of car. Similarly, there are vast differences between different types of “plugins” in Photoshop, and that impacts the potential that they might work with Affinity.

 

What is a plugin? What other types of 3rd-party tools are there?

The term “plugin” means at least 3 different things in Photoshop and there are other types of 3rd-party tools which may be used to automate Photoshop. So let’s start by quickly defining what options there are to add new capabilities or automation to Photoshop:

  1. C++ plugins.
    • They integrate with Photoshop through Adobe’s SDK (software development kit), but are otherwise standalone programs (compiled from code written in the C++ programming language).
    • They have their own user interface and do not rely on Photoshop’s native features.
    • Modern MacOS plugins have a “.plugin” file extension.
    • Windows and older Mac plugins may have an “.8bf” extension (for a filter; other types of plugins, for things like file formats, have other extensions starting with “8b”).
    • These plugins can be found in Photoshop primarily under the Filter menu (but also under File / Automate menu for non-filter applications like Topaz Gigapixel).
    • The actual plugin files may be found in MacOS Finder under /Library/Application Support/Adobe/Plug-Ins/CC/DxO/   (or the older location: ~/Applications/Adobe Photoshop <year>/Plugins)
    • The actual plugin files may be found in Windows File Explorer under C:\Program Files\Common Files\Adobe\Plug-Ins\CC   (or the older location: C:\Program Files\Adobe\Adobe Photoshop <year>\Plug-ins)
  2. UXP plugins (aka UXP panels).
    • These are written in JavaScript, HTML5, and in rare cases may be hybrid plugins (have some native capabilities in compiled binaries called from the JavaScript).
    • These are mostly or completely dependent on Photoshop APIs to do anything to your image. Their user interface is heavily dependent on Adobe APIs.
    • These are always found under the dedicated Plugins menu in PS.
    • These plugins are installed via the Adobe Creative Cloud installer (either directly through the Adobe marketplace, or by double-clicking a “.CCX” file provided by the developer)
  3. CEP plugins (aka CEP panels).
    • These are conceptually similar to UXP plugins, but use much older software standards.
    • These cannot run when Photoshop is running natively on Apple Silicon (they are possible under Rosetta, but this slows Photoshop performance by about 50%).
    • If supported, they are found under the Window / Extensions menu in PS.
  4. PSJS scripts
    • These use the same JavaScript standards and API as UXP plugins, but lack plugin features (not permanently installed, no dedicated panel interface, limited capabilities, etc).
    • These could be launched via the File / Scripts / Browse (or Scripts Event Manager) menu, by dropping on the Photoshop app icon, or by dropping on the Photoshop canvas when no images are open.
  5. JSX scripts
    • These use the same (older) JavaScript standards and API as CEP plugins, but also lack other plugin capabilities.
    • Unlike CEP panels, these may still run natively on Apple Silicon.
    • These may be launched with the same options as PSJS scripts (see the minimal script sketch after this list).
  6. Actions
    • These allow playback of Photoshop features and may also invoke features of UXP plugins which explicitly are designed to support action recording. Lumenzia supports such recording extensively (Web Sharp Pro also has a significant amount of support).
    • These capabilities are explicitly tied to Photoshop, but migration would typically be possible. Affinity’s term for this capability is “macros”.  A large degree of Photoshop functionality is available in a similar way in Affinity and a similar macro could often be created without extensive effort.
    • These are found in the Window / Actions panel in Photoshop.
  7.  BridgeTalk
    • This is an Adobe system for communication between applications. For example, this facilitates integration between Lightroom and Photoshop (such as opening / stacking images and bringing the new file back to Lightroom).
    • 3rd-party developers sometimes use this to leverage Photoshop, but it’s rare and you wouldn’t confuse it with a plugin as this software is run external to Photoshop.

 

Which of these can you continue to use when switching from Photoshop to Affinity?

Of all the items above, only #1 (C++ plugins) may be directly supported with Affinity – and only some of these plugins will work; it depends on the specific plugin. Your best bet is to use the official installer, but you may try manually installing your plugins: go to Affinity Settings / Photoshop Plugins, click the button to open the default folder, and then manually copy plugins there (i.e. to a location Affinity checks for plugins when it launches). You may need to enable “allow unknown plugins” in the same settings area (an unsupported plugin may cause Affinity to lock up and require you to force quit).

Item #6 (Actions) could generally be ported in many cases with modest effort, so there’s a good chance you can find these or make them yourself.

None of the other five technologies are supported under Affinity. So anything you find under the Plugins or Window / Extensions menu will not work. Or to put it another way, any panel you can dock somewhere in the Photoshop interface will only work in Photoshop.

 

Can UXP / CEP plugins be migrated or re-written? Maybe in the future?

As for UXP (or CEP) “plugins”, there is no comparable JavaScript / HTML interface currently supported in Affinity. Even if there were, the code is primarily written for Photoshop-specific APIs. Even with a direct 1:1 mapping for every API call that gets information from Photoshop or requests an action, it would be a tremendous effort to migrate the code. A developer would likely need to rewrite much of the functional code from scratch.

The main panel interface is built from a limited subset of HTML / CSS, but mostly follows conventional web standards. So it could perhaps be adapted with some effort if another software program used HTML for the interface. However, any popup dialogs are likely to rely heavily on Adobe-specific APIs / elements and would likely take significant work to adapt. So any code reuse for the user interface would also likely be minimal and there would be significant effort on this front as well.

So even if a similar scripted plugin capability were added to Affinity, it would require significant effort from the developer. I believe there would be limited interest from most developers to support it, for a number of reasons. The number of potential customers using Affinity is much smaller than for Photoshop, and a large number of these users state that their primary interest in Affinity is to reduce their costs. So the potential revenue is probably not attractive given a smaller number of more price-sensitive users. That might be offset somewhat by a smaller number of competing plugins for Affinity, but the costs are quite significant. In addition to the direct cost of migrating and maintaining the code, there would likely be additional costs to generate Affinity-specific training videos.

 

Summary:

If you have a 3rd-party plugin which appears in Photoshop under the Filter or File / Automate menu, there is a good chance you can use it with Affinity (but no guarantees).

If you have something in the Actions in Photoshop, there’s a decent chance you can find an equivalent or similar macro for Affinity (or may be able to convince the creator to migrate it).

However, if you use a plugin which is launched via the Plugins menu or Window / Extensions, you’re out of luck. It simply will not run on Affinity and it is very unlikely a developer would create something comparable. If you’re just doing something very simple, you may be able to substitute macros instead.

There are significant technical, practical, and financial barriers to offering support for many plugins under Affinity. Hopefully this helps you understand what you might be able to take over to Affinity and why most 3rd-party plugins are only available for Photoshop.

Note: I am not an expert on everything involved here and these things may change over time. This information is correct to the best of my knowledge. Please comment below if you think I’ve made a mistake, missed something relevant, or if this information becomes outdated due to some future update.

This will wreck your color in Photoshop

If you’ve ever tried to change color space in Photoshop (such as when preparing an image for print), you may have received a notice from Photoshop to flatten the image. It’s also an option built right into the conversion dialog (Edit / Convert to Profile). This is a warning you should definitely pay attention to. In many cases, ignoring it can cause significant and unexpected changes to your images – your choice of color space even affects how opacity works! Thankfully there’s a simple way to change colorspace without damaging your image, but let’s first dive into how it works and why it matters.

Why colorspace affects results:

A change in the “primaries” or gamut is the most obvious difference between colorspaces. If you go from a large gamut (such as ProPhoto) to a small one (sRGB), then you may see some vibrant colors get desaturated. But there’s another less obvious reason: the numbers change but the math does not. Let’s dive into what that means.

In addition to the primaries/gamut, a color space has an EOTF (“electro-optical transfer function”), which is most often referred to as a “gamma” (but there are EOTFs which do not technically use gamma; the sRGB color space is an example – it is close to gamma 2.2 but not the same). You can think of an EOTF as specifying how quickly you get from the minimum value (black) to the maximum (white). A large gamma like 2.6 will dedicate much more of the numeric range to dark values than a small gamma like 1.0. For example, RGB 150,150,150 in gamma 2.6 is the same mid gray value as 64,64,64 in gamma 1.0. When you convert color profiles, you will see completely different numbers even when you keep the exact same color.

While Photoshop will convert pixels from one color space to another, it will not adapt the math. Imagine the same curve being applied to both 64 and 150 as inputs: they represent the same starting color, but will almost certainly produce different results. The same problem occurs with almost any adjustment layer, BlendIf, blend mode, and even opacity. Each of those involves mathematical computations which simply process numbers – without any real understanding of what those numbers mean in terms of real color.
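Here is a quick sketch of that arithmetic, using the gamma 2.6 vs gamma 1.0 example above (a pure-gamma EOTF is assumed for simplicity; the helper names are mine):

```javascript
// Worked version of the gamma arithmetic above, for a pure-gamma EOTF.
const toLinear = (v, gamma) => Math.pow(v / 255, gamma);
const fromLinear = (lin, gamma) => Math.round(255 * Math.pow(lin, 1 / gamma));

// Converting profiles changes the numbers but keeps the same light:
console.log(fromLinear(toLinear(64, 1.0), 2.6)); // 150

// But a 50% blend over white operates on the numbers, so the same
// operation on the "same" gray yields different light in each space:
const blend50 = (v) => (v + 255) / 2;
console.log(toLinear(blend50(64), 1.0).toFixed(2));  // "0.63" in gamma 1.0
console.log(toLinear(blend50(150), 2.6).toFixed(2)); // "0.55" in gamma 2.6
```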

There are a couple of times when you may not think about color space, but are going to run into the same problem:

  • Copying and pasting a layer from one image to another
    • If the layer is a pixel layer or smart object with no mask, BlendIf, or reduced opacity – you’re ok.
    • If both images are in the same colorspace – you’re ok.
    • But there are many scenarios where this is not true. A common example: if you copy a watermark or masked subject from one image to another, you may quickly find that the edges show a halo or poor blending, because any change in EOTF will change how the mask works.
  • Converting to or from 32-bit mode (HDR editing).
    • 32-bit mode is always gamma 1.0 or “linear” in Photoshop, while colorspace in 8 or 16-bit often uses gamma 1.8-2.6.
    • So if you go to Image / Mode and change bit depth, you may well be effectively changing colorspace and need to take similar precautions.
    • In fact, this will produce some of the most dramatic alterations, as the large change in gamma causes significantly different results for the same mask / opacity.

 

How to safely change colorspace:

As you’ve seen, it is ok to convert final pixels – just not anything involving more math (adjustment layers, opacity, etc). So there are a few good options (with a scripted sketch after the list):

  1. Layer / Flatten Image.
    • This will merge everything to a background layer which can be safely converted.
    • This is of course destructive, but is well suited for things like preparing prints (you should not resize a smart object, as the quality of the enlarged image will suffer).
    • Alternative: During conversion (Edit / Convert to Profile), check “flatten image to preserve appearance”. This may be a little more convenient by doing two things at once.
  2. Select all layers, right-click, and choose “convert to smart object”.
    • This will give you a single pixel layer, while preserving a fully non-destructive workflow.
    • You can go back anytime and safely edit the contents inside the smart object. Unless you are creating a derivative file (such as for print), this is the best option.
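For those who prefer to script this, here is a minimal JSX (ExtendScript) sketch of option 1 that leaves your layered original untouched. The destination profile is just an example – use whichever print profile you actually need:

```javascript
// JSX sketch: create a flattened, profile-converted copy for print,
// leaving the layered original document unmodified.
var src = app.activeDocument;
// duplicate(name, mergeLayersOnly): passing true merges all layers
// in the copy, which makes the profile conversion safe.
var printCopy = src.duplicate(src.name + " print", true);
printCopy.convertProfile("sRGB IEC61966-2.1",      // example destination profile
                         Intent.RELATIVECOLORIMETRIC,
                         true,   // black point compensation
                         true);  // dither
```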

Note that Photoshop will warn you in a couple of situations:

  • “Changing modes can affect the appearance of layers. Merge layers before change?” if you try to convert to 32-bit in a potentially unsafe way. Merging will flatten your image; converting to a smart object is better.
  • “Changing the document profile can affect the appearance of smart objects. Rasterize objects before changing profile?” if you have a smart object as one of several layers. This is not an appropriate fix: it will likely cause unwanted changes in the image, and it is a destructive workflow.

However, Photoshop will not warn you in some other situations where you will very likely have problems:

  • When you have multiple layers, but no smart objects. This is exactly the scenario shown in my video. It’s very common and easy to damage your image.
  • A single layer smart object with a smart filter.

 

You may be aware of the “blend RGB colors using gamma 1.0” option in Edit / Color Settings. This is probably not a good solution for a couple of reasons:

  1. It is set for Photoshop, not the image.
    1. This will cause most of your existing images to change.
    2. It will make it hard for you to work with layered images which come from other people.
    3. If you share the images with another computer, the results are very likely to be incorrect (as this setting needs to be set the same).
    4. If this setting were saved with the image, it would be far more useful / safe.
  2. It is an incomplete solution. It helps with opacity / mask concerns, but not with adjustments such as levels, curves, or gradients.

The only place I see this option to blend with gamma 1.0 being useful is if you work in a closed system where every computer touching the layered content uses Photoshop with the same setting, and:

  • You need more colorimetrically correct blending of colors for graphics/text (i.e. red plus green becomes yellow instead of brown – see the quick check after this list), or…
  • Your goal is to help improve compositing of masked layers while working with a mix of different EOTFs.
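A quick numeric check of that first point (gamma 2.2 is assumed for the encoded document):

```javascript
// Blend pure red (255,0,0) and pure green (0,255,0) at 50% opacity.
const toLin = (v) => Math.pow(v / 255, 2.2);

// Blending the encoded numbers (a gamma 2.2 document) gives (128,128,0):
console.log(toLin(128).toFixed(2)); // "0.22" linear per channel -> dull olive/brown

// Blending linear light (gamma 1.0 blending) gives 0.5 per channel:
console.log(((toLin(255) + toLin(0)) / 2).toFixed(2)); // "0.50" -> much brighter yellow
```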

Those generally don’t apply to photography, and it isn’t worth the problems noted above given that the setting is not stored per image.
