Export for any ratio with Generative Fill

Web Sharp Pro v5.8 is now available as a free update for all customers and includes the ability to use Generative Fill AI to export at any aspect ratio without cropping out any of your original image. So if your image needs to be a little taller or wider to meet the needs of Instagram, Pinterest, or just to match other images on your site, you no longer need to crop off the edges of your image, add white space, or use some other filler to make it fit. Web Sharp Pro will now use Photoshop’s artificial intelligence to fill the gaps, so you can preserve your original content and add just a little more sky to the top or sandy beach in the foreground to achieve the required size.

Tips for using Generative Fill:

  • Go to Settings / Quick Export and set the Crop / Fill dropdown to “Keep full image (fill via Generative Fill)”.
  • You can interactively refine any crop and the area to be filled by also checking the “interactive crop when filling” option. I would generally leave this on, unless you’re doing a batch export. It will default to keeping the full image and using a symmetric fill on the top/bottom or sides as needed.
  • If you are using any of the cropping modes, you may also leave edge gaps anywhere you’d like to fill. When you do, you will be prompted for the type of fill to use, and Generative Fill will be one of the options.
  • Generative Fill tends to work best when the expanded area would contain unique content not seen elsewhere in the image. Content-Aware Fill is an alternative option and tends to work best in areas of repeating patterns/texture or when exporting at high resolution (as the current beta version of Generative Fill only creates content which is 1024 pixels on the long side and scales from that for larger sizes).
  • Generative Fill is very new (still in beta) and I would expect that it (and perhaps the Web Sharp Pro interface in turn) may adapt over time as the tool likely continues to improve.
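
To put the beta’s 1024-pixel limit in perspective, here is a rough sketch (the `fill_upscale_factor` helper is hypothetical, purely for illustration) of how much generated content would need to be upscaled to cover a given fill region:

```python
# Rough sketch: how much the current Generative Fill beta must upscale its
# output to cover a filled region, given that it generates content at only
# 1024 pixels on the long side (per the beta limitation noted above).

def fill_upscale_factor(fill_width_px: int, fill_height_px: int) -> float:
    """Approximate upscale factor applied to generated content for this region."""
    long_side = max(fill_width_px, fill_height_px)
    return max(long_side / 1024, 1.0)

# Example: expanding a 4000px-wide image by adding 1000px of sky across the top
print(fill_upscale_factor(4000, 1000))  # ~3.9x upscale, so fine detail may soften
```

This is why Content-Aware Fill can be the better choice for high resolution exports: it works at the native pixel dimensions rather than scaling up from 1024 pixels.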

 

Web Sharp Pro v5.8 also features:

  • The ability to slice any export into columns via quick exports (no need for overlay templates).
    • Because phones are oriented vertically, it can be very powerful to slice horizontal / landscape images into several columns for the viewer to swipe through, rather than show a very small single image. This is especially handy for Instagram, Threads, and other social networks on phones.
    • Go to Settings / Quick Export to choose columns. Quick tip: you can use math in the size fields. So if you want to export 5 columns of 1080 wide by 1350 tall for Instagram, you can enter the width as 5 * 1080 and Web Sharp Pro will automatically calculate 5400 as the correct width in pixels for the overall export.
  • Simplified interactive cropping (as of v5.8.1). You no longer need to hold <shift> to maintain the aspect ratio when cropping; the aspect ratio is now fixed to avoid any unwanted changes.
  • Support for content-aware fill in Quick Exports as well. This may be preferable to Generative Fill for high resolution exports or areas of texture or minimal detail.
  • Improvements to templates for the Threads social network and more. See the release notes for full details.
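
The width math from the columns tip above can be sketched as follows (a trivial illustration; the `total_export_width` helper is just for demonstration):

```python
# The math behind entering "5 * 1080" in the width field: the overall export
# width is simply the number of columns times the per-column width.

def total_export_width(columns: int, column_width_px: int) -> int:
    """Overall export width for a sliced multi-column export."""
    return columns * column_width_px

# Five Instagram columns of 1080 x 1350 each:
print(total_export_width(5, 1080))  # 5400, matching what "5 * 1080" yields
```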

 

Apple Pro Display XDR for HDR photography

If you’ve been following me for a while, you know I’m a fan of next-generation HDR displays. These displays are absolutely stunning – for the first time ever, we can truly see all the detail and color in our RAW files. I’ve just picked up a stunning external HDR monitor and wanted to write a quick review of my experience so far and how it compares to my Eizo and other displays I’ve used in the past.

For the past couple years, I’ve been using the “Retina XDR” display built into the M1 and M2 MacBook Pro. These are stunning screens that far surpass anything I’ve seen in any other laptop, but they are limited to 14″ and 16″ sizes. Most of the time in my office I’ve been using a 27″ Eizo (CG2730). It’s an incredible display, but does not support HDR at all. So I’ve been seeking a large external HDR display. After extensive searching, it’s clear that Apple’s “Pro Display XDR” is in a class of its own. These are also very expensive displays when purchased new, but you can save a substantial amount if you purchase one used. I picked up the Apple Pro Display XDR, Apple stand, and the Logitech 4k magnetic webcam designed for this monitor for $3000 on Craigslist. That same bundle would have cost roughly $6700 with tax new ($5k for the display, $1k for the stand, $200 for the webcam). So I got everything in mint condition (including original packaging), with more than a year of remaining warranty, for 55% off. It is still a lot of money for most budgets, but it is an excellent value.

Before we get into the review, a quick primer on HDR displays. Current HDR technology tends to fall into one of two camps: ideal in a dark room (modern OLED) and ideal for everything else (mini-LED). There are already newer, brighter OLED displays in phones, but it will likely take a while before we have external monitors which are optimized for both typical ambient conditions and extremely dark viewing. So you may find your TV looks amazing for movies at night, while being very hard to see in the daytime. Or you may hear a professional who makes movies in a controlled environment talk about how important an OLED or reference monitor is for their work. The key thing to understand is that we don’t yet have affordable technologies which are perfect for all ambient conditions, so the best technology for watching movies at night and best technology for editing photos in the day are often different. I own five different HDR displays and have tested them and many others in a variety of conditions. At this point in time, I believe most photographers will be much better served with greater peak brightness (1000+ nits) than perfect blacks in an HDR monitor for photography. For a deeper discussion on the various technologies, see here.

 

General impressions of the Pro Display XDR

The Pro Display XDR has some truly impressive specs: 32″ size, 1600 nits peak brightness for stunning HDR display, deep integration with MacOS, no fan noise, and gorgeous aesthetics. The image quality is absolutely stunning. If you have not seen a proper HDR image on this monitor or a similar 1600 nits screen (such as the Apple Silicon MacBook Pros), it’s very hard to appreciate what a substantial improvement in image quality it offers. It’s the most significant improvement in photography displays I’ve seen in decades. The display is truly stunning for both HDR and print-centric workflows. I’ll dive into that more in the comparisons below.

The Pro Display XDR was originally launched nearly 4 years ago and it would be natural to ask if it will be outdated soon. I don’t think that will be the case. In the next 5 years or so, we’ll probably start to see monitors offering better OLED (QD, MLA, tandem stack) or micro-LED. All of these technologies are emissive displays which offer true black (to eliminate blooming and offer much greater dynamic range in dark viewing conditions) while also offering very high peak brightness (in order to retain HDR benefit in bright viewing conditions). While I’m looking forward to those future technologies in a monitor, I suspect mini-LED displays like this will probably be the best HDR option for most photographers for several years to come. Unless you use a current generation OLED in an extremely dark room, you’re better off with higher peak brightness than true black pixels.

The treatment of resolution is very interesting. My friend Mark pointed out that whether you’re at the default 3k or the full 6k, the actual photo in Photoshop is 100% identical. If you set the zoom level to 100% at each resolution, you’ll find the image fills the same portion of the screen. If you take screenshots at each setting, you’ll find that even the lower resolution screenshot is 6016 x 3384. You can put the screenshots in difference mode (aligning as needed) and you’ll find they are completely identical. So you can use the larger interface and still get the maximum image resolution. It’s very slick!

The $1k stand for this display is the target of a lot of understandable frustration and sarcasm, but now that I’ve used it, I’ve been won over that it is also truly unique. The attachment mechanism is dead simple: just touch the monitor to the stand and a magnet pulls it into position, then a mechanical lock automatically secures it. You have to bring the stand to full height and flip a switch to release it, so it’s very secure and yet very easy to detach when needed. Height and tilt adjustments are extremely smooth and precise – you could easily reposition the screen with a single finger. You can rotate the monitor to a vertical orientation if desired (and MacOS will automatically adapt). The stand is very solid, substantial, and gorgeous. You can easily buy a cheaper stand if you prefer – but if these design features appeal to you, I don’t think you’ll find another stand like it.

What could be better? It would be very nice if it had a downstream Thunderbolt port, which would make it easier to connect everything with a single cable. Many users would probably like to see a high-quality web cam integrated into it as well, though the Logitech webcam works very well to address that need.

I haven’t tried very hard with the right PC, but I have yet to have success using this as an external HDR display for Windows. According to Apple, you can connect the Pro Display XDR to a Windows or Linux PC equipped with a GPU that supports DisplayPort over Thunderbolt or DisplayPort Alt Mode over USB-C. But I haven’t had luck controlling HDR or brightness, which may simply be addressed with options I haven’t tried yet. I would recommend Windows users consider the ASUS ProArt Display PA32UCG instead.

 

Pro Display XDR vs laptop Retina XDR:

The naming here can be a bit confusing as both are labeled as “XDR”, which is simply Apple’s hardware branding for “Extreme Dynamic Range”. This branding indicates that you’re getting the very highest level of HDR support available.

I’ve compared the internal laptop XDR display with this external Pro Display XDR and would say they are generally very similar. Both offer excellent 1600 nits peak brightness and mini-LED with local dimming for excellent HDR display. Both offer deep MacOS integration and detailed control over the monitor’s characteristics. You can set the HDR brightness anywhere between 50 and 1600 nits and SDR brightness anywhere between 50 and 500 nits. That’s very handy if you wish to simulate less capable HDR displays or easily set your monitor to fixed levels of brightness to manage print workflows. You can also customize the color gamut, white point, and EOTF to a degree.

There are a few differences of course, including:

  • Size, obviously. The Pro Display is 32″ vs 14 or 16″ for the laptop (I have the 14″ laptop for lightweight travel).
  • Resolution. The Pro Display is 6k (6016 x 3384 max, with 3008 x 1692 as the default) vs 3k for the laptop (3024 x 1964 max, with 1512 x 982 as the default).
  • Dimming zones. The Pro Display has 576 dimming zones vs 2500 for the laptop. A greater number of zones helps reduce “blooming”. I’m not entirely sure why a higher resolution display has fewer dimming zones, but assume pixel pitch/density has something to do with it (the larger display is 218 pixels per inch vs 250 for the laptop). Ultimately, I do not see performance differences as a result, which I’ll discuss below.
  • The Pro Display XDR uniquely has an option to optimize backlight performance for color detail vs minimizing blooming/halos, but I stick with the default and find them similar.
  • The Pro Display XDR has a higher rated contrast ratio (20MM:1 vs 8MM:1), but I find them very similar.
  • The keyboard controls for brightness only seem to control the internal display. It would be nice to be able to adapt the external monitor’s brightness with the keyboard, but you can easily do this via the control center at top-right if you go to System Settings / Control Center and set display to “always show in menu bar”.
  • You cannot turn off the Pro Display without unplugging it, locking the screen, putting the computer to sleep or turning it off. You cannot simply slide the dimmer to the minimum to make the screen truly black. Using cmd-ctrl-Q to lock the screen is a good option. Not a big deal, but you may need to adapt a bit if you don’t like leaving the display on.
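
As a sanity check on the pixel density figures above, you can approximate PPI from resolution and the nominal diagonal. The results are approximate, since marketing diagonals differ slightly from true active-area diagonals (which is why these come out a few PPI off the figures quoted above):

```python
# Approximate pixels-per-inch from resolution and the nominal panel diagonal.
# Small discrepancies vs quoted specs are expected: the marketing diagonal
# (e.g. "32 inch") is a rounded figure, not the exact active-area diagonal.
import math

def ppi(width_px: int, height_px: int, diagonal_inches: float) -> float:
    """Pixel density: diagonal pixel count divided by diagonal size in inches."""
    return math.hypot(width_px, height_px) / diagonal_inches

print(round(ppi(6016, 3384, 32.0)))   # ~216 for the Pro Display XDR
print(round(ppi(3024, 1964, 14.2)))   # ~254 for the 14" MacBook Pro panel
```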

Ultimately both displays perform very similarly outside the size (and resolution, though both look great at the default resolutions which are well below their limits).

Like any transmissive display with local dimming, both will show “blooming” when there are bright pixels surrounded by dark pixels. This is because if any pixel within a given lighting zone is not black, then that zone’s backlight is turned on. That means you can only see a truly black pixel when every pixel in that zone is black (or technically more than 20.5 stops below SDR white in my Photoshop testing). With brighter pixels in a zone, the backlight gets brighter and the minimum “black” increases, which is what creates this blooming effect. This is not something you’ll notice under most conditions; if you work in a room which is completely black or very dark, that’s when you’d likely notice it. That blooming is also the reason why a professional color grading a movie (which takes place in a very dark environment) would opt for a $33,000 reference monitor or OLED instead of a mini-LED display like this. But if you’re like most people who work with some lights on or window light, you’re unlikely to notice it. More importantly, you have much greater peak brightness to overcome ambient light, which is a major advantage over OLED for most users in general.
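
To put that 20.5-stop figure in linear terms: each stop halves luminance, so the threshold works out to roughly a 1.5-million-to-one ratio below SDR white. The sketch below assumes a 100-nit SDR reference white, which is a common but not universal figure:

```python
# Converting "20.5 stops below SDR white" into a linear luminance ratio.
# Each stop halves luminance, so the threshold is SDR white / 2**20.5.
# Assumes a 100-nit SDR reference white (a typical, but not universal, value).

SDR_WHITE_NITS = 100.0
STOPS_BELOW_WHITE = 20.5

ratio = 2 ** STOPS_BELOW_WHITE
threshold_nits = SDR_WHITE_NITS / ratio

print(f"{ratio:,.0f}:1")             # ~1,482,910:1
print(f"{threshold_nits:.2e} nits")  # ~6.74e-05 nits: effectively black
```

In other words, a pixel has to be vanishingly dim before the backlight zone can switch off entirely, which is why any non-black pixel in a zone produces some blooming.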

If you’d like to get a good sense of how much blooming there is with your display, you can easily test this in Photoshop. Create a new image and fill it with complete black. Then press “F” twice to toggle the screen mode to full screen with no menu bar. Then just move the cursor around. If the room is very dark, you’ll clearly see blooming around the cursor on a mini-LED (whereas OLED will show none). But if you turn on the lights or light is coming in through the windows, you probably will not be able to see any blooming, due to competing reflections on the screen and your eyes being less sensitive to very dark content in a brighter environment. When you’re done, press “F” again in Photoshop to get back to the default display.

 

Pro Display XDR vs Eizo CG2730:

I would say my Eizo is a pretty good proxy for any great SDR monitor you might currently use for printing.

Both of these are excellent monitors, but they have significant differences (the first spec in each line is the Pro Display XDR):

  • 1600 nits HDR vs 350 nits. The HDR capability here offers massive benefits for displaying the image. When it comes to SDR content and printing, both are fairly similar.
  • Size: 32″ Pro Display vs 27″ Eizo. I don’t feel the extra size is critical for enjoying the images or making prints, but it has great workflow benefits. I can see much more of the image while zoomed in, show more tools, etc.
  • Resolution: 6k Pro Display (6016 x 3384) vs 2k Eizo (2560 x 1440). The detail is clearly better at 6k. Some people would absolutely love the extra detail; to me it’s nice but not a huge deal (viewers with less than perfect vision or correction for viewing at computer distances probably won’t see a difference).
  • Gamut: P3 vs Adobe RGB. On the whole, I don’t think there’s a clear winner here and they’re similar for most content. More details below.

My primary concern in replacing the Eizo was to make sure the Pro Display would work as well for printing. I wasn’t too concerned with gamut, but had a lot of questions about my ability to see shadow detail. Thankfully, the Pro Display holds up extremely well and I would say offers similar levels of shadow detail. The Pro Display is able to achieve deeper blacks and a bit more shadow detail, but this is likely offset by the reflective glossy display (depending on room conditions). The Eizo has more of an anti-reflective matte finish and it definitely helps minimize reflections, even in a room where I don’t have any strong lights behind me. If you’re buying this XDR display, I would consider upgrading to the anti-reflective nano-texture glass if your budget allows.

When it comes to gamut, the Pro Display offers more vibrant reds (and a bit more orange/yellow/magenta) while the Eizo offers more green and cyan. So a sunset or flower photo may look more vibrant on the XDR, but both will look great. And green/cyan water and foliage can definitely look more vibrant on the Eizo. Overall, the Eizo is a bit better aligned with the gamut of vibrant print media like Lumachrome, but I’d feel equally comfortable proofing a print on either display. Unless your image content focuses primarily on a narrow range of subjects which fall into one color camp or the other, I think both deliver great results for gamut.

Both monitors have very good uniformity visually. When tested with the Calibrite Display Plus HL, both showed some minor deficiencies from ideal – with the Pro Display being flagged in 2 corners and the Eizo in all 4. I have no concerns with either.

 

Conclusion

The bottom line for me is that the Pro Display XDR not only offers massive benefits for HDR display, but is also outstanding for making prints and any other photography work. The price will be outside the budget of many photographers, but it’s well worth it and you can probably find an excellent used one for substantial savings. I would consider the nano-textured option if your budget allows. If I couldn’t find the right used version of this monitor, I likely would have purchased it new. It’s an excellent product.

If you’re one of the few people who primarily use your monitor in a dark room, you may be better off with a bright OLED display (you will likely save some money and may achieve comparable or possibly better dynamic range – but only if the ambient light is very low). Note that I also have more information on key aspects of HDR displays and other high quality and budget HDR options on my HDR monitor recommendations page.

See the latest pricing for the Pro Display XDR at Amazon or B&H.

If something like this is out of your budget, I highly recommend the M1 or M2 MacBook Pro. You can still get an M1 MacBook with the 16″ version of this incredible display and 1TB SSD for $1900 new at B&H. And I’ve seen several 14″ and sometimes 16″ M1 or M2 laptops on eBay and Craigslist for $1000-1500. These Apple laptops can be very budget-friendly, offer an excellent HDR experience, and are incredibly powerful computers even at the lowest specifications.

[Disclosure: This post contains affiliate links. I rarely endorse other products and only do so when I think you would thoroughly enjoy them. By purchasing through my links in this post, you are helping to support the creation of my tutorials at no cost to you.]

 

How to set up the Pro Display XDR

Setup for most people will be a simple matter of connecting the display to a Thunderbolt port on your computer or dock. This monitor includes downstream USB ports, but no pass-through Thunderbolt. So you’ll either need multiple downstream ports on your dock or will need multiple Thunderbolt cables to connect your computer if you use other Thunderbolt accessories like a RAID drive.

Here is how I recommend you set up System Settings / Display:

  • If mirroring, set the Pro Display as “main display” and make sure “optimize for” also uses that monitor. Even if you mirror to a Retina XDR display on your laptop, the secondary HDR screen will be clipped at a maximum 2.5 stops of HDR headroom (even if the display is set in a way that would show 4 or 5 stops if it were the only or primary display).
  • Select the size that makes the interface feel comfortable for you. The image will always use 6k resolution; you’re really just scaling the surrounding interface with this choice (at least in Photoshop – I have not extensively tested this across all photography apps).
  • Turn off “true tone”. This will cause significant color shifts (warm tones) which make the color less accurate and invalidate any profiling you may do.
  • Leave the preset at the default “Pro Display XDR (P3 – 1600 nits)” unless you need to soft proof for less capable HDR displays or use a fixed reference brightness. If you wish to customize, click the dropdown and then “customize presets” at the bottom. You can then select the specific SDR and HDR brightness. See Apple support here and here for more info.
  • Leave the refresh rate at the default 60Hz.

I also recommend going to System Settings / Control Center and setting Display to “always show in Menu Bar”. This gives you easy access at the top-right of your screen to change brightness (or custom presets if you use them).

Be sure to review Apple’s white paper on the XDR to understand the various preset modes.

You won’t find the maximum 6k resolution for the display listed in System Settings / Display by default (the “more space” icon only goes to 3008 x 1692). To access it, you need to click on “advanced” and turn on “show resolutions as a list”. Then you’ll see 6k as an option in the list (as 6016 x 3384). I expect very few people will want it, as it makes the interface tiny and offers no image quality benefit (you always get 6k quality for the image in PS).

Exporting AVIF and HDR with Web Sharp Pro v5.6

Support for both AVIF (a smaller file format) and HDR (“high dynamic range” via new monitor technology) is rapidly increasing. Adobe Camera RAW 15.4 just added a great new capability to export AVIF images at any time, which enables new features in Web Sharp Pro.
Web Sharp Pro (WSP) v5.6 is a free update for all customers and includes two very significant updates:
  • The ability to export AVIF images via ACR (on both Mac and Windows). This offers the ability to export images which are up to 85% smaller than JPG and at the same time higher quality (fewer artifacts and higher bit depth).
  • A new option to convert and enhance standard (SDR) images to HDR. This can make any image look significantly better and makes it easy to use HDR with your existing edits, AI tools like MidJourney, stock images, etc.

Note that unless you see YouTube’s red “HDR” indicator by the quality setting at bottom right, you are viewing the content tone mapped to SDR (ie, it simulates the effect but true HDR will look much better).

Export as AVIF (via ACR):

The AVIF export is now possible by leveraging a new capability in ACR v15.4 which allows you to save images at any time. AVIF offers numerous benefits over JPG and will be universally supported by all major web browsers very soon (MS Edge is the only missing browser, and AVIF support has been in test there for a couple months now).
The workflow for the “AVIF (via ACR)” method involves manually clicking to save the image from ACR. When WSP opens the ACR interface, you should do the following:
  • Click <ctrl/cmd>-S or the save icon (this is the icon at top right with down arrow in a box).
  • Set output folder. I recommend setting it once and leaving it, this keeps things simple.
  • Set the first part of the file name to “Document Name” (the first one). This will preserve the name created by WSP.
  • Set file format to “AVIF”, leave metadata on “all” (since WSP already manages metadata for you), and quality between 8 and 12 (8 is fine for most use, 10 is ideal if you expect the viewer may zoom in beyond 100%).
  • Set “enable HDR display” appropriately (checked if and only if exporting HDR).
  • Set the color “space” to sRGB for SDR exports or to HDR P3 for HDR exports (any HDR option is safe). These are remembered separately, so once you’ve set them, you can ignore this and just make sure “enable HDR display” is checked appropriately.
  • Do not use image sizing or output sharpening options, as WSP has already taken care of this for you.
  • Click “save” or <return>.
  • Exit ACR by clicking “OK” or <return>.
The key steps above are the ones you’ll typically need after you’ve set up ACR the first time: click the save icon, update HDR options if necessary, and save. If you aren’t changing the output folder or switching between HDR and SDR, you should be able to simply press <ctrl/cmd>-S and then <return> twice each time you see the ACR interface. Note that WSP will show a guidance message during ACR export covering the key details (you may need to move ACR to see it underneath in Photoshop).
WSP does all the standard file prep (resizing, cropping, borders, watermarks, etc), manages several scenarios to simplify the export process with ACR, and then invokes ACR for the final save. ACR is not really intended for this kind of use and Photoshop does not yet natively support saving AVIF, so there are a couple minor manual steps involved. Still, it’s amazing to see ACR continuing to add valuable features like this to make it easier to work with AVIF and HDR images. I’ll update WSP on MacOS to a fully automated solution if/when PS natively supports AVIF (this is already possible for Windows users). If you’d like to see native support for AVIF in Photoshop, please be sure to vote and comment in support of AVIF.
If you’re using WSP on Windows, you can now choose to export as AVIF in two different ways:
  1. Export via a free 3rd-party plugin for Windows. This offers a fully automated export and supports both SDR and HDR images. It may also offer slightly better highlight color and detail in SDR images. See this tutorial for details on this Windows-only option: https://gregbenzphotography.com/photography-tips/exporting-avif-files-from-photoshop/. I recommend this method for exporting SDR images.
  2. Export via ACR. This offers enhanced support for exporting HDR images which will offer the best possible image quality for HDR. ACR v15.4 only offers AVIF exports for HDR images. I recommend the ACR method for exporting HDR images.
MS Edge is the only browser which does not yet have AVIF support. You can enable it via MS Edge Canary with a development flag as shown here. The Canary build is at v115 and the latest production release is v113, suggesting support may get into production as soon as late July.

Enhancing SDR images to HDR:

WSP v5.6 also adds a new “Enhance SDR to HDR” setting to allow you to easily enhance any standard image. This feature will convert an SDR image to HDR and significantly enhance it (by automatically expanding the dynamic range in an intelligent way). This may be used for enhancing your existing edits, enhancing images created by AI tools like MidJourney, converting stock photos to HDR, etc. For those of you focused on editing for print, this also offers a simple way to enhance that same image for online display.
This will be increasingly useful as more and more software catches up to the great HDR screens already in circulation. Most Apple computers since 2018 include HDR hardware and can properly display such images in Chrome, Opera, and Brave (ie, 65% of web browsers). Android 14 beta with Chrome Canary now supports it, suggesting those with a Samsung Galaxy, Pixel 7 Pro, and other HDR-capable Android phones should be able to view HDR images on their phone by the end of this year. If/when Apple WebKit adds support to display HDR images, a massive number of iPhones and iPads will be able to display these images too. If you’re buying a new phone, tablet, or computer, I recommend considering one which supports HDR to ensure you’re ready.
Recently, zonerama.com added support to let you share your HDR images on the web, including an elegant mechanism to automatically show the SDR version of your image for any viewer which does not support HDR.

Photoshop’s amazing new AI “Generative Fill”

Adobe just added some of their “Firefly” generative artificial intelligence (AI) directly into Photoshop as a new beta feature called “generative fill”. It’s a very exciting development, with the potential to offer something MidJourney, Dall-E, and Stable Diffusion cannot… deep integration into Photoshop. What benefits might that offer?

  • Native support in Photoshop. There are already great plugins to use tools like Stable Diffusion, but Adobe can offer a richer interface. You can create a selection and work directly with your current image. Ultimately, this offers the potential for greater user control and a richer interface, as well as the convenience of doing all your work right inside Photoshop.
  • Generate objects. You provide the source image along with the location and a description of what you’d like to add, and the AI does the work of combining them.
  • Remove objects. It’s like “content-aware fill” on steroids. I find that it can offer better results than the new remove tool in many cases (though the brushing workflow is very nice and they both have their uses).
  • Revise objects. Want to change from a yellow shirt to a red one? Just select the shirt with the lasso tool and tell Photoshop what you want.
  • Expand images. You can push things pretty far and it often provides much better results than content-aware fill. Generative fill seems to work better in detailed areas and content aware seems to excel with gradients or areas of low detail such as the sky, so using both together may produce the best results.
  • Create new backgrounds. You can generate content from nothing, which may be ideal if you need a backdrop for a subject.
  • Fewer legal / commercial / ethical concerns. Firefly has been trained on Adobe stock, so there is much less copyright concern with the source data used to train the AI. I’m no expert on the contractual terms and legal matters here, but certainly this source has significant benefits over scraping content from Pinterest, Flickr, etc which does not include model or property releases. See Adobe’s FAQ for more details.

There are several ways you can invoke generative fill:

  • The Lumenzia Basics panel’s “Fill” button now offers generative fill (when the feature is enabled in PS). This not only gives one-button access to use it, but includes other enhancements:
    • It also includes a feature to “auto-mask subject”. This allows you to easily resize, rotate, or move content you’ve added without edge concerns. When you use this option to create a new fill layer, the last mask will automatically update to isolate the subject anytime you update the layer. This prevents issues with surrounding water, clouds, etc failing to match the new surroundings after you transform your subject.
    • You can easily expand your image. Just use the crop tool (<C>) to expand the edges of the image and then click “Fill”. When using generative, just leave the prompt blank.
  • Make a selection with a tool such as the lasso, quick selection, or subject select and then go to Edit / Generative Fill.
  • Via the “Contextual Task Bar” (Window / Contextual Task Bar). Whenever you create a selection, you’ll see a “generative fill” button in this floating task bar. Tip: turn on “sample all layers” for quick selection / subject select, as they won’t work very well once you start creating multiple layers.
  • Via voice commands through MacOS or Windows when using any of the above methods:
    • MacOS: Setup via System Preferences / Keyboard / Dictation. Enable dictation and set the “shortcut” you prefer (“Press Control Key Twice” works great with external keyboards). Use your shortcut, speak, and click <return> or your shortcut key.
    • Windows: Start by pressing the Windows logo key + H, or the microphone key if you have one.
  • Revise an existing generative fill layer by selecting the layer and opening the properties panel. You can click “generate” to create new options or change the text prompt to refine your concept. You can also select from other variations.

Note that the generative fill layer is created as a Smart Object with one layer for each version you see in the properties (you can right-click the layer and choose “convert to layers” to actually see this). This has a couple of implications:

  • Each of these layers does take a bit of space, so clicking the “x” on unused versions will help reduce your file size (you may also rasterize the layer to save further space if you are done revising it).
  • You can non-destructively apply filters to the layer.

Capability like this naturally raises ethical questions around truth in imagery and it should also be noted that Generative Fill is designed to work with Content Credentials. This is an initiative involving companies like Adobe and the New York Times to create standards and a trail of evidence to help differentiate between original and altered content.

 

How good is it, and where do we go from here?

Is this a perfect AI? No, of course not – but that isn’t the goal at this stage. Adobe is making that very clear by releasing this as a feature only available in the beta version of Photoshop. This is what software developers call an MVP (minimum viable product). It’s a chance to get user input and more experience to help build the real product. You should expect that (a) it has lots of limitations now and (b) it will get much better in the future. At this time, this is a tool best used for fun and experimentation at social media resolutions. Commercial usage is prohibited during the beta phase. But it’s very exciting to get a glimpse of where things are likely headed. All the use cases above are interesting to me and would be immensely beneficial with sufficient quality.

Even if you see no relevance to this kind of AI for your work in the near future, that’s unlikely to remain the case years from now. AI tools like this are going to be constantly evolving. Most people hadn’t heard of ChatGPT until it reached version 4, and this isn’t even version 1 of “generative fill”, “Firefly”, or whatever the product will be called over time. It’s an extremely exciting development with enormous potential to alleviate tedious work and open up new creative avenues for exploration.

Personally, I’m most excited about the potential for better methods of removing distractions from my images. Cloning is tedious work. I’ll probably expand some image edges as needed for certain formats and cropping factors. I’d be happy to make some tweaks to alter some colors. However, I don’t see myself adding subjects to images because I focus on creating images which share the experience of a place. The video above is just meant to give some sense of what’s possible. I’m not going to be adding fake animals to my portfolio images.

Everyone’s needs are different. This could be a great aid for someone who doesn’t have model releases for marketing work to simply swap real people with invented ones. Some people want to create fantasy images. There are so many potential uses, and I think ultimately the evolution will take a winding path as developers find out what people really want (and are willing to pay for). That said, I think there are some fairly clear avenues of continued improvement for tools like this.

 

Adobe’s standalone / website version (Firefly) already has several additional features that would be very useful in Photoshop, including:

  • Category options for style, color and tone, lighting, etc. Many of these are less necessary in this context when you’re filling a portion of the image (vs generating something from nothing on the website), but I do think a somewhat guided experience may provide more clarity in some cases. For example, a blank prompt currently may remove a selected object or not – what’s the right approach? There is much to be learned about what interface works best, but I suspect a simple open-ended text input may feel a bit daunting for those who aren’t experts in “prompt engineering”.
  • “Text effects” to create gorgeous visual fonts. Fonts have many needs which are unique from image content and options here will certainly be appreciated by users such as graphic designers.

Beyond that, there are several potential ways to expand the capability:

  • Higher resolution. The current results are limited to social media sizes; this isn’t something you’re going to print right now. Anytime you generate content, it is created at a maximum of 1024×1024 (though it will be upscaled from there automatically to fill the target space). This isn’t surprising given Adobe is providing this for free at what must be significant costs to run on their servers, but obviously there will be a lot of demand for higher resolution output in the future.
  • Improved image quality. There are artifacts, matching to the ambient lighting is hit or miss, the results may look more like art than a photograph, etc. This will obviously improve over time and I’m excited to see how it evolves. Whether training the AI from Adobe stock is a limiting factor in the long run remains to be seen – that catalog reportedly includes hundreds of millions of images (vs billions used for Stable Diffusion). I suspect that as AI models continue to improve to work with less data, the quality of the training images is going to be more important than the quantity. This will undoubtedly improve significantly in time.
  • Improved user interface. The current design is very basic, as if to drive home the point that it’s a beta. You can’t just click <enter> after typing your text, you can’t double-click the layer to access properties for further editing, no option in the toolbar for the lasso tool, clicking OK before generate leaves an empty layer, only a few iterations offered at a time, no way to specify a seed to revise an existing version, the previews are small, etc.
  • Negative prompts. You can’t currently type “remove the sign”, though selecting an object and typing nothing often will help remove it (though other times you just get another random object).
  • Better support to revise existing content. Unlike the demo I showed with the Stable Diffusion plugin (where I turned my photograph into a charcoal sketch), there isn’t quite the same mechanism for style transfer with generative fill. I can select someone’s shirt and type “blue shirt” to change its color. But if I select the whole image and type “charcoal drawing”, the result will bear no resemblance to the original photo. This kind of capability would be nice for altering the entire image (day to night conversions, changing the weather, time of day, style transfers, etc). And the quality of the result isn’t the same. If I try to select my mouth and type “frown” or “closed lip smile”, I don’t get that result.
  • On-device processing. The beta release of generative fill runs in the cloud, which means you have to be connected to the internet. Processing on your own computer would allow offline use and probably faster processing time.
  • AI assisted compositing. Rather than using a text prompt to create new content, imagine that you just provide a background and a subject – then Photoshop cuts out the subject with no visible edge, matches color and tone, and creates shadows or reflections to complete the composite for you.
  • More flexible input. Support for languages other than English is key. It also needs to be more tolerant of typos (“brooom” should be recognized as an attempt to type “broom”). It’d be nice if you could use the arrow keys to cycle through the versions you create. And while you can already use your voice, imagine a richer interface where you give an initial idea (“add some pine trees”) and then continue to refine it with feedback (“make the middle one taller, show light rays coming through the trees, and warm up the shadows”).
  • Support for 32-bit HDR. Photoshop’s tools for cleaning up 32-bit images are limited to the clone stamp. There is no healing, spot healing, patch, or remove tool support. It would be very helpful to be able to remove things like lens flare in HDR images.
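To put the 1024-pixel limit mentioned in the list above in perspective, here is a minimal Python sketch of the upscale factor applied when a generated patch is stretched to fill a larger gap. The function name is illustrative, and the exact scaling behavior is an assumption based on the stated limit:

```python
def generative_fill_scale(target_w: int, target_h: int, native: int = 1024) -> float:
    """Approximate factor by which a generated patch (max 1024 px on the
    long side) must be upscaled to fill a target region of the given size."""
    return max(target_w, target_h) / native

# Filling a 4096 x 2048 px gap means roughly 4x upscaling of the AI content
print(generative_fill_scale(4096, 2048))  # → 4.0
```

In other words, the larger the area you fill, the softer the generated content will look relative to your camera’s native detail.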

There are an unlimited number of potential use cases here and it will be very exciting to see where the technology goes over time. What do you think? What capabilities and interface would you like to see for this sort of generative fill in Photoshop? I’d love to hear your thoughts in the comments below.

Troubleshooting

I’ve had various emails and comments I want to address if you are unable to use Generative fill.

If you do not see an option for Generative Fill, please check the following:

  • Make sure you have installed the PS beta. Help / System Info must show 24.6.0 20230520.m.2181. Several people reported installing the beta initially and seeing a build older than “2181”.
  • Make sure you are running the PS beta (when you install the beta, it keeps the regular version and you can run either).
  • Check that your age shows at least 18 in Adobe’s system (contact [email protected] if unsure). Generative Fill in Photoshop (Beta) is only available to users of at least 18 years of age.
  • Make sure you have a supported license type: Creative Cloud Individual license, CC Teams, CC Enterprise, or Educational.
  • Note: Generative Fill is not available in China at this time.
  • See the official Adobe support page

If you see Generative Fill, but it is greyed out / unavailable, please check the following:

  • Note: hovering over the Generate button should show a tooltip explaining why the feature is unavailable.
  • Make sure you have a selection
  • Make sure your image is in 8 / 16-bit mode (32-bit HDR is not supported)
  • Use the “Fill” button in the Basics panel if you own Lumenzia. It is designed to address some edge cases (other than 32-bits, which is a fundamental requirement of the tool).

If you do not see the Generative Fill option in Lumenzia Basics, please check that you can see it as an option under the “Edit” menu in Photoshop and that you have updated to Lumenzia v11.4.0 or later.

Incredible new AI noise reduction in LR / ACR

Adobe Camera RAW (ACR v15.3) and Lightroom (Classic v12.3 / Desktop v6.3) have just added a powerful new noise reduction tool using artificial intelligence (as well as some great enhancements to HDR capabilities and more noted at the bottom of this post). Noise reduction is a powerful tool not only for high ISO images, but also noisy images from small sensors (such as drones) or helping to make larger prints from low ISO images. There are now a number of AI-based tools and ACR is already among the best in its first release. In this tutorial, you’ll learn why you should use it, how to get optimal results, and how it compares to DXO PureRAW.

In my experience, this tool consistently adds a lot of value for its intended targets: noisy images. It’s extremely helpful for noise in shadows, high ISO images, and small sensors (such as images captured with a drone). It can help make larger prints from images captured at an optimal ISO. And it can even reduce hot pixel noise in some images. The final result is significantly less luminance noise, less color noise, and better preservation of detail compared to the older manual noise removal.

 

Workflow for Adobe Denoise:

  1. Open or select an image in LR or directly in ACR (the feature is not available inside RAW Smart Objects and the RAW Filter does not actually work with RAW data). You may also <shift>-click to select multiple images at the same time.
  2. Consider using exposure or shadow adjustments so you can clearly see any noisy areas you’ll want to review in the next steps (this won’t affect the results and you can change/undo it later).
  3. In the Detail tab, click the “Denoise” button. If you <alt/option>-click Denoise, it will run “headless” and immediately process the image with the same settings used last time you ran denoise.
  4. The preview shows 100%. You cannot change the size of the preview, but you can easily pick other parts of the image to preview by clicking directly on the image. You may alternatively click the – icon (or <alt/option>-click in the preview) to zoom out, then click elsewhere in the preview to zoom back in.
  5. Click the preview to view before/after to help choose the desired amount of Denoise. 50% is generally a good amount.
  6. Click “Enhance” then you may open the new image which shows in the film strip. You may wish to <ctrl/cmd>-click to select the original as well so that you may blend the two.
  7. You may apply manual noise reduction in addition to Denoise. That’s not something you should do often, but it can be useful in some cases.

Some tips for working with Denoise:

  • Expect to make some minor changes to adapt your approach if you’re already using another noise reduction tool. Adobe Denoise is designed to reduce noise. It does add some detail, but not real sharpening like DXO or Topaz DeNoise do. So you’ll need to add some degree of sharpening or detail enhancement to the Adobe Denoise image if you’re hoping to match the level of detail those products tend to produce by default.
  • You can apply Denoise at any time on the RAW file, but it should ideally be done before using any tools which make permanent changes based on the current pixels (this includes AI-based masks like Select Sky/Subject and the healing brush). Denoise will automatically update those areas, but it’s a good idea to review them if you denoise after those changes. I’ve also seen denoise make some slight shifts in apparent color/tone. They’re quite minor, but it’s a good idea to look for changes if you’re denoising an image you’ve already processed.
  • You can skip the popup interface and run with the same settings you used last time by <alt/option>-clicking Denoise (the … will disappear on the button when you’re holding the correct shortcut key and hovering).
  • Adobe Denoise also adds detail, which can be both a benefit and a potential concern, depending on the image content. It automatically turns on the “RAW Details” enhance option. If you compare Denoise at 1% (where it’s doing almost nothing) vs unchecking Denoise (so that it’s completely off), there is a significant change in detail. In other words, it adds a lot more detail beyond that provided when you only have “RAW Details” enabled. This extra edge detail increases with higher amounts of Denoise. This has several implications:
    • You may turn on Denoise at a very low percent to help reveal more detail in an image (like a form of AI capture sharpening).
    • You may find this detail results in artifacts in some areas. So even if you’re already comfortable with how RAW Details affect your images, you should review the results closely since they’re different now. Watch out in particular for halos along strong edges like backlit buildings. If you run into problems, you can blend locally with the original, blend with a version using less Denoise, or just use the old manual noise reduction as needed. It’s not a concern I’ve seen in many images, but you should be aware of it. It’s also the sort of problem I expect may be eliminated as this tool matures with future updates.
  • One scenario that I find Adobe Denoise doesn’t handle well yet: very high contrast edges, such as a sunset sky behind the hard edge of a building. In that scenario, you may see halos. Hopefully this is addressed in a future update, but there may be a few scenarios where another approach is preferable or should be blended in via layer mask.
  • If you wish to filter LR to only show denoised images, you may search on the text “enhanced” and may further limit metadata to file type of “digital negative / lossless” to show DNGs (in case you have unrelated files with a similar name). You may also go to Settings / File Handling and check “automatically add keywords to enhanced images”, which will cause “Denoise” to be added as a keyword (the amount of noise reduction is not noted in either the keyword or new file name).
  • There’s no direct way to filter to files which have not been denoised, but you could use a creative approach by setting a metadata filter for file type to “Raw” and opting for Denoise to output as a stack. The resulting DNG will be at the top of the stack and will not show in the filtered stack (so long as you leave the stack collapsed). This assumes you did not import your images as DNG, in which case both the source and denoised image would have the same “file type”.
  • Speed with this tool varies wildly based on your computer. On my M2 Max, converting 10 D850 images took an average of 27 seconds per image. I’ve heard reports of much longer times with much older computers, so your speed will depend heavily on your computer’s capabilities. If you have a slower computer, I recommend just letting the batch run in the background (you can even keep working on other images in LR if you like). Adobe’s official guidance is: “For best performance, use a GPU with a large amount of memory, ideally at least 8 GB. On macOS, prefer an Apple silicon machine with lots of memory. On Windows, use GPUs with ML acceleration hardware, such as NVIDIA RTX with TensorCores. A faster GPU means faster results.”
  • You can get basic info on your GPU in PS under Help / GPU Compatibility. If you’d like to compare with others, some GPU benchmark options which have been recommended to me are 3DMark (for PC) and Cinebench for (PC or MacOS).
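If you ever want to audit denoised files outside of LR’s filters, a small script can separate enhanced DNGs from originals by filename. This is just a sketch: it assumes denoised output carries an “Enhanced” suffix in its filename (as with the “enhanced” text search mentioned above), which you should verify against your own exports:

```python
def split_enhanced(filenames):
    """Partition filenames into (enhanced DNGs, everything else), assuming
    denoised output includes "Enhanced" in the name. That naming is an
    assumption to verify against your own files."""
    enhanced, originals = [], []
    for name in filenames:
        if name.lower().endswith(".dng") and "enhanced" in name.lower():
            enhanced.append(name)
        else:
            originals.append(name)
    return enhanced, originals

files = ["DSC1001.NEF", "DSC1001-Enhanced-NR.dng", "DSC1002.NEF"]
enhanced, originals = split_enhanced(files)
print(enhanced)  # → ['DSC1001-Enhanced-NR.dng']
```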

 

Adobe Denoise vs DXO PureRAW 3:

There are a few AI-based noise reduction tools out there and I’ve posted tutorials previously on DXO PureRAW 3 (with “DeepPRIME XD”). How does ACR compare? The short answer is that they have complementary strengths, so I prefer to use a mix of both depending on the image. The full answer is a bit longer, as they aren’t fully comparable: DXO targets a larger range of RAW enhancements.

Pros for Adobe Denoise:

  • Included with ACR and therefore costs nothing if you have Photoshop / Lightroom CC (vs $129 for PureRAW for a new purchase at full price).
  • Offers control over the degree of noise reduction, which can be helpful to fine tune the balance between noise reduction and preserving detail.
  • Less prone to artifacts in fine details.
  • Preserves the mosaic data. This may facilitate use of improved demosaicing algorithms in the future (which may help improve fine detail or reduce pixel-level artifacts). However, if you keep the original RAW, you’d always have this data (and the combined size of the original and DXO DNG is only about 5% larger than the Adobe DNG). So this is nearly a wash if you’re willing to keep the original when using DXO.
  • Simpler to use. There’s not much to think about here, which is nice. That said, the DXO interface isn’t too complicated.
  • Embeds fast load data, which may provide some performance boost when changing images in LR / ACR.
  • I find that it does a somewhat better job with high ISO night sky images. DXO tends to show some artifacts (faux star trails) and makes secondary stars too strong (which makes for a cluttered star field where everything is a bright star). Adobe Denoise also makes secondary stars too strong, but the overall result is a bit better.
    • One benefit of this cleaner result is that you can combine Adobe Denoise with stacking multiple images. This would help achieve greater total noise reduction and/or allow you to shoot fewer images. I’d happily stack say 5 images instead of 10-20. Not only will that save time (on a typically cold night), but it also helps reduce problems with sky areas near edges (where you may not have much data from other images in the stack).
    • I find that DXO does a better job with foregrounds, so I expect I’ll use both tools on the same image and use the sky from ACR and the foreground from DXO.
  • Direct integration with LR / ACR.
  • It’s a first release and only going to get better. Adobe’s Eric Chan noted in his blog post that they’re continuing to work on better training data, support for using Denoise with Super Resolution, and eliminating the creation of a new DNG file. If you look at the history of another new feature, HDR, you’ll see it has improved significantly in the past six months already (including the improvements noted below).

Pros for DXO PureRAW:

  • Does a better job of enhancing high ISO shadow details. It also includes a “lens softness” control to help control the degree of detail enhancement. When you want to make enlargements for print, I find DXO (especially when combined with Topaz Gigapixel) is an indispensable tool. This is a great reason to consider adding DXO to your toolkit.
  • 25% smaller files. I assume this is because PureRAW is saving demosaiced data (RGB) vs Adobe Denoise which preserves mosaic data (RGGB). For reference, a typical D850 is roughly 51MB for the original NEF, 135MB for the PureRAW DNG, and 178MB for the Adobe DNG. You might save the file space or use this as an opportunity to keep the original RAW for reprocessing in the future as these algorithms continue to improve.
  • Offers vignetting, chromatic aberration, and lens corrections. I don’t generally find these to be huge advantages. The lens corrections are very helpful if you get perfect results, but if you wish to blend in some of the original image, you need to skip them to align the image. I wouldn’t say I’ve found that the chromatic aberration and vignetting corrections address problems I can’t address with other tools in ACR.
  • Outputs to a sub-folder, which may be preferable in organizing the derivative DNGs.
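The file-size tradeoff above is easy to quantify. Using the example D850 sizes from the list (51 MB NEF, 135 MB PureRAW DNG, 178 MB Adobe DNG), a quick sketch shows where the “25% smaller” figure (and the roughly 5% overhead of keeping the original alongside the DXO DNG) comes from:

```python
nef_mb, dxo_mb, adobe_mb = 51, 135, 178  # example D850 sizes from above

# DXO DNG vs Adobe DNG: roughly 25% smaller
savings = 1 - dxo_mb / adobe_mb
print(f"{savings:.1%}")  # → 24.2%

# Keeping the original NEF alongside the DXO DNG vs the Adobe DNG alone
overhead = (nef_mb + dxo_mb) / adobe_mb - 1
print(f"{overhead:.1%}")  # → 4.5%
```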

Note that camera support varies a little bit here and both are likely to offer expanded options over time. I do not know if Adobe Denoise supports more cameras than PureRAW in general or just different cameras (you can check your images with DXO’s free trial to confirm with your own images). While both were able to process images from a DJI Mavic Pro drone, the results from PureRAW were not as expected and it does not seem properly supported as of v3.1.0.

Adobe Denoise is also able to process a RAW (not ProRAW) photo from iPhone 14, whereas PureRAW 3 could not. Note that the iPhone can capture two different types of RAW images and only one of them can be processed with Adobe Denoise. The native iPhone app RAW files are ProRAW files which are already demosaiced (partially processed), and therefore not compatible with AI Noise. You’ll have to use an app like ProCamera to capture in the mosaiced RAW format if you wish to use this software. You can tell which is which in ACR / LR based on Denoise availability, as well as by reviewing Metadata/DNG in LR and checking to see that “mosaic data” shows “yes”. In a brief test, I felt that the RAW + Adobe Denoise version showed more noise and less sharpening artifact, so it may be worth exploring this if you do serious photography with an iPhone (but the differences probably aren’t worth the effort for casual use).

I find these tools are very complementary and I’m glad to have both. When to use which tool:

  • When you want noise reduction with the most natural look: ACR Denoise
  • When you want to enhance fine detail or restore very noisy shadow detail: PureRAW3
  • High contrast edges (such as sunset behind buildings): PureRAW3 (note that you may see better results using the option to remove chromatic aberration in ACR than in PureRAW3 in this scenario)
  • Starry night skies: Adobe Denoise or manual noise removal in ACR coupled with stacking. Adobe Denoise makes minor stars more prominent (which can make the sky too cluttered). You can get a better result by using a more modest amount of Adobe denoise and then add manual noise reduction. I also find Adobe Denoise may require a slight shift in tint to keep consistent color in the night sky. I’d like to see both tools improve their ability to handle starry skies.
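For those who like a cheat sheet, the guidance above can be distilled into a simple lookup. This is only a sketch; the scenario keys and the fallback default are illustrative labels, not an official taxonomy:

```python
# Rough decision guide distilled from the list above.
DENOISE_TOOL = {
    "natural_look": "ACR Denoise",
    "fine_detail_or_noisy_shadows": "PureRAW 3",
    "high_contrast_edges": "PureRAW 3",
    "starry_skies": "ACR Denoise + stacking",
}

def pick_tool(scenario: str) -> str:
    """Return the suggested tool for a scenario, or a fallback suggestion."""
    return DENOISE_TOOL.get(scenario, "try both and compare")

print(pick_tool("starry_skies"))  # → ACR Denoise + stacking
```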

This is just the first release of the tool, so the last couple of items may shift in favor of Adobe Denoise as ACR is updated over time. Ultimately, Adobe is coming out strong in their first release and I expect many photographers will decide it’s good enough (given it provides great results at no extra cost). However, I think PureRAW is excellent and there are compelling advantages for many photographers. I recommend trying DXO’s free trial to see for yourself; your degree of benefit will depend on the kind of images you capture.

 

Adobe Denoise vs Topaz Denoise:

Many people love Topaz Denoise. They make amazing products and I’m an enormous fan of Topaz Gigapixel. Topaz were early pioneers with AI noise reduction and have made a great product. However, as often happens, there’s a lot of competing innovation and, in my opinion, leadership has now shifted to Adobe and DXO. In particular, I believe the RAW processing workflow is simply better with those products. However, if you don’t care about reducing noise at the RAW stage, then you’ll likely have a stronger preference for Topaz.

Pros for Adobe AI Denoise:

  • You can apply the noise reduction at any time and trust that the new RAW (linear DNG) will look extremely close to where you started. That makes it very easy to migrate existing edits. That’s not the case with many files I’ve tested with Topaz. I see huge shifts in color balance, tonality, and sometimes overall vignetting of the image. It’s a different result, and in my experience complicates editing.
  • I prefer the results from Adobe, I find they have the least amount of artifact of any of the AI tools so far. But it depends on your image content, and there is no universally better tool.
  • It’s simpler, just a single Denoise slider. Topaz also has a reasonably simple interface, but you have to choose which model to use and set up to four sliders.
  • Topaz is not showing the RAW image as you’ve processed it, which I find makes it harder to make optimal decisions with those various choices (especially if making critical decisions to reduce noise in shadow areas).
  • Denoise is effectively “free” as it is included with the cost of LR / ACR.
  • Direct integration with LR / ACR.

Pros for Topaz:

  • You can use it on TIF, JPG, and PNG – which is beneficial for improving images you’ve already edited, stock images, etc.
  • You can use it as a filter on a Smart Object. So if you prefer more flexibility, you can change the noise reduction settings later. This may mean an extra Smart Object + filter (if you aren’t applying it to a RAW Smart Object). I’d generally be careful with applying noise reduction after sharpening, clarity, etc.
  • It also offers controls to add some sharpening / detail. This can be very useful (it can also be misleading if you compare the initial Topaz results to the unsharpened results from Adobe Denoise). Personally, I find this may produce unwanted artifacts at an early stage of image processing which may limit the ability to enlarge later. I prefer not to sharpen this way on the RAW file.

 

I expect Adobe Denoise will spur more innovation and I look forward to seeing how things continue to improve across the ecosystem. If you feel I’m overlooking anything here (or things change in the months to come), please comment below. I’m sure Topaz will keep producing great updates.

 

Other notable changes in LR:

  • Edit in / “Open as Smart Object layers in Photoshop”.
  • Curves in local adjustments.
  • Ability to import AVIF (new format which is much smaller than JPG) and HEIF (iPhone).

 

ACR v15.3 also has some nice HDR enhancements, including:

  • Vastly improved color for orange/yellow HDR highlights, which is particularly beneficial for images such as sunrise / sunset.
  • Full support for color grading in HDR.
  • A new keyboard shortcut (<shift>-O) to toggle the “Visualize HDR Ranges” overlay.

 

[Disclosure:  This post contains affiliate links.  I have purchased all the software referenced above and only endorse tools I personally use and recommend. If you purchase through these links, a small percentage of the sale will be used to help fund the content on this site, but the price you pay remains the same.  Please see my ethics statement if you have any questions.]

Greg Benz Photography