Luminosity masks are well-known for their ability to precisely and naturally target pixels in an image based on their tonal information. They also allow you to target other attributes such as color and saturation. Lumenzia v11.5 now offers a completely new way to target any color in Photoshop. It not only lets you target a range of hues (which you can customize extensively), it lets you further target nuances in the luminosity or saturation of those pixels as well. For example, you could create a selection or mask to help isolate brighter red-orange flowers in an image from their surroundings.
To use the new color previews, simply click on any color swatch or the color picker at the top of the panel. You may then optionally refine the preview in a number of ways:
Click and drag the thumbs on the hue slider which appears to refine what is included/excluded, as well as the feathering at the limits. If you click and drag the area between the thumbs, you can move multiple thumbs at once (which is helpful to work quickly or to adjust the limits while keeping the same feathering).
Click on any luminosity preview button (such as “L2” or “(b)”) to further isolate your color targeting by luminosity. It will show with a green outline to help note that the luminosity is actively being constrained as well.
You may use the precision and value sliders to refine the D, M, L, or zone targeting as you like. Or if you wish to remove the luminosity constraint, just click the same button again (click “L” if any L preview was targeted, “(b)” if zone B was targeted, etc.).
You may further isolate by saturation by clicking on Sat (for more saturated colors) or Vib (less saturated). You may also adjust the opacity of the bottom layer in the preview to refine saturation targeting (as noted in that layer’s name).
And you may of course refine levels at the top of the preview or use the optional layers to dodge/burn the preview or paint directly on it (as you can for any preview by enabling the options in the top-right flyout of the panel).
You may then apply the preview as a selection or mask to any layer. For example, you might use these new color previews to:
Reduce the vibrance of a yellow wall behind an orange subject (this offers greater control over both the targeting and the type of adjustment than can be achieved with a standard HSL layer).
Darken a bright blue sky without affecting the bright yellow building next to it.
Apply a Nik Color Efex adjustment only to the red flowers in a bouquet.
The possibilities are endless, as you can use these new color previews with any layer mask, filter mask, or selection.
See the release notes for all the details on other recent updates to Lumenzia.
Web Sharp Pro v5.8 is now available as a free update for all customers and includes the ability to use Generative Fill AI to export at any aspect ratio without cropping out any of your original image. So if your image needs to be a little taller or wider to meet the needs of Instagram, Pinterest, or just to match other images on your site, you no longer need to crop off the edges of your image to fit, nor add white space or some other filler. Web Sharp Pro will now use Photoshop’s artificial intelligence to fill the gaps, so you can preserve your original content and add just a little more sky to the top or sandy beach in the foreground to achieve the required size.
Tips for using Generative Fill:
Go to Settings / Quick Export and set the Crop / Fill dropdown to “Keep full image (fill via Generative Fill)”.
You can interactively refine any crop and the area to be filled by also checking the “interactive crop when filling” option. I would generally leave this on, unless you’re doing a batch export. It will default to keeping the full image and using a symmetric fill on the top/bottom or sides as needed.
If you are using any of the cropping modes, you may also leave some edge gaps anytime you’d like to fill. When you do this, you will be prompted for the type of fill to use, and Generative Fill will be one of the options.
Generative Fill tends to work best when the expanded area would contain unique content not seen elsewhere in the image. Content-Aware Fill is an alternative option and tends to work best in areas of repeating patterns/texture or when exporting at high resolution (as the current beta version of Generative Fill only creates content which is 1024 pixels on the long side and scales from that for larger sizes).
Generative Fill is very new (still in beta) and I would expect that it (and perhaps the Web Sharp Pro interface in turn) may adapt over time as the tool likely continues to improve.
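To make the resolution tradeoff above concrete, here is a minimal sketch of the upscaling implied by the beta’s 1024-pixel limit. The function name and the assumption that generated content is scaled linearly from a 1024 px long side are mine for illustration; the exact behavior may change as Generative Fill evolves.

```python
# Rough sketch: estimate how much generated content must be upscaled for a
# given fill region, assuming the beta generates at most 1024 px on the
# long side and scales up from there (per Adobe's current beta behavior).

def genfill_upscale_factor(region_w, region_h, native_long_side=1024):
    """Approximate upscale factor applied to generated content for a region."""
    long_side = max(region_w, region_h)
    return max(1.0, long_side / native_long_side)

# A 1080 px wide strip added for an Instagram export: barely any upscaling.
print(genfill_upscale_factor(1080, 300))   # ~1.05x
# A 5400 px wide strip for a high-resolution export: significant softening,
# which is where Content-Aware Fill may be the better choice.
print(genfill_upscale_factor(5400, 300))   # ~5.3x
```

The larger the fill region relative to 1024 px, the softer the generated content will look next to your original pixels.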
Web Sharp Pro v5.8 also features:
The ability to slice any export into columns via quick exports (no need for overlay templates).
Because phones are oriented vertically, it can be very powerful to slice horizontal / landscape images into several columns for the viewer to swipe through, rather than show a very small single image. This is especially handy for Instagram, Threads, and other social networks on phones.
Go to Settings / Quick Export to choose columns. Quick tip: you can use math in the size fields. So if you want to export 5 columns of 1080 wide by 1350 tall for Instagram, you can enter the width as 5 * 1080 and Web Sharp Pro will automatically calculate 5400 as the correct width in pixels for the overall export.
Simplified interactive cropping (as of v5.8.1). You no longer need to hold <shift> to maintain the aspect ratio when cropping. It is fixed to avoid making any unwanted changes to the aspect ratio.
Support for content-aware fill in Quick Exports as well. This may be preferable to Generative Fill for high resolution exports or areas of texture or minimal detail.
Improvements to templates for the Threads social network and more. See the release notes for full details.
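The column-width arithmetic behind the quick tip above (entering 5 * 1080 in the width field) can be sketched as follows. The function names are illustrative, not part of Web Sharp Pro’s interface:

```python
# Sketch of the math behind Web Sharp Pro's "math in the size fields" tip:
# the overall export width for equal columns, and where each slice falls.

def overall_export_width(columns, column_width):
    """Total export width in pixels when slicing into equal columns."""
    return columns * column_width

def slice_bounds(columns, column_width):
    """(left, right) pixel bounds of each column in the overall export."""
    return [(i * column_width, (i + 1) * column_width) for i in range(columns)]

# 5 Instagram columns of 1080 x 1350:
print(overall_export_width(5, 1080))  # 5400, matching the example above
print(slice_bounds(3, 1080))          # [(0, 1080), (1080, 2160), (2160, 3240)]
```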
If you’ve been following me for a while, you know I’m a fan of next-generation HDR displays. These displays are absolutely stunning – for the first time ever, we can truly see all the detail and color in our RAW files. I’ve just picked up a stunning external HDR monitor and wanted to write a quick review of my experience so far and how it compares to my Eizo and other displays I’ve used in the past.
For the past couple of years, I’ve been using the “Retina XDR” display built into the M1 and M2 MacBook Pro. These are stunning screens that far surpass anything I’ve seen in any other laptop, but they are limited to 14″ and 16″ sizes. Most of the time in my office I’ve been using a 27″ Eizo (CG2730). It’s an incredible display, but does not support HDR at all. So I’ve been seeking a large external HDR display. After extensive searching, it’s clear that Apple’s “Pro Display XDR” is in a class of its own. These are also very expensive displays when purchased new, but you can save a substantial amount if you purchase one used. I picked up the Apple Pro Display XDR, Apple stand, and the Logitech 4k magnetic webcam designed for this monitor for $3000 on Craigslist. That same bundle would have cost roughly $6700 with tax new ($5k for the display, $1k for the stand, $200 for the webcam). So I got everything in mint condition (including original packaging), with more than a year of remaining warranty, for 55% off. It is still a lot of money for most budgets, but it is an excellent value.
Before we get into the review, a quick primer on HDR displays. Current HDR technology tends to fall into one of two camps: ideal in a dark room (modern OLED) and ideal for everything else (mini-LED). There are already newer, brighter OLED displays in phones, but it will likely take a while before we have external monitors which are optimized for both typical ambient conditions and extremely dark viewing. So you may find your TV looks amazing for movies at night, while being very hard to see in the daytime. Or you may hear a professional who makes movies in a controlled environment talk about how important an OLED or reference monitor is for their work. The key thing to understand is that we don’t yet have affordable technologies which are perfect for all ambient conditions, so the best technology for watching movies at night and best technology for editing photos in the day are often different. I own five different HDR displays and have tested them and many others in a variety of conditions. At this point in time, I believe most photographers will be much better served with greater peak brightness (1000+ nits) than perfect blacks in an HDR monitor for photography. For a deeper discussion on the various technologies, see here.
General impressions of the Pro Display XDR
The Pro Display XDR has some truly impressive specs: 32″ size, 1600 nits peak brightness for stunning HDR display, deep integration with MacOS, no fan noise, and gorgeous aesthetics. The image quality is absolutely stunning. If you have not seen a proper HDR image on this monitor or a similar 1600 nits screen (such as the Apple Silicon MacBook Pros), it’s very hard to appreciate what a substantial improvement in image quality it offers. It’s the most significant improvement in photography displays I’ve seen in decades. The display is truly stunning for both HDR and print-centric workflows. I’ll dive into that more in the comparisons below.
The Pro Display XDR was originally launched nearly 4 years ago and it would be natural to ask if it will be outdated soon. I don’t think that will be the case. In the next 5 years or so, we’ll probably start to see monitors offering better OLED (QD, MLA, tandem stack) or micro-LED. All of these technologies are emissive displays which offer true black (to eliminate blooming and offer much greater dynamic range in dark viewing conditions) while also offering very high peak brightness (in order to retain HDR benefit in bright viewing conditions). While I’m looking forward to those future technologies in a monitor, I suspect mini-LED displays like this will probably be the best HDR option for most photographers for several years to come. Unless you use a current generation OLED in an extremely dark room, you’re better off with higher peak brightness than true black pixels.
The treatment of resolution is very interesting. My friend Mark pointed out that whether you’re at the default 3k or the full 6k, the actual photo in Photoshop is 100% identical. If you set the zoom level to 100% at each resolution, you’ll find the image fills the same portion of the screen. If you take screenshots of each, you’ll find even the lower-resolution screenshot is 6016 x 3384. You can put the screenshots in difference mode (aligning as needed) and you’ll find they are completely identical. So you can use the larger interface and still get the maximum image resolution. It’s very slick!
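The arithmetic behind this is simple, assuming the default mode works like other MacOS Retina scaled modes (a 2x backing scale, which matches the numbers above):

```python
# The "looks like" 3008 x 1692 default is rendered at a 2x backing scale,
# which is why screenshots (and the Photoshop canvas) are still full 6k.

def backing_resolution(looks_like_w, looks_like_h, scale=2):
    """Pixels actually rendered for a given scaled 'looks like' resolution."""
    return (looks_like_w * scale, looks_like_h * scale)

print(backing_resolution(3008, 1692))  # (6016, 3384): the panel's native 6k
```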
The $1k stand for this display is the target of a lot of understandable frustration and sarcasm, but now that I’ve used it, I’ve been won over: it is truly unique. The attachment mechanism is dead simple: just touch the monitor to the stand and a magnet pulls it into position, then a mechanical lock automatically secures it. You have to bring the stand to full height and flip a switch to release it, so it’s very secure and yet very easy to detach when needed. Height and tilt adjustments are extremely smooth and precise; you could easily reposition the screen with a single finger. You can rotate the monitor to a vertical orientation if desired (and MacOS will automatically adapt). The stand is very solid, substantial, and gorgeous. You can easily buy a cheaper stand if you prefer, but if these design features appeal to you, I don’t think you’ll find another stand like it.
What could be better? It would be very nice if it had a downstream Thunderbolt port, which would make it easier to connect everything with a single cable. Many users would probably like to see a high-quality web cam integrated into it as well, though the Logitech webcam works very well to address that need.
I haven’t tried extensively with the right PC, but I have yet to have success using this as an external HDR display for Windows. According to Apple, you can connect the Pro Display XDR to a Windows or Linux PC equipped with a GPU that supports DisplayPort over Thunderbolt or DisplayPort Alt-mode over USB-C. But I haven’t had luck controlling HDR or brightness, which may simply be addressed with options I haven’t tried yet. I would recommend Windows users consider the ASUS ProArt Display PA32UCG.
Pro Display XDR vs laptop Retina XDR:
The naming here can be a bit confusing as both are labeled as “XDR”, which is simply Apple’s hardware branding for “Extreme Dynamic Range”. This branding indicates that you’re getting the very highest level of HDR support available.
I’ve compared the internal laptop XDR display with this external Pro Display XDR and would say they are generally very similar. Both offer excellent 1600 nits peak brightness and mini-LED with local dimming for excellent HDR display. Both offer deep MacOS integration and detailed control over the monitor’s characteristics. You can set the HDR brightness anywhere between 50 and 1600 nits and SDR brightness anywhere between 50 and 500 nits. That’s very handy if you wish to simulate less capable HDR displays or easily set your monitor to fixed levels of brightness to manage print workflows. You can also customize the color gamut, white point, and EOTF to a degree.
There are a few differences of course, including:
Size, obviously. The Pro Display is 32″ vs 14 or 16″ for the laptop (I have the 14″ laptop for lightweight travel).
Resolution. The Pro Display is 6k (6016 x 3384 max, with 3008 x 1692 as the default) vs the laptop’s 3024 x 1964 max (with 1512 x 982 as the default).
Dimming zones. The Pro Display has 576 dimming zones vs 2500 for the laptop. A greater number of zones helps reduce “blooming”. I’m not entirely sure why a higher resolution display has fewer dimming zones, but assume pixel pitch/density has something to do with it (the larger display is 218 pixels per inch vs 250 for the laptop). Ultimately, I do not see performance differences as a result, which I’ll discuss below.
The Pro Display XDR uniquely has an option to optimize backlight performance for color detail vs minimizing blooming/halos, but I stick with the default and find them similar.
The Pro Display XDR has a higher contrast ratio (20MM:1 vs 8MM:1), but I find them very similar.
The keyboard controls for brightness only seem to control the internal display. It would be nice to be able to adapt the external monitor’s brightness with the keyboard, but you can easily do this via the control center at top-right if you go to System Settings / Control Center and set display to “always show in menu bar”.
You cannot turn off the Pro Display without unplugging it, locking the screen, putting the computer to sleep, or turning the computer off. You cannot simply slide the dimmer to the minimum to make the screen truly black. Using cmd-ctrl-Q to lock the screen is a good option. Not a big deal, but you may need to adapt a bit if you don’t like leaving the display on.
Ultimately both displays perform very similarly outside the size (and resolution, though both look great at the default resolutions which are well below their limits).
Like any transmissive display with local dimming, both will show “blooming” when there are bright pixels surrounded by dark pixels. This is because if any pixel within a given lighting zone is not black, then that zone’s backlight is turned on. That means that you can only see a truly black pixel when every pixel in that zone is black (or technically more than 20.5 stops below SDR white in my Photoshop testing). With brighter pixels in a zone, the backlight gets brighter and the minimum “black” increases, which is what creates this blooming effect. This is not something you’ll notice under most conditions. If you work in a room which is completely black or very dark, that’s when you’d likely notice it. That blooming is also the reason why a professional color grading a movie (which takes place in a very dark environment) would opt for a $33,000 reference monitor or OLED instead of a mini-LED display like this. But if you’re like most people who work with some lights on or window light, you’re unlikely to notice it. More importantly, you have much greater peak brightness to overcome ambient light, which is a major advantage over OLED for most users in general.
If you’d like to get a good sense of how much blooming there is with your display, you can easily test this in Photoshop. Create a new image and fill it with complete black. Then press “F” twice to toggle the screen mode to full screen with no menu bar. Then just move the cursor around. If the room is very dark, you’ll clearly see blooming around it on a mini-LED (whereas OLED will show none). But if you turn on the lights or light is coming in through the windows, you probably will not be able to see any blooming because of competing reflections on the screen and your eyes being less sensitive to very dark content in a brighter environment. When you’re done, press “F” again in Photoshop to get back to the default display.
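To put a number on the “20.5 stops below SDR white” threshold mentioned above, here is a quick sketch. The 100 nits figure for SDR white is my assumption for a common reference level; the actual value depends on your brightness setting.

```python
# Luminance of a pixel reduced by a given number of stops from SDR white.
# Assumes SDR white = 100 nits (a common reference; adjust for your setup).

def nits_below_white(stops, sdr_white_nits=100.0):
    """Luminance in nits after reducing SDR white by the given stops."""
    return sdr_white_nits / (2 ** stops)

# At 20.5 stops down (the backlight-off threshold found in Photoshop testing),
# the target luminance is vanishingly small: effectively true black.
print(nits_below_white(20.5))  # ~0.00007 nits
```

In other words, a zone’s backlight only switches fully off when every pixel in it is essentially indistinguishable from black.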
Pro Display XDR vs Eizo CG2730:
I would say my Eizo is a pretty good proxy for any great SDR monitor you might currently use for printing.
Both of these are excellent monitors, but they have significant differences (the first spec in each line is the Pro Display XDR):
1600 nits HDR vs 350 nits. The HDR capability here offers massive benefits for displaying the image. When it comes to SDR content and printing, both are fairly similar.
Size: 32″ Pro Display vs 27″ Eizo. I don’t feel the extra size is critical for enjoying the images or making prints, but it has great workflow benefits. I can see much more of the image while zoomed in, show more tools, etc.
Resolution: 6k Pro Display (6016 x 3384) vs 2k Eizo (2560 x 1440). The detail is clearly better at 6k. Some people would absolutely love the extra detail; to me it’s nice but not a huge deal (viewers with less than perfect vision or correction for viewing at computer distances probably won’t see a difference).
Gamut: P3 vs Adobe RGB. On the whole, I don’t think there’s a clear winner here and they’re similar for most content. More details below.
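The brightness gap in the comparison above can also be expressed as HDR “headroom” in stops, i.e. how far peak brightness extends beyond SDR white. This sketch assumes SDR white is pinned at 100 nits for comparison; as noted earlier, MacOS lets you set SDR brightness anywhere from 50 to 500 nits, which changes the available headroom.

```python
import math

# HDR headroom in stops: how far peak brightness extends beyond SDR white.
# Assumes SDR white = 100 nits for an apples-to-apples comparison.

def hdr_headroom_stops(peak_nits, sdr_white_nits=100.0):
    """Stops of brightness available above SDR white."""
    return math.log2(peak_nits / sdr_white_nits)

print(round(hdr_headroom_stops(1600), 1))  # 4.0 stops for the Pro Display XDR
print(round(hdr_headroom_stops(350), 1))   # 1.8 stops for the Eizo CG2730
```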
My primary concern in replacing the Eizo was to make sure the Pro Display would work as well for printing. I wasn’t too concerned with gamut, but had a lot of questions about my ability to see shadow detail. Thankfully, the Pro Display holds up extremely well and I would say offers similar levels of shadow detail. The Pro Display is able to achieve deeper blacks and a bit more shadow detail, but this is likely offset by the reflective glossy display (depending on room conditions). The Eizo has more of an anti-reflective matte finish and it definitely helps minimize reflections, even in a room where I don’t have any strong lights behind me. If you’re buying this XDR display, I would consider upgrading to the anti-reflective nano-texture glass if your budget allows.
When it comes to gamut, the Pro Display offers more vibrant reds (and a bit more orange/yellow/magenta) while the Eizo offers more green and cyan. So a sunset or flower photo may look more vibrant on the XDR, but both will look great. And green/cyan water and foliage can definitely look more vibrant on the Eizo. Overall, the Eizo is a bit better aligned with the gamut of vibrant print media like Lumachrome, but I’d feel equally comfortable proofing a print on either display. Unless your image content focuses primarily on a narrow range of subjects which fall into one color camp or the other, I think both deliver great results for gamut.
Both monitors have very good uniformity visually. When tested with the Calibrite Display Plus HL, both showed some minor deficiencies from ideal – with the Pro Display being flagged in 2 corners and the Eizo in all 4. I have no concerns with either.
The bottom line for me is that the Pro Display XDR not only offers massive benefits for HDR display, but is also outstanding for making prints and any other photography work. The price will be outside the budget of many photographers, but it’s well worth it and you can probably find an excellent used one for substantial savings. I would consider the nano-textured option if your budget allows. If I couldn’t find the right used version of this monitor, I likely would have purchased it new. It’s an excellent product.
If you’re one of the few people who primarily use your monitor in a dark room, you may be better off with a bright OLED display (you will likely save some money and may achieve comparable or possibly better dynamic range – but only if the ambient light is very low). Note that I also have more information on key aspects of HDR displays and other high quality and budget HDR options on my HDR monitor recommendations page.
See the latest pricing for the Pro Display XDR at Amazon or B&H.
If something like this is out of your budget, I highly recommend the M1 or M2 MacBook Pro. You can still get an M1 MacBook with the 16″ version of this incredible display and 1TB SSD for $1900 new at B&H. And I’ve seen several 14″ and sometimes 16″ M1 or M2 laptops on eBay and Craigslist for $1000-1500. These Apple laptops can be very budget-friendly, offer an excellent HDR experience, and are incredibly powerful computers even at the lowest specifications.
[Disclosure: This post contains affiliate links. I rarely endorse other products and only do so when I think you would thoroughly enjoy them. By purchasing through my links on this post, you are helping to support the creation of my tutorials at no cost to you.]
How to set up the Pro Display XDR
Setup for most people will be a simple matter of connecting the display to a Thunderbolt port on your computer or dock. This monitor includes downstream USB ports, but no pass-through Thunderbolt. So you’ll either need multiple downstream ports on your dock or to use multiple Thunderbolt cables to connect your computer if you use other Thunderbolt accessories like a RAID drive.
Here is how I recommend you set up System Settings / Display:
If mirroring, set the Pro Display as “main display” and make sure “optimize for” also uses that monitor. Even if you mirror to a Retina XDR display on your laptop, the secondary HDR screen will be clipped at a maximum 2.5 stops of HDR headroom (even if the display is set in a way that would show 4 or 5 stops if it were the only or primary display).
Select the size that makes the interface feel comfortable for you. The image will always use 6k resolution, you’re really just scaling the surrounding interface with this choice (at least for Photoshop, I have not extensively tested this across all photography apps).
Turn off “True Tone”. Leaving it on causes significant color shifts (warm tones) which make the color less accurate and invalidate any profiling you may do.
Leave the preset at the default “Pro Display XDR (P3 – 1600 nits)” unless you need to soft proof for less capable HDR displays or use a fixed reference brightness. If you wish to customize, click the dropdown and then “customize presets” at the bottom. You can then select the specific SDR and HDR brightness. See Apple support here and here for more info.
Leave the refresh rate at the default 60Hz.
I also recommend going to System Settings / Control Center and setting Display to “always show in Menu Bar”. This gives you easy access at the top-right of your screen to change brightness (or custom presets if you use them).
Be sure to review Apple’s white paper on the XDR to understand the various preset modes.
You won’t find the maximum 6k resolution for the display listed in System Settings / Display by default (the “more space” icon only goes to 3008 x 1692). To access it, you need to click on “advanced” and turn on “show resolutions as a list”. Then you’ll see 6k as an option in the list (as 6016 x 3384). I expect very few people will want it, as it makes the interface tiny and offers no image quality benefit (you always get 6k quality for the image in PS).
Support for both AVIF (smaller file format) and HDR (“high dynamic range” via new monitor technology) is rapidly increasing. Adobe Camera RAW 15.4 just added a great new capability to export AVIF images at any time which enables new features in Web Sharp Pro.
The ability to export AVIF images via ACR (on both Mac and Windows). This offers the ability to export images which are up to 85% smaller than JPG and at the same time higher quality (fewer artifacts and higher bit depth).
A new option to convert and enhance standard (SDR) images to HDR. This can make any image look significantly better and makes it easy to use HDR with your existing edits, AI tools like MidJourney, stock images, etc.
Note that unless you see YouTube’s red “HDR” indicator by the quality setting at bottom right, you are viewing the content tone mapped to SDR (ie, it simulates the effect but true HDR will look much better).
Export as AVIF (via ACR):
The AVIF export is now possible by leveraging a new capability in ACR v15.4 which allows you to save images at any time. AVIF offers numerous benefits over JPG and will be universally supported by all major web browsers very soon (MS Edge is the only missing browser, and its AVIF support has been in testing for a couple of months now).
The workflow for the “AVIF (via ACR)” method involves manually clicking to save the image from ACR. When WSP opens the ACR interface, you should do the following:
Press <ctrl/cmd>-S or click the save icon (the icon at top right with a down arrow in a box).
Set the output folder. I recommend setting it once and leaving it; this keeps things simple.
Set the first part of the file name to “Document Name” (the first one). This will preserve the name created by WSP.
Set the file format to “AVIF”, leave metadata on “all” (since WSP already manages metadata for you), and quality between 8 and 12 (8 is fine for most use, 10 is ideal if you expect the viewer may zoom in beyond 100%).
Set “enable HDR display” appropriately (checked if and only if exporting HDR).
Set the color space to sRGB for SDR exports or to an HDR P3 option for HDR (any HDR option is safe). These are remembered separately, so once you’ve set them, you can ignore this and just make sure “enable HDR display” is checked appropriately.
Do not use image sizing or output sharpening options, as WSP has already taken care of this for you.
Click “save” or press <return>.
Exit ACR by clicking “OK” or pressing <return>.
After you’ve set up ACR the first time, the steps you’ll typically need are just these: click the save icon, confirm the filename, update HDR options if necessary, and save. If you aren’t changing the output folder or switching between HDR and SDR, you should be able to simply press <ctrl/cmd>-S and then <return> twice each time you see the ACR interface. Note that WSP will show a guidance message during ACR export covering the key details (you may need to move ACR to see it underneath in Photoshop).
WSP does all the standard file prep (resizing, cropping, borders, watermarks, etc), manages several scenarios to simplify the export process with ACR, and then invokes ACR for the final save. ACR is not really intended for this kind of use and Photoshop does not yet natively support saving AVIF, so there are a couple minor manual steps involved. Still, it’s amazing to see ACR continuing to add valuable features like this to make it easier to work with AVIF and HDR images. I’ll update WSP on MacOS to a fully automated solution if/when PS natively supports AVIF (this is already possible for Windows users). If you’d like to see native support for AVIF in Photoshop, please be sure to vote and comment in support of AVIF.
If you’re using WSP on Windows, you can now choose to export as AVIF in two different ways:
Export via ACR. This offers enhanced support for exporting HDR images and will offer the best possible image quality for HDR (ACR v15.4 only offers AVIF exports for HDR images). I recommend the ACR method for exporting HDR images.
Export natively. Photoshop on Windows already supports saving AVIF directly, so WSP can export fully automatically. This is the simplest option for SDR exports.
MS Edge is the only browser which does not yet have AVIF support. You can enable it via MS Edge Canary with a development flag as shown here. The Canary build is at v115 and the latest production release is v113, suggesting support may get into production as soon as late July.
Enhancing SDR images to HDR:
WSP v5.6 also adds a new “Enhance SDR to HDR” setting to allow you to easily enhance any standard image. This feature will convert an SDR image to HDR and significantly enhance it (by automatically expanding the dynamic range in an intelligent way). This may be used for enhancing your existing edits, enhancing images created by AI tools like MidJourney, converting stock photos to HDR, etc. For those of you focused on editing for print, this also offers a simple way to enhance that same image for online display.
This will be increasingly useful as more and more software catches up to the great HDR screens already in circulation. Most Apple computers since 2018 include HDR hardware and can properly display such images on Chrome, Opera, and Brave (ie 65% of web browsers). Android 14 beta with Chrome Canary now supports it, suggesting those with a Samsung Galaxy, Pixel 7 Pro, and other HDR-capable Android phones should be able to view HDR images on their phone by the end of this year. If/when Apple WebKit adds support to display HDR images, a massive number of iPhones and iPads will be able to display these images too. If you’re buying a new phone, tablet, or computer, I recommend considering one which supports HDR to ensure you’re ready.
Recently, zonerama.com added support to let you share your HDR images on the web, including an elegant mechanism to automatically show the SDR version of your image for any viewer which does not support HDR.
Adobe just added some of their “Firefly” generative artificial intelligence (AI) directly into Photoshop as a new beta feature called “generative fill”. It’s a very exciting development, with the potential to offer something MidJourney, Dall-E, and Stable Diffusion cannot… deep integration into Photoshop. What benefits might that offer?
Native support in Photoshop. There are already great plugins to use tools like Stable Diffusion, but Adobe can offer a richer interface. You can create a selection and work directly with your current image. Ultimately, this offers the potential for greater user control and a richer interface, as well as the convenience of doing all your work right inside Photoshop.
Generate objects. You provide the source image, a location, and a description of what you’d like to add, and the AI does the work of combining them.
Remove objects. It’s like “content-aware fill” on steroids. I find that it can offer better results than the new remove tool in many cases (though the brushing workflow is very nice and they both have their uses).
Revise objects. Want to change from a yellow shirt to a red one? Just select the shirt with the lasso tool and tell Photoshop what you want.
Expand images. You can push things pretty far and it often provides much better results than Content-Aware Fill. Generative fill seems to work better in detailed areas and Content-Aware Fill seems to excel with gradients or areas of low detail such as the sky, so using both together may produce the best results.
Create new backgrounds. You can generate content from nothing, which may be ideal if you need a backdrop for a subject.
Fewer legal / commercial / ethical concerns. Firefly has been trained on Adobe stock, so there is much less copyright concern with the source data used to train the AI. I’m no expert on the contractual terms and legal matters here, but certainly this source has significant benefits over scraping content from Pinterest, Flickr, etc which does not include model or property releases. See Adobe’s FAQ for more details.
There are several ways you can invoke generative fill:
The Lumenzia Basics panel’s “Fill” button now offers generative fill (when the feature is enabled in PS). This not only gives one-button access to use it, but includes other enhancements:
It also includes a feature to “auto-mask subject”. This allows you to easily resize, rotate, or move content you’ve added without edge concerns. When you use this option to create a new fill layer, the last mask will automatically update to isolate the subject anytime you update the layer. This prevents issues with surrounding water, clouds, etc failing to match the new surroundings after you transform your subject.
You can easily expand your image. Just use the crop tool (<C>) to expand the edges of the image and then click “Fill”. When using generative, just leave the prompt blank.
Make a selection with a tool such as the lasso, quick selection, or subject select and then go to Edit / Generative Fill.
Via the “Contextual Task Bar” (Window / Contextual Task Bar). Whenever you create a selection, you’ll see a “generative fill” button in this floating task bar. Tip: turn on “sample all layers” for quick selection / subject select, as they won’t work very well once you start creating multiple layers.
Via voice commands through macOS or Windows when using any of the above methods:
macOS: Set up via System Preferences / Keyboard / Dictation. Enable dictation and set the “shortcut” you prefer (“Press Control Key Twice” works great with external keyboards). Use your shortcut, speak, and click <return> or your shortcut key.
Windows: Start by pressing the Windows logo key + H, or the microphone key if your keyboard has one.
Revise an existing generative fill layer by selecting the layer and opening the properties panel. You can click “generate” to create new options or change the text prompt to refine your concept. You can also select from other variations.
Note that the generative fill layer is created as a Smart Object with one layer for each version you see in the properties (you can right-click the layer and choose “convert to layers” to actually see this). This has a couple of implications:
Each of these layers does take a bit of space, so clicking the “x” on unused versions will help reduce your file size (you may also rasterize the layer to save further space if you are done revising it).
You can non-destructively apply filters to the layer.
Capability like this naturally raises ethical questions around truth in imagery. It should also be noted that Generative Fill is designed to work with Content Credentials, an initiative involving companies like Adobe and the New York Times to create standards and a trail of evidence to help differentiate between original and altered content.
How good is it, and where do we go from here?
Is this a perfect AI? No, of course not – but that isn’t the goal at this stage. Adobe is making that very clear by releasing this as a feature only available in the beta version of Photoshop. This is what software developers call an MVP (minimum viable product). It’s a chance to get user input and more experience to help build the real product. You should expect that (a) it has lots of limitations now and (b) it will get much better in the future. At this time, this is a tool best used for fun and experimentation at social media resolutions. Commercial usage is prohibited during the beta phase. But it’s very exciting to get a glimpse of where things are likely headed. All the use cases above are interesting to me and would be immensely beneficial with sufficient quality.
Even if you see no relevance to this kind of AI for your work in the near future, that’s unlikely to remain the case years from now. AI tools like this are going to be constantly evolving. Most people hadn’t heard of ChatGPT until it reached version 4, and this isn’t even version 1 of “generative fill”, “Firefly”, or whatever the product will be called over time. It’s an extremely exciting development with enormous potential to alleviate tedious work and open up new creative avenues for exploration.
Personally, I’m most excited about the potential for better methods of removing distractions from my images. Cloning is tedious work. I’ll probably expand some image edges as needed for certain formats and cropping factors. I’d be happy to make some tweaks to alter some colors. However, I don’t see myself adding subjects to images because I focus on creating images which share the experience of a place. The video above is just meant to give some sense of what’s possible. I’m not going to be adding fake animals to my portfolio images.
Everyone’s needs are different. This could be a great aid for someone who doesn’t have model releases for marketing work to simply swap real people with invented ones. Some people want to create fantasy images. There are so many potential uses, and I think ultimately the evolution will take a winding path as developers find out what people really want (and are willing to pay for). That said, I think there are some fairly clear avenues of continued improvement for tools like this.
Adobe’s standalone / website version (Firefly) already has several additional features that would be very useful in Photoshop, including:
Category options for style, color and tone, lighting, etc. Many of these are less necessary in this context when you’re filling a portion of the image (vs generating something from nothing on the website), but I do think a somewhat guided experience may provide more clarity in some cases. For example, a blank prompt currently may remove a selected object or not – what’s the right approach? There is much to be learned about what interface works best, but I suspect a simple open-ended text input may feel a bit daunting for those who aren’t experts in “prompt engineering”.
“Text effects” to create gorgeous visual fonts. Fonts have many needs which are unique from image content and options here will certainly be appreciated by users such as graphic designers.
Beyond that, there are several potential ways to expand the capability:
Higher resolution. The current results are limited to social media sizes; this isn’t something you’re going to print right now. Anytime you generate content, it is created at a maximum of 1024×1024 pixels (though it will be upscaled from there automatically to fill the target space). This isn’t surprising given that Adobe is providing this for free at what must be significant cost to run on their servers, but obviously there will be a lot of demand for higher-resolution output in the future.
Improved image quality. There are artifacts, matching to the ambient lighting is hit or miss, the results may look more like art than a photograph, etc. This will obviously improve over time and I’m excited to see how it evolves. Whether training the AI from Adobe stock is a limiting factor in the long run remains to be seen – that catalog reportedly includes hundreds of millions of images (vs billions used for Stable Diffusion). I suspect that as AI models continue to improve to work with less data, the quality of the training images is going to be more important than the quantity. This will undoubtedly improve significantly in time.
Improved user interface. The current design is very basic, as if to drive home the point that it’s a beta. You can’t just press <enter> after typing your text, you can’t double-click the layer to access properties for further editing, there is no option in the toolbar for the lasso tool, clicking OK before generating leaves an empty layer, only a few iterations are offered at a time, there is no way to specify a seed to revise an existing version, the previews are small, etc.
Negative prompts. You can’t currently type “remove the sign”, though selecting an object and typing nothing often will help remove it (though other times you just get another random object).
Better support to revise existing content. Unlike the demo I showed with the Stable Diffusion plugin (where I turned my photograph into a charcoal sketch), there isn’t quite the same mechanism for style transfer with generative fill. I can select someone’s shirt and type “blue shirt” to change color. But if I select the whole image and type “charcoal drawing”, the result will bear no resemblance to the original photo. This kind of capability would be nice for altering the entire image (day to night conversions, changing the weather, time of day, style transfers, etc). And the quality of the result isn’t the same. If I try to select my mouth and type “frown” or “closed lip smile”, I don’t get that result.
On-device processing. The beta release of generative fill runs in the cloud, which means you have to be connected to the internet. Processing on your own computer would allow offline use and probably faster processing time.
AI assisted compositing. Rather than using a text prompt to create new content, imagine that you just provide a background and a subject – then Photoshop cuts out the subject with no visible edge, matches color and tone, and creates shadows or reflections to complete the composite for you.
More flexible input. Support for languages other than English is key. It also needs to be more tolerant of typos (“brooom” should be recognized as an attempt to type “broom”). It’d be nice if you could use the arrow keys to cycle through the versions you create. And while you can already use your voice, imagine a richer interface where you give an initial idea (“add some pine trees”) and then continue to refine it with feedback (“make the middle one taller, show light rays coming through the trees, and warm up the shadows”).
Support for 32-bit HDR. Photoshop’s tools for cleaning up 32-bit images are limited to the clone stamp. There is no healing, spot healing, patch, or remove tool support. It would be very helpful to be able to remove things like lens flare in HDR images.
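To make the resolution limit above concrete, here is a minimal Python sketch of the arithmetic (the `upscale_factor` helper is my own illustrative name, not part of any Adobe API): generated content caps out at 1024×1024 pixels, so filling any larger region means stretching those pixels.

```python
# Illustrative arithmetic only: Photoshop's generative fill renders at most
# 1024x1024 pixels and then upscales the result to cover the target area.
# upscale_factor is a hypothetical helper, not an Adobe API.
def upscale_factor(target_w: int, target_h: int, native: int = 1024) -> float:
    """How much each generated pixel must be stretched to cover the
    target region (1.0 means no upscaling is needed)."""
    return max(target_w / native, target_h / native, 1.0)

print(upscale_factor(1024, 1024))  # 1.0 -- fits natively
print(upscale_factor(4000, 3000))  # 3.90625 -- each pixel stretched ~3.9x
```

That roughly 4x stretch is why large fills can look soft next to the surrounding photo detail, and why higher-resolution output is such an obvious demand.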
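The typo tolerance wished for above (“brooom” → “broom”) is well-trodden ground in software. As a sketch only – the vocabulary below is hypothetical and Adobe’s actual prompt handling is unknown – Python’s standard-library difflib already does this kind of fuzzy matching:

```python
import difflib

# Hypothetical list of known prompt terms, purely for illustration.
VOCABULARY = ["broom", "bloom", "brook", "pine tree", "shirt"]

def correct_word(word: str, vocab=VOCABULARY) -> str:
    """Return the closest known term, or the word unchanged if none is close."""
    matches = difflib.get_close_matches(word.lower(), vocab, n=1, cutoff=0.6)
    return matches[0] if matches else word

print(correct_word("brooom"))  # "broom"
```

Words with no close match pass through unchanged, so unusual prompts wouldn’t be mangled – the point is simply that this kind of tolerance is cheap to add.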
There are an unlimited number of potential use cases here and it will be very exciting to see where the technology goes over time. What do you think? What capabilities and interface would you like to see for this sort of generative fill in Photoshop? I’d love to hear your thoughts in the comments below.
I’ve received various emails and comments from people who are unable to use Generative Fill, so I want to address those issues here.
If you do not see an option for Generative Fill, please check the following:
Make sure you have installed the PS beta. Help / System Info must show 24.6.0 20230520.m.2181 (or newer). Several people reported initially installing the beta and seeing a build older than “2181”.
Make sure you are running the PS beta (when you install the beta, it keeps the regular version and you can run either).
If you see Generative Fill, but it is greyed out / unavailable, please check the following:
Note: hovering over the Generate button should show a tooltip explaining why the feature is unavailable.
Make sure you have a selection.
Make sure your image is in 8 or 16-bit mode (32-bit HDR is not supported).
Use the “Fill” button in the Basics panel if you own Lumenzia. It is designed to address some edge cases (other than the 32-bit limitation, which is fundamental to the tool).
If you do not see the Generative Fill option in Lumenzia Basics, please check that you can see it as an option under the “Edit” menu in Photoshop and that you have updated to Lumenzia v11.4.0 or later.
See the store page for Lumenzia and course info. “Lumenzia” and “Greg Benz Photography” are registered trademarks of Greg Benz Photography LLC. See licensing for Commercial and Creative Commons (Non-Commercial, Attribution) Licensing terms. Join my affiliate program. See my ethics and privacy statement.