Create AI art with Stable Diffusion

Exponential growth is nearly impossible to comprehend. Imagine a lake that starts with a single lily pad, but every day the number doubles until the entire lake is covered with lily pads on day 30… How much of the lake’s surface would be covered after 25 days? Only about 3% (working back from day 30, the coverage halves each day, so day 25 is 1/2⁵ ≈ 3% of the lake). The lake would be almost entirely clear and then just 5 days later you wouldn’t see any water at all. That’s exponential growth. Nothing seems to change and then suddenly everything does.

Artificial intelligence (AI) is on an exponential growth curve. And just like those lily pads, it’s hard to comprehend how quickly it can change. One day AI is the butt of jokes for creating people with two heads and then suddenly it’s so good you can’t tell if you’re looking at AI art or a real photograph. Where are we on that growth curve? I’m not quite sure, but an AI-generated image just took first place at the Colorado State Fair. Things are improving quickly these days and I think it pays to have at least some basic understanding of AI and what it might mean for your art – even if you don’t care about it yet.

There are three interesting new AI platforms which have recently launched and allow you to simply type some words and have an AI generate an image for you: Stable Diffusion, MidJourney, and DALL-E. Each of them has its own merits (which I’ll discuss further below), but I’m going to focus this tutorial on Stable Diffusion because you can use it right inside Photoshop.

 

To set up Stable Diffusion for Photoshop:

Stable Diffusion is open-source software released by Stability AI Ltd. They also created DreamStudio to provide a web interface and API key for Stable Diffusion. I don’t understand why they didn’t name the web interface the same as the underlying software – you just need to know that DreamStudio gives you access to Stable Diffusion.

  1. Sign up for DreamStudio. You can use it for free on their website, but I think it’s worth starting with $10 to explore in depth and get an API key for the PS plugin.
  2. Install Christian Cantrell’s Stable Diffusion plugin for PS.
  3. Go to your member page on DreamStudio and click on the API Key tab, copy your API key, and paste it into the PS plugin.
  4. You can always check your balance and add funds as needed on your member page (the PS plugin will give you a warning if you need to add funds).

 

How to create images with the plugin:

Some settings impact the image content in significant ways, some affect the quality and cost, and some may impact both. The best strategy for quickly getting to good results is to use low-quality options for speed while refining your prompt and settings, and then increase the quality to create the final results.

You will get unique results if you change any of the following: prompt, prompt strength, width, height, seed (using no seed will use a random seed), and any input image you provide.

You will get similar or identical results when changing the following: steps, number of images, and (in many cases) the choice of sampler.

To start exploring an idea:

  • Fix your width and height if you need a specific output size or ratio. Changing image dimensions will change the results, so don’t bother exploring at a low resolution first; lock in dimensions if you know what you need (such as 1024 wide x 576 tall for a 16:9 ratio). However, some aspect ratios work better for some images due to AI bias, so don’t be afraid to play if you’re open to different aspect ratios for the final image.
  • Extremely low or high prompt strengths seem to produce poor results. Try staying between 5 and 10.
  • In the advanced options, set steps to 20. This will improve speed and cost while iterating without causing significant changes when you increase it later for quality.
  • Leave the sampler at the default “k_lms”. This seems to generate the best results most of the time, and you could burn a lot of time and money iterating on this setting looking for small differences.
  • Set number of images to 2-8. This will give a good sample of different results for the current prompt.
  • Click “Dream” to generate images (see the sketch just below this list for how these settings map onto the model).
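
If you’re curious what these settings correspond to under the hood, here is a minimal sketch using the open-source diffusers library. This is my own illustration (with a hypothetical prompt), not the plugin’s code – the plugin calls the DreamStudio API instead – but the concepts map directly: “prompt strength” corresponds to guidance_scale, “steps” to num_inference_steps, and so on.

```python
# Minimal text-to-image sketch with the open-source diffusers library.
# Illustrative only: the PS plugin uses the DreamStudio API, but the
# parameters below correspond to the same plugin settings.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # the publicly released SD weights
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a moonlit sandstone arch under star trails"  # hypothetical prompt
seed = 1234  # record the seed so you can reproduce a keeper later
generator = torch.Generator("cuda").manual_seed(seed)

result = pipe(
    prompt,
    width=1024, height=576,     # dimensions change the output, so lock them in
    guidance_scale=7.5,         # "prompt strength": 5-10 tends to work well
    num_inference_steps=20,     # "steps": keep low while iterating
    num_images_per_prompt=4,    # "number of images": sample several variations
    generator=generator,
)
for i, img in enumerate(result.images):
    img.save(f"dream_{seed}_{i}.png")
```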

The thumbnails in the plugin can be hard to evaluate. I like to work with a 1024×1024 image open, so that I can click the “layer” link under any of the thumbnails to see a much larger version. If you are using a source image, be sure your original source is visible before clicking “Dream” again, or you’ll be creating a derivative from Stable Diffusion’s output instead of your source. This can produce interesting results, but probably isn’t what you want to do.

Once you’ve found a version you like and want to finalize your work, use the following to refine and narrow down the image:

  • Click “seed” by that image to copy it to the seed field above and lock in on that image.
  • Set number of images to 1 (so you don’t pay for images you don’t need).
  • Increase the steps to 50-100. I don’t generally see much improvement beyond 50 and the cost increases for larger values.
  • If the final result changes in unexpected ways, review any changes you made. Increasing steps from a very low value can result in big changes. Otherwise, changes probably come from some other unintended change (or failure to set the seed). The short sketch below shows this finalizing step in code.
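
Continuing the sketch above, finalizing is just a matter of reusing the keeper’s seed while raising the step count (again an illustration of the concepts, not the plugin’s internals):

```python
# Lock in the keeper by reusing its exact seed, then spend more steps on quality.
generator = torch.Generator("cuda").manual_seed(seed)  # same seed as the keeper
final = pipe(
    prompt,                    # unchanged, or the image will change
    width=1024, height=576,    # unchanged
    guidance_scale=7.5,        # unchanged
    num_inference_steps=50,    # more steps for quality; little gain beyond ~50
    num_images_per_prompt=1,   # only generate (and pay for) what you need
    generator=generator,
).images[0]
final.save(f"dream_{seed}_final.png")
```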

Because the output size is limited to low resolutions, upscaling can be extremely helpful. I recommend Topaz Gigapixel (and you can get it for 15% off with discount code gbenz15) for best results; be sure to rasterize your layer first (smart objects are not supported) and try the “low resolution” model. Alternatively, PS’s Image Size command works well with the “preserve details” method (v1, not v2).

 

How to use a source image:

You can provide your own source image either to refine it, or to help guide your prompt. Use the following workflow:

  • Check “Use Document Image”. This tells the plugin to work from the current image as you see it at the moment you click the “Dream” button.
  • Try varying the image strength between 25 and 50. I generally like around 25-35 for using the image as general inspiration. Values around 50 are much more literal.
  • Note that the quality of the source image matters, and I recommend using something with at least as much resolution as your intended output. It does not have to match the output aspect ratio (it will effectively use a cropped version of the source), and you may wish to crop the image to better control which portion of the source is used. The sketch below shows the same idea in code.
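
For reference, here is what the same source-image idea looks like in recent versions of the diffusers img2img pipeline (a sketch with hypothetical file names; note that diffusers’ strength parameter runs roughly opposite to the plugin’s “image strength” – higher strength departs further from your source):

```python
# Minimal img2img sketch with diffusers; illustrative only.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# The source is effectively cropped/resized to the output dimensions.
init = Image.open("source.jpg").convert("RGB").resize((1024, 576))

result = pipe(
    prompt="a moonlit sandstone arch, dramatic sky",  # hypothetical prompt
    image=init,
    strength=0.65,       # lower values stay closer to the source image
    guidance_scale=7.5,
).images[0]
result.save("img2img.png")
```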

 

Other tips for working with Stable Diffusion:

  • As with any text to image AI, your prompt matters significantly.
  • Many people add references to famous artists as a quick shortcut to achieve specific looks easily, but I recommend avoiding this creative trap. Imitating others limits your potential in the long run. Try spending more time experimenting with different language and details in your prompts.
  • Some prompts just seem to get ignored. Try requesting an image with two different people and you’ll probably just see the first person in your prompt. Try asking for a single car and you may still see several. It’s not perfect; just keep experimenting.
  • Stable Diffusion was trained on 512 x 512 images. This sometimes seems to provoke some strange results with larger output sizes for portraits. You may find better results limiting the output to smaller sizes. I expect these sorts of quirks will go away as the AI is improved or retrained with larger, more detailed images.
  • When you supply a reference image, larger output sizes can be used much more reliably.
  • Some portraits seem to show as blurry. This may be a bug, or some mechanism meant to obscure results with potential copyright issues. As with any bad version, just try again.
  • Try seeing what prompts are working for others. Lexica is a helpful site to see a wide range of examples. A few more examples: here.

 

How does Stable Diffusion compare to other options?

I haven’t personally tried DALL-E because I find the degree of personal data they require for sign up intrusive and unnecessary. However, the images I’ve seen others create show excellent results with people. I get the sense that it’s well ahead of the others in this category. Many people rave about it.

Comparing Stable Diffusion (SD) and MidJourney (MJ):

  • It’s very easy and useful to use a source image with SD. You can specify an image for MJ via URL, but that’s cumbersome since you need to upload images and generate links to use them.
  • I generally find MidJourney does a better job interpreting prompts. If you have a very specific idea in mind, I’d recommend MJ.
  • The SD plugin is very handy and simplifies the learning curve by removing the need to specify options with strange text prompts like “--quality 4” or “--ar 16:9”.
  • The SD plugin currently doesn’t lend itself well to working on several ideas simultaneously. With MJ’s Discord interface, you can work on numerous ideas at the same time. However, it gets messy and potentially confusing as everything shows up in one long thread.
  • MidJourney offers higher resolution on paper, but I find that it often has small artifacts and using the beta upsizing to avoid them ultimately generates results which I believe are comparable to what you can upsize from Stable Diffusion.

Ultimately, each of these platforms currently suffers from low resolution, artifacts, and other limitations. You might love or hate them right now. What I find most interesting about them is how quickly they’ve gotten to a point where some people take them very seriously. Just like the lily pads, things are going to change very quickly in the coming years. What feels like a joke now will be replaced with something truly amazing in a few years.

 

How to eliminate false banding in Photoshop

I previously posted a tutorial on the problem of false banding in Photoshop. This can cause your image to look severely degraded when zoomed out (to less than 64%). Typically, this false “banding” would show up as uneven changes across the sky in a photograph. It’s not real, just a quirk of historical performance optimizations in Photoshop. When you view a layered image in Photoshop, you’re typically just viewing a preview of what the flattened image would look like. There’s a very good reason for this. To continuously do all the calculations for layers, blend modes, layer masks, BlendIf, opacity, etc on every pixel would cause very slow performance and reduce battery life for laptops. The engineers at Adobe have devised all sorts of tricks to help make this preview look nearly identical to what the flattened image would be. This preview is so good in fact that when we see issues like false banding, we assume the problem must be real. But in this case the problem is that you’re viewing an 8-bit preview instead of your 16-bit image.
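
You can demonstrate the underlying math for yourself: a subtle gradient simply runs out of distinct values in 8 bits, and each remaining value becomes a visible band. A tiny sketch of the arithmetic (my own illustration, not Adobe’s code):

```python
import numpy as np

# A smooth, dim sky-like gradient covering a narrow tonal range (0..1).
gradient = np.linspace(0.55, 0.65, 4000)

levels_8 = np.unique(np.round(gradient * 255)).size     # 8-bit preview
levels_16 = np.unique(np.round(gradient * 32768)).size  # PS 16-bit (0..32768)

print(levels_8, levels_16)  # ~27 vs ~3278 distinct steps
# With only ~27 levels spread across thousands of pixels, each level
# spans a wide strip of "sky" and reads as a band.
```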

Now that computer performance has improved so much over the years, Adobe has released a solution that eliminates the issue with a more accurate 16-bit preview. The underlying issue is that certain levels of cache in Photoshop (those affecting the view when you are not zoomed in close) were generated using 8-bit cache data. This tends to become a problem when using adjustment layers with significant adjustments on gradients (such as the sky). With Photoshop v23.5, we can now tell Photoshop to use 16-bit previews all the time, which eliminates this false banding. Just go to PS prefs / Technology Previews, check “Precise Previews for 16-bit documents”, and restart Photoshop. As a Tech Preview, this option will likely just become the default behavior in the future. So if you’re using a future version of PS (newer than v23.5) and don’t see the option, you’re already getting the benefit.

This change also has another great benefit: more accurate histograms. The histogram is based on the same data used for the preview (unless you click the warning triangle or circular arrow by the histogram to refresh it based on the current state of the document). With the old 8-bit previews, the histogram was frequently misleading and would often show spikes. But with the 16-bit tech preview enabled, your histogram should be very accurate and smooth. I don’t see any need to refresh the histogram anymore (unless you’re using statistics and want them to be exact rather than just very close to the exact value).

I have seen no tradeoffs with this tech preview enabled. Performance is excellent and I highly recommend you enable it.

If you’ve enabled this setting and still see banding or spiky histograms, it’s real (you can flatten the image or zoom to 100% to confirm). The most likely cause of this would be working with 8-bit data (such as stock images or if you’ve opened your image in 8-bit mode).

5 great ways BlendIf can improve your photos

BlendIf is one of the most powerful tools in Photoshop. It allows you to quickly and easily make adjustments specific to highlights, shadows or midtones. It works similarly to “range masks” in LR/ACR, with the distinct advantage that you can use BlendIf on any adjustment you can make in Photoshop. You can also use it like a range mask to work with any of the tools in LR/ACR which are not available as local adjustments (HSL, vibrance, camera calibration, curves, advanced sharpening / noise reduction, etc).

In this tutorial, you’ll learn how to use BlendIf to improve your images in numerous ways including:

  1. Make sunsets glow (without harming shadows)
  2. Better dodging & burning (targeting highlights and shadows)
  3. Fix blown highlights
  4. Better noise reduction (keep highlight detail)
  5. Better vignettes (avoid crushing shadows)

But these are just a few ideas you can apply to your own work. You might use BlendIf to help lighten shadows, color grade, apply a Nik filter to highlights, target specific colors to enhance a sunset, increase mid-tone contrast, target color channels in the image, etc. There are endless scenarios where targeting your adjustment by tone (or color) using BlendIf can greatly improve your image.
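
Conceptually, BlendIf computes a per-pixel blend weight from tonal values on the fly rather than storing a mask, which is exactly why it adds nothing to file size and updates dynamically (see the comparison below). Here is a rough sketch of the idea behind the “Underlying Layer” sliders – my own simplification, since Photoshop’s exact math isn’t documented here:

```python
import numpy as np

def blendif_weight(lum, lo=0, lo_split=0, hi_split=255, hi=255):
    """Approximate a BlendIf 'Underlying Layer' ramp: fully hidden below
    lo and above hi, fully applied between the split points, with a
    linear feather in between (the split sliders)."""
    lum = lum.astype(np.float32)
    if lo_split > lo:  # feather up out of the shadows
        w = np.clip((lum - lo) / (lo_split - lo), 0, 1)
    else:
        w = np.where(lum < lo, 0.0, 1.0)
    if hi > hi_split:  # feather down into the highlights
        w = np.minimum(w, np.clip((hi - lum) / (hi - hi_split), 0, 1))
    else:
        w = np.where(lum > hi, 0.0, w)
    return w  # 0 = adjustment hidden, 1 = adjustment fully applied

# Example: softly target highlights (hidden below 128, full above 200).
weights = blendif_weight(np.arange(256), lo=128, lo_split=200)
```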

 

BlendIf offers a couple of substantial benefits over luminosity masks:

  • It adds nothing to your file size. Zero. By comparison, a single luminosity mask increases the file size by roughly 1/3rd the size of the original image (because a luminosity mask is essentially a grayscale copy of the image). Using BlendIfs where you can reduces disk space, helps avoid the 4GB TIF file size limit, and lets you save images much faster (because they are smaller).
  • It creates a dynamic mask. If you ever make significant changes to your underlying layers (such as cloning out dust spots or some other distraction), you will likely need to update or replace your luminosity mask as well. BlendIf is constantly updated, which can save you a lot of work when you update underlying layers.

Ultimately, luminosity masks offer much greater control than BlendIf and should be used for exposure blending, advanced dodging & burning, etc – but BlendIf is a great choice for simple targeting of shadows, midtones, and highlights.

 

While similar in concept, BlendIf also has major advantages over the “range masks” available in LR / ACR:

  • You can use it with any adjustment or content layer in PS.
  • You can use it to target any of the adjustments in ACR, whereas range masks cannot control camera calibration, curves, detailed sharpening and noise reduction, color grading, HSL, etc.

 

Lumenzia includes several tools to help create, visualize, and refine BlendIf. A general workflow may include the following steps:

  1. Click the top-left mode button a few times to put the panel into “If:under” mode and then click any of the preview buttons (such as L2, D, zone b, or the zone pickers) to create a BlendIf.
    • Or <shift>-click any of the preview buttons to create a BlendIf without needing to change modes in the panel.
    • If your goal is to protect a range of tones, click “Not” first or hold <alt/option> when creating the BlendIf. For example, to apply a vignette which protects the shadows you might use “Not D2” to affect everything which isn’t D2.
    • If you want to target by color, you may use the color swatches at top while using the “If:under” mode or hold <shift><cmd/ctrl> while clicking to create your BlendIf (the ctrl/cmd key will offer a choice of targets). Remember that BlendIf is targeting channels, not actual colors. So light values in the red channel include red, white, purple, and yellow. You could then additionally target dark greens to eliminate light white and yellow values.
  2. Drag the sliders as desired to refine specific values (such as for L2.5).
    • If you have a layer mask and it is selected, the slider will be white and affect feathering on the mask. To use the BlendIf slider, click on the layer thumbnail instead of the mask (the slider will appear blue when adjusting BlendIf).
  3. If you need to visualize the BlendIf, click the red “If” button at the bottom of the panel. Click “If” again when done to clear it.
    • If you’ve also added a layer mask, this visualization will show the combined result.
    • <shift>-click “If” to choose a different color for visualization if you don’t like the default green.
    • You can also convert the BlendIf to a layer mask by <ctrl/cmd>-clicking “Mask”. If you’re using this for learning or visualization, you can then undo to get back to the BlendIf when you’re done reviewing it.

Create clean star trails

Star trails can be a powerful way to complement and accentuate a subject. In this image, they not only draw your attention toward a moonlit focal point, they also echo the shape of the surrounding arches to help tie everything together. You can capture such a scene with a single long exposure, but you can often get a better image or simplify processing by taking a series of short exposures to blend together.

There are several benefits to shooting star trails as a stack of short exposures rather than a single long exposure, including:

  • You won’t lose hours of work if something goes wrong – such as someone shining a headlamp into your image, lens flare if the moon moves into a problematic position, the camera getting bumped, batteries dying, etc.
  • Less hot pixel noise. Unless you’re shooting on a very cold night, you’ll see more and more hot pixel problems with longer exposures.
  • Minimize potential problems with moving trees in the wind.
  • Keep the option to process a single sharp image (such a beautiful meteor passing through a frame) or to create a time-lapse video.
  • You get to choose the exact arc of rotation you wish to see in your final image without complex planning. Just use as many images as you need until you have the rotation you want in the image.
  • It can potentially make it easier to clone out complications (meteors, satellites, planes, hot pixels, etc) from a short exposure frame instead of a full star trail image.
  • Avoid the complication of trying to calculate a safe exposure to use for an hour. It would be very tricky to determine an exposure which shows the stars, keeps the right balance of blue sky, and does not blow out the foreground in the bright (and constantly changing) moonlight.

 

Camera workflow:

  • Capture a foreground image. This might be taken at the blue hour, using light painting, or just a very long exposure with ambient light (potentially moonlight). The goal is to have something which is perfectly aligned with the star images (use a tripod) and ultimately has low noise (use ISO <1600).
  • Capture a series of star images. The resulting length of star trails depends on how long you shoot the sequence, how wide your lens is, and where you point it in the sky (there is no apparent movement at Polaris or the Southern Cross, and increasing movement the farther you get from them).
  • If you want flexibility to create a time-lapse or to process a single frame (perhaps if a great meteor runs through one image), you should shoot the night sky as you normally would. If you don’t need that, then you have the option to reduce noise by using a longer exposure and lower ISO, such as ISO 800 for 30 seconds each frame. Any noise you capture in such a stack may ultimately make the dark parts of the sky brighter in the final image (as we’ll be combining the brightest images), so reducing noise can help the final result.
  • If you have an exposure delay setting, turn it off. Adding a delay between frames will not improve sharpness for such a long exposure, but it will likely cause minor gaps between the frames that make the star trails less smooth.
  • Turn off long exposure noise reduction; it will similarly add a delay between frames and cause gaps in the star trails. If you need to capture dark frames, do so before or after shooting the stars.
  • If shooting in very cold conditions where you may get condensation on the lens, use a rubber band to hold a hand warmer on the bottom of the lens.

 

Lightroom / RAW workflow:

  • Select the first image to process.
  • Apply somewhat strong noise reduction. This gives us two benefits. First, it keeps noise from brightening the background, which would otherwise happen when we combine the brightest version of each pixel. Second, it suppresses minor stars, whose trails would make the final result messy and complicated.
  • Turn off sharpening, as this will reduce star edges and therefore make star trail gaps worse in the stacked image.
  • Consider adding some clarity to help enlarge stars to minimize star trail gaps in the blended image.
  • If you need to darken the ambient sky, a little bit of “dehaze” may help.
  • Do not apply profile corrections before stacking, as this may cause artifacts in the trails.
  • Select all the images, sync your edits to apply the exact same processing to all of them, then right-click and choose Edit / Open as Layers in Photoshop.

 

Photoshop workflow:

  1. Open all the sky images as a stack of layers in PS. Set all but the bottom layer to “lighten” blend mode (see the sketch after this list for what “lighten” stacking does).
  2. Put the star layers into groups of about 10, so that you can quickly toggle them to narrow your search when trying to find which layer may contain a meteor, plane, or other problem you need to clone out. You may then put these groups into another group to collapse them all.
  3. If you need to show all layers again, <cmd/ctrl>-click one of the groups to expand/collapse all. Then click on the visibility icon of the top layer and then (without releasing the mouse button) drag down so that the mouse goes over the visibility icon of all the layers. When you reach the end and release the mouse button, all layers should show (or hide if you were hiding the first icon you clicked on). Alternatively, select all layers and then go to Layers / Hide All Layers.
  4. For dramatically smaller files and faster performance: select all your layers, right-click and merge to a single layer. You may choose to save your layers so you can edit further later, but it’s a significant tradeoff. If you don’t merge, put them all in a Smart Object or create a stamp visible layer to use for the next steps.
  5. If you need to fill gaps in the star trails, try adding a Gaussian blur with radius 0.5-1.0.
  6. Now put your sky layer(s) and foreground layer into the document and add a layer mask to combine them.

 

The YouTube tutorial above was getting a bit long, so I didn’t show all the final post-processing I did. After I finished the video, I did the following to create the final image:

  • Used ACR adjustments, Nik, and curves with dark luminosity masks from Lumenzia to help extract shadow detail.
  • Used a darkening brightness/contrast layer in the top left to reduce some flare on the rocks from moonlight.
  • Added a vignette with a “Not D1” BlendIf to protect the shadows.

 

StarStax workflow:

Instead of using Photoshop to combine the stars, you may consider using StarStax. This software is free and designed specifically for stacking stars. A key benefit of this software for star trails is that it tries to fill the “gaps” between frames which can otherwise give the edges of trails a jagged look.

The workflow is pretty simple:

  • Select your images in Finder / File Explorer and drag and drop them into the app.
  • Click the gear icon at top-right of StarStax and set the blending mode to “gap filling”.
  • Click the 4th icon towards the top left to “start processing”.
  • When the processing is done, check “show threshold overlay” and increase the threshold to the highest value that keeps green on the stars (the goal is to minimize green on the non-star pixels which you don’t want filled), then set the amount to a middle value where the results look best. When in doubt, just use a lower threshold to get the stars (since we’ll blend the foreground from another image, there won’t be issues there).
  • Click File / Save As. The file format defaults to JPG and is determined by the file name rather than a format menu, which isn’t what you’d normally expect. Be sure to change the “jpg” extension to “tif” in order to save as a TIF file.
  • Use Photoshop to combine this sky layer with your foreground layer.

I personally tend to use Photoshop over StarStax as neither truly eliminates the gaps and I can manage everything in one place. StarStax also appears limited to 8-bit output for my use case, as far as I understand currently (see the list of tips below). I’m no expert on it, so please comment below if you know a solution or if a future update of StarStax addresses these concerns. That said, I think it’s a great option to consider, as results will vary for different source images.

Additional tips for working with StarStax:

  • If you get an error “Cannot display image / Processing stopped!”, try using a somewhat lower resolution or 8-bit image exports. It seems like this error shows up after hitting a limit for the total amount of pixel data you send it. I’ve found that exporting with LR set to use a long dimension of 7090 pixels works great, but you might need something smaller or get away with something larger. 8-bit exports may be the best solution as StarStax seems to only export 8-bit anyhow, so you might as well keep the original resolution. But try before you do this, as I wouldn’t be surprised if a future update addresses this.
  • If you exported in lower resolution to avoid a “cannot display image” warning, you can resize your sky layer to the exact size of your foreground for perfect alignment and great results. Alternatively, you can simply export your foreground using the same resolution as the sky.
  • When adjusting the sliders, click and drag them. If you simply click, the overlay and gap filling amounts do not update the preview.

Recover Noisy Shadow Detail

When you need high ISO to capture indoor or night scenes like this, your image will suffer from noise and a loss of detail. In this tutorial, you’ll learn how to clean it up with an incredible tool and how to make the most of it. Be sure to read the full tutorial below, as I go into greater detail than I cover in the video.

 

There are generally two approaches you can use for reducing noise. You can reduce it right away in the RAW or subsequently on the processed image (but before resizing, adding sharpening, or making other changes that de-noising software is not designed to anticipate). Between these approaches, I have a strong preference for removing the noise in the RAW. This is not only a much more flexible and non-destructive workflow but often leads to better results. There are a number of complex interactions that can make reducing noise later a problem. Even just increasing the shadow slider in RAW before separate application of noise reduction (outside the RAW) can create inferior results. However, there are always going to be times when you forgot to reduce noise or didn’t reduce it enough and want to reduce noise without completely redoing your edit, so it’s still very useful to be able to apply noise reduction later. When I do that, I strongly prefer to do so as a Smart Filter on a Smart Object, so as to work non-destructively. With those two workflows and various goals in mind, there are several noise reduction tools you might consider.

 

Adobe Lightroom (LR) & Adobe Camera RAW (ACR)

LR and ACR offer the same controls and the exact same results when working with RAW data. If you apply ACR via Filter / Camera RAW Filter (rather than inside a RAW Smart Object), your results will be different and may be inferior. However, the filter approach means you can use the same tool for either workflow. The results are generally very good and this is my tool of choice for a large percentage of my images given simplicity, flexibility, and good results. If the image was shot at ISO 400 or lower, this is nearly always the approach I use. See this tutorial for more details on how to get the most out of these tools. However, for critical images or challenging noise, I also use other tools to help get optimal results.

 

DXO PureRAW 2 (with “DeepPRIME”)

DXO PureRAW 2 (referred to as DXO below) is the only tool I know of which works directly on the RAW data and outputs a true RAW file. You simply feed an image or batch of images to it, choose from a few simple settings, and it creates a new DNG file which you can edit like any other in LR/ACR or your editor of choice. This new DNG file retains all of the flexibility of your RAW data, but is enhanced to remove noise, improve detail, and can correct lens distortion. The workflow is extremely simple and the results are often better than what I get with LR/ACR.

I’ve found that its DeepPRIME algorithm can do an amazing job with high ISO dark shadow detail as shown in the video above. It can also extract much more star detail, though I find the results can appear to have artifacts (tails on the stars) and it arguably does too good a job, as I prefer leaving lesser stars less prominent. So I like using this result on the foreground and may use the original sky or some blend of the original and DXO sky.

I’ve also found it does great with skin tones shot at ISO 400-1600 (I haven’t tested such images above that), making it a great tool for cleaning up images shot of indoor events. I’ve been very impressed with the results it creates on a range of subjects. It’s important to be aware that it can create some artifacts or shifts in color (which you may find better or worse, but typically easy to manage).

DXO now supports a choice of workflows. You can always open your original image in the standalone app. If you’re using v2+, you can use their LR plugin or right-click the files in your file browser to make it even easier. If you’re using v1, I’ve found the results similar, but the convenience of the v2 workflow is worth the upgrade. I find the LR approach is the simplest, as you can easily transfer existing RAW processing or continue after using DXO. The software will preserve any slider values embedded in the image but ignores anything in a sidecar XMP file. Hopefully a future update will address this so that the output matches your source in LR every time. But if you use sidecar files, you should expect to copy and paste your settings if you started editing before using DXO.

If you’ve read my tutorials on other AI software, you know that I’m leery of artifacts or isolated problems from AI. They are almost never perfect, but frequently very helpful when you add a few simple steps to your workflow. Simply using layer masks or opacity to blend the AI results into your original will often yield better or faster results. To do this, you’ll need proper alignment of the original and DXO files, which means you should disable the lens correction in DXO. It does an excellent job, but the ability to mix with the original to fix any artifacts or color issues is more important.

Given all of these considerations, I find the following workflow ideal for DXO PureRAW:

  1. In LR: select the file(s) to convert and go to File / Plugin Extras / Process with DXO PureRAW. You can even select multiple images from different folders (use ctrl/cmd and shift to make multiple folders visible and then the same keys to select multiple images).
  2. Use these settings: DeepPRIME (this is typically the best algorithm and runs fairly quickly); under optical corrections, turn OFF “lens distortion correction” (so that you can blend in the original later); you may try “global lens sharpening” if offered, but I generally leave it off; and set output to DNG in a DXO sub-folder. I have found that “global lens sharpening” can improve some chromatic aberration, so it can have surprising effects either way, and I recommend testing if you see edges you think may benefit from more/less sharpening or less aberration.
  3. If you started processing the image before conversion, embedded settings will be transferred but those from a sidecar will not. It’s simplest to do step #1 before adjusting anything, but you can simply copy and paste the RAW settings. DXO will never transfer noise reduction, sharpening, or lens correction settings.
    1. You should definitely copy or sync any lens correction settings so that the exact same LR corrections are applied to both versions for proper alignment later when blending the original and DXO RAW.
    2. You may copy sharpening settings, but I would review carefully as the optimal is probably not the same for both.
    3. DXO tries to take care of color issues for you and you don’t generally need to set the color noise reduction. But there are times when it will be very helpful for issues which resemble chromatic aberration. You can also run into some very odd niche issues, such as the significantly altered mask results as shown in the video above.
  4. Open both the original and DXO RAW files as RAW Smart Objects in Photoshop to merge the best of both.
    1. If the DXO shows unwanted color: In most cases, you can simply tweak the white balance (including the tint slider under camera calibration) for the DXO layer to achieve similar color. Otherwise, put the DXO layer above the original in PS and set the blend mode to “luminosity”. This will give the noise reduction benefit without the unwanted color shift.
    2. If you used the luminosity blend mode but do want the color in some places, you can duplicate the DXO layer, set it to “color” blend mode and use a layer mask where needed.
    3. Finish by putting a layer mask on your DXO layer (or group if using two copies of it) to combine the best of the DXO and original layers.

One minor note: you may read that DXO outputs a “linear” DNG. There is some confusion on the topic and you might think this means you may get improved highlight recovery. That is not the case. What is blown out in the original will remain blown out in the DXO layer. This is just a high quality file format that you can use like any other RAW with improvements in noise and detail based on what was captured in the original.

 

DXO Photo Lab

Photo Lab is DXO’s full-featured RAW processor and (at least as of v5) also includes DeepPRIME. Just like PureRAW, you can process and export a RAW file that you can use in other editors. It offers a much greater range of options in general, with DeepPRIME specifically offering luminance and dead pixel sliders for more control over those features. Other corrections like vignetting are split out, and lens sharpness offers more detailed controls. If you want the ultimate control or wish to do everything in one place, this may be a better choice for you than LR + PureRAW. Just use export with the DNG option if you need to send the image back to LR. Personally, I don’t find the extra control here matters much, as I can generally achieve similar effects by blending layers in Photoshop, prefer to use LR as my primary RAW processing tool, and find the integrated workflow with PureRAW much simpler than exporting from Photo Lab to LR.

Some tips for working with the Photo Lab version of DeepPRIME:

  • The main window is a rather low-quality preview of denoising results, which may show a lot of artifacts and color noise which won’t be in the final processed image. You should only use the magnifier window in the controls to determine the best settings for denoising, lens sharpness, and chromatic aberration. I wish there was a way to get a larger but still accurate preview, but that does not seem to be an option currently. I’d happily wait the 20 seconds it takes PureRAW to generate the whole image for me to move around. It’s a challenging limitation to only be able to review such a tiny portion of the image at a time, which I feel limits my ability to make optimal choices.
  • Try setting the luminance slider around 40-75%.
  • Note that some sliders (such as dead pixels) are hidden and require you to click the + at the bottom of the tab, and the help for the denoising section lists all sliders (including many which only apply to the HQ or basic Prime algorithms), which is a little confusing at first.

 

Nik Dfine

Nik Dfine also comes from DXO, as part of the Nik Collection. It’s a great counterpart to PureRAW because it works as a Smart Filter on Smart Objects. It doesn’t offer the potentially eye-popping improvements of PureRAW, but it does a nice job tackling typical noise on your processed images. The interface is extremely easy to use. So if you’re using or interested in other Nik tools like Color Efex Pro, this is a great tool to consider using in addition to ACR. When you need more control for your processed images, I’d look to DeNoise or Neat Image.

 

Topaz DeNoise AI

Topaz DeNoise AI does offer a true RAW output; however, I find it alters the crop by a few pixels and alters the color/luminosity significantly (in a negative way). It also does not let you lighten dark shadows in the RAW, making it hard to choose optimal settings. Because of this, I find that the current design does not adequately support the RAW workflow. However, it’s still very useful for reducing noise on an existing image and can therefore be very helpful in situations DXO cannot manage (since DXO only works on the RAW). So overall, I use LR/ACR on simple images, DXO on more complex RAW files, and DeNoise or ACR as a filter to reduce noise after I’ve started processing. That said, it is still best to use this before applying clarity, texture, sharpening, etc, and that does limit its use for me a bit.

I recommend the following workflow for DeNoise:

  1. Convert your target layer(s) to a Smart Object so you can work non-destructively.
  2. Go to Filter / Topaz / Topaz DeNoise AI
  3. Try clicking “compare” to evaluate the difference between the different models (standard, clear, etc).
    1. Standard is a good general option, Clear seems to sharpen detail quite a bit, Low Light and Severe Noise are good options for high ISO, and I skip the RAW model (it’s meant for work on RAW images, which I’m not doing here).
  4. Set “remove noise” and “enhance sharpness” to the lowest values that get the job done. I tend to leave the sharpening off.
  5. If you see inconsistent sharpening (such as patches of smooth sky mixed with noisy sky), try increasing “recover original detail” until things look more consistent.
  6. I generally leave “color noise reduction” at the default 0.
  7. Apply and use the Smart Filter to apply the noise reduction locally where helpful/needed.

Note: As of v3.6.2, I am seeing DeNoise crash a fair bit when updating RAW Smart Objects. PS does not crash (you won’t lose your image); it’s just that the plugin may fail to update its noise reduction when you try to tweak it or update the Smart Object. Deleting the filter and recreating it is a workaround when this happens. I’ve reported the issue and I would expect Topaz to fix it relatively soon given their history of support/updates.

 

Neat Image

Neat Image holds a special place in my heart as the first program I used well over a decade ago to get great results. The interface is much more complicated and probably confusing to most users. However, it’s still a great option for some images. Since it is not based on AI, I find it less prone to artifacts and can be a good option for tricky high ISO images.

 

ON1 No Noise AI

While I own it and it is generally a fine program, I currently do not use No Noise AI. It will open RAW files but ignores any adjustments you’ve made in RAW. It will output a DNG, but apparently without true RAW data (LR/ACR treat it like a TIF). So it does not support the RAW workflow I like to use. It also does not work as a Smart Filter on a Smart Object, so it does not support the other workflow I recommend. At this time, I recommend DXO or Topaz. If I’ve missed something or the program is updated to address these limitations, please let me know and I’ll review again and update my findings.

 

Capture One

Capture One is a direct competitor to LR/ACR, and many people believe it is superior for their work. I am personally of the opinion that it is better in some ways and worse in others, and ultimately not really better or worse across a large number of images. The ability to edit the RAW conversion right inside Photoshop gives LR/ACR an enormous advantage and is the reason I prefer it over Capture One. I find that this flexibility allows me to do my best work, which ultimately gives me a better image with less hassle. For the sake of discussion here, I recommend using either Capture One or LR/ACR and then considering one of the other specialized tools for important or tricky noise reduction jobs.

 

Starry Landscape Stacker

Starry Landscape Stacker uses a completely different workflow specific to reducing noise in starry night sky images like the Milky Way. Instead of improving your RAW or processed image, it lets you combine multiple images to reduce noise. This is often like shooting at ISO 400 or 800 instead of 6400. It rotates the sky image to align multiple exposures, which is the digital equivalent of using a star tracker. You can also combine the use of this technique with the others above to push the results further.
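
The noise-reduction math behind this kind of stacking is simple: averaging N aligned frames reduces random noise by roughly the square root of N, so 16 frames cut noise about 4x. A toy sketch of the principle (alignment omitted – handling the sky’s rotation is exactly what Starry Landscape Stacker does for you):

```python
import numpy as np

rng = np.random.default_rng(0)
true_sky = np.full((100, 100), 0.2)  # a toy "clean" sky patch

# Simulate 16 noisy exposures of the same (already aligned) sky.
frames = [true_sky + rng.normal(0, 0.05, true_sky.shape) for _ in range(16)]

single = frames[0]
stacked = np.mean(frames, axis=0)

print(np.std(single - true_sky))   # ~0.05
print(np.std(stacked - true_sky))  # ~0.0125, i.e. 0.05 / sqrt(16)
```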

 

Conclusion

My overall use of these tools looks like this:

  • Images <ISO 400, use LR / ACR unless making a massive enlargement.
  • Images ISO 800+, use PureRAW 2.
  • On images I’ve already processed, use ACR, Dfine, DeNoise in roughly that order based on how challenging the job is. I try to avoid AI tools when simpler tools do the job because they fail in more predictable ways (less need to look over every little detail in the image to check for issues). For complex jobs where DeNoise shows artifacts, I would also try Neat Image.
  • For Milky Way shots where I can shoot a sequence of 10+ images, use Starry Landscape Stacker, typically in combination with noise reduction in LR or PureRAW 2 first. Then I may do additional noise reduction targeted through a luminosity mask to help keep sharp stars.

 

[Disclosure:  This post contains affiliate links.  I have purchased all the software referenced above and only endorse tools I personally use and recommend. If you purchase through these links, a small percentage of the sale will be used to help fund the content on this site, but the price you pay remains the same.  Please see my ethics statement if you have any questions.]
