Lumenzia v11 is now available as a free update for all customers (see update page/video to download). The main theme for v11 is comprehensive support for 32-bit (“HDR”) workflows, which allow you to create absolutely stunning images for monitors that support at least 400 nits of peak brightness. Note that this name might be confusing because we’ve used the term “HDR” for years (for tone mapping on monitors with limited dynamic range), but now that same name is being used for a completely different display technology that allows you to truly display higher dynamic range.
Nearly every major feature in Lumenzia has been optimized for 32-bit support in v11. It’s designed to be as transparent as possible (while adding alternatives and extra support where needed due to differences in 32-bit mode in Photoshop), so the interface is nearly identical to prior versions. See the release notes for more details, and be sure to read the written manual’s section on 32-bit workflows (the manuals can be found via the flyout menu at the top-right of Lumenzia v11).
32-bit workflows offer:
Vastly greater dynamic range.
Boosted brightness without losing saturation for gorgeous sunrises and sunsets.
Intricate details in the highlights for white clouds and flowing rivers.
Glowing city lights.
And you can use these techniques to enhance any image (no RAW required, even an 8-bit JPG can be improved).
In the following tutorial, you’ll learn:
How 32-bit differs from 8 and 16-bit editing (see the quick sketch after this list)
How to improve any image by converting to 32-bit HDR and using Lumenzia v11
A workaround to edit in 32-bit even on monitors that can’t display HDR
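To make that first point concrete before diving in, here is a minimal sketch (plain Python with numpy, not Photoshop code, and simplified relative to Photoshop’s actual internal encodings) of why 32-bit matters: integer 8 and 16-bit modes clip everything above “white,” while 32-bit floating point keeps values above 1.0, and that retained headroom is exactly what HDR displays and later tone mapping can use.

```python
import numpy as np

# Hypothetical scene values relative to "white" = 1.0.
# The brightest highlights are several stops above white.
scene = np.array([0.05, 0.5, 1.0, 4.0, 16.0])  # 16.0 is 4 stops above white

# Integer 8-bit and 16-bit modes clip everything above 1.0 to pure white.
img_8bit = np.clip(np.round(scene * 255), 0, 255).astype(np.uint8)
img_16bit = np.clip(np.round(scene * 65535), 0, 65535).astype(np.uint16)

# 32-bit floating point keeps the values above 1.0 intact, so the extra
# highlight range is still there for HDR display or later tone mapping.
img_32bit = scene.astype(np.float32)

print(img_8bit)   # [ 13 128 255 255 255] -> all bright highlights crushed to the same white
print(img_32bit)  # [ 0.05  0.5  1.  4.  16. ] -> highlight detail preserved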
Adobe Camera RAW (ACR) v15 just added one of the most exciting features since the creation of RAW processing itself: High Dynamic Range Output (“HDR” or “HDRO”). This name might be confusing because we’ve used the term “HDR” for years, but now that same name is being used for a completely different display technology. This one involves hardware: it requires a monitor capable of a peak brightness of at least 400 nits. And when you see it, it is nothing short of spectacular. It’s the most significant improvement in image display I’ve seen in decades.
HDR editing and display results in:
Vastly greater dynamic range. Pixels are up to 4 stops brighter than anything before.
Glowing city lights.
Boosted brightness without losing saturation for gorgeous sunrises and sunsets.
Intricate details in the highlights for white clouds and flowing rivers.
See the video below for details on how to enable and use HDR in ACR 15.
Note: I’ve created a comprehensive set of resources at gregbenzphotography.com/hdr (including sample images, HDR screen tests, a free e-book, and much more).
I’m happy to announce the launch of my newest course: Focus Stacking: From Start to Finish.
This course includes:
Detailed instruction on how to shoot and process focus-stacked images for vastly greater depth of field and unique compositions that go beyond the normal limits of your lens.
A complete workflow including RAW processing, focus stacking, exposure blending, dodging & burning, and other finishing effects.
Workflows to combine focus stacking with exposure blending techniques.
High-resolution source files so that you can follow along.
PDF summaries of all the videos with time-codes to make it easy to follow along, no matter what style of learning you prefer.
I rarely endorse other products and only do so when I think you would thoroughly enjoy them. When you purchase through my link, you will receive my Focus Stacking: From Start to Finish course, support me with an affiliate commission at no cost to you, and help fund some great charities. I’ve got more information on the 5DayDeal coming this week. If you’d rather not hear any more about it, you can click below to opt out of these emails this week (while remaining on my email list for future tutorials and such).
Exponential growth is nearly impossible to comprehend. Imagine a lake that starts with a single lily pad, but every day the number doubles until the entire lake is covered with lily pads on day 30… How much of the lake’s surface would be covered after 25 days? Only about 3% (with five doublings still to come, coverage on day 25 is 1/2^5 of the full lake, or roughly 3%). The lake would be almost entirely clear, and then just 5 days later you wouldn’t see any water at all. That’s exponential growth. Nothing seems to change and then suddenly everything does.
Artificial intelligence (AI) is on an exponential growth curve. And just like those lily pads, it’s hard to comprehend how quickly it can change. One day AI is the butt of jokes for creating people with two heads, and then suddenly it’s so good you can’t tell if you’re looking at AI art or a real photograph. Where are we on that growth curve? I’m not quite sure, but an AI-generated image just took first place at the Colorado State Fair. Things are improving quickly these days and I think it pays to have at least some basic understanding of AI and what it might mean for your art – even if you don’t care about it yet.
There are three interesting new AI platforms which have recently launched and allow you to simply type some words and have an AI generate an image for you: Stable Diffusion, MidJourney, and DALL-E. Each of them has its own merits (which I’ll discuss further below), but I’m going to focus this tutorial on Stable Diffusion because you can use it right inside Photoshop.
To set up Stable Diffusion for Photoshop:
Stable Diffusion is open-source software released by Stability AI Ltd. They also created DreamStudio to provide a web interface and API key for Stable Diffusion. I don’t understand why they didn’t name the web interface the same as the underlying software – you just need to know that DreamStudio gives you access to Stable Diffusion.
Sign up for DreamStudio. You can use it for free on their website, but I think it’s worth starting with $10 to explore in depth and get an API key for the PS plugin.
Go to your member page on DreamStudio and click on the API Key tab, copy your API key, and paste it into the PS plugin.
You can always check your balance and add funds as needed on your member page (the PS plugin will give you a warning if you need to add funds).
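In case you’re wondering what that API key actually is: it’s the credential the plugin uses to send your requests to Stability’s servers, and the same key can be used programmatically. Below is a minimal, hedged sketch using Stability’s stability-sdk Python package (pip install stability-sdk); the class and argument names follow the SDK’s published examples at the time of writing, so verify against the current docs. None of this is required to use the PS plugin.

```python
import os

# Hedged sketch: the DreamStudio API key from your member page also works with
# Stability's Python SDK. Names below follow the SDK's published examples;
# verify against the current documentation before relying on them.
from stability_sdk import client

stability_api = client.StabilityInference(
    key=os.environ["STABILITY_KEY"],  # your DreamStudio API key, stored in an environment variable
    verbose=True,                     # print status messages while generating
)
```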
How to create images with the plugin:
There are settings which impact the image content in significant ways, settings which affect the quality and cost, and some which may impact both. The best strategy for quickly getting to good results is to use low-quality options for speed while refining your prompt and settings, and then increase the quality to create the final results.
You will get unique results if you change any of the following: prompt, prompt strength, width, height, seed (using no seed will use a random seed), and any input image you provide.
You will get similar or identical results when changing the following: steps, number of images, and (in many cases) the sampler.
To start exploring an idea:
Fix your width and height if you need a specific output size or ratio. Changing image dimensions will change the results, so don’t bother exploring at a lower resolution first; lock in dimensions if you know what you need (such as 1024 wide x 576 tall for a 16:9 ratio). However, some aspect ratios work better for some images due to AI bias, so don’t be afraid to play if you’re open to different aspect ratios for the final image.
Extremely low or high prompt strengths seem to produce poor results. Try staying between 5 and 10.
In the advanced options, set steps to 20. This will improve speed and cost while iterating, without causing significant changes when you increase it later for quality.
Leave the sampler at the default “k_lms”. This seems to generate the best results most of the time, and you could burn a lot of time and money iterating this setting looking for small differences.
Set number of images to 2-8. This will give you a good sample of different results under the current prompt.
Click “Dream” to generate images.
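For anyone curious how these exploration settings map to the underlying API, here is a hedged sketch of the equivalent request through Stability’s stability-sdk Python package (the plugin makes this kind of request for you). The parameter names (cfg_scale for prompt strength, samples for number of images, SAMPLER_K_LMS, and so on) follow the SDK’s published examples, so treat it as an illustration rather than a reference. The refinement workflow described below is just the same call with the seed set to the value you liked, samples reduced to 1, and steps raised.

```python
import io
import os

from PIL import Image
from stability_sdk import client
import stability_sdk.interfaces.gooseai.generation.generation_pb2 as generation

# Hedged sketch of the "exploration" settings above, expressed through the
# stability-sdk package (names follow its published examples).
stability_api = client.StabilityInference(key=os.environ["STABILITY_KEY"], verbose=True)

answers = stability_api.generate(
    prompt="mossy waterfall in a pine forest at sunrise, soft fog",  # example prompt
    width=1024,                        # lock in dimensions if you know the final ratio
    height=576,                        # 1024 x 576 = 16:9
    cfg_scale=7.0,                     # "prompt strength": stay roughly between 5 and 10
    steps=20,                          # low step count is faster and cheaper while iterating
    samples=4,                         # 2-8 images gives a good sample of the prompt
    sampler=generation.SAMPLER_K_LMS,  # the default sampler; usually no need to change it
    # no seed specified, so a random seed is used (just like leaving the field blank)
)

# Save each result and keep its seed so you can lock it in later for refinement.
for resp in answers:
    for artifact in resp.artifacts:
        if artifact.type == generation.ARTIFACT_IMAGE:
            Image.open(io.BytesIO(artifact.binary)).save(f"explore_{artifact.seed}.png")
```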
The thumbnails in the plugin can be hard to evaluate. I like to work with a 1024×1024 image open, so that I can click the “layer” link under any of the thumbnails to see a much larger version. If you are using a source image, be sure your original source is visible before clicking “Dream” again, or you’ll be creating a derivative of Stable Diffusion’s output instead of your source. This can produce interesting results, but probably isn’t what you want to do.
Once you’ve found a version you like and want to finalize your work, use the following to refine and narrow down the image:
Click “seed” by that image to copy it to the seed field above and lock in on that image.
Set number of images to 1 (so you don’t pay for images you don’t need).
Increase the steps to 50-100. I don’t generally see much improvement beyond 50 and the cost increases for larger values.
If the final result changes in unexpected ways, review any changes you made. Increasing steps from a very low value can result in big changes. Otherwise, differences probably come from some other unintended change (or a failure to set the seed).
Because the output size is limited to low resolutions, upscaling can be extremely helpful. I recommend Topaz Gigapixel (and you can get it for 15% off with discount code gbenz15) for best results. Alternatively, PS’s Image Size command works well with the “preserve details” method (v1 not v2). Be sure to rasterize your layer first (smart objects are not supported) and try the “low resolution” model if using Gigapixel.
How to use a source image:
You can provide your own source image either to refine it, or to help guide your prompt. Use the following workflow:
Check “Use Document Image”. This tells the plugin to work from the current image as you see it at the moment you click the “Dream” button.
Try varying the image strength between 25 and 50. I generally like around 25-35 for using the image as general inspiration. Values around 50 are much more literal.
Note that the quality of the source image matters, and I recommend using something with at least as much resolution as your intended output. It does not have to match the output aspect ratio (it will effectively use a cropped version of the source, and you may wish to crop the image yourself to better control which portion of the source is used).
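If you’d like to see the source-image idea in the same sketch form, the SDK’s image-to-image example uses an init_image parameter plus a start_schedule value that controls how far the AI may stray from it. Mapping start_schedule to the plugin’s “image strength” slider is my assumption (they appear to be roughly inverse, with a lower start_schedule staying closer to the source), so check the current SDK docs before trusting the numbers.

```python
import io
import os

from PIL import Image
from stability_sdk import client
import stability_sdk.interfaces.gooseai.generation.generation_pb2 as generation

stability_api = client.StabilityInference(key=os.environ["STABILITY_KEY"], verbose=True)

# "source.jpg" is a stand-in for whatever document you have open when you
# check "Use Document Image" in the plugin.
source = Image.open("source.jpg")

answers = stability_api.generate(
    prompt="dramatic storm light over the same canyon",  # example prompt
    init_image=source,     # the source image guiding the result (per the SDK's image-to-image example)
    start_schedule=0.7,    # assumption: lower values stay closer to the source image,
                           # roughly the inverse of the plugin's "image strength" slider
    cfg_scale=7.0,
    steps=20,
    samples=2,
)

for resp in answers:
    for artifact in resp.artifacts:
        if artifact.type == generation.ARTIFACT_IMAGE:
            Image.open(io.BytesIO(artifact.binary)).save(f"img2img_{artifact.seed}.png")
```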
Other tips for working with Stable Diffusion:
As with any text-to-image AI, your prompt matters significantly.
Many people add references to famous artists as a quick shortcut to achieve specific looks easily, but I recommend avoiding this creative trap. Imitating others limits your potential in the long run. Try spending more time experimenting with different language and details in your prompts.
Some prompts just seem to get ignored. Try requesting an image with two different people and you’ll probably just see the first person in your prompt. Try asking for a single car and you may still see several. It’s not perfect, just keep experimenting.
Stable Diffusion was trained on 512 x 512 images. This sometimes seems to provoke some strange results with larger output sizes for portraits. You may find better results limiting the output to smaller sizes. I expect these sorts of quirks will go away as the project progresses and the AI is retrained with larger, more detailed images.
When you supply a reference image, larger output sizes can be used much more reliably.
Some portraits seem to show as blurry. This may be a bug, or some mechanism meant to obscure results with potential copyright issues. As with any bad version, just try again.
Try seeing what prompts are working for others. Lexica is a helpful site to see a wide range of examples. A few more examples: here.
How does Stable Diffusion compare to other options?
I haven’t personally tried DALL-E because I find the degree of personal data they require for sign up intrusive and unnecessary. However, the images I’ve seen from others show excellent results with people. I get the sense that it’s well ahead of the others in this category. Many people rave about it.
Comparing Stable Diffusion (SD) and MidJourney (MJ):
It’s very easy and useful to use a source image with SD. You can specify an image for MJ via URL, but that’s cumbersome because you need to upload images and generate links to use them.
I generally find MidJourney does a better job interpreting prompts. If you have a very specific idea in mind, I’d recommend MJ.
The SD plugin is very handy and simplifies the learning curve by removing the need to specify options with strange text prompts like “--quality 4” or “--ar 16:9”.
The SD plugin doesn’t lend itself well currently to working on several ideas simultaneously. With MJ’s Discord interface, you can work on numerous ideas at the same time. However, it gets messy and potentially confusing as everything shows up in one long thread.
MidJourney offers higher resolution on paper, but I find that it often has small artifacts, and using the beta upsizing to avoid them ultimately generates results which I believe are comparable to what you can upsize from Stable Diffusion.
Ultimately, each of these platforms currently suffers from low resolution, artifacts, and other limitations. You might love or hate them right now. What I find most interesting about them is how quickly they’ve gotten to a point where some people take them very seriously. Just like the lily pads, things are going to change very quickly in the coming years. What feels like a joke now will be replaced with something truly amazing in a few years.
I previously posted a tutorial on the problem of false banding in Photoshop. This can cause your image to look severely degraded when zoomed out (to less than 64%). Typically, this false “banding” would show up as uneven changes across the sky in a photograph. It’s not real, just a quirk of historical performance optimizations in Photoshop. When you view a layered image in Photoshop, you’re typically just viewing a preview of what the flattened image would look like. There’s a very good reason for this. To continuously do all the calculations for layers, blend modes, layer masks, BlendIf, opacity, etc. on every pixel would cause very slow performance and reduce battery life for laptops. The engineers at Adobe have devised all sorts of tricks to help make this preview look nearly identical to what the flattened image would be. This preview is so good in fact that when we see issues like false banding, we assume the problem must be real. But in this case the problem is that you’re viewing an 8-bit preview instead of your 16-bit image.
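If you want to see why an 8-bit preview can invent banding that isn’t in your 16-bit data, here is a tiny sketch (plain Python with numpy, nothing to do with Photoshop’s actual cache code): a smooth gradient that uses thousands of distinct 16-bit levels collapses to only about a dozen levels when squeezed into 8 bits, and those discrete steps are the bands you see.

```python
import numpy as np

# A smooth, dark sky gradient covering a narrow brightness range (10% to 14%),
# represented with 16-bit precision.
gradient_16bit = np.round(np.linspace(0.10, 0.14, 4096) * 65535)

# The same gradient squeezed into 8 bits, as an 8-bit preview cache would store it.
gradient_8bit = np.round(np.linspace(0.10, 0.14, 4096) * 255)

print(len(np.unique(gradient_16bit)))  # thousands of distinct levels -> looks smooth
print(len(np.unique(gradient_8bit)))   # only about a dozen levels -> visible bands
```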
Now that computer performance has improved so much over the years, Adobe has released a solution that eliminates the issue with a more accurate 16-bit preview. The underlying issue is that certain levels of cache in Photoshop (those affecting the view when you are not zoomed in close) were generated using 8-bit cache data. This tends to become a problem when using adjustment layers with significant adjustments on gradients (such as the sky). With Photoshop v23.5, we can now tell Photoshop to use 16-bit previews all the time, which eliminates this false banding. Just go to PS prefs / Technology Previews, check “Precise Previews for 16-bit documents”, and restart Photoshop. As a Tech Preview, this option will likely just become the default behavior in the future. So if you’re using a future version of PS (newer than v23.5) and don’t see the option, you’re already getting the benefit.
This change also has another great benefit: more accurate histograms. The histogram is based on the same data used for the preview (unless you click the warning triangle or circular arrow by the histogram to refresh it based on the current state of the document). With the old 8-bit previews, the histogram was frequently misleading and would often show spikes. But with the 16-bit tech preview enabled, your histogram should be very accurate and smooth. I don’t see any need to refresh the histogram anymore (unless you’re using statistics and want them to be exact rather than just very close to the exact value).
I have seen no tradeoffs with this tech preview enabled. Performance is excellent and I highly recommend you enable it.
If you’ve enabled this setting and still see banding or spiky histograms, it’s real (you can flatten the image or zoom to 100% to confirm). The most likely cause of this would be working with 8-bit data (such as stock images or if you’ve opened your image in 8-bit mode).