BlendIf is one of the most powerful tools in Photoshop. It allows you to quickly and easily make adjustments specific to highlights, shadows or midtones. It works similarly to “range masks” in LR/ACR, with the distinct advantage that you can use BlendIf on any adjustment you can make in Photoshop. You can also use it like a range mask to work with any of the tools in LR/ACR which are not available as local adjustments (HSL, vibrance, camera calibration, curves, advanced sharpening / noise reduction, etc).
But these are just a few ideas you can apply to your own work. You might use BlendIf to help lighten shadows, color grade, apply a Nik filter to highlights, target specific colors to enhance a sunset, increase mid-tone contrast, target color channels in the image, etc. There are endless scenarios where targeting your adjustment by tone (or color) using BlendIf can greatly improve your image.
BlendIf offers a couple of substantial benefits over luminosity masks:
It adds nothing to your file size. Zero. By comparison, a single luminosity mask increases the file size by roughly 1/3rd the size of the original image (because a luminosity mask is essentially a grayscale copy of the image). Using BlendIfs where you can reduces disk space, helps avoid the 4GB TIF file size limit, and lets you save images much faster (because they are smaller).
It creates a dynamic mask. If you ever make significant changes to your underlying layers (such as cloning out dust spots or some other distraction), you will likely need to update or replace your luminosity mask as well. BlendIf is constantly updated, which can save you a lot of work when you update underlying layers.
Ultimately, luminosity masks offer much greater control than BlendIf and should be used for exposure blending, advanced dodging & burning, etc – but BlendIf is a great choice for simple targeting of shadows, midtones, and highlights.
While similar in concept, BlendIf also has major advantages over the “range masks” available in LR / ACR:
You can use it with any adjustment or content layer in PS.
You can use it to target any of the adjustments in ACR, whereas range masks cannot control camera calibration, curves, detailed sharpening and noise reduction, color grading, HSL, etc.
Lumenzia includes several tools to help create, visualize, and refine BlendIf. A general workflow may include the following steps:
Click the top-left mode button a few times to put the panel into “If:under” mode and then click any of the preview buttons (such as L2, D, zone b, or the zone pickers) to create a BlendIf.
Or <shift>-click any of the preview buttons to create a BlendIf without needing to change modes in the panel.
If your goal is to protect a range of tones, click “Not” first or hold <alt/option> when creating the BlendIf. For example, to apply a vignette which protects the shadows, you might use “Not D2” to affect everything which isn’t D2.
If you want to target by color, you may use the color swatches at top while using the “If:under” mode, or hold <shift><cmd/ctrl> while clicking to create your BlendIf (the ctrl/cmd key will offer a choice of targets). Remember that BlendIf targets channels, not actual colors. So light values in the red channel include red, white, purple, and yellow. You could then additionally target dark greens to exclude the white and yellow values (which are also light in the green channel).
Drag the sliders as desired to refine specific values (such as for L2.5); see the sketch after this list for a sense of what the sliders are doing numerically.
If you have a layer mask and it is selected, the slider will be white and affect feathering on the mask. To use the BlendIf slider, click on the layer thumbnail instead of the mask (the slider will appear blue when adjusting BlendIf).
If you need to visualize the BlendIf, click the red “If” button at the bottom of the panel. Click “If” again when done to clear it.
If you’ve also added a layer mask, this visualization will show the combined result.
<shift>-click “If” to choose a different color for visualization if you don’t like the default green.
You can also convert the BlendIf to a layer mask by <ctrl/cmd>-clicking “Mask”. If you’re using this for learning or visualization, you can then undo to get back to the BlendIf when you’re done reviewing it.
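If it helps to think about what those split sliders are doing numerically, here’s a minimal conceptual sketch in Python/NumPy. The slider values below are made up for illustration, and Photoshop’s internal math may differ slightly, but the idea is the same: full effect between the inner points, a linear feather out to the outer points, and no effect beyond them. The same idea applies per channel when you target a color channel instead of the composite.

```python
import numpy as np

def blendif_opacity(luminance, black_outer, black_inner, white_inner, white_outer):
    """Per-pixel blend amount (0..1) from split "Blend If" sliders.

    luminance: array of underlying-layer values in 0..255.
    black_outer..black_inner: feather range on the shadow side.
    white_inner..white_outer: feather range on the highlight side.
    """
    lum = luminance.astype(float)
    opacity = np.ones_like(lum)

    # Shadow side: 0 below black_outer, ramping up to 1 at black_inner
    if black_inner > black_outer:
        ramp = np.clip((lum - black_outer) / (black_inner - black_outer), 0, 1)
    else:
        ramp = (lum >= black_inner).astype(float)
    opacity *= ramp

    # Highlight side: 1 up to white_inner, ramping down to 0 at white_outer
    if white_outer > white_inner:
        ramp = np.clip((white_outer - lum) / (white_outer - white_inner), 0, 1)
    else:
        ramp = (lum <= white_inner).astype(float)
    opacity *= ramp
    return opacity

# Example: target highlights only, feathered from 150 to 200
lum = np.array([0, 100, 150, 175, 200, 255])
print(blendif_opacity(lum, black_outer=150, black_inner=200,
                      white_inner=255, white_outer=255))
# -> roughly 0, 0, 0, 0.5, 1, 1
```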
Star trails can be a powerful way to complement and accentuate a subject. In this image, they not only draw your attention toward the moonlit focal point, they also echo the shape of the surrounding arches to help tie everything together. You can capture such a scene with a single long exposure, but you can often get a better image or simplify processing by taking a series of short exposures to blend together.
There are several benefits to shooting star trails as a stack of short exposures rather than a single long exposure, including:
You won’t lose hours of work if something goes wrong – such as someone shining a headlamp into your image, lens flare if the moon moves into a problematic position, the camera getting bumped, batteries dying, etc.
Less hot pixel noise. Unless you’re shooting on a very cold night, you’ll see more and more hot pixel problems with longer exposures.
Minimize potential problems with moving trees in the wind.
Keep the option to process a single sharp image (such as a beautiful meteor passing through a frame) or to create a time-lapse video.
You can choose the exact arc of rotation you wish to see in your final image without complex planning. Just use as many images as you need until you have the rotation you want in the image.
It can potentially make it easier to clone out complications (meteors, satellites, planes, hot pixels, etc) from a short exposure frame instead of a full star trail image.
Avoid the complication of trying to calculate a safe exposure to use for an hour. It would be very tricky to determine an exposure which shows the stars, keeps the right balance of blue sky, and does not blow out the foreground in the bright (and constantly changing) moonlight.
Camera workflow:
Capture a foreground image. This might be taken at the blue hour, using light painting, or just a very long exposure with ambient light (potentially moonlight). The goal is to have something which is perfectly aligned with the star images (use a tripod) and ultimately has low noise (use ISO <1600).
Capture a series of star images. The resulting length of star trails depends on how long you shoot the sequence, how wide your lens is, and where you point it in the sky (stars barely move near Polaris or the Southern Cross, and move more the farther you get from them); see the rough calculation after this list for estimating trail length.
If you want flexibility to create a time-lapse or to process a single frame (perhaps if a great meteor runs through one image), you should shoot the night sky as you normally would. If you don’t need that, then you have the option to reduce noise by using a longer exposure and lower ISO, such as ISO 800 for 30 seconds per frame. Any noise you capture in such a stack may ultimately make the dark parts of the sky brighter in the final image (as we’ll be keeping the brightest value of each pixel), so reducing noise can help the final result.
If you have an exposure delay setting, turn it off. Adding a delay between frames will not improve sharpness for such a long exposure, but it will likely cause minor gaps between the frames that make the star trails less smooth.
Turn off long exposure noise reduction, as this will similarly add a delay between frames and cause gaps in the star trails. If you need to capture dark frames, do so before or after shooting the stars.
If shooting in very cold conditions where you may get condensation on the lens, use a rubber band to hold a hand warmer on the bottom of the lens.
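If you want a rough sense of how long a sequence you need, the math is simple: the sky rotates about 15° per hour, and a star’s apparent motion scales with how far it sits from the celestial pole. Here’s a small back-of-the-envelope sketch; the focal length, sensor width, and image width are made-up example values, and it ignores lens distortion (the pixel scale is only exact at the center of a rectilinear lens). In practice you can also just keep shooting until the arc looks right, as noted above.

```python
import math

def trail_length_px(hours, focal_mm, sensor_width_mm, image_width_px, declination_deg=0):
    """Approximate star trail length in pixels.

    The sky rotates ~15 degrees/hour; apparent motion scales with
    cos(declination), so stars near the celestial pole barely move.
    """
    arc_deg = 15.0 * hours * math.cos(math.radians(declination_deg))
    # Pixel scale at the image center of a rectilinear lens (radians per pixel)
    rad_per_px = (sensor_width_mm / image_width_px) / focal_mm
    return math.radians(arc_deg) / rad_per_px

# Example: 1.5 hours with a 20mm lens on a full-frame (36mm wide) sensor,
# an 8000px wide image, and a star near the celestial equator
print(round(trail_length_px(1.5, 20, 36, 8000)))  # roughly 1750 px
```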
Lightroom / RAW workflow:
Select the first image to process.
Apply somewhat strong noise reduction. This gives us two benefits. First, it keeps noise from brightening the background, which would otherwise happen when we combine the brightest version of each pixel. Second, it removes minor stars, whose trails would make the final result messy and complicated.
Turn off sharpening, as it shrinks the apparent star edges and therefore makes star trail gaps worse in the stacked image.
Consider adding some clarity to help enlarge stars to minimize star trail gaps in the blended image.
If you need to darken the ambient sky, a little bit of “dehaze” may help.
Do not apply profile corrections before stacking, as this may cause artifacts in the trails.
Select all the images, sync your edits to apply the exact same processing to all of them, then right-click and choose Edit In / Open as Layers in Photoshop.
Photoshop workflow:
Open all the sky images as a stack of layers in PS. Set all but the bottom layer to the “lighten” blend mode, which keeps the brighter value of each pixel (see the sketch after this list).
Put the star layers into groups of about 10, so that you can quickly toggle them to narrow your search when trying to find which layer may contain a meteor, plane, or other problem you need to clone out. You may then put these groups into another group to collapse them all.
If you need to show all layers again, <cmd/ctrl>-click one of the groups to expand/collapse all. Then click the visibility icon of the top layer and (without releasing the mouse button) drag down over the visibility icons of all the layers. When you reach the end and release the mouse button, all layers should show (or hide, if your first click turned visibility off). Alternatively, select all layers and go to Layer / Hide Layers.
For dramatically smaller files and faster performance: select all your layers, right-click, and merge to a single layer. You may choose to save your layers so you can edit further later, but it’s a significant tradeoff. If you don’t merge, put them all in a Smart Object or create a “stamp visible” layer to use for the next steps.
If you need to fill gaps in the star trails, try adding a Gaussian blur with a radius of 0.5-1.0 pixels.
Now put your sky layer(s) and foreground layer into the document and add a layer mask to combine them.
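For reference, the “lighten” blend mode is just a per-pixel maximum, so the whole stack reduces to one line of math. Here’s a minimal sketch of the same stacking (plus the optional small blur from the step above) done outside Photoshop; the file path/pattern and the use of imageio/scipy are assumptions for illustration, and it assumes 16-bit TIF exports, so treat it as a sketch rather than part of the Photoshop workflow.

```python
import glob
import numpy as np
import imageio.v3 as iio
from scipy.ndimage import gaussian_filter

# Load the exported sky frames (hypothetical folder and file pattern)
frames = [iio.imread(path).astype(np.float32) for path in sorted(glob.glob("sky/*.tif"))]

# "Lighten" blend mode = keep the brightest value of each pixel across the stack.
# This is also why strong noise reduction matters: any bright noise in any frame
# survives into the result and lifts the dark sky.
trails = np.max(np.stack(frames), axis=0)

# Optional: a very small Gaussian blur (roughly the 0.5-1.0 px radius mentioned above)
# can soften tiny gaps between frames. Blur only the spatial axes, not the channels.
trails = gaussian_filter(trails, sigma=(0.5, 0.5, 0))

iio.imwrite("star_trails.tif", trails.astype(np.uint16))
```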
The YouTube tutorial above was getting a bit long, so I didn’t show all the final post-processing I did. After I finished the video, I did the following to create the final image:
Used ACR adjustments, Nik, and curves with dark luminosity masks from Lumenzia to help extract shadow detail.
Used a darkening brightness/contrast layer in the top left to reduce some flare on the rocks from moonlight.
Instead of using Photoshop to combine the stars, you may consider using StarStax. This software is free and designed specifically for stacking stars. A key benefit of this software for star trails is that it tries to fill the “gaps” between frames, which can otherwise give the edges of trails a jagged look.
The workflow is pretty simple:
Select your images in Finder / File Explorer and drag and drop them into the app.
Click the gear icon at the top-right of StarStax and set the blending mode to “gap filling”.
Click the 4th icon toward the top left to “start processing”.
When the processing is done, check “show threshold overlay” and increase the threshold to the highest value that keeps green on the stars (the goal is to minimize green on non-star pixels, which you don’t want filled), then set the amount to a middle value where the results look best. When in doubt, just use a lower threshold to get the stars (since we’ll blend the foreground from another image, there won’t be issues there).
Click File / Save As. The file format defaults to JPG and is determined by the file name rather than a format menu, as you might normally expect. Be sure to change the “jpg” extension to “tif” in order to save as a TIF file.
Use Photoshop to combine this sky layer with your foreground layer.
I personally tend to use Photoshop over StarStax, as neither truly eliminates the gaps and I can manage everything in one place. As far as I can tell, StarStax is also limited to 8-bit output (see the list of tips below). I’m no expert on it, so please comment below if you know a solution or if a future update of StarStax addresses these concerns. That said, I think it’s a great option to consider, as results will vary for different source images.
Additional tips for working with StarStax:
If you get an error “Cannot display image / Processing stopped!”, try using a somewhat lower resolution or 8-bit image exports. It seems like this error shows up after hitting a limit on the total amount of pixel data you send it. I’ve found that exporting with LR set to use a long dimension of 7090 pixels works great, but you might need something smaller or get away with something larger. 8-bit exports may be the best solution, as StarStax seems to only export 8-bit anyhow, so you might as well keep the original resolution. But test first, as I wouldn’t be surprised if a future update addresses this.
If you exported in lower resolution to avoid a “cannot display image” warning, you can resize your sky layer to the exact size of your foreground for perfect alignment and great results. Alternatively, you can simply export your foreground using the same resolution as the sky.
When adjusting the sliders, click and drag them. If you simply click, the overlay and gap filling amounts do not update the preview.
When you need high ISO to capture indoor or night scenes like this, your image will suffer from noise and a loss of detail. In this tutorial, you’ll learn how to clean it up with an incredible tool and how to make the most of it. Be sure to read the full tutorial below, as I go into greater detail than I cover in the video.
There are generally two approaches you can use for reducing noise. You can reduce it right away in the RAW or subsequently on the processed image (but before resizing, adding sharpening, or making other changes that de-noising software is not designed to anticipate). Between these approaches, I have a strong preference for removing the noise in the RAW. This is not only a much more flexible and non-destructive workflow but often leads to better results. There are a number of complex interactions that can make reducing noise later a problem. Even just increasing the shadow slider in RAW before separate application of noise reduction (outside the RAW) can create inferior results. However, there are always going to be times when you forgot to reduce noise or didn’t reduce it enough and want to reduce noise without completely redoing your edit, so it’s still very useful to be able to apply noise reduction later. When I do that, I strongly prefer to do so as a Smart Filter on a Smart Object, so as to work non-destructively. With those two workflows and various goals in mind, there are several noise reduction tools you might consider.
Adobe Lightroom (LR) & Adobe Camera RAW (ACR)
LR and ACR offer the same controls and the exact same results when working with RAW data. If you apply ACR via Filter / Camera RAW Filter (rather than inside a RAW Smart Object), your results will be different and may be inferior. However, the filter approach means you can use the same tool for either workflow. The results are generally very good and this is my tool of choice for a large percentage of my images given simplicity, flexibility, and good results. If the image was shot at ISO 400 or lower, this is nearly always the approach I use. See this tutorial for more details on how to get the most out of these tools. However, for critical images or challenging noise, I also use other tools to help get optimal results.
DXO PureRAW 2 (with “DeepPRIME”)
DXO PureRAW 2 (referred to as DXO below) is the only tool I know of which works directly on the RAW data and outputs a true RAW file. You simply feed an image or batch of images to it, choose from a few simple settings, and it creates a new DNG file which you can edit like any other in LR/ACR or your editor of choice. This new DNG file retains all of the flexibility of your RAW data, but is enhanced to remove noise, improve detail, and can correct lens distortion. The workflow is extremely simple and the results are often better than what I get with LR/ACR.
I’ve found that its DeepPRIME algorithm can do an amazing job with high ISO dark shadow detail, as shown in the video above. It can also extract much more star detail, though I find the results can show artifacts (tails on the stars) and it arguably does too good of a job, as I prefer to leave lesser stars less prominent. So I like using this result on the foreground and may use the original sky or some blend of the original and DXO sky.
I’ve also found it does great with skin tones shot at ISO 400-1600 (I haven’t tested such images above that), making it a great tool for cleaning up images shot of indoor events. I’ve been very impressed with the results it creates on a range of subjects. It’s important to be aware that it can create some artifacts or shifts in color (which you may find better or worse, but typically easy to manage).
DXO now supports a choice of workflows. You can always open your original image in the standalone app. If you’re using v2+, you can use their LR plugin or right-click the files in your file browser to make it even easier. If you’re using v1, I’ve found the results similar, but the convenience of the v2 workflow is worth upgrading for. I find the LR approach is the simplest, as you can easily transfer existing RAW processing or continue after using DXO. The software will preserve any slider values embedded in the image but ignores anything in a sidecar XMP file. Hopefully a future update will address this so that the output matches your source in LR every time. But if you use sidecar files, you should expect to copy and paste your settings if you started editing before using DXO.
If you’ve read my tutorials on other AI software, you know that I’m leery of artifacts or isolated problems from AI. They are almost never perfect, but frequently very helpful when you add a few simple steps to your workflow. Simply using layer masks or opacity to blend the AI results into your original will often yield better or faster results. To do this, you’ll need proper alignment of the original and DXO files, which means you should disable the lens correction in DXO. Its correction does an excellent job, but the ability to mix with the original to fix any artifacts or color issues is more important.
Given all of these considerations, I find the following workflow ideal for DXO PureRAW:
In LR: select the file(s) to convert and go to File / Plugin Extras / Process with DXO PureRAW. You can even select multiple images from different folders (use ctrl/cmd and shift to make multiple folders visible and then the same keys to select multiple images).
Use these settings: DeepPRIME (this is typically the best algorithm and runs fairly quickly); under optical corrections, turn OFF “lens distortion correction” (so that you can blend in the original later); you may try “global lens sharpening” if offered, but I generally leave it off; and set output to DNG in a DXO sub-folder. I have found that “global lens sharpening” can improve some chromatic aberration, so it can have surprising effects either way, and I recommend testing it if you see edges you think may benefit from more/less sharpening or less aberration.
If you started processing the image before conversion, embedded settings will be transferred but those from a sidecar will not. It’s simplest to do step #1 before adjusting anything, but you can simply copy and paste the RAW settings. DXO will never transfer noise reduction, sharpening, or lens correction settings.
You should definitely copy or sync any lens correction settings so that the exact same LR corrections are applied to both versions for proper alignment later when blending the original and DXO RAW.
You may copy sharpening settings, but I would review them carefully, as the optimal amount is probably not the same for both.
DXO tries to take care of color issues for you and you don’t generally need to set the color noise reduction. But there are times when it will be very helpful for issues which resemble chromatic aberration. You can also run into some very odd niche issues, such as the significantly altered mask results as shown in the video above.
Open both the original and DXO RAW files as RAW Smart Objects in Photoshop to merge the best of both.
If the DXO version shows unwanted color: in most cases, you can simply tweak the white balance (including the tint slider under camera calibration) for the DXO layer to achieve similar color. Otherwise, put the DXO layer above the original in PS and set the blend mode to “luminosity”. This will give the noise reduction benefit without the unwanted color shift (see the rough sketch after this list for what the luminosity blend is doing).
If you used the luminosity blend mode but do want the color in some places, you can duplicate the DXO layer, set it to “color” blend mode and use a layer mask where needed.
Finish by putting a layer mask on your DXO layer (or group if using two copies of it) to combine the best of the DXO and original layers.
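As a rough mental model of what the “luminosity” blend mode is doing in that step: the result keeps the brightness of the top (DXO) layer and the color of the layer beneath it. Here’s a hedged sketch using a simple luma split; Photoshop’s actual blend math isn’t identical, but the effect is similar in spirit.

```python
import numpy as np

# Rec. 709 luma weights, used here as a stand-in for "brightness"
LUMA = np.array([0.2126, 0.7152, 0.0722])

def luminosity_blend(bottom_rgb, top_rgb):
    """Approximate 'luminosity' blend: brightness from top, color from bottom.

    bottom_rgb, top_rgb: float arrays of shape (..., 3) in 0..1.
    """
    bottom_luma = np.tensordot(bottom_rgb, LUMA, axes=([-1], [0]))[..., None]
    top_luma = np.tensordot(top_rgb, LUMA, axes=([-1], [0]))[..., None]
    # Shift the bottom layer's brightness to match the top layer's,
    # keeping the bottom layer's color relationships.
    return np.clip(bottom_rgb + (top_luma - bottom_luma), 0, 1)

# Toy example: the denoised (top) pixel is darker and slightly color-shifted;
# the blend keeps the original color but adopts the new brightness.
original = np.array([[0.60, 0.40, 0.30]])
denoised = np.array([[0.52, 0.38, 0.34]])
print(luminosity_blend(original, denoised))
```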
One minor note: you may read that DXO outputs a “linear” DNG. There is some confusion on the topic and you might think this means you may get improved highlight recovery. That is not the case. What is blown out in the original will remain blown out in the DXO layer. This is just a high quality file format that you can use like any other RAW with improvements in noise and detail based on what was captured in the original.
DXO Photo Lab
Photo Lab is DXO’s full-featured RAW processor and (at least as of v5) also includes DeepPRIME. Just like PureRAW, you can process and export a RAW file that you can use in other editors. It offers a much greater range of options in general, with DeepPRIME specifically offering luminance and dead pixel sliders for more control over those features. Other corrections like vignetting are split out, and lens sharpness offers more detailed controls. If you want the ultimate control or wish to do everything in one place, this may be a better choice for you than LR + PureRAW. Just use the export to DNG option if you need to send the image back to LR. Personally, I don’t find the extra control here matters much, as I can generally achieve similar effects by blending layers in Photoshop, prefer to use LR as my primary RAW processing tool, and find the integrated workflow with PureRAW much simpler than exporting from Photo Lab to LR.
Some tips for working with the Photo Lab version of DeepPRIME:
The main window shows a rather low quality preview of the denoising results, which may show a lot of artifacts and color noise that won’t be in the final processed image. You should only use the magnifier window in the controls to determine the best settings for denoising, lens sharpness, and chromatic aberration. I wish there were a way to get a larger but still accurate preview, but that does not seem to be an option currently. I’d happily wait the 20 seconds it takes PureRAW to generate the whole image for me to move around. It’s a challenging limitation to only be able to review such a tiny portion of the image at a time, which I feel limits my ability to make optimal choices.
Try setting the luminance slider around 40-75%.
Note that some sliders (such as dead pixels) are hidden and require you to click the + at the bottom of the tab, and the help for the denoising section lists all sliders (including many which only apply to the HQ or basic Prime algorithms), which is a little confusing at first.
Nik Dfine
Nik Dfine also comes from DXO, as part of the Nik Collection. It’s a great counterpart to PureRAW, as it works as a Smart Filter on Smart Objects. It doesn’t offer the potentially eye-popping improvements of PureRAW, but it does a nice job tackling typical noise on your processed images. The interface is extremely easy to use. So if you’re using or interested in other Nik tools like Color Efex Pro, this is a great tool to consider using in addition to ACR. When you need more control for your processed images, I’d look to DeNoise or Neat Image.
Topaz DeNoise AI
Topaz DeNoise AI does offer a true RAW output; however, I find it alters the crop by a few pixels and alters the color/luminosity significantly (in a negative way). It also does not let you lighten dark shadows in the RAW, making it hard to choose optimal settings. Because of this, I find that the current design does not adequately support the RAW workflow. However, it’s still very useful for reducing noise on an existing image and can therefore be very helpful in situations DXO cannot manage (since DXO only works on the RAW). So overall, I use LR/ACR on simple images, DXO on more complex RAW files, and DeNoise or ACR as a filter to reduce noise after I’ve started processing. That said, it is still best to use this before applying clarity, texture, sharpening, etc, and that does limit its use for me a bit.
I recommend the following workflow for DeNoise:
Convert your target layer(s) to a Smart Object so you can work non-destructively
Go to Filter / Topaz / Topaz DeNoise AI
Try clicking “compare” to evaluate the difference between the different models (standard, clear, etc).
Standard is a good general option, Clear seems to sharpen detail quite a bit, Low Light and Severe Noise are good options for high ISO, and I skip the RAW model (it’s meant for work on RAW images, which I’m not doing here).
Set “remove noise” and “enhance sharpness” to the lowest values that get the job done. I tend to leave the sharpening off.
If you see inconsistent sharpening (such as patches of smooth sky mixed with noisy sky), try increasing “recover original detail” until things look more consistent.
I generally leave “color noise reduction” at the default 0.
Apply and use the Smart Filter to apply the noise reduction locally where helpful/needed.
Note: As of v3.6.2, I am seeing DeNoise crash a fair bit when updating RAW Smart Objects. PS does not crash (you won’t lose your image); it’s just that the plugin may fail to update its noise reduction when you try to tweak it or update the Smart Object. Deleting the filter and recreating it is a workaround when this happens. I’ve reported the issue and would expect Topaz to fix it relatively soon given their history of support/updates.
Neat Image
Neat Image holds a special place in my heart as the first program I used well over a decade ago to get great results. The interface is much more complicated and probably confusing to most users. However, it’s still a great option for some images. Since it is not based on AI, I find it less prone to artifacts and can be a good option for tricky high ISO images.
ON1 No Noise AI
While I own it and it is generally a fine program, I do not currently use No Noise AI. It will open RAW files but ignores any adjustments you’ve made in RAW. It will output a DNG, but apparently without true RAW data (LR/ACR treat it like a TIF). So it does not support the RAW workflow I like to use. It also does not work as a Smart Filter on a Smart Object, so it does not support the other workflow I recommend. At this time, I recommend DXO or Topaz. If I’ve missed something or the program is updated to address these limitations, please let me know and I’ll review again and update my findings.
Capture One
Capture One is a direct competitor to LR/ACR, and many people believe it is superior for their work. I am personally of the opinion that it is better in some ways and worse in others, and ultimately not really better or worse across a large number of images. The ability to edit the RAW conversion right inside Photoshop gives LR/ACR an enormous advantage and is the reason I prefer it over Capture One. I find that this flexibility allows me to do my best work, which ultimately gives me a better image with less hassle. For the sake of discussion here, I recommend using either Capture One or LR/ACR and then considering one of the other specialized tools for important or tricky noise reduction jobs.
Starry Landscape Stacker
Starry Landscape Stacker uses a completely different workflow specific to reducing noise in starry night sky images like the Milky Way. Instead of improving your RAW or processed image, it lets you combine multiple images to reduce noise. This is often like shooting at ISO 400 or 800 instead of 6400. It rotates the sky image to align multiple exposures, which is the digital equivalent of using a star tracker. You can also combine the use of this technique with the others above to push the results further.
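The reason stacking helps is simple statistics: averaging N aligned frames reduces random noise by roughly the square root of N, so 16 frames cuts noise by about 4x — broadly comparable to the shot-noise difference between ISO 6400 and ISO 400. A tiny sketch with synthetic data (the numbers are made up, not a claim about any specific camera):

```python
import numpy as np

rng = np.random.default_rng(0)
true_signal = 100.0
noise_sigma = 20.0   # per-frame noise, arbitrary units
n_frames = 16

# Simulate 16 aligned frames of the same (static) scene
frames = true_signal + rng.normal(0, noise_sigma, size=(n_frames, 100_000))

print(round(frames[0].std(), 1))            # single frame: ~20
print(round(frames.mean(axis=0).std(), 1))  # 16-frame average: ~5 (20 / sqrt(16))
```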
Conclusion
My overall use of these tools looks like this:
Images at ISO 400 or below: use LR / ACR unless making a massive enlargement.
Images at ISO 800+: use PureRAW 2.
On images I’ve already processed, use ACR, Dfine, DeNoise in roughly that order based on how challenging the job is. I try to avoid AI tools when simpler tools do the job because they fail in more predictable ways (less need to look over every little detail in the image to check for issues). For complex jobs where DeNoise shows artifacts, I would also try Neat Image.
For Milky Way shots where I can shoot a sequence of 10+ images, use Starry Landscape Stacker, typically in combination with noise reduction in LR or PureRAW 2 first. Then I may do additional noise reduction targeted through a luminosity mask to help keep sharp stars.
[Disclosure: This post contains affiliate links. I have purchased all the software referenced above and only endorse tools I personally use and recommend. If you purchase through these links, a small percentage of the sale will be used to help fund the content on this site, but the price you pay remains the same. Please see my ethics statement if you have any questions.]
Summer sale: Lumenzia, Web Sharp Pro, and all my courses are on sale this week for 25% off with discount code SUMMERSALE via mystore. And if you get all 3 courses and Lumenzia by the end of the sale, I’ll give you a bonus course with completely unique content for free. Just email me if you qualify (prior purchases count toward this offer).
Extreme weather makes for powerful memories, but that doesn’t mean your photographs will necessarily convey the excitement of the moment. In this tutorial, you’ll see how I added sunlight and contrast not only to make the image more visually stunning, but also to help make the blowing sand clouds stand out to help convey just how powerful the wind was. You’ll also learn a very simple trick to make perfect masks for adding light to an image.
A 40MPH wind storm was generating substantial clouds from these sand dunes. The amount of fine dust in the air was unreal. I changed lenses from inside my car and taped a plastic bag around my camera to avoid ruining it. There was so much dust in my ears by the end of the shoot that I was still cleaning them out 3 days later. The visibility was rather poor, as you can see from the distant mountains just barely peeking out of the dust clouds, so I chose a wide angle lens to help capture clear details in the foreground to compare to the blowing sand. At the same time, the light was also flat and created a very low contrast RAW image which fails to convey a sense of the wind because the blowing sand and rigid dunes aren’t clearly differentiated. This is where enhancing the sense of sunlight in post helped truly bring the image to life.
The general processing here uses a mix of techniques I teach in much greater depth in my Exposure Blending and Dodging & Burning Master Courses. The key here was to generate a sense of sunlight and then reveal the glowing air with a radial gradient and across the sand highlights using a midtones luminosity mask.
Gradients are a great way to help reveal anything that looks like sunlight or some other light source. However, getting the perfect gradient can be a little finicky. You’ll often want to squish or rotate a radial gradient to create an angled oval and then move it into place – but transforming and moving the mask will often leave you with a gradient with a clipped edge. The problem is that you cannot paint anything outside what you can view in the mask, but there’s a simple workaround. When you need an oval-shaped gradient mask, the following workflow will help you get perfect results quickly and easily:
Add a radial gradient by dragging from the center of the mask to the closest edge. This should leave you with a round white gradient which is fully black before touching any edges. You may have clipped edges if you dragged to the corner or a further edge, or didn’t drag from the center. If you want to start from the exact center, show rulers and change them to percentages so you can target 50/50 on the rulers as you move your cursor. If you have any clipping, start over. It’s important to get this step right. Once you do, you’ll be able to move this gradient outside the edges of the mask without clipping (the clipping only occurs at the moment you’re painting if some of the paint goes outside the image canvas).
Click <cmd/ctrl>-T to transform. You’ll see a box around your gradient (Photoshop creates the smallest rectangle which includes all non-black pixels in the mask, which makes it easy to transform and confirm that your gradient did not get clipped at the edge).
Click and drag from inside the gradient to move its center to be placed at the center of the light source you wish to create or enhance in your image. This might be just off the edge if you’re enhancing sunlight or something else where the source is not in the image.
<alt/option>-click and drag the edge or corner points to squish or elongate the gradient into an oval shape. This modifier key will ensure the center point does not move as you change the shape of the gradient.
Click and drag from outside the corner points to rotate the oval to get the angle of light you desire.
Click <enter> to finalize the changes.
You can come back and make changes anytime by repeating steps 2-6. Your oval will not become clipped as long as you did step #1 originally. You can even paint further on the mask, however that will make further resizing much more complicated. So it’s best to duplicate the layer and use a separate layer mask if you need to reveal more of the layer and do not wish to lose the ability to revise the gradient.
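If it helps to visualize what that transformed gradient ends up being, here’s a sketch that builds a comparable oval gradient mask directly with NumPy — a white elliptical falloff centered wherever you like (even partly off-canvas), rotated to the angle of the light. The dimensions and parameters are arbitrary examples, and this isn’t meant to replicate Photoshop’s gradient math exactly.

```python
import numpy as np

def oval_gradient_mask(height, width, center, radii, angle_deg):
    """White-to-black elliptical gradient (1 at center, falling to 0 at the ellipse edge).

    center: (x, y) in pixels; may sit outside the canvas.
    radii: (rx, ry) semi-axes in pixels before rotation.
    angle_deg: rotation of the ellipse, counter-clockwise.
    """
    y, x = np.mgrid[0:height, 0:width].astype(float)
    cx, cy = center
    theta = np.radians(angle_deg)
    # Rotate coordinates into the ellipse's own frame
    xr = (x - cx) * np.cos(theta) + (y - cy) * np.sin(theta)
    yr = -(x - cx) * np.sin(theta) + (y - cy) * np.cos(theta)
    rx, ry = radii
    distance = np.sqrt((xr / rx) ** 2 + (yr / ry) ** 2)  # 0 at center, 1 at the edge
    return np.clip(1.0 - distance, 0.0, 1.0)

# Example: a light source just off the top-right of a 2000x3000 canvas,
# elongated and tilted about 30 degrees
mask = oval_gradient_mask(2000, 3000, center=(3100, -200), radii=(1800, 900), angle_deg=30)
print(mask.shape, round(float(mask.max()), 2))
```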
It’s easy to fall in love with beautiful color. And for the same reason, it can also be a distraction when the color jumps out in a way that pulls your viewer’s eyes away from the main subject. In this tutorial you’ll learn how simplifying color can help strengthen your image.
The key adjustment in this edit was to push the yellow hue towards green and reduce its saturation to match (a rough numeric sketch of this follows the list below). Once the forest was a more uniform saturation, we could then boost it across all the trees and add some contrast to give the image more life. Luminosity masks were helpful at several points to help isolate the adjustments:
The yellow hue adjustment was affecting the colors on the forest floor as well as the trees, so a darks luminosity mask helped paint the ground black to avoid unwanted color shift in areas where the original color was not an issue.
When boosting vibrance, the blue, red, and magenta values were becoming too strong, so an inverted color mask was used to exclude those problem colors from the mask.
Lumenzia’s automatic contrast enhancement feature was used to boost midtone contrast by selecting a general midtone preview and clicking “contrast”. BlendIf works great for general DML masks and keeps the file size down. The resulting adjustment has modest opacity by default, so the opacity was increased a bit to add more contrast.
To brighten the river, a blue selection helped quickly create an accurate mask of the river to avoid brightening the surrounding areas.
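For those curious what “push the yellow hue towards green and reduce its saturation” looks like numerically, here’s a loose sketch in HSV terms. It’s a simplification of the HSL-style controls in LR/PS, and the targeting window, falloff, and amounts are made-up values for illustration only.

```python
from colorsys import rgb_to_hsv, hsv_to_rgb

def shift_yellows(rgb, hue_shift=0.06, sat_scale=0.8):
    """Nudge yellow-ish pixels toward green and desaturate them slightly.

    rgb: (r, g, b) in 0..1. Hue is 0..1, where yellow ~= 1/6 and green ~= 1/3.
    """
    h, s, v = rgb_to_hsv(*rgb)
    # Weight falls off as the hue moves away from yellow (arbitrary +/-0.06 window)
    weight = max(0.0, 1.0 - abs(h - 1 / 6) / 0.06)
    h = (h + hue_shift * weight) % 1.0
    s = s * (1.0 - (1.0 - sat_scale) * weight)
    return hsv_to_rgb(h, s, v)

# A yellow leaf tone gets pushed toward green and softened;
# a blue sky tone is left untouched.
print(shift_yellows((0.8, 0.7, 0.2)))
print(shift_yellows((0.3, 0.4, 0.8)))
```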