CPU, GPU, RAM: What should you update?

I haven’t met a photographer yet who wouldn’t appreciate a more powerful computer. We push our machines very hard, and it’s no fun to sit and wait to view and edit images. So naturally, we tend to dream about a “better” computer. But what does that mean? Should I get a really powerful GPU (Graphics Processing Unit) because I work on images? Maybe more RAM? I heard I should get 64GB from a guy who’s never seen how I work. Oh, wait, I need a bigger and faster SSD (Solid State Drive).

Each of these upgrades is obviously going to cost you, and unless you want to throw huge amounts of money at the problem and pray it works, you should probably get a better idea of what you’re going to get for your money. Even little splurges have a cost. You might not care about spending an extra $200 for an upgraded GPU on top of the $2500 you’re already planning to spend, but should you? You could pay for an extra couple years of Lightroom and Photoshop with that money, so ideally you’re not just guessing that it’s going to help.

While nothing’s a sure bet, there are some simple ways to determine where spending some of your hard-earned cash will give you the biggest increase in performance. When it comes to speed, several factors tend to play a role (in no particular order of importance, as it varies case by case).

  1. The software you use. Some programs are much more efficient than others for a given task. For example, many sports and wedding photographers love Photo Mechanic for its speed in culling images. And this can all change over time: a future Photoshop update could slow things down with new features or speed them up with optimized code.
  2. The clock speed of your CPU (Central Processing Unit). This is a measure of raw speed that affects pretty much anything that does not run on the GPU.
  3. The number of CPU cores. Software can be optimized to break work into tasks that multiple cores can process simultaneously. But code which is not optimized for multiple cores may see no benefit at all. An analogy would be hiring 4 people to paint your house (where they can effectively break up the work and get the job done faster) vs hiring 4 people to cook your dinner (where they’re more likely to trip over each other in the kitchen than feed you more quickly). See the short sketch after this list for this idea in code.
  4. The GPU. This is a complex mix of GPU cores, speed, and GPU RAM (unless you’re on Apple Silicon, where the GPU gets RAM from a shared pool). I’m just going to lump these together, and it should make more sense why when you see the testing below.
  5. The amount of RAM. This is where data resides while it is being processed. If you run out, your computer may use the (much slower) hard drive to help manage the temporary data, or it may simply give you an error.
  6. The speed of RAM. This affects how much time the CPU may spend waiting to retrieve or store temporary data.
  7. Hard drive size. Hard drives may be used to an extent as working memory (such as scratch disks in Photoshop). In situations where the free space runs very low, this can suddenly switch from a non-factor to a major performance issue.
  8. Hard drive speed. This affects the time needed to get and store data, which may affect the speed of opening applications or opening and saving images.
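
To make the multi-core point concrete (see #3 above), here’s a minimal sketch in Python with a purely hypothetical workload: the same CPU-bound job run by one “painter”, then split across four. The exact numbers don’t matter; the shape of the result does.

```python
# A minimal sketch of the "4 painters" analogy: the same CPU-bound
# work done serially on one core, then split across four processes.
# Standard library only; the workload size is arbitrary.
import time
from multiprocessing import Pool

def busy_work(n):
    # Deliberately CPU-bound: no disk, RAM pressure, or GPU involved.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    chunks = [5_000_000] * 4

    start = time.perf_counter()
    for c in chunks:
        busy_work(c)            # one "painter" does everything in turn
    print(f"1 process:   {time.perf_counter() - start:.2f}s")

    start = time.perf_counter()
    with Pool(4) as pool:
        pool.map(busy_work, chunks)  # four "painters" split the job
    print(f"4 processes: {time.perf_counter() - start:.2f}s")
```

On a machine with at least 4 physical cores, the second run should finish roughly 3-4x faster. Watch the CPU history while it runs and you’ll see the difference between one busy core and four. Code that hasn’t been written to split up its work this way simply can’t use the extra cores.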

This isn’t a comprehensive list and I’ve simplified a few concepts, but it covers all the things that make a big difference for purchasing decisions. And on this list, all but #1 are hardware factors which you may potentially be able to improve through an upgrade. Any given task likely requires multiple sub-systems (CPU, RAM, etc), but slow performance is often the result of a specific component taking a very long time on a specific task. The key to knowing where to invest in your computer is knowing where an investment might help address a bottleneck, and we can get a very good sense of the opportunities by understanding the bottlenecks we have now.

Using the tests below, you can identify the bottlenecks affecting your work. Of course, this might change. The software industry as a whole has been doing a lot of work on optimizing code for multiple cores and for GPUs, and so a future software upgrade might remove a bottleneck you have today. But knowing your current bottlenecks is crucial, and from there you can make an educated guess as to how much margin of safety you might want, given how long you expect to use a new computer or hardware upgrade.

My video tutorial shows how to test on macOS using “Activity Monitor”, but I’ll add details below on how to perform similar tests on Windows using “Task Manager”.

Quick steps to set up Activity Monitor on macOS:

  1. Go to Applications / Utilities / Activity Monitor
  2. Click on the “Memory” tab in the main Activity Monitor window (you can go to Window / Activity Monitor if it isn’t showing)
  3. Right click on the column headings to show and hide columns until your screen shows the same columns as mine. You can resize and reorder the columns by clicking and dragging.
  4. Go to View / Update frequency and set it to Often for 2 second intervals. This will give you a faster read on transient processes (1 second may be too variable).
  5. Click on Window / CPU History to see CPU usage over time. Click and drag the edges to get a nice display of all cores. You’ll probably see every other core shows little activity; these are “virtual cores”, and the total number of cores shown is probably 2x the physical number of cores in your CPU.
  6. Click on Window / GPU History to see GPU usage over time. You may see more than one GPU depending on your system.
  7. You can also check or uncheck Window / Keep CPU Windows on Top if you want to keep both CPU and GPU history visible while you test.

Note that there are several columns available for memory. It’s a tricky subject, and I would just show “memory” as a way to gauge which apps are eating a lot of RAM. Ultimately, the green/yellow/red memory pressure graph is the best indicator of when the total system RAM is insufficient for what you and the operating system are collectively consuming.

 

Quick steps to setup Task Manager on Windows:

  1. Search for “Task Manager” in the start menu
  2. In the “Processes” tab, right-click the column headers to add “GPU”. You can then see CPU and GPU percentages here.
  3. Click on the “performance” tab to see graphs over time of CPU and RAM usage. You can also click “open resource monitor” at the bottom of the performance tab and then click the “memory” tab to see a plot and % of physical RAM used.
  4. Go to Options / Always on Top if you want to keep Task Manager visible as you test.

 

Tips for watching these metrics:

  • Test in a real environment. Background applications like anti-virus, backup software, open web browsers, etc all consume important resources. But if that’s how you use the machine, then that’s how you should test the machine. The goal of this testing is not better numbers; the goal is to get solid data to help you make a decision. The exception is if you’re trying to optimize your software setup, which you should do before you collect data to inform hardware purchases (since you might be able to get more out of the hardware you have).
  • The metrics generally apply to specific tasks, not to programs as a whole. For example, some parts of Photoshop are optimized for the GPU and others are not. To get a true sense of the bottlenecks, run a variety of the tasks you normally do, focusing on the activities where the machine feels sluggish to you, and watch the metrics.
  • These are all meant as helpful guides. It’s not an exact science, so always use good judgement. Hitting a bottleneck on a process you normally let run when you step away from the computer isn’t all that important. It doesn’t matter if your videos take 2 hours or 6 to render if you’re going to be sleeping when it happens.
  • If you have questions on the options, see these support articles on Activity Monitor and Task Manager
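
If you’d rather log numbers than stare at graphs while you work, a small script can record them for later review. This is a minimal sketch using the third-party psutil library (installed with pip install psutil); it works on both macOS and Windows, though it only covers CPU and RAM, so you’ll still want Activity Monitor or Task Manager for the GPU.

```python
# Minimal cross-platform CPU/RAM logger using the third-party
# "psutil" library (pip install psutil). Run it in a terminal
# while you work, then scan the output for sustained spikes.
import time
import psutil

print("time       total cpu%   ram used%")
while True:
    # 2-second samples, per core (the sum can exceed 100%,
    # matching the per-process convention in Activity Monitor)
    per_core = psutil.cpu_percent(interval=2, percpu=True)
    ram = psutil.virtual_memory()
    print(f"{time.strftime('%H:%M:%S')}   {sum(per_core):8.1f}   {ram.percent:8.1f}")
```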

 

CPU bottlenecks

The main indicator of CPU performance is CPU %. A process which consistently runs at a high percentage is heavily dependent on the speed of your CPU, and a higher clock speed would help. A process which consistently runs around 100% (one core’s worth) is not optimized for multiple cores. In general, CPU clock speed matters tremendously for Photoshop, with many operations still using a single core. Multiple cores are much more important for importing and exporting in Lightroom, where you’ll likely see CPU usage vastly higher than 100%.
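
If you’re not sure what these readings look like in practice, it helps to calibrate with a known load. The sketch below (a hypothetical helper; save it as busy.py or similar) pins roughly one core. Run one copy to see what a single-core bottleneck looks like on your machine, or launch several copies in separate terminals to see a multi-core load.

```python
# busy.py -- pins roughly one CPU core at ~100% for 30 seconds.
# Run several copies at once to simulate a multi-core workload.
import time

end = time.time() + 30
while time.time() < end:
    pass  # busy-waiting is pure CPU work confined to a single core
```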

As a general rule of thumb, I find that any bottleneck that shows 100% CPU usage (a single maxed-out core) is typically one where I’ll get the least boost from a computer upgrade. This reflects a software bottleneck, and most hardware upgrades are more impressive with software which is optimized to utilize the full capabilities of a modern computer.

 

GPU bottlenecks

A good indicator for GPU utilization is the GPU history. You can watch the GPU%, but it tends to move too erratically to give you the best picture. If the GPU is sitting near 0 all the time, you clearly won’t get an immediate benefit from upgrading. If it is sitting around 30% or more for decent stretches, it may not quite be the biggest bottleneck, but upgrading is likely to offer meaningful gains in performance. If it is running up to much larger values during important slowdowns, then an upgraded GPU is very likely to be beneficial.

Another test (new in Photoshop 2022) is under Help / GPU Compatibility. If you see red X’s, then you are probably going to find some Photoshop features are greyed out and others run slowly. If you don’t see all green, investing in a GPU is a good idea. You should definitely have a GPU that meets Photoshop’s minimum requirements.

You’ll probably be surprised at how little your GPU is actually used for most photography apps. It generally has much more benefit for gaming and video. And the benefits are very app-specific; some programs take advantage of the GPU more than others. Most photographers won’t see a lot of benefit from a top-of-the-line GPU.

 

RAM bottlenecks

The amount of RAM (memory) is crucial. When you have more RAM than you need, the system bottleneck will be in the CPU or GPU. But when RAM is limited, the computer will likely start to depend heavily on the hard drive, causing massive slowdowns. Which is to say adding more RAM will either help a lot (if you don’t have enough) or hardly at all (if you consistently have enough). Adding more than the computer uses won’t give you any benefit.
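
If you want to see how your machine behaves when RAM runs out before paying to find out, you can simulate it. This is a minimal sketch that deliberately consumes RAM in 1GB steps. Save any open work first, watch memory pressure (macOS) or the RAM graph (Windows) as it climbs, and press Ctrl+C to stop and release everything.

```python
# Deliberately consumes RAM in 1 GB steps so you can watch the
# system respond. Save open work first; press Ctrl+C to stop.
import time

chunks = []
try:
    while True:
        chunks.append(b"x" * (1024 ** 3))  # 1 GB, actually written to RAM
        print(f"Holding {len(chunks)} GB")
        time.sleep(2)
except KeyboardInterrupt:
    print("Released")  # all memory is freed when the script exits
```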

RAM speed bottlenecks are hard to test without physically swapping it. But generally, this isn’t a huge factor for performance and I wouldn’t worry about it too much. The total amount of RAM is generally much more important.

I would say that 32GB of RAM is the sweet spot for most photographers these days. You can definitely get by with only 16GB, but you’d likely see faster results with 32. I’ve personally used 32GB for years with great results most of the time, but have just upgraded to 64GB to avoid a few current slowdowns I run into and give myself flexibility as my software and needs continue to grow. If you work on very large and complex images or don’t care about cost, then 64GB is a good option.

A quick note on the various memory options in Activity Monitor. I consulted with some very deep experts on the topic and they didn’t know the answer. I’ve read a lot of information online that claims to be true, but then you’ll quickly find some counterexample showing a given interpretation is dead wrong. Apple seems to be deliberately a bit cagey on what the various memory columns are, and they’ve changed a bit over time. The “memory” and “real memory” columns can give you some idea as to which applications are consuming a lot of memory when you start really pushing the memory pressure into yellow/red territory. I am told these values represent actual RAM (not virtual use of the disk), and that things get very confusing with shared use of some memory across multiple processes. Don’t get too worried about the details; you’ll end up pulling out your hair trying to figure out why one is typically larger but then sometimes the other is much larger. But it can still be helpful to identify problems (such as a faulty app consuming too much RAM).

 

Hard drive bottlenecks

The most important decision here is simply to ensure you have enough space to hold critical data. The picture gets a little murky, as you can often buy cheaper external drives as you need more storage rather than buying a very expensive internal SSD now. I believe a good rule of thumb is to buy a new computer with 2X the amount of internal storage you actually use now (with a goal of keeping a minimum of 100-200GB of space unused at all times). This assumes you have stable patterns of usage; you’ll have to be more thoughtful if, for example, you recently started producing video. It also assumes you aren’t using the internal storage in a wasteful manner now. Tools like DaisyDisk can be very helpful for seeing what’s consuming space in case you want to move some files to an external drive or just put them in the trash.

Hard drives of course also affect the speed of the computer in a couple ways. First, a fast hard drive will allow you to more quickly open applications and images. The difference between a cheap HDD (spinning hard disk drive) and an SSD (solid state drive) can be massive. And there are various degrees of SSD speed, which can make a difference up to a point. However, if you are compressing your images, the opening and saving of those images is actually bottlenecked significantly by the CPU (as the compression code is not multi-core optimized at this time). I find that reading an uncompressed image is 3X faster and saving is about 20X faster. So a faster drive may offer very little benefit when opening and saving compressed images.
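
If you want a rough read on a drive’s sequential speed, you can time a large write and read yourself. This is a minimal sketch using only the standard library (dedicated tools like Blackmagic Disk Speed Test are more rigorous); point PATH at the drive you want to test.

```python
# Rough sequential write/read timing for the drive holding PATH.
# Standard library only; results are approximate.
import os
import time

PATH = "speedtest.tmp"           # put this on the drive you want to test
SIZE_MB = 1024
block = os.urandom(1024 * 1024)  # 1 MB of incompressible data

start = time.perf_counter()
with open(PATH, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(block)
    f.flush()
    os.fsync(f.fileno())         # ensure the data actually hit the disk
write_s = time.perf_counter() - start
print(f"Write: {SIZE_MB / write_s:.0f} MB/s")

start = time.perf_counter()
with open(PATH, "rb") as f:
    while f.read(1024 * 1024):
        pass
read_s = time.perf_counter() - start
print(f"Read:  {SIZE_MB / read_s:.0f} MB/s (may be inflated by OS caching)")

os.remove(PATH)
```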

Free hard drive space also matters. Your computer does not use RAM exclusively for processing data; it also uses the hard drive. This is especially true when RAM is fully in use, but it can be a factor at any time (memory management is extremely complicated). For this reason, I prefer to keep at least 200GB of free space on my internal drive at all times and would recommend you keep at least 100GB free. Once the free space gets too low, you’ll start to see erratic moments where things slow down. And eventually, you’ll run into errors when you’re completely out of space.

 

Summary of where to invest in computer hardware:

Here’s a quick recap of how to interpret the data:

  • The CPU % is around 100% for long periods of time => A faster CPU clock speed is likely to help
  • The CPU % is much more than 100% for long periods of time => More cores are likely to help
  • The GPU % is >30% for long periods of time or close to 100% for short periods of time => GPU upgrades are likely to help
  • macOS: “memory pressure” shows long periods of yellow or bursts of red => More RAM is likely to help
  • Windows: RAM performance graph shows in use RAM running high => More RAM is likely to help
  • Available space on your internal drive is < 100-200GB => A larger internal drive or freeing up space is likely to help immediately (but be sure to plan ahead for the data you’ll add in the years to come, 2x your current usage is a good target)
  • Applications launch slowly or you open/save uncompressed images => A faster drive is likely to help

It’s also important to keep a few things in mind:

  • Your results are subject to change as your software improves or your patterns of use change
  • Buying a $200 upgrade in the face of uncertainty might be a good bet if the alternative is a risk that you’ll have to replace the whole computer a year or two early

If you’re just looking for general photography advice, I would say the following is ideal in 2021:

  • Fast CPU clock speed is key.
  • Don’t sweat the GPU (get one, but don’t pay for big upgrades unless you need them for games/video); put the money towards CPU and RAM.
  • 32GB RAM, 64GB if you can splurge.
  • An SSD with 2x the internal storage you currently use.

Masking 2.0 in Lightroom and ACR

Adobe just released one of the most important updates to Lightroom (LR) and Adobe Camera RAW (ACR) ever: the introduction of “Masking 2.0”. In this tutorial, we’ll cover what’s new and what it all means. Be sure to read below for lots of details I couldn’t fully cover in the video.

What’s new in Masking 2.0?

Up until now, local adjustments in LR and ACR were mostly based on gradients and brushes represented by pins. It wasn’t nearly as easy to visualize as layer masks in Photoshop and you couldn’t customize the targeting beyond range masks and brushes. The workflow is about to get much more powerful with Masking 2.0.

Instead of a system of pins you could only vaguely visualize with red overlays, you now get to visualize masks in a variety of ways. You can still use the old overlays, and you additionally have several new options (I personally find the first and last options extremely useful):

  • Color Overlay: This is the traditional way we’ve visualized local targeting previously. This is a great way to see both the mask and image at the same time.
  • Color Overlay on Black & White: Shows the same red overlay, but with the underlying image as black and white. This is helpful to remove the distraction of color.
  • Image on Black and White: Shows the image, but with everything that is not part of the mask converted to grayscale. I find this one a bit hard to interpret, but it may be useful for adjustments on highly saturated images.
  • Image on Black: This shows the image, but with everything that is not part of the mask blacked out. This helps see exactly what you’re adjusting and is very helpful for colorful and bright images.
  • Image on White: Similar concept, but with everything that is not part of the mask going white. This would be helpful for seeing what you’re adjusting in high-key images.
  • White on Black: This is exactly how you see a layer mask in Photoshop and is the most useful new overlay. You aren’t getting layers, but you’re getting the exact same way to view a mask, which makes it more intuitive and easier to see the details. This is a very useful new way to review the local targeting.

The new default setting to “automatically toggle overlay” will hide and show your preferred overlay as you work. This may be a little confusing at first. For example, if you zero out all sliders (such as by double-clicking the only adjustment you made), the overlay will become visible again. The logic is that you should see the image when there are real adjustments, but otherwise see the overlay. I find this setting very helpful when working with the “color overlay” mode, but prefer to turn it off when working with white on black so that I’m always seeing the image unless I specifically want to see the mask.

There are new options for targeting. You still have brushes, linear/radial gradients, and color/luminance/depth range masks and now additionally can use:

  • Select Sky to help target the sky (or possibly foreground if inverted)
  • Select Subject to help target people and pets
  • Luminance range now gives you 2 controls over falloff instead of 1. The old smoothness slider has been replaced with the ability to split the ends. Just click and drag the sides of the rectangular box (the full strength range) or the triangles at the end (which designate the point at which the targeting begins).

The most powerful changes are in the new ability to manipulate and combine multiple masks, including the ability to:

  • Invert any mask. Previously, you could only invert a radial gradient. Now, you can target everything which is NOT red, or adjust everything but the area you just brushed.
  • Subtract and Intersect any mask. Previously you could only subtract with a brush or intersect with a color/luminance/depth range mask. Now, you can do things like target a person and use range masks to isolate their skin tones from their yellow jacket. (Note that I’ve lumped these together because “intersect” is billed as a combination of subtract and invert; more on that below.)
  • Add any mask. Previously you could only add with a brush. Now you can do things like adjust multiple gradients at the same time with the same set of sliders, rather than duplicating them and trying to keep the settings in sync.

Each mask is composed of 1 or more “components”, which are the gradients, brushes, select sky, etc. You can think of a component as a sub-mask. These all get combined into the net targeting represented by the mask. Each mask type gets its own icon on the image, such as a little landscape for the sky or a little portrait for select subject. I find this much clearer than a generic pin for any type of adjustment. The component icons only show for the currently active mask.

The mask logic is built using the components from the bottom-up. For example, if you have 3 components and the middle one shows the “-” icon, then the final mask will be built as: start from the bottom component, subtract the middle component, and then add the top component. The indicators for how the components are combined are a bit subtle and include:

  • A subtracted component gets a “-” on its icon
  • An added component simply does not have an indicator, like a default state
  • An inverted component also has no indicator on its icon AND its preview does not show the inversion! But it is indicated in a couple of places: there is a checkbox in the mask options (which appear next to the tools, not the mask) and the “invert” menu option is checked (under the … icon). The mask will show the impact of the inversion, which should be pretty obvious in most cases.
  • While you can find an “intersect” option on some platforms (yes in LR Classic, no on mobile), there actually is no intersected component. Mathematically, it’s the same as subtracting the inverted component, and that’s what you’ll get. So look for both the “-” and “invert” being checked to confirm that you’ve intersected something.

If the mask panel is open but none of your mask components are selected, then you can see a pin representing each full mask if you have selected “Show Unselected Mask Pins”. This can be handy to hover and quickly review each mask. Note that as you hover, the respective mask’s name will become a little brighter to help identify it.

If you already do most of your edits in LR, this should be a huge boost for you. This will make the more complex aspects of LR faster and easier to understand, while unlocking some new capabilities such as select sky/subject and the ability to combine masks. But what about those of you who spend a lot of time in Photoshop (PS)? Should you do more work in LR before heading to PS? Are there cases where you can skip Photoshop entirely?

 

Which workflows will this replace?

Once you’ve had time to get comfortable with Masking 2.0, I think you’ll find that you can more quickly and easily target your local adjustments in LR. You might even move a few steps from PS back into the RAW processing. But on the whole, I think the split of work between LR and PS is likely to remain similar to where it is now. Ultimately, the intention of these changes is to make LR/ACR easier to use and a bit more capable, and Adobe delivered that very well. They aren’t meant to give you layers in LR, expand the adjustments you can make in LR (only the masks), or match Photoshop’s most advanced capabilities.

Most of you following my blog are very interested in Photoshop and luminosity masks and are probably wondering if this will let you replace any of those workflows. For some simpler edits perhaps, just like range masks allowed a few more things to be done in LR/ACR. I find that this new approach makes the adjustments I was already making in LR/ACR faster and more intuitive. I may use the sky targeting for some subtle work on certain images. I’m thrilled to see these updates. At the same time, this will replace hardly any of the advanced workflows I use, and these updates were never intended for that purpose.

To put things in perspective, you still cannot do the following with Masking 2.0:

  • Combine multiple RAW images or use layers of any kind.
  • Create highly precise luminosity masks. The range mask controls are similar to BlendIf in Photoshop, which is insufficient for advanced edits.
  • Use a selection to paint a mask (this is foundational to the precision of luminosity masks in Photoshop).
  • Make local adjustments with any RAW tools you couldn’t already use. So you cannot use these new masks with vibrance, tone curves, HSL, color grading, lens corrections (for targeting chromatic aberration to avoid unwanted effects) or camera calibration.
  • Use any of the tools exclusive to Photoshop (anything on the filter menu, warps, selective color layers, precision cloning and healing tools, etc).

As a result, the following is either impossible to do or better done in Photoshop:

  • Exposure Blending (including multi-processing of a single RAW due to a much larger range of local tools and more precise masks).
  • Advanced dodging and burning. Photoshop offers much more precision with luminosity selections, it’s simpler to work with color, you can apply multiple different strengths of dodging and burning with a single adjustment, and it’s simpler to manage a multi-layer dodge in PS than the equivalent in LR.
  • Focal-length blending: no layers.
  • Time blending: no layers.
  • Perspective blending: no layers.
  • Advanced black and white: Cannot apply different color conversion settings to different parts of the same image.
  • Use 3rd-party plugins like Lumenzia, Web Sharp Pro, Nik Color Efex Pro, etc. There is an interface for plugins in LR, but it does not provide access to the capabilities of PS.
  • And there are far too many more examples to list.

So the bottom line is that Masking 2.0 is (a) an awesome and very welcome improvement to LR and (b) not the end of Photoshop. For most of you, I expect you’ll need a couple weeks to get comfortable with the new interface and then generally find it makes local changes in LR faster and more intuitive.

 

What could be better?

While on the whole these changes make LR/ACR more intuitive, there are a few things which may confuse people:

  • The preview for an inverted mask is not updated. This can be a bit confusing, so keep an eye on the net result and the “invert” checkmark status. I hope to see this changed in a future update.
  • The implementation of intersected masks as subtraction of the inverse may be confusing, especially when trying to replicate previous use of range masks. Or perhaps I just think differently on this as a developer, I’d be curious to hear what others think in the comments below.
  • The parameters for the components (such as range for color targeting, feather for a gradient, etc) aren’t grouped with the masks, but rather above the adjustments (which may be hidden depending on how you’ve scrolled the right-hand column).

I’d also like to see a couple tweaks for efficiency:

  • Zooming into the image to check mask quality is very important, as finding artifacts after a bunch of processing would cause a lot of unnecessary work. Unfortunately zooming into the mask is not simple and intuitive, as the keyboard shortcuts change when viewing masks (for example, you can’t use <Z>).
  • There does not appear to be a way to copy and paste the tool settings from one mask to another. Being able to copy and paste could be very helpful for example if you wanted to compare results between using a sky selection and a linear gradient intersected with a luminance range to see which gave better results. You can duplicate a mask and then swap out the components, but this would be a cumbersome workaround.
  • I also wish there were a faster way to toggle between the red overlays (like Quick Mask in Photoshop) and “white on black” (which is the conventional way a mask appears in Photoshop). Both are very useful because one lets you review the mask in relationship to the image and the other lets you review the mask as clearly as possible.

On the whole, these are little things, and I expect Adobe will continue to improve on this already excellent starting point.

 

The fine print:

There are some little details to this change that may be of interest:

  • Select Sky and Select Subject are NOT based on the unadjusted RAW but the current processed version of the image. This is fine because the mask is fixed and won’t change after creation, but you should be aware of this if you need to optimize the mask. Try this: set all the sliders from exposure down to blacks as far left as they can go and add a sky mask. You will most likely see a pure white mask. So if you’re making extreme adjustments, you might want to consider when you select the sky (before or after big changes).
  • On the other hand, range masks ARE based on the unadjusted RAW. This is ideal, as it means that the targeting does not move around as you adjust the image. But it might also mean that the targeting looks different than you expect. For example, if you increase exposure quite a bit, you might find the highlights for luminance are more in the range of 70-80 than 90-100, because that’s where they started.
  • The new masks work based on “process version 5” (which you can see in the Camera Calibration tab). If you use the new masks on an image using version 3 or 4, it will be updated (as there is no impact on image appearance). However, if you try to use the masks on an image using process version 1 or 2, the new masking options will be greyed out. This is because updating from 2 to 3+ changes the image, and Adobe is trying to protect you from unwanted changes. However, the newer versions are great, and I would recommend going to the Camera Calibration tab to update to v5, then going to the Basics tab and reviewing slider settings to keep the look of the image you want, and then adding your masks. Of course, if you don’t like the impact on the image, you can just go back in the history tab to revert to the old version.
  • Luminance masks which were created in the old version of LR will show an “update” option, but they don’t migrate consistently. I’ve seen some massive changes, so just review carefully if you decide to update this as you’ll probably need to adjust the sliders to keep the same look.
  • The LR mask data is saved in the file with an lrcat-data extension. If you’re backing up or migrating your catalog, be sure to grab all the LR files (and do this when LR is closed, as some of the files are just working files that don’t exist after LR is closed).
  • The traditional gradients are just “vector” masks, which means they take up very little space. However, the new Select Sky and Select Subject masks are bitmaps, meaning they are grayscale images which take up space. In my quick testing, it looks like about 1MB for every 3-4 images from my D850. It will certainly vary with image content and resolution. (Note that I’m not always seeing the lrcat-data file get smaller when I delete sky masks and optimize the LR catalog. It will shrink if you step back in history and do the same, so it seems that the mask is kept if there is a history state involved, even if the mask is not actively in use.)
  • LR v11 does not carry over your previews when upgrading your catalog. This means you’ll have to regenerate previews and Smart Previews (under Library / Previews) if you want to see your images quickly and when the source file is not connected to the computer (such as content on an external drive which is not connected).

Kudos to the ACR / LR teams at Adobe for creating such an incredible improvement. Learn more about these new features via Adobe’s masking post and the new features page for LR and ACR.

Coming Soon: Lumenzia v10

Update: Lumenzia v10 was released October 27, 2021

Many of you (including myself) are eagerly awaiting the arrival of a new Apple Silicon (“M1 Pro/Max”) MacBook Pro. The speed, battery, and display all look incredible.

As you may be aware, Apple Silicon is a huge technology shift where software which is optimized for it can run faster. That naturally raises questions about my own software, which means migrating to Adobe’s UXP platform to run natively for best speed. Web Sharp Pro is already a UXP panel and runs natively on Apple Silicon. Lumenzia v9 is already 100% compatible with Apple Silicon (under Rosetta).

I’m happy to announce that Lumenzia v10 will become available starting Oct 27th as a beta (with official release by the end of the year). This is a FREE upgrade for all Lumenzia customers (even if you bought 6 years ago) and will be a UXP panel which runs natively on Apple Silicon for an extra speed boost and simpler installation.

This launch will be different from previous ones because the Adobe APIs required to support Lumenzia’s needs have only recently become available. It will therefore be going through a beta phase to provide access as soon as possible. To avoid sending too many emails, I will not be sending regular notifications of new betas. Instead, you can stay informed and try the beta by checking the Lumenzia beta page I have created. I anticipate the beta phase will last for 3-4 weeks, during which I plan to rapidly address any reported bugs. Upon completion of the beta testing, I will email all customers to let you know that Lumenzia v10 is officially available.

 

I’m adding some questions and answers here, but please comment below if I can help clarify anything for you.

Q: When will Lumenzia v10 be available? What is the latest version?

A: Please see the Lumenzia beta page for the latest information on timing/versions and how to get the beta.

 

Q: Is this a free update?

A: Yes. If you’ve been a customer since v1, you’ve received >1500 new features, updates, and fixes for free, and I’m happy to continue offering free upgrades as a thank you for your support and loyalty.

 

Q: What is required to run Lumenzia v10?

A: PS v22.5 (I anticipate raising the minimum soon based on the public PS beta, as it addresses some issues that were in the initial release of the new UXP APIs in v22.5). Lumenzia v10 will run on both Mac and Windows (it is not specific to any computer hardware, just the version of Photoshop).

 

Q: What’s new in v10?

A: In order to launch a UXP version as quickly as possible, the initial release is focused on migrating existing capabilities. This is a complete top-to-bottom rewrite of 7 years of code, and has been in progress for over a year now. Future updates will be built on this platform, and there is enormous potential with UXP. The immediate benefits of v10 are primarily faster speed on Apple Silicon, a simplified installation, and a more modern look and feel to popup dialogs from the panel. There are some other minor enhancements coming as well, which I’ll detail when the official v10 becomes available later this year.

 

Q: Who should install and use the beta?

A: I encourage anyone to try it. You can install both v9 (the CEP panel) and v10 (the UXP panel) at the same time without conflict. Apple Silicon users should see a roughly 20% speed boost if you switch off Rosetta.

 

Q: Can I use Lumenzia on older versions of Photoshop?

A: Yes, Lumenzia v9.2 runs on CS6 and all recent versions of CC, and it will remain available in perpetuity for compatibility. It will continue to run fine on older versions of Photoshop, with all the functionality that is available today.

 

 

 

Making the Desert Glow

Get the 5DayDeal: Save 96% on this amazing bundle of tutorials from top instructors. And when you purchase using the link on this post, you’ll also get a free copy of my new Norway: From Start to Finish course (enrollment in the bonus course will be complete by Oct 22 after the 5DayDeal has ended). Learn more about the bundle here.


 

Some of my favorite shots come from the most unexpected moments, like this long-exposure image taken long after sunset. Oftentimes, if you’re patient, you’ll get a second chance at color. The original sunset fades, and then there is a late burst of color. It may be vibrant or somewhat subtle, but it often creates some of my favorite soft images.

When I realized this was going to happen, I raced to set up another shot with a 2-minute exposure to help ensure smooth clouds. The RAW file looked a bit flat and the color was weak, but with the right processing, dodging & burning, and a few other little tricks, I was able to extract an image that lived up to the awe of the moment, taking in the last light of that gorgeous day.

 

I rarely endorse other products and only do so when I think you would thoroughly enjoy them. When you purchase through my link, you will receive my Norway: From Start to Finish course and be supporting me with an affiliate commission at no cost to you, as well as helping fund some great charities.

Don’t let this hidden setting RUIN your RAW smart objects

Camera RAW Smart Objects are hands-down one of the best features of Photoshop. If you don’t know why, first check out my previous tutorials: 3 Kinds of Smart Objects and 3 Common Misconceptions. The beauty of these special smart objects is that they always give you access to alter the RAW processing while keeping the highest-possible quality… unless you overlook this one critical (but hidden) setting.

If you have a RAW Smart Object embedded in a 16-bit, ProPhoto RGB image, you’d assume that’s what the RAW would give you. After all, you can output any RAW file with those settings. But that’s not the whole story. It is true that you will have a 16-bit, ProPhoto RGB layer rendered from your RAW Smart Object. That’s true even if you were in another color space or bit depth, as the layer is reprocessed as needed.

The problem is that Camera RAW Smart Objects contain their own color space and bit depth. Without them, you wouldn’t get a proper preview, RGB readings, or accurate histograms and clipping warnings inside ACR. More importantly, these settings are applied no matter what the settings are in your document. So your ACR settings are applied first, and then the layer is converted to your image’s settings externally. If you have a RAW set to, say, sRGB and 8-bit, the layer will be rendered that way FIRST and then converted to, say, the 16-bit ProPhoto RGB of your document. So yes, you will technically have the requested settings, but it’s just a conversion from a smaller color space and bit depth to a larger one. You’ve already lost a LOT of quality.
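
To make that loss concrete, here’s a minimal sketch (using the third-party numpy library) showing why converting 8-bit data up to 16-bit can’t restore the tonal levels that were already thrown away. The same principle applies to color space: colors clipped to sRGB don’t come back when you later convert to ProPhoto RGB.

```python
# Why an 8-bit to 16-bit conversion can't restore lost precision.
# Requires numpy (pip install numpy).
import numpy as np

# A smooth 16-bit gradient: 65,536 distinct tonal levels.
original = np.arange(65536, dtype=np.uint16)

# Round-trip through 8-bit, as if the RAW had been rendered at 8-bit.
as_8bit = (original >> 8).astype(np.uint8)    # keep only the top 8 bits
back_to_16 = as_8bit.astype(np.uint16) << 8   # re-expand to 16-bit range

print(len(np.unique(original)))    # 65536 distinct levels
print(len(np.unique(back_to_16)))  # only 256 levels survive
```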

To preserve full quality, your Camera RAW Smart Object should use settings which are as good as or better than the document’s color space and bit depth. Ideally these would be the same settings, but you probably won’t see much difference if your RAW is set to ProPhoto inside an Adobe RGB image.

To avoid problems, there are some settings to check and update in both Photoshop and Lightroom. It’s important to check both, as you can get different results with different workflows. If you use Lightroom’s Edit in / Edit as a Smart Object workflow, you’ll open an image using the settings from Lightroom. If you instead open the image directly in Photoshop (basically any other method which invokes the ACR interface when opening the image), then the Photoshop settings will be applied.

Lightroom settings:

  • Go to Preferences / External editing
  • Change the color space to either Adobe RGB or ProPhoto RGB
  • Change the bit depth to 16 bits

Photoshop settings:

  • Go to Preferences / File Handling / Camera RAW Preferences / Workflow
  • Change the color space to either Adobe RGB or ProPhoto RGB **
  • Change the bit depth to 16 bits

** Note that Photoshop (unlike Lightroom, unfortunately) lets you choose any color space to render your Smart Object. So you could use something like Beta RGB or REC2020 if you prefer. However, you should be aware of a limitation with color spaces in ACR. They are not embedded, just referenced by name. So if you choose something non-standard and open the image on another computer where that ICC profile is not installed, you could run into an issue. The image will be fine initially. However, if you double-click into the Smart Object to edit it and its previous profile is not available, ACR will use your default PS preference without warning (this applies to the color space, not bit depth). So simply opening ACR and clicking OK could convert from Beta RGB to Adobe RGB (or whatever is set as the default on that computer).

When you open an image, whichever defaults above apply (depending on how you open it), those settings will be set inside the Smart Object as well as on the document. So checking your document settings at import is a quick way to confirm that things worked internally as expected. You can change those settings afterward, but they will match when the RAW is opened the first time.

The good news is that if you didn’t know about this before, you can still fix your existing work (as long as you haven’t rasterized the Smart Objects). If you have the wrong settings inside the Smart Object, you can update them anytime. For example, if the Smart Object was internally set to 8-bit, you can switch it internally to 16-bit and you’ll get back the lost data. Just double-click the Smart Object, click on the text link at the bottom showing these details, update as desired, and click OK to save the Smart Object.
