A photographer’s review of the M1 Max MacBook Pro

I’ve had my hands on the brand new 2021 MacBook Pro (M1 Max) for a few weeks now. Does it live up to the hype? Yes, absolutely. I’m stunned by how good this computer is in so many regards. This is unquestionably the best computer I’ve ever used for photography, and I wanted to provide a detailed review from a photographer’s perspective, as well as some tips on how to make the most of the transition.

First, some quick background. I’ve been using a fully-loaded 2018 MacBook Pro (MBP). This includes a 2.9GHz 6‑core i9 Intel processor, 32GB memory, Radeon Pro 560X GPU with 4GB of GPU memory, 4TB SSD, and 15″ display. And thanks to a free battery replacement last month from Apple (the original battery was swelling, probably from being continuously charged during Covid), it’s still just like new. But time has moved on: I’ve nearly run out of internal storage, I’ve been wanting a lighter laptop to carry on photography trips, and of course I’m eager to get my hands on anything that will cut down the time I spend waiting while processing images and video. So I was ready to hand Apple my money the moment they released a Pro version of Apple Silicon.

Within 15 minutes of the machines going on sale, I’d bought mine. As a professional doing photography, software, and video, it was an easy decision to buy every upgrade except the 16″ screen. I opted for the smaller screen and battery to save some weight. My 15″ 2018 feels hefty at 4.0lbs, and putting 4.7lbs in my photo backpack for a 16″ screen (and some extra battery life) didn’t sound like fun. With all the other upgrades, my 2021 MBP includes the M1 Max with 10-core CPU, 32-core GPU, 64GB unified memory, and 8TB SSD storage. Of course, it also includes all the standard features like a greatly improved screen, better speakers and mic, upgraded webcam, HDMI port, SDXC card reader, and MagSafe.

The tests below reflect my maxed-out 14″ computer, but I’m using this for business; most photographers should probably just buy the 14″ or 16″ with the base 10-core CPU, 32GB RAM, and a 2-4TB SSD. 64GB of RAM and the upgraded GPU are nice for photography, but the gains are smaller (64GB helps if you work with a lot of large files, and the upgraded GPU helps if you edit a lot of video).

 

Photoshop benchmarks:

I’ve just released a free benchmarking tool for Photoshop called G-Bench. The gains with M1 are very real and very impressive. Here are a few highlights (all tests on a high-resolution image from a Nikon D850):

  • On average, the M1 Max completes tasks in Photoshop in half the time of the 2018 MacBook Pro (57s weighted test time vs 110s).
  • Filter / Reduce Noise takes only 10s vs 25s.
  • Surface Blur takes only 9.5s vs 25s.
  • Field Blur takes 2.5s vs 8.5s.
  • Brush lag is effectively gone even with a 2000 pixel soft brush.

On the other hand, there are some areas where the gains are very small. Saving smart objects (15s vs 16s) and compressed TIF files (21s vs 24s) is barely changed, as the bottleneck here is CPU clock speed, and the M1’s impressive gains are generally in other categories of performance. Of course, if Adobe were to optimize file compression for multi-core CPUs at some point, then we could expect to see huge gains there as well.

Equally surprising to me was how large the performance hit is when running under Rosetta2. I’ve generally been told there is about a 20% loss of performance, but it’s much larger. The weighted test time nearly doubled (57s vs 112s). That made it slightly slower than my 2018 MBP. It’s impressive that the machine can keep up when running under emulation, but you’ll definitely want to run natively as much as possible.

Note that I’ve run these tests on a 14″ M1 Max. There are rumors of a high-power mode for the 16″ version, which amounts to being able to run the processor hotter (faster) thanks to the better cooling of the larger chassis. If that comes to fruition, I’d love to hear what times you see running the same benchmark. I would be surprised to see any benefit in Photoshop, as there’s almost nothing I do in Photoshop that drives the fans much above the minimum. Heavy imports in LR, video exports, and other activities that cause loud fans for a long time are the kinds of tasks where I would expect some possible benefit (though Photoshop might benefit under a heavy multi-tasking scenario).

 

Other performance tests:

I haven’t created benchmarks for these other apps, but wanted to share some comparisons running identical tasks on both my 2018 and 2021 laptops.

  • Gigapixel AI did the same enlargement in 30s vs 180s (ie, the old machine took 6x longer!)
  • Rendering a 20-minute 1080p video from Final Cut Pro X took 2:02 on the new laptop vs 8:43 (a whopping 77% reduction in the time to export, ie the old machine took 4.3x longer)
  • Rendering a different 19-minute 1080p video from ScreenFlow took 9:06 on the new laptop vs 14:03 (a 35% reduction in the time to export, ie the old machine took 1.5x longer). This project was slightly more complex (many more cuts and a second audio track), but nothing that I believe would account for the different times here. The benefits can’t be explained as simply as “this machine is built for video”. Yes, all video apps benefit greatly, but there is an enormous advantage for apps which have been specifically optimized to use multiple cores, GPUs, and Apple Silicon (in general, probably more than just running native).

As you can see, there are gains essentially across the board. For apps which are optimized for performance, the gains can be truly astonishing.
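If you want to double-check comparisons like these yourself, the arithmetic is simple. Here’s a small Python sketch (the helper name is mine, not from any benchmark tool) that converts a pair of raw times into the two figures quoted in the bullets above: percent reduction in waiting, and how many times longer the old machine took.

```python
def compare(new_seconds, old_seconds):
    """Convert raw times into (% reduction in waiting, old/new speed ratio)."""
    reduction = (1 - new_seconds / old_seconds) * 100  # % less time waiting
    ratio = old_seconds / new_seconds                  # old machine took Nx longer
    return round(reduction), round(ratio, 1)

# Final Cut Pro X export: 2:02 on the M1 Max vs 8:43 on the 2018 MBP
print(compare(2 * 60 + 2, 8 * 60 + 43))    # → (77, 4.3)

# ScreenFlow export: 9:06 vs 14:03
print(compare(9 * 60 + 6, 14 * 60 + 3))    # → (35, 1.5)
```

Both figures describe the same comparison, which is worth keeping in mind when reading spec-sheet claims: a “77% reduction” and “4.3x longer” sound very different but are the same result.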

As a general observation, GPU usage is much higher on the Apple Silicon laptop. Some of this is the WindowServer process from macOS, but some comes from the apps themselves. Both video apps showed greater GPU usage, which may come from the native version of the app, macOS on Apple Silicon, or both. The GPU still shows a lot more relative downtime than the CPU, but the gap is smaller than on the Intel machine.

Beyond speed: how is the rest of the 2021 M1 Max?

There is a LOT to love about this new laptop beyond just faster speeds and better battery life. A quick rundown of my favorites:

The silence! This is probably my favorite part of the upgrade. The 2018 laptop routinely runs the fans full tilt, which measures 44dB (sound measurements taken near where my head would be during regular use, via the Decibel X app on iPhone). When running my benchmark test, it quickly ramped to around 28-30dB and then the full 44dB shortly after that. That was as loud as I could make it, even when pushing LR full tilt by re-rendering standard, smart, and 1:1 previews for 300 images. The 2018 laptop makes fan noise that I can hear all the time, even when it’s sitting completely idle and registering around 23-24dB, which is the ambient sound of a quiet room in our house.

The 2021 is so quiet that it hardly ever exceeded the 23dB ambient sound. I can’t hear it idle without putting my ear near the keyboard. It only got up to 26dB at the most challenging parts of my Photoshop benchmark. The reality is that so little Photoshop work uses 8-10 cores for extended periods that you just aren’t making it hot. So I ran the same LR test to rebuild 3 x 300 previews. This finally got it revved up to a maximum of 44dB. So it isn’t that the new machine is quieter at maximum fan speed; it’s that it almost never gets there, while the old one routinely does. You are hardly ever going to hear this machine unless you’re rendering video or batch processing a large number of images.

The battery life! I ran my benchmark test 3 times on each machine starting from 100%. The old machine has a battery that’s only a month old, so I’d say this is a pretty fair head-to-head. Both were running the latest versions of Photoshop and macOS Monterey, without apps running in the background. Not only did the new machine finish in half the time, it still showed 95% battery charge vs only 68% by the time the old machine finished. The improvement is astounding.

The screen brightness has jumped from a maximum of 500 to 1600 nits. Does it look “three times” brighter? Definitely not, but it is clearly brighter, and this will be extremely nice when I need to use the laptop outdoors or in bright ambient conditions. And the quality of the screen is much better, with deeper, richer blacks via the mini-LED technology (which only uses a bright backlight behind pixels that need it, so that it doesn’t show through the dark pixels). The blacks are clearly much darker and the whites are clearly brighter, so the contrast ratio on this display is excellent.

There have been concerns that this new display may show “blooming”, or a bit of a halo around bright pixels isolated against a dark background. I can confirm that I do see this, because mini-LED lights small zones (not individual pixels). That said, it’s minimal: I wouldn’t notice it if I weren’t looking for it, I can sometimes see similar artifacts on the old display, it’s not as bright as the minimum black on the old display, and I already do a lot of my critical editing on a 27″ Eizo anyway. So even if you’re shooting the moon or stars against a black night sky, I wouldn’t be concerned about this.

There have also been questions and concerns about the notch. I mostly don’t notice it, but it does leave fewer options at the top when the menu is really crowded. It also has some quirks where things can effectively be rendered behind the notch, which I assume will be fixed in short order with an update from Apple. But for now, it can be pretty confusing / frustrating, as important system icons may be completely hidden. A quick workaround is to switch apps (as an app with fewer menu commands will show more icons). Given that the real change here is actually giving you more display in the top corners (rather than taking something away from the top-middle), I’m perfectly OK with it, as I’m sure Apple will fix the glitch with hidden content soon enough.

The thing I miss most in this display is simply the size, which was a conscious decision on my part to save weight. 14″ offers noticeably less space once you add in toolbars for LR and PS, but it’s fairly subtle. I’m seeing 20 images in grid view in LR instead of 24. In Photoshop, I can press <tab> to quickly hide and show the side panels in the rare cases where I’d want to see more of the image. I’ll see how I feel after a few road trips, but I suspect I’ll be sticking with 14-15″ sizes over 16″. If Apple were to shave weight by cutting battery capacity, that’s a tradeoff I’d happily take. I can reverse that tradeoff anytime by bringing my HyperJuice 130W USB-C battery, which is an amazing product and good for a full charge on the go.

The new FaceTime HD webcam is amazing. The noise is substantially improved; it feels like switching from ISO 1600 to ISO 200 on your camera. The color rendering is much better, with skin tones looking much more natural. And the boosted resolution provides a meaningful improvement in detail.

The sound of the speakers is incredible on both machines. The 2018 is slightly louder, but I’m comparing a 15″ to the new 14″ chassis and I don’t know if the new 16″ performs differently. I would say that the 2021 sound feels slightly more enveloping and less like it’s coming from a little device in a specific part of the room. I’d say I slightly prefer the new sound.

A built-in SD-card reader (SDXC) is awesome to have again – one less adapter to carry, lose, or forget. And not having to worry about forgetting an HDMI cable removes some stress for making presentations.

 

How to get maximum speed and compatibility with Universal / Apple Silicon apps vs running Intel under Rosetta2:

The migration from Intel to custom Apple ARM chips means better performance and battery life, but also the need for new software and some limitations. Apple has done a remarkable job with “Rosetta2”, which allows you to run legacy apps on Apple Silicon. But to get the best performance, you’ll need to run “universal” or native apps. If you right-click an app and choose “Get Info”, an application kind of “Intel” means it will run under Rosetta2 (if you see “Universal” or “Apple Silicon”, you’re ready to run natively). It’s so seamless that you might easily not realize that you’re running under Rosetta2 and missing out on better performance. If you use Apple’s Migration Assistant to transfer all your existing data from your old laptop to the new one, then you’re very likely bringing over Intel apps and will need to upgrade or reinstall.

I found that all of my Adobe apps were installed as the Intel version, even though universal versions of Photoshop v23, Lightroom v11, etc all exist. This is presumably because Adobe installs only the version your machine needed at the time of download, to save space on your computer. To upgrade to the much faster native versions, just uninstall (be sure to keep settings) and then install again. When you install directly on your Apple Silicon computer, the Creative Cloud Desktop (CCD) will install the universal versions of the apps.

I found this was also the case with some other non-Adobe apps. Some of these could be updated to universal apps, and some are only available as Intel apps at this time. There’s an easy way to review everything (without having to right-click for “Get Info” on each app). Go to the Apple logo at top left / About This Mac / System Report / Software / Applications. Make sure the window is wide enough to see the far right column for “Kind”. Click that column to sort it and look for apps listed as “Intel”. If you see anything listed as “32-bit (unsupported)”, it is so old that it cannot run on any of the three most recent major versions of macOS (you’d need 10.14 or older). If you see “Apple Silicon” or “Universal”, those apps are ready to run natively.

One more thing to know about the universal apps is that they contain both the Intel and Apple Silicon versions of the app, which means you can run natively or under Rosetta2. Running under Rosetta2 may unlock some features in the app which have not yet been migrated to the native version. In Photoshop, this includes CEP panels and the Shake Reduction filter. If you’re trying to find an extension panel under Window / Extensions (legacy), you’ll need to run under Rosetta or install a newer UXP version of the same plugin (Lumenzia v10 and all versions of Web Sharp Pro run natively as UXP panels). If you wish to launch a universal app under Rosetta, please see here for details. Adobe also lets you run under Rosetta if you go into the Creative Cloud Desktop app, click the … icon, and then choose “Open (Intel)”. The normal open option there will use Apple Silicon (and is noted in the tooltip). If you don’t see those options, then you haven’t yet installed the universal version of that app.

 

Important photography apps available native for Apple Silicon

  • All my software runs natively on Apple Silicon; that includes Lumenzia (v10+) and Web Sharp Pro.
  • Adobe Photoshop (v22.3+). Just be sure you’ve updated (the Apple Migration Assistant will import your old Intel version onto your new M1 computer). If you can access Window / Extensions (legacy), the plugins below, or Filter / Sharpen / Shake Reduction, you are running under Rosetta. Just uninstall and reinstall the latest version to replace your Intel version with the Universal build.
  • Adobe Lightroom Classic (v10.3+). Same update comments as Photoshop.
  • Nik Collection. The entire collection (except Perspective Efex) got native support for Apple Silicon starting from v4.2. Aside from providing faster performance, this update is critical if you want to see all the Nik tools without having to run Photoshop under Rosetta.
  • ON1 Photo RAW (2022), which includes Resize if you get the ON1 Photo RAW 2022 Ultimate Upgrade.
  • CaptureOne Pro 22 should run natively. However, I had to buy an upgrade, the license provided to me over email does not work, and customer support has yet to reply after 7 days – so I cannot comment on the software, just the support experience so far.

 

Apps not yet available as native for M1

 

As of the time I’m writing this, the following apps are not native for Apple Silicon, but do run under Rosetta2:

  • Luminar 4. This means slower performance and, more importantly, you won’t see the Photoshop plugins unless you run Photoshop under Rosetta. It is my understanding that no native update is coming for Luminar 4. Instead, Luminar fans are encouraged to update to the upcoming Luminar Neo, which promises much more than just Apple Silicon support. Preorder now to save: if you are logged in and click the nearly hidden option at the top to validate that you are a customer, you’ll be able to upgrade for only $54 (>70% off the normal standalone price of Neo). I’m looking forward to seeing the new AI relight, AI atmosphere, AI sensor dust removal, AI portrait tools like background removal and bokeh, and more.
  • Topaz Gigapixel AI (as of v5.6). Thankfully, it runs much faster under Rosetta on the M1 Max than on the 2018 (doubling the resolution of a D850 image took 31s vs 179s, which is 83% less waiting, or nearly 6x longer on the old laptop). Given how intense this application is, it’s screaming for native support and I’ll be thrilled to see how fast it can go then! Additionally, it does not yet show under PS File / Automation.
    • Note: Gigapixel uses some very interesting optimizations. On the 2018, it runs nearly entirely on the GPU with modest RAM. On the 2021, it runs nearly entirely on the CPU with much higher (2-10x) RAM usage. I’m not sure how much extra RAM matters though, as I deliberately consumed nearly all the free RAM on the computer and Gigapixel didn’t slow down a bit when it had to work with less.
  • Topaz Denoise AI (as of v3.3). While the software does not run native, the plugin does and that’s all that matters (you’ll be able to launch the plugin when Photoshop is running natively and speed is not an issue).
  • Topaz Sharpen AI (as of v3.2.2). While the software does not run native, the plugin does and that’s all that matters (you’ll be able to launch the plugin when Photoshop is running natively and speed is not an issue).
  • Nik Perspective Efex (as of v5). The rest of the Nik Collection got native support for Apple Silicon starting with v4.2. Be sure to check out my demo to make the most of Color Efex Pro. But you won’t be able to run Perspective Efex as a Photoshop plugin yet.
  • NeatImage (as of v8). No ETA on an Apple Silicon version, but they are working on multiple native updates for other products.
  • Adobe Bridge (as of v11). See Adobe’s support article for some limitations and tips.
  • i1Studio calibration software. It runs fine under Rosetta and I really don’t see any reason to care about a native build, as there are no speed or compatibility concerns that I’ve come across.
  • CEP extension panels for Photoshop (anything normally loaded via Window / Extensions (legacy) in Photoshop). You can either run Photoshop under Rosetta or update to UXP versions of these extension panels (which will show under the Plugins menu in PS). Both my Lumenzia and Web Sharp Pro software are available already as UXP panels.

Please let me know in the comments below if you think I’m missing any critical and widespread photography apps from this list.

Disclosure: This article contains affiliate links. See my ethics statement for more information. When you purchase through such links, you pay the same price and help support the content on this site.

G-Bench: test Photoshop performance

As I wrote recently in an article on how to assess the best upgrades for Photoshop performance, it’s important to have some objective measures of performance in order to get good value for your money when upgrading your computer or otherwise optimizing performance. While there are a lot of great benchmarking tools out there, I’ve never felt there were tests designed specifically to assess Photoshop performance in a way that answered my questions: What performance should I expect from Photoshop with the latest hardware? Which features in Photoshop are most improved? Etc.

So I decided to build my own benchmarking test for Photoshop: “G-Bench”, which I am making available for free. It will also be built into my upcoming Lumenzia v10+ and Web Sharp Pro v3.4+ updates (both should be available by Dec 1), where you can find it via the flyout (top-right three bars icon) under Utilities. And I have also created a standalone version of G-Bench in case you don’t have either, or want a dedicated button for more frequent testing (you can download it for free here, as well as from the footer of my newsletters going forward). All of these panels are UXP panels, which means they can run natively on Apple Silicon (and are fully compatible with any Mac / Windows machine running PS 2023 or later).

Rationale for a dedicated Photoshop benchmark

While it’s great to understand how fast your CPU or GPU is in general, that doesn’t tell you how fast a particular program will run. For example, with my upgrade to the M1 Max (see my full review of the M1 for photographers), I found one video editor’s export time cut by 77% and another cut by only 35%. Software performance depends on the software as much as the hardware. If it isn’t optimized for multiple cores, GPUs, new instruction sets in the CPU, etc – then it may not show the gains that a generic bench test suggests.

Furthermore, this applies not just to the program, but specific functions within it. For example, you might find that Surface Blur is suddenly 5x faster, while file saves only improved by 20% because they have different bottlenecks and may be optimized differently. So it’s important that your testing reflect the way you actually use the software. G-Bench is built to reflect the needs of photographers using Photoshop. So it focuses on testing where we tend to end up waiting: running filters, saving smart objects, etc.

Of course, you also use your computer for a lot of things other than Photoshop, so I recommend using G-Bench in addition to other benchmarks. They all complement each other well.

 

How to use G-Bench

Just launch the “test and compare” dialog by clicking the button in G-Bench, or via flyout / Utilities in Lumenzia and Web Sharp Pro. The first time you use it, you’ll need to set your preferred folder. This folder is used both to save the results of tests you run and to review and compare tests you have already run. Once you’ve set that folder, you can click “run a test” to begin. The test may take roughly 5-60 minutes depending on the speed of your computer. Note that the display will update sporadically while the test is running (ie, the preview may appear laggy or frozen in the middle of the test). When it is done, you’ll see a popup dialog with results. If you leave the checkbox in that popup checked, a CSV file with your results will be saved to your folder when you click OK.

You may compare your tests to each other, or to the tests which come with the panel. The “result A” and “result B” dropdowns each contain the same full list of all available tests. You can even import tests from other people by putting a copy of their CSV file into your designated folder. The top of the list reflects tests in your folder, and the bottom of the list reflects tests which come with the panel. Just select the two tests of interest (set A to your system and B to the reference data) and then click “compare results” to create a graph of the “weighted time”, which is the ultimate score for each test.

 

If you’d like to compare details of specific items in the test, just navigate in your file browser to the folder you selected and open the CSV files in Excel or Numbers. The CSV file contains test conditions (such as computer hardware) and results of specific tests (such as the time it took to enlarge the image 2.5x). If you’d like to see the CSV files for the included tests, or use a formatted template to make the results prettier, please click “export included tests” in the test and compare popup window. Note that the result A/B dropdowns will not list duplicates of the internal tests; they will only show once, as included tests.

How to interpret results

I would encourage you to run the entire test a few times. You’ll likely see the main result (weighted time) vary by 1-5%. This is normal, and you can assume that a result in the middle is probably most indicative of performance. If you see a much larger variation, that likely indicates some other factor at work – such as a background program being very active during one run and not another. It could also be the result of changes you’ve made in Photoshop (such as disabling OpenCL, which is one of the factors tracked in the test conditions recorded in the CSV). You might also see variation from one version of Photoshop to the next (such as speed improvements after code optimization, or perhaps a delay if there is a bug).
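As a rough illustration of that sanity check, here’s a short Python sketch. The function and the 5% threshold are my own illustration (based on the normal 1-5% run-to-run variation described above), not part of G-Bench itself:

```python
def check_runs(weighted_times, threshold_pct=5.0):
    """Return the median weighted time and whether the spread looks suspicious."""
    runs = sorted(weighted_times)
    median = runs[len(runs) // 2]                      # middle result is most indicative
    spread_pct = (runs[-1] - runs[0]) / median * 100   # total spread as % of median
    return median, spread_pct > threshold_pct

# Three runs within the normal 1-5% variation:
print(check_runs([57.2, 58.1, 57.6]))   # → (57.6, False)

# One run skewed by heavy background activity:
print(check_runs([57.0, 57.5, 80.0]))   # → (57.5, True)
```

When a run is flagged, the usual fix is simply to re-run it with background apps closed and use the run in the middle.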

The line item tests should be relatively straightforward. Each test is fairly specific. The reported time is an average of numerous loops (in most cases). This repeat testing helps provide better accuracy, as the test time varies a little each run – even under the same conditions. The fastest and slowest loop times are noted, and if you see huge variability there, you may wish to re-run the test. Such variability might be caused by some transient issue that delayed Photoshop, and does not reflect the results you would typically see.

The line items include a “weighting”. The purpose of this is so that each test reflects the real impact it has on a general workflow. You can think of this roughly as my estimation of the number of times you would use a given feature (on average) to edit a photo. So a weighting of 0.1 for Motion Blur indicates that I expect very few workflows will use this specific feature, whereas a weighting of, say, 10 for Adobe Camera RAW indicates I expect that feature would be used frequently. It’s much more predictive than just adding up all the times, but it’s far from perfect and certainly not a representation of exactly how you work. We are all a bit different. If you would like to try different weightings, use the “export included tests” option, open the exported XLSX (Excel template), copy and paste your CSV data into it, and then view the analysis tab to try your own weighting and see the new result. You should then do the same with another data set, as the weighted time is only meaningful when compared to another test using the same weighting.

The total “weighted time” at the top is calculated by taking each test’s average loop time, multiplying it by its weight, and then summing up all those numbers. So, for example, if a test has a time of 7.2 seconds on average and a weight of 2, then it contributes 14.4s to the total weighted time.
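That calculation can be sketched in a few lines of Python (the sample numbers here are illustrative, not actual G-Bench data):

```python
def weighted_time(results):
    """results: list of (average_loop_seconds, weight) pairs, one per line item."""
    # Multiply each test's average loop time by its weight, then sum.
    return round(sum(avg * weight for avg, weight in results), 2)

# A single 7.2s test with weight 2 contributes 14.4s:
print(weighted_time([(7.2, 2)]))                          # → 14.4

# Summing several hypothetical line items:
print(weighted_time([(7.2, 2), (10.0, 0.1), (2.5, 10)]))  # → 40.4
```

This is also why the weighted time is only comparable between tests that use the same weighting: change the weights and you change the score, even with identical hardware.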

A few other notes on the data:

  • The rule of thumb I’ve always applied as an engineer is that if there isn’t a 20% change in performance, you’re unlikely to notice it.
  • The CSV file contains several important notes; be sure to look for them.
  • There is some untracked setup for some tests. For example, the auto-align test creates lighter and darker versions of the base image and rotates them before invoking the alignment. Only the alignment time is recorded in that case. So the total active test time reported in the CSV will be somewhat less than the actual time the test takes to complete.
  • It’s important to keep these numbers in context. I deliberately kept the weighting for the brush tests lower than the expected use. These brush strokes take me about 1 second to do by hand, which means that some of the test time isn’t something I would perceive as any delay from the computer. You can certainly feel brush lag when it takes 2-3s for a large brush. Rather than trying to correct the values for this, I felt it best to just be simple and transparent with the data.
  • These tests measure the time it takes to process the output. I do not have a way to test the time it takes for the ACR interface to open, for example. I’m effectively just testing the time from the moment you click OK in a dialog until you get the result.
  • I do not plan to make many changes to the tests, as revising the tests would make previous test data useless. However, I anticipate I’ll eventually make a v2, and I welcome your comments below on other tests you think would be important to consider for your use of Photoshop for photography.

 

A few general observations from my own testing

I’ve tried running the test under various conditions to see what impacts performance and which aspects of Photoshop tend to be affected. Your results will differ, but here’s what I see on an M1 Max and a 2018 MacBook Pro:

  • While Rosetta2 is amazing for running old software on your M1 Mac, it is also far slower than I expected. My G-Bench times are nearly twice as long when running under Rosetta. That is impressive performance under emulation, but it means my old 2018 laptop runs slightly faster than the M1 under Rosetta2. If you need to run some old plugins occasionally, I would just launch PS under Rosetta selectively. Otherwise, native is clearly the way to go with Apple Silicon.
  • As expected based on evaluating with Activity Monitor, GPU benefits are limited at this time. When I disable the GPU in Photoshop, the test times are nearly 50% longer – but almost the entire difference comes from the smart sharpen and enlargement test items. There is certainly a lot of untapped potential in a computer as powerful as the M1 Max, and I expect that software companies in general will increasingly take advantage of this more often.
  • Background application activity generally doesn’t matter, until you really push things.
    • Allowing a web browser and anti-virus to be active in the background increased the weighted time by just under 3% (most of which came from the browser since there were no new files for the anti-virus to scan). You can safely leave a lot of things in the background without having any detectable impact on Photoshop.
    • If I leave Final Cut Pro X rendering a large video in the background, the weighted time grows by 34%. Put another way, it still takes 43% less time than my old 2018 computer for Photoshop tasks even after I throw this heavy load on it. So even though I may not be using my CPU and GPU to the max in Photoshop, the excess capacity is still a huge benefit because I can run heavy applications in the background without causing much detectable change in Photoshop’s performance.

CPU, GPU, RAM: What should you update?

I haven’t met a photographer yet who wouldn’t appreciate a more powerful computer. We push our machines very hard, and it’s no fun to sit and wait to view and edit images. So naturally, we tend to dream about a “better” computer. But what does that mean? Should I get a really powerful GPU (Graphics Processing Unit) because I work on images? Maybe more RAM? I heard I should get 64GB from a guy who’s never seen how I work. Oh, wait, I need a bigger and faster SSD (Solid State Drive).

Each of these upgrades is obviously going to cost you, and unless you want to throw huge amounts of money at the problem and pray it works, you should probably get a better idea of what you’re going to get for your money. Even little splurges have a cost. You might not care about spending an extra $200 for an upgraded GPU on top of the $2500 you’re already planning to spend, but should you? You could pay for an extra couple of years of Lightroom and Photoshop with that money, so ideally you’re not just guessing that it’s going to help.

While nothing’s a sure bet, there are some simple ways to determine where spending your hard-earned cash will give you the biggest increase in performance. When it comes to speed, several factors tend to play a role (in no particular order of importance, as it varies case by case).

  1. The software you use. Some programs are much more efficient than others for a given task. For example, many sports and wedding photographers love Photo Mechanic for its speed in culling images. And this can all change over time. Some future Photoshop update could slow down due to new features or speed up due to optimized code.
  2. The clock speed of your CPU (Central Processing Unit). This is a measure of raw speed that affects pretty much anything that does not run on the GPU.
  3. The number of CPU cores. Software can be optimized to be broken up into different tasks that are processed simultaneously by multiple cores. But code which is not optimized for multiple cores may see no benefit at all. An analogy would be hiring 4 people to paint your house (where they can effectively break up the work and get the job done faster) vs hiring 4 people to cook your dinner (where they’re more likely to trip over each other in the kitchen than feed you more quickly).
  4. The GPU. This is a complex mix of GPU cores, speed, and GPU RAM (unless you’re on Apple Silicon, where the GPU gets RAM from a shared pool). I’m just going to lump these together, and it should make more sense why when you see the testing below.
  5. The amount of RAM. This is where data resides when it is being processed. If you run out, your computer may use the (much slower) hard drive to help manage the temporary data, or it may simply give you an error.
  6. The speed of RAM. This affects how much time the CPU may spend waiting to retrieve or store temporary data.
  7. Hard drive size. Hard drives may be used to an extent as working memory (such as scratch disks in Photoshop). In situations where the free space runs very low, this can suddenly switch from a non-factor to a major performance issue.
  8. Hard drive speed. This affects the time needed to get and store data, which may affect the speed of opening applications or opening and saving images.

This isn’t a comprehensive list and I’ve simplified a few concepts, but it covers the things that make a big difference for purchasing decisions. On this list, all but #1 are hardware factors which you may be able to improve through an upgrade. Any given task likely requires multiple sub-systems (CPU, RAM, etc), but slow performance is often the result of a specific component taking a very long time for a specific task. The key to knowing where to invest in your computer is knowing where an investment might address a bottleneck, and we can get a very good sense of the opportunities by understanding the bottlenecks we have now.
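The house-painting analogy for CPU cores can be sketched in code. Here’s a minimal Python illustration; the workload is a made-up stand-in for something like an image export, not a real Lightroom task:

```python
import math
import os
from multiprocessing import Pool

def simulate_export(seed):
    """Stand-in for one CPU-heavy task, like exporting a single image."""
    total = 0.0
    for i in range(1, 50_000):
        total += math.sqrt((i * seed) % 97 + 1)
    return round(total, 3)

def export_serial(seeds):
    # One painter: every task runs back to back on a single core.
    return [simulate_export(s) for s in seeds]

def export_parallel(seeds):
    # A painting crew: tasks are split across all available cores.
    # This only helps because each "export" is independent work.
    with Pool(processes=os.cpu_count()) as pool:
        return pool.map(simulate_export, seeds)
```

Timing both versions on a batch of images would show the parallel one approaching a core-count speedup, while work that can’t be split (like many single-core Photoshop operations) sees no benefit from extra cores at all.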

Using the tests below, you can help identify the bottlenecks affecting your work. Of course, this might change. The software industry as a whole has been doing a lot of work on optimizing code for multiple cores and for GPUs, and so a future software upgrade might remove a bottleneck you have today. But knowing your current bottlenecks is crucial, and you can make an educated guess from there as to how much margin of safety you might want given how long you expect to use a new computer or hardware upgrade.

My video tutorial shows how to test on MacOS using “Activity Monitor”, but I’ll add details below on how to perform similar tests on Windows using “Task Manager”.

Quick steps to setup Activity Monitor on MacOS:

  1. Go to Applications / Utilities / Activity Monitor
  2. Click on the “memory” tab in the main Activity Monitor Window (you can go to Window / Activity Monitor if it isn’t showing)
  3. Right click on the column headings to show and hide columns until your screen shows the same columns as mine. You can resize and reorder the columns by clicking and dragging.
  4. Go to View / Update frequency and set it to Often for 2 second intervals. This will give you a faster read on transient processes (1 second may be too variable).
  5. Click on Window / CPU History to see CPU usage over time. Click and drag the edges to get a nice display of all cores. You’ll probably see that every other core shows little activity; these are “virtual cores”, and the total number of cores shown is probably 2x the physical number of cores in your CPU.
  6. Click on Window / GPU History to see GPU usage over time. You may see more than one GPU depending on your system.
  7. You can also check or uncheck Window / Keep CPU Windows on Top if you want to keep both CPU and GPU history visible while you test.

Note that there are several columns available for memory. It’s a tricky subject, and I would just show “memory” as a way to gauge which apps are eating a lot of RAM. Ultimately, the green/yellow/red memory pressure graph is the best indicator of when the total system RAM is insufficient for what you and the operating system are collectively consuming.

 

Quick steps to setup Task Manager on Windows:

  1. Search for “Task Manager” in the start menu
  2. In the “processes” tab, right click the column headers to add “GPU”. You can then see CPU and GPU percentage here.
  3. Click on the “performance” tab to see graphs over time of CPU and RAM usage. You can also click “open resource monitor” at the bottom of the performance tab and then click the “memory” tab to see a plot and % of physical RAM used.
  4. Go to the menu for Options / Always on Top if you want to keep Task Manager visible as you test.

 

Tips for watching these metrics:

  • Test in a real environment. Background applications like anti-virus, backup software, open web browsers, etc all consume important resources. But if that’s how you use the machine, then that’s how you should test the machine. The goal of this testing is not better numbers; the goal is to get solid data to help you make a decision. The exception is if you’re trying to optimize your software setup, which you should do before you collect data to inform hardware purchases (since you might be able to get more out of the hardware you have).
  • The metrics generally apply to specific tasks, not to programs on the whole. For example, some parts of Photoshop are optimized for the GPU and others are not. To get a true sense of the bottlenecks, you should run a variety of tasks representing the activities where the machine feels sluggish to you. Just do a variety of tasks you normally do and watch the metrics.
  • These are all meant as helpful guides. It’s not an exact science, so always use good judgement. Hitting a bottleneck on a process you normally let run when you step away from the computer isn’t all that important. It doesn’t matter if your videos take 2 hours or 6 to render if you’re going to be sleeping when it happens.
  • If you have questions on the options, see the support articles on Activity Monitor and Task Manager.

 

CPU bottlenecks

The main indicator of CPU performance is CPU %. A process which consistently runs at a high percentage is heavily dependent on the speed of your CPU, and a higher clock speed would help. A process which consistently sits right around 100% (one core’s worth) is not optimized for multiple cores. In general, CPU clock speed matters tremendously for Photoshop, with many operations still using a single core. Multiple cores are much more important for importing and exporting in Lightroom, where you’ll likely see CPU usage vastly higher than 100%.

As a general rule of thumb, I find that any bottleneck which pegs at exactly 100% CPU usage is the one where a computer upgrade gives the least boost. This reflects a software bottleneck, and most hardware upgrades are more impressive with software which is optimized to utilize the full capabilities of a modern computer.

 

GPU bottlenecks

A good indicator for GPU utilization is the GPU history. You can watch the GPU%, but it tends to move too erratically to give you the best picture. If the GPU is sitting near 0 all the time, you clearly won’t get an immediate benefit from upgrading. If it is sitting around 30% or more for decent stretches, it may not quite be the biggest bottleneck, but upgrading is likely to offer meaningful gains in performance. If it is running up to much larger values during important slowdowns, then an upgraded GPU is very likely to be beneficial.

Another test (new in Photoshop 2022) is under Help / GPU Compatibility. If you see red X’s, then you are probably going to find some Photoshop features are greyed out and others run slowly. If you don’t see all green, investing in a GPU is a good idea. You should definitely have a GPU that meets Photoshop’s minimum requirements.

You’ll probably be surprised at how little your GPU is actually used by most photography apps. It generally offers much more benefit for gaming and video. And the benefits are very app-specific: some programs take advantage of the GPU more than others. Most photographers won’t see a lot of benefit from a top-of-the-line GPU.

 

RAM bottlenecks

The amount of RAM (memory) is crucial. When you have more RAM than you need, the system bottleneck will be in the CPU or GPU. But when RAM is limited, the computer will likely start to depend significantly on the hard drive and cause massive slowdowns. Which is to say adding more RAM will either help a lot (if you don’t have enough) or hardly at all (if you consistently have enough). Adding more than the computer uses won’t give you any benefit.

RAM speed bottlenecks are hard to test without physically swapping it. But generally, this isn’t a huge factor for performance and I wouldn’t worry about it too much. The total amount of RAM is generally much more important.

I would say that 32GB of RAM is the sweet spot for most photographers these days. You can definitely get by with only 16GB, but you’d likely see faster results with 32. I’ve personally used 32GB for years with great results most of the time, but have just upgraded to 64GB to avoid a few current slowdowns I run into and give myself flexibility as my software and needs continue to grow. If you work on very large and complex images or don’t care about cost, then 64GB is a good option.
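As a programmatic companion to the memory-pressure graph, here’s a rough Python sketch for thinking about whether a working set fits in RAM. It is Unix-only, and the sizing rule (image size times layers times open documents) is a hypothetical illustration, not a measured Photoshop figure:

```python
import os

def physical_ram_bytes():
    # Unix-only: page size times the number of physical pages.
    return os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")

def fits_in_ram(image_mb, layer_count, open_images, headroom=0.5):
    """Rough check: does an editing session's working set fit in a
    fraction of physical RAM? The headroom leaves room for the OS
    and background apps."""
    working_set = image_mb * layer_count * open_images * 1024 * 1024
    return working_set < physical_ram_bytes() * headroom
```

For example, ten open 500MB master files with a handful of layers each would comfortably blow past 16GB under this model, which is exactly the scenario where 32GB or 64GB starts to pay off.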

A quick note on the various memory options in Activity Monitor. I consulted with some very deep experts on the topic, and they didn’t know the answer. I’ve read a lot of information online that claims to be true, but you’ll quickly find some counterexample showing that a given interpretation is dead wrong. Apple seems to be deliberately cagey about what the various memory columns mean, and they’ve changed a bit over time. The “memory” and “real memory” columns can give you some idea as to which applications are consuming a lot of memory when you start really pushing the memory pressure into yellow/red territory. I am told these values represent actual RAM (not virtual use of the disk), and that things get very confusing with shared use of some memory across multiple processes. Don’t get too worried about the details; you’ll end up pulling out your hair trying to figure out why one is typically larger but then sometimes the other is much larger. But it can still be helpful for identifying problems (such as a faulty app consuming too much RAM).

 

Hard drive bottlenecks

The most important decision here is simply to ensure you have enough space to hold critical data. The picture gets a little murky, as you can often buy cheaper external drives as you need more storage rather than buying a very expensive internal SSD now. I believe a good rule of thumb is to buy a new computer with 2X the amount of internal storage you actually use now (with a goal of keeping a minimum of 100-200GB of space unused at all times). This assumes you have stable patterns of usage; you’ll have to be more thoughtful if you recently started producing video, for example. It also assumes you aren’t using the internal storage in a wasteful manner now. Tools like DaisyDisk can be very helpful for seeing what’s consuming space now in case you want to move some files to an external drive or just put them in the trash.

Hard drives of course also affect the speed of the computer in a couple of ways. First, a fast hard drive will allow you to more quickly open applications and images. The difference between a cheap HDD (spinning hard disk drive) and SSD (solid state drive) can be massive. And there are various degrees of SSD speed, which can make a difference, to a point. However, if you are compressing your images, opening and saving those images is actually bottlenecked significantly by the CPU (compression is not multi-core optimized at this time). I find that reading an uncompressed image is about 3X faster and saving is about 20X faster. So a faster drive may offer very little benefit when opening and saving compressed images.

Hard drive space also matters. Your computer does not use RAM exclusively for processing data; it also uses the hard drive. This is especially true when RAM is fully in use, but it can be a factor at any time (memory management is extremely complicated). For this reason, I prefer to keep at least 200GB of free space on my internal drive at all times and would recommend you keep at least 100GB free. Once the free space gets too low, you’ll start to see erratic moments where things slow down. And eventually, you’ll run into errors when you’re completely out of space.
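The free-space rule of thumb above is easy to automate with the standard library. A small Python sketch (the 100GB floor simply mirrors the recommendation above):

```python
import shutil

MIN_FREE_GB = 100  # the free-space floor suggested above

def free_space_gb(path="/"):
    # shutil.disk_usage reports total/used/free bytes for the volume.
    return shutil.disk_usage(path).free / 1024**3

def needs_cleanup(path="/", min_free_gb=MIN_FREE_GB):
    """True when the volume has dropped below the free-space floor
    and erratic slowdowns become more likely."""
    return free_space_gb(path) < min_free_gb
```

You could run something like this as a periodic check, though in practice a tool like DaisyDisk gives you the same warning plus a map of what to delete.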

 

Summary of where to invest in computer hardware:

Here’s a quick recap of how to interpret the data:

  • The CPU % sits around 100% for long periods of time => A faster CPU clock speed is likely to help
  • The CPU % is much more than 100% for long periods of time => More cores are likely to help
  • The GPU % is >30% for long periods of time or close to 100% for short periods of time => GPU upgrades are likely to help
  • MacOS: “memory pressure” shows long periods of yellow or bursts of red => More RAM is likely to help
  • Windows: RAM performance graph shows in use RAM running high => More RAM is likely to help
  • Available space on your internal drive is < 100-200GB => A larger internal drive or freeing up space is likely to help immediately (but be sure to plan ahead for the data you’ll add in the years to come, 2x your current usage is a good target)
  • Applications launch slowly or you open/save uncompressed images => A faster drive is likely to help
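The recap above can be condensed into a toy decision function. The thresholds are just the article’s rules of thumb, and the function is illustrative rather than definitive:

```python
def upgrade_advice(cpu_pct, gpu_pct, ram_pressure_high, free_gb):
    """Map observed metrics to upgrade candidates.

    cpu_pct follows Activity Monitor's convention where each core can
    contribute 100 (e.g. 400 means roughly four busy cores).
    """
    advice = []
    if 90 <= cpu_pct <= 110:
        # Pegged at about one core's worth: clock speed is the limit.
        advice.append("faster CPU clock speed")
    elif cpu_pct > 110:
        # Sustained use of multiple cores: more cores would help.
        advice.append("more CPU cores")
    if gpu_pct >= 30:
        advice.append("better GPU")
    if ram_pressure_high:
        advice.append("more RAM")
    if free_gb < 100:
        advice.append("larger internal drive (or free up space)")
    return advice
```

For example, a machine showing 50% CPU, an idle GPU, and yellow/red memory pressure would come back with only “more RAM”, which matches the interpretation above.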

It’s also important to keep a few things in mind:

  • Your results are subject to change as your software improves or your patterns of use change
  • Buying a $200 upgrade in the face of uncertainty might be a good bet if the alternative is a risk that you’ll have to replace the whole computer a year or two early

If you’re just looking for general photography advice, I would say the following in 2021 is ideal:

  • Fast CPU clock speed is key.
  • Don’t sweat the GPU (get one, but don’t pay for big upgrades unless you need them for games/video); put the money towards CPU and RAM.
  • 32GB RAM, 64GB if you can splurge.
  • An SSD with 2x the internal storage you currently use.

Masking 2.0 in Lightroom and ACR

Adobe just released one of the most important updates to Lightroom (LR) and Adobe Camera RAW (ACR) ever: the introduction of  “Masking 2.0”. In this tutorial, we’ll cover what’s new and what it all means. Be sure to read below for lots of details I couldn’t fully cover in the video.

What’s new in Masking 2.0?

Up until now, local adjustments in LR and ACR were mostly based on gradients and brushes represented by pins. It wasn’t nearly as easy to visualize as layer masks in Photoshop and you couldn’t customize the targeting beyond range masks and brushes. The workflow is about to get much more powerful with Masking 2.0.

Instead of a system of pins you could only vaguely visualize with red overlays, you now get to visualize masks in a variety of ways. You can still use the old overlays, and you additionally have several new options (I personally find the first and last options extremely useful):

  • Color Overlay: This is the traditional way we’ve visualized local targeting previously. This is a great way to see both the mask and image at the same time.
  • Color Overlay on Black & White: Shows the same red overlay, but with the underlying image as black and white. This is helpful to remove the distraction of color.
  • Image on Black and White: Shows the image, but with everything that is not part of the mask converted to grayscale. I find this one a bit hard to interpret, but may be useful for adjustments on highly saturated images.
  • Image on Black: This shows the image, but with everything that is not part of the mask blacked out. This helps see exactly what you’re adjusting and is very helpful for colorful and bright images.
  • Image on White: Similar concept, but with everything that is not part of the mask going white. This would be helpful for seeing what you’re adjusting in high-key images.
  • White on Black: This is exactly how you see a layer mask in Photoshop and is the most useful new overlay. You aren’t getting layers, but you’re getting the exact same way to view a mask, which makes it more intuitive and easier to see the details. This is a very useful new way to review the local targeting.

The new default setting, “automatically toggle overlay”, will hide and show your preferred overlay. This may be a little confusing at first. For example, if you zero out all sliders (such as by double-clicking the only adjustment you made), the overlay will become visible again; the logic is that you should see any real adjustments, but otherwise see the overlay when there are none. I find this setting very helpful when working with the “color overlay” mode, but prefer to turn it off when working with white on black so that I’m always seeing the image unless I specifically want to see the mask.

There are new options for targeting. You still have brushes, linear/radial gradients, and color/luminance/depth range masks and now additionally can use:

  • Select Sky to help target the sky (or possibly foreground if inverted)
  • Select Subject to help target people and pets
  • Luminance range now gives you 2 controls over falloff instead of 1. The old smoothness slider has been replaced with the ability to split the ends. Just click and drag the sides of the rectangular box (the full strength range) or the triangles at the end (which designate the point at which the targeting begins).
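The split-falloff control for luminance ranges can be made concrete with a small sketch of how such a weight could be computed. The linear ramp here is an assumption for illustration; Adobe does not publish the exact falloff curve:

```python
def luminance_weight(v, lo, hi, lo_feather, hi_feather):
    """Mask strength in [0, 1] for a pixel of luminance v.

    [lo, hi] is the full-strength range (the rectangular box), and
    each feather sets where targeting begins on that side (the end
    triangles), independently per end.
    """
    if lo <= v <= hi:
        return 1.0  # inside the full-strength range
    if lo_feather > 0 and lo - lo_feather < v < lo:
        return (v - (lo - lo_feather)) / lo_feather  # shadow-side ramp
    if hi_feather > 0 and hi < v < hi + hi_feather:
        return ((hi + hi_feather) - v) / hi_feather  # highlight-side ramp
    return 0.0  # outside the targeted range entirely
```

The key difference from the old smoothness slider is visible in the two independent feather parameters: the falloff toward shadows no longer has to match the falloff toward highlights.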

The most powerful changes are in the new ability to manipulate and combine multiple masks, including the ability to:

  • Invert any mask. Previously, you could only invert a radial gradient. Now you can target everything which is NOT red, or adjust everything but the area you just brushed.
  • Subtract and Intersect any mask. Previously you could only subtract with a brush or intersect a color/luminance/depth range mask. Now, you can do things like target a person and use range and luminance masks to isolate their skin tones from their yellow jacket. (Note that I’ve lumped these together because “intersect” is billed as a combination of subtract and invert, more on that below).
  • Add any mask. Previously you could only add with a brush. Now you can do things like adjust multiple gradients at the same time with the same set of sliders, rather than duplicating them and trying to keep the settings in sync.

Each mask is composed of 1 or more “components”: the gradients, brushes, Select Sky, etc. You can think of a component as a sub-mask. These all get combined into the net targeting represented by the mask. Each component type gets its own icon on the image, such as a little landscape for Select Sky or a little portrait for Select Subject. I find this much more clear than a generic pin for any type of adjustment. The component icons only show for the currently active mask.

The mask logic is built using the components from the bottom-up. For example, if you have 3 components and the middle one shows the “-” icon, then the final mask will be built as: start from the bottom component, subtract the middle component, and then add the top component. The indicators for how the components are combined are a bit subtle and include:

  • A subtracted component gets a “–” on its icon
  • An added component simply has no indicator, like a default state
  • An inverted component also has no indicator on its icon AND its preview does not show inverted! But it is indicated in a couple of places: there is a checkbox in the mask options (which appear next to the tools, not the mask) and the “invert” menu option is checked (under the … icon). The mask itself will show the impact of the inversion, which should be pretty obvious in most cases.
  • While you can find an “intersect” option on some platforms (yes in LR Classic, no on mobile), there actually is no intersect component. Mathematically, intersecting is the same as subtracting the inverted component, and that’s what you’ll get. So look for both the “–” and “invert” being checked to confirm that you’ve intersected something.
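The bottom-up combination logic (and why intersect behaves like subtracting the inverse) can be modeled in a few lines. Masks here are simple lists of strengths from 0 to 1; this is a conceptual sketch, not Adobe’s actual implementation:

```python
def invert(mask):
    # Flip the mask: targeted becomes untargeted and vice versa.
    return [1.0 - v for v in mask]

def add(base, comp):
    # Union-like: combined strength is capped at full (1.0).
    return [min(1.0, a + b) for a, b in zip(base, comp)]

def subtract(base, comp):
    # Remove the component's strength, floored at zero.
    return [max(0.0, a - b) for a, b in zip(base, comp)]

def intersect(base, comp):
    # As described above: intersect is implemented as subtracting
    # the inverted component.
    return subtract(base, invert(comp))

def build_mask(components):
    """Combine (mask, op) pairs from the bottom up. op is "add",
    "subtract", or "intersect"; the bottom component starts the result."""
    ops = {"add": add, "subtract": subtract, "intersect": intersect}
    result, _ = components[0]
    for mask, op in components[1:]:
        result = ops[op](result, mask)
    return result
```

Under this simple model, intersecting two half-strength (0.5) pixels yields 0.0 rather than 0.5, which hints at why intersect-as-subtract-of-the-inverse can surprise people working with soft masks.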

If the mask panel is open but none of your mask components are selected, then you can see a pin representing each full mask if you have selected “Show Unselected Mask Pins”. This can be handy to hover and quickly review each mask. Note that as you hover, the respective mask’s name will become a little brighter to help identify it.

If you already do most of your edits in LR, this should be a huge boost for you. This will make the more complex aspects of LR faster and easier to understand, while unlocking some new capabilities such as select sky/subject and the ability to combine masks. But what about those of you who spend a lot of time in Photoshop (PS)? Should you do more work in LR before heading to PS? Are there cases where you can skip Photoshop entirely?

 

Which workflows will this replace?

Once you’ve had time to get comfortable with Masking 2.0, I think you’ll find that you can more quickly and easily target your local adjustments in LR. You might even move a few steps from PS back into the RAW processing. But on the whole, I think the split of work between LR and PS is likely to remain similar to where it is now. Ultimately, the intention of these changes is to make LR/ACR easier to use and a bit more capable, and Adobe delivered that very well. They aren’t meant to give you layers in LR, expand the adjustments you can make in LR (only the masks), or match Photoshop’s most advanced capabilities.

Most of you following my blog are very interested in Photoshop and luminosity masks and are probably wondering if this will let you replace any of those workflows. For some simpler edits perhaps, just as range masks allowed a few more things to be done in LR/ACR. I find that this new approach makes the adjustments I was already making in LR/ACR faster and more intuitive. I may use the sky targeting for some subtle work on certain images. I’m thrilled to see these updates. At the same time, this will hardly replace any of the advanced workflows I use, and these updates were never intended for that purpose.

To put things in perspective, you still cannot do the following with Masking 2.0:

  • Combine multiple RAW images or use layers of any kind.
  • Create highly precise luminosity masks. The range mask controls are similar to BlendIf in Photoshop, which is insufficient for advanced edits.
  • Use a selection to paint a mask (this is foundational to the precision of luminosity masks in Photoshop).
  • Make local adjustments with any RAW tools you couldn’t already use. So you cannot use these new masks with vibrance, tone curves, HSL, color grading, lens corrections (for targeting chromatic aberration to avoid unwanted effects) or camera calibration.
  • Use any of the tools exclusive to Photoshop (anything on the filter menu, warps, selective color layers, precision cloning and healing tools, etc).

As a result, the following is either impossible to do or better done in Photoshop:

  • Exposure Blending (including multi-processing of a single RAW due to a much larger range of local tools and more precise masks).
  • Advanced dodging and burning. Photoshop offers much more precision with luminosity selections, it’s simpler to work with color, you can apply multiple different strengths of dodging and burning with a single adjustment, and it’s simpler to manage a multi-layer dodge in PS than the equivalent in LR.
  • Focal-length blending: no layers.
  • Time blending: no layers.
  • Perspective blending: no layers.
  • Advanced black and white: Cannot apply different color conversion settings to different parts of the same image.
  • Use 3rd-party plugins like Lumenzia, Web Sharp Pro, Nik Color Efex Pro, etc. There is an interface for plugins in LR, but it does not provide access to the capabilities of PS.
  • And there are far too many more examples to list.

So the bottom line is that Masking 2.0 is (a) an awesome and very welcome improvement to LR and (b) not the end of Photoshop. For most of you, I expect you’ll need a couple weeks to get comfortable with the new interface and then generally find it makes local changes in LR faster and more intuitive.

 

What could be better?

While on the whole these changes make LR/ACR more intuitive, there are a few things which may confuse people:

  • The preview for an inverted mask is not updated. This can be a bit confusing, so keep an eye on the net result and the “invert” checkmark status. I hope to see this changed in a future update.
  • The implementation of intersected masks as subtraction of the inverse may be confusing, especially when trying to replicate previous use of range masks. Or perhaps I just think differently on this as a developer, I’d be curious to hear what others think in the comments below.
  • The parameters for the components (such as range for color targeting, feather for a gradient, etc) aren’t grouped with the masks, but rather above the adjustments (which may be hidden depending on how you’ve scrolled the right-hand column).

I’d also like to see a couple tweaks for efficiency:

  • Zooming into the image to check mask quality is very important, as finding artifacts after a bunch of processing would cause a lot of unnecessary work. Unfortunately zooming into the mask is not simple and intuitive, as the keyboard shortcuts change when viewing masks (for example, you can’t use <Z>).
  • There does not appear to be a way to copy and paste the tool settings from one mask to another. Being able to copy and paste could be very helpful for example if you wanted to compare results between using a sky selection and a linear gradient intersected with a luminance range to see which gave better results. You can duplicate a mask and then swap out the components, but this would be a cumbersome workaround.
  • I also wish there were a faster way to toggle between the red overlays (like Quick Mask in Photoshop) and “white on black” (which is the conventional way a mask appears in Photoshop). Both are very useful because one lets you review the mask in relationship to the image and the other lets you review the mask as clearly as possible.

On the whole, these are little things, and I expect Adobe will continue to improve on this already excellent starting point.

 

The fine print:

There are some little details to this change that may be of interest:

  • Select Sky and Select Subject are NOT based on the unadjusted RAW but the current processed version of the image. This is fine because the mask is fixed and won’t change after creation, but you should be aware of this if you need to optimize the mask. Try this: set all the sliders from exposure down to blacks as far left as they can go and add a sky mask. You will most likely see a pure white mask. So if you’re making extreme adjustments, you might want to consider when you select the sky (before or after big changes).
  • On the other hand, range masks ARE based on the unadjusted RAW. This is ideal, as it means that the targeting does not move around as you adjust the image. But it might also mean that the targeting looks different than you expect. For example, if you increase exposure quite a bit, you might find the highlights for luminance are more in the range of 70-80 than 90-100, because that’s where they started.
  • The new masks work based on “process version 5” (which you can see in the Camera Calibration tab). If you use the new masks on an image using version 3 or 4, it will be updated automatically (as there is no impact to image appearance). However, if you try to use the masks on an image using process version 1 or 2, the new masking options will be greyed out. This is because updating from 2 to 3+ changes the image, and Adobe is trying to protect you from unwanted changes. However, the newer versions are great, and I would recommend going to the Camera Calibration tab to update to v5, then going to the Basics tab to review slider settings to keep the look you want, and then adding your masks. Of course, if you don’t like the impact on the image, you can just go back in the history tab to revert to the old version.
  • Luminance masks which were created in the old version of LR will show an “update” option, but they don’t migrate consistently. I’ve seen some massive changes, so just review carefully if you decide to update this as you’ll probably need to adjust the sliders to keep the same look.
  • The LR mask data is saved in the file with an lrcat-data extension. If you’re backing up or migrating your catalog, be sure to grab all the LR files (and do this when LR is closed, as some of the files are just working files that don’t exist after LR is closed).
  • The traditional gradients are just “vector” masks, which means they take up very little space. However, the new Select Sky and Select Subject masks are bitmaps, meaning they are grayscale images which take up space. In my quick testing, it looks like about 1MB for every 3-4 images from my D850. It will certainly vary with image content and resolution. (Note that I’m not always seeing the lrcat-data file get smaller when I delete sky masks and optimize the LR catalog. It will shrink if you step back in history and do the same, so it seems that the mask is kept if there is a history state involved even if the mask is not actively in use.)
  • LR v11 does not carry over your previews when upgrading your catalog. This means you’ll have to regenerate previews and Smart Previews (under Library / Previews) if you want to see your images quickly and when the source file is not connected to the computer (such as content on an external drive which is not connected).

Kudos to the ACR / LR teams at Adobe for creating such an incredible improvement. Learn more about these new features via Adobe’s masking post and the new features page for LR and ACR.

Coming Soon: Lumenzia v10

Coming soon: Lumenzia v10 (beta available starting October 27, official release by end of year)

Many of you (including myself) are eagerly awaiting the arrival of a new Apple Silicon (“M1 Pro/Max”) MacBook Pro. The speed, battery, and display all look incredible.

As you may be aware, Apple Silicon is a huge technology shift where software which is optimized for it can run faster. That naturally raises questions about my own software, which means migrating to Adobe’s UXP platform to run natively for best speed. Web Sharp Pro is already a UXP panel and runs natively on Apple Silicon. Lumenzia v9 is already 100% compatible with Apple Silicon (under Rosetta).

I’m happy to announce that Lumenzia v10 will become available starting Oct 27th as a beta (with official release by the end of the year). This is a FREE upgrade for all Lumenzia customers (even if you bought 6 years ago) and will be a UXP panel which runs natively on Apple Silicon for an extra speed boost and simpler installation.

This launch will be different from previous ones because the Adobe APIs required to support Lumenzia’s needs have only recently become available. It will therefore go through a beta phase to provide access as soon as possible. To avoid sending too many emails, I will not be sending regular notification of new betas. Instead, you can stay informed and try the beta by checking the Lumenzia beta page I have created. I anticipate the beta phase will last 3-4 weeks, during which I plan to rapidly address any reported bugs. Upon completion of beta testing, I will email all customers to notify you that the official Lumenzia v10 is available.

 

I’m adding some questions and answers here, but please comment below if I can help clarify anything for you.

Q: When will Lumenzia v10 be available? What is the latest version?

A: Please see the Lumenzia beta page for the latest information on timing/versions and how to get the beta.

 

Q: Is this a free update?

A: Yes. If you’ve been a customer since v1, you’ve received >1500 new features, updates, and fixes for free, and I’m happy to continue offering free upgrades as a thank you for your support and loyalty.

 

Q: What is required to run Lumenzia v10?

A: Photoshop v22.5 or later (I anticipate raising the minimum requirement soon based on the public PS beta, which addresses some issues in the initial release of the new UXP APIs in v22.5). Lumenzia v10 will run on both Mac and Windows (it depends only on the version of Photoshop, not on any specific computer hardware).

 

Q: What’s new in v10?

A: In order to launch a UXP version as quickly as possible, the initial release is focused on migrating existing capabilities. This is a complete top-to-bottom rewrite of 7 years of code, and it has been in progress for over a year. Future updates will be built on this platform, and there is enormous potential with UXP. The immediate benefits of v10 are faster speed on Apple Silicon, a simplified installation, and a more modern look and feel for popup dialogs from the panel. There are some other minor enhancements coming as well, which I’ll detail when the official v10 becomes available later this year.

 

Q: Who should install and use the beta?

A: I encourage everyone to try it. You can install both v9 (the CEP panel) and v10 (the UXP panel) at the same time without conflict. Apple Silicon users should see a roughly 20% speed boost from running natively rather than under Rosetta.

 

Q: Can I use Lumenzia on older versions of Photoshop?

A: Yes. Lumenzia v9.2 runs on CS6 and all recent versions of CC, and it will remain available in perpetuity for compatibility. It includes all the functionality that is available in Lumenzia today.

Greg Benz Photography