A photographer’s review of the new M4 MacBook Pro

Early Black Friday deal: If you’re looking for a lower-cost option and you don’t mind using an external SSD, B&H has an incredible deal on the entry-level 14″ M4 MBP for $1399.

 

I just upgraded to the new M4 Max MacBook Pro (MBP). That’s a significant cost given it’s replacing an M3 Max MBP I bought only one year ago. But as the center of my art and business, I consider it a worthy upgrade (especially given that my productivity affects my income and the net loss on upgrade costs is somewhat offset by tax considerations). Not many photographers should feel the need to upgrade from the M3 (or even M1) to this new laptop. But I think this is a notable launch for a few reasons: it pushes the bounds for extreme users, has some great display updates, and will make the excellent older M1-M3 computers (new or used) more affordable for more people.

See my recommendations below for good, better (14″ or 16″), and best (14″ or 16″) options for most photographers.

What’s new and notable in the M4 MacBook Pro?

Before we discuss what’s new, I think it’s important to consider just how unique the 14-16″ Apple Silicon MacBook Pros are. All of them (all the way back to M1) offer:

  • The best laptop display for photographers:
    • I do not say this lightly; there is simply no other laptop on the market that offers a comparable experience.
    • Best-in-class HDR displays.
    • Extremely high color accuracy (calibration is optional for most users with these displays).
  • Outstanding performance and battery life.
    • Apple Silicon is a major advantage. These laptops run for a long time, with almost no fan noise, and no need for bulky chargers.
    • While Snapdragon X Elite chips look very promising for the future, there simply is not enough support for Windows-on-ARM software yet (for example, you cannot run any UXP plugins in Photoshop on Windows ARM).
  • Very high quality. These laptops look great, they’re tough, and the keyboard and trackpad feel great to use.

 

The M4 MacBook Pro raises the performance bar compared to the M3 series in several ways:

  • M4 Max offers roughly 20% faster CPU / GPU performance vs the M3 Max.
  • Much easier to see clearly when working near windows or outside thanks to:
    • SDR brightness can now be raised to 1000 nits (vs the previous 500 for M1 / M2 or 600 for M3).
    • A new nano-texture display option to cut down on glare.
  • Less fan noise (they were already rare, but run even less often now)
  • Upgraded webcam to support Center Stage and better results in challenging lighting conditions.
  • Thunderbolt 5 to support 3x faster external devices (on the Pro and Max models)
  • Memory bandwidth has increased by 20-75% (up to a max of 546 GB/s vs a previous max of 400)
  • Battery life on the base and Pro versions has been increased by 2-4 hours (up to 24 hours total).
    • Note that the 14″ Max version is the same and the 16″ Max actually decreased by 1 hour.
  • Minimum specs have been significantly improved for the base model.
    • Minimum RAM has been increased to 16GB (at no extra cost!)
    • Even the base model now supports at least 2 external monitors (even with the laptop lid open, so you can see three screens total).

The gains here are incremental over the M3, but this is a larger generational leap than in previous years given the display and other updates on top of the performance improvements. This helps solidify the MacBook Pro’s position as what I already consider the best laptop for photographers. I’m historically pretty agnostic in the Apple vs PC debate – but after trying a wide range of computers over the past year, I believe Apple has a notable edge in laptops for photography use. They feature a best-in-class HDR display, optimal performance / battery life, and excellent overall quality.

 

M4 MAX display:

Apple has the best laptop displays I’ve ever seen, and I have tested dozens of laptops in the past year. Their gorgeous HDR displays and high color accuracy are in a class of their own. And now the M4 display is significantly improved even over the M3, making it much easier to see for productivity work in bright environments.

There are a few notable changes in the M4 display:

  1. Nano-texture display (optional for $150)
  2. Increased peak SDR brightness (this is a software change, but likely enabled by improved efficiency with the QD film).
  3. The display apparently changed from a red KSF phosphor film to QD (quantum dot).
  4. The color is a bit different than the prior displays.

That’s rather technical, and we’ll step through each of those changes below. But if you want to know the bottom line, it’s this: the M4 offers a vastly better display for working in bright ambient light. The other changes will grab the attention of color nerds, but shouldn’t affect most photographers.

The new nano-texture display is very nice. It significantly cuts glare from strong lights behind you, which makes it much easier to read text or enjoy photographs. The image quality is excellent under both bright and dark ambient light. The display is much easier to view with typical reflections. If you have a very powerful light source shining directly on the display, the nano-texture spreads it and it can actually be worse. But this is a terrible condition for either display and you should avoid things like letting the sun shine directly on the screen. Repositioning the display is the correct solution for those extreme scenarios. I see no downsides to this new option and think it is well worth a $150 upgrade to be able to more easily see the screen when working in the field, a coffee shop, etc. I expect this will be one of my favorite new features of the M4 over time.

The M4 now supports 1000 nits SDR (up from 600 nits in the M3 and 500 nits in the M1 / M2). This makes the display easier to see for productivity work in bright environments (it has no benefit for HDR, as the display is still limited to 1600 nits peak brightness). The increased limit is not something you can set via a slider or a custom preset. Instead, an elegant system offers it automatically only when beneficial. The 1000-nit SDR brightness is allowed when all of the following are true:

  • The ambient light sensor detects a bright environment (this sensor is located right next to the webcam).
  • “Automatically adjust brightness” is enabled in System Settings / Display.
  • The brightness slider is set to the maximum.
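
Put another way, the 1000-nit SDR mode is gated by a simple three-way AND of those conditions. A trivial sketch (the function name is mine, not an Apple API):

```python
def allows_1000_nit_sdr(ambient_is_bright: bool,
                        auto_brightness_on: bool,
                        slider_at_max: bool) -> bool:
    """All three conditions described above must hold at once."""
    return ambient_is_bright and auto_brightness_on and slider_at_max

print(allows_1000_nit_sdr(True, True, True))   # True
print(allows_1000_nit_sdr(True, False, True))  # False (auto brightness disabled)
```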

To be clear, you cannot achieve 1000 nits by manually sliding to the max brightness if the automatic option is not enabled (personally, I believe setting the slider to maximum should always allow automatic use of 600-1000 nits based on ambient light, as the reasons for using a fixed brightness don’t apply when you’re pushing to the max and the limit is so far above any reasonable reference viewing condition). For those familiar with HDR: if you set the slider to the max without auto, the headroom will be 1.4 stops (implying ~600 nits SDR on a display that can reach 1600 nits). If you turn on auto and set the brightness slider to the max, HDR headroom will range from 0.7 to 2.0 stops depending on the ambient light (implying SDR white luminance from 400 to 1000 nits). In other words, enabling auto brightness lets the SDR white luminance move within a range determined by the brightness slider and the ambient light. I recommend enabling automatic brightness. Then, if you need to target a fixed value for controlled / reference viewing, use a custom profile (for example, 80-120 nits is often ideal for critical photography work). You can create a custom profile under System Settings / Display at the bottom of the “Preset” dropdown.
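
The headroom figures above follow directly from the ratio of peak luminance to SDR white: headroom in stops is log2(peak / SDR). A quick sketch using the values discussed above (1600-nit peak):

```python
import math

def hdr_headroom_stops(peak_nits: float, sdr_white_nits: float) -> float:
    """HDR headroom in stops is the log2 of the peak-to-SDR luminance ratio."""
    return math.log2(peak_nits / sdr_white_nits)

# 1600-nit panel at the SDR white points discussed above
print(round(hdr_headroom_stops(1600, 600), 1))   # 1.4 stops (slider maxed, auto off)
print(round(hdr_headroom_stops(1600, 1000), 1))  # 0.7 stops (auto on, bright room)
print(round(hdr_headroom_stops(1600, 400), 1))   # 2.0 stops (auto on, dim room)
```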

The change to a quantum dot (QD) film for the mini-LED display is not something I have seen documented by Apple, but it appears to be confirmed through testing by Ross Young. On that same post, note the comments from Blur Busters confirming that the M4 also shows improved pixel response time. If you view this test from Blur Busters and try to follow the bars with your eyes, you will see a cyan shift on the M3, whereas the M4 shows the white bars remaining neutral. The color shift in fast content on the old display is the result of the red / green / blue sub-pixels not getting brighter / darker at the same rate. The new display offers better motion response and eliminates this cyan “flash”. Most people probably won’t notice the difference, but the new display is improved. Assuming this all confirms the use of quantum dot technology, it likely implies improved efficiency and probably helped Apple enable the new 1000-nit SDR mode. The wider spectrum of the red sub-pixel may also reduce “observer metamerism” (ie the risk that different people perceive color on the display differently when the red / green / blue primaries are very narrow).

The color accuracy out of the box for the M4 is great, as has been the case with all recent Apple displays. But that does not mean perfect, nor identical to other Apple displays. My M4 shows a slight but visible / measurable red bias in grey values, while my M3 shows a comparable bias towards green. This is just a sample size of two and could be variability within spec, but I suspect the more likely reason is the new quantum dot film and its resulting impact on the “power spectral distribution” (PSD) of the display. It’s possible the difference here is due to metameric failure (ie, the different spectral emission may not allow a perfect match). If anything, I expect the M4 is now more accurate, as the peaky reds of the older displays are not ideal (probably more prone to metamerism). I don’t know and don’t have a spectrophotometer to confirm (which would be ideal to make both displays as accurate as possible). The changes here may affect the accuracy of profiling with a colorimeter (because such a device has to assume the PSD), so you may wish to contact your vendor to see if your colorimeter needs a software update.

 

M4 MAX test results:

All my test results are a direct comparison of the fully-loaded 14″ M4 Max to the fully-loaded 14″ M3 Max.

Photoshop test results:

My G-Bench Photoshop benchmarking software is meant to evaluate performance on Photoshop tasks relevant for photographers. The time required to complete key tasks is weighted based on the estimated likelihood a photographer would use it. In other words, it’s meant to give you a reasonable way to compare how fast Photoshop would feel subjectively for a photographer.
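
To make the weighting concrete, here is a minimal sketch of how a weighted score like this can be computed (the task names, times, and weights below are hypothetical illustrations, not G-Bench’s actual values):

```python
# Hypothetical tasks: (name, seconds to complete, weight = estimated likelihood of use)
tasks = [
    ("open_raw",         4.0, 0.9),
    ("adjustment_layer", 1.5, 0.8),
    ("gaussian_blur",    6.0, 0.3),
]

# Weighted score: task times scaled by how often a photographer would run them
# (lower is faster, i.e. a better subjective experience).
score = sum(seconds * weight for _, seconds, weight in tasks)
print(round(score, 2))  # 6.6
```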

The M4 Max achieved a weighted score of 34 (lower is faster). That’s roughly 18% faster than the comparable M3 Max (and roughly 40% faster than the M1 Max). Note that my previous testing of the M3 Max showed a score of 44, whereas it now scores 41.4; the M3 has actually improved over the past year due to improvements in Photoshop itself.

The most significant gains in terms of total time were in opening images, smart objects, creating new adjustment layers (very helpful for actions/panels), and various blurs. The total test time (without weighting) decreased by 30 seconds (from 3:16 to 2:46). The most significant benefits here are for those who handle a lot of batch processing, large images, and smart objects.

I additionally tested Topaz Gigapixel, which ran 32% faster on the M4 (1:14 vs 1:38).

 

Lightroom test results:

Importing and exporting show the most benefit under the M4, which is consistent with the gains being related to writing data or tasks which are intensive on the CPU:

  • Importing RAW or DNG files was 16% faster on the M4 (unlike my prior M3 vs M2 testing, I did not see a big difference).
  • Exporting JPG was 27% faster on the M4.
  • Applying AI Denoise was 7% faster on the M4 (27:30 vs 29:38 for 65 Nikon Z7ii images).
  • During the denoise test, the M3 fans kicked in sooner and continued to be significantly noisy for 5 minutes longer, so the M4 avoided 6+ minutes of significant fan noise.
  • Exporting lossy DNG was actually 12% slower. I can’t think of a clear reason for the poor DNG results and suspect there may be room for the M4 to improve here with future software updates from Apple and/or Adobe (which might mean some of the advantages grow larger).

Video test results:

I did some very limited testing with video export and found mixed results for rendering 12-19 minute videos.

  • FCPX 11 was 40% faster when exporting a 12-minute movie as ProRes 4444 XQ (39 vs 65s) and 13% faster when exporting the same movie as H.264 (66 vs 76s).
  • Handbrake was 13% faster, saving 29 seconds when exporting a 16-minute video (3:21 vs 3:50).
  • Screenflow was 8% faster, which saves 55 seconds exporting a roughly 15-minute video (11:11 vs 12:06).
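
For reference, the “X% faster” figures in these export tests appear to follow the time-reduction convention: the seconds saved relative to the slower run. A quick check against the numbers above:

```python
def percent_faster(new_seconds: float, old_seconds: float) -> int:
    """Time reduction relative to the slower run, as a rounded percentage."""
    return round(100 * (old_seconds - new_seconds) / old_seconds)

print(percent_faster(39, 65))                 # 40 (FCPX ProRes 4444 XQ)
print(percent_faster(3*60 + 21, 3*60 + 50))   # 13 (Handbrake)
print(percent_faster(11*60 + 11, 12*60 + 6))  # 8  (Screenflow)
```

(A few figures elsewhere in this review, such as the Gigapixel result, read instead as throughput gains, i.e. old / new minus 1.)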

Other test results:

Beyond this, I generally found about a 10-20% benefit for various CPU-intensive tasks (GPU-intensive tasks showed little benefit in my testing of current versions of software used for photography and video).

  • My build tasks for creating my Photoshop plugins were 12% faster.
  • Compressing 900 images into a ZIP was 10% faster (saving 31 seconds, 4:56 vs 5:27).
  • BlackMagic Design’s disk speed test showed write speeds were about 5% faster: 8300 MB/s write and 5750 read on the M4 vs 7900 and 5600 for the M3.


Conclusions:

  • I highly recommend any of the M1-M4 14-16″ MacBook Pros for anyone using an Intel Mac laptop, PC laptop users interested in HDR, or anyone making the switch from desktop to laptop. A used, refurb, or closeout M2 is a very attractive way to get the best possible value.
  • These improvements make the best laptop for photography even better. I don’t say that lightly – I try to remain technology agnostic, but I believe the advantages of Apple Silicon and the XDR display offer clear and objective benefits for those focused primarily on photography.
  • The key highlights are significant improvements to performance across a wide range of applications, a much better display for working in challenging light (which is very common for landscape photographers), and significantly less fan noise.
  • While the M1 remains a great laptop, the M4 is getting close to the point where it cuts the time of many tasks in half. They’re both fast, but the M4 is blazing fast.
  • There are clear gains over the M2 and M3 as well, but probably not enough for many photographers to consider an early upgrade.

The M4 MacBook Pro is likely to be a compelling upgrade if:

  • You own any computer other than an M1 or later 14-16″ MacBook Pro. The HDR (XDR) display alone is worth an upgrade if you don’t already own one of its predecessors; it is in a class of its own. And the performance is stunning compared to anything before the M1.
  • You are upgrading not just from M2 or M3, but also increasing specs (such as bumping the CPU or storage).
  • You frequently work in bright environments where you will appreciate the nano-texture and brighter display.

See my prior reviews of the M1, M2, and M3 Max.

 

Recommended configurations for photographers:

I recommend the 14″ laptop for lightweight travel (ideally with an external monitor at home). The 16″ display offers valuable room for toolbars and such, and is highly recommended if you won’t travel with it much, don’t use an external monitor, or want a larger HDR display (as options for external HDR monitors are currently limited / pricey).

Most photographers can use a fairly basic CPU option, but should get 16-32GB of RAM and target internal storage twice as large as the data you currently store to ensure room for growth. Apple has created a fairly complex set of feature dependencies (likely to help encourage some upgrades and manage logistics/cost). So it helps to take your time to look through the details.

A few key things to consider:

  • Test your CPU / GPU / SSD to know where your bottlenecks likely are, as this is where upgrades will give you the most benefit. Don’t guess – Apple has done a masterful job of tying certain upgrades to others (from a business perspective, their price segmentation is almost as impressive as the product – they clearly know how to encourage people to make bigger upgrades than they otherwise would have).
  • Storage is the best place to cut cost: it is the most expensive option and the only upgrade you can make after purchase. See suggested external SSDs below.
  • The nano-texture display is also worth considering for any of these tiers if you are likely to work in a bright environment.
  • The 16″ display offers a huge boost in usable screen and is definitely worth the upgrade if you’re buying into at least the Pro M4 level. However, the 14″ is much lighter and is probably best regardless of your budget if you travel much or hike with your laptop. The 14″ display is also the only option if you don’t wish to upgrade to at least the Pro chip.
  • The most useful upgrades are a minimum 24GB RAM and the nano-texture.
  • RAM is one of the cheaper upgrades and likely to offer pretty good gains up to 36GB (beyond which the returns are smaller and depend on your usage).
  • Unless you do a lot of video or massive import/export jobs with LR, you can easily skip upgrades to the number of CPUs / GPUs.

 

Here are the options I think make the most sense for photographers:

  • Good (budget-conscious): 14″ 10-core M4 with 16GB RAM and 1TB SSD for $1749 (there is no 16″ version with the base M4 chip, you’ll have to jump to the “better” category for a larger screen)
    • This offers a fast, high-quality computer with an outstanding HDR display. 16GB is the minimum RAM a photographer should purchase.
    • An upgrade to 24GB may be a valuable investment in the long run, as you’ll likely get more years out of the laptop and/or recoup some of that cost when you sell it later.
    • You could save even more with 512GB storage, but you’ll be very dependent on external drives for storage.
    • Alternatively, the remaining new M3 inventory should be available at great prices, and there will surely be many great deals on older used models.
  • Better (ideal for photography): base M4 Pro with 24GB RAM and 1TB SSD for $2,199 ($2,699 for the 16″ version).
    • The Pro CPU offers an extra Thunderbolt port over the base model and is required if you want a 16″ screen.
    • An upgrade to 48GB RAM is certainly worth considering (and probably offers more benefit than the upgraded Pro CPU / GPU options for Photoshop).
  • Best (for heavy Lightroom import/export, serious video work, or if money is no object): M4 Max, 48GB, 2TB for $4,099 ($4,399 for the 16″ version)
    • If you are buying the 14″ model, you will have to upgrade the CPU / GPU to go beyond 36GB of RAM. That’s sufficient for most photographers, but 48-64GB will benefit heavy users.
    • If you are buying the 16″ model, you might consider the 36 GB RAM and lesser CPU / GPU option for $3,899.

I went all in on the fully-loaded 14″ MacBook Pro, and particularly appreciate the large internal storage (I’m already using over 6TB, with another 13TB on an external RAID). The only option I skipped is the 16″ screen. I have a Pro Display XDR monitor for my HDR work and prefer the lightweight 14″ model for travel.

 

I recommend the following options to complement the laptop:

  • External SSD drives
    • Internal storage is convenient – but this is much cheaper and a good way to expand if needed down the road.
    • USB SSD’s I personally use and recommend:
      • Sandisk Extreme Portable for up to 4TB. Very fast / compact and connects with a single cable. I find this a great option for backing up the computer, or adding more storage if the laptop’s internal drive isn’t large enough (always be sure to back up your drives).
      • Samsung T5 EVO 8TB. I’ve only had mine for a few weeks, but it is working great.
      • Vectotech 8TB. I have three of these and have used them for a long time without issue. They were my top pick for large storage before the price on the Samsung recently dropped.
    • A Thunderbolt drive should be much faster than USB, but at a higher cost. As I have ample internal storage, I have personally opted for cheaper USB drives and have limited experience with the common Thunderbolt models. Also, given their cost, most of them have limited reviews, so it is hard to comment on reliability and performance. If you buy a laptop with a small internal drive and plan to use an external as primary storage, a Thunderbolt drive is well worth considering (and you should have a robust backup strategy for any external drive, as all of them are likely less reliable than the internal Apple SSD). While I have not tested it personally, the LaCie Rugged Thunderbolt SSD has a large number of positive reviews.
  • CalDigit TS4 dock. This makes it very easy to plug your laptop into everything with a single cable (which includes power for the laptop and data connections to monitor, hard drives, Ethernet, mouse, etc). It includes two downstream Thunderbolt ports, which I find very handy so that I can turn off my RAID drive without losing access to downstream devices. I owned the previous TS3 and it’s also a great option if you don’t care about multiple downstream TB ports.
  • An external HDR monitor. This is optional, but nice if you also want a larger HDR display to complement the outstanding one built into the MacBook Pro.

 

Should you consider the new M4 Mini? The only benefit is lower cost. If we compare my “good” laptop option above to the comparable M4 Mini (16GB RAM, 1TB SSD), you will save a maximum of $750. I say maximum because you may still need to buy a monitor, keyboard, mouse, and speakers to use it. The extra cost of the laptop gives you a best-in-class HDR display and portability. If you do not care about either, then the Mini is a great value. And if you are eager for HDR, you can pick up one of these great recommended 42″ TVs for HDR for less than $1000. So it depends on your needs, but I think many photographers would benefit significantly from the laptop.

 

Disclosure: This article contains affiliate links. See my ethics statement for more information. When you purchase through such links, you pay the same price and help support the content on this site.

AI order of operations in Camera RAW

Adobe Camera RAW has added several incredible AI-based tools in the past year, including AI Denoise, Generative Remove and Expand, Lens Blur, adaptive profiles, and AI-based masking.

It’s an incredible lineup of tools which work directly on the RAW, which allows results that are not only higher quality, but also non-destructive. And this second benefit has created a bit of a new conundrum: your AI edits may interact with each other and require updating. For example, if you use the generative AI remove tool, you might need to update parts of your AI select sky mask to reflect new image content. As a result, you may sometimes see ACR show a button in the top left prompting you to “Update AI Settings”. This shows when you make an edit which requires updating existing AI edits in the image. If you don’t click to update the content, you might have significant problems (such as a bad sky mask).

While it would be amazing if these updates could happen automatically, there are several reasons why it is probably not practical for ACR to do this for you:

  1. Some of these AI tools run in the cloud and you might be offline. For example, you must be connected to the internet to use the generative remove tool.
  2. Performance may not be ideal. Some of these tools take several seconds (or much longer on older computers) and you probably wouldn’t want ACR to get locked up too often. Consider AI denoise: moving the amount slider requires updates to other AI content and you probably wouldn’t want a delay every time you move that slider. Better to tweak the amount and then let things like sky masks update once.
  3. Battery life or fan noise may be a problem if ACR was constantly updating AI content, as this requires significant CPU / GPU resources.

Perhaps we’ll see the need for updates reduced over time as Adobe continues to improve the AI tools, but these considerations probably won’t go away entirely. Either way, it’s part of the workflow required now to take advantage of these great tools.

Follow ACR’s order of operations for more efficient workflows:

Now that you understand why we need to manually update the AI content, it’s important to know how you can manage it effectively. There are a couple of key principles to know:

  • You must click the “Update AI Settings” button to ensure you do not have artifacts.
    • However, you don’t have to click it right away; just do so before you’re done editing the RAW to ensure the final output is ok.
    • You can also click just the circular arrow to update one step at a time, which is mostly helpful for educational purposes (to see which update causes which change).
  • If you edit in the same order the AI works, you’ll save a lot of time (and can avoid the update button).
    • There is an order of operations to the AI. For example, denoise is always done before the other AI tools.
    • Anytime you impact one of the earlier steps, the later ones must be redone.

If you click the down arrow on the update AI settings button, ACR will show you the preferred order of operations. Anytime you change something higher on the list, everything below it must be redone. So the ideal order for you to work if you wish to avoid AI updates is:

  1. Denoise. While this is non-destructive, it is particularly ideal to get this slider right before you do any generative expand or remove (as redoing those can create artifacts and requires careful review).
  2. Generative expand. Due to resolution limits, you may not use this much yet. If you’re using this feature for a social export, you might want to do this on a duplicate layer/image – as you probably don’t want to commit to this yet for a full resolution image you might print.
  3. Generative remove. So ideally, you should deal with AI-based “cloning” before you start working on things like a sky mask.
  4. Lens blur.
  5. Adaptive profile. This tends to be fairly safe, as changes above won’t cause big changes here.
  6. Local masks. These tend to be fairly safe to update without causing problems, but it’s always ideal to review the details for any AI-based mask (which includes the first group of options under “create new mask”).
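
The dependency rule (“anything below a changed step must be redone”) can be modeled as a simple ordered list. This is only a toy illustration of the rule described above, not how ACR tracks it internally:

```python
# ACR's AI order of operations, earliest step first (as listed above)
ACR_AI_ORDER = [
    "Denoise",
    "Generative expand",
    "Generative remove",
    "Lens blur",
    "Adaptive profile",
    "Local masks",
]

def steps_needing_update(changed_step: str) -> list[str]:
    """Everything later in the pipeline than the changed step must be redone."""
    return ACR_AI_ORDER[ACR_AI_ORDER.index(changed_step) + 1:]

print(steps_needing_update("Generative remove"))
# ['Lens blur', 'Adaptive profile', 'Local masks']
```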

If you had to boil it down to the most important lesson: always do AI denoise first and try not to change it later. It’s great that you can if you need to, but you’ll save yourself a lot of work reviewing other local AI details if you don’t revisit denoise. Any time you change it, you may force updates to gen remove / expand, which need close attention to ensure you don’t introduce unwanted changes.

For more info, see Adobe’s support page.

Cats vs dogs (HDR gain map test)

I created this dog / cat test image to experiment a bit on my Instagram account. It’s a gain map which is designed to make it very easy to tell if you are viewing with HDR support. When you view this image and have HDR support, you will see a dog. But if you have an SDR display (or almost no HDR headroom at all), you will see a cat.

This test image takes advantage of the fact that a “gain map” offers two different views of an image, based on the level of HDR headroom you have. The intent is that you would encode two variations of the same image: a basic SDR which is safe for viewing on any display, and an enhanced HDR which looks much better on displays which support both HDR and gain maps.

But I’ve hacked the gain map format so that it renders two completely different images. If you lack HDR support, you see the base image (the SDR image of a cat). If you have at least 0.5 stops of HDR headroom, the gain map is applied to generate the alternative version of the image. The intention would normally be to encode HDR content, but I’m not really using HDR pixel values in the dog image. So the effect is that you see one of two regular images, depending on your display’s capabilities.
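
Conceptually, a gain map stores a per-pixel (log) gain that is applied to the base SDR image, weighted by how much headroom the viewer’s display has. Here is a deliberately simplified sketch of that idea; the names are mine, and real gain map formats add offsets, per-channel gains, and other details:

```python
def render_pixel(sdr, log2_gain, display_headroom, map_headroom=1.0):
    """Blend from the SDR base toward the alternate rendition.
    Headroom values are in stops; the blend weight is clamped to [0, 1].
    Simplified illustration only -- not the full gain map math."""
    w = min(max(display_headroom / map_headroom, 0.0), 1.0)
    return sdr * 2 ** (log2_gain * w)

# No headroom: the base SDR value shows unchanged (you see the cat)
print(render_pixel(0.25, log2_gain=2.0, display_headroom=0.0))  # 0.25
# Ample headroom: the gain is fully applied (you see the dog)
print(render_pixel(0.25, log2_gain=2.0, display_headroom=1.0))  # 1.0
```

Note that a negative log2_gain would make a pixel darker in the alternate rendition, which is how an image like this can push pixels in either direction between the cat and the dog.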

What are you seeing here? 

  • This image is a JPG gain map.
  • The base image is an SDR image of a cat.
  • The alternative image shows a dog. This is encoded for HDR, but I’m not really using HDR values.
  • So the effect is that you should see one of two different images based on whether your display supports HDR / gain maps (and whether you have more or less than 0.5 stops of HDR headroom, which you can confirm via test #1).
  • (At the bottom below, I’ve added discussion on some ghosting you may see of the cat. It’s not a real concern, but the extreme nature of my test exposes an edge case.)

I have posted a higher-resolution version of this image on Instagram (IG), where the experience is also a bit different due to how IG uses it.

This simple test can teach us a lot about Instagram (IG), gain maps, and HDR support in general.

First, a few key things to know about gain maps:

  1. If you encode your HDR image properly, everyone gets a great experience every time on any device.
    • The intended use is not to share two unrelated images like I am here, but this demonstrates how incredibly flexible gain maps are.
    • For a real photo, the worst case is the same result you would have achieved if you did not share HDR – and those with support see a much better image.
    • This makes gain maps the key to sharing HDR. Without them, you have no idea what your audience will see, and no ability to ensure a great result everywhere.
  2. The majority of people on IG will be able to see HDR photos properly
    • IG is used primarily on phones and most phones less than 4 years old have great HDR support.
  3. Other people will see different versions of your image.
    • This has always been the case even with SDR, primarily due to differences in color accuracy, color gamut, display brightness, and ambient light.  HDR just introduces some additional variation.
    • This is not at all a concern if you are doing #1 correctly (a proper gain map provides a very predictable and consistently excellent result).
    • However, if you do not understand gain maps and do not check that the SDR fallback looks as you expect, others may have a poor experience. This is especially true if you simply share an HDR image with no gain map (in which case bad tone mapping is a risk and you have no artistic control over the fallback SDR image).
    • This is easy to test by simply viewing your JPG gain map in Safari or Firefox; neither supports HDR gain maps at this time, so both will simply show the base SDR image.
    • I will share more information on this topic through this website and my newsletter in the future to help you get consistently great results. I recommend my tutorial on gain maps as a good starting point.
  4. Your image may be altered anytime you upload to a website.
    • Whenever you upload your image, it will almost always be “transcoded” to a new image. For this reason, you should ideally confirm that your HDR image still looks like the original after you upload it.
    • This transcoding may be done to reduce file size or resolution, crop the image, check for hidden malware, etc.
    • When the image is transcoded, the result may intentionally or accidentally strip the gain map from your image and result in an SDR-only experience. (It might even be converted to a static HDR as noted below, though this is very unlikely).
    • In time, this should happen much less frequently as most of this is simply a lack of support in back end tools for transcoding gain maps. But there are some cases where a service provider may intentionally limit HDR (such as to minimize variability in a mix of SDR and HDR images in a grid).
  5. It is possible for pixels in an HDR image to be darker than in the SDR.
    • Some of the dark pixels for the cat become much brighter (and some bright pixels become darker) in order to render the “HDR” dog.
    • This isn’t terribly important for most people to know, but I think it nicely demonstrates the incredible power and flexibility of gain maps – even when working with just a pair of 8-bit JPG images.
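The multiplication at the heart of a gain map can be sketched in a few lines of Python. This is a deliberately simplified version of the real math (actual decoders also handle offsets, per-channel gains, and display capacity limits), and the function name is my own for illustration:

```python
import numpy as np

# Simplified gain map math (illustrative only): each HDR pixel is the
# SDR base pixel multiplied by 2 raised to a per-pixel gain in stops.
# A negative gain darkens a pixel below its SDR value, which is how an
# HDR rendition can contain pixels darker than the SDR fallback.
def apply_gain_map(sdr: np.ndarray, log2_gain: np.ndarray) -> np.ndarray:
    return sdr * np.exp2(log2_gain)

sdr = np.array([0.25, 0.50, 1.00])       # linear SDR base pixels
log2_gain = np.array([-1.0, 0.0, 3.0])   # darken 1 stop / keep / boost 3 stops
print(apply_gain_map(sdr, log2_gain).tolist())  # [0.125, 0.5, 8.0]
```

Note the first pixel ends up darker than its SDR value, which is all that point #5 above is describing.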

On Instagram (IG), you may see only a cat, only a dog, or it may change!

Here are some examples of what I see:

  • On a computer, the image will show as a dog in both the grid and large view, if you are viewing on a browser / display which supports HDR and gain maps.
    • No real surprises here, this is how gain maps should work.
    • If you are on MacOS, I recommend you consider using Chrome, Brave, Edge, or Opera. Apple has been producing outstanding HDR displays for six years, but unfortunately Safari does not yet support HDR photos with gain maps.
  • On a phone more than 4 years old (or using an operating system more than 1 year old), you are very likely only going to see the cat (SDR). Again, this is the expected behavior because these old phones simply do not have supporting hardware.
  • On a Pixel 8 Pro running Android 15:
    • The large view shows a dog, indicating HDR is supported (as expected).
    • The grid also shows a dog, so the thumbnail is completely consistent with the large view of the image (as expected).
    • If I change the phone’s brightness while viewing the large image, it remains a dog (as expected).
  • On a Samsung S24 running Android 14:
    • The experience is surprisingly quite a bit different. I don’t know if this is due to updates in Android 15 (not yet available for Samsung), Samsung’s unique take on Android (One UI), or possibly something about the hardware (seems unlikely).
    • The grid always shows a cat (SDR). This indicates that the small thumbnail generated by IG is SDR only. In other words, it is no longer a gain map.
      • While it would be my preference that all derived versions of my image remain as I created them, I understand how IG may prefer to show a grid where the images appear more uniform by not mixing SDR and HDR content.
      • Future web standards reference a middle ground where HDR may be rendered with partial support. This would offer a compromise so that HDR content shows some of its benefit, without looking so different from SDR content. I imagine that a grid view like this might adopt such an approach in the future (especially as HDR content becomes increasingly common).
    • If I change the phone’s brightness while viewing the large image, the live view of the large image shows a cat. This indicates that I am seeing an SDR view of the image while I am actively changing the display brightness. It’s not a big deal, but I would prefer that I get an accurate preview of the display while I’m changing the brightness. Otherwise, the image I see on screen may be significantly brighter after I set brightness.
  • On an iPhone:
    • The experience is different from both Android scenarios above.
    • The large view always shows a dog, even when the phone has zero HDR headroom (which would occur when I put the phone in direct sunlight or enable the “reduce white point” setting in iOS).
      • This indicates that my image is not shared as a gain map on iOS. It is not adapting in an optimal way. Instead, the gain map has been converted to a simple HDR image (with no gain map), and some kind of tone mapping is being applied as needed (most likely the tone mapping provided by iOS).
      • I never achieve the full headroom that the iPhone is capable of showing (which is 3 stops when brightness is no more than 80%). I’m getting something close though (between 2 and 2.5 stops of support).
      • In my experience, the results are still extremely good and I have no concerns here.
      • What is likely occurring is that IG uses the Google standard for sharing gain maps, while iOS only supports the new ISO standard and Apple’s proprietary encoding. Computer browsers and Android do support that gain map format, but rendering on iOS is probably achieved by deriving a simple HDR.
      • IG has been a leader in HDR and made numerous improvements to HDR support since launch. So assuming my theory that ISO gain map support is key, it is likely that the IG iOS experience will get even better in the future.
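A quick aside on the “stops” of headroom mentioned above: each stop is a doubling of brightness relative to SDR white. A minimal sketch of that conversion (the helper name is my own, not from any API):

```python
# Stops of HDR headroom are powers of two relative to SDR white.
def stops_to_ratio(stops: float) -> float:
    """Convert stops of HDR headroom to a linear brightness ratio vs SDR white."""
    return 2.0 ** stops

print(stops_to_ratio(3.0))            # 8.0 -> HDR white can be 8x SDR white
print(round(stops_to_ratio(2.5), 2))  # 5.66
```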

Your experience may differ from mine for several reasons including: phone hardware or operating system, the version of the IG app you’re using, the phone’s brightness setting, ambient brightness, split testing on IG (ie, an HDR display may be forced to SDR for testing), etc.

I’d love to hear your experience via the comments below. It would be helpful to know which hardware / operating system you are using, as well as your current level of “HDR headroom” (which you can find on a computer with test #1, or in a mobile phone via the histogram of an HDR photo in Lightroom mobile).

In addition, the image on this page (or IG) can help illustrate a quirk I have seen in numerous implementations of gain map decoders (including several browsers).

  • Even when you have full HDR support, you may see a bit of a ghosted white outline of the cat’s eyes and ears or text over the dog.
  • This helps illustrate that a gain map is not truly two photos in the same file. It is a base image (SDR here) and a pseudo-image which provides instructions on how to multiply each base pixel to derive the alternate image (HDR here).
  • The ghosting occurs because you are not viewing the image at 100%. When the image needs to be resized, the base image and gain map should be combined and then rescaled as needed. However, some viewers rescale these two “images” first and then combine them, which causes misalignment at edges and/or errors in the gain map math.
  • If you zoom in or out at the browser level (such as ctrl +/-), you may see the problem get worse or better. If you use the Adobe Gain Map Demo app and set the view to 100%, you’ll notice that the problem goes away.
  • I consider this a bug, but you are unlikely to see it in a real image. It might occur near high-contrast, hard edges – such as a sunset behind a building.
  • Ideally, the decoders would be fixed to avoid this issue. However, there is a relatively simple fix for any image you share on your website. Just add CSS styling to use “image-rendering: pixelated;“.
  • To demonstrate this, I have added this CSS to the image here. Just hover over the dog and the ghosting should disappear (as I am applying the fix only while you are hovering for demonstration – you should implement it without the “hover” restriction).
  • Note that Adobe Camera RAW and Lightroom work well; they do NOT exhibit this issue (as the image is rendered to HDR before any rescaling).
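The order-of-operations problem described above can be demonstrated numerically. Below is a toy 1-D sketch with made-up pixel values (not any real decoder’s code), using a naive box downscale:

```python
import numpy as np

def downscale_2x(signal: np.ndarray) -> np.ndarray:
    """Naive 2x box downscale: average each pair of neighboring samples."""
    return signal.reshape(-1, 2).mean(axis=1)

# A hard edge: dark pixels with a strong gain next to bright pixels with none.
sdr = np.array([0.1, 0.1, 0.1, 1.0, 1.0, 1.0])        # SDR base image
log2_gain = np.array([3.0, 3.0, 3.0, 0.0, 0.0, 0.0])  # gain map (stops)

# Correct order: combine at full resolution, then rescale.
correct = downscale_2x(sdr * np.exp2(log2_gain))

# Buggy order: rescale base and gain map separately, then combine.
buggy = downscale_2x(sdr) * np.exp2(downscale_2x(log2_gain))

print(correct.tolist())  # [0.8, 0.9, 1.0]
print(buggy[1])          # ~1.556 -- a bright "ghost" appears at the edge
```

Because the multiply is non-linear with respect to averaging, the two orders disagree exactly at the edge pixel – which is why the ghosting shows up along high-contrast edges and disappears at 100% view.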

Note that if you are looking for an easy way to dive into sharing HDR images / gain maps, I recommend taking a look at the “enhance SDR to HDR” feature in my Web Sharp Pro plugin for Photoshop.

How to show an HDR slideshow on your TV with Keynote

If you have MacOS, Keynote, and an HDMI connection to a TV, you can create a slideshow using HDR photos which will play by itself.

Requirements: MacOS Sequoia (ie v15.0 or later) and latest Keynote.

To create the slideshow:

  • Export images in an HDR format supported by Keynote (HDR JPG / AVIF / JXL from latest Adobe Lightroom Classic / ACR works well).
  • Make a series of slides with those images.
    • Note: the headroom will be limited (not full HDR support) while building the slide. You will only see proper HDR when you play the slideshow (below).
  • Each image should have an animation.
    • Click the image to select it, then near the top right under “animate”, set the preferred “build in” effect.
    • Try using “dissolve” to fade in, “fade and scale” to zoom in, or perhaps “blur”.
    • Set the duration for about 1.0 seconds.
  • At the top-right of Keynote, click “document” and set the following:
    • presentation type = “self-playing”
    • delay = about 3-5 seconds (this is how long each slide/image will be visible)
    • builds = 0 seconds (this just adds delay before starting the animation on each slide).
    • enable “loop slideshow” if preferred.
  • Go to the “Play” menu and make sure “in fullscreen” is checked.
  • Click the “Play” button at the top to start (or shortcut key <option><shift><P>)

Please let me know what you think (any other settings you like, etc).

New in Adobe Camera RAW 17: “Adobe Adaptive” profiles, non-destructive Denoise, and generative expand

Adobe Camera RAW (ACR) v17 just added some very interesting new AI features:

  • NEW: “Adobe Adaptive” profiles.
  • NEW: “Generative Expand”.
  • Updated: AI Denoise, Raw Details, and Super Resolution can all be applied non-destructively on even more RAW files (details below).

These features can help you get great results more easily and significantly simplify your workflow. Let’s dive into each of them.

 

What is the new “Adobe Adaptive” profile and how do I use it?

The various profiles we’ve had in the past (Adobe Standard, Adobe Color, etc) are fixed starting points. The new AI-based “Adobe Adaptive” is meant to provide a better starting point by analyzing the image to generate a custom profile. Its effect is somewhat like adjusting the sliders for increased shadows and decreased highlights: it compresses the tonal range. The greatest benefit seems to be in large areas of shadows (common in landscape) or small areas of nearly blown highlights (such as city lights).

In ACR, just click the profile dropdown and select “Adobe Adaptive (beta)“. That’s really it. You’ll immediately see changes and likely some very impressive results. Any existing sliders or local edits will remain as they were. That’s often fine, but you’ll probably want to make some further tweaks to get the most out of it.

Aside from the profile itself, there is an “amount” slider available when using the adaptive profile. If you drag the amount slider down to 0%, you’ll get the same result as the “Adobe Standard” profile. This lets you easily back off from the AI if it is too much. Often that is the case (the default can look a bit like the results from older tone-mapping software where shadows are too light). Conversely, you can increase the amount up to 200% to really lean into the effect it has on the image.

For more info, see Adobe’s post on adaptive profiles. Note that this profile limits HDR to 2 stops (per Adobe). This can help rein in HDR highlights, and you can always edit brighter from there.

 

When should you choose Adobe Adaptive?

Without more experience, it’s hard to predict what the best uses for such a complex new feature may be. It will likely appeal to a lot of novice users who are unclear how to get the kind of incredible results which are typically the norm when shooting with a smartphone. If you’ve struggled to get the best results out of your fancy camera, you’ll probably love this new feature.

My experience so far suggests that a wide range of images may benefit even skilled editors. I have seen some great improvements in things like nearly blown highlights which benefit greatly from the new Adaptive profile. It appears to be safe to use on a wide range of images (including those which have already been edited or use HDR). The quality of results will probably surprise many advanced users.

 

When should you avoid Adobe Adaptive?

As incredible as this new feature is, there are some scenarios where you may wish to skip the adaptive profile or exercise caution:

  • First, keep in mind this is a beta. There may well be bugs and performance may change over time (ie, re-editing later might produce a different result).
  • You will not be able to use this feature when working with the Camera RAW filter. At this time, you can only use the adaptive profile when opening RAW images or editing RAW Smart Objects.
  • There is likely a long learning curve to optimize results. Some types of images or workflows may be optimal with the adaptive profile, while others may be better with the regular profiles.
  • Do not use the adaptive profile in addition to the “auto” button. ACR will explicitly warn you against this, as the auto feature is not currently optimized to work with adaptive. There will likely be many requests for that, as it would be a very handy combination in the future for those seeking very quick and simple edits.
  • Those who prefer to work in Lightroom should probably wait, as support is just in ACR for now. You can of course use RAW Smart Objects and view your edited TIF in LR, but you should do the entire edit in PS / ACR if you’re going to use the adaptive profile.
  • Be careful if you enable the adaptive profile for images you have already edited, as your sliders may need some tweaking with the new profile. That said, I’ve seen some images which benefit nicely.

 

What is “generative expand” and how do I use it?

Photoshop’s cropping tool has had “generative expand” for a while. It allows you to “outcrop” or expand the image area and use AI to create new pixels at the edge. This is great for things like adding more sky when exporting your image for social media. However, this is a destructive workflow. The new pixels will probably be useless if you change the original edit.

With ACR 17, generative expand can now be done directly in the RAW file. This has a couple of important advantages:

  • Non-destructive. You can make any changes you wish to RAW settings and will not have to recreate the new pixels.
  • Avoid cropping when making geometry corrections. For example, if you need to tilt or rotate the image to straighten some lines, you may now simply fill in the gaps in the corner rather than cropping out parts of your original image.

To use generative expand:

  • Enable the technology preview. In Photoshop, go to preferences / File Handling / Camera Raw Preferences / Technology Previews, and check the option there.
  • Go to the crop tab (near the top right). This now includes geometry adjustments (aka “transform” in Lightroom).
  • Expand the crop and / or make geometry adjustments as desired. If you are cropping, be sure to check “enable expand“.
  • Click “generative expand“.
  • Note that the results outside your filled area won’t be optimal, so if you need to further expand later, you will likely need to re-run generative expand.

This is a very exciting feature which targets an important need. However, it is definitely a technology preview and the results are sometimes not great. It seems to work best in areas with simple detail or texture, such as expanding the sky. So be sure to check your results for quality. While it isn’t perfect, it’s an exciting new feature and should continue to improve from a great starting point.

 

What are the benefits of the new “non-destructive” AI enhancements?

Adobe has generated a lot of buzz around several AI features for RAW images, including:

  • AI Denoise, which offers incredible improvements on images at any ISO.
  • “Raw Details”, which enhances detail within the native resolution of the image.
  • “Super Resolution”, which doubles the linear resolution of the image (ie 4x the total pixel count).

What’s new is that you no longer need to generate a new image, and a much wider range of RAW files is supported. You simply enable the feature in ACR and your existing RAW image will be enhanced. This has some important benefits:

  • Less file clutter, as you aren’t generating a new DNG and no longer need to consider whether you should retain the original RAW (just in case).
  • You can upgrade existing edits. For example, if you used RAW Smart Objects for your work, you can simply turn on AI Denoise to improve the final result without having to redo the edit.
  • You can work with many more RAW source files, including: HDR and panorama DNG files, Apple ProRAW DNG, Samsung Galaxy Expert Raw DNG, etc.
    • Nearly any RAW file should now work (other than exotic sensors such as Foveon).
    • Not currently supported: raster images (such as TIF or any use of ACR as a filter) and Lightroom.

To use this feature, you must enable the technology preview (as above, go to PS preferences / File Handling / Camera Raw Preferences / Technology Previews).

 

Conclusions:

The vision for these tools is amazing, and I hope to see ongoing improvement to address a few opportunities (which is to be expected for any “tech preview”).

The overall picture in ACR v17.0 is:

  • Non-destructive denoise / raw details: Amazing and great to use now. It works like before, but is just much easier and supports more RAW files.
  • Adaptive profile: Very helpful for some images. Great for enhancing shadow detail, as well as taming some bright highlights. (Be sure to adjust the “amount” slider to optimize results).
  • Generative Expand is on the right track. It can be useful for some social media edits, but needs work to be useful for high-quality work such as large prints.

Collectively, these show great vision to bring useful AI capabilities into RAW editing, where they can provide the most benefit by allowing you to work in a fully non-destructive manner. This is a great update, and it will be just as exciting to see these capabilities expand and mature over time.

Greg Benz Photography