A Deep Look Into the iPhone’s New Deep Fusion Feature

This week, iPhone 11 owners are supposed to get a free upgrade to their cameras thanks to a beefed-up neural engine and “mad science.” It’s called Deep Fusion, and it’s designed to deliver incredibly detailed photos in especially challenging environments. I’ve spent weeks testing the beta version of the computational photography software on an iPhone 11 Pro against the old camera software on a separate iPhone 11 Pro. Truth is, Deep Fusion works—but only in the strangest scenarios.

The first thing you need to know about Deep Fusion is that Apple is very proud of it. The company devoted several minutes to a preview of the feature at its September event, where it touted Deep Fusion as “the first time a neural engine is responsible for generating the output image.” In practice, the iPhone captures nine photographs in total, and the neural engine in the new ultra-powerful A13 Bionic chip then pulls the best pixels from each image and reassembles a photo with more detail and less noise than you’d get from an iPhone without Deep Fusion.

Allow me to zoom in on that process a little more, because it’s not quite as confusing as it sounds. What the iPhone camera does with eight of those nine exposures is similar to bracketing, the old-school photography technique of shooting the same scene at different settings. In this case, the iPhone camera captures four short-exposure frames and four standard-exposure frames before you hit the shutter button. (The iPhone camera starts capturing buffer frames whenever the camera app is open, just in case it needs them for a Deep Fusion or Smart HDR shot.) When you hit the shutter, the camera captures one long exposure that draws in additional detail.
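To make that capture sequence concrete, here’s a minimal sketch in Swift of how such a rolling buffer could work. It’s purely illustrative: Apple hasn’t published Deep Fusion’s internals, so the Frame, ExposureKind, and FrameBuffer types here are hypothetical and not part of any real iOS API.

```swift
import Foundation

// Hypothetical types for illustration only; not a real iOS API.
enum ExposureKind { case short, standard, long }

struct Frame {
    let kind: ExposureKind
    let timestamp: TimeInterval
    // Pixel data omitted for brevity.
}

/// Rolling buffer mirroring the behavior described above: short and standard
/// exposures are captured continuously while the camera app is open, and a
/// single long exposure is added when the shutter is pressed.
final class FrameBuffer {
    private(set) var shortFrames: [Frame] = []
    private(set) var standardFrames: [Frame] = []

    /// Called repeatedly while the viewfinder is live.
    func capturePreShutter(at time: TimeInterval) {
        shortFrames.append(Frame(kind: .short, timestamp: time))
        standardFrames.append(Frame(kind: .standard, timestamp: time))
        // Keep only the four most recent frames of each kind.
        shortFrames = Array(shortFrames.suffix(4))
        standardFrames = Array(standardFrames.suffix(4))
    }

    /// Called once when the shutter button is pressed; hands off the buffered
    /// frames plus a fresh long exposure for processing.
    func captureAtShutter(at time: TimeInterval) -> (shorts: [Frame], standards: [Frame], long: Frame) {
        let long = Frame(kind: .long, timestamp: time)
        return (shortFrames, standardFrames, long)
    }
}
```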

All of these exposures quickly become two inputs for Deep Fusion. The first input is the short-exposure frame with the most detail. The second is what Apple calls a “synthetic long,” which results from merging the standard-exposure shots with the long exposure. Both the short-exposure shot and the synthetic long get fed into the neural network, which analyzes them across four different frequency bands, each one more detailed than the last. Noise reduction is applied to each image, and then, finally, the two are fused together on a pixel-by-pixel basis. This whole process takes about a second, but the Camera app will queue up proxy images so you can keep shooting while the neural engine is humming along, Deep Fusioning all your photos.
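For the curious, here’s a loose Swift sketch of that fusion stage as described above. The helper functions (sharpness, merge, denoise, fusePixelwise) are placeholders I made up so the sketch compiles; none of this is Apple’s actual algorithm.

```swift
// Loose sketch of the fusion stage described above; not Apple's code.
struct Image {
    var pixels: [Float] = []
}

// Placeholder helpers so the sketch compiles. Real implementations would do
// genuine sharpness measurement, exposure merging, band-limited denoising,
// and weighted per-pixel blending.
func sharpness(_ image: Image) -> Float { image.pixels.reduce(0, +) }
func merge(_ images: [Image]) -> Image { images.first ?? Image() }
func denoise(_ image: Image, band: Int) -> Image { image }
func fusePixelwise(_ a: [Image], _ b: [Image]) -> Image { a.last ?? Image() }

func deepFusionSketch(shorts: [Image], standards: [Image], long: Image) -> Image {
    // 1. Pick the short exposure with the most detail as the reference frame.
    let detailReference = shorts.max(by: { sharpness($0) < sharpness($1) }) ?? Image()

    // 2. Merge the standard exposures with the long exposure into a
    //    "synthetic long" that carries tonal and color information.
    let syntheticLong = merge(standards + [long])

    // 3. Analyze both inputs across four frequency bands, coarse to fine,
    //    applying noise reduction per band.
    var referenceBands: [Image] = []
    var syntheticBands: [Image] = []
    for band in 0..<4 {
        referenceBands.append(denoise(detailReference, band: band))
        syntheticBands.append(denoise(syntheticLong, band: band))
    }

    // 4. Fuse the two stacks pixel by pixel into the final photo.
    return fusePixelwise(referenceBands, syntheticBands)
}
```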

If you’ve paid close attention to Apple’s computational photography features, this Deep Fusion situation might sound a lot like the Smart HDR feature that came out last year with the iPhone XS. In theory, it is similar, since the iPhone is constantly capturing these buffer images before the photo is taken to prevent shutter lag. In practice, however, Deep Fusion isn’t just pulling out the highlights and shadows of different exposures to capture more detail. It’s working at a hyper-granular level to preserve details that individual frames might have lost.

Okay, so maybe all that is kind of complicated. When it comes to using the new iPhone with Deep Fusion, you don’t really need to think about how the magic happens, because the device activates it automatically. Still, there are a few key things to know about when Deep Fusion does and doesn’t work. It never works on the Ultra Wide camera. On the Wide camera, it only kicks in for low- to medium-light scenes. On the Telephoto camera, it works almost all the time, except in very bright light, where it wouldn’t do much.

There’s one more scenario that will absolutely ensure Deep Fusion never runs. If you’ve toggled on the new option under the COMPOSITION header in the Camera app settings that says “Photos Capture Outside the Frame,” Deep Fusion will never work. So keep that option off if you want to try Deep Fusion.
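Taken together, the activation rules boil down to a simple decision. The Swift sketch below is just a way to visualize them; the enum cases and the captureOutsideFrame flag are invented for illustration, since the real camera pipeline makes this call internally and exposes no such API.

```swift
// Illustrative only: these types and this function are not a real API.
enum CameraModule { case ultraWide, wide, telephoto }
enum LightLevel { case veryBright, bright, medium, low }

func deepFusionIsActive(camera: CameraModule,
                        light: LightLevel,
                        captureOutsideFrame: Bool) -> Bool {
    // The "Photos Capture Outside the Frame" setting disables Deep Fusion entirely.
    guard !captureOutsideFrame else { return false }

    switch camera {
    case .ultraWide:
        return false                                // Never on the Ultra Wide camera.
    case .wide:
        return light == .low || light == .medium    // Only in low to medium light.
    case .telephoto:
        return light != .veryBright                 // Almost always, except very bright scenes.
    }
}
```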

Now that all of the nitty-gritty technical details are out of the way, let’s dig into what Deep Fusion’s computational photography mad science really feels like. If I’m being honest, it doesn’t feel like much. Right after the Deep Fusion feature appeared in the iOS 13.2 public beta, I installed the software on Gizmodo’s iPhone 11 Pro, while I kept the previous iOS version, the one without Deep Fusion, on my iPhone 11 Pro. Then I just took a crapload of pictures in all kinds of different environments. Frankly, I often couldn’t tell the difference between the Deep Fusion shot and the non-Deep Fusion shot.

Take a look at these two photos of the clock in the middle of Grand Central Terminal, each taken with the telephoto camera on an iPhone 11 Pro. Can you tell which one was taken with Deep Fusion and which one was not? If you can understand the very basic symbols I’ve added to the bottom corner of each shot, you can probably guess. Otherwise, it’s going to take a lot of squinting. There is a difference. Look closely at the numbers on the clock. They’re much crisper in the Deep Fusion shot. The same goes for the ripples on the American flag and the nuanced texture of the stone pillars around it. You might not notice that the shot without Deep Fusion looks a little fuzzy in these areas, but then you see the Deep Fusion shot and realize that the details are indeed sharper.

Subtle, right? But in this case, even without zooming in, you can still see how the Deep Fusion version of the photo pops more and looks less noisy. Both photos also showcase the impressive performance of the iPhone 11 Pro in low-light scenarios. The Main Concourse in Grand Central Terminal is a surprisingly dark place, especially at dusk when these photos were taken. Both look good, but the Deep Fusion one does look slightly better.

Now let’s look at a different example. Here’s a boring but detail-rich shot of skyscrapers in Midtown Manhattan on a dark and rainy day. In this case, you really do need to zoom in to see some of the slight differences between the regular iPhone 11 Pro photo and the one that used Deep Fusion. They’re super similar. You’ll see a little less noise, and the reflections in the windows are clearer in the image on the right. The major difference I can spot is on the white railing near the bottom of the frame. It looks almost smudged out in the non-Deep Fusion photo. And much like the numbers on the clock in the Grand Central photo, the white railing pops in the Deep Fusion one.

Source: Gizmodo
