Pixel 6 and Pixel 6 Pro Camera Feature Deep Dive and Snapchat Integration

Google's leadership in computational photography and machine learning has led to some remarkable camera capabilities over the years. With Pixel 6, Google is applying all of that software expertise to a fully upgraded camera system, aiming to build the most advanced smartphone camera in the world.

It's leagues ahead of previous Pixel cameras, from the hardware to the software to the computational photography.

To start, let's look at the main camera. Both Pixel 6 and 6 Pro have a massive new 1/1.3-inch, 50-megapixel sensor. Google combines adjacent pixels on the sensor to get extra-large 2.4-micron pixels. With Night Sight, the Pixel Camera has always been able to do a lot with very little light, but now the primary sensor captures up to 2.5 times as much light, thanks to those huge pixels.
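
To see why binning helps, here is a minimal NumPy sketch of 2x2 pixel binning. It is a simplification: the real sensor combines same-color photosites of its quad-Bayer mosaic in hardware, and the 1.2-micron native pitch is inferred from the 2.4-micron binned figure rather than stated above.

```python
import numpy as np

def bin2x2(raw: np.ndarray) -> np.ndarray:
    """Average each 2x2 block of sensor pixels into one larger pixel.
    Real quad-Bayer binning combines same-color photosites in hardware;
    this grayscale version just shows the resolution/area trade-off."""
    h, w = raw.shape[0] & ~1, raw.shape[1] & ~1       # crop to even dims
    blocks = raw[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))

# Binning a 50 MP readout this way yields a ~12.5 MP image, but each
# output pixel integrates light over 4x the area (assumed 1.2 um pitch
# -> 2.4 um), which is where the extra light-gathering headroom comes from.
tiny = np.arange(16, dtype=float).reshape(4, 4)       # stand-in for raw data
print(bin2x2(tiny))                                   # each value averages a 2x2 block
```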

This means you're going to get photos with even greater detail and richer color. Both phones also have completely new ultrawide cameras with larger sensors than before, so photos look great when you want to fit more in your shot. Pixel 6 Pro has a larger ultrawide front camera that records 4K video. It also has a telephoto lens with 4X optical zoom for getting in close.

Pixel Camera uses what's called "folded optics": a prism bends the light 90 degrees so that the telephoto camera can fit in the body of the phone. You can get up to 20X zoom with an improved version of Pixel's Super Res Zoom, Google's computational approach to combining optical and digital zoom.
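
Here is a rough single-frame sketch of how a hybrid zoom pipeline might pick a lens and make up the remaining magnification digitally. The lens names and magnification values are illustrative assumptions, and real Super Res Zoom fuses many slightly shifted frames rather than resampling a single one.

```python
from PIL import Image

# Assumed rear-lens magnifications; illustrative values only.
LENSES = {"ultrawide": 0.7, "main": 1.0, "telephoto": 4.0}

def hybrid_zoom(images: dict, zoom: float) -> Image.Image:
    """Pick the longest lens not exceeding the requested zoom, then
    crop and upsample to cover the leftover digital factor."""
    lens = max((l for l, m in LENSES.items() if m <= zoom),
               key=lambda l: LENSES[l])
    img = images[lens]
    digital = zoom / LENSES[lens]          # remaining digital magnification
    w, h = img.size
    cw, ch = int(w / digital), int(h / digital)
    left, top = (w - cw) // 2, (h - ch) // 2
    crop = img.crop((left, top, left + cw, top + ch))
    return crop.resize((w, h), Image.LANCZOS)

# e.g. 20X zoom = 4X optical (telephoto) * 5X digital crop-and-upsample
```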

Finally, the sensor behind the telephoto lens is even larger than the primary rear sensor in past Pixel phones, so you can capture great low-light zoomed shots with Night Sight. When this amazing hardware is paired with Tensor, Google can build new camera features that were impossible before. Video is a great example. Video is a hard use case for computational photography because you're essentially taking lots of photos very quickly. Applying a machine learning algorithm to a single photo is very different from running the same algorithm on every frame, 60 times per second.

Google started by developing more efficient methods for applying tonemapping edits very quickly, and by doing everything possible to get the most out of the sensor. Google's R&D team also developed an algorithm called HDRnet, which can deliver the signature Pixel look much more efficiently. With Tensor, parts of HDRnet can now be embedded directly into the ISP and accelerated, making the process faster and more efficient.

Pixel 6 can now run HDRnet on 4K video at 60 frames per second; that's 498 million pixels each second. The color accuracy is excellent, with a big boost to vividness, stabilization, and overall video quality. This is all thanks to the bigger camera sensors, Google's cutting-edge machine learning, and the efficiency gains from the new Tensor chip. It's a giant step forward.
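
The arithmetic behind that figure is easy to check, and it also shows the per-frame time budget any video algorithm has to live within. The gamma curve below is only a stand-in for HDRnet, which actually predicts learned, local color transforms from a low-resolution copy of each frame.

```python
import numpy as np

# Back-of-envelope for the 4K60 figure quoted above.
W, H, FPS = 3840, 2160, 60
print(W * H * FPS)               # 497,664,000 -> ~498 million pixels/s
print(f"{1000 / FPS:.1f} ms")    # ~16.7 ms budget per frame

# Stand-in tonemap: HDRnet itself is a learned model, not a fixed curve.
def tonemap(frame: np.ndarray, gamma: float = 1 / 2.2) -> np.ndarray:
    return np.clip(frame, 0.0, 1.0) ** gamma

frames = np.random.rand(3, 270, 480, 3).astype(np.float32)  # tiny dummy clip
graded = np.stack([tonemap(f) for f in frames])              # per-frame pass
```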

Has your perfect photo ever been ruined by something random in the background? Let's say you want to be the only one on the beach in your photos. If you don't have access to a deserted island and don't want to spend hours in a photo editing suite, Pixel's new Magic Eraser can do the job! In Google Photos, you'll see suggestions for distractions you might want to remove from your photo. Erase them all at once, or tap to remove them one by one. What really sets this feature apart is how well it figures out what you're trying to remove and how convincingly it fills in what was behind it. Even if something isn't suggested, you can still erase it: just circle it, and it disappears. And you can use Magic Eraser on Pixel to clean up all your photos, whether you took them a minute ago or years ago.
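
Magic Eraser's detector and inpainting models aren't public, but the overall mask-then-fill flow can be sketched with OpenCV's classical inpainting as a stand-in. The file name, mask coordinates, and choice of Telea inpainting are all illustrative assumptions.

```python
import cv2
import numpy as np

photo = cv2.imread("beach.jpg")               # hypothetical input image
mask = np.zeros(photo.shape[:2], dtype=np.uint8)

# Stand-in for the "circle the distraction" gesture (or an ML-suggested
# region): mark the pixels to be removed with a filled circle.
cv2.circle(mask, (420, 310), 60, 255, -1)

# Classical inpainting fills the masked region from its surroundings;
# the shipping feature uses a learned inpainter for better fills.
clean = cv2.inpaint(photo, mask, inpaintRadius=7, flags=cv2.INPAINT_TELEA)
cv2.imwrite("beach_clean.jpg", clean)
```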

You go to take a picture, but the lighting isn't great and the subject is moving around. You can't quite get the perfect photo; it's a little blurry. It's a physics problem, but one that Tensor's on-device machine learning can solve.

Pixel Camera uses FaceSSD to figure out if there are faces in the scene. If they're blurry, it spins up a second camera so it's primed and ready to go when you tap the shutter button. In that moment, Pixel 6 takes two images simultaneously: one from the ultrawide camera and one from the main camera. The main image uses a normal exposure to reduce noise, while the ultrawide uses a faster exposure that minimizes blur. Machine learning fuses the sharper face from the ultrawide with the low-noise shot from the main camera to get the best of both into one image. As a last step, Pixel Camera takes one final look for any remaining blur in the fused image, estimates the level and direction of that blur, and then removes it for you. In all, it takes four machine learning models combining data from two cameras to deliver the scene you know you saw but couldn't get from your camera until now, with Face Unblur.
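
A toy version of the fusion step might look like the sketch below: score the face crop from each frame for sharpness and feather the sharper one into the low-noise frame. It assumes the two frames are already geometrically aligned, and the variance-of-Laplacian score substitutes for the learned blur estimation described above.

```python
import cv2
import numpy as np

def blur_score(bgr: np.ndarray) -> float:
    """Variance of the Laplacian, a common sharpness proxy (higher = sharper)."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def fuse_face(main: np.ndarray, ultrawide: np.ndarray, box) -> np.ndarray:
    """If the short-exposure ultrawide face crop is sharper, feather it
    into the low-noise main frame. Assumes aligned frames and a face box
    from a detector such as FaceSSD."""
    x, y, w, h = box
    main_face = main[y:y+h, x:x+w].astype(np.float32)
    uw_face = ultrawide[y:y+h, x:x+w].astype(np.float32)
    if blur_score(ultrawide[y:y+h, x:x+w]) > blur_score(main[y:y+h, x:x+w]):
        mask = np.zeros((h, w), np.float32)
        mask[8:-8, 8:-8] = 1.0                                 # 1 inside, 0 at edges
        mask = cv2.GaussianBlur(mask, (31, 31), 0)[..., None]  # feather the seam
        fused = mask * uw_face + (1 - mask) * main_face
        main[y:y+h, x:x+w] = fused.astype(main.dtype)
    return main
```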

Most of the time we want to eliminate blur from our pictures, but sometimes a bit of blur can actually add to the picture, especially for action shots that don't seem to have much action. Pixel 6 introduces Motion Mode, which brings a professional look to your nature scenes, urban photos, or even a night out. Typically, you'd create these effects with panning and long exposures, techniques that require fancy equipment and lots of practice. Motion Mode makes it easy. For action shots, the Pixel Camera takes several photos and combines them, using on-device machine learning and computational photography to identify the subject of the photo, figure out what's moving, and add aesthetic blur to the background.

For a nature shot, the camera applies computational photography and machine learning to align multiple frames, determine motion vectors, and interpolate intermediate blurred frames, so moving elements like water or clouds render as smooth trails. With Motion Mode, you can also just wait on the subway platform for the right moment, snap a photo of your friend, and walk away with a vibrant, artistic photo of the moment. Now, not every picture is taken in the Pixel Camera app. Some of these new camera capabilities and image quality improvements extend to any app that uses the camera, including your favorite camera apps.
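
Stripped to its core, the effect is a masked blend between a synthetic long exposure and one sharp reference frame, as in the sketch below. The simple averaging stands in for the motion-vector estimation and frame interpolation described above, and the subject mask is assumed to come from a segmentation model.

```python
import numpy as np

def motion_mode(frames: np.ndarray, subject_mask: np.ndarray) -> np.ndarray:
    """Toy Motion Mode: frames is an aligned burst (N, H, W, 3) uint8;
    subject_mask is (H, W) with 1 on the subject and 0 elsewhere."""
    long_exposure = frames.astype(np.float32).mean(axis=0)   # blurred background
    reference = frames[len(frames) // 2].astype(np.float32)  # sharp subject frame
    m = subject_mask.astype(np.float32)[..., None]
    return (m * reference + (1.0 - m) * long_exposure).astype(np.uint8)
```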


Snapchat is partnering with Google on a Pixel 6 feature called "Quick Tap to Snap." This Pixel-first feature puts the Snap camera directly on the lock screen for fast and easy access to the Snapchat camera: just tap the back of your phone twice and you're into the camera. This speedy, simple gesture will help Snapchatters capture more moments before they disappear. Snapchat has designed Quick Tap to launch into "Camera Only" mode, so Snapchatters can create Snaps even if they haven't yet unlocked their device. Once you make a great Snap that you want to share, simply authenticate on your device to unlock the full app experience. With Quick Tap to Snap, Pixel 6 will be the fastest phone to make a Snap. Snapchat is also working with Google on exclusive augmented reality Lenses, and on bringing other key Pixel features, like live translation, directly into the Chat feature on Snapchat. Snapchatters will be able to talk to their friends in more than 10 languages, with conversations translated in real time. These are the first features coming to Snapchat on Pixel 6.

Snapchat plans to keep bringing innovation to its community with its partners at Google, building on a camera system that is genuinely forward-looking.

Source: Google YouTube video
