In all my years of reviewing personal technology gadgets, I can count on one hand the number of times my jaw has dropped while learning about a new product. It’s good to be a skeptical journalist! But I failed to maintain that detachment when Google demoed a few imaging tricks on its new Pixel 8 and Pixel 8 Pro smartphones.
Taken in a vacuum, these features are things anyone with knowledge of Photoshop or video editing software can execute. But the new Pixel phones make them accessible to everyone, which is exciting, and frankly a little scary. Let’s go through ’em.
Magic Editor
Google teased this feature during its developer conference in May. It’s the natural evolution of Magic Eraser, which Google debuted a few years ago. The latter lets you erase unwanted objects in your photo, like a fire hydrant or a person in the background. Magic Editor takes that idea to a new level: it can warp the whole photo.
In a demo, Google showed a picture of a girl running on a beach. With Magic Editor in the Google Photos app, a spokesperson pressed on the subject and the software accurately made a cutout. They were then able to move the subject anywhere in the scene, and the software filled in the space left behind with what it thought should be there. These were photos picked by Google, of course, but Magic Editor filled them in with great accuracy.
Magic Editor also lets you change the scene’s lighting. If you take a photo at noon with harsh lighting, you can easily change it to golden hour to get those wonderfully warm evening tones, and maybe even throw in a sunset!
In another photo, a kid was about to shoot a basketball from the ground. The spokesperson grabbed the subject in the photo, dragged him up into the air to make it look like he was about to dunk, and then casually said, “You can move their shadow too!”
Last year, I spoke with Ramesh Raskar, an associate professor at MIT Media Lab, about computational photography and digital photo alteration. His words now sound prescient. Companies are making an assumption that “most consumers would like to just take a photo, click a button, and get something they really would like to see, whether it matches the reality or not,” Raskar said. Say you reach Paris, and the Eiffel Tower is in a haze. “What you would like is to take a photo with your family with the Eiffel Tower in the back as if it’s a bright sunny day, right? If somebody can paste a bright, sunny photo of the Eiffel Tower behind your family, you’ll be pretty happy about it.”
This is now easier than ever with Magic Editor. There’s also a chance you’ll come across more nefarious, distorted images that might subtly massage the truth of a scene, not unlike the AI-generated viral images of Donald Trump that circulated over the summer. There’s some hope for truth seekers, as Google says the metadata will note whether Magic Editor was employed. It’s easy to strip metadata from images, though, so it’s unclear how effective this will be.
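To see just how trivially that metadata can be removed (this is a generic illustration of JPEG mechanics, not a description of Google’s tagging scheme): EXIF, XMP, and similar tags live in a JPEG’s optional APPn and comment segments, and dropping those segments takes only a few lines of Python.

```python
import struct

def strip_jpeg_metadata(data: bytes) -> bytes:
    """Return the JPEG with APP1-APP15 and comment segments removed.

    EXIF, XMP, and similar metadata live in those optional segments,
    so dropping them strips the tags without touching the image data.
    """
    assert data[:2] == b"\xff\xd8", "not a JPEG"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(data):
        marker = data[i + 1]
        if marker == 0xDA:        # start of scan: copy the rest verbatim
            out += data[i:]
            break
        if marker == 0xD9:        # end of image (no length field)
            out += data[i:i + 2]
            break
        # Segment length is big-endian and includes its own 2 bytes.
        (seg_len,) = struct.unpack(">H", data[i + 2:i + 4])
        if not (0xE1 <= marker <= 0xEF or marker == 0xFE):
            out += data[i:i + 2 + seg_len]  # keep non-metadata segments
        i += 2 + seg_len
    return bytes(out)
```

Any provenance flag stored this way survives only as long as nobody re-saves the file through a tool like this.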
Best Take
We’ve all taken group photos where someone is looking away or has their eyes closed. Best Take is going to let parents of active kids breathe a sigh of relief (while perhaps also inducing mild panic).
When you capture a photo on most smartphones, the camera is actually snapping multiple images at different exposures, which is how you get well-exposed photos in various kinds of lighting. Google’s fix for someone’s closed eyes is to snag another frame from what it has captured and replace the person’s face with one where their eyes are open.
This is not unlike a feature Google introduced years ago called Top Shot, which suggests a potentially better frame from the series of photos captured when you tap the shutter button. However, Best Take can pull a frame from a series of up to six photos taken within seconds of each other—handy if whoever took the photo snapped multiple images in a row.
GearI watched as the spokesperson selected a person’s face and cycled through other versions of the face from recent images and other frames. Just choose the face you want (a weird sentence to write) to complete your perfect group photo. Google assured me it is not generating any facial expressions but is instead using an on-device face recognition algorithm (Google Photos can already detect familiar faces) to match images up.
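The selection step itself is conceptually simple. As a toy sketch (the scoring model and every name here are hypothetical, not Google’s implementation), suppose a face-analysis model assigns each person an “eyes-open” score in every burst frame; picking the donor frame for each face is then just an argmax:

```python
def best_take(frames, face_scores):
    """Pick, for each person, the burst frame to lift their face from.

    frames:      list of frame identifiers from one burst
    face_scores: {person: [score per frame]}, where a higher score
                 means a better expression (eyes open, smiling, etc.)
    """
    return {
        person: frames[max(range(len(frames)), key=scores.__getitem__)]
        for person, scores in face_scores.items()
    }
```

The hard parts Google is actually solving — detecting and aligning the faces, and blending the swapped crop seamlessly — are hidden behind those scores.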
Audio Magic Eraser
Magic Eraser removes stuff you don’t want to see in your photos. Now, with the Pixel 8 series, it can also eliminate sounds you don’t want to hear.
In one of my demos, I saw a video of someone playing a cello at a park. In the background? A siren going off in the distance (classic New York City). With Audio Magic Eraser, you can edit the clip and split the sounds up to completely remove the frequencies of the siren. The result is a video with just the sounds of a cello. It was pretty remarkable. This also means you can cut the sound of the cello and just play the siren, so you do you.
Google says the system uses machine learning to identify up to five types of common sounds, like “sirens,” “animals,” and “crowds.” It’s not going to work perfectly every time—I watched a demo of a man humming while at the beach, and when we tried to cancel the sounds of the ocean, I could still hear them cropping up here and there.
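Google hasn’t said how the separation works beyond “machine learning,” and real source separation is far more sophisticated, but the core idea — suppressing one band of frequencies while keeping another — can be shown with a toy filter. In this sketch (all frequencies, the sample rate, and the filter choice are illustrative assumptions), a 5-sample moving average acts as a crude low-pass that removes a synthetic 200 Hz “siren” from a 5 Hz “cello” tone:

```python
import math

def moving_average(signal, window=5):
    """Crude low-pass filter: attenuates frequencies near fs / window."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        chunk = signal[max(0, i - half): i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out

fs = 1000  # samples per second
cello = [math.sin(2 * math.pi * 5 * t / fs) for t in range(fs)]
siren = [math.sin(2 * math.pi * 200 * t / fs) for t in range(fs)]
mixed = [c + s for c, s in zip(cello, siren)]

# A 5-sample window at fs = 1000 nulls 200 Hz exactly (each window
# spans one full siren period), while barely touching 5 Hz.
filtered = moving_average(mixed, window=5)
```

A fixed filter like this only works when the sounds sit in cleanly separated bands; the ML approach is what lets the Pixel pull apart a siren and a cello that overlap in frequency.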
Video Boost
This feature is a little less creepy and more just plain impressive. Video Boost is exclusive to the Pixel 8 Pro, and you can toggle it on when you’re shooting video clips in low light or when there’s going to be a lot of action.
A copy of your video, at up to 4K resolution and 30 frames per second, is then sent to Google’s cloud servers for processing. This processing can dramatically improve stabilization, upgrade clarity, and reduce noise, and the improved clip is then sent back to your device. Depending on the video’s length, this could take minutes, or you might have to wait overnight.
Still, the results were startling when I was shown a comparison clip alongside an iPhone 14 Pro. The Pixel 8 Pro’s video in similar low light was dramatically clearer, brighter, more colorful, and better stabilized. It’ll be exciting to see how this works, but it won’t be available at launch.
Again, none of the aforementioned features are things you can’t already do with other tools, but making them accessible to anyone for the cost of a smartphone, with no technical know-how required, is what left me slack-jawed. You can read more about the Pixel 8, Pixel 8 Pro, and Pixel Watch 2 here. These smarts aren't the only AI features Google talked about—read more about its announcement around upgrading Google Assistant with Bard here.