Meta is quietly winning the AI wearable race

It’s a low bar so far, but Meta’s Ray-Ban smart glasses are proving to be the best implementation of wearable AI out there.

For the last several weeks, I’ve been playing with Meta’s AI assistant in its Ray-Ban smart glasses. It works by responding to the voice command “Hey Meta” and can answer a question or examine what you’re looking at. It’s far from perfect. But when it does work, it feels like a glimpse into the future.

Meta didn’t expect generative AI to play such a large role in the glasses until very recently. When CEO Mark Zuckerberg first revealed in an interview with me last fall that multimodal AI was coming to the glasses, he described it as a “whole new angle” on smart glasses that may end up being the killer feature before “super high-quality holograms” arrive.

Given the billions Meta has poured into AR glasses over the last six years and the lackluster reception to the first generation of Meta Ray-Bans, version two needed to be a win. Early indications are good. I’ve seen third-party estimates that over 1 million have been sold. During Meta’s last earnings call, Zuckerberg mentioned that many styles were sold out. Now, with multimodal AI enabled, Meta may have the best AI wearable on the market.

To be fair, it’s a low bar. This week, my colleague David Pierce managed to give the Rabbit R1 an even lower review score than the Humane AI Pin. (I recently joined him on The Vergecast to talk about Meta’s broader AI strategy and how its glasses roadmap fits in.) It turns out that the phones we have in our pockets still work best for most use cases.

The Verge’s Victoria Song wrote about her experience using AI in the Ray-Bans when Meta brought the feature out of private beta last week. I agree with her top-level assessment: “It can be handy, confidently wrong, and just plain finicky — but smart glasses are a much more comfortable form factor for this tech.”

During a recent weekend afternoon around the house with my pair of Meta Ray-Bans on, I tried using the assistant as much as possible. Here’s how it performed:

  • It correctly identified my Samsung Frame TV as a Samsung QLED. I was hoping it would recognize the more useful detail that it’s a Frame TV rather than focusing on the panel type.
  • It told me I could buy the Apple TV remote I was looking at on Amazon but offered no listing or price, which wasn’t helpful. Later, when I asked it to recommend a dining table that looked similar to mine, it told me it couldn’t help with finding products at all. I believe an official product recommendation system is in the works.
  • It correctly told me how to make a caprese salad and correctly said that a can of La Croix contains no calories. It also gave good advice for smoking wood chips in a propane grill. 
  • It correctly identified my dying fiddle-leaf fig tree and gave good instructions on how to care for it. It misidentified another tree in my backyard but only slightly. 
  • When I asked it who was headlining Coachella that weekend, it listed the wrong artists, some of whom performed at last year’s festival. 

Overall, the assistant was accurate and helpful more than half the time, which certainly can’t be said for the AI powering Humane and Rabbit. I was also impressed with how quickly the assistant analyzed what I was looking at and responded: each query was processed in a few seconds at most.

If hallucinations can be fixed, it’s easy for me to imagine using a conversational AI in my glasses throughout the day. Being able to talk to an AI while you’re looking at something is a far more natural experience than Humane’s Minority Report-style projector on your hand or the Rabbit’s interface. (Joanna Stern has a great video comparing all three devices that’s worth watching.)

My big impression is that the Meta Ray-Bans are the first wearable implementation of AI that feels like it’s on a trajectory to take over some of my day-to-day iPhone usage. This dynamic will get more interesting with next year’s version, which will have a heads-up display and a neural interface wristband.

Zuckerberg has long wanted to get out from under the thumbs of Apple and Google. He may have finally stumbled onto a way out.


Notebook

My notes on what else is happening in tech right now:

  • What is OpenAI cooking? Recent sleuthing of OpenAI’s web logs has revealed the domain search.chatgpt.com. In February, The Information reported that OpenAI was developing a “web search product,” and I’ve heard that the company has been trying to poach from Google’s search org. Then there’s Sam Altman’s recent talk at Stanford, where he told the crowd that GPT-4 is “the dumbest model any of you will ever have to use again.” That’s not something a CEO would typically say without the next version being around the corner. 
  • Also during that Stanford talk: Altman gave a little more insight into his chip project: “It’s not just foundries, though that’s part of it… Energy, data centers, chip design, new kinds of networks… It’s how we look at the entire ecosystem and how we make more of that… We gotta do the whole thing.”
  • Google’s search deal with Apple: I maintain that the DoJ’s antitrust lawsuit targeting Google’s search default deal with Apple feels right. Closing arguments for the trial began this week, and as my colleague Lauren Feiner writes, it sounds like the judge may agree with me. If Google giving Apple a deal too good for anyone else to match — more than 30% of revenue flowing through an unrivaled ad platform to the tune of $20 billion a year — isn’t boxing out rivals, I don’t know what is.
  • TikTok gets an ally: What interesting timing! Less than two weeks after the US government passed a law that will effectively ban TikTok, the company has reached a deal with Universal Music CEO Lucian Grainge to get the label’s music back on its platform. Based on Grainge’s memo, it sounds like TikTok caved on pretty much everything that was causing the stalemate, including UMG’s demands that its music not be used for AI training and that its royalties increase. Considering how strong a lobbying voice the music industry has, I have to imagine that TikTok wanted UMG on its side as it heads into a messy legal challenge.

People moves

Some interesting career moves I’ve noticed recently:

  • Chris Clark, OpenAI’s first COO who also managed its nonprofit operations, is leaving alongside head of people Diane Yoon.
  • In a move that everyone close to this team expected, Thomas Reardon, the founder of the CTRL-Labs startup behind Meta’s forthcoming neural interface wristband, is stepping away to become a company advisor. 
  • Eduardo Indacochea, Meta’s VP of advertising product, has left to start an AI company.
  • Tagu Kato, formerly the head of design for Facebook, has joined Roblox as chief design officer.
  • Esther Crawford (of Twitter/X/sleeping-on-the-floor fame) has landed at Meta as director of product for Messenger. 
  • Former Zynga COO Matthew Bromberg was named the new CEO of Unity.
  • Holden Karnofsky left Dustin Moskovitz’s Open Philanthropy to become a visiting scholar at the Carnegie Endowment for International Peace.
  • Peloton CEO Barry McCarthy stepped down as the company laid off 15 percent of employees.
  • Javier Varela is the new COO of Rivian, replacing Frank Klein, who is out after less than two years.
  • Twitter co-founder Biz Stone joined the new US nonprofit board of Mastodon.

If you aren’t already subscribed to Command Line, don’t forget to sign up and get future issues delivered directly to your inbox. 

As always, I appreciate your feedback and tips.

Thanks for subscribing.