New Meta Ray-Ban

Meta’s AR Glasses and the Next Hardware Battle

Given the recent Meta announcements, I can’t help but compare this moment to the early days of mobile phones. Yes, the glasses are big, clunky, and in some ways still awkward to use, but this is clearly a strategic focus for Meta. For years, the company has faced one fundamental challenge: it has never owned the hardware its software runs on. Now, with these glasses, Meta is taking a major step to address that, staking a claim in the next potential computing platform before anyone else can dominate it.

The reasons for this are clear when you look at what Meta is actually doing. They’re packing microphones, cameras, speakers, and a full-color, high-resolution in-lens display into glasses that are still wearable, though not exactly stylish yet. That’s a serious hardware play, reminiscent of the early days of mobile phones, when every innovation in chips, batteries, and radios mattered.

They’ve also invested in the Neural Band, a wristband that lets you control the glasses with subtle hand gestures. Strategically, this is a clear play toward moving away from the phone as the central device, exploring new forms of interaction that could become the foundation for a more independent platform. And they’re tackling the hard constraints of wearables: weight, battery life, and integrating all this compute and display tech into something that can actually sit on a person’s face.

Counterpoints and Challenges

Of course, there are important challenges and differences that temper the optimism here.

Apple vs. Meta control gap

Unlike Apple, which controls the full stack (hardware, OS, app store, and services), Meta doesn’t control the operating system its glasses depend on. This creates a clear strategic challenge: if users still need a phone to access augmented reality features, they remain dependent on the phone platform. That means they could just as easily use Apple’s or Google’s version of AR, leaving Meta’s hardware reliant on external platforms. Meta may build the device, but the “brains” of the hardware (the OS, app ecosystem, and platform rules) are still controlled by someone else, who can impose app taxes, compatibility restrictions, or other limitations that constrain Meta’s ecosystem.

The time compression problem

Mobile phones only became widely adopted once hardware miniaturization, battery life, and connectivity reached usable levels and, crucially, once they rode the wave of the mobile internet. The mobile internet created a completely new platform and a new market, letting companies build apps, services, and revenue streams that simply weren’t possible before.

For AR glasses, that same underlying wave doesn’t yet exist. There is no parallel breakthrough driving adoption: neither mobile AR, standalone connectivity, nor a new computing infrastructure has reached the tipping point that makes users feel they need AR glasses. Hardware miniaturization alone may not be enough to create a mass market the way the mobile internet once did for phones.

Ecosystem and software limitations

Meta Horizon OS does exist, but it’s unclear whether it will ever make the jump to the glasses. Right now, the hardware simply can’t run a fully standalone operating system with every feature working independently. The glasses still rely heavily on a paired phone for connectivity, compute, and apps, so any broader OS ambitions remain speculative.

Summary

Meta’s advance in the AR field is extremely interesting, and it’s undeniably cool, but at this stage it’s not very useful. From a company strategy perspective, unless Meta owns the “brains” of what its hardware runs, it will always be stuck in the same position it’s in now: reliant on the owners of the app ecosystem.

The Neural Band approach is clearly intended to move away from phone-based interaction and clunky buttons, making the glasses lighter and simpler. But at the end of the day, unless Meta can pack a significant share of a phone’s current hardware capabilities into glasses this small (or smaller, since the current ones are still bulky), the limitations will persist. I don’t foresee a future where users carry a phone and a separate “compute pouch” just to make their glasses work.

Similarly, in-lens displays alone are unlikely to substitute for full AR experiences. To truly replace or augment a user’s digital environment, AR glasses will likely need something closer to what Meta’s Orion prototype offers: a wide field of view, multiple virtual screens, and more immersive interaction. The current devices are a step in the right direction, but we’re not there yet.

For Businesses

The AR train is coming. We don’t know exactly when it will arrive, or which form factor will dominate, but here’s the question for businesses: do you jump on the research train now, or wait a few more years until the technology becomes more established?

Starting earlier could provide a significant advantage. By exploring current capabilities, even while the hardware is bulky and limited, companies can build understanding, workflows, and competencies. Then, whenever the technology stabilizes, they can ship features almost immediately, while competitors are still figuring out how to get started.

The question is open: when should businesses start preparing for AR? Share your answer in the comments below.

A Note on Ethics and AI Use: Transparency is important. For this article, I used AI tools to augment my discussion and explore phrasing, as well as to assess SEO performance and readability. While AI helped refine ideas and highlight optimization opportunities, all insights, examples, and analysis are the product of my own experience and judgment. AI served as a support tool, not a replacement for critical thinking or human perspective.

Posted by Mikhael Santos on September 19, 2025