Designing for Disconnect: What Big Tech is Missing about Interface Design

Everyone working in tech has their eyes glued to the screen this week, with major announcements from Google, OpenAI, and Anthropic. The reactions have been largely positive; the biggest names in tech are making major investments into new interfaces and the systems that operate them, and the world is beyond ready. But as a team focused specifically on interface design, what we’re seeing is a bit of déjà vu.

Before starting PeKe Labs to build the next generation of interface tech, our team spent the majority of our careers working with a population of users (people with disabilities) who were always left behind by interfaces. These users had been begging for hardware that shifts how we think about interface design since long before AI took center stage. And when we started exploring the roots of that phenomenon, we recognized that interfaces were actually leaving everyone behind.

There is a lot to be excited about in these announcements. And, importantly, a lot to be critical of. These incumbents are still structured to fail users. From our experience designing interfaces for every ability, here’s our take on what these companies got right, what’s still wrong, and why we think big tech still isn’t set up to author the breakthrough we’re all looking for.

What They Get Right

Credit where it’s due: serious money is finally going into frictionless computing. Glasses and headsets that translate conversations, whisper directions, and deliver context-aware insights as you move through the world? It seems as though the sci-fi promise people have been waiting for is finally going to ship.

Even better, they’re not just repackaging AI in new form factors like Humane and Rabbit did (perhaps part of why they failed). These efforts are starting from the right premise: if artificial intelligence is truly transformative, then we have to fundamentally rethink what an interface actually is. 

Currently we’re limited by interfaces that interfere with our ability to stay present (typing) and private (voice). But do you really think the future is typing a prompt while walking down the street, or talking out loud into a pair of glasses? Or is the future ambient, seamless, and intuitive: technology designed so that it’s off until the moment you need it?

We’re grateful that the big companies are starting to see the world users have been asking for, but they’ve had their ears plugged for years, forcing users to accept agitation in exchange for convenience.

What They’ll Miss

On that note, these announcements carry the unmistakable mark of incumbency, with two major flaws standing out. One they recognize, but hope you’ll ignore. The other, they still don’t understand.

First, the issue of data. Every tech company entering the AI race knows that their ability to deliver value depends on how well they can model you—your preferences, your rhythms, your thoughts. And to do that, they need your data: passively collected, constantly analyzed, always running.

Think about what happens when companies that already turned your browsing history into a behavioral goldmine gain access to your live biometric state. If they knew how you were feeling in real time, what wouldn’t they try to sell you? And that critique doesn’t even touch the fact that many of these devices will rely on camera-based input, infringing on the privacy of everyone around you, not just your own.

The second flaw is subtler, but just as critical. Most of these companies still assume that intelligence is the interface. That the smarter their model gets, the more it will anticipate your needs without you having to ask.

But human interaction doesn’t work like that. It’s emotional. Messy. Unpredictable. The last 10% of what makes us human, our intuition, our quirks, our non-verbal signals, isn’t something you can model. It’s what gives us depth.

We don’t need technology that thinks for us. We need technology that helps us access the parts of ourselves a language model will never reach.

Where We’re Headed

The real leap in interface design won’t come from smarter chat prompts or cleaner voice assistants. It will come from systems that understand how we move, how we hold stress in our bodies, how we hesitate before making a decision. Interfaces that appear only when needed, and disappear the moment we don’t.

Ask someone why something felt off, and they’ll often say, “I don’t know, I just felt it.” That’s not something you can script. The brain might not explain it—but the body always knows.

As a builder in Disability Tech and a new father, I see the glaring issue of our time as this: our relationship with technology is not serving us. More screens, and more mics and cameras that invade our space, will only disconnect us further from reality and from each other. What we need is a built environment that responds to our needs in a way that keeps us more present and more connected. The winner in this space will be the company that makes users feel respected.

We’re glad the giants are waking up. But let’s not forget—they’re the ones who built the systems that caused these problems in the first place.

Can we also trust them to sell us the cure?
