The Principles of Great Interfaces
Every Interface Does One of Three Things
(And once you see it, you can’t unsee it.)
In college I was a double major, but if you ask me what I studied, I’ll usually only mention Chinese—because, well, it sounds impressive. The other major, Geography, doesn’t get the same reaction. Say “Geography” and most people think of memorizing state capitals or labeling a map in fifth grade.
But when I think about the classes that actually shaped how I see the world, it’s always been Geography.
For those who haven’t thought about it since they were ten (which is probably most people), Geography is the science of how people and place shape one another. Its basic premise is simple but profound: people and place are inextricably linked, and if you understand their relationship, you can explain a lot.
What hooked me, though, wasn’t just the relationship itself. It was how people navigate place. The first thing I ever did in a Geography class was a 150-person survey: I asked my classmates in one dorm to rank the three dining halls on campus by food quality and by perceived distance.
Turns out, people always think the dining halls they like are closer. They also tend to perceive anything across a street or around a corner as “farther”—even if it’s objectively the same distance. In my survey, all three dining halls were within 30 feet of one another in walking distance. But practically no one saw it. They had already drawn their own mental maps to figure out how to meet a basic need.
As I’ve transitioned into tech, I’ve realized that navigation and “mental mapping” aren’t just about how people find their way through physical space. They’re also about how systems guide people. In the digital world, we call that system the interface.
An interface’s job is simple: give you a map to access what a place—or product—can do. The best ones are invisible. They remove the need for users to construct their own mental maps. The experience just flows.
I’ll never forget the first time TurboTax made doing taxes almost fun. Or when I swiped on a Tinder profile. Or when I completed a Duolingo lesson and immediately wanted to do another. Those weren’t just apps—they were well-designed spaces. Places that knew how to guide, focus, and activate me.
So, like any good Geography major, I started looking. I wanted to understand what these systems were actually doing. And eventually I came to a simple realization:
Every good interface, no matter the medium, is doing one of three things:
Helping you determine the direction you want to go (Directional)
Helping you change your position within the system (Positional)
Helping you take a meaningful action once you’re there (Action)
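If you think in code, here’s roughly what that taxonomy looks like as data. This is just a sketch in TypeScript, with names I made up for illustration (the iPod-style inputs and the Intent shape are mine, not any real product’s API):

```typescript
// Every input a user gives an interface resolves to one of three intents.
// Hypothetical model for illustration only.
type Intent =
  | { kind: "directional"; target: string }      // point your attention somewhere
  | { kind: "positional"; destination: string }  // move within the system
  | { kind: "action"; verb: string };            // do the thing once you're there

// Example: classifying iPod-style inputs (a made-up mapping).
function classify(
  input: "scroll" | "center-button" | "menu-button" | "play-pause"
): Intent {
  switch (input) {
    case "scroll":
      return { kind: "directional", target: "the highlighted item" };
    case "center-button":
      return { kind: "positional", destination: "the selected menu" };
    case "menu-button":
      return { kind: "positional", destination: "the parent menu" };
    case "play-pause":
      return { kind: "action", verb: "toggle playback" };
  }
}
```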
Let’s break each one down.
Directional: Pointing Your Focus
A good interface lets you look around. Explore. It gives you a way to indicate what you’re interested in or where your attention is going.
In video games, this is the right joystick—the one that controls the camera. It doesn’t move your character; it moves your perspective.
One of the most elegant directional interfaces of all time? The iPod scroll wheel. It didn’t move you anywhere structurally, but it let you point at where you wanted to go. It helped you aim.
Great directional elements give you that feeling: “I know what’s around me. I can look freely. I can choose.”
Positional: Moving Somewhere
Of course, looking around is only part of it. A strong interface also lets you move.
In video games, that’s the left joystick—the one that actually moves your character through space.
On the iPod, it was the menu button (to go back) and the center button (to drill down). These were Positional controls: “Take me here.” “Move me deeper.” “Go back.”
If Directional shows you the doors, Positional walks you through one.
Without Positional design, an interface is just a viewfinder. But users want to go places. They want to explore. And they want it to feel natural.
Action: Doing the Thing
And then there’s the last step: once you’re in position, looking at the thing you want—there has to be a way to do it.
That’s Action.
On a PS5 controller, these are the four buttons to the right of the joysticks. There are four for a reason—our brains can’t handle many more.
On the iPod, it was just three: play/pause, next, previous. And that was enough.
Action elements are often contextual. That’s what makes them feel intuitive. You don’t need a “fridge open” button and a separate “fridge close” button. You just need a single Action, and the system knows—based on your context—which one you meant.
Look at your kitchen (Directional). Walk up to the fridge (Positional). Press the Action. 99.9% of the time, it opens.
Do it again? It closes.
You didn’t think about it. You just acted.
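If you wanted to model that in code, it’s barely anything: one handler that reads state. A minimal sketch, with a made-up Fridge type, just to show that “one contextual Action” really means “one control whose meaning depends on context”:

```typescript
// Hypothetical model: one Action, interpreted by context.
interface Fridge {
  isOpen: boolean;
}

// The user presses a single Action; the current state decides what it means.
function pressAction(fridge: Fridge): Fridge {
  return { isOpen: !fridge.isOpen };
}

let fridge: Fridge = { isOpen: false };
fridge = pressAction(fridge); // opens
fridge = pressAction(fridge); // closes again
```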
Great interfaces give the illusion of infinite choice. But they’re really guiding you gently: look here, go there, do this.
These principles might sound abstract, but they’re not. They’re actually how people use—and trust—systems.
Every system that feels good to use nails these three elements.
The PS5 controller? One stick for moving, one for aiming, buttons for acting.
The iPod? Scroll wheel to aim, menu and center buttons to move, playback buttons to act. Total clarity.
You didn’t need an instruction manual. Your brain already knew what to do.
So What Happens When AI Enters the Picture?
Here’s where it gets interesting.
Artificial intelligence will almost always have your context. It can already guess what you want to see, what mode you’re in, and where you probably want to go next. That makes it very good at Directional and Positional decisions.
But great AI doesn’t just guess for you—it empowers you to act. It still needs you to signal “yes,” “go,” “not now.” That’s where interfaces still matter.
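One way to sketch that division of labor, with entirely hypothetical names (not any real assistant’s API): the AI fills in the Directional and Positional guesses, and the Action still waits for your signal.

```typescript
// Hypothetical sketch: the AI supplies context-driven guesses,
// but the final Action requires an explicit human signal.
interface Suggestion {
  focus: string;       // Directional: where the AI thinks you want to look
  destination: string; // Positional: where it thinks you want to go
  action: string;      // Action: what it thinks you want to do
}

type UserSignal = "yes" | "go" | "not now";

function resolve(suggestion: Suggestion, signal: UserSignal): string {
  switch (signal) {
    case "yes":
      return `Doing: ${suggestion.action}`;
    case "go":
      return `Going to: ${suggestion.destination}`;
    case "not now":
      return `Staying focused on: ${suggestion.focus}`;
  }
}

// The AI guesses; the user still decides.
console.log(
  resolve(
    { focus: "your tax summary", destination: "the review screen", action: "file the return" },
    "yes"
  )
);
```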
And as we shift into a new era where screens disappear, apps blur, and control lives closer to your body, those interface principles don’t go away. They just show up in new ways.
We’ve been through this before:
MS-DOS: letters were directional, Enter was positional
Windows + Mouse: cursor was directional, click was positional, menus gave you actions
Touchscreens: swipes became direction, taps became position, long-presses added action
Now? We’re headed toward a new future where AI has rewritten the entire game. And from where I sit, there’s nothing more intuitive than body language.
More soon.