AI Will Kill the Smartphone—and Maybe the Screen Entirely

You wake up. You do not check your phone. Instead, you activate various wearables embedded in your body and have a series of conversations with inanimate objects. You make Minority Report–style gestures in the air. You blink a lot. Things power on, tasks get done, the day begins. It turns out you have no need for a smartphone at all.

Lots of people are making big predictions about AI. Critical-thinking this, end-of-the-world that, and aren’t you worried about jobs jobs jobs? For our part, we’re confused. Not because we don’t believe the doomsday scenarios are coming. We just think they miss the most obvious, most visible way AI will remake society. Right now, we live and die in the harsh, merciless glare of screens. They’re everywhere. And in an AI age, they simply, mercifully, won’t be.

AI won’t just kill the phone, in other words. If done right, it’ll free us from the tyranny of the screen altogether.

Why aren’t more people talking about this? Sam Altman, at least, sort of is. When pressed at a recent dinner about OpenAI’s new partnership with famed Apple designer Jony Ive, he allowed this: “You don’t get a new computing paradigm very often.” It’s true, and probably why more people aren’t risking it. New tech always feels impossible, right up until it’s inevitable. The smartphone was an impossibility, once. A pocket-sized computer? With apps and networked communication? Those poor guys at General Magic had the idea and a prototype something like 13 years before Steve Jobs announced the iPhone. The tech just wasn’t ready. Neither was the general public.

Which is to say: We’re probably another 15 years away from the Great De-Screening. But it’ll happen, and maybe you’ve noticed that the process has already begun. We’re texting with our AIs less, and talking, actually talking, to them more. The side button on our iPhones? Sorry, stupid Siri—it now launches ChatGPT’s voice instead. Soon enough, we’ll be signing up for AI agents, installing AI speakers in our homes, and pinning AI-powered recording devices to our vests. Eventually, as both we and they interact with the world, we’ll begin to wonder, and then to demand: Why aren’t there advanced AI interfaces everywhere, in everything, in our cars and smart appliances, at the drive-throughs and information booths? They’re called chatbots for a reason: Voice is their killer application.

But it’ll take an actual product, as ever, to kill what’s come before. So look, first, to OpenAI, because it’s their game to lose. In the past year, Altman has stolen away a bunch of Apple’s manufacturing and wearables guys, and put Ive in charge of them, to make top-secret designs. Nobody can say for sure what they’re working on, but please. We know. They know. These guys are obsessed with the movie Her, the one where Joaquin Phoenix falls for a chatbot voiced by Scarlett Johansson. Altman allegedly even tried, like a modern-day Ursula, to steal ScarJo’s precious voice for ChatGPT. If he’s to dominate the world and its oceans of AI data, then OpenAI needs hardware, and so, yes, ScarJo be damned, you can be sure his people are busy prototyping an anti-smartphone device as we speak, some sort of always-on companion with an even sultrier fembot voice.

Is it, as in Her, an inconspicuous in-ear device? According to documents submitted as part of an ongoing trademark dispute, no. Apparently it might not even be a wearable. This, frankly, shocks us. With AirPods, its last great hardware innovation, Apple trained whole generations to stuff their ears full of floaty little bits of speaker, meaning the pieces are perfectly in place for a next-gen, AI-optimized form factor. And you don’t hire Ive to start from scratch. He’s a redesigner, not a radical.

Or is the idea that we still, somehow, need screens? Apple seems to think so: It, like Microsoft and Samsung and so many others, is building out its “smart home” offerings and adding displays left and right. Meta, meanwhile, is investing, or reinvesting, in smart glasses. (We don’t care how “good” they might be—glasses will never be universal.) Even a novel device like the Rabbit r1, which is voice-based, doesn’t run apps, and signals “a move away from the traditional screen-based paradigm,” as one AI CEO put it, still has a screen. Old habits, etc.

The fact is, screens suck and always have. In an exceedingly divided world, most people—including, per Pew, 74 percent of teens—seem to agree on that. Screens are clumsy, a necessary evil, an intermediary step. Some may cling on, but they were never going to last forever, for the simple reason that they slow our interactions with the all-important machines way down.

So imagine a post-screen world. No smudges, no cracks. No texting thumbs, no neck aches. Video and image won’t shrink, they’ll explode. Released from their verticality, they’ll be beamed into our eyes, projected onto surfaces. Everything will change, every map, every interior. If you thought audio tours were lame, just wait. The world will become a museum, and we its humble patrons, walking around in a daze, pointing at this, staring at that, freed from the screen, and talking, talking, all the while talking! To the machines, to everything, to nothing, to ourselves.
