I started this on LinkedIn but hit the character limit, so here we are.
This beauty showed up on my feed. I have seen some bad AI takes, some terrible AI takes, some downright dumb AI takes, but this may be the takiest take to ever take the take.
Nailed it! Google’s ex-CEO Eric Schmidt on the future of AI:
“User interfaces are going to go away”.
What drives this Pavlovian response on LinkedIn? [Executive] makes pronouncement! [Influencer] breathlessly repeats pronouncement! (BTW subscribe to [Influencer]'s Substack, where you can pay to see more such mega-ditto parroting!)
The driver is clicks, probably, but still. Is the hope that the residual reflected light from the quoted executive outshines the need to bring context and judgement to your post? Do you need to throw away any sense of history, background or even a couple minutes of Googling to get the post out and bask in the glow?
These posts are incessant - probably millions of them are posted a day, the dandruff on the shoulder of LinkedIn. So what makes this one so bad?
Ok first, natural language interfaces...are still user interfaces. There is still a user and a thing they are interacting with. That interface requires intentionality and design to be effective. Whether a human or an AI does the designing, the interface doesn't go away - it can't. Separate entities are interacting, so there has to be an interface.
So let's be charitable and read what he said as "visual user interfaces are going away because AI will make them obsolete." Let’s even put aside the issues of hallucination and bad information we currently see in LLMs. Even then, he's missing really basic things about UI - like "why UI paradigms are the way they are today" and "why people use things to do things."
Two assertions he makes are particularly demonstrative of this lack of context:
The desktop killed the command line.
It didn't. It literally didn't. Usage of the command line moved to more specialized domains (engineering, data science, sysops, etc.), where it remains a massively efficient interface. Even for AI - I have mostly interacted with things like Claude Code or internal LLMs via a command line.
Touchscreens killed the mouse.
Man, I don't even know. Touchscreens are great and all, but they are ergonomic nightmares for many tasks. You are free, as they say, to do you, but if you are constantly typing long text on a touchscreen, then enjoy financing your orthopedist's kid's college education. Even my gen-alpha kids, who are growing up on iPads, still use a keyboard and mouse to do most assignments for school.
It's rare that anything is ever totally killed in UI. We are always working with a palimpsest of UI paradigms. Are pedals and a steering wheel the absolute best way to drive a car? Maybe or maybe not, but they have evolved to this point for a reason (or lots of reasons - ergonomics, not having to learn a new control scheme every time you change cars, manufacturing standardization). You certainly could make a touchscreen-only car, but it's probably not a great idea to do so. Those layered menus that he derides are similarly there for a reason - they are often balancing a lot of different concerns, including marketing, ease of use, the ability to search and browse, discovery, spear-fishing and more.
Going back to the car … he might Well Actually me here and say that the best driving UI is an autonomous vehicle where the passenger doesn't do anything. Putting aside that there are always going to be people who want the experience of driving and who will need a UI, there still needs to be some UI between the passenger and the autonomous driver. Again, maybe not a visual one per se (although I would probably want to see how fast my Waymo is driving even if I'm not driving it), but some kind of designed interface, even if it's a verbal one.
Which leads us to the second thing he misses. People use different UI paradigms to do different things. There may well be self-assembling UIs like the ones he describes and they might even be great experiences for some use cases (some of my colleagues at Prime Video were doing good work on this type of problem). But they aren't going to cover every customer intent. Sometimes I want to play something specific on Spotify, sometimes I want to listen to Spotify radio starting with a specific song (at least until Spotify starts playing The Talking Heads to me over and over. I like The Talking Heads just fine, but no one likes The Talking Heads as much as Spotify thinks I like The Talking Heads.), sometimes I want something curated.
Will self-assembling UIs from prompts help in use cases where a user needs some direction? Maybe, probably, but there are going to be many times where they won't. Or they just aren't the interaction mode that someone wants to use at the time. They are definitely not going to be the categorically most efficient way to interact with something.
Just like with a Google search bar, you need a certain amount of self-direction to start an AI conversation. You need some prior knowledge and some skill at prompt hacking. Rather than reducing cognitive load, prompts are often the opposite of “don’t make me think.”
Beyond the specific issues with this post, this category of post is part of what makes LinkedIn feel so … barren? Devoid? I know we’re not going there to read great literature - LinkedIn is always going to be utilitarian at best. But it would be great to see just a bit of thought and critical thinking go into what shows up on our feeds. At a minimum I’d like to hear what the post author thinks for once - not what they want to repeat from some snippet from some other exec.
