Same Definition, New Meaning
People say AI changes everything. Usually they mean it makes old things faster. But the more interesting change is that it makes some old things unnecessary.
Take first principles. The definition is the same as ever: remove assumptions and reason from what must be true. What changes is what you discover when you do it honestly.
In software, we spent decades treating interface design as the center of the craft. We got very good at arranging forms, tabs, buttons, and graphs so a person could move information from one place to another while making as few mistakes as possible.
That was sensible. We had no better option.
But if you ask first-principles questions now, you get a different answer.
Why does this workflow exist? Usually: to collect data, reshape it, and present it to a human who decides what to do next.
Why does the human do that step? Usually: because the software couldn’t.
That last sentence used to be a law of nature. Now it is often just legacy architecture.
So the center of gravity moves. The work is less about designing screens and more about orchestrating intent: gathering the right context, choosing the right tools, executing safely, and making the result inspectable.
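That orchestration loop can be sketched in a few lines. This is a minimal, hypothetical illustration, not a real framework: the `Orchestrator`, `Step`, and tool names are all invented here to show the four concerns named above in code form — gather context, choose a tool, execute inside a policy check, and keep an inspectable trace.

```python
# A minimal sketch of "orchestrating intent" (hypothetical names, not a real API).
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Step:
    """One inspectable record: which tool ran, with what, and what came back."""
    tool: str
    args: dict
    result: Any

@dataclass
class Orchestrator:
    tools: dict[str, Callable]    # available tools (the "right tools" to choose from)
    allowed: set[str]             # safety policy: only these may execute
    trace: list[Step] = field(default_factory=list)  # makes every result inspectable

    def run(self, tool: str, **args) -> Any:
        # Executing safely: refuse anything outside the policy.
        if tool not in self.allowed:
            raise PermissionError(f"tool {tool!r} not permitted by policy")
        result = self.tools[tool](**args)
        # Record the step so a human can audit what happened, without
        # having to sit in the loop clicking through each screen.
        self.trace.append(Step(tool, args, result))
        return result

# Usage: two toy tools standing in for "collect data" and "reshape it".
orch = Orchestrator(
    tools={
        "fetch": lambda source: [1, 2, 3],      # pretend data source
        "summarize": lambda rows: sum(rows),
    },
    allowed={"fetch", "summarize"},
)
rows = orch.run("fetch", source="orders")
total = orch.run("summarize", rows=rows)
```

The point of the sketch is the shape, not the code: the human contribution moves from operating each screen to setting the `allowed` policy and reviewing the `trace`.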
This does not mean UI disappears. It means UI loses its monopoly.
Some interfaces will survive because they are genuinely human tasks: setting policy, handling moral ambiguity, resolving novel exceptions, and assigning accountability. But many interfaces were never the job. They were the scaffolding around the job.
When the scaffolding can carry itself, we should remove it.
That is the "same definition → new meaning" move. First principles still asks you to peel back assumptions. In the AI era, one of the assumptions you keep peeling back is that every important system needs a human clicking at the edge.
Sometimes it does. More often than we admit, it doesn’t.