The Intelligence Age

A reflection on the dawn of the Intelligence Age and its impact on society, inspired by Sam Altman's blog post of the same name.

Sam Altman wrote a piece called "The Intelligence Age" that got me thinking. Not because I agree with all of it, but because it forced me to articulate what I actually believe about where we're headed.

The core claim is that AI will enable achievements far beyond what any individual could accomplish alone. I think this is probably right, though the timeline is uncertain. What interests me more is the second-order question: what happens to people when this is true?

Here's the optimistic version. Everyone gets access to something like a personal team of experts. Children learn from tutors perfectly adapted to how they think. Medical treatment gets coordinated by systems that can hold the entire context of a patient's history. Scientific discovery accelerates because the bottleneck of human attention gets removed. Material scarcity, if not eliminated, gets dramatically reduced.

I find myself wanting to believe this version. And I think there's a reasonable chance it's correct. The trajectory of deep learning has been surprisingly consistent: more compute, better results. That's not guaranteed to continue, but it has a track record now.

But I notice that most writing about the Intelligence Age, including Altman's, skips too quickly past the hard parts.

The labor market question is the obvious one. Jobs will change or disappear. The standard response is that this has happened before: lamplighters, switchboard operators, the usual examples. New jobs emerged, people adapted, everything was fine.

I'm not sure the analogy holds. Previous automation targeted specific skills. AI targets general cognitive ability. When the thing being automated is "thinking," it's less clear what humans move to next.

The optimistic answer is that humans will find new ways to contribute value, ways we can't currently imagine. Maybe. But "we can't imagine it" is doing a lot of work in that sentence.

The other hard part is access. If compute stays expensive and concentrated, AI becomes a tool that widens inequality rather than reducing it. The benefits flow to whoever controls the infrastructure. This isn't a technical problem. It's a political and economic one. And I don't see obvious mechanisms that guarantee good outcomes here.

I should be clear about my uncertainty. It's possible the transition will be smoother than I fear. Humans are adaptable. Markets are creative. The desire to contribute and find purpose runs deep, and people may discover new roles faster than the pessimistic models suggest.

I also think there's something to the argument that reducing material scarcity is good even if the transition is painful. Being poor in 2024 is better than being rich in 1924 in most ways that matter.

What I keep coming back to is the pace. Previous technological transitions happened over generations. This one might happen over years. The difference matters because adaptation takes time, and compressing the timeline means more people caught in the gap between the old economy and the new one.

I'm hopeful about the destination. I'm worried about the path.

The honest answer is that I don't know how this plays out. Nobody does. The systems we're building are genuinely new, and extrapolating from historical analogies only gets you so far.

What I do think is that the decisions made in the next decade, about infrastructure, access, and how we handle displacement, will matter enormously. And that most public discussion of AI focuses on the wrong things: either utopian visions or apocalyptic fears, when the actual questions are more mundane and more urgent.

How do we make compute accessible? How do we retrain people whose skills become obsolete? How do we maintain social cohesion when economic roles shift rapidly?

These aren't exciting questions. But they're the ones that will determine whether the Intelligence Age is good for most people or just for some.