

Introduction: Beyond the Hype
As we step into 2026, the news cycle around Artificial Intelligence remains a relentless firehose of product launches, breathless predictions, and existential warnings. To navigate the year ahead, separating fleeting hype from fundamental truth is essential.
This article distills five of the most surprising insights from the leaders building the future of AI. Drawing from recent talks by pioneers at OpenAI, Google, and Microsoft, these insights offer a clearer map of what will actually matter in 2026.
Key Takeaways for 2026
- Coding: Andrew Ng argues AI makes coding skills more essential, not less.
- Intelligence: Andrej Karpathy defines LLMs as having “Jagged Intelligence”: genius memory but severe cognitive deficits.
- Scaling: François Chollet’s data proves that simply making models bigger is no longer yielding “Fluid Intelligence.”
- Business: Aaron Levie predicts the death of “SaaS seats” in favor of outcome-based “Service-as-Software.”
- UX: We are still in the “MS-DOS era” of AI, with chat interfaces set to be replaced by ambient computing.
1. The Coding Paradox: Why Andrew Ng Says “Everyone Must Code”
As AI assistants get better at writing code, a common piece of advice has emerged: don’t bother learning to code; AI will just do it for you. According to Andrew Ng, a foundational figure in the AI field, this is a dangerous misconception. He argues that as AI lowers the barrier to entry for software engineering, the value of being able to wield that power increases.
I think we’ll look back on this as some of the worst career advice ever given, because as better tools make software engineering easier, more people should do it, not fewer.
Ng holds the controversial opinion that coding is the new literacy. On his own team, people in roles ranging from the CFO to the front desk receptionist use Python to automate their workflows.
This reflects a fundamental shift: The most important skill in 2026 is Deterministic Control: the ability to tell a computer exactly what you want it to do. It’s not about memorizing syntax; it’s about having the mental model to direct an AI to generate precise, functional systems.
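To make "deterministic control" concrete, here is a minimal sketch of the kind of script Ng describes a non-engineer writing: it tells the computer exactly what to do with some routine data. The data and field names are hypothetical examples, not from the article.

```python
# A hypothetical workflow-automation script: total expenses per category.
# The point is not the code itself but the mental model: specifying
# precisely what the computer should do, step by step.
from collections import defaultdict

def total_by_category(expenses):
    """Sum expense amounts per category, exactly as specified."""
    totals = defaultdict(float)
    for row in expenses:
        totals[row["category"]] += row["amount"]
    return dict(totals)

expenses = [
    {"category": "travel", "amount": 120.0},
    {"category": "meals", "amount": 35.5},
    {"category": "travel", "amount": 80.0},
]
print(total_by_category(expenses))  # {'travel': 200.0, 'meals': 35.5}
```

A receptionist who can express this logic, whether by typing it or by directing an AI assistant to generate it, wields exactly the power Ng is pointing at.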
2. The “Jagged Intelligence” of Your New AI Colleague
It’s tempting to anthropomorphize Large Language Models (LLMs) as super-smart humans. Andrej Karpathy, former director of AI at Tesla, warns that this analogy is fundamentally flawed. He describes LLMs as possessing “Jagged Intelligence”: a paradoxical mixture of superhuman abilities and startling deficits.
The Superpower
An LLM has encyclopedic knowledge and near-perfect recall, similar to the autistic savant character in Rain Man.
The Deficit
It suffers from Anterograde Amnesia. Unlike a human colleague who consolidates knowledge over time, an LLM’s core “weights” are frozen. Its context window is merely a temporary scratchpad that is wiped clean after every session.
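The frozen-weights versus scratchpad distinction can be sketched as a toy model. This is an illustration of Karpathy's point, not a real LLM API: the "weights" are fixed after training, while each session gets a fresh, temporary context that disappears when the session ends.

```python
# Toy illustration (not a real LLM API): fixed weights, per-session memory.
class FrozenModel:
    def __init__(self, knowledge):
        self.weights = dict(knowledge)  # fixed at "training" time, never updated

    def new_session(self):
        return Session(self)

class Session:
    def __init__(self, model):
        self.model = model
        self.scratchpad = []  # the context window: wiped with the session

    def tell(self, fact):
        self.scratchpad.append(fact)  # remembered only within this session

    def recall(self):
        return list(self.scratchpad)

model = FrozenModel({"capital_of_france": "Paris"})
s1 = model.new_session()
s1.tell("my name is Ada")
print(s1.recall())        # ['my name is Ada']

s2 = model.new_session()  # a new session starts with no memory of the last
print(s2.recall())        # []
```

Nothing the model learns in conversation ever makes it back into `weights`; that asymmetry is the "anterograde amnesia."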
LLMs are like the protagonists of Memento or 50 First Dates: entities whose memories are erased daily.
Combined with a susceptibility to hallucinations and prompt injection attacks, this “jagged” profile is a crucial reality check. You are not working with a human; you are working with a brilliant but amnesiac library.
3. The Scaling Wall: Why Bigger Isn’t Smarter
A dominant narrative in AI has been that “Scale is all you need”: that bigger data centers will inevitably lead to Artificial General Intelligence (AGI).
Research from François Chollet, creator of Keras, provides a stark counter-narrative. He argues that scaling up current architectures improves “Crystallized Intelligence” (memorization) but fails to produce “Fluid Intelligence” (novel problem solving).
To prove this, Chollet created the ARC Benchmark (Abstraction and Reasoning Corpus). The results are damning:
- Model Growth: 50,000x increase in compute/data since 2019.
- ARC Accuracy: Improvement from ~0% to only ~10%.
- Human Baseline: >95%.
So we can decisively conclude that fluid intelligence does not emerge from scaling up pre-training… you absolutely need test adaptation in order to demonstrate genuine fluid intelligence.
This suggests that 2026 will not be defined by larger models, but by Test-Time Adaptation: systems that can “think” and learn in real-time, rather than just reciting pre-trained patterns.
4. The Economic Shift: From “SaaS Seats” to “Service-as-Software”
For decades, the software economy relied on the SaaS (Software-as-a-Service) model: selling licenses per human user. Aaron Levie, CEO of Box, argues that AI Agents are about to “completely blow up” this foundation.
The shift is from Tools to Outcomes. Traditional software helps a lawyer work faster; an AI Agent does the legal analysis.
- Old Model: You buy 10 seats for 10 lawyers.
- New Model: You pay for 500 contracts reviewed.
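The arithmetic behind Levie's point can be made concrete with hypothetical prices (the dollar figures below are illustrative, not from the article): seat-based revenue is capped by headcount, while outcome-based revenue scales with the volume of work the agents perform.

```python
# Hypothetical pricing comparison: seats vs. outcomes.
def saas_revenue(seats, price_per_seat):
    """Old model: revenue is bounded by the number of human users."""
    return seats * price_per_seat

def outcome_revenue(units_of_work, price_per_unit):
    """New model: revenue scales with outcomes delivered, not headcount."""
    return units_of_work * price_per_unit

# A firm with 10 lawyers vs. agents reviewing 500 contracts (illustrative):
print(saas_revenue(10, 100))      # 1000
print(outcome_revenue(500, 20))   # 10000
```

Even at a far lower price per unit, the outcome model's ceiling is the amount of work in the market, not the number of people on the org chart.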
I know you only have three lawyers, but my agents could do the amount of work of basically unlimited lawyers.
This transition to Service-as-Software means pricing will no longer be capped by headcount. It rewrites the total addressable market (TAM) for the entire tech industry, moving value capture from “productivity” to “labor replacement.”
5. The “MS-DOS Era” of Interfaces
Our primary interface for AI today is the Chatbot. While functional, visionaries like Dylan Field (CEO, Figma) and Sam Altman (CEO, OpenAI) view it as a primitive, temporary bridge.
“It feels intuitively like we’re in the MS-DOS era of AI right now.” — Dylan Field
Just as command-line interfaces eventually gave way to the Graphical User Interface (GUI), the “text box” is destined to become obsolete. Karpathy adds that while LLMs are a new “Operating System,” we are currently in the “1960s of OS design.” We lack the mouse, the window, and the desktop metaphor for this new intelligence.
The future isn’t a better chatbot; it is Ambient Intelligence. As Altman envisions, the interface will eventually “melt away,” leaving a trusted assistant that proactively navigates the digital world on your behalf, without you needing to type a single prompt.
Conclusion: The Best Time in History
From the revelation that AI demands more coders to the rewriting of the SaaS business model, the architects of this revolution see a future far more nuanced than the headlines suggest. We are living through a “Jagged” transition: defined by models with amnesia, interfaces as crude as MS-DOS, and limits to what raw scale can achieve.
As we stand at this precipice, Sam Altman offers a definitive call to action for builders:
This is the best f***ing time ever in the history of technology… ever, period… to start a company.