We are certainly living through an extraordinary moment in the evolution of technology (and society). Techno-optimists opine that we are about to have unlimited intelligence at our disposal, that we are on the cusp of a scientific revolution unlike anything in human history, and that the only limit is our imagination. There may be a modicum of truth in those statements, but I want to emphasize that in many problems, the main bottleneck isn't cognitive intelligence.
Types of Bottlenecks
Imagine you could decompose any scientific or engineering challenge into a pie chart of bottlenecks. What's actually slowing us down? For the sake of argument (these aren't independent), here are five major categories:
Deep Intelligence (I): It is difficult to define intelligence, but let's assume it is a proxy for the cool stuff: theory, creative leaps, the flash of insight that suggests a new catalyst or a novel algorithm. I'd add that in most science problems, the intelligence need not be anywhere near Einsteinian-level deep.
Routine and Programmable Labor (R): The grunt work that keeps science moving. Data cleaning that takes weeks. Running standard experiments hundreds of times. Writing simple code, documenting, making pie charts for this blog (I actually gave Claude some numbers and it did it for me), etc.
Experiments and Data (E): The physical reality of science. Running particle accelerators or exascale solvers. Collecting samples from the ocean floor. The universe reveals its secrets on its own timescales.
Infrastructure and Capital (C): The $10 billion telescopes. The wet lab. The supercomputers. The venture capital. The factory that needs to be built to test an idea at scale.
Reward Signals, Verification, and Regulation (V): Proving something works. Safety certifications for aircraft. Peer review.
The key point is that even incredibly powerful AI addresses only some of these bottlenecks. By 'intelligence', I am primarily referring to cognitive intelligence, though intelligence can take different forms.
Amdahl's Law & Scientific Progress
Amdahl's Law (adapted from computer architecture) can be interpreted as follows: if a fraction f of your work can be accelerated by a factor S, your maximum total speedup is:
Maximum speedup = 1 / (1 - f + f/S)
For instance, if 40% of the work can be accelerated by 90x, the total speedup is only about 1.65x; the work gets done roughly 65% faster. That's it.
Applied to scientific progress, if AI can accelerate a fraction f of our bottlenecks (primarily the I and R components, and some parts of E and V), then even with infinitely powerful AI, our speedup is bounded by 1/(1-f).
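To make the arithmetic concrete, here is a minimal Python sketch of the formula above. The function names and the 40%/90x numbers are just the illustrative values from this post, not measurements of anything.

```python
def amdahl_speedup(f, s):
    """Overall speedup when a fraction f of the work is accelerated by a factor s."""
    return 1.0 / ((1.0 - f) + f / s)

def speedup_bound(f):
    """Limiting speedup as the accelerated fraction becomes infinitely fast (s -> infinity)."""
    return 1.0 / (1.0 - f)

# The example from the text: 40% of the work accelerated by 90x.
print(amdahl_speedup(0.4, 90))   # ~1.65x, i.e. the work gets done roughly 65% faster
print(speedup_bound(0.4))        # ~1.67x ceiling, even with infinitely powerful AI
```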
Of course, this is a simplistic analysis and assumes the bottleneck proportions don't change as 'intelligence' enables the crossing of a threshold, after which the proportions could shift. In other words, the speedup could be dynamic and more pronounced. But this is still a reasonable mental model for thinking about acceleration, so let's put it in concrete terms.
Tale of Many Sciences
Consider pure mathematics: an often-referenced example of an "AI-friendly" domain. The laboratory is a whiteboard and a laptop. Experiments are cheap; one can check some proofs in milliseconds. Maybe 90% of the bottleneck lives in deep intelligence and routine mathematical labor. AI could plausibly deliver a 10x acceleration here. It is probably reasonable (in some cases) to expect a decade of progress in a year.
Systems like AlphaProof are early indicators, but there's a long way to go. Math will not be "solved in 2026", but it is certainly in the realm of possibility that the way mathematics is done may be completely transformed in the next couple of years. Human-in-the-loop, but transformed. The bottleneck that is being addressed is real and dominant. The feedback loop is tight. The path from insight to verification is short and clean.
For drugs and materials, AI definitely helps in designing molecules, predicting interactions, and identifying targets. But then we hit the wall of reality. Perhaps an AI system designs promising drug candidates in hours, but going from there to synthesis, to mice, and then (maybe) to humans will take years. Of course, digital twins and robotic labs can help shorten some timescales, but some bottlenecks remain.
Now consider commercial aviation. Yes, AI can help design better wings, optimize fuel efficiency, and run many simulations automatically. But we still need to build prototypes. Run thousands of hours of physical tests. Convince regulators. Build a supply chain. Build new manufacturing lines. Retrain pilots. Replace fleets worth hundreds of billions of dollars.
Perhaps 30% of aviation's bottleneck is AI-addressable. Even with perfect AI, Amdahl's Law says we're looking at maybe a 1.4x speedup. Of course, this is a simplistic analysis and some effects may compound, but under no circumstances are we getting a 10x speedup in this case. Let's say we achieve a 1.4x speedup and a 1.1x improvement in fuel efficiency. That *would* still be truly transformative for the industry and have cascading effects downstream.
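Plugging the rough fractions from the math and aviation examples into the same ceiling formula (again, a sketch; the 0.9 and 0.3 are the illustrative guesses from this post, not data):

```python
# speedup_bound is the 1 / (1 - f) ceiling from the sketch above.
def speedup_bound(f):
    return 1.0 / (1.0 - f)

print(speedup_bound(0.9))   # pure-math guess (~90% AI-addressable): ~10x ceiling
print(speedup_bound(0.3))   # aviation guess (~30% AI-addressable): ~1.43x ceiling
```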
Some of these bottlenecks might apply to realizing fusion energy, especially if the bar is delivering usable energy, but who knows... once a threshold is crossed...
AI Is Its Own Bottleneck
The AI systems we're counting on to accelerate science face their own bottlenecks, often the same ones plaguing the fields they're trying to accelerate.
Training data: Two of the biggest successes in AI for Science epitomize this.
1. Mapping the 3D structure of a protein used to take an entire PhD thesis. AlphaFold can do it in seconds. Demis Hassabis has said it did the work of a billion PhD-years in one year. Truly impressive, but it still required probably 100,000 PhD-years' worth of effort to generate the training data... and AlphaFold brilliantly solved one problem: going from an amino acid sequence to a static structure. There is a lot more that is important.
2. One of the very few real successes in the so-called field of scientific machine learning is that AI models have gotten really good at short-term weather prediction. But as one of my collaborators (Prof. Karen Willcox) points out, that was only possible because of 50+ years of work developing scientific simulators and computational science techniques to assimilate data from a large number of physical sensors. Again, like AlphaFold, this is a sweet-spot problem that AI solves beautifully, but the training data is key.
The reward signal: I characterized 'intelligence' as something AI will eventually replicate to a significant degree. Let me be clearer: AI will replicate or replace tasks where the reward or verification signal is clear and cheap to access. Go and Chess are good examples: the objective is clear and the rules are clear. That is less true in many problems, and even those 'intelligent tasks' might not be taken over by AI. How do you train an AI to optimize for "good science"? In my field of turbulence, even the goal is unclear, and in some problems, what constitutes progress is unclear. How do we weigh the efficacy of a drug against side effects, against accessibility, against cost? What is the correct reward signal there?
So...
What I have said so far must be fairly obvious, but it needs to be said, because narratives gloss over the fact that AI delivers dramatic improvements in the intelligence- and basic-automation-limited parts, modest improvements in the hybrid spaces, and basically not much for some truly physical bottlenecks.
Sometimes, we have to accept that some progress is fundamentally rate-limited by the universe, not by our intelligence or tools. These problems require moving atoms, building infrastructure, and changing institutions: the very things AI can help with, but not radically accelerate. Realizing this is also important so as not to create unrealistic expectations of AI. AI is going to be transformative even with these bottlenecks.
Again, I have already mentioned that this breakdown of bottlenecks is somewhat simplistic: the bottlenecks aren't truly independent but deeply interconnected. Perhaps breakthroughs in intelligence (recursive self-improvement) will create tools that address physical bottlenecks in ways we can't imagine? That is not out of the realm of possibility, and this is why R(&D) is exciting.
Additionally, if scientific productivity (via "intelligence acceleration") increases by, say, 15%, the impact on GDP could be enormous. All of this is difficult to quantify, but my main point here is to keep bottlenecks front and center.
