Disclaimer: I am not an expert in quantum computing. My knowledge of this field stems from my education in electrical engineering and computer science, conversations with other investors (VCs and fund managers), and self-education from research papers and social media. This is an investment view of quantum computing with a science focus.
Solving Hard Problems
In computer science, complexity describes how difficult a problem is to solve. For our purposes, problems can be broadly divided into the following classes: P, NP, and BQP.
P (polynomial time) problems are the easiest to solve, and traditional computers solve them regularly with minimal effort. NP (non-deterministic polynomial time) problems are those whose solutions can be checked quickly, but the hardest of them are incredibly difficult (bordering on impossible) to solve in a reasonable amount of time. They can be deceptively simple to state, such as the travelling salesman problem, but finding the solution is immensely complex.
BQP (bounded-error quantum polynomial time) problems are where quantum computers excel – the prototypical quantum problem is breaking RSA encryption (which protects your credit card and bank accounts) using Shor’s algorithm. Below is an important diagram showing the hierarchy of complexity classes; it forms the basis for why quantum computers are not compatible with AI in its current state of evolution.
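In compact form, the hierarchy looks like this (the containments are standard complexity-theory results; the separations are widely believed but unproven conjectures):

```latex
% Known containments:
%   P \subseteq NP   and   P \subseteq BQP \subseteq PSPACE
% Widely believed but unproven:
%   integer factoring (the RSA problem) lies in NP \cap BQP but outside P,
%   and BQP does not contain the NP-complete problems.
\[
  \mathrm{P} \subseteq \mathrm{NP}, \qquad
  \mathrm{P} \subseteq \mathrm{BQP} \subseteq \mathrm{PSPACE}, \qquad
  \text{Factoring} \in \mathrm{NP} \cap \mathrm{BQP}
  \ \ (\text{believed} \notin \mathrm{P}).
\]
```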
From Complexity to Matrix Multiplications
Current LLM architectures rely on the following 3 unit operations at the most foundational level.
All three unit operations are P-type problems. That is, traditional computers handle them just fine. In fact, traditional computers and GPUs excel at parallelizing these operations, especially matrix multiplications (which have a rich mathematical backbone). The transformer – the genesis of the AI supercycle – is what made GPUs so valuable, because its architecture is highly parallelizable.
Quantum Offers No Advantage over Traditional Computing for Matrix Multiplications
Matrix multiplications are thus the critical unit of compute in LLMs (attention scoring is in fact mostly matrix multiplications). Matrix multiplication is a basic P-type problem, and since quantum computers offer an advantage only on problems believed to sit outside P (the BQP-but-not-P territory), quantum computing confers no advantage over traditional computing for LLMs (at least with current architectures).
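To make that concrete, here is a minimal NumPy sketch of scaled dot-product attention (my own illustration, not code from any particular model): the heavy lifting is two matrix multiplications, with a cheap elementwise softmax in between; exactly the kind of work GPUs already parallelize extremely well.

```python
# Minimal sketch of scaled dot-product attention (illustrative only).
# The expensive steps are the two matrix multiplications (Q @ K.T and weights @ V);
# the softmax in between is cheap, elementwise work.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: arrays of shape (seq_len, d_model). Returns (seq_len, d_model)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # matmul #1: (seq, seq)
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V                              # matmul #2: (seq, d_model)

# Toy usage: 8 tokens, 16-dimensional embeddings
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((8, 16)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (8, 16)
```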
Quantum computing here is like trying to get from Union Square on 14th St. to Times Square on 42nd St. in New York City in a Ferrari during rush hour, when a bicycle (traditional computing) would get you there in the same amount of time, if not faster.
Quantum Computing Narrative has been a Speculative Deep Tech Play
When the odds of Trump winning the 2024 U.S. presidential election began to materially increase, we saw quantum names (RGTI, IONQ, QBTS, QUBT) start to rally on speculation of increased U.S. government support under a second Trump administration (echoing initiatives from his first term, like the National Quantum Initiative Act, or NQIA).
The stratospheric rise in quantum names began in early October 2024 (perhaps as NVDA momentum was starting to wane and tech investors were looking for the next AI-adjacent narrative). These names started going parabolic around 12/19/24, following Google’s announcement of its own quantum chip, Willow.
Willow demonstrated quantum error correction below threshold (logical error rates fell, rather than exploded, as more physical qubits were added), shifting investor perceptions from “decades away” to “5-10 years to utility.” This boosted confidence in pure plays like RGTI (superconducting qubits), IONQ (trapped ions), and QBTS (quantum annealing) despite their minimal revenues, capex-intensive R&D, and no clear path to profitability or even meaningful near-term commerciality.
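For the technically inclined, the standard surface-code scaling behind that claim (a textbook relation I am adding for context, not a formula from Google’s announcement) is that once the physical error rate p drops below the threshold p_th, logical errors shrink exponentially with the code distance d:

```latex
% Below threshold, adding physical qubits (a larger code distance d) suppresses
% the logical error rate exponentially instead of compounding the noise.
\[
  \varepsilon_L(d) \;\sim\; \left(\frac{p}{p_{\mathrm{th}}}\right)^{\lfloor (d+1)/2 \rfloor},
  \qquad p < p_{\mathrm{th}}.
\]
```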
I believe non-technical investors (both institutional and retail), like the BlackRock Technology Opportunities Fund, thought they were buying into the next AI-adjacent narrative but failed to understand the science underpinning this flawed thesis. If a $7 billion institutional fund can be fooled, I don’t think the average retail investor had much of a chance to cut through the noise coming from X and Reddit.
Quantum Computing for building Foundation Large World Models
Let me make a brief digression into one area of AI where quantum computing may have a significant impact: developing “world models” for the next generation of LLM architectures, what researchers like Fei-Fei Li call spatial intelligence. I believe these are the three critical issues that need to be solved for LLMs to reach their next step function:
1. Causal reasoning
2. World model / spatial intelligence
3. Energy efficiency
DeepSeek-R1, released in early 2025, contributed significantly to our understanding of energy efficiency – for example, how clever quantization techniques can dramatically reduce the memory requirements of large-parameter LLMs.
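As a rough illustration of why quantization matters for memory (a generic int8 sketch of my own, not DeepSeek’s actual scheme, which relies on FP8 training and other techniques), consider a single transformer weight matrix:

```python
# Generic symmetric int8 weight quantization (illustrative; not DeepSeek's method).
# Storing a weight matrix in 8 bits instead of 32 cuts its memory footprint ~4x,
# at the cost of a small, bounded rounding error.
import numpy as np

def quantize_int8(w: np.ndarray):
    """Return int8 weights plus the per-tensor scale needed to reconstruct them."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).standard_normal((4096, 4096)).astype(np.float32)
q, scale = quantize_int8(w)

print(f"fp32: {w.nbytes / 1e6:.1f} MB   int8: {q.nbytes / 1e6:.1f} MB")
print(f"max rounding error: {np.abs(w - dequantize(q, scale)).max():.5f}")
```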
Equipping AI systems with robust causal reasoning remains a fundamentally unsolved problem. Many researchers argue that mastering this capability represents the critical frontier between today’s pattern-matching systems, often built on next-token prediction, and the emergence of human-level cognitive intelligence.
The most approachable problem right now is developing what Fei-Fei Li at World Labs describes as large world models, or LWMs, in contrast to large language models, or LLMs.
Fei-Fei argues that spatial intelligence is even more fundamental than verbal intelligence, and that LWMs will let us address this fundamental weakness of existing LLM-based AI models. It’s hard to argue with the basic observation that language exists in an abstract space constructed by human thought (a Platonic existence), while reality is governed by the physical laws of nature.
More precisely, this world-model framework posits that intelligence is not merely a product of passive statistical pattern recognition over large-scale datasets, but emerges from an agent’s capacity to learn through sensory-motor interaction with a multimodal environment. The research goal of this paradigm is therefore centered on building foundation LWMs that learn a predictive model of the world by processing and integrating egocentric, multi-sensory data streams – encompassing visual, auditory, and physical interactions. In essence, replicating human perception.
In brief: although quantum computing may have applications in developing LWMs, which could help us advance to the next step function in AI model evolution, it does nothing to enhance the performance of current LLMs.
Reality of Quantum Computing
Quantum mechanics has a storied history in science. I won’t go through the entire history, but it started with Max Planck in 1900 and his quantum explanation of black-body radiation, touched Albert Einstein (Nobel Prize in 1921 for the photoelectric effect), Werner Heisenberg with matrix mechanics in 1925, and Erwin Schrodinger (of Schrodinger’s cat fame) with wave mechanics in 1926, and culminated in Paul Dirac’s unification of matrix and wave mechanics with Einstein’s special relativity (embodied in the 1928 Dirac equation), which grew into quantum field theory (QFT), the field in which Richard Feynman’s seminal work on quantum electrodynamics (QED) earned him the Nobel Prize in 1965.
That’s a mouthful, but that’s basically the seed of quantum computing*
*No dogs, especially not Daisy, were harmed during my validation of this investment narrative on quantum computing.
Quantum computing was born in 1980, when Paul Benioff first described a quantum mechanical model of a Turing machine (the mathematical model of a digital computer), with Feynman providing the motivation for quantum computing in 1982 and David Deutsch providing the formal mathematical framework (the universal quantum Turing machine) in 1985; the Deutsch-Jozsa algorithm followed in 1992.
This captured the first phase of the evolution – the mathematical foundation. The second phase involved developing the algorithms, the “killer app,” that would justify the enormous effort of building any type of quantum computer.
Peter Shor did just that with Shor’s algorithm in 1994, showing that a quantum computer could quickly crack RSA (the encryption behind banks and credit cards). Per our earlier discussion of complexity classes, cracking RSA boils down to factoring enormous numbers, a problem that sits in BQP but is believed to lie outside P, so a traditional computer cannot do it for realistic key sizes in any reasonable amount of time (think the entire life of the universe). A sufficiently large, fault-tolerant quantum computer could do it within a fraction of a fraction of a human lifetime. This was the killer app.
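A toy sketch of the asymmetry RSA depends on (my own illustration with tiny numbers; real RSA moduli are 2048+ bits, far beyond any known classical factoring method):

```python
# Toy sketch of the asymmetry behind RSA (illustrative only, tiny numbers).
# Multiplying two primes is instant; recovering them from the product by trial
# division already takes ~500,000 steps for this 12-digit modulus, and the cost
# grows exponentially with the number of digits -- hence 2048-bit RSA is safe
# from classical attack, but not from Shor's algorithm on a large fault-tolerant
# quantum computer, which factors in polynomial time.
p, q = 999_983, 1_000_003            # two primes near one million
n = p * q                            # the public modulus: trivial to compute

def factor_by_trial_division(n: int) -> tuple[int, int]:
    d = 3                            # n is odd here, so skip even divisors
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    raise ValueError("no odd factor found")

print(factor_by_trial_division(n))   # (999983, 1000003), after ~half a million divisions
```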
So the mathematical model (theory) led to the algorithms (application), which provided the impetus for the product development of quantum computers. The “modern era” of quantum computing has thus been around for 30 years, a long time for scientific research, especially in the age of Moore’s Law and the birth of LLM-based AI models.
During that time, we have seen five principal approaches to building stable “qubits” to make a quantum computer feasible, including:
Superconducting Qubits (IBM, $GOOGL)
Trapped Ions ($IONQ, Honeywell)
Topological Qubits (theoretical, pursued by Microsoft)
The hardware phase is where we are today: the Noisy Intermediate-Scale Quantum (NISQ) era, in which we have built small, imperfect quantum processors and are learning how to use them. These are all research projects with no near-term commercial viability, and they have essentially no relevance to LLM-based AI models. However – be warned – if you ask a quantum computing engineer or a VC investor in this space, expect a lot of bias about the “rapid pace of development” and the “incredible TAM that quantum will unlock.”
Quantum computing continues to be an area of academic and private research and from my perspective as a thematic growth investor, it is not investible at this moment (2025-2030).
Investment Implications
I believe the AI quantum computing narrative is a broken one. We have seen the cracks emerge, and the names are now cratering, with $QUBT, $QBTS, $IONQ, and $RGTI down 50% over the past month. The systems paradox is being resolved as investors realize the thesis of AI quantum computing was a mirage.