Research Note

Predictions and Themes for 2026

by QSL Team

Why This Matters

You have to skate to where the puck is going to be, and this puck is moving fast. Otherwise, your brand-new AI-enabled product is obsolete before you ship it.

Here's where our heads are at; we're posting just in case we turn out to be right.

We won't say much about the obvious predictions here:

  • Retail AI is going to explode (in a good way)
  • Security and governance will adapt to manage always-on AI "employees"
  • Multimodal will be big

What we wanted to focus on were the interesting possibilities that might be more controversial. Here goes:


Tokens as Startup Investment

The AI-native unicorn with a handful of employees is coming. Teams leveraging AI and autonomous agents can demonstrate product-market fit with minimal burn, making early-stage funding less critical than ever. This will shift power away from traditional VCs toward founders who can prove their thesis quickly.

However, startups need compute/inference - the CEO of NVIDIA recently said he would be worried about a developer who isn't using $250K/year in tokens (which does sound great for NVIDIA stock...). We think we will increasingly see tokens/inference-for-equity deals from LLM providers, for several reasons:

First, foundation model providers and data centers can make highly leveraged investments based on token economics: the marginal cost of serving tokens is far below their list price, so a token grant costs the provider much less than the equivalent cash investment. It also potentially locks in winners, contractually.
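The leverage argument above can be sketched with some back-of-envelope arithmetic. All numbers here are illustrative assumptions (the margin figure is made up for the example), not real provider pricing:

```python
# Why tokens-for-equity is attractive to a provider: the provider "invests"
# token credits at list price, but only incurs its marginal serving cost.

def provider_leverage(credit_value_usd: float, gross_margin: float) -> float:
    """Dollars of investment booked per dollar of actual serving cost,
    given the provider's gross margin on inference."""
    cash_cost = credit_value_usd * (1 - gross_margin)
    return credit_value_usd / cash_cost

# Assume a $1M token credit and an 80% gross margin on inference:
# the provider gets $1M of equity exposure for roughly $200K of real cost.
print(round(provider_leverage(1_000_000, 0.80), 2))
```

Under these (assumed) numbers the provider gets roughly 5x leverage versus a cash investment, which is the economic core of the prediction.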

Second, token costs are going to be a major cost component for startups in the early stages, and even more important in growth stages for startups whose token use scales with revenue.

Look for deals where LLM providers become kingmaker investors with significant leverage over the startup ecosystem.


Inference Capacity Isn't Assured

AI inference demand might outrun compute supply, at least until all the capital going into inference capacity eventually produces some oversupply. Any one of the following could delay buildout enough that we have years of supply-demand problems:

  • Resistance to data center construction and the associated energy and water usage
  • Shortages of chips and other hardware
  • A macro setback, like the private credit markets falling apart
  • A China-Taiwan conflict

Everyone keeps baking in assumptions that token costs will go down. That's probably true long-term, but companies need to hedge against cost volatility in business-critical AI services. We believe some of this issue is at the heart of the recent news about "partnership" discussions between the major LLM providers and major private equity software investors. Small language models and edge compute might be propelled faster if inference capacity is constrained.
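To see why volatility matters and not just the long-run trend, here is a hedged scenario sketch for a business whose inference spend scales linearly with revenue. Every figure (tokens per revenue dollar, prices, other COGS) is a made-up parameter for illustration:

```python
# Illustrative scenario math: how token price swings hit gross margin
# when token consumption scales with revenue.

def gross_margin(revenue: float, tokens_per_rev_dollar: float,
                 price_per_mtok: float, other_cogs_pct: float) -> float:
    """Gross margin after token costs and other cost of goods sold."""
    token_cost = revenue * tokens_per_rev_dollar / 1e6 * price_per_mtok
    return 1 - other_cogs_pct - token_cost / revenue

# The same business under three token-price scenarios ($ per million tokens).
for price in (2.0, 5.0, 10.0):
    m = gross_margin(revenue=1_000_000, tokens_per_rev_dollar=50_000,
                     price_per_mtok=price, other_cogs_pct=0.15)
    print(f"${price}/Mtok -> gross margin {m:.0%}")
```

Under these assumed parameters, a token price swing from $2 to $10 per million tokens takes gross margin from healthy SaaS territory down to levels that break the business model, which is exactly the exposure a hedge is for.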


An Internet of and for Agents Is Forming

An agentic web is coming. Visa projects that 2025 will be the last year people shop and check out alone, while Mastercard is rolling out systems for agents to use special tokens and transact on behalf of users with clear attribution.

WebMCP (February 2026) represents a first shot at turning the human web into a human-agentic web by standardizing how agents interact with digital ecosystems. This isn't just about e-commerce; it's about a parallel internet optimized for agents, not humans. Sending your Openclaw to the store to buy milk is inevitable.


Openclawification Leads to 24/7 AI Team Members

Openclawification of AI is going to happen. This isn't all that controversial to some, but we hear many companies dismiss Openclaw as an interesting but unusable toy. That might be correct now, but we believe Openclaw-like capability (not necessarily "Openclaw" itself) will show up in increasingly sophisticated ways, with some users running autonomous tasks for hours at a time. Further, individuals will likely adopt it faster than enterprises (as is happening now in China) and actually create more inference demand, faster than expected. Rather than thinking about "AI applications," we will start to think about "AI team members" working 24/7.

Yeah, there are real engineering challenges to keeping these systems reliable, safe, and compliant with corporate governance, but the genie is out of the bottle on this one.


Startups Will Form to Knock Off Systems of Record

ERP cloud migrations are already causing pain and enormous costs in enterprises. The historic moat for all of these systems was the massive cost of configuration and consulting, something the big consulting firms trained non-developers to do. AI can do that work, and once it does, the moat goes with it.


Use of Offshore Software Teams Is Going to Change

The value of non-senior offshore developers will drop dramatically as coding agents handle repetitive work. Senior engineers who can architect, build, and deploy AI systems will command premium compensation, no matter where they are. Expect to see more risk-sharing offerings from offshore software consulting firms, because they will have to do that to win the work. Bad news for India's economy.


Backlash Will Be Real (But Selective)

Backlash to AI is growing, but it will be more nuanced than blanket opposition. The primary drivers will be job displacement in routine technical and administrative roles, compounded by the energy costs and environmental concerns around training and deploying large models. However, the only real impact this will likely have on the growth of AI is interruption of data center buildout, because governments will struggle to regulate AI without risking falling behind economies that don't. We'd also bet we see some anti-AI planks in midterm candidates' platforms.