2026 AI & ML.
Natural-language AI agents quietly become Europe’s default interface for work
François Paulus, Co-founder & Executive Chairman
For the past few years of my VC career, AI in Europe has been something you hid in the back of the system: a recommendation engine in a telco stack, a fraud model in a bank, a proof-of-concept chatbot that nobody really owned. Enterprises bought a few point solutions, hired a small data team and ticked the “AI” box. It was treated as seasoning, not as the way people actually interacted with tools.
That is now changing in a very simple way: people would rather talk than click. In 2024, European AI startups raised over $13 billion, and roughly a fifth of all VC funding in the region went into AI. A lot of that money is not going into new models; it’s going into products that sit in front of existing systems and let you say, “show me this, fix that, do this next”, in your own language. At the same time, the EU AI Act has entered into force, which means that by 2026 every Member State has to run at least one AI regulatory sandbox for “high-risk” systems in areas like healthcare, employment or critical infrastructure. So we’re not just going to have agents; we’re going to have agents that live under supervision, in hospitals, schools, banks and public services.
When I talk to founders now, they are not pitching generic copilots. They are building “Her-style” agents that can see across tools, understand the local language and regulations, and are safe enough that a CIO, a regulator and a works council can all sleep at night. The interface to work becomes a conversation, not a menu of icons, and Europe’s mix of regulation, domain depth and industrial customers makes it a surprisingly good place to build that.

Where I expect demand to grow in 2026:
  • Sector-specific AI agents in healthcare, education and government, trained on local data and built to comply with the AI Act from day one, including deployment inside sovereign or on-prem environments.
  • “Agent orchestration” layers that let enterprises manage dozens of internal and external agents with observability, access control and clear escalation paths, instead of a zoo of disconnected bots.
  • Telco- and network-native AI services that turn customer support, provisioning and network operations into conversational experiences for both end-users and field teams.
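The “agent orchestration” layer in the list above can be made concrete with a small sketch: a registry that permission-checks every request, logs each call for observability, and escalates to a human whenever an agent reports low confidence. Everything here is illustrative (the `Orchestrator`/`AgentResult` names, the 0.8 threshold, the self-reported confidence score); it is not any particular vendor’s API, just the shape of the control logic such a layer needs.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AgentResult:
    answer: str
    confidence: float  # 0.0-1.0, self-reported by the agent (an assumption)

@dataclass
class Orchestrator:
    """Hypothetical orchestration layer: access control + audit log + escalation."""
    agents: dict[str, Callable[[str], AgentResult]] = field(default_factory=dict)
    permissions: dict[str, set[str]] = field(default_factory=dict)  # user -> allowed agents
    audit_log: list[dict] = field(default_factory=list)             # observability trail
    escalation_threshold: float = 0.8                               # illustrative policy value

    def register(self, name: str, fn: Callable[[str], AgentResult], allowed_users: list[str]) -> None:
        self.agents[name] = fn
        for user in allowed_users:
            self.permissions.setdefault(user, set()).add(name)

    def dispatch(self, user: str, agent_name: str, task: str) -> AgentResult:
        # Access control: deny (and log) calls to agents the user may not reach.
        if agent_name not in self.permissions.get(user, set()):
            self.audit_log.append({"user": user, "agent": agent_name, "outcome": "denied"})
            raise PermissionError(f"{user} may not call {agent_name}")
        result = self.agents[agent_name](task)
        # Escalation path: low-confidence answers go to a human instead of the caller.
        outcome = "answered" if result.confidence >= self.escalation_threshold else "escalated"
        self.audit_log.append({"user": user, "agent": agent_name, "task": task, "outcome": outcome})
        if outcome == "escalated":
            return AgentResult("Escalated to human reviewer", result.confidence)
        return result
```

The point of the sketch is the single chokepoint: every agent call, internal or external, passes through one place where permissions, logging and escalation policy live, which is exactly what turns “a zoo of disconnected bots” into something a CIO can supervise.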
In European AI, I don’t think the long-term value will sit with whoever has the flashiest demo. It will sit with the teams who own the trusted, regulated agents that people actually work through every day, and with the platforms that make those agents understandable and controllable for the people who are on the hook for risk.
The next great marketplaces will be built by agents, not account managers.
Nour Alnuaimi, Partner UK
The last decade of marketplaces came in waves: first the B2C giants (Uber, Airbnb, etc.), then a rush of B2B marketplaces for everything from freight to industrial inputs. Most ran into the same brick wall: expensive sales on both sides, heavy manual onboarding and compliance, long cycles to reach liquidity, and unit economics that stopped making sense as soon as growth slowed. A lot of those models were “right idea, wrong friction cost”.
What changes with AI isn’t just better search; it’s the workflow. The very tasks that made B2B marketplace economics painful (qualification, onboarding, KYC/AML, fraud checks, support, reconciliations) are the ones that AI can now automate. And crucially, the cultural shift has started: large tech companies are already asking employees to use AI as part of their day-to-day. The question is how quickly traditional industries follow, because once that expectation takes hold, automated workflows move from innovation to the new standard.
That’s why I see “AI in Europe” less as a contest over foundation models and more as an opportunity to own the workflow layer. Europe’s real edge sits in enterprise complexity, from financial infrastructure to manufacturing to regulated operations. And the horizontal AI that can orchestrate these systems is still wide open: products that plug into CRMs, ERPs, core banking, payments and data warehouses. In that world, marketplace liquidity can be unlocked far faster and cheaper because AI strips out the friction, and suddenly it’s no longer a brute-force, headcount-driven exercise.

In particular, I expect 2026 to see accelerated demand for:
  • AI “junior teams in a box” for enterprise workflows – SDRs, customer support and basic ops agents that live inside existing systems and handle the repetitive work of sourcing, qualifying, onboarding and servicing customers.
  • AI-native B2B marketplaces where onboarding, KYC/AML and fraud checks are automated by design, using models to verify entities, assess risk and route workflows so that liquidity can build with far less manual effort and cost.
  • Horizontal enterprise AI platforms above sector-specific tools, integrating data, permissions and process orchestration so that manufacturers, wholesalers, financial institutions and others can safely switch on AI agents without rebuilding their entire stack.
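The “automated by design” onboarding in the second bullet boils down to a routing decision: a model-produced risk score sends each applicant straight onto the marketplace, into a review queue, or to priority review, so humans only touch the edge cases. The sketch below uses a toy heuristic as a stand-in for a real KYC/AML model or vendor API; the thresholds, field names and the `SANCTIONED` list are all invented for illustration.

```python
SANCTIONED = {"XX"}  # placeholder jurisdiction list, not real data

def risk_score(applicant: dict) -> float:
    """Toy risk model; a real system would call a KYC/AML provider or an in-house model."""
    score = 0.0
    if not applicant.get("registered_entity"):
        score += 0.4  # unverified legal entity
    if applicant.get("country") in SANCTIONED:
        score += 0.6  # sanctioned jurisdiction
    if applicant.get("expected_monthly_volume", 0) > 100_000:
        score += 0.2  # high volume warrants extra scrutiny
    return min(score, 1.0)

def route(applicant: dict, auto_approve_below: float = 0.3, reject_above: float = 0.7):
    """Route an onboarding application based on its risk score."""
    s = risk_score(applicant)
    if s < auto_approve_below:
        return ("approved", s)            # no human touches this application
    if s > reject_above:
        return ("manual_review_priority", s)
    return ("manual_review", s)
```

The economics argument in the text lives in the first branch: every applicant that clears `auto_approve_below` is a seller onboarded at near-zero marginal cost, which is what lets liquidity build without the headcount previous marketplace waves needed.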
If we’re right, the next generation of big marketplaces won’t be defined by a prettier front-end. They’ll be defined by how much of the workflow and risk stack is quietly handled by AI behind the scenes, and how quickly that lets them reach real liquidity with a fraction of the humans previous waves needed.
For a long time, “AI” in the market meant two extremes: glossy chatbots on one side and dense research models stuck in Jupyter notebooks on the other. The serious science lived inside pharma R&D, climate labs or physics departments and almost never made it into products that normal people or SMEs could actually touch. It was easy to think of “AI & ML” as consumer interfaces or enterprise copilots, with scientific computing sitting off to the side.
That boundary is already blurring. We now have AI systems supporting almost every stage of the research process, from hypothesis generation and literature triage to data analysis and experimental design, and they are being used by working scientists, not just AI labs. You see it in climate modelling, in protein design, in materials optimisation. Companies like Lila are raising hundreds of millions to build “AI science factories”: automated labs guided by domain-specific models, with valuations north of a billion dollars. The interesting part for me is what happens next: those capabilities don’t stay in the lab. They leak out as APIs and tools that sit under climate software, energy-optimisation products, even consumer apps that help you understand how efficient your home, your training plan or your physiology really are.

In other words, by 2026 I expect the most interesting “AI companies” in Europe not to be “AI for X” wrappers, but platforms that expose serious modelling and optimisation as a service layer other products build on.
Where I see demand building:
  • AI-native “research engines” that sit under climate, materials and biotech startups, continuously proposing, prioritising and adapting experiments rather than just analysing the results after the fact.
  • Tools that bring research-grade modelling into everyday decisions, letting households and SMEs optimise energy use, nutrition, training or logistics with the same kind of rigour you’d expect in a lab, without needing a PhD.
  • Governance and provenance layers for scientific AI, so enterprises and regulators can trace which data, models and simulations underpin a given conclusion, design or recommended action.
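The “research engine” in the first bullet is, at its core, a propose-run-update loop rather than after-the-fact analysis. A minimal sketch, assuming a toy objective in place of a real wet-lab or simulation result, and a naive explore/exploit rule standing in for a proper surrogate model (Bayesian optimisation, active learning, etc.):

```python
import random

def toy_objective(x: float) -> float:
    """Stand-in for an experiment's measured outcome; peaks at x = 0.7."""
    return -(x - 0.7) ** 2

def propose_next(tried: dict, lo: float = 0.0, hi: float = 1.0,
                 explore: float = 0.3, rng=random) -> float:
    """Propose the next experiment: sometimes explore, otherwise refine near the best."""
    if not tried or rng.random() < explore:
        return rng.uniform(lo, hi)               # explore the design space
    best_x = max(tried, key=tried.get)           # exploit: perturb the best result so far
    return min(hi, max(lo, best_x + rng.gauss(0, 0.05)))

def run_campaign(n_experiments: int = 50, seed: int = 0) -> float:
    """Closed loop: propose -> run -> record, returning the best design found."""
    rng = random.Random(seed)
    tried = {}
    for _ in range(n_experiments):
        x = propose_next(tried, rng=rng)
        tried[x] = toy_objective(x)              # "run" the experiment
    return max(tried, key=tried.get)
```

The structural point carries over to real systems: because the engine chooses each experiment using everything measured so far, it converges on promising designs with far fewer runs than a fixed, pre-planned campaign, which is what makes the “continuously proposing and adapting” claim more than analysis after the fact.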
I don’t think the next wave of value will come from adding one more chat interface on top of a model. It will come from the platforms that quietly turn deep scientific and optimisation capabilities into reliable building blocks for whole sectors.
If you look at what we’ve called “AI agents” over the last two years, most of it has been UI. Clever wrappers sitting on top of someone else’s stack: they demo well, they live in the browser, and then they get parked in a side channel because nobody trusts them with real money or real risk. From a capital point of view, we’ve basically funded thousands of small experiments at the edge.
That window is closing. In 2024, AI startups pulled in roughly a fifth of all European VC funding, which tells you that investors increasingly see AI as a core infrastructure layer, not an add-on. Regulation is catching up fast: the EU AI Act is live, and by 2026 every Member State has to operate an AI sandbox for “high-risk” systems in areas like finance, employment and critical services. That forces agents out of shadow IT and into governed environments where CROs, CISOs and regulators have an opinion.
In parallel, financial institutions are quietly leaning in. Around three-quarters of UK financial firms already use AI somewhere in their stack and another ~10% plan to within three years. The interesting shift is not another chatbot; it’s agents that sit across trading, risk, procurement or finance systems, read the logs and documents, and start proposing or triggering actions. In security specifically, this looks like an autonomous SOC: agents continuously ingesting telemetry across endpoints, cloud, identity and apps, triaging alerts, auto-resolving low-risk incidents, and escalating only the real edge cases to humans, under hard guardrails. Most of the thin, single-skill agents launched in the last 24 months will either be acquired as features or disappear. The ones that matter will look more like infrastructure: deeply integrated, auditable, with proper guardrails and kill switches.

Where I expect the real demand in 2026:
  • Sector-specific agent platforms in finance, procurement and operations that plug into existing systems but present a single, natural-language interface to the business, instead of ten different dashboards.
  • Autonomous SOC platforms for enterprise security, where agents orchestrate detection, investigation and response across SIEM, EDR, IAM and ticketing tools, with policy-based limits on what can be auto-remediated and full replay of every decision for auditors.
  • Governance and observability layers for agents (logging, policy engines, approval workflows) that a CRO, CISO or regulator can actually sign off on, with a clear view of who did what and why.
  • “Agent gateways” for capital markets and core banking, standardising how enterprises route tasks to external models and internal tools, so teams stop wiring up their own bots to legacy systems and hoping for the best.
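The autonomous-SOC pattern described above, auto-resolve only what policy permits, escalate the rest, and keep a full replay trail, reduces to a small amount of control logic once you strip away the models. The sketch below is illustrative only: the alert fields, the severity scale and the playbook allow-list are invented, not a real SIEM/EDR schema, and the triage rule stands in for a model's judgment.

```python
# Policy allow-list: the only playbooks agents may run without a human.
AUTO_REMEDIABLE = {"phishing_quarantine", "stale_session_revoke"}  # illustrative

def triage(alert: dict) -> str:
    """Toy triage rule standing in for a model: low severity with an approved
    playbook can be auto-resolved; everything else escalates to a human."""
    if alert["severity"] <= 2 and alert["playbook"] in AUTO_REMEDIABLE:
        return "auto_resolve"
    return "escalate"

def process(alerts: list[dict]) -> list[dict]:
    """Triage a batch of alerts, recording every decision for auditor replay."""
    decision_log = []
    for alert in alerts:
        action = triage(alert)
        decision_log.append({
            "alert_id": alert["id"],
            "action": action,
            "severity": alert["severity"],
            "playbook": alert["playbook"],
        })
    return decision_log
```

Two details carry the regulatory weight: auto-remediation is an allow-list (a policy artefact a CISO can review and sign off), not a model's free choice, and the decision log captures every input to every action, which is the “full replay for auditors” the bullet above calls for.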
In AI & ML, I don’t think the lasting value sits with whoever ships the most wrappers. It sits with the teams who own the control plane for agents in regulated environments, and who can make those systems boring, traceable and safe enough to sit directly in the flow of funds and decisions.