Quantum memories, errors and encounters: a dive into Welinq
- Guillaume MATILLA
How a coffee turned into a conversation about the quantum internet
Some conversations flip a topic from “abstract idea” to “all right, this is actually real”.
My meeting with Thomas Nieddu was one of those.
Thomas did his PhD at the Laboratoire Kastler Brossel (LKB, Sorbonne Université / CNRS / ENS) on optical quantum memories based on cold atoms, and then joined Welinq, a startup that wants to provide the “links” of the quantum internet: hardware building blocks that can store, route and synchronize qubits between quantum processors.
What really hooked me is this question:
How do you go from a beautiful, fragile lab demonstration to an industrial-grade building block that has to survive real networks, noise, drift, repeated cycles, and hard security constraints?
And even more: how do you keep errors under control at every level – from the physics of atoms to information theory, via cryptography?
From optical tables to record-breaking memories
To understand what Welinq is doing today, you have to wind the story back a few years, on the LKB side.
Around 2017–2018, Julien Laurat’s group demonstrated a quantum memory for polarization qubits using an ensemble of cesium atoms cooled by lasers. The implementation relied on electromagnetically induced transparency (EIT) in an elongated atomic cloud, with a so-called dual-rail architecture: the two polarizations are stored in two separate spatial modes.
On paper, the promise is simple: take a flying qubit (a photon), make it interact with this atomic cloud so that its state is written into the medium, and later read it out on demand.
In practice, they achieved two headline numbers that circulated a lot in the community:
a conditional fidelity above 99% between the stored-and-retrieved state and the target state,
and a storage–retrieval efficiency of about 68%.
In plain language: whenever a photon is successfully re-emitted by the memory, its quantum state is almost identical to what went in, and overall more photons come back out than are lost.
Experimentally, this was already a serious milestone: these memories became realistic candidates for playing the role of nodes in future quantum networks.
But at that point the context was still very “fundamental physics”: a large optical setup, lots of stabilization and tweaking, and almost no systems-engineering constraints yet.
When the memory steps into cryptography
A qualitative shift happened when the memory stopped being tested only with “demo qubits” and was integrated into a full cryptographic protocol.
That’s what happens in “Quantum cryptography integrating an optical quantum memory” by Hadriel Mamann, Thomas Nieddu, Julien Laurat and co-authors, published in 2025 in Science Advances.
The idea is bold:
they implement a version of Wiesner’s quantum money protocol,
but with an intermediate storage step in a quantum memory,
and they work in the regime where the protocol is mathematically secure, not just “nice on a plot”.
In Wiesner-style quantum money, a “banknote” is encoded as a sequence of qubits prepared in random bases. A counterfeiter who tries to copy them is forced to measure in the wrong basis part of the time, which introduces detectable errors.
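To make that intuition concrete, here is a toy Monte-Carlo sketch (my own illustration for this post, not code from the paper): a counterfeiter who measures each qubit in a randomly guessed basis and re-prepares what they saw ends up disagreeing with the bank's verification about 25% of the time.

```python
# Toy Monte-Carlo of the intuition behind Wiesner's scheme (illustrative only).
import random

def counterfeit_error_rate(n_qubits=100_000):
    errors = 0
    for _ in range(n_qubits):
        true_basis = random.choice("ZX")   # basis chosen by the bank
        bit = random.randint(0, 1)         # secret bit encoded on the note
        guess = random.choice("ZX")        # counterfeiter's guessed basis
        if guess == true_basis:
            measured_by_bank = bit         # faithful copy, no error
        else:
            # Wrong basis: the copy is re-prepared in the wrong basis, so the
            # bank's verification measurement gives a random outcome.
            measured_by_bank = random.randint(0, 1)
        errors += (measured_by_bank != bit)
    return errors / n_qubits

print(counterfeit_error_rate())  # converges to ~0.25
```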
What Mamann / Nieddu / Laurat’s team does is:
encode the “banknotes” or “quantum cards” in the polarization of weak coherent states (very faint light pulses),
store those photonic qubits in a high-efficiency cold-atom memory,
and then read them out and verify them, while obeying the constraints of the quantum money protocol (maximum acceptable error rate, no-cloning conditions, and so on).
The paper is very explicit on a key point: the cryptographic protocol imposes tight constraints on the memory:
a sufficiently high storage–retrieval efficiency,
and a sufficiently low added noise.
Outside those bounds, you leave the region where the mathematical security proofs guarantee that the “money” cannot be forged. Inside them, the protocol stays rigorously secure.
So this is no longer “just” a memory that works well in isolation: it is a memory that passes a cryptographic crash test.
These ideas also show up in conference talks, for instance at Quantum 2.0 in the presentation “Combining a quantum cryptographic protocol with a highly efficient cold-atom-based quantum memory”, by Félix Garreau de Loubresse, Hadriel Mamann, Thomas Nieddu and others.
You can see a clear trajectory emerging:
record-class quantum memories (Vernaz-Gris et al.),
then a memory inserted into an actual quantum protocol,
and then a systematic discussion of error thresholds and security conditions.
That trajectory is exactly what makes it possible for a company like Welinq to appear a few years later.
Welinq and QDrive: the memory leaves the lab
Welinq explicitly positions itself as a provider of “quantum links” – the network pieces that will connect quantum processors and make large-scale quantum networks possible.
Their flagship product right now is called QDrive.
You can think of it as an industrialized version of a cold-atom memory:
storage–retrieval efficiencies in the 90–95% range,
storage times up to about 200 microseconds,
a system integrated into a standard 19-inch rack,
and operation close to room temperature, thanks to neutral atoms trapped by light rather than heavy cryogenic platforms.
In public communications, Thomas Nieddu mentions 95% efficiency and 200 microseconds of memory time achieved reliably on the product.
What strikes me is not just the “95%” number, but the context:
the same family of techniques used in the academic papers (cold atoms, light–matter interfaces) reappears in a compact, reproducible, calibratable system,
we are no longer looking at a fragile optical setup that needs a PhD student standing next to it; we’re talking about a network device you could, in principle, install in a data center.
And that naturally triggers a question from an information-theory perspective:
If we now have a memory that, in steady-state operation, can reach 90–95% efficiency, what more can we do so that errors stop being the bottleneck, even for deep protocols and large networks?
To answer that, we need a minimal conceptual and mathematical toolkit.
A minimal conceptual toolkit for talking about “errors” and “memory”
The qubit and the quantum state
A pure qubit is described mathematically as a vector in a two-dimensional Hilbert space. In the textbook picture you pick two basis states, often called “zero” and “one”, and any pure state can be written as a superposition of these two with complex amplitudes. A normalization condition ensures that the total probability is one.
In the experiments we’ve been discussing, that qubit is usually encoded in the polarization of a photon (horizontal/vertical, or superpositions of those two).
As soon as you take noise and statistical mixtures into account, the right mathematical object is no longer just a vector, but a density matrix: a positive, trace-one matrix that describes both pure states and statistical mixtures. I’ll refer to it simply as “the state” and denote it by ρ when needed.
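For readers who want the symbols behind those two paragraphs, here is the standard textbook notation (nothing in it is specific to the LKB or Welinq devices):

```latex
% Pure qubit: superposition of two basis states with complex amplitudes
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle ,
\qquad |\alpha|^2 + |\beta|^2 = 1 .

% General (possibly noisy) state: a density matrix, i.e. a positive,
% trace-one mixture of pure-state projectors
\rho = \sum_i p_i \, |\psi_i\rangle\langle\psi_i| ,
\qquad p_i \ge 0 , \quad \sum_i p_i = 1 , \quad \operatorname{Tr}\rho = 1 .
```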
The memory as a quantum channel
An ideal memory would be the identity map: whatever state you put in, you get the exact same state out.
In the real world, a quantum memory behaves like a noisy quantum channel: it maps an input state to an output state by adding loss and noise. Depending on the physical implementation, that channel may look like:
a loss channel, where some photons simply never reappear,
a dephasing channel, where you lose phase information between basis states,
or a depolarizing channel, which pushes the state toward a more mixed, less informative one.
An important figure of merit, highlighted for instance in Vernaz-Gris et al.’s work, is the efficiency of the memory: the ratio between the number of photons detected at the output and the number of photons sent in.
In their “record” memory it is around 68%. In more recent products like QDrive, that number climbs to the 90–95% range, which radically changes what is possible in more complex protocols.
So you can think of the memory as a device that has two jobs:
preserve the structure of the quantum state (phases, superpositions, entanglement),
and not throw away too many carriers in the process.
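As a rough illustration of “loss plus noise”, here is a minimal Python sketch, a toy model written for this post rather than Welinq’s or LKB’s actual noise model; the values of eta and p_depol are purely illustrative.

```python
# Toy model of a quantum memory as a channel: photon loss with probability
# (1 - eta), plus a small depolarizing error on the photons that do come back.
import numpy as np

I2 = np.eye(2, dtype=complex)

def memory_channel(rho, eta=0.68, p_depol=0.01):
    """Return (retrieval probability, conditional state of the retrieved photon)."""
    rho_out = (1 - p_depol) * rho + p_depol * I2 / 2   # depolarizing noise
    return eta, rho_out                                # eta = storage-retrieval efficiency

# Store the |+> polarization state (equal superposition of H and V)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho_in = np.outer(plus, plus.conj())

eta, rho_out = memory_channel(rho_in)
print("retrieval probability:", eta)
print("conditional output state:\n", rho_out.round(3))
```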
Fidelity: “how close” is the output state?
To quantify “how close” the output state is to the ideal input state, quantum information theory uses a quantity called fidelity.
In the simplest case, when both states are pure, fidelity is just the squared overlap between the two state vectors. It ranges from 0 (orthogonal states, completely distinguishable) to 1 (identical states).
Intuitively:
fidelity close to 1 means the memory has respected the input state very well;
fidelity significantly below 1 means that the memory, together with noise, has distorted the state in a way that you can in principle detect.
In Vernaz-Gris et al.’s experiments, the conditional fidelity (i.e. conditioned on a successful retrieval event) is above 0.99. In quantum money and more general cryptographic protocols, there are explicit thresholds on fidelity: above some value, the security proofs still hold; below it, you leak too much information or open a door to forgery.
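As a quick numerical sanity check (again a toy example, not the analysis from the papers), the squared-overlap fidelity of a slightly depolarized |+> state against the ideal one can be computed directly:

```python
# Squared-overlap fidelity F = <psi| rho |psi> against a pure target state.
import numpy as np

def fidelity_pure_target(psi, rho):
    """Fidelity of a (possibly mixed) state rho against the pure target |psi>."""
    return float(np.real(psi.conj() @ rho @ psi))

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)   # target |+> polarization
rho_target = np.outer(plus, plus.conj())

p = 0.01                                              # 1% depolarizing noise
rho_out = (1 - p) * rho_target + p * np.eye(2) / 2

print(fidelity_pure_target(plus, rho_out))            # 0.995
```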
Entropy: measuring uncertainty and noise
To capture the “uncertainty” or disorder introduced by noise, we use the von Neumann entropy of a state ρ, S(ρ) = −Tr(ρ log₂ ρ), which plays the same role as Shannon entropy for classical distributions.
If ρ describes a pure state, the entropy is zero: there is no uncertainty. If ρ is maximally mixed (for a qubit, a perfect "heads or tails" toss), the entropy is 1 bit.
Every time a qubit goes through a noisy memory, the entropy tends to increase: some of the coherent information turns into classical mixture. In a cryptographic protocol, the question is not only whether the memory “works well”, but whether the entropy increase induced by the memory stays within the bounds that the security proof can tolerate.
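To see the numbers behind the two limiting cases above (a toy calculation, not data from the experiments), one can compute the von Neumann entropy of a qubit as its depolarization increases:

```python
# Von Neumann entropy S(rho) = -Tr(rho log2 rho), in bits.
import numpy as np

def von_neumann_entropy(rho):
    """Entropy in bits; zero eigenvalues are ignored (0 * log 0 = 0)."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho_pure = np.outer(plus, plus.conj())

for p in (0.0, 0.01, 0.1, 1.0):                       # depolarizing strength
    rho = (1 - p) * rho_pure + p * np.eye(2) / 2
    print(f"p = {p:4.2f}  ->  S = {von_neumann_entropy(rho):.4f} bits")
```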
These few ingredients – qubits, channels, efficiency, fidelity, entropy – are already enough to frame the problem cleanly: we know how to describe what the memory does to the state, how much information it lets through, and how much disorder it injects.
Where the real “error wall” sits
Historically, the work of Laurat, Vernaz-Gris, Nieddu and colleagues showed two key things.
First, that one can build cold-atom memories that are extremely good on their own, with conditional fidelity above 99% and already strong efficiency. Second, that such a memory can be placed inside a real cryptographic protocol (quantum money) while still satisfying the theoretical security conditions.
Today, companies like Welinq are pushing the same ideas toward deployable systems like QDrive, which deliver efficiencies in the 90–95% range and storage times of order hundreds of microseconds in a rack-mountable format. The technology has clearly left the realm of single-shot academic demonstrations.
But if you look at the whole picture through the lens of information theory, one point remains critical:
a single stage that runs at 90–95% efficiency is excellent,
a full network that chains together dozens of stages (preparation, distribution, storage, synchronization, measurement, post-processing) will see errors accumulate,
even low per-stage noise eventually eats up the security margin once you increase protocol depth or network size.
That is the real error wall: as long as we treat the memory as a high-performance buffer and nothing more, we underestimate how errors propagate at the system level.
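A back-of-the-envelope calculation makes the point: if each stage succeeds with probability eta, a chain of n independent stages succeeds with probability eta^n. The numbers below are purely illustrative, not a model of any specific network:

```python
# How per-stage efficiency compounds with protocol depth (illustrative only).
for eta in (0.68, 0.90, 0.95, 0.99):
    chain = {n: round(eta ** n, 3) for n in (1, 5, 10, 20)}
    print(f"eta = {eta:.2f} per stage -> end-to-end:", chain)
```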
The real question now becomes:
Now that we know how to build a very good memory, how do we make it robust at the network level, by explicitly tackling errors with the right theoretical tools (channels, entropy, capacity, error-correcting codes) and the right architectures (repeaters, built-in correction, adaptive protocols)?
That is exactly what I want to dig into in the second part: set up the information-theoretic framework properly, map out the correction strategies that are realistic in and around the memory, and think about what it would take to move from “90–95% in the lab” to “99%+” in complex quantum networks.