PhD defense – Julien Dudas
On October 30, Julien Dudas will defend his thesis entitled ‘Quantum Machine Learning with Bosonic Modes,’ prepared under the supervision of Julie Grollier and carried out at the Albert Fert Laboratory with Danijela Marković.
Quantum machine learning with bosonic modes
Abstract
Quantum systems have the potential to improve on state-of-the-art classical computing techniques, thanks to properties such as superposition and entanglement.
Combining them with machine learning frameworks could enhance many domains of quantum data processing, such as the automatic recognition of quantum states and the control of quantum devices. Despite early promising results with quantum neural networks (QNNs), key challenges remain: (i) implementing architectures with a large number of neurons and trainable parameters on current quantum hardware, and (ii) reliably training such models, where issues like barren plateaus in the loss landscape can suppress gradients and hinder learning.
Indeed, solving complex tasks requires a large number of neurons densely connected by many trainable parameters, which is hard to achieve on current quantum hardware.
In this thesis, we consider the implementation of quantum neural networks with coupled bosonic modes. We obtain a large number of neurons by considering the Fock state probabilities as output features, and trainable parameters by coupling the modes parametrically, through simultaneous three-wave mixing processes such as coherent photon conversion and two-mode squeezing.
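To see why Fock-state probabilities already provide a whole vector of output features, consider the simplest textbook case (a single coherent state, not the coupled-mode setup of the thesis): the Fock distribution of |α⟩ is Poissonian, P(n) = e^{-|α|²}|α|^{2n}/n!, so even one mode exposes many measurable features. A minimal sketch:

```python
import numpy as np
from math import factorial

def fock_probs(alpha, n_max):
    """Fock-state probabilities P(n) = exp(-|a|^2) |a|^(2n) / n!
    of a single coherent state |a> (a Poisson distribution)."""
    nbar = abs(alpha) ** 2  # mean photon number
    return np.array([np.exp(-nbar) * nbar**n / factorial(n)
                     for n in range(n_max + 1)])

# One mode, one amplitude: already an 11-component feature vector.
p = fock_probs(1.0, 10)
```

In the thesis setting, the features come from the joint Fock distribution of coupled modes rather than a single coherent state, but the principle that one measurement basis yields many features is the same.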
In order to train the quantum system, we explore two approaches: (i) quantum reservoir computing, which sidesteps training of the physical parameters, and (ii) direct training of the parametric couplings via gradient descent. Because the intrinsic dynamics of coupled bosonic modes are linear, nonlinearity arises solely through Fock basis measurements. We find that this measurement-induced nonlinearity is remarkably expressive: using only two quantum oscillators, we can extract a sufficient set of nonlinear features to learn tasks that typically require on the order of 20 neurons in a classical network, including sine/square classification and Mackey-Glass time series prediction.
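The defining trait of approach (i) can be shown with a purely classical analogue: only a linear readout on the measured features is trained, while the feature map itself stays fixed. In this hedged sketch, a fixed random nonlinear projection stands in for the quantum reservoir and its measured Fock-state probabilities; the window size, feature count, and ridge parameter are illustrative choices, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the sine/square classification task: decide, from a
# short window of samples, whether the signal is a sine or a square wave.
t = np.linspace(0, 8 * np.pi, 800)
signal = np.concatenate([np.sin(t), np.sign(np.sin(t))])
labels = np.concatenate([np.zeros(len(t)), np.ones(len(t))])

win = 5  # delay-embedding window length
X = np.stack([signal[i:i + win] for i in range(len(signal) - win)])
y = labels[win:]

# Fixed random feature map: the classical analogue of the untrained
# quantum reservoir; these weights are never updated.
W_in = rng.normal(size=(win, 64))
b = rng.normal(size=64)
F = np.tanh(X @ W_in + b)

# Only the linear readout is trained (ridge regression, closed form).
lam = 1e-3
W_out = np.linalg.solve(F.T @ F + lam * np.eye(64), F.T @ y)
acc = np.mean((F @ W_out > 0.5) == y)
```

The point of the sketch is structural: all learning happens in `W_out`, the classical post-processing layer, which is exactly why reservoir computing sidesteps training the physical parameters.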
However, quantum reservoir computing faces two main limitations: because all trainable weights reside solely in the classical post-processing of measurement outcomes, achieving good performance typically requires measuring a large number of features, and its expressivity is constrained by its essentially single-layer architecture. To address this, we train the three-wave mixing parameters directly; for simulation efficiency we restrict the internal dynamics to Gaussian modes and use backpropagation for end-to-end optimization. We show that this approach learns effectively: although the number of trainable parameters scales only linearly with the number of modes, the number of measured features needed is reduced compared to reservoir computing, the expressivity is increased, and with the same number of modes we solve harder tasks than the reservoir baseline.
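The idea of differentiating through Gaussian dynamics can be reduced to a one-parameter toy. In this deliberately minimal sketch (a single-mode squeezer, not the multi-mode three-wave-mixing couplings of the thesis), a quadrature variance serves as the measured Gaussian feature, and the squeezing parameter is trained by gradient descent with an analytic gradient:

```python
import numpy as np

# A single-mode squeezer S(r) maps the vacuum covariance matrix I to
# diag(exp(-2r), exp(2r)). We treat the p-quadrature variance exp(2r)
# as the measured feature and fit r so that it matches a target value.
target_var = 4.0          # desired p-quadrature variance
r, lr = 0.0, 0.01         # trainable squeezing parameter, learning rate

for _ in range(2000):
    var_p = np.exp(2 * r)                        # measured Gaussian feature
    grad = 2 * (var_p - target_var) * 2 * var_p  # d/dr of (var_p - target)^2
    r -= lr * grad                               # gradient-descent update

# r converges to log(target_var) / 2
```

In the thesis, the same differentiate-through-the-dynamics principle is applied by backpropagation across many coupled modes, which is what restricting the simulation to Gaussian dynamics makes tractable.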
This thesis has shown that bosonic quantum neural networks are a promising route to hardware-efficient learning and has provided theoretical tools to understand how they learn and how to improve their performance. Looking ahead, enriching the Hamiltonian with higher-order nonlinearities, such as Kerr, cross-Kerr, or engineered multi-photon processes, would push the system beyond the Gaussian regime, unlocking non-Gaussian resources and a larger effective feature space. As a further perspective, it will be important to probe the susceptibility of these networks to barren plateaus and to identify conditions under which gradient scaling remains favorable.

