Fields relying on the variational learning framework encounter common problems: designing symmetry-invariant parametrizations, theoretically understanding the expressive power of variational models, approximating quantum dynamics, and employing higher-order optimization techniques. Each of these is discussed in the sections below.

Embedding symmetries

While respecting the physical symmetries of quantum systems is of utmost importance, resorting to generic and flexible wave-function parametrizations (such as neural-network quantum states) often comes at the cost of breaking these symmetries.

Recent work has shown that embedding symmetries into wave-function ansätze tends to greatly improve both the accuracy with which one can represent a quantum system and the efficiency of its optimization.

In this pillar, we cover methods to embed symmetries into neural network quantum states and variational quantum circuits, and the close connection between these approaches.

Keywords: Symmetries in Neural Networks; Embedding Symmetries in Variational Quantum Eigensolvers
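As a concrete illustration, group averaging is one simple way to make an otherwise symmetry-breaking ansatz invariant. The sketch below (plain NumPy, with randomly initialized placeholder parameters rather than a trained model) symmetrizes a toy log-amplitude network over the translations of a spin chain:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feed-forward log-amplitude model for a chain of L spins.
# W and b are hypothetical, randomly initialized parameters; any
# permutation-sensitive model could take this role.
L = 6
W = rng.normal(size=(L, 8))
b = rng.normal(size=8)

def log_psi(sigma):
    """Plain (symmetry-breaking) log-amplitude of one spin configuration."""
    return np.sum(np.logaddexp(0.0, sigma @ W + b))

def log_psi_symmetrized(sigma):
    """Translation-invariant log-amplitude obtained by averaging the
    wave-function amplitude over all cyclic shifts (group averaging)."""
    amps = [np.exp(log_psi(np.roll(sigma, k))) for k in range(L)]
    return np.log(np.mean(amps))

sigma = rng.choice([-1.0, 1.0], size=L)
# The symmetrized amplitude is identical for any translated configuration:
assert np.allclose(log_psi_symmetrized(sigma),
                   log_psi_symmetrized(np.roll(sigma, 2)))
```

Averaging amplitudes over all cyclic shifts guarantees translation invariance by construction; the same recipe applies to any finite symmetry group, at the cost of one network evaluation per group element.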

Representational power

Understanding which states can be represented by variational models, such as neural networks or quantum circuits, is vital for theoretically motivating variational methods. Furthermore, it is important to understand how factors such as the width, depth, and type of parametrized building blocks of these architectures affect the accuracy of the solution.

In this light, theoretical mappings between tensor networks and neural networks have been derived. It remains unclear, however, whether finite-width networks can accurately approximate more complex tasks such as quantum dynamics.

Moreover, the trainability of quantum circuits seems to be deeply related to concepts such as the theory of over-parametrization and the neural tangent kernel.

In this pillar, we will discuss how the representational power and trainability of different variational models can be investigated using mathematical tools from the theory of learning.

Keywords: Neural-Network Quantum States; Variational Algorithms on Quantum Hardware; Theoretical connections between Neural Quantum States, Tensor Networks and Quantum Circuits
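To make the notion of a parametrized family of states concrete, the sketch below builds the full state vector of a tiny restricted-Boltzmann-machine (RBM) ansatz, a standard neural-network quantum state whose representational power has been analyzed in the literature. The parameters here are random placeholders, not a trained model:

```python
import numpy as np

# RBM quantum state: n_hidden (the "width") controls its expressivity.
# Parameters a, b, W are illustrative random values, not a trained model.
rng = np.random.default_rng(1)
n_visible, n_hidden = 4, 8
a = rng.normal(scale=0.1, size=n_visible)
b = rng.normal(scale=0.1, size=n_hidden)
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))

def rbm_amplitude(sigma):
    """Unnormalized amplitude psi(sigma) = exp(a.sigma) * prod_j 2cosh(b_j + sigma.W_j);
    tracing out the hidden units of the RBM yields this closed form."""
    return np.exp(a @ sigma) * np.prod(2 * np.cosh(b + sigma @ W))

# Enumerate all 2^n_visible spin configurations to build the full state
# vector -- feasible only for tiny systems, but it makes the parametrized
# state fully explicit.
configs = np.array([[1 - 2 * int(bit) for bit in f"{i:0{n_visible}b}"]
                    for i in range(2 ** n_visible)])
psi = np.array([rbm_amplitude(s) for s in configs])
psi /= np.linalg.norm(psi)  # normalized 16-dimensional state vector
```

Questions of representational power then ask which states psi can (approximately) reach as n_hidden grows, and how this family relates to, e.g., tensor-network states.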


Optimization and dynamics

Even when a variational model can represent the desired states in theory, optimizing the wave function to obtain suitable parameters is not always feasible in practice.

Several algorithms have been proposed to improve upon plain gradient descent. Recent examples include natural gradient descent, Gauss-Newton, and Rayleigh-Gauss-Newton methods, which have also been generalized to quantum algorithms.
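A minimal sketch of such a higher-order update, in the spirit of natural gradient descent (known as stochastic reconfiguration in variational Monte Carlo): assuming per-sample log-derivatives O and an energy gradient g have already been estimated from Monte Carlo samples (both are random placeholders below), the gradient is preconditioned with the inverse of the metric S built from those log-derivatives:

```python
import numpy as np

# Placeholder Monte Carlo estimates: O[s, k] stands in for the
# log-derivative d log psi(s) / d theta_k on sample s, and g for the
# estimated energy gradient. In a real run these come from sampling |psi|^2.
rng = np.random.default_rng(2)
n_samples, n_params = 200, 5
O = rng.normal(size=(n_samples, n_params))
g = rng.normal(size=n_params)

# Metric S: covariance of the log-derivatives over the samples.
O_centered = O - O.mean(axis=0)
S = O_centered.T @ O_centered / n_samples
S += 1e-3 * np.eye(n_params)  # diagonal shift to regularize the solve

# Natural-gradient step: solve S * delta = -eta * g instead of
# taking the plain gradient step -eta * g.
eta = 0.05
delta_theta = -eta * np.linalg.solve(S, g)
```

Compared with plain gradient descent, the solve with S accounts for the curvature of the variational manifold, which typically allows larger stable step sizes; the diagonal shift is a common regularization when S is estimated from few samples.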

In this pillar, higher-order optimization techniques, as well as ways to represent and optimize the dynamics of a quantum system will be discussed.

Keywords: Stochastic Optimization in Quantum Monte Carlo methods; Variational Quantum Dynamics


The topic of variational learning is becoming increasingly relevant in condensed matter physics, quantum computing, quantum chemistry, and computer science. For example, state-of-the-art quantum chemistry methods use neural-network-based variational models to approximate the electronic orbitals in molecules.

The domain attracts attention not only from academic researchers but also from industry, where players such as IBM, Google, Microsoft, and Amazon actively participate in this research.