I am interested in the quantum many-body problem: how is it possible that the same constituent atoms, when joined together in different scenarios, produce such a beautiful and diverse world? This is a consequence of collective emergence, a field of study relevant to diverse disciplines such as condensed matter physics (think of superconductors), high-energy physics (think of quark confinement), quantum information (think of topological quantum computers) and, ultimately, quantum gravity (think of the emergence of space and time).
Our world is beautifully diverse; how does it emerge from the few elementary constituents we know? And is what we observe all that there can be?
Everything around us, including ourselves, is a many-body system made of quantum constituents. However, somewhere along the way we lose our quantum properties and become classical beings, always out of equilibrium.
How does this happen? Can we engineer a large many-body system that preserves its quantum properties?
Besides the intrinsic interest of these questions, their answers come with multiple applications.
If we better understand how collective phenomena such as superconductivity emerge, we can also engineer them artificially and design ever better materials that help us move forward. Ultimately, collective emergence is also responsible for the functional properties of drugs and materials.
I mostly develop new tools, based on quantum information, to address the many-body problem. These tools rest on understanding the correlations in a system and using them to our advantage. Indeed, it turns out that the physical states of relevant quantum many-body systems are far from generic: they have a lot of structure we can exploit to find a cheaper representation and use it to simulate them on computers.
Everything around us is out of equilibrium. Everything constantly changes. In quantum mechanics, the constant change is even more dramatic: isolated systems are never at rest, and they keep evolving. When systems interact, on the other hand, if one of the two is much larger, it can act as a bath for the smaller one, leading to relaxation towards equilibrium. In many cases, we know that such relaxation should happen, but the rate at which it occurs and the final equilibrium state are often unknown.
Furthermore, we constantly discover new mechanisms that many-body systems can exploit to evade thermalization and thus display genuine out-of-equilibrium features.
This fact has exciting applications. Indeed, the phase diagram of many-body systems is dictated by relations that cannot be evaded at equilibrium. For example, there must be a balance between energy and entropy, encoded in the free energy F = E − TS, and that balance depends on the equilibrium temperature of the system.
By exploiting a mechanism that prevents thermalization, one could stabilize phases of matter that would not be available at that temperature.
Think of a superconductor, which needs to be very cold. People have now observed that, by shining laser pulses onto a material, that material can become superconducting at room temperature. This is highly counter-intuitive, since in principle light should heat the system rather than cool it down…
In our group, we try to understand these phenomena and predict new ones.
Gauge theories are ubiquitous. They are specific quantum many-body systems that possess an extra invariance under local transformations. Perhaps the most notable example of a gauge theory is the Standard Model of particle physics, where gauge theories describe three of the fundamental interactions between the elementary particles that make up all matter (namely electromagnetism and the weak and strong interactions). Intriguingly, gauge theories also appear in many effective models of condensed matter physics, such as the description of spin-liquid phases in antiferromagnets and of the pseudo-gap phase in high-temperature superconductors.
Despite the enormous importance of gauge theories, as with any other strongly interacting quantum many-body system, they are extremely difficult to solve. Wilson’s formulation of gauge theories on the lattice (where continuous space-time is replaced with a discrete set of space-time points) provided the first computational tool able to investigate their strong-coupling regime. It also opened the way to enormous efforts in Monte Carlo simulations of lattice gauge theories, which are currently among the main tools to test the predictions of quantum chromodynamics against experimental data.
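To make the lattice Monte Carlo idea concrete, here is a minimal sketch, not Wilson’s full SU(3) formulation but the simplest gauge theory one can put on a lattice: a Z2 gauge theory on a 2D periodic lattice, sampled with the Metropolis algorithm. Links take values ±1, the action is minus beta times the sum of plaquettes, and a single-link flip is accepted with the usual Metropolis probability. All names and parameters here are illustrative choices, not a reference implementation.

```python
import numpy as np

def plaquette(links, x, y, L):
    """Product of the four links around the plaquette based at site (x, y)."""
    return (links[0, x, y] * links[1, (x + 1) % L, y]
            * links[0, x, (y + 1) % L] * links[1, x, y])

def metropolis_sweep(links, beta, L, rng):
    """One Metropolis sweep over all links of a 2D Z2 lattice gauge theory."""
    for mu in range(2):
        for x in range(L):
            for y in range(L):
                if mu == 0:   # x-link: shared by plaquettes at (x, y) and (x, y-1)
                    staple = plaquette(links, x, y, L) + plaquette(links, x, (y - 1) % L, L)
                else:         # y-link: shared by plaquettes at (x, y) and (x-1, y)
                    staple = plaquette(links, x, y, L) + plaquette(links, (x - 1) % L, y, L)
                # flipping the link reverses both plaquettes, so dS = 2 * beta * staple
                dS = 2.0 * beta * staple
                if rng.random() < np.exp(-dS):
                    links[mu, x, y] *= -1
    return links

L, beta = 8, 1.0
rng = np.random.default_rng(1)
links = np.ones((2, L, L), dtype=int)   # "cold" start: all links aligned
for _ in range(200):
    metropolis_sweep(links, beta, L, rng)
avg = np.mean([plaquette(links, x, y, L) for x in range(L) for y in range(L)])
print(avg)   # average plaquette: approaches 1 as beta grows, 0 as beta -> 0
```

The average plaquette measured this way is the standard observable used to monitor such simulations; in realistic QCD computations the same update logic runs over SU(3) matrices in four dimensions.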
Monte Carlo simulations, however, struggle to describe the phases of gauge theories at finite density of nuclear matter, and more generally out-of-equilibrium scenarios. This is why I am intrigued by how much the new methods we are developing, such as tensor networks, quantum simulators, and quantum computers, can help us better understand gauge theories in those challenging scenarios.
We have thought for many years that the main obstacle to simulating many-body quantum systems is the growth of the number of parameters needed to describe them with the number of their constituents. The technical statement of this idea is that the size of the Hilbert space grows exponentially with the number of constituents of the system. However, we have now understood that the Hilbert space is a “convenient illusion”, in the words of F. Verstraete and D. Poulin.
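The gap between the two counts is easy to see numerically. The sketch below, with illustrative function names and an arbitrary bond dimension, compares the exponential dimension of the full Hilbert space of N qubits with the polynomial number of parameters in a matrix product state, one common tensor-network ansatz.

```python
def hilbert_dim(n_sites, d=2):
    """Dimension of the full Hilbert space of n_sites d-level systems: d**n."""
    return d ** n_sites

def mps_params(n_sites, d=2, chi=16):
    """Rough parameter count of a matrix product state with bond dimension chi.

    One d x chi x chi tensor per site (boundary tensors are smaller; ignored here).
    """
    return n_sites * d * chi * chi

for n in (10, 50, 100):
    print(n, hilbert_dim(n), mps_params(n))
```

Already at 100 qubits the full state vector has about 10^30 amplitudes, while the tensor-network description stays at a few tens of thousands of parameters, which is what makes classical simulation of such structured states possible at all.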
What exists are the quantum states emerging from local interactions among constituents after a “computation” that lasts at most polynomial time in the number of constituents. These states are tensor networks, and they are exponentially fewer than the generic states of the Hilbert space.
In a tensor network, the interactions among constituents are represented by the tensors, and the constituents' world lines are the threads of the network.
Generic tensor networks are hard to manipulate; to extract physical information from them, we typically need to run exponentially hard computations. Still, they provide suggestive descriptions of physical systems, and many believe they could be the ether building our space-time. Well, I am not sure this is correct, but I am fascinated by how tensor networks can help us describe complex phenomena in a simple and unified framework. Most of my research is thus formulated and developed in the language of tensor networks.
Simulating large quantum systems on classical computers is hard; why not use quantum hardware to help out?
Many of us believe that this will be one of the first real applications of quantum computers. While these computers are still under active development, people in the labs can exquisitely control ultra-cold atoms, trapped ions, light in cavities, and many other quantum constituents.
Can we use those systems to engineer the interactions that are typical of strongly correlated quantum systems? Can we, for example, build a proton out of neutral atoms that are orders of magnitude larger than quarks and thus move much more slowly, allowing us to optically image them and follow their evolution in real time?
I am trying to figure out how and what we can simulate with these types of systems in the lab.
Maybe one day we will have a simulation of the collisions happening at the LHC, the Large Hadron Collider, on an atom chip such as those people use today to study the emergence of quantum hydrodynamics.