What to Learn from Tensor Networks, and What Not
Simulating quantum circuits on classical computers is a notoriously hard, yet increasingly important task for the development and testing of quantum algorithms. To alleviate this inherent complexity, efficient data structures and methods such as tensor networks and decision diagrams have been proposed. However, their efficiency heavily depends on the order in which the individual computations are performed. For tensor networks, this order is defined by so-called contraction plans, and a plethora of methods has been developed to determine suitable plans. Simulation based on decision diagrams, on the other hand, has mostly been conducted in a straightforward, i.e., sequential, fashion thus far. In this work, we study the importance of the path that is chosen when simulating quantum circuits using decision diagrams and show, both conceptually and experimentally, that choosing the right simulation path can make a vast difference in the efficiency of classical simulations using decision diagrams. We propose an open-source framework (available at github.com/cda-tum/ddsim) that not only allows investigating dedicated simulation paths, but also re-using existing findings, e.g., those obtained from determining contraction plans for tensor networks. Experimental evaluations show that translating strategies from the domain of tensor networks may yield speedups of several factors compared to the state of the art. Furthermore, we design a dedicated simulation path heuristic that improves the performance even further, frequently yielding speedups of several orders of magnitude. Finally, we provide an extensive discussion of what can be learned from tensor networks and what cannot.
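To make the central observation concrete, here is a minimal sketch (not taken from the paper's framework; the shapes and cost model are illustrative assumptions) of why the order of computations matters. Contracting even a trivial "network" of three matrices in two different orders yields identical results but vastly different arithmetic cost, which is exactly the effect that contraction plans for tensor networks, and simulation paths for decision diagrams, aim to exploit.

```python
import numpy as np

# Hypothetical example: a chain of three matrices A (2x1000),
# B (1000x1000), C (1000x1000). Matrix multiplication is associative,
# so both contraction orders give the same result...
rng = np.random.default_rng(0)
A = rng.random((2, 1000))
B = rng.random((1000, 1000))
C = rng.random((1000, 1000))

left_first = (A @ B) @ C   # contract A with B first
right_first = A @ (B @ C)  # contract B with C first

# ...but the cost differs drastically. Estimating the cost of an
# (m x k) @ (k x n) product as m*k*n scalar multiplications:
cost_left = 2 * 1000 * 1000 + 2 * 1000 * 1000      # ~4 million
cost_right = 1000 * 1000 * 1000 + 2 * 1000 * 1000  # ~1 billion

assert np.allclose(left_first, right_first)
print(f"left-first cost:  {cost_left:>13,}")
print(f"right-first cost: {cost_right:>13,}")
```

The same principle applies when the objects being combined are the gate matrices and state vectors of a quantum circuit, except that for decision diagrams the cost of each intermediate also depends on how compactly it can be represented, which is why a good simulation path can differ from a good tensor-network contraction plan.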