Computational shortcut for neural networks
Neural networks are learning algorithms that approximate the solution to a task by training with available data. However, it is usually unclear how exactly they accomplish this. Two young Basel physicists have now derived mathematical expressions that allow one to calculate the optimal solution without training a network. Their results not only give insight into how those learning algorithms work, but could also help to detect unknown phase transitions in physical systems in the future.
30 September 2022 | Oliver Morsch
Neural networks are modeled on the way the brain works. Such computer algorithms learn to solve problems through repeated training and can, for example, distinguish objects or process spoken language.
For several years now, physicists have been trying to use neural networks to detect phase transitions as well. Phase transitions are familiar to us from everyday experience, for instance when water freezes to ice, but they also occur in more complex form between different phases of magnetic materials or quantum systems, where they are often difficult to detect.
Julian Arnold and Frank Schäfer, two PhD students in the research group of Prof. Dr. Christoph Bruder at the University of Basel, have now derived mathematical expressions with which such phase transitions can be detected faster than before. They recently published their results in the scientific journal Physical Review X.
Skipping training saves time
A neural network learns by systematically varying parameters in many training rounds in order to make the predictions calculated by the network match the training data fed into it more and more closely. That training data can be the pixels of pictures or, in fact, the results of measurements on a physical system exhibiting phase transitions about which one would like to learn something.
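The training loop described above can be sketched in a few lines. The snippet below is purely illustrative and not the method from the paper: a single-parameter "network" `w` is adjusted over many rounds so that its predictions match noisy training data more and more closely (here, simple gradient descent on a mean squared error; all variable names and values are made up for this example).

```python
import numpy as np

# Illustrative sketch of iterative training (not the paper's method).
# A one-parameter model w is tuned so its predictions w * x fit the data y.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)           # training inputs
y = 3.0 * x + rng.normal(0, 0.05, 100)     # noisy targets; true slope is 3.0

w = 0.0                                    # model parameter, initially untrained
lr = 0.1                                   # learning rate
for _ in range(200):                       # repeated training rounds
    pred = w * x                           # network prediction
    grad = 2 * np.mean((pred - y) * x)     # gradient of the mean squared error
    w -= lr * grad                         # nudge the parameter to reduce the error

print(w)                                   # close to the true slope 3.0
```

After the loop, `w` has converged near the true slope, mirroring how a real network's many parameters are gradually adjusted until its predictions match the training data.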