Microscopic Imprints of Learned Solutions in Tunable Networks

DOI: http://dx.doi.org/10.1103/f2hb-c9s1
Reference: M. Guzman, F. Martins, M. Stern, and A. J. Liu, Microscopic Imprints of Learned Solutions in Tunable Networks, Phys. Rev. X 15, 031056 (2025)
Group: Learning Machines

In physical networks trained using supervised learning, physical parameters are adjusted to produce desired responses to inputs. An example is an electrical contrastive local learning network of nodes connected by edges that adjust their conductances during training. When an edge conductance changes, it upsets the current balance at every node. In response, physics adjusts the node voltages to minimize the dissipated power. Learning in these systems is therefore a coupled double-optimization process, in which the network descends both a cost landscape in the high-dimensional space of edge conductances and a physical landscape (the power dissipation) in the high-dimensional space of node voltages. Because of this coupling, the physical landscape of a trained network contains information about the learned task. Here, we derive a structure-function relation for trained tunable networks and demonstrate that all the physical information relevant to the trained input-output relation can be captured by a tuning susceptibility, an experimentally measurable quantity. We supplement our theoretical results with simulations to show that the tuning susceptibility is correlated with functional importance and that we can extract physical insight into how the system performs the task from the conductances of highly susceptible edges. Our analysis is general and can be applied directly to mechanical networks, such as networks trained for protein-inspired functions like allostery.
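As a purely illustrative sketch of the two ideas in the abstract, the snippet below trains a small resistor network with a contrastive coupled-learning rule and then estimates each edge's tuning susceptibility by finite differences. It is not the authors' code: the network topology, the node roles (source, ground, output), the target voltage, the clamp strength eta, the learning rate alpha, and the finite-difference estimator are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes = 8
edges = [(i, j) for i in range(n_nodes) for j in range(i + 1, n_nodes)]
k = rng.uniform(0.5, 1.5, size=len(edges))   # edge conductances (trainable)

source, ground, output = 0, 1, 5             # hypothetical node roles
v_in, v_target = 1.0, 0.3                    # input voltage, desired output
eta, alpha = 0.1, 0.05                       # clamp strength, learning rate


def solve_voltages(k, boundary):
    """Physical relaxation: node voltages that minimize the dissipated
    power, i.e. the Kirchhoff solution with `boundary` nodes held fixed."""
    L = np.zeros((n_nodes, n_nodes))         # weighted graph Laplacian
    for (i, j), kij in zip(edges, k):
        L[i, i] += kij
        L[j, j] += kij
        L[i, j] -= kij
        L[j, i] -= kij
    fixed = list(boundary)
    free = [n for n in range(n_nodes) if n not in boundary]
    v = np.zeros(n_nodes)
    v[fixed] = list(boundary.values())
    # Solve L_ff v_f = -L_fb v_b for the unconstrained nodes.
    rhs = -L[np.ix_(free, fixed)] @ v[fixed]
    v[free] = np.linalg.solve(L[np.ix_(free, free)], rhs)
    return v


inputs = {source: v_in, ground: 0.0}
for step in range(500):
    # Free state: physics relaxes with only the inputs imposed.
    v_f = solve_voltages(k, inputs)
    # Clamped state: output nudged a fraction eta toward the target.
    v_out_c = v_f[output] + eta * (v_target - v_f[output])
    v_c = solve_voltages(k, {**inputs, output: v_out_c})
    # Contrastive local update of each conductance: this loop descends the
    # cost landscape, while solve_voltages descends the power landscape.
    for e, (i, j) in enumerate(edges):
        dv_f, dv_c = v_f[i] - v_f[j], v_c[i] - v_c[j]
        k[e] = max(k[e] + (alpha / eta) * (dv_f**2 - dv_c**2), 1e-3)

# Tuning susceptibility, estimated by finite differences: how much the
# trained output voltage shifts per unit perturbation of each conductance.
eps = 1e-4
v0 = solve_voltages(k, inputs)[output]
chi = np.zeros(len(edges))
for e in range(len(edges)):
    k_pert = k.copy()
    k_pert[e] += eps
    chi[e] = (solve_voltages(k_pert, inputs)[output] - v0) / eps

print("trained output:", round(v0, 4), "(target:", v_target, ")")
print("most susceptible edge:", edges[int(np.argmax(np.abs(chi)))])
```

The coupled double optimization is visible directly in the code: every call to solve_voltages performs the physical descent in node-voltage space, while the conductance update performs the learning descent in parameter space. The final loop then probes the trained input-output relation microscopically, edge by edge, in the spirit of the tuning susceptibility described above.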