Simulated configurations of flexible knotted rings confined inside a spherical cavity are fed into long short-term memory neural networks (LSTM NNs) designed to distinguish knot types.
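For concreteness, here is a minimal sketch of such a sequence classifier, assuming a PyTorch-style LSTM that consumes one bead per time step; the hidden size and number of knot classes are illustrative, not those of the paper.

```python
# A minimal sketch, not the authors' architecture: a PyTorch LSTM classifier
# that reads one bead per time step and maps a ring to knot-type logits.
# The hidden size and the number of knot classes below are illustrative.
import torch
import torch.nn as nn

class KnotLSTM(nn.Module):
    def __init__(self, n_knot_types: int = 5, hidden: int = 64):
        super().__init__()
        # Each time step is one bead: (x, y, z) coordinates or a unit bond vector.
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_knot_types)

    def forward(self, rings: torch.Tensor) -> torch.Tensor:
        # rings: (batch, n_beads, 3); the final hidden state summarizes the sequence.
        _, (h_n, _) = self.lstm(rings)
        return self.head(h_n[-1])  # (batch, n_knot_types) logits

model = KnotLSTM()
batch = torch.randn(8, 100, 3)     # 8 dummy rings of 100 beads each
print(model(batch).shape)          # torch.Size([8, 5])
```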
The results show that the networks perform well in knot recognition even when tested on flexible, strongly confined, and therefore highly geometrically entangled rings. In agreement with the expectation that knots delocalize in dense polymers, a suitable coarse-graining of the configurations boosts the performance of the LSTMs when knot identification is applied to rings much longer than those used for training.
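One plausible reading of this step is sketched below, under the assumption that the coarse-graining block-averages consecutive beads along the contour; the paper's exact procedure may differ.

```python
# A minimal sketch, assuming coarse-graining amounts to block-averaging
# consecutive beads along the contour, so a long ring is reduced to the
# bead count used during training; the paper's exact procedure may differ.
import numpy as np

def coarse_grain(ring: np.ndarray, n_out: int) -> np.ndarray:
    """Reduce a ring of shape (n_beads, 3) to (n_out, 3) by averaging
    consecutive blocks of beads along the contour."""
    n_beads = ring.shape[0]
    assert n_out <= n_beads, "cannot coarse-grain to more beads than given"
    edges = np.linspace(0, n_beads, n_out + 1).astype(int)  # block boundaries
    return np.stack([ring[a:b].mean(axis=0) for a, b in zip(edges[:-1], edges[1:])])

long_ring = np.random.randn(500, 3)          # dummy long configuration
print(coarse_grain(long_ring, 100).shape)    # (100, 3)
```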
Notably, when the NNs fail, the wrong prediction usually belongs to the topological
family of the correct one. The fact that the LSTMs can grasp some
basic properties of the ring’s topology is corroborated by
a test on knot types not used for training. We also show that the
choice of the NN architecture is important: simpler convolutional
NNs do not perform as well. Finally, all results depend on the features used as input. Surprisingly, the coordinates or the bond directions of the configurations give the NNs the best accuracy, even though these features are not invariant under rotations (whereas the knot type is). We also tested rotationally invariant features based on distances, angles, and dihedral angles.
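As an illustration, the sketch below (not the paper's code) computes such rotationally invariant features from a ring's coordinates, closing the loop with periodic indexing.

```python
# A minimal sketch, not the paper's code: rotationally invariant features
# (bond lengths, bond angles, dihedral angles) computed from a ring's 3D
# coordinates, with periodic indexing since the ring is closed.
import numpy as np

def invariant_features(ring: np.ndarray) -> np.ndarray:
    """ring: (n, 3) bead coordinates; returns (n, 3) rows of
    (bond length, bond angle, dihedral angle)."""
    bonds = np.roll(ring, -1, axis=0) - ring            # b_i = r_{i+1} - r_i
    lengths = np.linalg.norm(bonds, axis=1)
    u = bonds / lengths[:, None]                        # unit bond directions
    u1, u2 = np.roll(u, -1, axis=0), np.roll(u, -2, axis=0)
    # angle between consecutive bonds (clip guards the arccos domain)
    angles = np.arccos(np.clip((u * u1).sum(axis=1), -1.0, 1.0))
    # dihedral between the planes spanned by (b_i, b_{i+1}) and (b_{i+1}, b_{i+2})
    n1, n2 = np.cross(u, u1), np.cross(u1, u2)
    x = (n1 * n2).sum(axis=1)
    y = (np.cross(n1, n2) * u1).sum(axis=1)
    dihedrals = np.arctan2(y, x)
    return np.column_stack([lengths, angles, dihedrals])

print(invariant_features(np.random.randn(100, 3)).shape)  # (100, 3)
```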