Neural algorithmic reasoning without intermediate supervision

Neural algorithmic reasoning focuses on building models that can execute classic algorithms. It allows one to combine the advantages of neural networks, such as handling raw and noisy input data, with the theoretical guarantees and strong generalization of algorithms. Assuming we have a neural network capable of solving a classic algorithmic task, we can incorporate it into a more complex pipeline and train it end-to-end. For instance, a neural solver for the shortest path problem can be used as a building block of a routing system that accounts for complex and dynamically changing traffic conditions. In our work [1], we study algorithmic reasoners trained only from input-output pairs, in contrast to current state-of-the-art approaches that also utilize the trajectory of a given algorithm. We propose several architectural modifications and demonstrate how standard contrastive learning techniques can regularize the intermediate computations of the models without appealing to any predefined algorithm trajectory.

Background

The main challenge of neural algorithmic reasoning is the need for out-of-distribution (OOD) generalization. Since algorithms are guaranteed to work correctly on any input from their domain of applicability, the actual difference between train and test data for neural algorithmic reasoners is potentially unlimited. Usually, the model is required to perform well on much larger input sizes than the instances used for training.
A significant step forward in developing neural algorithmic reasoners was made with the proposal of the CLRS Algorithmic Reasoning Benchmark [2]. This benchmark comprises a set of diverse algorithmic problems. CLRS uses graphs as a general way to encode data, since any particular algorithm can be expressed as manipulations over a set of objects and the relations between them. For example, for sorting, we can treat every element of the input list as a separate node and represent the output as pointers between nodes, where each node points to its predecessor in the sorted order; a minimal sketch of this encoding is given below.
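The snippet below is an illustrative sketch of this graph encoding for sorting, assuming a predecessor-pointer output as described above; the function name and details are ours and do not reproduce the exact CLRS-30 specification.

```python
import numpy as np

def sorting_instance_to_graph(values):
    """Encode a list as a set of nodes with predecessor-pointer targets.

    Each list element becomes a node; the target output is, for every node,
    a pointer to its predecessor in the sorted order (the smallest element
    points to itself). Illustrative sketch, not the exact CLRS-30 format.
    """
    values = np.asarray(values, dtype=float)
    n = len(values)
    order = np.argsort(values, kind="stable")  # node indices in sorted order
    pred = np.empty(n, dtype=int)
    pred[order[0]] = order[0]                  # minimum points to itself
    pred[order[1:]] = order[:-1]               # every other node points to the
                                               # node preceding it when sorted
    return values, pred

values, pred = sorting_instance_to_graph([0.7, 0.1, 0.4])
print(pred)  # [2 1 1]: 0.7 -> 0.4, 0.1 (minimum) -> itself, 0.4 -> 0.1
```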
The common way to tackle the OOD generalization problem is to use hints that represent intermediate steps of the algorithm’s execution. With hint-based supervision, the model is forced to imitate the exact trajectory of a given algorithm. Such supervision is considered a fundamental ingredient for progress in neural algorithmic reasoning.
[Figure: Neural algorithmic reasoning with hint-based supervision]
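To make the hint-based setup above more concrete, here is a minimal sketch of the kind of intermediate states a hint sequence can expose, using insertion sort as an example; the snapshot-per-step format is illustrative and does not reproduce the exact CLRS-30 hint specification.

```python
def insertion_sort_with_hints(values):
    """Run insertion sort and record intermediate states as hints.

    Each hint is a snapshot of the array after one outer-loop iteration.
    Hint-based supervision trains the model to predict such intermediate
    states at every step; hint-free training uses only the input and the
    final output. (Illustrative sketch, not the exact CLRS-30 hint format.)
    """
    a = list(values)
    hints = []
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:   # shift larger elements to the right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
        hints.append(list(a))          # one intermediate state = one hint
    return a, hints

output, hints = insertion_sort_with_hints([0.7, 0.1, 0.4])
# hints == [[0.1, 0.7, 0.4], [0.1, 0.4, 0.7]], output == [0.1, 0.4, 0.7]
```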

Our approach

In our paper [1], we argue that there are several disadvantages of using hints and investigate whether it is possible to train a competitive model using only input-output pairs.
First, we highlight some disadvantages of hint usage:
  • Hints need to be carefully designed separately for each task and algorithm. Moreover, various model architectures may benefit from differently designed hints. 
  • Hints force the model to follow a particular trajectory. While this may help the interpretability and generalization of the trained model, following the algorithm’s steps can be suboptimal for a neural network architecture. For instance, it may force the model to learn sequential steps, while parallel processing can be preferable. 
  • A model trained with direct supervision on hints does not necessarily capture the dependencies between the hints and the output of the algorithm. In other words, the model can potentially treat the hint sequence as a separate task (see Section 3 of our paper).
  • Finally, if a model is not restricted to a particular trajectory, it may potentially learn new ways to solve the problem, and by interpreting its computations, we may get new insights into algorithm design.
To design our alternative approach, we make an observation that helps us combine the advantages of learning algorithms only from input-output pairs with an inductive bias that improves the size-generalization capabilities of the models.
We note that the underlying combinatorial structure of some tasks allows us to describe properties of the desired algorithm. Such properties reduce the model's search space while still admitting a wide range of possible algorithms that solve the given problem. For example, any comparison sort follows the same trajectory on any pair of arrays whose elements have the same relative order. Given that, we can construct a pair of different inputs such that every step of any comparison-based sorting algorithm is identical for both; a sketch of such a construction is given below. A less trivial example is the minimum spanning tree problem: a wide range of algorithms have the same execution trajectory on graphs with a fixed topology (graph structure) but different edge weights, as long as the relative order of the corresponding edge weights is the same.
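As a concrete illustration of the sorting case, the sketch below samples a second array with the same relative order of elements, so any comparison sort performs exactly the same sequence of comparisons on both; the construction is ours for illustration and is not claimed to be the exact sampling scheme used in the paper.

```python
import numpy as np

def equivalent_sorting_input(values, rng):
    """Sample a different array with the same relative order of elements.

    Any comparison sort behaves identically on both arrays, so a reasoner
    should treat them the same way. (Illustrative construction only.)
    """
    values = np.asarray(values, dtype=float)
    # Draw fresh values, sort them, and place them according to the ranks of
    # the original array so that the relative order is preserved.
    new_sorted = np.sort(rng.uniform(size=len(values)))
    ranks = np.argsort(np.argsort(values, kind="stable"))
    return new_sorted[ranks]

rng = np.random.default_rng(0)
x = np.array([0.7, 0.1, 0.4])
x_equiv = equivalent_sorting_input(x, rng)
assert (np.argsort(x) == np.argsort(x_equiv)).all()  # same relative order
```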
Forcing a model to act similarly on algorithmically equivalent inputs gives it an additional signal about the problem without imposing any constraints on the computations the model needs to follow. We can achieve this with standard contrastive learning techniques; a minimal sketch is given below.
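The following is a minimal sketch of such a regularizer, assuming an InfoNCE-style contrastive objective applied to the intermediate node embeddings the model produces for two algorithmically equivalent inputs; the function name, temperature, and loss details are illustrative rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def equivalence_contrastive_loss(h_a, h_b, temperature=0.1):
    """InfoNCE-style regularizer over intermediate node embeddings (sketch).

    h_a, h_b: [num_nodes, dim] embeddings produced by the reasoner on two
    algorithmically equivalent inputs (same topology, same relative order of
    values). Node i in h_a should be most similar to node i in h_b, with all
    other nodes acting as negatives. Not the paper's exact loss.
    """
    h_a = F.normalize(h_a, dim=-1)
    h_b = F.normalize(h_b, dim=-1)
    logits = h_a @ h_b.t() / temperature          # [num_nodes, num_nodes]
    targets = torch.arange(h_a.size(0), device=h_a.device)
    # Symmetrized cross-entropy: matching node pairs are the positives.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage with random tensors standing in for the model's hidden states.
h_a = torch.randn(16, 128)
h_b = torch.randn(16, 128)
loss = equivalence_contrastive_loss(h_a, h_b)
```

This term would be added to the usual output loss, so the model is rewarded for producing consistent intermediate computations on equivalent inputs without being told which trajectory to follow.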
We also propose several architectural modifications for models trained without intermediate supervision, aimed at making the comparison with hint-based models clearer and fairer. For example, we propose to reduce the difference between the computational graphs of the models, which usually arises in such comparisons. We refer to our paper for the details.

Results

We compare our approach with several forms of hint usage. First, we consider standard supervision on the hints, denoted as Baseline [3]. Then, No-hint is the model that completely ignores the hint sequence. Finally, Hint-ReLIC [4] utilizes a more advanced way of using the hints. The last column of the table below shows the improvements obtained from the proposed changes and self-supervised regularization. In particular, with this regularization, we achieve the best results for sorting (98.7%), significantly outperforming the existing models. We refer to the paper for additional details and experiments.
| Algorithm | No-hint | Baseline | Hint-ReLIC | No-hint (ours) |
|---|---|---|---|---|
| MST-Kruskal | 92.28 ± 0.82 | 91.18 ± 1.05 | 96.01 ± 0.45 | 93.15 ± 1.01 |
| MST-Prim | 85.33 ± 1.21 | 87.64 ± 1.79 | 87.97 ± 2.94 | 85.20 ± 1.77 |
| Insertion sort | 77.29 ± 7.42 | 75.28 ± 5.62 | 92.70 ± 1.29 | 98.74 ± 1.12 |
| Bubble sort | 81.32 ± 6.50 | 79.87 ± 6.85 | 92.94 ± 1.23 | 98.74 ± 1.12 |
| Quicksort | 71.60 ± 2.22 | 70.53 ± 11.59 | 93.30 ± 1.96 | 98.74 ± 1.12 |
| Heapsort | 68.50 ± 2.81 | 32.12 ± 5.20 | 95.16 ± 1.27 | 98.74 ± 1.12 |
| Binary Search | 93.21 ± 1.10 | 74.60 ± 3.61 | 89.68 ± 2.13 | 87.12 ± 2.23 |
| Minimum | 99.24 ± 0.21 | 97.78 ± 0.63 | 99.37 ± 0.20 | 99.99 ± 0.01 |
In summary, we propose several improvements to learning algorithms without intermediate supervision and demonstrate that the models without hints are competitive with hint-based models or even outperform them. We hope our work will encourage further investigation of neural algorithmic reasoners without intermediate supervision.