We trained a neural model to linearly combine sentence embeddings into latent prefixes that improve BART's performance on a sequence-to-sequence task. I think reducing complex model behavior to simple linear-algebra operations like this is a promising direction for interpretability.
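As a rough sketch of the mechanism (not our exact training setup), the snippet below mixes a bag of sentence embeddings with learned softmax weights, projects the mixture into BART's embedding space, and prepends it to the token embeddings via Hugging Face's `inputs_embeds` argument. The sentence encoder is stubbed out with random tensors, and the prefix length and dimensions are placeholder assumptions.

```python
import torch
import torch.nn as nn
from transformers import BartTokenizer, BartForConditionalGeneration


class LinearPrefixMixer(nn.Module):
    """Learns, per prefix slot, a linear combination of K sentence
    embeddings, then projects the mixture into BART's embedding space."""

    def __init__(self, n_sentences, sent_dim, n_prefix, model_dim):
        super().__init__()
        # One learned weight vector over the K sentences per prefix position.
        self.mix_logits = nn.Parameter(torch.zeros(n_prefix, n_sentences))
        self.proj = nn.Linear(sent_dim, model_dim)

    def forward(self, sent_embs):
        # sent_embs: (batch, K, sent_dim)
        weights = self.mix_logits.softmax(dim=-1)            # (n_prefix, K)
        mixed = torch.einsum("pk,bkd->bpd", weights, sent_embs)
        return self.proj(mixed)                              # (batch, n_prefix, model_dim)


tok = BartTokenizer.from_pretrained("facebook/bart-base")
bart = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

batch = tok(["a source sentence for the task"], return_tensors="pt")
tok_embs = bart.get_input_embeddings()(batch["input_ids"])   # (1, T, 768)

mixer = LinearPrefixMixer(n_sentences=8, sent_dim=384, n_prefix=4, model_dim=768)
sent_embs = torch.randn(1, 8, 384)   # stand-in for real sentence embeddings
prefix = mixer(sent_embs)            # (1, 4, 768)

# Prepend the latent prefix to the token embeddings and extend the mask.
inputs_embeds = torch.cat([prefix, tok_embs], dim=1)
attention_mask = torch.cat(
    [torch.ones(1, prefix.size(1), dtype=batch["attention_mask"].dtype),
     batch["attention_mask"]],
    dim=1,
)

# Dummy target: here we just reconstruct the input; a real setup would
# supply task-specific labels and backprop into the mixer's parameters.
out = bart(inputs_embeds=inputs_embeds, attention_mask=attention_mask,
           labels=batch["input_ids"])
print(out.loss)
```

Because the prefix is just a learned linear function of the sentence embeddings, the mixing weights themselves can be inspected directly, which is what makes the setup attractive for interpretability.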
State-of-the-art convolutional models already reach >99% classification accuracy on this task. We wanted to see how training dynamics change when Fourier features are added to the input data, so we created this visualization.
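For reference, here is a minimal sketch of one common construction (random Fourier features in the style of Tancik et al., 2020), appended to the image as extra input channels before it reaches the convolutional model. Whether this matches the exact features used in the visualization is an assumption, and `n_features` and `scale` are illustrative hyperparameters.

```python
import math
import torch


def make_fourier_mapper(in_channels, n_features=16, scale=10.0, seed=0):
    """Returns a function that appends random Fourier features of the
    pixel values as extra input channels."""
    g = torch.Generator().manual_seed(seed)
    # Random projection B ~ N(0, scale^2); sampled once, reused every batch.
    B = torch.randn(n_features, in_channels, generator=g) * scale

    def mapper(x):
        # x: (batch, C, H, W), values roughly in [0, 1]
        proj = 2 * math.pi * torch.einsum("fc,bchw->bfhw", B, x)
        # Output: (batch, C + 2 * n_features, H, W)
        return torch.cat([x, torch.sin(proj), torch.cos(proj)], dim=1)

    return mapper


fourier = make_fourier_mapper(in_channels=1)
imgs = torch.rand(8, 1, 28, 28)      # e.g. a batch of MNIST-sized images
feats = fourier(imgs)
print(feats.shape)                   # torch.Size([8, 33, 28, 28])
```

The `scale` parameter controls the frequency bandwidth of the sinusoids, which is the main knob that changes what the network can fit early in training.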