DOI: https://doi.org/10.48448/6d8g-5z77
Technical paper
The Case for Translation-Invariant Self-Attention in Transformer-Based Language Models

