DOI: https://doi.org/10.48448/qtkd-7x63

technical paper

ACL-IJCNLP 2021

August 02, 2021

Thailand

H-Transformer-1D: Fast One-Dimensional Hierarchical Attention for Sequences


Downloads

Slides, Paper, Transcript (English, automatic)

Next from ACL-IJCNLP 2021

CogAlign: Learning to Align Textual Neural Representations to Cognitive Language Processing Signals

technical paper

ACL-IJCNLP 2021

Yuqi Ren and Deyi Xiong

02 August 2021

Similar lecture

On the Distribution, Sparsity, and Inference-time Quantization of Attention Values in Transformers

technical paper

ACL-IJCNLP 2021

Tianchu Ji and 5 other authors

02 August 2021