How much pretraining data do language models need to learn syntax?

Poster, EMNLP 2021
November 08, 2021
DOI: https://doi.org/10.48448/v5rn-cz20



Downloads

Slides, Paper, Transcript (English, automatic)

Next from EMNLP 2021

Efficient Mind-Map Generation via Sequence-to-Graph and Reinforced Graph Refinement
Poster, EMNLP 2021
Mengting Hu
November 08, 2021

Similar lecture

VisualSem: a high-quality knowledge graph for vision and language
Workshop paper, EMNLP 2021
Yibo Liu
November 08, 2021