VIDEO DOI: https://doi.org/10.48448/jcq7-rf77

technical paper

ACL-IJCNLP 2021

August 02, 2021

Thailand

BERT is to NLP what AlexNet is to CV: Can Pre-Trained Language Models Identify Analogies?


Downloads

Slides · Paper · Transcript (English, automatic)

Next from ACL-IJCNLP 2021

Superbizarre Is Not Superb: Derivational Morphology Improves BERT's Interpretation of Complex Words
technical paper

ACL-IJCNLP 2021

Valentin Hofmann, Janet Pierrehumbert, Hinrich Schütze

02 August 2021

Similar lecture

Distilling Relation Embeddings from Pre-trained Language Models
workshop paper

ACL 2022

Asahi Ushio, Jose Camacho-Collados, Steven Schockaert

27 May 2022