VIDEO DOI: https://doi.org/10.48448/1tj6-w052

poster

NAACL 2022

July 12, 2022

Seattle, United States

Does Pre-training Induce Systematic Inference? How Masked Language Models Acquire Commonsense Knowledge


Downloads

Slides, Paper, Transcript (English, automatic)

Next from NAACL 2022

Improving negation detection with negation-focused pre-training
poster

NAACL 2022

Thinh Truong and 3 other authors

12 July 2022

Similar lecture

RiddleSense: Reasoning about Riddle Questions Featuring Linguistic Creativity and Commonsense Knowledge
technical paper

IJCNLP-AACL 2021

Bill Yuchen Lin and 4 other authors

03 August 2021