VIDEO DOI: https://doi.org/10.48448/xwwj-gc90

technical paper

ACL-IJCNLP 2021

02 August 2021

Thailand

One Teacher is Enough? Pre-trained Language Model Distillation from Multiple Teachers


Downloads: Slides, Paper, Transcript English (automatic)

Next from ACL-IJCNLP 2021

GhostBERT: Generate More Features with Cheap Operations for BERT
technical paper

ACL-IJCNLP 2021

Zhiqi Huang

02 August 2021

Similar lecture

One Teacher is Enough? Pre-trained Language Model Distillation from Multiple Teachers
workshop paper

ACL-IJCNLP 2021

Chuhan Wu and 2 other authors

02 August 2021