DOI: https://doi.org/10.48448/kx7t-kp77
Technical paper
Dynamic Knowledge Distillation for Pre-trained Language Models
