VIDEO DOI: https://doi.org/10.48448/sb7g-bv94

poster

IJCNLP-AACL 2022

November 23, 2022

Taipei City, Taiwan

NepBERTa: Nepali Language Model Trained in a Large Corpus


