VIDEO DOI: https://doi.org/10.48448/ka7m-8476

poster

EMNLP 2021

November 08, 2021


What Changes Can Large-scale Language Models Bring? Intensive Study on HyperCLOVA: Billions-scale Korean Generative Pretrained Transformers


Downloads: Slides, Paper, Transcript English (automatic)

