
This website hosts the official release of the NepBERTa model, along with the associated downstream fine-tuning code, datasets, and benchmarks introduced in the paper NepBERTa: Nepali Language Model Trained in a Large Corpus, published at the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (AACL-IJCNLP 2022).