---
license: cc-by-nc-sa-4.0
language:
- 'no'
---

Generative Pretrained Transformer with 3 billion parameters for Norwegian. It is part of NorGLM, a suite of pretrained Norwegian Generative Language Models. The model is based on the GPT-2 architecture.

NorGLM may be used for non-commercial purposes only.

## Datasets

All models in NorGLM are trained on a 200 GB dataset of nearly 25 billion tokens, including Norwegian, Danish, Swedish, German and English text.

## Run the Model

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "NorGLM/NorGPT-3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map='auto',
    torch_dtype=torch.bfloat16
)

text = "Tom ønsket å gå på barene med venner"
# Place the inputs on the same device as the model before generating
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Citation Information

If you find our work helpful, please cite our paper:

```
@article{liu2023nlebench+,
  title={NLEBench+ NorGLM: A Comprehensive Empirical Analysis and Benchmark Dataset for Generative Language Models in Norwegian},
  author={Liu, Peng and Zhang, Lemei and Farup, Terje Nissen and Lauvrak, Even W. and Ingvaldsen, Jon Espen and Eide, Simen and Gulla, Jon Atle and Yang, Zhirong},
  journal={arXiv preprint arXiv:2312.01314},
  year={2023}
}
```