---
license: apache-2.0
---

# NEO

[πŸ€—Neo-Models](https://huggingface.co/collections/m-a-p/neo-models-66395a5c9662bb58d5d70f04) | [πŸ€—Neo-Datasets](https://huggingface.co/collections/m-a-p/neo-datasets-66395dc55cbebc0a7767bbd5) | [Github](https://github.com/multimodal-art-projection/MAP-NEO)

Neo is a fully open-source large language model: the code, all model weights, the datasets used for training, and the training details are publicly released.

## Model

| Model | Description | Download |
|---|---|---|
| neo_7b | The base neo_7b model. | β€’ [πŸ€— Hugging Face](https://huggingface.co/m-a-p/neo_7b) |
| neo_7b_sft_v0.1 | The supervised fine-tuned version of the neo_7b model. | β€’ [πŸ€— Hugging Face](https://huggingface.co/m-a-p/neo_7b_sft_v0.1) |
| neo_7b_instruct_v0.1 | The instruction-tuned version of the neo_7b model. | β€’ [πŸ€— Hugging Face](https://huggingface.co/m-a-p/neo_7b_instruct_v0.1) |
| neo_7b_intermediate | Intermediate checkpoints from normal pre-training; a total of 3.7T tokens were seen during this phase. | β€’ [πŸ€— Hugging Face](https://huggingface.co/m-a-p/neo_7b_intermediate) |
| neo_7b_decay | Intermediate checkpoints from the decay phase; a total of 720B tokens were seen during this phase. | β€’ [πŸ€— Hugging Face](https://huggingface.co/m-a-p/neo_7b_decay) |
| neo_scalinglaw_980M | Checkpoints from the scaling-law experiments (980M). | β€’ [πŸ€— Hugging Face](https://huggingface.co/m-a-p/neo_scalinglaw_980M) |
| neo_scalinglaw_460M | Checkpoints from the scaling-law experiments (460M). | β€’ [πŸ€— Hugging Face](https://huggingface.co/m-a-p/neo_scalinglaw_460M) |
| neo_scalinglaw_250M | Checkpoints from the scaling-law experiments (250M). | β€’ [πŸ€— Hugging Face](https://huggingface.co/m-a-p/neo_scalinglaw_250M) |
| neo_2b_general | The 2B model trained on general-domain data. | β€’ [πŸ€— Hugging Face](https://huggingface.co/m-a-p/neo_2b_general) |

### Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = ''  # e.g. "m-a-p/neo_7b_instruct_v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype='auto'
).eval()

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello, what can you help me do?"},
]

# Render the conversation with the model's chat template and generate a reply.
input_ids = tokenizer.apply_chat_template(conversation=messages, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to(model.device), max_new_tokens=20)

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
print(response)
```
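The chat-template example above is aimed at the SFT/instruct variants. For the base model (`neo_7b`), plain text completion is the more natural interface; below is a minimal sketch under that assumption, reusing the same loading path (the prompt string is purely illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = 'm-a-p/neo_7b'  # the base model from the table above

tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", torch_dtype='auto').eval()

# Base models continue raw text rather than follow chat turns,
# so no chat template is applied here.
prompt = "The history of artificial intelligence began"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=50)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```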