license: apache-2.0

Source GGUF (Q8_0, two parts): https://huggingface.co/bartowski/Llama-3.1-Nemotron-70B-Instruct-HF-GGUF/tree/main
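
The merge and split commands below assume llama.cpp has already been cloned and built at /content/llama.cpp, so that the llama-gguf-split tool sits at the repo root. A minimal sketch, assuming the classic Makefile build (newer llama.cpp checkouts use CMake instead and place binaries under build/bin/):

!git clone https://github.com/ggerganov/llama.cpp /content/llama.cpp
%cd /content/llama.cpp
!make -j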

!apt-get install -y aria2

!aria2c -x 16 -s 16
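
aria2c is given the direct-download URL of each Q8_0 part from the source repo. For reference, a hedged sketch of the two calls; the resolve/main paths below assume the shards sit at the top level of bartowski's repo, so check the file listing there first (large quants are sometimes stored in a subfolder named after the quant) and adjust:

!aria2c -x 16 -s 16 https://huggingface.co/bartowski/Llama-3.1-Nemotron-70B-Instruct-HF-GGUF/resolve/main/Llama-3.1-Nemotron-70B-Instruct-HF-Q8_0-00001-of-00002.gguf
!aria2c -x 16 -s 16 https://huggingface.co/bartowski/Llama-3.1-Nemotron-70B-Instruct-HF-GGUF/resolve/main/Llama-3.1-Nemotron-70B-Instruct-HF-Q8_0-00002-of-00002.gguf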

!./llama-gguf-split --merge Llama-3.1-Nemotron-70B-Instruct-HF-Q8_0-00001-of-00002.gguf Nemotron-70B-Instruct-HF-Q8_0.gguf
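
--merge only needs the path of the first shard plus an output name; the remaining shard is found automatically from the -00001-of-00002 naming. A quick sanity check after merging (a 70.6B-parameter Q8_0 file should come out at roughly 75 GB):

!ls -lh /content/llama.cpp/Nemotron-70B-Instruct-HF-Q8_0.gguf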

!/content/llama.cpp/llama-gguf-split --split-max-size 10G /content/llama.cpp/Nemotron-70B-Instruct-HF-Q8_0.gguf /content/Nemotron-70B-Instruct-HF-Q8
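
The split writes shards named Nemotron-70B-Instruct-HF-Q8-00001-of-0000N.gguf into /content/ (a 10G cap gives eight shards for this file, hence the 8parts in the repository name), while the upload script below reads from /content/split_model, so gather the shards there first; a minimal sketch:

!mkdir -p /content/split_model
!mv /content/Nemotron-70B-Instruct-HF-Q8-*.gguf /content/split_model/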

from huggingface_hub import upload_folder

# Path of the folder to upload
folder_path = "/content/split_model"  # replace with the correct path

# Repository name
repo_id = "sdyy/Nemotron-70B-Instruct-HF-Q8_8parts"

# Folder name inside the repository (optional; not used below — pass it as path_in_repo to upload into a subfolder)
repo_folder_name = "split_model"  # replace with the name you want

# Your Hugging Face token
token = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

# Upload the folder
upload_folder(
    folder_path=folder_path,
    repo_id=repo_id,
    repo_type="model",
    token=token,
)
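
To use the split quant, download every shard and point llama.cpp at the first one; recent llama.cpp builds detect and load the remaining shards automatically, so re-merging with llama-gguf-split --merge is optional. A sketch using the huggingface_hub CLI; the shard filename and local path are assumptions, so adjust them to the files the split actually produced:

!huggingface-cli download sdyy/Nemotron-70B-Instruct-HF-Q8_8parts --include "*.gguf" --local-dir /content/nemotron-q8
# Point llama.cpp at the first shard; adjust the name to the actual downloaded file.
!/content/llama.cpp/llama-cli -m /content/nemotron-q8/Nemotron-70B-Instruct-HF-Q8-00001-of-00008.gguf -p "Hello"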
