
πŸ»β€β„οΈCOKAL-v1_70BπŸ»β€β„οΈ


Model Details

Model Developers Seungyoo Lee (DopeorNope)

Input Models accept text input only.

Output Models generate text only.

Model Architecture
COKAL-v1_70B is an auto-regressive 70B language model based on the LLaMA2 transformer architecture.
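Auto-regressive here means the model produces one token at a time, feeding each prediction back into the context for the next step. A toy, model-free sketch of that decoding loop (the next-token rule below is purely hypothetical, standing in for the real transformer):

```python
def toy_next_token(context):
    # Hypothetical stand-in for the model's next-token prediction:
    # cycles through a fixed vocabulary based on context length.
    cycle = ["the", "cat", "sat", "."]
    return cycle[len(context) % len(cycle)]

def generate(prompt_tokens, max_new_tokens=4):
    # Auto-regressive loop: each new token is appended to the context
    # that conditions the following prediction.
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        tokens.append(toy_next_token(tokens))
    return tokens

print(generate([], 4))  # ['the', 'cat', 'sat', '.']
```

The real model replaces `toy_next_token` with a forward pass over the LLaMA2 transformer, but the loop structure is the same.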

Base Model

Training Dataset

Training
I developed the model in an environment with 8Ă— NVIDIA A100 GPUs.
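A quick back-of-envelope check on why multi-GPU hardware is needed: at roughly 70B parameters in float16 (the dtype used in the loading code on this card), the weights alone approach the combined memory of several A100s. A minimal sketch of that arithmetic, assuming 2 bytes per parameter and counting weights only (activations and the KV cache add overhead on top):

```python
# Rough VRAM estimate for a ~70B-parameter model in float16.
PARAMS = 70e9            # approximate parameter count
BYTES_PER_PARAM = 2      # float16 = 2 bytes per parameter

weight_gib = PARAMS * BYTES_PER_PARAM / 1024**3
print(f"~{weight_gib:.0f} GiB of weights")  # ~130 GiB
```

This is why the loading code uses `device_map='auto'`, which shards the weights across all available GPUs.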

Implementation Code


from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "DopeorNope/COKAL-v1_70B"

# Load the weights in float16 and shard them across available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Load the tokenizer that matches the model.
model_tokenizer = AutoTokenizer.from_pretrained(repo)
