This repository (saroyehun/Llama3-8B-Instruct-mntp-patent) contains the LoRA adapter weights from fine-tuning the Llama 3 (8B) Instruct model on patent documents with masked next token prediction (MNTP). MNTP is the first step in adapting the base model for embedding generation, following the llm2vec approach.
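The card ships no usage snippet, so here is a minimal loading sketch, assuming the adapter attaches to the base model through the standard transformers + peft API; the repo IDs come from this card, while the example text and the mean-pooling choice are illustrative assumptions. Note that llm2vec trains MNTP with bidirectional attention enabled, which plain causal-LM loading does not reproduce; for the full pipeline, use the llm2vec library itself.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"          # base model (from this card)
adapter_id = "saroyehun/Llama3-8B-Instruct-mntp-patent"  # this adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(model, adapter_id)     # attach the LoRA weights
model.eval()

# Hypothetical patent-style input; any text works the same way.
text = "A method for manufacturing a semiconductor device comprising..."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# One common way to turn token states into a single embedding:
# mean-pool the last hidden layer over non-padding tokens.
hidden = outputs.hidden_states[-1]               # (batch, seq_len, hidden_size)
mask = inputs["attention_mask"].unsqueeze(-1)    # (batch, seq_len, 1)
embedding = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embedding.shape)                           # torch.Size([1, 4096])
```

Since the card describes MNTP as only the first llm2vec step, the subsequent llm2vec stages (e.g. unsupervised contrastive training) would typically still follow before the embeddings are used downstream.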
Framework versions
- PEFT 0.12.0
Base model
- meta-llama/Meta-Llama-3-8B-Instruct