
This repository is a fork of the original almatkai/ingredientExtractor-Mistral-7b with custom GGUF quantizations tailored for NeurochainAI's inference network. The models provided here are a core component of NeurochainAI's AI inference solutions.

NeurochainAI uses these models to run optimized inference across distributed networks, enabling efficient and robust language-model serving on a range of platforms and devices.

Additionally, this repository includes custom LoRA adapters developed for the Darkfrontiers and ImaginaryOnes game chatbots, enhancing AI interactions within those gaming environments.
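As a minimal usage sketch, one of the GGUF quantizations can be loaded locally with llama-cpp-python. The model filename and prompt below are assumptions for illustration; check the repository's file list for the exact quantization name you downloaded.

```python
from pathlib import Path

# Assumed filename -- substitute the GGUF file you actually downloaded
# from this repository.
MODEL_PATH = Path("ingredient-extractor-mistral-7b-instruct-v0.1.Q8_0.gguf")

def extract_ingredients(recipe_text: str) -> str:
    # Lazy import so the sketch can be read without llama-cpp-python installed.
    from llama_cpp import Llama

    llm = Llama(model_path=str(MODEL_PATH), n_ctx=2048)
    out = llm(
        f"Extract the ingredients from the following recipe:\n{recipe_text}\n",
        max_tokens=128,
    )
    return out["choices"][0]["text"]

# Only attempt inference when the quantized model file is present.
if MODEL_PATH.exists():
    print(extract_ingredients("2 cups flour, 1 tsp salt, 3 eggs"))
```

The 8-bit (Q8_0) quantization trades a small amount of accuracy for a much smaller memory footprint than the full-precision weights, which is what makes distributed edge inference practical.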

Model details:
- Format: GGUF (8-bit quantization)
- Model size: 7.24B params
- Architecture: llama


Model: neurochainai/ingredient-extractor-mistral-7b-instruct-v0.1