Dataset schema: modelId (string, 4–122 chars) | author (string, 2–42 chars) | last_modified (unknown) | downloads (int64, 0–434M) | likes (int64, 0–6.54k) | library_name (366 classes) | tags (sequence, 1–4.05k items) | pipeline_tag (51 classes) | createdAt (unknown) | card (string, 1–913k chars)

| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
mradermacher/deepseek-math-7b-base-GGUF | mradermacher | "2024-10-31T23:49:43Z" | 0 | 0 | transformers | ["transformers", "gguf", "en", "base_model:deepseek-ai/deepseek-math-7b-base", "base_model:quantized:deepseek-ai/deepseek-math-7b-base", "license:other", "endpoints_compatible", "region:us"] | null | "2024-10-31T23:37:09Z" | ---
base_model: deepseek-ai/deepseek-math-7b-base
language:
- en
library_name: transformers
license: other
license_link: https://github.com/deepseek-ai/DeepSeek-Math/blob/main/LICENSE-MODEL
license_name: deepseek
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/deepseek-ai/deepseek-math-7b-base
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
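As a concrete illustration, here is a minimal sketch of loading one of the quants listed below with the `llama-cpp-python` bindings; the file name, context size, and prompt are assumptions, not part of this repo's instructions. Multi-part files (if any) must be concatenated into a single `.gguf` before loading.

```py
from llama_cpp import Llama

# Hypothetical example: the file path and settings are illustrative only.
llm = Llama(model_path="deepseek-math-7b-base.Q4_K_M.gguf", n_ctx=4096)
out = llm("Q: What is 7 * 8? A:", max_tokens=16)
print(out["choices"][0]["text"])
```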
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/deepseek-math-7b-base-GGUF/resolve/main/deepseek-math-7b-base.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-math-7b-base-GGUF/resolve/main/deepseek-math-7b-base.Q3_K_S.gguf) | Q3_K_S | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-math-7b-base-GGUF/resolve/main/deepseek-math-7b-base.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/deepseek-math-7b-base-GGUF/resolve/main/deepseek-math-7b-base.Q3_K_L.gguf) | Q3_K_L | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-math-7b-base-GGUF/resolve/main/deepseek-math-7b-base.IQ4_XS.gguf) | IQ4_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-math-7b-base-GGUF/resolve/main/deepseek-math-7b-base.Q4_K_S.gguf) | Q4_K_S | 4.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/deepseek-math-7b-base-GGUF/resolve/main/deepseek-math-7b-base.Q4_K_M.gguf) | Q4_K_M | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/deepseek-math-7b-base-GGUF/resolve/main/deepseek-math-7b-base.Q5_K_S.gguf) | Q5_K_S | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-math-7b-base-GGUF/resolve/main/deepseek-math-7b-base.Q5_K_M.gguf) | Q5_K_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-math-7b-base-GGUF/resolve/main/deepseek-math-7b-base.Q6_K.gguf) | Q6_K | 5.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/deepseek-math-7b-base-GGUF/resolve/main/deepseek-math-7b-base.Q8_0.gguf) | Q8_0 | 7.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/deepseek-math-7b-base-GGUF/resolve/main/deepseek-math-7b-base.f16.gguf) | f16 | 13.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
featherless-ai-quants/aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-GGUF | featherless-ai-quants | "2024-11-01T00:09:57Z" | 0 | 0 | null | ["gguf", "text-generation", "base_model:aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K", "base_model:quantized:aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K", "region:us"] | text-generation | "2024-10-31T23:37:24Z" | ---
base_model: aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K GGUF Quantizations
![Featherless AI Quants](./featherless-quants.png)
*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-GGUF/blob/main/aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-Q8_0.gguf) | 8145.11 MB |
| Q4_K_S | [aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-GGUF/blob/main/aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-Q4_K_S.gguf) | 4475.28 MB |
| Q2_K | [aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-GGUF/blob/main/aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-Q2_K.gguf) | 3031.86 MB |
| Q6_K | [aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-GGUF/blob/main/aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-Q6_K.gguf) | 6290.44 MB |
| Q3_K_M | [aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-GGUF/blob/main/aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-GGUF/blob/main/aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-Q3_K_S.gguf) | 3494.74 MB |
| Q3_K_L | [aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-GGUF/blob/main/aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-Q3_K_L.gguf) | 4121.74 MB |
| Q4_K_M | [aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-GGUF/blob/main/aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-Q4_K_M.gguf) | 4692.78 MB |
| Q5_K_S | [aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-GGUF/blob/main/aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-Q5_K_S.gguf) | 5339.90 MB |
| Q5_K_M | [aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-GGUF/blob/main/aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-Q5_K_M.gguf) | 5467.40 MB |
| IQ4_XS | [aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-GGUF/blob/main/aifeifei798-llama3-8B-DarkIdol-2.1-Uncensored-32K-IQ4_XS.gguf) | 4276.62 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
aidadev48/model16 | aidadev48 | "2024-10-31T23:39:32Z" | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | "2024-10-31T23:37:42Z" | ---
base_model: unsloth/llama-3.2-3b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** aidadev48
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
shxdw/stud | shxdw | "2024-10-31T23:37:47Z" | 0 | 0 | null | ["region:us"] | null | "2024-10-31T23:37:47Z" | Entry not found |
mradermacher/phi-2-OpenHermes-2.5-GGUF | mradermacher | "2024-10-31T23:51:26Z" | 0 | 0 | transformers | ["transformers", "gguf", "en", "dataset:teknium/OpenHermes-2.5", "base_model:g-ronimo/phi-2-OpenHermes-2.5", "base_model:quantized:g-ronimo/phi-2-OpenHermes-2.5", "license:mit", "endpoints_compatible", "region:us"] | null | "2024-10-31T23:39:33Z" | ---
base_model: g-ronimo/phi-2-OpenHermes-2.5
datasets:
- teknium/OpenHermes-2.5
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/g-ronimo/phi-2-OpenHermes-2.5
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
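For the multi-part case specifically, here is a small sketch of joining split parts back into a single file; the part-name pattern is an assumption, so adapt it to the files you actually downloaded.

```py
import glob
import shutil

# Hypothetical split-file naming; check the repo's actual file names.
parts = sorted(glob.glob("phi-2-OpenHermes-2.5.Q8_0.gguf.part*"))
with open("phi-2-OpenHermes-2.5.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)
```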
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-GGUF/resolve/main/phi-2-OpenHermes-2.5.Q2_K.gguf) | Q2_K | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-GGUF/resolve/main/phi-2-OpenHermes-2.5.Q3_K_S.gguf) | Q3_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-GGUF/resolve/main/phi-2-OpenHermes-2.5.Q3_K_M.gguf) | Q3_K_M | 1.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-GGUF/resolve/main/phi-2-OpenHermes-2.5.IQ4_XS.gguf) | IQ4_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-GGUF/resolve/main/phi-2-OpenHermes-2.5.Q3_K_L.gguf) | Q3_K_L | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-GGUF/resolve/main/phi-2-OpenHermes-2.5.Q4_K_S.gguf) | Q4_K_S | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-GGUF/resolve/main/phi-2-OpenHermes-2.5.Q4_K_M.gguf) | Q4_K_M | 1.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-GGUF/resolve/main/phi-2-OpenHermes-2.5.Q5_K_S.gguf) | Q5_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-GGUF/resolve/main/phi-2-OpenHermes-2.5.Q5_K_M.gguf) | Q5_K_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-GGUF/resolve/main/phi-2-OpenHermes-2.5.Q6_K.gguf) | Q6_K | 2.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-GGUF/resolve/main/phi-2-OpenHermes-2.5.Q8_0.gguf) | Q8_0 | 3.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/miqu-70b-6-GGUF | mradermacher | "2024-10-31T23:50:35Z" | 0 | 0 | transformers | ["transformers", "gguf", "en", "base_model:typeof/miqu-70b-6", "base_model:quantized:typeof/miqu-70b-6", "endpoints_compatible", "region:us"] | null | "2024-10-31T23:39:53Z" | ---
base_model: typeof/miqu-70b-6
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/typeof/miqu-70b-6
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/miqu-70b-6-GGUF/resolve/main/miqu-70b-6.Q2_K.gguf) | Q2_K | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/miqu-70b-6-GGUF/resolve/main/miqu-70b-6.Q3_K_S.gguf) | Q3_K_S | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/miqu-70b-6-GGUF/resolve/main/miqu-70b-6.Q3_K_M.gguf) | Q3_K_M | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/miqu-70b-6-GGUF/resolve/main/miqu-70b-6.Q3_K_L.gguf) | Q3_K_L | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/miqu-70b-6-GGUF/resolve/main/miqu-70b-6.IQ4_XS.gguf) | IQ4_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/miqu-70b-6-GGUF/resolve/main/miqu-70b-6.Q4_K_S.gguf) | Q4_K_S | 3.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/miqu-70b-6-GGUF/resolve/main/miqu-70b-6.Q4_K_M.gguf) | Q4_K_M | 3.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/miqu-70b-6-GGUF/resolve/main/miqu-70b-6.Q5_K_S.gguf) | Q5_K_S | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/miqu-70b-6-GGUF/resolve/main/miqu-70b-6.Q5_K_M.gguf) | Q5_K_M | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/miqu-70b-6-GGUF/resolve/main/miqu-70b-6.Q6_K.gguf) | Q6_K | 4.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/miqu-70b-6-GGUF/resolve/main/miqu-70b-6.Q8_0.gguf) | Q8_0 | 6.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/miqu-70b-6-GGUF/resolve/main/miqu-70b-6.f16.gguf) | f16 | 11.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
featherless-ai-quants/nbeerbower-Lyra4-Gutenberg2-12B-GGUF | featherless-ai-quants | "2024-11-01T00:26:18Z" | 0 | 0 | null | ["gguf", "text-generation", "base_model:nbeerbower/Lyra4-Gutenberg2-12B", "base_model:quantized:nbeerbower/Lyra4-Gutenberg2-12B", "region:us"] | text-generation | "2024-10-31T23:40:19Z" | ---
base_model: nbeerbower/Lyra4-Gutenberg2-12B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# nbeerbower/Lyra4-Gutenberg2-12B GGUF Quantizations
![Featherless AI Quants](./featherless-quants.png)
*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [nbeerbower-Lyra4-Gutenberg2-12B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra4-Gutenberg2-12B-GGUF/blob/main/nbeerbower-Lyra4-Gutenberg2-12B-Q8_0.gguf) | 12419.10 MB |
| Q4_K_S | [nbeerbower-Lyra4-Gutenberg2-12B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra4-Gutenberg2-12B-GGUF/blob/main/nbeerbower-Lyra4-Gutenberg2-12B-Q4_K_S.gguf) | 6790.35 MB |
| Q2_K | [nbeerbower-Lyra4-Gutenberg2-12B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra4-Gutenberg2-12B-GGUF/blob/main/nbeerbower-Lyra4-Gutenberg2-12B-Q2_K.gguf) | 4569.10 MB |
| Q6_K | [nbeerbower-Lyra4-Gutenberg2-12B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra4-Gutenberg2-12B-GGUF/blob/main/nbeerbower-Lyra4-Gutenberg2-12B-Q6_K.gguf) | 9590.35 MB |
| Q3_K_M | [nbeerbower-Lyra4-Gutenberg2-12B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra4-Gutenberg2-12B-GGUF/blob/main/nbeerbower-Lyra4-Gutenberg2-12B-Q3_K_M.gguf) | 5801.29 MB |
| Q3_K_S | [nbeerbower-Lyra4-Gutenberg2-12B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra4-Gutenberg2-12B-GGUF/blob/main/nbeerbower-Lyra4-Gutenberg2-12B-Q3_K_S.gguf) | 5277.85 MB |
| Q3_K_L | [nbeerbower-Lyra4-Gutenberg2-12B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra4-Gutenberg2-12B-GGUF/blob/main/nbeerbower-Lyra4-Gutenberg2-12B-Q3_K_L.gguf) | 6257.54 MB |
| Q4_K_M | [nbeerbower-Lyra4-Gutenberg2-12B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra4-Gutenberg2-12B-GGUF/blob/main/nbeerbower-Lyra4-Gutenberg2-12B-Q4_K_M.gguf) | 7130.82 MB |
| Q5_K_S | [nbeerbower-Lyra4-Gutenberg2-12B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra4-Gutenberg2-12B-GGUF/blob/main/nbeerbower-Lyra4-Gutenberg2-12B-Q5_K_S.gguf) | 8124.10 MB |
| Q5_K_M | [nbeerbower-Lyra4-Gutenberg2-12B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra4-Gutenberg2-12B-GGUF/blob/main/nbeerbower-Lyra4-Gutenberg2-12B-Q5_K_M.gguf) | 8323.32 MB |
| IQ4_XS | [nbeerbower-Lyra4-Gutenberg2-12B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra4-Gutenberg2-12B-GGUF/blob/main/nbeerbower-Lyra4-Gutenberg2-12B-IQ4_XS.gguf) | 6485.04 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
Trickshotblaster/Llamma1BShakespeare | Trickshotblaster | "2024-10-31T23:43:31Z" | 0 | 0 | transformers | ["transformers", "safetensors", "unsloth", "text-generation", "en", "arxiv:1910.09700", "base_model:unsloth/Llama-3.2-1B-bnb-4bit", "base_model:finetune:unsloth/Llama-3.2-1B-bnb-4bit", "endpoints_compatible", "region:us"] | text-generation | "2024-10-31T23:40:30Z" | ---
library_name: transformers
tags:
- unsloth
language:
- en
base_model:
- unsloth/Llama-3.2-1B-bnb-4bit
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
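Pending the author's own snippet, here is a minimal, hypothetical sketch assuming this checkpoint loads with the standard `transformers` text-generation API; the prompt is illustrative only.

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Trickshotblaster/Llamma1BShakespeare"  # repo id from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short continuation from an illustrative prompt.
inputs = tokenizer("Shall I compare thee", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```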
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
straykittycat/catnip1 | straykittycat | "2024-10-31T23:47:10Z" | 0 | 0 | null | ["any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us"] | any-to-any | "2024-10-31T23:40:52Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
straykittycat/catnip | straykittycat | "2024-10-31T23:47:22Z" | 0 | 0 | null | ["any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us"] | any-to-any | "2024-10-31T23:40:55Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
nrjrita/whisper-medium-sv | nrjrita | "2024-10-31T23:44:56Z" | 0 | 0 | null | ["region:us"] | null | "2024-10-31T23:44:56Z" | Entry not found |
dhamu4hf/donut-base-sroie | dhamu4hf | "2024-10-31T23:58:59Z" | 0 | 0 | null | ["tensorboard", "safetensors", "vision-encoder-decoder", "region:us"] | null | "2024-10-31T23:45:16Z" | ---
library_name: transformers
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
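For readers who want to reproduce this configuration, here is a minimal sketch of how these settings might map onto `transformers.TrainingArguments`; the `output_dir` is illustrative, and the Adam betas and epsilon listed above are the library defaults, so they need not be set explicitly.

```py
from transformers import TrainingArguments

# Hypothetical mapping of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="donut-base-sroie",   # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,  # "Native AMP" mixed precision
)
```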
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
jrbeduardo/vit-model-jrbeduardo-v2 | jrbeduardo | "2024-10-31T23:50:52Z" | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | "2024-10-31T23:45:17Z" | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-model-jrbeduardo-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-model-jrbeduardo-v2
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the AI-Lab-Makerere/beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0727
- Accuracy: 0.9850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
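Since the card reports accuracy on the evaluation set, here is a minimal sketch of how such a metric is typically wired into the `Trainer`, assuming the `evaluate` library (which this card does not itself specify):

```py
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) pair supplied by the Trainer.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```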
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1509 | 3.8462 | 500 | 0.0727 | 0.9850 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
mradermacher/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-GGUF | mradermacher | "2024-10-31T23:50:17Z" | 0 | 0 | transformers | ["transformers", "gguf", "generated_from_trainer", "trl", "dpo", "en", "dataset:AlekseyKorshuk/ai-detection-gutenberg-human-formatted-ai-v1-rl-trl", "base_model:AlekseyKorshuk/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo", "base_model:quantized:AlekseyKorshuk/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo", "endpoints_compatible", "region:us"] | null | "2024-10-31T23:47:14Z" | ---
base_model: AlekseyKorshuk/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo
datasets: AlekseyKorshuk/ai-detection-gutenberg-human-formatted-ai-v1-rl-trl
language:
- en
library_name: transformers
model_name: ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo
quantized_by: mradermacher
tags:
- generated_from_trainer
- trl
- dpo
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/AlekseyKorshuk/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-GGUF/resolve/main/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-GGUF/resolve/main/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-GGUF/resolve/main/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-GGUF/resolve/main/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-GGUF/resolve/main/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-GGUF/resolve/main/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-GGUF/resolve/main/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-GGUF/resolve/main/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-GGUF/resolve/main/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-GGUF/resolve/main/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-GGUF/resolve/main/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-GGUF/resolve/main/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-3epochs-GGUF | mradermacher | "2024-10-31T23:50:34Z" | 0 | 0 | transformers | ["transformers", "gguf", "generated_from_trainer", "trl", "dpo", "en", "dataset:AlekseyKorshuk/ai-detection-gutenberg-human-formatted-ai-v1-rl-trl", "base_model:AlekseyKorshuk/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-3epochs", "base_model:quantized:AlekseyKorshuk/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-3epochs", "endpoints_compatible", "region:us"] | null | "2024-10-31T23:47:15Z" | ---
base_model: AlekseyKorshuk/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-3epochs
datasets: AlekseyKorshuk/ai-detection-gutenberg-human-formatted-ai-v1-rl-trl
language:
- en
library_name: transformers
model_name: ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-3epochs
quantized_by: mradermacher
tags:
- generated_from_trainer
- trl
- dpo
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/AlekseyKorshuk/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-3epochs
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-3epochs-GGUF/resolve/main/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-3epochs.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-3epochs-GGUF/resolve/main/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-3epochs.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-3epochs-GGUF/resolve/main/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-3epochs.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-3epochs-GGUF/resolve/main/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-3epochs.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-3epochs-GGUF/resolve/main/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-3epochs.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-3epochs-GGUF/resolve/main/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-3epochs.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-3epochs-GGUF/resolve/main/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-3epochs.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-3epochs-GGUF/resolve/main/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-3epochs.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-3epochs-GGUF/resolve/main/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-3epochs.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-3epochs-GGUF/resolve/main/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-3epochs.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-3epochs-GGUF/resolve/main/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-3epochs.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-3epochs-GGUF/resolve/main/ai-detection-gutenberg-human-formatted-ai-v1-sft-qwen-3b-dpo-3epochs.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF | mradermacher | "2024-11-01T00:59:56Z" | 0 | 0 | transformers | ["transformers", "gguf", "merge", "mergekit", "lazymergekit", "maldv/badger-writer-llama-3-8b", "vicgalle/Configurable-Hermes-2-Pro-Llama-3-8B", "Orenguteng/Llama-3-8B-Lexi-Uncensored", "abacusai/Llama-3-Smaug-8B", "en", "base_model:ZeroXClem/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B", "base_model:quantized:ZeroXClem/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | "2024-10-31T23:47:15Z" | ---
base_model: ZeroXClem/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- maldv/badger-writer-llama-3-8b
- vicgalle/Configurable-Hermes-2-Pro-Llama-3-8B
- Orenguteng/Llama-3-8B-Lexi-Uncensored
- abacusai/Llama-3-Smaug-8B
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ZeroXClem/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.8 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.8 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
lesliewu321/aiflora | lesliewu321 | "2024-10-31T23:47:42Z" | 0 | 0 | diffusers | ["diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | "2024-10-31T23:47:39Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: aiflora
---
# Aiflora
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `aiflora` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline in half precision on the GPU.
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
# Attach this repo's LoRA weights on top of the base model.
pipeline.load_lora_weights('lesliewu321/aiflora', weight_name='lora.safetensors')
# Include the trigger word `aiflora` in your prompt (see "Trigger words" above).
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Quinero32/j | Quinero32 | "2024-10-31T23:49:00Z" | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | "2024-10-31T23:47:40Z" | ---
license: apache-2.0
---
|
mradermacher/LexGPT-V1-GGUF | mradermacher | "2024-11-01T00:01:23Z" | 0 | 0 | transformers | ["transformers", "gguf", "en", "de", "dataset:TIGER-Lab/MathInstruct", "dataset:LDJnr/Capybara", "dataset:openchat/openchat_sharegpt4_dataset", "dataset:imone/OpenOrca_FLAN", "dataset:Open-Orca/OpenOrca", "dataset:Intel/orca_dpo_pairs", "dataset:LDJnr/LessWrong-Amplify-Instruct", "dataset:LDJnr/Pure-Dove", "dataset:LDJnr/Verified-Camel", "dataset:tiedong/goat", "dataset:glaiveai/glaive-code-assistant", "dataset:OpenAssistant/oasst_top1_2023-08-25", "base_model:lex-hue/LexGPT-V1", "base_model:quantized:lex-hue/LexGPT-V1", "license:mit", "endpoints_compatible", "region:us"] | null | "2024-10-31T23:47:54Z" | ---
base_model: lex-hue/LexGPT-V1
datasets:
- TIGER-Lab/MathInstruct
- LDJnr/Capybara
- openchat/openchat_sharegpt4_dataset
- imone/OpenOrca_FLAN
- Open-Orca/OpenOrca
- Intel/orca_dpo_pairs
- LDJnr/LessWrong-Amplify-Instruct
- LDJnr/Pure-Dove
- LDJnr/Verified-Camel
- tiedong/goat
- glaiveai/glaive-code-assistant
- OpenAssistant/oasst_top1_2023-08-25
language:
- en
- de
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/lex-hue/LexGPT-V1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LexGPT-V1-GGUF/resolve/main/LexGPT-V1.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/LexGPT-V1-GGUF/resolve/main/LexGPT-V1.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/LexGPT-V1-GGUF/resolve/main/LexGPT-V1.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LexGPT-V1-GGUF/resolve/main/LexGPT-V1.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/LexGPT-V1-GGUF/resolve/main/LexGPT-V1.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/LexGPT-V1-GGUF/resolve/main/LexGPT-V1.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LexGPT-V1-GGUF/resolve/main/LexGPT-V1.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LexGPT-V1-GGUF/resolve/main/LexGPT-V1.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/LexGPT-V1-GGUF/resolve/main/LexGPT-V1.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/LexGPT-V1-GGUF/resolve/main/LexGPT-V1.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/LexGPT-V1-GGUF/resolve/main/LexGPT-V1.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/LexGPT-V1-GGUF/resolve/main/LexGPT-V1.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
featherless-ai-quants/lashid11-CheckGPT-SOLAR-10.7B-GGUF | featherless-ai-quants | "2024-11-01T00:25:50Z" | 0 | 0 | null | ["gguf", "text-generation", "base_model:lashid11/CheckGPT-SOLAR-10.7B", "base_model:quantized:lashid11/CheckGPT-SOLAR-10.7B", "region:us"] | text-generation | "2024-10-31T23:48:22Z" | ---
base_model: lashid11/CheckGPT-SOLAR-10.7B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# lashid11/CheckGPT-SOLAR-10.7B GGUF Quantizations
![Featherless AI Quants](./featherless-quants.png)
*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [lashid11-CheckGPT-SOLAR-10.7B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/lashid11-CheckGPT-SOLAR-10.7B-GGUF/blob/main/lashid11-CheckGPT-SOLAR-10.7B-Q8_0.gguf) | 10875.85 MB |
| Q4_K_S | [lashid11-CheckGPT-SOLAR-10.7B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/lashid11-CheckGPT-SOLAR-10.7B-GGUF/blob/main/lashid11-CheckGPT-SOLAR-10.7B-Q4_K_S.gguf) | 5835.08 MB |
| Q2_K | [lashid11-CheckGPT-SOLAR-10.7B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/lashid11-CheckGPT-SOLAR-10.7B-GGUF/blob/main/lashid11-CheckGPT-SOLAR-10.7B-Q2_K.gguf) | 3817.78 MB |
| Q6_K | [lashid11-CheckGPT-SOLAR-10.7B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/lashid11-CheckGPT-SOLAR-10.7B-GGUF/blob/main/lashid11-CheckGPT-SOLAR-10.7B-Q6_K.gguf) | 8397.30 MB |
| Q3_K_M | [lashid11-CheckGPT-SOLAR-10.7B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/lashid11-CheckGPT-SOLAR-10.7B-GGUF/blob/main/lashid11-CheckGPT-SOLAR-10.7B-Q3_K_M.gguf) | 4954.98 MB |
| Q3_K_S | [lashid11-CheckGPT-SOLAR-10.7B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/lashid11-CheckGPT-SOLAR-10.7B-GGUF/blob/main/lashid11-CheckGPT-SOLAR-10.7B-Q3_K_S.gguf) | 4448.48 MB |
| Q3_K_L | [lashid11-CheckGPT-SOLAR-10.7B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/lashid11-CheckGPT-SOLAR-10.7B-GGUF/blob/main/lashid11-CheckGPT-SOLAR-10.7B-Q3_K_L.gguf) | 5388.98 MB |
| Q4_K_M | [lashid11-CheckGPT-SOLAR-10.7B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/lashid11-CheckGPT-SOLAR-10.7B-GGUF/blob/main/lashid11-CheckGPT-SOLAR-10.7B-Q4_K_M.gguf) | 6162.33 MB |
| Q5_K_S | [lashid11-CheckGPT-SOLAR-10.7B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/lashid11-CheckGPT-SOLAR-10.7B-GGUF/blob/main/lashid11-CheckGPT-SOLAR-10.7B-Q5_K_S.gguf) | 7054.70 MB |
| Q5_K_M | [lashid11-CheckGPT-SOLAR-10.7B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/lashid11-CheckGPT-SOLAR-10.7B-GGUF/blob/main/lashid11-CheckGPT-SOLAR-10.7B-Q5_K_M.gguf) | 7245.95 MB |
| IQ4_XS | [lashid11-CheckGPT-SOLAR-10.7B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/lashid11-CheckGPT-SOLAR-10.7B-GGUF/blob/main/lashid11-CheckGPT-SOLAR-10.7B-IQ4_XS.gguf) | 5557.67 MB |
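As a quick, unofficial sketch (not part of this card), a single quant file from the table above can be fetched programmatically with `huggingface_hub`; the repo id and filename below are copied from the Q4_K_S row.

```python
# Unofficial download sketch; assumes `pip install huggingface_hub`.
from huggingface_hub import hf_hub_download

# Fetch the Q4_K_S quant listed above; returns the local cache path.
path = hf_hub_download(
    repo_id="featherless-ai-quants/lashid11-CheckGPT-SOLAR-10.7B-GGUF",
    filename="lashid11-CheckGPT-SOLAR-10.7B-Q4_K_S.gguf",
)
print(path)
```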
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 🌐 **Vast Compatibility** - Support for 2400+ models and counting
- 💰 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
mradermacher/LexGPT-V2-GGUF | mradermacher | "2024-11-01T00:02:43Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:lex-hue/LexGPT-V2",
"base_model:quantized:lex-hue/LexGPT-V2",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2024-10-31T23:48:54Z" | ---
base_model: lex-hue/LexGPT-V2
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/lex-hue/LexGPT-V2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
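As a minimal, unofficial sketch of local inference (assuming `llama-cpp-python` is installed and one of the quant files below has already been downloaded; the filename and prompt are illustrative only):

```python
# Minimal llama-cpp-python sketch; assumes `pip install llama-cpp-python`
# and a locally downloaded quant such as the Q4_K_M file from the table below.
from llama_cpp import Llama

llm = Llama(model_path="LexGPT-V2.Q4_K_M.gguf", n_ctx=4096)

out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```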
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LexGPT-V2-GGUF/resolve/main/LexGPT-V2.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/LexGPT-V2-GGUF/resolve/main/LexGPT-V2.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/LexGPT-V2-GGUF/resolve/main/LexGPT-V2.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LexGPT-V2-GGUF/resolve/main/LexGPT-V2.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/LexGPT-V2-GGUF/resolve/main/LexGPT-V2.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/LexGPT-V2-GGUF/resolve/main/LexGPT-V2.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LexGPT-V2-GGUF/resolve/main/LexGPT-V2.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LexGPT-V2-GGUF/resolve/main/LexGPT-V2.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/LexGPT-V2-GGUF/resolve/main/LexGPT-V2.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/LexGPT-V2-GGUF/resolve/main/LexGPT-V2.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/LexGPT-V2-GGUF/resolve/main/LexGPT-V2.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/LexGPT-V2-GGUF/resolve/main/LexGPT-V2.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Qwen2.5-7B-task2-GGUF | mradermacher | "2024-11-01T00:42:12Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:allknowingroger/Qwen2.5-7B-task2",
"base_model:quantized:allknowingroger/Qwen2.5-7B-task2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-10-31T23:49:15Z" | ---
base_model: allknowingroger/Qwen2.5-7B-task2
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/allknowingroger/Qwen2.5-7B-task2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-GGUF/resolve/main/Qwen2.5-7B-task2.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-GGUF/resolve/main/Qwen2.5-7B-task2.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-GGUF/resolve/main/Qwen2.5-7B-task2.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-GGUF/resolve/main/Qwen2.5-7B-task2.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-GGUF/resolve/main/Qwen2.5-7B-task2.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-GGUF/resolve/main/Qwen2.5-7B-task2.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-GGUF/resolve/main/Qwen2.5-7B-task2.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-GGUF/resolve/main/Qwen2.5-7B-task2.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-GGUF/resolve/main/Qwen2.5-7B-task2.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-GGUF/resolve/main/Qwen2.5-7B-task2.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-GGUF/resolve/main/Qwen2.5-7B-task2.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-GGUF/resolve/main/Qwen2.5-7B-task2.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
autoprogrammer/CulturaX-zh-unsupervised-half | autoprogrammer | "2024-10-31T23:52:53Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-10-31T23:50:08Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF | featherless-ai-quants | "2024-11-01T00:23:00Z" | 0 | 0 | null | [
"gguf",
"text-generation",
"base_model:netcat420/MFANNv0.20.12",
"base_model:quantized:netcat420/MFANNv0.20.12",
"region:us"
] | text-generation | "2024-10-31T23:50:43Z" | ---
base_model: netcat420/MFANNv0.20.12
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# netcat420/MFANNv0.20.12 GGUF Quantizations 🚀
![Featherless AI Quants](./featherless-quants.png)
*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [netcat420-MFANNv0.20.12-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF/blob/main/netcat420-MFANNv0.20.12-Q8_0.gguf) | 8145.11 MB |
| Q4_K_S | [netcat420-MFANNv0.20.12-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF/blob/main/netcat420-MFANNv0.20.12-Q4_K_S.gguf) | 4475.28 MB |
| Q2_K | [netcat420-MFANNv0.20.12-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF/blob/main/netcat420-MFANNv0.20.12-Q2_K.gguf) | 3031.86 MB |
| Q6_K | [netcat420-MFANNv0.20.12-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF/blob/main/netcat420-MFANNv0.20.12-Q6_K.gguf) | 6290.44 MB |
| Q3_K_M | [netcat420-MFANNv0.20.12-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF/blob/main/netcat420-MFANNv0.20.12-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [netcat420-MFANNv0.20.12-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF/blob/main/netcat420-MFANNv0.20.12-Q3_K_S.gguf) | 3494.74 MB |
| Q3_K_L | [netcat420-MFANNv0.20.12-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF/blob/main/netcat420-MFANNv0.20.12-Q3_K_L.gguf) | 4121.74 MB |
| Q4_K_M | [netcat420-MFANNv0.20.12-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF/blob/main/netcat420-MFANNv0.20.12-Q4_K_M.gguf) | 4692.78 MB |
| Q5_K_S | [netcat420-MFANNv0.20.12-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF/blob/main/netcat420-MFANNv0.20.12-Q5_K_S.gguf) | 5339.90 MB |
| Q5_K_M | [netcat420-MFANNv0.20.12-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF/blob/main/netcat420-MFANNv0.20.12-Q5_K_M.gguf) | 5467.40 MB |
| IQ4_XS | [netcat420-MFANNv0.20.12-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF/blob/main/netcat420-MFANNv0.20.12-IQ4_XS.gguf) | 4276.62 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 🌐 **Vast Compatibility** - Support for 2400+ models and counting
- 💰 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
MonsterMMORPG/Model_Training_Experiments_As_A_Baseline | MonsterMMORPG | "2024-11-01T00:38:46Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-10-31T23:51:11Z" | Purely for science Model Trainings
Installers and config files : https://www.patreon.com/posts/112099700
Fine Tunings : https://youtu.be/FvpWy1x5etM
Used config name : 48GB_GPU_28200MB_6.4_second_it_Tier_1.json
Trained up to 200 epochs with exactly same config
Captions : ohwx man - nothing else
Activation token - trigger word : ohwx man
Dataset - 1024x1024 - 28 images : https://www.patreon.com/posts/114972274
LoRA : https://youtu.be/nySGu12Y05k
Used config name : Rank_1_29500MB_8_85_Second_IT.json
Rest are same as above
Used Kohya GUI : 021c6f5ae3055320a56967284e759620c349aa56
Torch : 2.5.1 , xFormers 0.0.28.post3 : https://www.patreon.com/posts/112099700
### Model File Name Meanings
Dwayne_Johnson_FLUX_Fine_Tuning-000010.safetensors - 10 epochs FLUX Fine Tuning / DreamBooth training = 28 * 10 = 280 steps - Batch size 1, 1024x1024
Dwayne_Johnson_FLUX_Fine_Tuning-000020.safetensors - 20 epochs FLUX Fine Tuning / DreamBooth training = 28 * 20 = 560 steps - Batch size 1, 1024x1024
Dwayne_Johnson_FLUX_LoRA-000010.safetensors - 10 epochs FLUX LoRA Training = 28 * 10 = 280 steps - Batch size 1, 1024x1024
Dwayne_Johnson_FLUX_LoRA-000020.safetensors - 20 epochs FLUX LoRA Training = 28 * 20 = 560 steps - Batch size 1, 1024x1024
|
davitu/FIRD | davitu | "2024-10-31T23:51:27Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-10-31T23:51:27Z" | Entry not found |
straykittycat/playfulcats | straykittycat | "2024-10-31T23:58:37Z" | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2024-10-31T23:52:27Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Dheeraj46329/llama-3.2-new-18-0.5-3e | Dheeraj46329 | "2024-11-01T00:02:41Z" | 0 | 0 | null | [
"safetensors",
"llama",
"license:llama3.2",
"region:us"
] | null | "2024-10-31T23:52:34Z" | ---
license: llama3.2
---
|
jjtamayoa/imdbreviews_classification_amazon-review-sentiment-analysis_v02 | jjtamayoa | "2024-11-01T01:34:34Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"bert",
"region:us"
] | null | "2024-10-31T23:53:00Z" | Entry not found |
mradermacher/CodeActAgent-Llama-2-7b-GGUF | mradermacher | "2024-11-01T00:23:59Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llm-agent",
"en",
"dataset:xingyaoww/code-act",
"base_model:xingyaoww/CodeActAgent-Llama-2-7b",
"base_model:quantized:xingyaoww/CodeActAgent-Llama-2-7b",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | "2024-10-31T23:53:51Z" | ---
base_model: xingyaoww/CodeActAgent-Llama-2-7b
datasets:
- xingyaoww/code-act
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
tags:
- llm-agent
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/xingyaoww/CodeActAgent-Llama-2-7b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-GGUF/resolve/main/CodeActAgent-Llama-2-7b.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-GGUF/resolve/main/CodeActAgent-Llama-2-7b.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-GGUF/resolve/main/CodeActAgent-Llama-2-7b.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-GGUF/resolve/main/CodeActAgent-Llama-2-7b.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-GGUF/resolve/main/CodeActAgent-Llama-2-7b.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-GGUF/resolve/main/CodeActAgent-Llama-2-7b.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-GGUF/resolve/main/CodeActAgent-Llama-2-7b.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-GGUF/resolve/main/CodeActAgent-Llama-2-7b.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-GGUF/resolve/main/CodeActAgent-Llama-2-7b.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-GGUF/resolve/main/CodeActAgent-Llama-2-7b.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-GGUF/resolve/main/CodeActAgent-Llama-2-7b.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-GGUF/resolve/main/CodeActAgent-Llama-2-7b.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
joe611/chickens-composite-403232323232-150-epochs-w-transform-metrics-test | joe611 | "2024-11-01T01:35:03Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"detr",
"region:us"
] | null | "2024-10-31T23:55:07Z" | Entry not found |
Lekhansh/Llama-3.1-8B-Instruct-mixed-instructions | Lekhansh | "2024-10-31T23:55:42Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-10-31T23:55:42Z" | Entry not found |
Smit12345678/AI_COSTUME_CHANGER | Smit12345678 | "2024-10-31T23:57:40Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-10-31T23:57:40Z" | Entry not found |
lhallee/proreg_650 | lhallee | "2024-11-01T00:00:23Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-10-31T23:58:33Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Onii-Chan-3-GGUF | mradermacher | "2024-11-01T01:18:06Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Onii-Chan-3/Onii-Chan-3",
"base_model:quantized:Onii-Chan-3/Onii-Chan-3",
"endpoints_compatible",
"region:us"
] | null | "2024-10-31T23:59:11Z" | ---
base_model: Onii-Chan-3/Onii-Chan-3
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Onii-Chan-3/Onii-Chan-3
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-GGUF/resolve/main/Onii-Chan-3.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-GGUF/resolve/main/Onii-Chan-3.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-GGUF/resolve/main/Onii-Chan-3.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-GGUF/resolve/main/Onii-Chan-3.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-GGUF/resolve/main/Onii-Chan-3.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-GGUF/resolve/main/Onii-Chan-3.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-GGUF/resolve/main/Onii-Chan-3.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-GGUF/resolve/main/Onii-Chan-3.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-GGUF/resolve/main/Onii-Chan-3.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-GGUF/resolve/main/Onii-Chan-3.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-GGUF/resolve/main/Onii-Chan-3.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-GGUF/resolve/main/Onii-Chan-3.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
yiran-wang3/qwen2_chat_adamw_iter1 | yiran-wang3 | "2024-11-01T00:00:00Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-10-31T23:59:59Z" | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
datasets:
- self-generate/qw1_original_cn_mining_oj_iter0-binarized
model-index:
- name: qwen2_chat_adamw_iter1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2_chat_adamw_iter1
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the self-generate/qw1_original_cn_mining_oj_iter0-binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 100
- num_epochs: 1.0
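As a purely hypothetical sketch (the actual training script is not published), these values would map onto `trl`'s `DPOConfig` roughly as follows; the `output_dir` name is assumed, and the multi-GPU totals come from the launcher rather than the config:

```python
# Hypothetical mapping of the hyperparameters above onto trl's DPOConfig.
# Not the actual training code; distributed settings (8 GPUs, total train
# batch 64) would be handled by the launcher (e.g. accelerate), not shown here.
from trl import DPOConfig

config = DPOConfig(
    output_dir="qwen2_chat_adamw_iter1",  # assumed, matching the model name
    learning_rate=1e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="constant",
    warmup_ratio=0.1,
    warmup_steps=100,
    num_train_epochs=1.0,
)
```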
### Training results
### Framework versions
- Transformers 4.45.0
- Pytorch 2.4.0+cu121
- Datasets 2.14.6
- Tokenizers 0.20.1
|
Anteia/Qwen2.5-7B-Instruct-fin-v2.0 | Anteia | "2024-11-01T00:08:18Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-01T00:00:14Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Onii-Chan-3-55-GGUF | mradermacher | "2024-11-01T00:16:11Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Onii-Chan-3/Onii-Chan-3-55",
"base_model:quantized:Onii-Chan-3/Onii-Chan-3-55",
"endpoints_compatible",
"region:us"
] | null | "2024-11-01T00:00:29Z" | ---
base_model: Onii-Chan-3/Onii-Chan-3-55
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Onii-Chan-3/Onii-Chan-3-55
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-55-GGUF/resolve/main/Onii-Chan-3-55.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-55-GGUF/resolve/main/Onii-Chan-3-55.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-55-GGUF/resolve/main/Onii-Chan-3-55.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-55-GGUF/resolve/main/Onii-Chan-3-55.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-55-GGUF/resolve/main/Onii-Chan-3-55.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-55-GGUF/resolve/main/Onii-Chan-3-55.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-55-GGUF/resolve/main/Onii-Chan-3-55.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-55-GGUF/resolve/main/Onii-Chan-3-55.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-55-GGUF/resolve/main/Onii-Chan-3-55.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-55-GGUF/resolve/main/Onii-Chan-3-55.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-55-GGUF/resolve/main/Onii-Chan-3-55.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-55-GGUF/resolve/main/Onii-Chan-3-55.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Grohv/pa-lora | Grohv | "2024-11-01T00:00:42Z" | 0 | 1 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2024-11-01T00:00:34Z" | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
widget:
- output:
url: sample/pa-lora_003600_00_20241031231831.png
text: pa_lora, portrait, woman
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: pa_lora
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# pa_lora
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `pa_lora` to trigger the image generation.
## Download the model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
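As an illustrative sketch (not from the original card), the LoRA can presumably also be loaded with `diffusers` on top of the FLUX.1-dev base model; the step count and guidance scale below are placeholder values:

```python
# Illustrative diffusers sketch; assumes access to black-forest-labs/FLUX.1-dev,
# a recent diffusers release with Flux support, and a large-memory GPU.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Grohv/pa-lora")

# Prompt uses the `pa_lora` trigger word described above.
image = pipe("pa_lora, portrait, woman", num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("pa_lora_sample.png")
```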
|
minoosh/bi-encoder-CosineSimilarityLoss | minoosh | "2024-11-01T00:01:48Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-11-01T00:01:31Z" | ---
base_model: google-bert/bert-base-uncased
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
---
# SentenceTransformer based on google-bert/bert-base-uncased
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) <!-- at revision 86b5e0934494bd15c9632b12f734a8a67f723594 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("minoosh/bi-encoder-CosineSimilarityLoss")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.2.1
- Transformers: 4.45.1
- PyTorch: 2.4.0
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.20.0
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
DanJoshua/profesor_Swin3D_B_RWF2000 | DanJoshua | "2024-11-01T01:31:37Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"region:us"
] | null | "2024-11-01T00:02:26Z" | ---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: profesor_Swin3D_B_RWF2000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# profesor_Swin3D_B_RWF2000
This model is a fine-tuned version of an unspecified base model on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6709
- Accuracy: 0.89
- F1: 0.8900
- Precision: 0.8904
- Recall: 0.89
- Roc Auc: 0.9586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 480
- training_steps: 4800
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------:|
| 0.2214 | 2.0333 | 480 | 0.2990 | 0.8875 | 0.8872 | 0.8918 | 0.8875 | 0.9601 |
| 0.1739 | 5.0333 | 960 | 0.4723 | 0.895 | 0.8947 | 0.8990 | 0.895 | 0.9608 |
| 0.1093 | 8.0333 | 1440 | 0.5475 | 0.8925 | 0.8925 | 0.8927 | 0.8925 | 0.9619 |
| 0.0735 | 11.0333 | 1920 | 0.5279 | 0.8925 | 0.8925 | 0.8927 | 0.8925 | 0.9674 |
| 0.0436 | 14.0333 | 2400 | 0.6160 | 0.8975 | 0.8975 | 0.8977 | 0.8975 | 0.9680 |
| 0.0766 | 17.0333 | 2880 | 0.6692 | 0.8975 | 0.8975 | 0.8977 | 0.8975 | 0.9664 |
| 0.0433 | 20.0333 | 3360 | 0.7716 | 0.885 | 0.8849 | 0.8869 | 0.885 | 0.9695 |
| 0.0653 | 23.0333 | 3840 | 0.9919 | 0.8675 | 0.8671 | 0.8724 | 0.8675 | 0.9580 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.0.1+cu118
- Datasets 3.1.0
- Tokenizers 0.20.1
|
mradermacher/zephyr-7b-UC-0-GGUF | mradermacher | "2024-11-01T00:33:11Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"trl",
"dpo",
"generated_from_trainer",
"en",
"base_model:weijie210/zephyr-7b-UC-0",
"base_model:quantized:weijie210/zephyr-7b-UC-0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-11-01T00:02:46Z" | ---
base_model: weijie210/zephyr-7b-UC-0
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- trl
- dpo
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/weijie210/zephyr-7b-UC-0
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-UC-0-GGUF/resolve/main/zephyr-7b-UC-0.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-UC-0-GGUF/resolve/main/zephyr-7b-UC-0.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-UC-0-GGUF/resolve/main/zephyr-7b-UC-0.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-UC-0-GGUF/resolve/main/zephyr-7b-UC-0.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-UC-0-GGUF/resolve/main/zephyr-7b-UC-0.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-UC-0-GGUF/resolve/main/zephyr-7b-UC-0.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-UC-0-GGUF/resolve/main/zephyr-7b-UC-0.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-UC-0-GGUF/resolve/main/zephyr-7b-UC-0.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-UC-0-GGUF/resolve/main/zephyr-7b-UC-0.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-UC-0-GGUF/resolve/main/zephyr-7b-UC-0.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-UC-0-GGUF/resolve/main/zephyr-7b-UC-0.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
nikutd01/emotion_tweet_distilbert-base-uncased_2024-11-01 | nikutd01 | "2024-11-01T00:03:48Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-01T00:03:25Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nazlisevdam/Qwen-Qwen1.5-0.5B-1730419433 | nazlisevdam | "2024-11-01T00:03:54Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"region:us"
] | null | "2024-11-01T00:03:53Z" | ---
base_model: Qwen/Qwen1.5-0.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
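Pending details from the authors, here is a minimal sketch, assuming this repository hosts a PEFT (LoRA-style) adapter for the base model listed in the metadata:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the adapter from this repository
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-0.5B")
model = PeftModel.from_pretrained(base, "nazlisevdam/Qwen-Qwen1.5-0.5B-1730419433")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B")
```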
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
impossibleexchange/tiktok | impossibleexchange | "2024-11-01T00:10:53Z" | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2024-11-01T00:04:09Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
0xIbra/flux.1-dev-turbo-alpha | 0xIbra | "2024-11-01T00:42:37Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"en",
"arxiv:1910.09700",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"endpoints_compatible",
"diffusers:FluxPipeline",
"region:us"
] | text-to-image | "2024-11-01T00:04:38Z" | ---
library_name: diffusers
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
base_model:
- black-forest-labs/FLUX.1-dev
pipeline_tag: text-to-image
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
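Until the authors add instructions, a minimal sketch follows, assuming the repository loads with the standard 🧨 Diffusers `FluxPipeline` (the pipeline class comes from this repo's metadata; the prompt, step count, and dtype are illustrative):

```python
import torch
from diffusers import FluxPipeline

# Load the pipeline from this repository
pipe = FluxPipeline.from_pretrained(
    "0xIbra/flux.1-dev-turbo-alpha", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # requires `accelerate`; helps on limited VRAM

# A low step count is assumed here because this is a "turbo" variant
image = pipe("a red fox in a snowy forest, golden hour", num_inference_steps=8).images[0]
image.save("fox.png")
```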
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-i1-GGUF | mradermacher | "2024-11-01T00:36:07Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:TroyDoesAI/BlackSheep-Llama3.2-3B-Context_Obedient",
"base_model:quantized:TroyDoesAI/BlackSheep-Llama3.2-3B-Context_Obedient",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-11-01T00:05:07Z" | ---
base_model: TroyDoesAI/BlackSheep-Llama3.2-3B-Context_Obedient
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/TroyDoesAI/BlackSheep-Llama3.2-3B-Context_Obedient
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
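For example, a minimal llama-cpp-python sketch (the quant file name is taken from the table below; the prompt and sampling parameters are illustrative):

```python
from llama_cpp import Llama

# Download and load one of the imatrix quants from this repository
llm = Llama.from_pretrained(
    repo_id="mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-i1-GGUF",
    filename="BlackSheep-Llama3.2-3B-Context_Obedient.i1-Q4_K_M.gguf",
)

out = llm("Q: What does an imatrix quant optimize for?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```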
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-i1-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.i1-IQ1_S.gguf) | i1-IQ1_S | 1.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-i1-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-i1-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-i1-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-i1-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.i1-IQ2_S.gguf) | i1-IQ2_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-i1-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.i1-IQ2_M.gguf) | i1-IQ2_M | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-i1-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-i1-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.i1-Q2_K.gguf) | i1-Q2_K | 1.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-i1-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-i1-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.i1-IQ3_S.gguf) | i1-IQ3_S | 1.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-i1-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-i1-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.i1-IQ3_M.gguf) | i1-IQ3_M | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-i1-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-i1-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-i1-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-i1-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 2.0 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-i1-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 2.0 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-i1-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 2.0 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-i1-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.i1-Q4_0.gguf) | i1-Q4_0 | 2.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-i1-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-i1-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-i1-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-i1-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-i1-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.i1-Q6_K.gguf) | i1-Q6_K | 2.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
MaziyarPanahi/IceMartiniV1RP-7b-GGUF | MaziyarPanahi | "2024-11-01T00:28:04Z" | 0 | 0 | null | [
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:icefog72/IceMartiniV1RP-7b",
"base_model:quantized:icefog72/IceMartiniV1RP-7b",
"region:us"
] | text-generation | "2024-11-01T00:05:39Z" | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- text-generation
model_name: IceMartiniV1RP-7b-GGUF
base_model: icefog72/IceMartiniV1RP-7b
inference: false
model_creator: icefog72
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/IceMartiniV1RP-7b-GGUF](https://huggingface.co/MaziyarPanahi/IceMartiniV1RP-7b-GGUF)
- Model creator: [icefog72](https://huggingface.co/icefog72)
- Original model: [icefog72/IceMartiniV1RP-7b](https://huggingface.co/icefog72/IceMartiniV1RP-7b)
## Description
[MaziyarPanahi/IceMartiniV1RP-7b-GGUF](https://huggingface.co/MaziyarPanahi/IceMartiniV1RP-7b-GGUF) contains GGUF format model files for [icefog72/IceMartiniV1RP-7b](https://huggingface.co/icefog72/IceMartiniV1RP-7b).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server (see the sketch after this list).
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
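As a concrete example, here is a minimal llama-cpp-python sketch (the quant file name below is hypothetical; substitute any GGUF file from this repository):

```python
from llama_cpp import Llama

# Download and load a quant directly from the Hub (file name is hypothetical)
llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/IceMartiniV1RP-7b-GGUF",
    filename="IceMartiniV1RP-7b.Q4_K_M.gguf",
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```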
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
selimercan/Qwen-Qwen1.5-0.5B-1730419574 | selimercan | "2024-11-01T00:06:15Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"region:us"
] | null | "2024-11-01T00:06:14Z" | ---
base_model: Qwen/Qwen1.5-0.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
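In the meantime, a minimal sketch, assuming this repository hosts a PEFT adapter for the base Qwen/Qwen1.5-0.5B model (the prompt is illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-0.5B")
model = PeftModel.from_pretrained(base, "selimercan/Qwen-Qwen1.5-0.5B-1730419574")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B")

inputs = tokenizer("The three primary colors are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```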
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
impossibleexchange/toktik | impossibleexchange | "2024-11-01T00:13:48Z" | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2024-11-01T00:06:41Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
svhozt/profile_based_recommendation_model.keras | svhozt | "2024-11-01T00:12:59Z" | 0 | 0 | keras | [
"keras",
"license:apache-2.0",
"region:us"
] | null | "2024-11-01T00:06:50Z" | ---
license: apache-2.0
---
# Model Card: Profile-Based Movie Recommendation Model
## Model Overview
This model is a **profile-based movie recommendation system** designed to recommend movies based on user demographics and genre preferences. It was trained on the [MovieLens 1M dataset](http://files.grouplens.org/datasets/movielens/ml-1m.zip) and uses demographic and genre preferences to create user profiles through clustering. By leveraging user profiles and movie embeddings, the model provides movie recommendations tailored to each user's interests.
## Model Architecture
The model is built using **TensorFlow** and **Keras** and employs an **embedding-based architecture**:
1. **User Profiles and Clustering**: User demographics and genre preferences are clustered into a specified number of profiles using **KMeans** clustering. This results in profile IDs that capture user similarities based on age, occupation, gender, and preferred movie genres.
2. **Embedding Layers**:
- The **user profile IDs** are embedded in a lower-dimensional space using a trainable embedding layer.
- Similarly, **movie IDs** are embedded into a separate lower-dimensional space.
3. **Dot Product for Recommendation**: The model computes the dot product between the profile embedding and movie embedding, resulting in a similarity score. The higher the score, the more relevant the movie is predicted to be for the user profile. A sketch of this architecture follows below.
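A minimal Keras sketch of this embedding-and-dot-product architecture is given below; the vocabulary sizes are assumptions for illustration (MovieLens 1M movie IDs run up to 3952), while the 10 profiles and 64-dimensional embeddings match the configuration described later in this card:

```python
import tensorflow as tf
from tensorflow import keras

num_profiles, num_movies, embedding_dim = 10, 3953, 64  # assumed sizes

profile_in = keras.Input(shape=(1,), name="profile_id")
movie_in = keras.Input(shape=(1,), name="movie_id")

# Embed profile IDs and movie IDs into a shared 64-dimensional space
profile_emb = keras.layers.Flatten()(
    keras.layers.Embedding(num_profiles, embedding_dim)(profile_in)
)
movie_emb = keras.layers.Flatten()(
    keras.layers.Embedding(num_movies, embedding_dim)(movie_in)
)

# Similarity score = dot product of profile and movie embeddings
score = keras.layers.Dot(axes=-1)([profile_emb, movie_emb])

model = keras.Model([profile_in, movie_in], score)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
```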
## Training Dataset
The model was trained on the [MovieLens 1M dataset](http://files.grouplens.org/datasets/movielens/ml-1m.zip) by GroupLens. The dataset contains **1 million ratings** from **6,040 users** on **3,900 movies**.
- **Users**: Contains demographic information such as age, gender, and occupation.
- **Ratings**: Provides ratings from users for different movies.
- **Movies**: Includes movie titles and genres (e.g., Action, Comedy, Romance).
### Dataset Preparation
- **Preprocessing**:
- User demographic data was one-hot encoded to include age, occupation, and gender.
- User genre preferences were extracted by identifying each user's top-rated genres, with genres being split and exploded for individual assignment.
- **Clustering**: User profiles were clustered into 10 groups using KMeans clustering based on demographic and genre features (sketched after this list).
- **Embedding Preparation**: Profile IDs and Movie IDs were prepared for embedding layers.
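A minimal sketch of the clustering step, assuming `users` is a pandas DataFrame of demographics and `genre_prefs` is a one-hot matrix of each user's top-rated genres (both names are hypothetical):

```python
import pandas as pd
from sklearn.cluster import KMeans

# One-hot encode demographics and combine them with genre-preference features
demo = pd.get_dummies(users[["gender", "age", "occupation"]].astype(str))
features = pd.concat([demo, genre_prefs], axis=1)

# Cluster users into 10 profiles, as described above
kmeans = KMeans(n_clusters=10, random_state=42, n_init=10)
users["profile_id"] = kmeans.fit_predict(features)
```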
## Training Configuration
- **Optimizer**: Adam
- **Loss Function**: Mean Squared Error (MSE)
- **Metric**: Mean Absolute Error (MAE)
- **Epochs**: 10
- **Batch Size**: 256
- **Embedding Dimension**: 64
## Intended Use
This model is intended to provide **movie recommendations** based on user profile clusters. By embedding user profiles and movies into a shared space, it provides recommendations by finding the best matching movies for a particular user profile.
### Use Cases
- **Personalized Movie Recommendations**: For streaming platforms, this model can serve as the core recommendation engine for suggesting movies tailored to user preferences based on demographics and past high-rated genres.
- **User Segmentation**: The model clusters users based on demographic and genre preferences, which can also be used for analysis and targeted advertising.
### Limitations
- **Cold Start Problem**: The model may not perform optimally for new users without enough past ratings or for movies without sufficient interaction data.
- **Demographic Constraints**: Recommendations are influenced heavily by demographic data and may not fully capture nuanced user preferences.
- **Genre Limitation**: Genre preferences are based on past ratings, which may not always reflect the user's evolving interests.
## How to Use
To use this model, you'll need:
1. **Profile ID**: Identify or calculate the user's profile ID based on demographics and genre preferences.
2. **Movie ID**: Specify the movie IDs you want to score for a particular profile.
```python
from tensorflow import keras
import numpy as np
# Load the trained model
model = keras.models.load_model("profile_based_recommendation_model.keras")
# Example: generate scores for a user in profile 3 for movies with IDs 10, 50, and 100.
# Both inputs must share the same batch dimension, so the profile ID is repeated
# once per movie being scored.
movie_ids = np.array([10, 50, 100])
profile_ids = np.repeat(3, len(movie_ids))

# Predict scores
predictions = model.predict([profile_ids, movie_ids])

# Display predicted scores for each movie
for movie_id, score in zip(movie_ids, predictions.ravel()):
    print(f"Movie ID: {movie_id}, Predicted Score: {score:.4f}")
```
## Dataset Citation
If you use this model or the dataset, please cite the MovieLens dataset as follows:
```
@article{harper2015movielens,
title={The MovieLens datasets: History and context},
author={Harper, F Maxwell and Konstan, Joseph A},
journal={ACM Transactions on Interactive Intelligent Systems (TIIS)},
volume={5},
number={4},
pages={1--19},
year={2015},
publisher={ACM New York, NY, USA}
}
```
## Acknowledgments
Thanks to **GroupLens Research** for providing the MovieLens dataset and the open-source tools that make it accessible for research purposes.
|
nazlisevdam/Qwen-Qwen1.5-1.8B-1730419619 | nazlisevdam | "2024-11-01T00:07:00Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-1.8B",
"base_model:adapter:Qwen/Qwen1.5-1.8B",
"region:us"
] | null | "2024-11-01T00:06:59Z" | ---
base_model: Qwen/Qwen1.5-1.8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
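For now, a minimal sketch, assuming this repository hosts a PEFT adapter for Qwen/Qwen1.5-1.8B; it also shows how the adapter could be merged into the base weights for standalone deployment:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-1.8B")
model = PeftModel.from_pretrained(base, "nazlisevdam/Qwen-Qwen1.5-1.8B-1730419619")

# Optionally fold the adapter weights into the base model and save the result
merged = model.merge_and_unload()
merged.save_pretrained("qwen1.5-1.8b-merged")  # output path is illustrative
```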
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
Dheeraj46329/llama-3.2-new-16-0.5-3e | Dheeraj46329 | "2024-11-01T00:13:25Z" | 0 | 0 | null | [
"safetensors",
"llama",
"license:llama3.2",
"region:us"
] | null | "2024-11-01T00:09:03Z" | ---
license: llama3.2
---
|
selimercan/Qwen-Qwen1.5-1.8B-1730419756 | selimercan | "2024-11-01T00:09:17Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-1.8B",
"base_model:adapter:Qwen/Qwen1.5-1.8B",
"region:us"
] | null | "2024-11-01T00:09:16Z" | ---
base_model: Qwen/Qwen1.5-1.8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
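As a starting point, a minimal sketch using PEFT's auto class, which resolves the base model from the adapter config (assuming this repository hosts a causal-LM adapter):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# AutoPeftModelForCausalLM loads the base model recorded in the adapter config
model = AutoPeftModelForCausalLM.from_pretrained("selimercan/Qwen-Qwen1.5-1.8B-1730419756")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-1.8B")
```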
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
AIDUDE0541/Porche_Muk | AIDUDE0541 | "2024-11-01T00:09:32Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-11-01T00:09:32Z" | Entry not found |
sulaimank/wav2vec-xlsr-cv-grain-lg_grn_only | sulaimank | "2024-11-01T01:32:42Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-11-01T00:10:13Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
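Until the authors fill this in, a minimal sketch, assuming the checkpoint works with the standard 🤗 Transformers automatic-speech-recognition pipeline (the audio file name is a hypothetical example):

```python
from transformers import pipeline

# Load the fine-tuned wav2vec2 model from this repository
asr = pipeline(
    "automatic-speech-recognition",
    model="sulaimank/wav2vec-xlsr-cv-grain-lg_grn_only",
)

# Transcribe an audio file (16 kHz mono works best for wav2vec2 models)
print(asr("sample.wav")["text"])
```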
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LEESM/qwen2.5-7b-64-cpft | LEESM | "2024-11-01T00:28:05Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"krx",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-01T00:10:27Z" | ---
base_model: unsloth/qwen2.5-7b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- krx
---
Unregistered model.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
sfkuo/whisper-largev3-peft-zh-TW_20241101_epochs_11 | sfkuo | "2024-11-01T00:10:40Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-11-01T00:10:40Z" | Invalid username or password. |
mradermacher/Moe-3x7b-QA-Code-Inst-GGUF | mradermacher | "2024-11-01T00:50:13Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"code",
"reasoning",
"mixtral",
"mistral",
"QA",
"MOE",
"en",
"base_model:nextai-team/Moe-3x7b-QA-Code-Inst",
"base_model:quantized:nextai-team/Moe-3x7b-QA-Code-Inst",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-11-01T00:12:57Z" | ---
base_model: nextai-team/Moe-3x7b-QA-Code-Inst
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- code
- reasoning
- mixtral
- mistral
- QA
- MOE
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/nextai-team/Moe-3x7b-QA-Code-Inst
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
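For example, a minimal sketch that downloads one quant from this repository and loads it locally (the file name is taken from the table below; `n_ctx` and the prompt are illustrative):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch a single quant file from the repository
path = hf_hub_download(
    repo_id="mradermacher/Moe-3x7b-QA-Code-Inst-GGUF",
    filename="Moe-3x7b-QA-Code-Inst.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
out = llm("Write a one-line docstring for a binary search function.", max_tokens=48)
print(out["choices"][0]["text"])
```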
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Moe-3x7b-QA-Code-Inst-GGUF/resolve/main/Moe-3x7b-QA-Code-Inst.Q2_K.gguf) | Q2_K | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Moe-3x7b-QA-Code-Inst-GGUF/resolve/main/Moe-3x7b-QA-Code-Inst.Q3_K_S.gguf) | Q3_K_S | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/Moe-3x7b-QA-Code-Inst-GGUF/resolve/main/Moe-3x7b-QA-Code-Inst.Q3_K_M.gguf) | Q3_K_M | 9.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Moe-3x7b-QA-Code-Inst-GGUF/resolve/main/Moe-3x7b-QA-Code-Inst.Q3_K_L.gguf) | Q3_K_L | 9.7 | |
| [GGUF](https://huggingface.co/mradermacher/Moe-3x7b-QA-Code-Inst-GGUF/resolve/main/Moe-3x7b-QA-Code-Inst.IQ4_XS.gguf) | IQ4_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/Moe-3x7b-QA-Code-Inst-GGUF/resolve/main/Moe-3x7b-QA-Code-Inst.Q4_K_S.gguf) | Q4_K_S | 10.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Moe-3x7b-QA-Code-Inst-GGUF/resolve/main/Moe-3x7b-QA-Code-Inst.Q4_K_M.gguf) | Q4_K_M | 11.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Moe-3x7b-QA-Code-Inst-GGUF/resolve/main/Moe-3x7b-QA-Code-Inst.Q5_K_S.gguf) | Q5_K_S | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/Moe-3x7b-QA-Code-Inst-GGUF/resolve/main/Moe-3x7b-QA-Code-Inst.Q5_K_M.gguf) | Q5_K_M | 13.2 | |
| [GGUF](https://huggingface.co/mradermacher/Moe-3x7b-QA-Code-Inst-GGUF/resolve/main/Moe-3x7b-QA-Code-Inst.Q6_K.gguf) | Q6_K | 15.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Moe-3x7b-QA-Code-Inst-GGUF/resolve/main/Moe-3x7b-QA-Code-Inst.Q8_0.gguf) | Q8_0 | 19.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RichardErkhov/MarinaraSpaghetti_-_Nemomix-v1.0-12B-gguf | RichardErkhov | "2024-11-01T01:34:18Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-11-01T00:13:30Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Nemomix-v1.0-12B - GGUF
- Model creator: https://huggingface.co/MarinaraSpaghetti/
- Original model: https://huggingface.co/MarinaraSpaghetti/Nemomix-v1.0-12B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Nemomix-v1.0-12B.Q2_K.gguf](https://huggingface.co/RichardErkhov/MarinaraSpaghetti_-_Nemomix-v1.0-12B-gguf/blob/main/Nemomix-v1.0-12B.Q2_K.gguf) | Q2_K | 4.46GB |
| [Nemomix-v1.0-12B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/MarinaraSpaghetti_-_Nemomix-v1.0-12B-gguf/blob/main/Nemomix-v1.0-12B.Q3_K_S.gguf) | Q3_K_S | 5.15GB |
| [Nemomix-v1.0-12B.Q3_K.gguf](https://huggingface.co/RichardErkhov/MarinaraSpaghetti_-_Nemomix-v1.0-12B-gguf/blob/main/Nemomix-v1.0-12B.Q3_K.gguf) | Q3_K | 5.67GB |
| [Nemomix-v1.0-12B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/MarinaraSpaghetti_-_Nemomix-v1.0-12B-gguf/blob/main/Nemomix-v1.0-12B.Q3_K_M.gguf) | Q3_K_M | 5.67GB |
| [Nemomix-v1.0-12B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/MarinaraSpaghetti_-_Nemomix-v1.0-12B-gguf/blob/main/Nemomix-v1.0-12B.Q3_K_L.gguf) | Q3_K_L | 6.11GB |
| [Nemomix-v1.0-12B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/MarinaraSpaghetti_-_Nemomix-v1.0-12B-gguf/blob/main/Nemomix-v1.0-12B.IQ4_XS.gguf) | IQ4_XS | 6.33GB |
| [Nemomix-v1.0-12B.Q4_0.gguf](https://huggingface.co/RichardErkhov/MarinaraSpaghetti_-_Nemomix-v1.0-12B-gguf/blob/main/Nemomix-v1.0-12B.Q4_0.gguf) | Q4_0 | 6.59GB |
| [Nemomix-v1.0-12B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/MarinaraSpaghetti_-_Nemomix-v1.0-12B-gguf/blob/main/Nemomix-v1.0-12B.IQ4_NL.gguf) | IQ4_NL | 6.65GB |
| [Nemomix-v1.0-12B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/MarinaraSpaghetti_-_Nemomix-v1.0-12B-gguf/blob/main/Nemomix-v1.0-12B.Q4_K_S.gguf) | Q4_K_S | 6.63GB |
| [Nemomix-v1.0-12B.Q4_K.gguf](https://huggingface.co/RichardErkhov/MarinaraSpaghetti_-_Nemomix-v1.0-12B-gguf/blob/main/Nemomix-v1.0-12B.Q4_K.gguf) | Q4_K | 6.96GB |
| [Nemomix-v1.0-12B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/MarinaraSpaghetti_-_Nemomix-v1.0-12B-gguf/blob/main/Nemomix-v1.0-12B.Q4_K_M.gguf) | Q4_K_M | 6.96GB |
| [Nemomix-v1.0-12B.Q4_1.gguf](https://huggingface.co/RichardErkhov/MarinaraSpaghetti_-_Nemomix-v1.0-12B-gguf/blob/main/Nemomix-v1.0-12B.Q4_1.gguf) | Q4_1 | 7.26GB |
| [Nemomix-v1.0-12B.Q5_0.gguf](https://huggingface.co/RichardErkhov/MarinaraSpaghetti_-_Nemomix-v1.0-12B-gguf/blob/main/Nemomix-v1.0-12B.Q5_0.gguf) | Q5_0 | 7.93GB |
| [Nemomix-v1.0-12B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/MarinaraSpaghetti_-_Nemomix-v1.0-12B-gguf/blob/main/Nemomix-v1.0-12B.Q5_K_S.gguf) | Q5_K_S | 7.93GB |
| [Nemomix-v1.0-12B.Q5_K.gguf](https://huggingface.co/RichardErkhov/MarinaraSpaghetti_-_Nemomix-v1.0-12B-gguf/blob/main/Nemomix-v1.0-12B.Q5_K.gguf) | Q5_K | 8.13GB |
| [Nemomix-v1.0-12B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/MarinaraSpaghetti_-_Nemomix-v1.0-12B-gguf/blob/main/Nemomix-v1.0-12B.Q5_K_M.gguf) | Q5_K_M | 8.13GB |
| [Nemomix-v1.0-12B.Q5_1.gguf](https://huggingface.co/RichardErkhov/MarinaraSpaghetti_-_Nemomix-v1.0-12B-gguf/blob/main/Nemomix-v1.0-12B.Q5_1.gguf) | Q5_1 | 8.61GB |
| [Nemomix-v1.0-12B.Q6_K.gguf](https://huggingface.co/RichardErkhov/MarinaraSpaghetti_-_Nemomix-v1.0-12B-gguf/blob/main/Nemomix-v1.0-12B.Q6_K.gguf) | Q6_K | 9.37GB |
| [Nemomix-v1.0-12B.Q8_0.gguf](https://huggingface.co/RichardErkhov/MarinaraSpaghetti_-_Nemomix-v1.0-12B-gguf/blob/main/Nemomix-v1.0-12B.Q8_0.gguf) | Q8_0 | 12.13GB |
Original model description:
---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6550b16f7490049d6237f200/syvemXcGlikU40CKFgniy.jpeg)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6550b16f7490049d6237f200/NKOvSaa2w2ATxCnziTkWw.png)
# V4.0 is the best one, use that one.
## Information
### Description
My main goal with this one was to merge the smartness of the base Instruct Nemo with the better prose from the different roleplaying fine-tunes. This is version v0.1, still to be tested. Weights shamelessly stolen from ParasiticRogue (thank you, friend). All credits and thanks go to Intervitens, Mistralai, NeverSleep and ShuttleAI for providing amazing models used in the merge.
### Instruct
Both Mistral Instruct and ChatML should work.
```
<s>[INST] {system} [/INST]{assistant}</s>[INST] {user} [/INST]
```
Or...
```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
```
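If you build prompts programmatically rather than by hand, `tokenizer.apply_chat_template` will render whichever of these layouts the tokenizer ships as its default template; a minimal sketch, assuming the original full-precision repo:
```py
# Sketch: letting transformers render the prompt format for you.
# apply_chat_template uses whichever template the tokenizer ships
# (Mistral Instruct for Nemo-based models); swap in a ChatML template if preferred.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("MarinaraSpaghetti/Nemomix-v1.0-12B")
messages = [
    {"role": "user", "content": "Describe the tavern we just entered."},
]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # should match one of the layouts shown above
```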
### Settings
A lower Temperature of 0.35 is recommended, although I had luck with Temperatures above one (1.0-1.2) if you crank up the Min P (0.01-0.1). Run with a base DRY of 0.8/1.75/2/0 and you're good to go.
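Translated into transformers generation kwargs, the "safe" preset looks roughly like the sketch below; DRY is a backend-side sampler (koboldcpp, recent llama.cpp frontends) with no direct transformers equivalent, so it is omitted:
```py
# Sketch: the recommended low-temperature preset as transformers generation
# kwargs. min_p needs a reasonably recent transformers; DRY is omitted (see above).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MarinaraSpaghetti/Nemomix-v1.0-12B"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tok("[INST] Continue the scene in the tavern. [/INST]", return_tensors="pt").to(model.device)
out = model.generate(
    **inputs,
    max_new_tokens=300,
    do_sample=True,
    temperature=0.35,  # or 1.0-1.2 with a higher min_p, per the note above
    min_p=0.05,
)
print(tok.decode(out[0], skip_special_tokens=True))
```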
### GGUF
https://huggingface.co/MarinaraSpaghetti/Nemomix-v0.1-12B-GGUF
### Other Versions
V1: https://huggingface.co/MarinaraSpaghetti/Nemomix-v1.0-12B
V2: https://huggingface.co/MarinaraSpaghetti/Nemomix-v2.0-12B
V3: https://huggingface.co/MarinaraSpaghetti/Nemomix-v3.0-12B
V4: https://huggingface.co/MarinaraSpaghetti/Nemomix-v4.0-12B
# Nemomix-v0.1-12B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with F:\mergekit\mistralaiMistral-Nemo-Base-2407 as the base.
### Models Merged
The following models were included in the merge:
* F:\mergekit\intervitens_mini-magnum-12b-v1.1
* F:\mergekit\mistralaiMistral-Nemo-Instruct-2407
* F:\mergekit\NeverSleep_Lumimaid-v0.2-12B
* F:\mergekit\shuttleai_shuttle-2.5-mini
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: F:\mergekit\shuttleai_shuttle-2.5-mini
parameters:
weight: 0.16
density: 0.42
- model: F:\mergekit\NeverSleep_Lumimaid-v0.2-12B
parameters:
weight: 0.22
density: 0.54
- model: F:\mergekit\intervitens_mini-magnum-12b-v1.1
parameters:
weight: 0.28
density: 0.66
- model: F:\mergekit\mistralaiMistral-Nemo-Instruct-2407
parameters:
weight: 0.34
density: 0.78
merge_method: dare_ties
base_model: F:\mergekit\mistralaiMistral-Nemo-Base-2407
parameters:
int8_mask: true
dtype: bfloat16
```
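For what it's worth, mergekit also exposes a Python API, so a merge like this can be reproduced programmatically. A minimal sketch, assuming a recent mergekit and with the local F:\ paths in the config swapped for Hub IDs:
```py
# Sketch: running a mergekit YAML config from Python (recent mergekit versions).
# Swap the F:\ paths in the config above for Hugging Face repo IDs before running.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("nemomix-v0.1.yml") as f:  # the YAML shown above, saved to disk
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./Nemomix-v0.1-12B",   # output directory for the merged model
    options=MergeOptions(copy_tokenizer=True),
)
```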
## Ko-fi
### Enjoying what I do? Consider donating here, thank you!
https://ko-fi.com/spicy_marinara
|
lightbird-ai/gemma-2b-healthcare | lightbird-ai | "2024-11-01T00:13:54Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma2",
"trl",
"en",
"base_model:unsloth/gemma-2-2b-it-bnb-4bit",
"base_model:finetune:unsloth/gemma-2-2b-it-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-11-01T00:13:47Z" | ---
base_model: unsloth/gemma-2-2b-it-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
---
# Uploaded model
- **Developed by:** lightbird-ai
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2-2b-it-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
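A minimal sketch for trying it with plain transformers (assuming the repo holds merged weights; if it only ships a LoRA adapter, attach it to the base model with peft instead):
```py
# Sketch: loading this finetune with plain transformers.
# Assumes merged weights in the repo; a LoRA-only repo would need peft instead.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lightbird-ai/gemma-2b-healthcare"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "List three common symptoms of dehydration."}]
inputs = tok.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True).to(model.device)
print(tok.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```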
|
lightbird-ai/gemma-2b-healthcare-tokenizer | lightbird-ai | "2024-11-01T00:14:01Z" | 0 | 0 | transformers | [
"transformers",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-11-01T00:13:55Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nazlisevdam/google-gemma-2b-1730420204 | nazlisevdam | "2024-11-01T00:16:46Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"region:us"
] | null | "2024-11-01T00:16:44Z" | ---
base_model: google/gemma-2b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
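Until the author fills this in, a minimal sketch inferred from the frontmatter (base_model: google/gemma-2b, library: peft):
```py
# Sketch inferred from the frontmatter: load the base model, then attach
# this repo as a PEFT adapter. Details beyond the frontmatter are assumptions.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto")
model = PeftModel.from_pretrained(base, "nazlisevdam/google-gemma-2b-1730420204")
tok = AutoTokenizer.from_pretrained("google/gemma-2b")

inputs = tok("Hello, my name is", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```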
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
Textway/results | Textway | "2024-11-01T01:21:50Z" | 0 | 0 | null | [
"safetensors",
"bart",
"region:us"
] | null | "2024-11-01T00:16:44Z" | Invalid username or password. |
impossibleexchange/tiptap | impossibleexchange | "2024-11-01T00:24:05Z" | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2024-11-01T00:17:05Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
LinoPlus/product_manager | LinoPlus | "2024-11-01T00:17:19Z" | 0 | 0 | null | [
"license:unknown",
"region:us"
] | null | "2024-11-01T00:17:19Z" | ---
license: unknown
---
|
milleoakrey/fehvoices | milleoakrey | "2024-11-01T00:20:29Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-11-01T00:18:02Z" | Entry not found |
yiran-wang3/qwen1_chat_adamw_iter1 | yiran-wang3 | "2024-11-01T00:18:19Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-11-01T00:18:19Z" | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2-1.5B-Instruct
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
datasets:
- self-generate/qw1_original_cn_mining_oj_iter0-binarized
model-index:
- name: qwen1_chat_adamw_iter1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen1_chat_adamw_iter1
This model is a fine-tuned version of [Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct) on the self-generate/qw1_original_cn_mining_oj_iter0-binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 100
- num_epochs: 1.0
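For reference, a sketch of how these settings map onto a trl `DPOConfig` (per-device batch sizes follow the 8-GPU layout above; the precision flag is an assumption):
```py
# Sketch: the hyperparameters above expressed as a trl DPOConfig
# (DPOConfig extends transformers TrainingArguments).
from trl import DPOConfig

config = DPOConfig(
    output_dir="qwen1_chat_adamw_iter1",
    learning_rate=1e-06,
    per_device_train_batch_size=8,   # x8 GPUs -> total train batch size 64
    per_device_eval_batch_size=4,    # x8 GPUs -> total eval batch size 32
    num_train_epochs=1.0,
    lr_scheduler_type="constant",
    warmup_steps=100,                # warmup_steps takes precedence over warmup_ratio
    seed=42,
    bf16=True,                       # assumption; precision is not stated on the card
)
```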
### Training results
### Framework versions
- Transformers 4.45.0
- Pytorch 2.4.0+cu121
- Datasets 2.14.6
- Tokenizers 0.20.1
|
Glagol117/Qwerty | Glagol117 | "2024-11-01T00:18:28Z" | 0 | 0 | null | [
"license:llama3.2",
"region:us"
] | null | "2024-11-01T00:18:28Z" | ---
license: llama3.2
---
|
Jeffsimpsons/dazzle | Jeffsimpsons | "2024-11-01T00:18:37Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-11-01T00:18:37Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
selimercan/google-gemma-2b-1730420338 | selimercan | "2024-11-01T00:19:00Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"region:us"
] | null | "2024-11-01T00:18:58Z" | ---
base_model: google/gemma-2b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
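Until the author fills this in, a minimal sketch inferred from the frontmatter; if the adapter is a LoRA, it can be folded into the base model for plain inference:
```py
# Sketch: attach the adapter to its base model, then merge for plain inference.
# Inferred from the frontmatter (base_model: google/gemma-2b, library: peft);
# merge_and_unload assumes a LoRA-style adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto")
model = PeftModel.from_pretrained(base, "selimercan/google-gemma-2b-1730420338")
model = model.merge_and_unload()  # fold LoRA weights into the base for faster inference

tok = AutoTokenizer.from_pretrained("google/gemma-2b")
inputs = tok("The capital of France is", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=16)[0], skip_special_tokens=True))
```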
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
Dheeraj46329/llama-3.2-new-20-0.5-3e | Dheeraj46329 | "2024-11-01T00:24:19Z" | 0 | 0 | null | [
"safetensors",
"llama",
"license:llama3.2",
"region:us"
] | null | "2024-11-01T00:19:51Z" | ---
license: llama3.2
---
|
Ameer-Sameh123/InasBag3 | Ameer-Sameh123 | "2024-11-01T01:32:28Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2024-11-01T00:20:10Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: InasBag
---
# Inasbag3
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `InasBag` to trigger the image generation.
## Use it with the [𧨠diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Ameer-Sameh123/InasBag3', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Glagol117/Qwertyu | Glagol117 | "2024-11-01T00:20:50Z" | 0 | 0 | null | [
"license:llama3.2",
"region:us"
] | null | "2024-11-01T00:20:50Z" | ---
license: llama3.2
---
|
richie-ghost/sft_llama3_2_2b | richie-ghost | "2024-11-01T00:22:55Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-11-01T00:21:14Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
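Until the author fills this in, a minimal sketch inferred from the repo tags (llama, trl/sft, 4-bit bitsandbytes):
```py
# Sketch inferred from the repo tags; loading the 4-bit serialized weights
# requires bitsandbytes to be installed alongside transformers.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="richie-ghost/sft_llama3_2_2b",
    device_map="auto",
)
print(pipe("Explain gradient descent in one sentence.", max_new_tokens=64)[0]["generated_text"])
```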
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
GoldenLlama/krx_qwen2.5_7b_it_vX3 | GoldenLlama | "2024-11-01T00:23:27Z" | 0 | 0 | null | [
"krx",
"text-generation",
"ko",
"en",
"dataset:amphora/krx-sample-instructions",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | "2024-11-01T00:23:02Z" | ---
license: apache-2.0
datasets:
- amphora/krx-sample-instructions
language:
- ko
- en
base_model:
- unsloth/Qwen2.5-7B-Instruct
pipeline_tag: text-generation
tags:
- krx
--- |
mradermacher/FoFoNet-SuperMBX-slerp-GGUF | mradermacher | "2024-11-01T01:02:10Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"flemmingmiguel/MBX-7B-v3",
"vanillaOVO/supermario_v4",
"en",
"base_model:fterry/FoFoNet-SuperMBX-slerp",
"base_model:quantized:fterry/FoFoNet-SuperMBX-slerp",
"endpoints_compatible",
"region:us"
] | null | "2024-11-01T00:23:58Z" | ---
base_model: fterry/FoFoNet-SuperMBX-slerp
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- flemmingmiguel/MBX-7B-v3
- vanillaOVO/supermario_v4
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/fterry/FoFoNet-SuperMBX-slerp
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
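To fetch a single quant from this repo in Python, huggingface_hub works well; a minimal sketch:
```py
# Sketch: downloading one quant from this repo with huggingface_hub, then
# pointing any GGUF-capable runtime (llama.cpp, etc.) at the local path.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/FoFoNet-SuperMBX-slerp-GGUF",
    filename="FoFoNet-SuperMBX-slerp.Q4_K_M.gguf",
)
print(path)  # pass this path to your GGUF runtime of choice
```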
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/FoFoNet-SuperMBX-slerp-GGUF/resolve/main/FoFoNet-SuperMBX-slerp.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/FoFoNet-SuperMBX-slerp-GGUF/resolve/main/FoFoNet-SuperMBX-slerp.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/FoFoNet-SuperMBX-slerp-GGUF/resolve/main/FoFoNet-SuperMBX-slerp.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/FoFoNet-SuperMBX-slerp-GGUF/resolve/main/FoFoNet-SuperMBX-slerp.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/FoFoNet-SuperMBX-slerp-GGUF/resolve/main/FoFoNet-SuperMBX-slerp.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/FoFoNet-SuperMBX-slerp-GGUF/resolve/main/FoFoNet-SuperMBX-slerp.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/FoFoNet-SuperMBX-slerp-GGUF/resolve/main/FoFoNet-SuperMBX-slerp.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/FoFoNet-SuperMBX-slerp-GGUF/resolve/main/FoFoNet-SuperMBX-slerp.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/FoFoNet-SuperMBX-slerp-GGUF/resolve/main/FoFoNet-SuperMBX-slerp.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/FoFoNet-SuperMBX-slerp-GGUF/resolve/main/FoFoNet-SuperMBX-slerp.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/FoFoNet-SuperMBX-slerp-GGUF/resolve/main/FoFoNet-SuperMBX-slerp.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/FoFoNet-SuperMBX-slerp-GGUF/resolve/main/FoFoNet-SuperMBX-slerp.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
async0x42/EVA-Qwen2.5-32B-v0.1-exl2_5.0bpw | async0x42 | "2024-11-01T00:38:30Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"conversational",
"dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal",
"dataset:Nopm/Opus_WritingStruct",
"dataset:Gryphe/Sonnet3.5-SlimOrcaDedupCleaned",
"dataset:Gryphe/Sonnet3.5-Charcard-Roleplay",
"dataset:Gryphe/ChatGPT-4o-Writing-Prompts",
"dataset:Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned",
"dataset:Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned",
"dataset:nothingiisreal/Reddit-Dirty-And-WritingPrompts",
"dataset:allura-org/Celeste-1.x-data-mixture",
"dataset:cognitivecomputations/dolphin-2.9.3",
"base_model:Qwen/Qwen2.5-32B",
"base_model:quantized:Qwen/Qwen2.5-32B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"5-bit",
"exl2",
"region:us"
] | text-generation | "2024-11-01T00:24:03Z" | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-32B
datasets:
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- Nopm/Opus_WritingStruct
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- Gryphe/Sonnet3.5-Charcard-Roleplay
- Gryphe/ChatGPT-4o-Writing-Prompts
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- nothingiisreal/Reddit-Dirty-And-WritingPrompts
- allura-org/Celeste-1.x-data-mixture
- cognitivecomputations/dolphin-2.9.3
tags:
- generated_from_trainer
model-index:
- name: EVA-Qwen2.5-32B-SFFT-v0.1
results: []
---
# EVA Qwen2.5-32B v0.1
<p>
An RP/storywriting specialist model: a full-parameter finetune of Qwen2.5-32B on a mixture of synthetic and natural data.<br>
It uses the Celeste 70B 0.1 data mixture, greatly expanding it to improve the versatility, creativity and "flavor" of the resulting model.<br>
</p>
<p>Version notes for 0.1: an additional round of cleaning for the datasets; new subsets of 4o-WritingPrompts and Charcards, picking the most diverse samples from them; a small added subset of SystemChat2.0 to improve instruction following; and a slightly increased sequence length. Additionally, the training config mistake from 32B 0.0 is fixed: layernorm layers stay frozen this time. Unfreezing them caused a positivity bias to appear in 32B 0.0 for some reason.</p>
<p>
<p>Prompt format is ChatML.</p><br>
<h3>Recommended sampler values:</h3>
<ul>
<li>Temperature: 1</li>
<li>Typical-P: 0.9</li>
<li>Min-P: 0.05</li>
<li>Top-A: 0.2</li>
<li>Repetition Penalty: 1.03</li>
</ul>
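As a rough Python translation of those values (a sketch, not the authors' reference setup: Top-A is frontend-specific and omitted, and the unquantized source repo id below is an assumption):
```py
# Sketch: the recommended samplers as transformers generation kwargs.
# Top-A has no transformers equivalent and is omitted; min_p needs a
# reasonably recent transformers. The repo id is an assumption -- point it
# at whichever EVA-Qwen2.5-32B-v0.1 weights you actually use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EVA-UNIT-01/EVA-Qwen2.5-32B-v0.1"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Continue the scene: the airlock hisses open..."}]
inputs = tok.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True).to(model.device)
out = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=1.0,
    typical_p=0.9,
    min_p=0.05,
    repetition_penalty=1.03,
)
print(tok.decode(out[0], skip_special_tokens=True))
```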
<h3>Recommended SillyTavern presets (via CalamitousFelicitousness):</h3>
- [Context](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Context.json)
- [Instruct and System Prompt](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Instruct.json)
</p>
<p>
<br>
<h3>
Training data:
</h3>
<ul>
<li>Celeste 70B 0.1 data mixture minus Opus Instruct subset. See that model's <a href=https://huggingface.co/nothingiisreal/L3.1-70B-Celeste-V0.1-BF16>card</a> for details.</li>
<li>Kalomaze's Opus_Instruct_25k dataset, filtered for refusals.</li>
<li>A subset (1k rows) of ChatGPT-4o-WritingPrompts by Gryphe</li>
<li>A subset (2k rows) of Sonnet3.5-Charcards-Roleplay by Gryphe</li>
<li>Synthstruct and SynthRP datasets by Epiculous</li>
<li>A subset from Dolphin-2.9.3, including a filtered version of not_samantha and a small subset of systemchat.</li>
</ul>
<h3>
Training time and hardware:
</h3>
<ul><li>7 hours on 8xH100 SXM, provided by <a href=https://featherless.ai/>FeatherlessAI</a></li></ul><br>
</p>
<p>Model was trained by Kearm and Auri.</p>
<h4>Special thanks:</h4><ul>
<li><b>to <a href=https://featherless.ai/>FeatherlessAI</a> for generously providing 8xH100 SXM node for training of this model</b></li>
<li>to Gryphe, Lemmy, Kalomaze, Nopm, Epiculous and CognitiveComputations for the data</li>
<li>and to Allura-org for support, feedback, beta-testing and doing quality control of EVA models.</li></ul>
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: Qwen/Qwen2.5-32B
load_in_8bit: false
load_in_4bit: false
strict: false
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true
# plugins:
# - axolotl.integrations.spectrum.SpectrumPlugin
# spectrum_top_fraction: 0.5
# # Optional if using a pre-scanned model as your base_model. Useful if using a model mirror
# spectrum_model_name: Qwen/Qwen2.5-32B
datasets:
- path: datasets/deduped_Synthstruct-Gens_processed_sharegpt_converted_cleaned.jsonl
type: sharegpt
- path: datasets/opus-instruct-22k-no_refusals-filtered.jsonl
type: sharegpt
- path: datasets/Celeste_Filtered.jsonl
type: sharegpt
- path: datasets/Sonnet3-5-charcard-names-filtered-sharegpt.jsonl
type: sharegpt
- path: datasets/deduped_SynthRP-Gens_processed_09-25-2024-ShareGPT_converted_cleaned.jsonl
type: sharegpt
- path: datasets/Gryphe-4o-WP-filtered-sharegpt.jsonl
type: sharegpt
- path: datasets/deduped_not_samantha_norefusals.jsonl
type: sharegpt
- path: datasets/SystemChat_subset_filtered_sharegpt.jsonl
type: sharegpt
chat_template: chatml
shuffle_merged_datasets: true
val_set_size: 0.001
output_dir: ./EVA-Qwen2.5-32B-SFFT-v0.1
sequence_len: 9216
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
# adapter: qlora
# lora_model_dir:
# lora_r: 64
# lora_alpha: 128
# lora_dropout: 0.05
# lora_target_linear: true
# peft_use_dora: true
unfrozen_parameters:
- ^lm_head.weight$
- ^model.embed_tokens.weight$
# mlp.down_proj layers
- model.layers.63.mlp.down_proj
- model.layers.49.mlp.down_proj
- model.layers.48.mlp.down_proj
- model.layers.45.mlp.down_proj
- model.layers.44.mlp.down_proj
- model.layers.47.mlp.down_proj
- model.layers.46.mlp.down_proj
- model.layers.43.mlp.down_proj
- model.layers.8.mlp.down_proj
- model.layers.11.mlp.down_proj
- model.layers.19.mlp.down_proj
- model.layers.35.mlp.down_proj
- model.layers.20.mlp.down_proj
- model.layers.52.mlp.down_proj
- model.layers.39.mlp.down_proj
- model.layers.62.mlp.down_proj
- model.layers.50.mlp.down_proj
- model.layers.29.mlp.down_proj
- model.layers.16.mlp.down_proj
- model.layers.28.mlp.down_proj
- model.layers.53.mlp.down_proj
- model.layers.30.mlp.down_proj
- model.layers.31.mlp.down_proj
- model.layers.32.mlp.down_proj
- model.layers.7.mlp.down_proj
- model.layers.36.mlp.down_proj
- model.layers.12.mlp.down_proj
- model.layers.18.mlp.down_proj
- model.layers.37.mlp.down_proj
- model.layers.38.mlp.down_proj
- model.layers.14.mlp.down_proj
- model.layers.13.mlp.down_proj
# mlp.gate_proj layers
- model.layers.43.mlp.gate_proj
- model.layers.61.mlp.gate_proj
- model.layers.60.mlp.gate_proj
- model.layers.44.mlp.gate_proj
- model.layers.62.mlp.gate_proj
- model.layers.28.mlp.gate_proj
- model.layers.29.mlp.gate_proj
- model.layers.45.mlp.gate_proj
- model.layers.37.mlp.gate_proj
- model.layers.35.mlp.gate_proj
- model.layers.59.mlp.gate_proj
- model.layers.36.mlp.gate_proj
- model.layers.30.mlp.gate_proj
- model.layers.48.mlp.gate_proj
- model.layers.38.mlp.gate_proj
- model.layers.27.mlp.gate_proj
- model.layers.31.mlp.gate_proj
- model.layers.34.mlp.gate_proj
- model.layers.58.mlp.gate_proj
- model.layers.33.mlp.gate_proj
- model.layers.39.mlp.gate_proj
- model.layers.26.mlp.gate_proj
- model.layers.32.mlp.gate_proj
- model.layers.46.mlp.gate_proj
- model.layers.42.mlp.gate_proj
- model.layers.49.mlp.gate_proj
- model.layers.57.mlp.gate_proj
- model.layers.50.mlp.gate_proj
- model.layers.47.mlp.gate_proj
- model.layers.56.mlp.gate_proj
- model.layers.63.mlp.gate_proj
- model.layers.55.mlp.gate_proj
# mlp.up_proj layers
- model.layers.61.mlp.up_proj
- model.layers.60.mlp.up_proj
- model.layers.32.mlp.up_proj
- model.layers.59.mlp.up_proj
- model.layers.58.mlp.up_proj
- model.layers.57.mlp.up_proj
- model.layers.44.mlp.up_proj
- model.layers.28.mlp.up_proj
- model.layers.35.mlp.up_proj
- model.layers.36.mlp.up_proj
- model.layers.29.mlp.up_proj
- model.layers.31.mlp.up_proj
- model.layers.34.mlp.up_proj
- model.layers.55.mlp.up_proj
- model.layers.49.mlp.up_proj
- model.layers.30.mlp.up_proj
- model.layers.53.mlp.up_proj
- model.layers.43.mlp.up_proj
- model.layers.56.mlp.up_proj
- model.layers.33.mlp.up_proj
- model.layers.54.mlp.up_proj
- model.layers.62.mlp.up_proj
- model.layers.27.mlp.up_proj
- model.layers.51.mlp.up_proj
- model.layers.52.mlp.up_proj
- model.layers.37.mlp.up_proj
- model.layers.45.mlp.up_proj
- model.layers.26.mlp.up_proj
- model.layers.42.mlp.up_proj
- model.layers.50.mlp.up_proj
- model.layers.48.mlp.up_proj
- model.layers.39.mlp.up_proj
# self_attn.k_proj layers
- model.layers.63.self_attn.k_proj
- model.layers.55.self_attn.k_proj
- model.layers.60.self_attn.k_proj
- model.layers.7.self_attn.k_proj
- model.layers.12.self_attn.k_proj
- model.layers.13.self_attn.k_proj
- model.layers.57.self_attn.k_proj
- model.layers.29.self_attn.k_proj
- model.layers.14.self_attn.k_proj
- model.layers.51.self_attn.k_proj
- model.layers.53.self_attn.k_proj
- model.layers.54.self_attn.k_proj
- model.layers.22.self_attn.k_proj
- model.layers.61.self_attn.k_proj
- model.layers.18.self_attn.k_proj
- model.layers.30.self_attn.k_proj
- model.layers.9.self_attn.k_proj
- model.layers.24.self_attn.k_proj
- model.layers.23.self_attn.k_proj
- model.layers.25.self_attn.k_proj
- model.layers.10.self_attn.k_proj
- model.layers.58.self_attn.k_proj
- model.layers.56.self_attn.k_proj
- model.layers.15.self_attn.k_proj
- model.layers.32.self_attn.k_proj
- model.layers.28.self_attn.k_proj
- model.layers.8.self_attn.k_proj
- model.layers.59.self_attn.k_proj
- model.layers.11.self_attn.k_proj
- model.layers.48.self_attn.k_proj
- model.layers.16.self_attn.k_proj
- model.layers.50.self_attn.k_proj
# self_attn.o_proj layers
- model.layers.15.self_attn.o_proj
- model.layers.23.self_attn.o_proj
- model.layers.31.self_attn.o_proj
- model.layers.30.self_attn.o_proj
- model.layers.18.self_attn.o_proj
- model.layers.24.self_attn.o_proj
- model.layers.17.self_attn.o_proj
- model.layers.28.self_attn.o_proj
- model.layers.34.self_attn.o_proj
- model.layers.33.self_attn.o_proj
- model.layers.25.self_attn.o_proj
- model.layers.12.self_attn.o_proj
- model.layers.14.self_attn.o_proj
- model.layers.29.self_attn.o_proj
- model.layers.16.self_attn.o_proj
- model.layers.26.self_attn.o_proj
- model.layers.22.self_attn.o_proj
- model.layers.27.self_attn.o_proj
- model.layers.35.self_attn.o_proj
- model.layers.20.self_attn.o_proj
- model.layers.13.self_attn.o_proj
- model.layers.36.self_attn.o_proj
- model.layers.19.self_attn.o_proj
- model.layers.37.self_attn.o_proj
- model.layers.21.self_attn.o_proj
- model.layers.11.self_attn.o_proj
- model.layers.54.self_attn.o_proj
- model.layers.5.self_attn.o_proj
- model.layers.38.self_attn.o_proj
- model.layers.6.self_attn.o_proj
- model.layers.8.self_attn.o_proj
- model.layers.9.self_attn.o_proj
# self_attn.q_proj layers
- model.layers.1.self_attn.q_proj
- model.layers.2.self_attn.q_proj
- model.layers.3.self_attn.q_proj
- model.layers.45.self_attn.q_proj
- model.layers.54.self_attn.q_proj
- model.layers.35.self_attn.q_proj
- model.layers.48.self_attn.q_proj
- model.layers.61.self_attn.q_proj
- model.layers.52.self_attn.q_proj
- model.layers.50.self_attn.q_proj
- model.layers.60.self_attn.q_proj
- model.layers.56.self_attn.q_proj
- model.layers.58.self_attn.q_proj
- model.layers.42.self_attn.q_proj
- model.layers.59.self_attn.q_proj
- model.layers.44.self_attn.q_proj
- model.layers.55.self_attn.q_proj
- model.layers.57.self_attn.q_proj
- model.layers.41.self_attn.q_proj
- model.layers.36.self_attn.q_proj
- model.layers.39.self_attn.q_proj
- model.layers.4.self_attn.q_proj
- model.layers.43.self_attn.q_proj
- model.layers.34.self_attn.q_proj
- model.layers.46.self_attn.q_proj
- model.layers.49.self_attn.q_proj
- model.layers.40.self_attn.q_proj
- model.layers.25.self_attn.q_proj
- model.layers.51.self_attn.q_proj
- model.layers.17.self_attn.q_proj
- model.layers.37.self_attn.q_proj
- model.layers.53.self_attn.q_proj
# self_attn.v_proj layers
- model.layers.55.self_attn.v_proj
- model.layers.31.self_attn.v_proj
- model.layers.47.self_attn.v_proj
- model.layers.45.self_attn.v_proj
- model.layers.49.self_attn.v_proj
- model.layers.48.self_attn.v_proj
- model.layers.15.self_attn.v_proj
- model.layers.30.self_attn.v_proj
- model.layers.7.self_attn.v_proj
- model.layers.44.self_attn.v_proj
- model.layers.29.self_attn.v_proj
- model.layers.51.self_attn.v_proj
- model.layers.50.self_attn.v_proj
- model.layers.14.self_attn.v_proj
- model.layers.54.self_attn.v_proj
- model.layers.32.self_attn.v_proj
- model.layers.43.self_attn.v_proj
- model.layers.10.self_attn.v_proj
- model.layers.46.self_attn.v_proj
- model.layers.38.self_attn.v_proj
- model.layers.57.self_attn.v_proj
- model.layers.22.self_attn.v_proj
- model.layers.39.self_attn.v_proj
- model.layers.6.self_attn.v_proj
- model.layers.23.self_attn.v_proj
- model.layers.58.self_attn.v_proj
- model.layers.53.self_attn.v_proj
- model.layers.40.self_attn.v_proj
- model.layers.24.self_attn.v_proj
- model.layers.9.self_attn.v_proj
- model.layers.25.self_attn.v_proj
- model.layers.5.self_attn.v_proj
wandb_project: EVA-Qwen2.5-32B-SFFT-v0.1
wandb_entity:
wandb_watch:
wandb_name: Unit-01
wandb_log_model:
gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 3
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.00005
max_grad_norm: 3
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: "unsloth"
# gradient_checkpointing_kwargs:
# use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 20
evals_per_epoch: 4
saves_per_epoch: 2
save_safetensors: true
hub_model_id:
hub_strategy:
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.1
# fsdp:
# - full_shard
# - auto_wrap
# fsdp_config:
# fsdp_limit_all_gathers: true
# fsdp_sync_module_states: false
# fsdp_offload_params: true
# fsdp_cpu_ram_efficient_loading: true
# fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
# fsdp_transformer_layer_cls_to_wrap: Qwen2DecoderLayer
# fsdp_activation_checkpointing: true
# fsdp_state_dict_type: SHARDED_STATE_DICT # Changed from FULL_STATE_DICT
# fsdp_sharding_strategy: FULL_SHARD
# fsdp_forward_prefetch: false # Added
# fsdp_backward_prefetch: "BACKWARD_PRE" # Added
# fsdp_backward_prefetch_limit: 1 # Added
# fsdp_mixed_precision: BF16 # Added
```
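For a quick sanity check of the schedule above: the effective global batch size is `micro_batch_size × gradient_accumulation_steps × data-parallel world size`. A minimal sketch, assuming an 8-GPU ZeRO-3 run (the GPU count is not stated in the config):

```python
# Hypothetical sanity check for the training config above.
micro_batch_size = 1              # from the config
gradient_accumulation_steps = 8   # from the config
world_size = 8                    # assumption: 8 data-parallel ranks (not stated)

effective_batch_size = micro_batch_size * gradient_accumulation_steps * world_size
print(effective_batch_size)       # -> 64 sequences per optimizer step
```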
</details> |
Dheeraj46329/llama-3.2-new-19-0.5-3e | Dheeraj46329 | "2024-11-01T00:29:17Z" | 0 | 0 | null | [
"safetensors",
"llama",
"license:llama3.2",
"region:us"
] | null | "2024-11-01T00:25:18Z" | ---
license: llama3.2
---
|
EveryMatrix/DiffMatte | EveryMatrix | "2024-11-01T00:36:07Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-11-01T00:25:24Z" | ---
license: mit
---
https://github.com/YihanHu-2022/DiffMatte
|
septyoa/LaptopPricePredv3 | septyoa | "2024-11-01T00:25:54Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-11-01T00:25:31Z" | Entry not found |
nazlisevdam/Qwen-Qwen1.5-0.5B-1730420782 | nazlisevdam | "2024-11-01T00:26:23Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"region:us"
] | null | "2024-11-01T00:26:22Z" | ---
base_model: Qwen/Qwen1.5-0.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
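Since the card does not include a snippet yet, here is a minimal sketch for loading this adapter with 🤗 PEFT, assuming it applies directly on top of the `Qwen/Qwen1.5-0.5B` base model listed above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-0.5B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B")

# Attach the adapter weights from this repository.
model = PeftModel.from_pretrained(base, "nazlisevdam/Qwen-Qwen1.5-0.5B-1730420782")
model.eval()
```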
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
mradermacher/Qwen2.5-7B-task2-i1-GGUF | mradermacher | "2024-11-01T00:42:12Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:allknowingroger/Qwen2.5-7B-task2",
"base_model:quantized:allknowingroger/Qwen2.5-7B-task2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-11-01T00:28:11Z" | ---
base_model: allknowingroger/Qwen2.5-7B-task2
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/allknowingroger/Qwen2.5-7B-task2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen2.5-7B-task2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
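For a concrete starting point, here is a minimal sketch using `llama-cpp-python`, assuming you have downloaded the i1-Q4_K_M file locally (adjust the path for whichever quant you chose):

```python
from llama_cpp import Llama

# Path is an assumption: point it at whichever quant you downloaded.
llm = Llama(model_path="Qwen2.5-7B-task2.i1-Q4_K_M.gguf", n_ctx=4096)

out = llm("Write one sentence about merged models.", max_tokens=64)
print(out["choices"][0]["text"])
```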
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.5 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.5 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
selimercan/Qwen-Qwen1.5-0.5B-1730420912 | selimercan | "2024-11-01T00:28:34Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"region:us"
] | null | "2024-11-01T00:28:32Z" | ---
base_model: Qwen/Qwen1.5-0.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
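As a minimal sketch (not from the card itself), the adapter can also be loaded in one step with PEFT's `AutoPeftModelForCausalLM`, which resolves the `Qwen/Qwen1.5-0.5B` base model from the adapter config automatically:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# AutoPeft reads the base model id from the adapter's config.
model = AutoPeftModelForCausalLM.from_pretrained("selimercan/Qwen-Qwen1.5-0.5B-1730420912")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B")
```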
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
raaedk/anime-girl | raaedk | "2024-11-01T00:28:41Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-11-01T00:28:41Z" | Entry not found |
vnthuan02/FaceTesting | vnthuan02 | "2024-11-01T00:29:11Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-11-01T00:29:11Z" | Entry not found |
nazlisevdam/Qwen-Qwen1.5-1.8B-1730420955 | nazlisevdam | "2024-11-01T00:29:16Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-1.8B",
"base_model:adapter:Qwen/Qwen1.5-1.8B",
"region:us"
] | null | "2024-11-01T00:29:15Z" | ---
base_model: Qwen/Qwen1.5-1.8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
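A minimal generation sketch, assuming the adapter sits on top of the `Qwen/Qwen1.5-1.8B` base model listed above:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-1.8B", torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-1.8B")
model = PeftModel.from_pretrained(base, "nazlisevdam/Qwen-Qwen1.5-1.8B-1730420955")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```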
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
richie-ghost/srt_trainer_llama2_2B_peft | richie-ghost | "2024-11-01T00:30:11Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-11-01T00:30:02Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/llama213bTimeBook-GGUF | mradermacher | "2024-11-01T01:21:08Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"autotrain",
"text-generation",
"en",
"base_model:Jimmyhd/llama213bTimeBook",
"base_model:quantized:Jimmyhd/llama213bTimeBook",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-01T00:30:27Z" | ---
base_model: Jimmyhd/llama213bTimeBook
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- autotrain
- text-generation
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Jimmyhd/llama213bTimeBook
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
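If a quant is shipped in multiple parts, the parts are plain byte-wise splits that must be joined back into one file before loading. A minimal sketch in Python, with hypothetical part names (use the actual filenames you downloaded):

```python
import shutil

parts = ["llama213bTimeBook.Q8_0.gguf.part1of2",   # hypothetical names
         "llama213bTimeBook.Q8_0.gguf.part2of2"]

with open("llama213bTimeBook.Q8_0.gguf", "wb") as joined:
    for part in parts:
        with open(part, "rb") as chunk:
            shutil.copyfileobj(chunk, joined)  # streamed copy, low memory use
```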
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama213bTimeBook-GGUF/resolve/main/llama213bTimeBook.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/llama213bTimeBook-GGUF/resolve/main/llama213bTimeBook.Q3_K_S.gguf) | Q3_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama213bTimeBook-GGUF/resolve/main/llama213bTimeBook.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama213bTimeBook-GGUF/resolve/main/llama213bTimeBook.Q3_K_L.gguf) | Q3_K_L | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/llama213bTimeBook-GGUF/resolve/main/llama213bTimeBook.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/llama213bTimeBook-GGUF/resolve/main/llama213bTimeBook.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama213bTimeBook-GGUF/resolve/main/llama213bTimeBook.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama213bTimeBook-GGUF/resolve/main/llama213bTimeBook.Q5_K_S.gguf) | Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/llama213bTimeBook-GGUF/resolve/main/llama213bTimeBook.Q5_K_M.gguf) | Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama213bTimeBook-GGUF/resolve/main/llama213bTimeBook.Q6_K.gguf) | Q6_K | 10.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llama213bTimeBook-GGUF/resolve/main/llama213bTimeBook.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
AImused/cold40 | AImused | "2024-11-01T00:56:05Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-01T00:30:45Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
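The card is otherwise unfilled; based only on the repository's `llama` / `text-generation` tags, a minimal sketch would be:

```python
from transformers import pipeline

# Assumes the checkpoint is a causal LM, as the repo's tags suggest.
generator = pipeline("text-generation", model="AImused/cold40")
print(generator("The quick brown fox", max_new_tokens=20)[0]["generated_text"])
```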
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
swapnil7777/llava_level_6epoch_multi_image | swapnil7777 | "2024-11-01T00:31:20Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-11-01T00:31:13Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
selimercan/Qwen-Qwen1.5-1.8B-1730421081 | selimercan | "2024-11-01T00:31:22Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-1.8B",
"base_model:adapter:Qwen/Qwen1.5-1.8B",
"region:us"
] | null | "2024-11-01T00:31:21Z" | ---
base_model: Qwen/Qwen1.5-1.8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
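If the adapter is a LoRA-style adapter (an assumption — the card does not say), it can also be merged into the `Qwen/Qwen1.5-1.8B` base weights for adapter-free deployment; a minimal sketch:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-1.8B")
model = PeftModel.from_pretrained(base, "selimercan/Qwen-Qwen1.5-1.8B-1730421081")

merged = model.merge_and_unload()   # folds the LoRA deltas into the base weights
merged.save_pretrained("qwen1.5-1.8b-merged")
```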
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
cobordism/LVN_mistral_7b-parallel10k-10 | cobordism | "2024-11-01T01:34:50Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-11-01T00:31:31Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Dheeraj46329/llama-3.2-new-20-0.5-3e-warmup | Dheeraj46329 | "2024-11-01T00:36:39Z" | 0 | 0 | null | [
"safetensors",
"llama",
"license:llama3.2",
"region:us"
] | null | "2024-11-01T00:32:15Z" | ---
license: llama3.2
---
|
nazlisevdam/google-gemma-2b-1730421138 | nazlisevdam | "2024-11-01T00:32:20Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"region:us"
] | null | "2024-11-01T00:32:18Z" | ---
base_model: google/gemma-2b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
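A minimal loading sketch, assuming the adapter applies on top of the `google/gemma-2b` base model listed above (the base model is gated, so you need accepted access on the Hub; `device_map="auto"` additionally requires `accelerate`):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = PeftModel.from_pretrained(base, "nazlisevdam/google-gemma-2b-1730421138")
```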
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
personalidadartificial/Maelo | personalidadartificial | "2024-11-01T00:32:53Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-11-01T00:32:53Z" | Entry not found |
selimercan/google-gemma-2b-1730421255 | selimercan | "2024-11-01T00:34:16Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"region:us"
] | null | "2024-11-01T00:34:15Z" | ---
base_model: google/gemma-2b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
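In the absence of an official snippet, a minimal sketch follows, assuming PEFT's `AutoPeftModelForCausalLM` can resolve the gated `google/gemma-2b` base model from this adapter's config; the repo id comes from the card metadata and the generation settings are illustrative.
```python
# Hedged sketch, not official usage: AutoPeftModelForCausalLM reads the
# adapter config, loads the base model it points to, and attaches the adapter.
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained("selimercan/google-gemma-2b-1730421255")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")  # gated base model

inputs = tokenizer("Hello!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)  # illustrative settings
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```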
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
mradermacher/chocolatine-cook-3B-v0.5-i1-GGUF | mradermacher | "2024-11-01T01:11:06Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:jpacifico/chocolatine-cook-3B-v0.5",
"base_model:quantized:jpacifico/chocolatine-cook-3B-v0.5",
"endpoints_compatible",
"region:us"
] | null | "2024-11-01T00:34:17Z" | ---
base_model: jpacifico/chocolatine-cook-3B-v0.5
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/jpacifico/chocolatine-cook-3B-v0.5
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/chocolatine-cook-3B-v0.5-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
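As a quick start, here is a minimal sketch using llama-cpp-python (an assumption on my part; any GGUF-capable runtime works). The filename matches the i1-Q4_K_M row in the table below; adjust it for a different quant.
```python
# Hedged sketch: download one quant from this repo and run a completion.
# Assumes llama-cpp-python is installed; pick any filename from the table.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/chocolatine-cook-3B-v0.5-i1-GGUF",
    filename="chocolatine-cook-3B-v0.5.i1-Q4_K_M.gguf",
)
out = llm("Q: What is the capital of France? A:", max_tokens=16)
print(out["choices"][0]["text"])
```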
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/chocolatine-cook-3B-v0.5-i1-GGUF/resolve/main/chocolatine-cook-3B-v0.5.i1-IQ1_S.gguf) | i1-IQ1_S | 1.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/chocolatine-cook-3B-v0.5-i1-GGUF/resolve/main/chocolatine-cook-3B-v0.5.i1-IQ1_M.gguf) | i1-IQ1_M | 1.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/chocolatine-cook-3B-v0.5-i1-GGUF/resolve/main/chocolatine-cook-3B-v0.5.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/chocolatine-cook-3B-v0.5-i1-GGUF/resolve/main/chocolatine-cook-3B-v0.5.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/chocolatine-cook-3B-v0.5-i1-GGUF/resolve/main/chocolatine-cook-3B-v0.5.i1-IQ2_S.gguf) | i1-IQ2_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/chocolatine-cook-3B-v0.5-i1-GGUF/resolve/main/chocolatine-cook-3B-v0.5.i1-IQ2_M.gguf) | i1-IQ2_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/chocolatine-cook-3B-v0.5-i1-GGUF/resolve/main/chocolatine-cook-3B-v0.5.i1-Q2_K.gguf) | i1-Q2_K | 1.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/chocolatine-cook-3B-v0.5-i1-GGUF/resolve/main/chocolatine-cook-3B-v0.5.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/chocolatine-cook-3B-v0.5-i1-GGUF/resolve/main/chocolatine-cook-3B-v0.5.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/chocolatine-cook-3B-v0.5-i1-GGUF/resolve/main/chocolatine-cook-3B-v0.5.i1-IQ3_S.gguf) | i1-IQ3_S | 1.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/chocolatine-cook-3B-v0.5-i1-GGUF/resolve/main/chocolatine-cook-3B-v0.5.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/chocolatine-cook-3B-v0.5-i1-GGUF/resolve/main/chocolatine-cook-3B-v0.5.i1-IQ3_M.gguf) | i1-IQ3_M | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/chocolatine-cook-3B-v0.5-i1-GGUF/resolve/main/chocolatine-cook-3B-v0.5.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/chocolatine-cook-3B-v0.5-i1-GGUF/resolve/main/chocolatine-cook-3B-v0.5.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/chocolatine-cook-3B-v0.5-i1-GGUF/resolve/main/chocolatine-cook-3B-v0.5.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/chocolatine-cook-3B-v0.5-i1-GGUF/resolve/main/chocolatine-cook-3B-v0.5.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 2.3 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/chocolatine-cook-3B-v0.5-i1-GGUF/resolve/main/chocolatine-cook-3B-v0.5.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 2.3 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/chocolatine-cook-3B-v0.5-i1-GGUF/resolve/main/chocolatine-cook-3B-v0.5.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 2.3 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/chocolatine-cook-3B-v0.5-i1-GGUF/resolve/main/chocolatine-cook-3B-v0.5.i1-Q4_0.gguf) | i1-Q4_0 | 2.3 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/chocolatine-cook-3B-v0.5-i1-GGUF/resolve/main/chocolatine-cook-3B-v0.5.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/chocolatine-cook-3B-v0.5-i1-GGUF/resolve/main/chocolatine-cook-3B-v0.5.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/chocolatine-cook-3B-v0.5-i1-GGUF/resolve/main/chocolatine-cook-3B-v0.5.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/chocolatine-cook-3B-v0.5-i1-GGUF/resolve/main/chocolatine-cook-3B-v0.5.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/chocolatine-cook-3B-v0.5-i1-GGUF/resolve/main/chocolatine-cook-3B-v0.5.i1-Q6_K.gguf) | i1-Q6_K | 3.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
richie-ghost/merged_sft_llama3_2_2B_base_and_QLORA_Adapter | richie-ghost | "2024-11-01T00:36:02Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-11-01T00:34:36Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
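No official snippet is given, so the following is a minimal sketch, assuming the merged model loads directly with 🤗 Transformers; the 4-bit bitsandbytes weights in this repo imply a CUDA GPU and the `bitsandbytes` package, and the prompt is illustrative.
```python
# Hedged sketch, not official usage: load the merged 4-bit model and generate.
# Requires a CUDA GPU plus the bitsandbytes and accelerate packages.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "richie-ghost/merged_sft_llama3_2_2B_base_and_QLORA_Adapter"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)  # illustrative settings
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```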
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/llama-2-7b-Amharic-finetuned-GGUF | mradermacher | "2024-11-01T01:29:49Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-11-01T00:34:59Z" | ---
base_model: AbelBekele/llama-2-7b-Amharic-finetuned
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/AbelBekele/llama-2-7b-Amharic-finetuned
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
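As a hedged example (not part of the original card), a quant can be fetched with `huggingface_hub` and run with llama-cpp-python; the filename matches the Q4_K_M row in the table below.
```python
# Hedged sketch: two-step download-then-load, useful when you want to cache
# or inspect the file first. Assumes huggingface_hub and llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/llama-2-7b-Amharic-finetuned-GGUF",
    filename="llama-2-7b-Amharic-finetuned.Q4_K_M.gguf",
)
llm = Llama(model_path=path)
print(llm("Hello", max_tokens=16)["choices"][0]["text"])
```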
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-finetuned-GGUF/resolve/main/llama-2-7b-Amharic-finetuned.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-finetuned-GGUF/resolve/main/llama-2-7b-Amharic-finetuned.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-finetuned-GGUF/resolve/main/llama-2-7b-Amharic-finetuned.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-finetuned-GGUF/resolve/main/llama-2-7b-Amharic-finetuned.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-finetuned-GGUF/resolve/main/llama-2-7b-Amharic-finetuned.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-finetuned-GGUF/resolve/main/llama-2-7b-Amharic-finetuned.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-finetuned-GGUF/resolve/main/llama-2-7b-Amharic-finetuned.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-finetuned-GGUF/resolve/main/llama-2-7b-Amharic-finetuned.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-finetuned-GGUF/resolve/main/llama-2-7b-Amharic-finetuned.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-finetuned-GGUF/resolve/main/llama-2-7b-Amharic-finetuned.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-finetuned-GGUF/resolve/main/llama-2-7b-Amharic-finetuned.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|