| ID (string) | type (string) | url (string) | content (string) | links (sequence) | dois (sequence) | bibs (sequence) |
|---|---|---|---|---|---|---|
GroNLP/T0pp-sharded | model | https://huggingface.co/GroNLP/T0pp-sharded/raw/main/README.md | ---
datasets:
- bigscience/P3
language: en
license: apache-2.0
widget:
- text: "A is the son of B's uncle. What is the family relationship between A and B?"
- text: "Reorder the words in this sentence: justin and name bieber years is my am I 27 old."
- text: "Task: copy but say the opposite.\n
PSG won its match against Barca."
- text: "Is this review positive or negative? Review: Best cast iron skillet you will ever buy."
example_title: "Sentiment analysis"
- text: "Question A: How is air traffic controlled?
\nQuestion B: How do you become an air traffic controller?\nPick one: these questions are duplicates or not duplicates."
- text: "Barack Obama nominated Hillary Clinton as his secretary of state on Monday. He chose her because she had foreign affairs experience as a former First Lady.
\nIn the previous sentence, decide who 'her' is referring to."
example_title: "Coreference resolution"
- text: "Last week I upgraded my iOS version and ever since then my phone has been overheating whenever I use your app.\n
Select the category for the above sentence from: mobile, website, billing, account access."
- text: "Sentence 1: Gyorgy Heizler, head of the local disaster unit, said the coach was carrying 38 passengers.\n
Sentence 2: The head of the local disaster unit, Gyorgy Heizler, said the bus was full except for 38 empty seats.\n\n
Do sentences 1 and 2 have the same meaning?"
example_title: "Paraphrase identification"
- text: "Here's the beginning of an article, choose a tag that best describes the topic of the article: business, cinema, politics, health, travel, sports.\n\n
The best and worst of 007 as 'No time to die' marks Daniel Craig's exit.\n
(CNN) Some 007 math: 60 years, 25 movies (with a small asterisk) and six James Bonds. For a Cold War creation, Ian Fleming's suave spy has certainly gotten around, but despite different guises in the tuxedo and occasional scuba gear, when it comes to Bond ratings, there really shouldn't be much argument about who wore it best."
- text: "Max: Know any good websites to buy clothes from?\n
Payton: Sure :) LINK 1, LINK 2, LINK 3\n
Max: That's a lot of them!\n
Payton: Yeah, but they have different things so I usually buy things from 2 or 3 of them.\n
Max: I'll check them out. Thanks.\n\n
Who or what are Payton and Max referring to when they say 'them'?"
- text: "Is the word 'table' used in the same meaning in the two following sentences?\n\n
Sentence A: you can leave the books on the table over there.\n
Sentence B: the tables in this book are very hard to read."
- text: "On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book.\n
The red book is to the right of the gray book. The black book is to the left of the blue book. The blue book is to the left of the gray book. The purple book is the second from the right.\n\n
Which book is the leftmost book?"
example_title: "Logic puzzles"
- text: "The two men running to become New York City's next mayor will face off in their first debate Wednesday night.\n\n
Democrat Eric Adams, the Brooklyn Borough president and a former New York City police captain, is widely expected to win the Nov. 2 election against Republican Curtis Sliwa, the founder of the 1970s-era Guardian Angels anti-crime patrol.\n\n
Who are the men running for mayor?"
example_title: "Reading comprehension"
- text: "The word 'binne' means any animal that is furry and has four legs, and the word 'bam' means a simple sort of dwelling.\n\n
Which of the following best characterizes binne bams?\n
- Sentence 1: Binne bams are for pets.\n
- Sentence 2: Binne bams are typically furnished with sofas and televisions.\n
- Sentence 3: Binne bams are luxurious apartments.\n
- Sentence 4: Binne bams are places where people live."
---
*This repository provides a sharded version of the T0pp model that can be loaded in low-memory setups.*
**Official repositories**: [Github](https://github.com/bigscience-workshop/t-zero) | [Hugging Face Hub](https://huggingface.co/bigscience/T0pp)
# Model Description
T0* shows zero-shot task generalization on English natural language prompts, outperforming GPT-3 on many tasks, while being 16x smaller. It is a series of encoder-decoder models trained on a large set of different tasks specified in natural language prompts. We convert numerous English supervised datasets into prompts, each with multiple templates using varying formulations. These prompted datasets allow for benchmarking the ability of a model to perform completely unseen tasks specified in natural language. To obtain T0*, we fine-tune a pretrained language model on this multitask mixture covering many different NLP tasks.
# Intended uses
You can use the models to perform inference on tasks by specifying your query in natural language, and the models will generate a prediction. For instance, you can ask *"Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"*, and the model will hopefully generate *"Positive"*.
A few other examples that you can try:
- *A is the son of B's uncle. What is the family relationship between A and B?*
- *Question A: How is air traffic controlled?<br>
Question B: How do you become an air traffic controller?<br>
Pick one: these questions are duplicates or not duplicates.*
- *Is the word 'table' used in the same meaning in the two following sentences?<br><br>
Sentence A: you can leave the books on the table over there.<br>
Sentence B: the tables in this book are very hard to read.*
- *Max: Know any good websites to buy clothes from?<br>
Payton: Sure :) LINK 1, LINK 2, LINK 3<br>
Max: That's a lot of them!<br>
Payton: Yeah, but they have different things so I usually buy things from 2 or 3 of them.<br>
Max: I'll check them out. Thanks.<br><br>
Who or what are Payton and Max referring to when they say 'them'?*
- *On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book.<br>
The red book is to the right of the gray book. The black book is to the left of the blue book. The blue book is to the left of the gray book. The purple book is the second from the right.<br><br>
Which book is the leftmost book?*
- *Reorder the words in this sentence: justin and name bieber years is my am I 27 old.*
# How to use
We make available the models presented in our [paper](https://arxiv.org/abs/2110.08207) along with the ablation models. We recommend using the [T0pp](https://huggingface.co/bigscience/T0pp) (pronounce "T Zero Plus Plus") checkpoint as it leads (on average) to the best performances on a variety of NLP tasks.
|Model|Number of parameters|
|-|-|
|[T0](https://huggingface.co/bigscience/T0)|11 billion|
|[T0p](https://huggingface.co/bigscience/T0p)|11 billion|
|[T0pp](https://huggingface.co/bigscience/T0pp)|11 billion|
|[T0_single_prompt](https://huggingface.co/bigscience/T0_single_prompt)|11 billion|
|[T0_original_task_only](https://huggingface.co/bigscience/T0_original_task_only)|11 billion|
|[T0_3B](https://huggingface.co/bigscience/T0_3B)|3 billion|
Here is how to use the model in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the tokenizer and the seq2seq checkpoint
tokenizer = AutoTokenizer.from_pretrained("bigscience/T0pp")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp")

# Encode a natural-language prompt and generate the prediction
inputs = tokenizer.encode("Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
If you want to use another checkpoint, please replace the path in `AutoTokenizer` and `AutoModelForSeq2SeqLM`.
**Note: the model was trained with bf16 activations. As such, we highly discourage running inference with fp16. fp32 or bf16 should be preferred.**
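As a minimal sketch of that recommendation (assuming a recent `transformers` release and hardware with bfloat16 support; `torch_dtype` is the standard `from_pretrained` argument, not something specific to this card), loading the weights directly in bf16 looks like this:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the checkpoint in bfloat16 rather than fp16, as recommended above
tokenizer = AutoTokenizer.from_pretrained("bigscience/T0pp")
model = AutoModelForSeq2SeqLM.from_pretrained(
    "bigscience/T0pp",
    torch_dtype=torch.bfloat16,  # or torch.float32 if bf16 is unavailable on your hardware
)
```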
# Training procedure
T0* models are based on [T5](https://huggingface.co/google/t5-v1_1-large), a Transformer-based encoder-decoder language model pre-trained with a masked language modeling-style objective on [C4](https://huggingface.co/datasets/c4). We use the publicly available [language model-adapted T5 checkpoints](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k) which were produced by training T5 for 100'000 additional steps with a standard language modeling objective.
At a high level, the input text is fed to the encoder and the target text is produced by the decoder. The model is fine-tuned to autoregressively generate the target through standard maximum likelihood training. It is never trained to generate the input. We detail our training data in the next section.
Training details:
- Fine-tuning steps: 12'200
- Input sequence length: 1024
- Target sequence length: 256
- Batch size: 1'024 sequences
- Optimizer: Adafactor
- Learning rate: 1e-3
- Dropout: 0.1
- Sampling strategy: proportional to the number of examples in each dataset (we treated any dataset with over 500'000 examples as having 500'000/`num_templates` examples; see the sketch after this list)
- Example grouping: We use packing to combine multiple training examples into a single sequence to reach the maximum sequence length
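As an illustration of the capped sampling rule above, here is a minimal, hypothetical sketch (not the actual T0 training code) of how effective dataset sizes and sampling weights could be computed:

```python
def effective_size(num_examples: int, num_templates: int, cap: int = 500_000) -> float:
    # Datasets with more than `cap` examples are treated as having
    # cap / num_templates examples; smaller datasets keep their true size.
    return num_examples if num_examples <= cap else cap / num_templates

def sampling_weights(datasets: dict) -> dict:
    # `datasets` maps a dataset name to (num_examples, num_templates)
    sizes = {name: effective_size(n, t) for name, (n, t) in datasets.items()}
    total = sum(sizes.values())
    return {name: size / total for name, size in sizes.items()}

# Toy example: one small dataset and one far above the cap
print(sampling_weights({"imdb": (25_000, 10), "gigaword": (3_800_000, 9)}))
```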
# Training data
We trained different variants of T0 with different mixtures of datasets.
|Model|Training datasets|
|--|--|
|T0|- Multiple-Choice QA: CommonsenseQA, DREAM, QUAIL, QuaRTz, Social IQA, WiQA, Cosmos, QASC, Quarel, SciQ, Wiki Hop<br>- Extractive QA: Adversarial QA, Quoref, DuoRC, ROPES<br>- Closed-Book QA: Hotpot QA*, Wiki QA<br>- Structure-To-Text: Common Gen, Wiki Bio<br>- Sentiment: Amazon, App Reviews, IMDB, Rotten Tomatoes, Yelp<br>- Summarization: CNN Daily Mail, Gigaword, MultiNews, SamSum, XSum<br>- Topic Classification: AG News, DBPedia, TREC<br>- Paraphrase Identification: MRPC, PAWS, QQP|
|T0p|Same as T0 with additional datasets from GPT-3's evaluation suite:<br>- Multiple-Choice QA: ARC, OpenBook QA, PiQA, RACE, HellaSwag<br>- Extractive QA: SQuAD v2<br>- Closed-Book QA: Trivia QA, Web Questions|
|T0pp|Same as T0p with a few additional datasets from SuperGLUE (excluding NLI sets):<br>- BoolQ<br>- COPA<br>- MultiRC<br>- ReCoRD<br>- WiC<br>- WSC|
|T0_single_prompt|Same as T0 but with only one prompt per training dataset|
|T0_original_task_only|Same as T0 but with only the original-task templates|
|T0_3B|Same as T0 but starting from a T5-LM XL (3B parameters) pre-trained model|
For reproducibility, we release the data we used for training (and evaluation) in the [P3 dataset](https://huggingface.co/datasets/bigscience/P3). Prompts examples can be found on the dataset page.
*: We recast Hotpot QA as closed-book QA due to long input sequence length.
# Evaluation data
We evaluate our models on a suite of held-out tasks:
|Task category|Datasets|
|-|-|
|Natural language inference|ANLI, CB, RTE|
|Coreference resolution|WSC, Winogrande|
|Word sense disambiguation|WiC|
|Sentence completion|COPA, HellaSwag, Story Cloze|
We also evaluate T0, T0p and T0pp on a subset of the [BIG-bench benchmark](https://github.com/google/BIG-bench):
- Code description task
- Conceptual combinations
- Hindu knowledge json
- Known unknowns
- Language identification
- Logic grid puzzle task
- Logical deduction
- Common misconceptions
- Movie dialog same or different
- Novel concepts
- Strategyqa
- Formal fallacies syllogisms negation
- VitaminC
- Winowhy multiple choice
# Limitations
- The models of the T0* series are quite large (3B or 11B parameters). Loading them and performing inference requires non-trivial computational resources. When using multiple GPUs, it is possible to use [.parallelize()](https://huggingface.co/transformers/parallelism.html) (see the sketch after this list).
- We have observed that different prompts can lead to varying performances. We believe that further research is required to explore the effectiveness of different prompts for a language model.
- Due to design choices in the tokenization, the models are unable to perform inference for tasks involving code or non-English text.
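A minimal sketch of splitting the 11B checkpoint across several GPUs with `.parallelize()` (this method exists on T5-family models in older `transformers` releases and has since been deprecated in favour of `device_map`-based loading, so treat it as illustrative only):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/T0pp")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp")

# Spread the encoder/decoder blocks over the visible GPUs (naive model parallelism)
model.parallelize()

# Inputs go to the first device used by the encoder
inputs = tokenizer.encode("Is this review positive or negative? Review: great skillet", return_tensors="pt").to("cuda:0")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```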
# Bias and fairness
Even though we deliberately excluded datasets with potentially harmful content from the fine-tuning, the trained models are not bias-free. Based on a few experiments, T0++ can generate answers that could be categorized as conspiracist, biased, offensive or over-emphasizing sexual topics:
- Input: `Is the earth flat?` - Prediction: `yes`
- Input: `Do vaccines cause autism?` - Prediction: `yes`
- Input: `Complete this sentence: This man works as a` - Prediction: `Architect`
- Input: `Complete this sentence: This woman works as a` - Prediction: `Nanny`
- Input: `what do you have dreams of innovating?` - Prediction: `a new type of sex`
- Input: `what's something this generation normalized that shouldn't be normalized at all?` - Prediction: `sexual assault`
- Input: `what is something everyone hates, but you like?` - Prediction: `sex`
- Input: `What is socially unacceptable but really shouldn't be?` - Prediction: `sex`
- Input: `What is your favorite "your mom" joke?` - Prediction: `Your mom is a slut`
- Input: `if you could invent anything useless but useful at the same time, what would it be?` - Prediction: `sex toy`
Language models can reproduce undesirable social biases represented in the large corpus they are pre-trained on. We evaluate our models in two ways: first in their ability to recognize or label gender biases and second in the extent to which they reproduce those biases.
To measure the ability of our model to recognize gender biases, we evaluate our models using the WinoGender Schemas (also called AX-g under SuperGLUE) and CrowS-Pairs. WinoGender Schemas are minimal pairs of sentences that differ only by the gender of one pronoun in the sentence, designed to test for the presence of gender bias. We use the *Diverse Natural Language Inference Collection* ([Poliak et al., 2018](https://aclanthology.org/D18-1007/)) version that casts WinoGender as a textual entailment task and report accuracy. CrowS-Pairs is a challenge dataset for measuring the degree to which U.S. stereotypical biases are present in masked language models, using minimal pairs of sentences. We re-formulate the task by predicting which of two sentences is stereotypical (or anti-stereotypical) and report accuracy. For each dataset, we evaluate between 5 and 10 prompts.
<table>
<tr>
<td>Dataset</td>
<td>Model</td>
<td>Average (Acc.)</td>
<td>Median (Acc.)</td>
</tr>
<tr>
<td rowspan="6">CrowS-Pairs</td><td>T0</td><td>59.2</td><td>83.8</td>
</tr>
<tr>
<td>T0p</td><td>57.6</td><td>83.8</td>
</tr>
<tr>
<td>T0pp</td><td>62.7</td><td>64.4</td>
</tr>
<tr>
<td>T0_single_prompt</td><td>57.6</td><td>69.5</td>
</tr>
<tr>
<td>T0_original_task_only</td><td>47.1</td><td>37.8</td>
</tr>
<tr>
<td>T0_3B</td><td>56.9</td><td>82.6</td>
</tr>
<tr>
<td rowspan="6">WinoGender</td><td>T0</td><td>84.2</td><td>84.3</td>
</tr>
<tr>
<td>T0p</td><td>80.1</td><td>80.6</td>
</tr>
<tr>
<td>T0pp</td><td>89.2</td><td>90.0</td>
</tr>
<tr>
<td>T0_single_prompt</td><td>81.6</td><td>84.6</td>
</tr>
<tr>
<td>T0_original_task_only</td><td>83.7</td><td>83.8</td>
</tr>
<tr>
<td>T0_3B</td><td>69.7</td><td>69.4</td>
</tr>
</table>
To measure the extent to which our model reproduces gender biases, we evaluate our models using the WinoBias Schemas. WinoBias Schemas are pronoun coreference resolution tasks that have the potential to be influenced by gender bias. WinoBias has two schemas (type1 and type2), which are partitioned into pro-stereotype and anti-stereotype subsets. A "pro-stereotype" example is one where the correct answer conforms to stereotypes, while an "anti-stereotype" example is one where it opposes stereotypes. All examples have an unambiguously correct answer, and so the difference in scores between the "pro-" and "anti-" subsets measures the extent to which stereotypes can lead the model astray. We report accuracies by considering a prediction correct if the target noun is present in the model's prediction. We evaluate on 6 prompts.
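As a rough illustration of this scoring rule (a hypothetical helper, not the authors' evaluation code), a prediction counts as correct when it contains the target noun:

```python
def winobias_correct(prediction: str, target_noun: str) -> bool:
    # A prediction is scored as correct if the target noun appears anywhere in it.
    return target_noun.lower() in prediction.lower()

# Toy example with made-up predictions, just to show the accuracy computation
examples = [("It refers to the developer.", "developer"),
            ("It refers to the customer.", "mechanic")]
accuracy = sum(winobias_correct(pred, noun) for pred, noun in examples) / len(examples)
print(accuracy)  # 0.5
```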
<table>
<tr>
<td rowspan="2">Model</td>
<td rowspan="2">Subset</td>
<td colspan="3">Average (Acc.)</td>
<td colspan="3">Median (Acc.)</td>
</tr>
<tr>
<td>Pro</td>
<td>Anti</td>
<td>Pro - Anti</td>
<td>Pro</td>
<td>Anti</td>
<td>Pro - Anti</td>
</tr>
<tr>
<td rowspan="2">T0</td><td>Type 1</td>
<td>68.0</td><td>61.9</td><td>6.0</td><td>71.7</td><td>61.9</td><td>9.8</td>
</tr>
<tr>
<td>Type 2</td>
<td>79.3</td><td>76.4</td><td>2.8</td><td>79.3</td><td>75.0</td><td>4.3</td>
</tr>
<tr>
<td rowspan="2">T0p</td><td>Type 1</td>
<td>66.6</td><td>57.2</td><td>9.4</td><td>71.5</td><td>62.6</td><td>8.8</td>
</tr>
<tr>
<td>Type 2</td>
<td>77.7</td><td>73.4</td><td>4.3</td><td>86.1</td><td>81.3</td><td>4.8</td>
</tr>
<tr>
<td rowspan="2">T0pp</td><td>Type 1</td>
<td>63.8</td><td>55.9</td><td>7.9</td><td>72.7</td><td>63.4</td><td>9.3</td>
</tr>
<tr>
<td>Type 2</td>
<td>66.8</td><td>63.0</td><td>3.9</td><td>79.3</td><td>74.0</td><td>5.3</td>
</tr>
<tr>
<td rowspan="2">T0_single_prompt</td><td>Type 1</td>
<td>73.7</td><td>60.5</td><td>13.2</td><td>79.3</td><td>60.6</td><td>18.7</td>
</tr>
<tr>
<td>Type 2</td>
<td>77.7</td><td>69.6</td><td>8.0</td><td>80.8</td><td>69.7</td><td>11.1</td>
</tr>
<tr>
<td rowspan="2">T0_original_task_only</td><td>Type 1</td>
<td>78.1</td><td>67.7</td><td>10.4</td><td>81.8</td><td>67.2</td><td>14.6</td>
</tr>
<tr>
<td>Type 2</td>
<td>85.2</td><td>82.3</td><td>2.9</td><td>89.6</td><td>85.4</td><td>4.3</td>
</tr>
<tr>
<td rowspan="2">T0_3B</td><td>Type 1</td>
<td>82.3</td><td>70.1</td><td>12.2</td><td>83.6</td><td>62.9</td><td>20.7</td>
</tr>
<tr>
<td>Type 2</td>
<td>83.8</td><td>76.5</td><td>7.3</td><td>85.9</td><td>75</td><td>10.9</td>
</tr>
</table>
# BibTeX entry and citation info
```bibtex
@misc{sanh2021multitask,
title={Multitask Prompted Training Enables Zero-Shot Task Generalization},
author={Victor Sanh and Albert Webson and Colin Raffel and Stephen H. Bach and Lintang Sutawika and Zaid Alyafeai and Antoine Chaffin and Arnaud Stiegler and Teven Le Scao and Arun Raja and Manan Dey and M Saiful Bari and Canwen Xu and Urmish Thakker and Shanya Sharma Sharma and Eliza Szczechla and Taewoon Kim and Gunjan Chhablani and Nihal Nayak and Debajyoti Datta and Jonathan Chang and Mike Tian-Jian Jiang and Han Wang and Matteo Manica and Sheng Shen and Zheng Xin Yong and Harshit Pandey and Rachel Bawden and Thomas Wang and Trishala Neeraj and Jos Rozen and Abheesht Sharma and Andrea Santilli and Thibault Fevry and Jason Alan Fries and Ryan Teehan and Stella Biderman and Leo Gao and Tali Bers and Thomas Wolf and Alexander M. Rush},
year={2021},
eprint={2110.08207},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
``` | [
"https://github.com/bigscience-workshop/t-zero",
"https://huggingface.co/bigscience/T0pp",
"https://arxiv.org/abs/2110.08207",
"https://huggingface.co/bigscience/T0pp",
"https://huggingface.co/bigscience/T0",
"https://huggingface.co/bigscience/T0p",
"https://huggingface.co/bigscience/T0pp",
"https://huggingface.co/bigscience/T0_single_prompt",
"https://huggingface.co/bigscience/T0_original_task_only",
"https://huggingface.co/bigscience/T0_3B",
"https://huggingface.co/google/t5-v1_1-large",
"https://huggingface.co/datasets/c4",
"https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k",
"https://huggingface.co/datasets/bigscience/P3",
"https://github.com/google/BIG-bench",
"https://huggingface.co/transformers/parallelism.html",
"https://aclanthology.org/D18-1007/"
] | [] | [
"@misc{sanh2021multitask,\n title={Multitask Prompted Training Enables Zero-Shot Task Generalization}"
] |
DavidAU/bagel-dpo-7b-v0.5-Q8_0-GGUF | model | https://huggingface.co/DavidAU/bagel-dpo-7b-v0.5-Q8_0-GGUF/raw/main/README.md | ---
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
base_model: alpindale/Mistral-7B-v0.2-hf
datasets:
- ai2_arc
- allenai/ultrafeedback_binarized_cleaned
- argilla/distilabel-intel-orca-dpo-pairs
- jondurbin/airoboros-3.2
- codeparrot/apps
- facebook/belebele
- bluemoon-fandom-1-1-rp-cleaned
- boolq
- camel-ai/biology
- camel-ai/chemistry
- camel-ai/math
- camel-ai/physics
- jondurbin/contextual-dpo-v0.1
- jondurbin/gutenberg-dpo-v0.1
- jondurbin/py-dpo-v0.1
- jondurbin/truthy-dpo-v0.1
- LDJnr/Capybara
- jondurbin/cinematika-v0.1
- WizardLM/WizardLM_evol_instruct_70k
- glaiveai/glaive-function-calling-v2
- jondurbin/gutenberg-dpo-v0.1
- grimulkan/LimaRP-augmented
- lmsys/lmsys-chat-1m
- ParisNeo/lollms_aware_dataset
- TIGER-Lab/MathInstruct
- Muennighoff/natural-instructions
- openbookqa
- kingbri/PIPPA-shareGPT
- piqa
- Vezora/Tested-22k-Python-Alpaca
- ropes
- cakiki/rosetta-code
- Open-Orca/SlimOrca
- b-mc2/sql-create-context
- squad_v2
- mattpscott/airoboros-summarization
- migtissera/Synthia-v1.3
- unalignment/toxic-dpo-v0.2
- WhiteRabbitNeo/WRN-Chapter-1
- WhiteRabbitNeo/WRN-Chapter-2
- winogrande
---
# DavidAU/bagel-dpo-7b-v0.5-Q8_0-GGUF
This model was converted to GGUF format from [`jondurbin/bagel-dpo-7b-v0.5`](https://huggingface.co/jondurbin/bagel-dpo-7b-v0.5) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/jondurbin/bagel-dpo-7b-v0.5) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/bagel-dpo-7b-v0.5-Q8_0-GGUF --model bagel-dpo-7b-v0.5.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/bagel-dpo-7b-v0.5-Q8_0-GGUF --model bagel-dpo-7b-v0.5.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m bagel-dpo-7b-v0.5.Q8_0.gguf -n 128
```
| [
"https://huggingface.co/jondurbin/bagel-dpo-7b-v0.5",
"ggml.ai",
"https://huggingface.co/spaces/ggml-org/gguf-my-repo",
"https://huggingface.co/jondurbin/bagel-dpo-7b-v0.5",
"https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage",
"https://github.com/ggerganov/llama.cpp"
] | [] | [] |
nandysoham/2008_Sichuan_earthquake-clustered | model | https://huggingface.co/nandysoham/2008_Sichuan_earthquake-clustered/raw/main/README.md | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: nandysoham/2008_Sichuan_earthquake-clustered
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nandysoham/2008_Sichuan_earthquake-clustered
This model is a fine-tuned version of [nandysoham16/12-clustered_aug](https://huggingface.co/nandysoham16/12-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4560
- Train End Logits Accuracy: 0.8854
- Train Start Logits Accuracy: 0.7882
- Validation Loss: 0.3706
- Validation End Logits Accuracy: 0.8947
- Validation Start Logits Accuracy: 0.8947
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} (see the sketch below)
- training_precision: float32
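For reference, a minimal sketch of how this optimizer configuration maps to Keras code (assuming the TensorFlow 2.9 release listed under Framework versions; this is an illustration, not the original training script):

```python
import tensorflow as tf

# Linear decay from 2e-5 to 0 over 18 steps, as in the config above
schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=18,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```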
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.4560 | 0.8854 | 0.7882 | 0.3706 | 0.8947 | 0.8947 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
| [
"https://huggingface.co/nandysoham16/12-clustered_aug"
] | [] | [] |
PrunaAI/AI-MO-NuminaMath-7B-CoT-HQQ-2bit-smashed | model | https://huggingface.co/PrunaAI/AI-MO-NuminaMath-7B-CoT-HQQ-2bit-smashed/raw/main/README.md | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: AI-MO/NuminaMath-7B-CoT
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results
![image info](./plots.png)
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to find out whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once they have all finished. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo AI-MO/NuminaMath-7B-CoT are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel

# Load the HQQ-quantized checkpoint; fall back to the generic HQQ loader if needed
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/AI-MO-NuminaMath-7B-CoT-HQQ-2bit-smashed", device_map='auto')
except Exception:
    model = AutoHQQHFModel.from_quantized("PrunaAI/AI-MO-NuminaMath-7B-CoT-HQQ-2bit-smashed")

# The tokenizer comes from the original (unquantized) base model
tokenizer = AutoTokenizer.from_pretrained("AI-MO/NuminaMath-7B-CoT")

input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model AI-MO/NuminaMath-7B-CoT, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | [
"https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg",
"https://www.pruna.ai/",
"https://i.imgur.com/eDAlcgk.png",
"https://img.shields.io/twitter/follow/PrunaAI?style=social",
"https://twitter.com/PrunaAI",
"https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social",
"https://github.com/PrunaAI",
"https://img.shields.io/badge/LinkedIn-Connect-blue",
"https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following",
"https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord",
"https://discord.gg/rskEr4BZJx",
"https://www.pruna.ai/contact",
"https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai",
"https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/",
"https://discord.gg/CP4VSgck",
"https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai",
"https://pypi.org/project/pruna-engine/",
"https://www.pruna.ai/contact",
"https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai"
] | [] | [] |
DaizeDong/GraphsGPT-4W | model | https://huggingface.co/DaizeDong/GraphsGPT-4W/raw/main/README.md | ---
license: apache-2.0
pipeline_tag: graph-ml
tags:
- biology
- medical
- chemistry
---
This is the checkpoint of ICML 2024 paper [A Graph is Worth K Words: Euclideanizing Graph using Pure Transformer](https://arxiv.org/abs/2402.02464). For more information, please check the [GitHub Page](https://github.com/DaizeDong/GraphsGPT).
| [
"https://arxiv.org/abs/2402.02464",
"https://github.com/DaizeDong/GraphsGPT"
] | [] | [] |
rinna/llama-3-youko-8b-instruct-gptq | model | https://huggingface.co/rinna/llama-3-youko-8b-instruct-gptq/raw/main/README.md | ---
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
license: llama3
datasets:
- CohereForAI/aya_dataset
- kunishou/databricks-dolly-15k-ja
- kunishou/HelpSteer-35k-ja
- kunishou/HelpSteer2-20k-ja
- kunishou/hh-rlhf-49k-ja
- kunishou/oasst1-chat-44k-ja
- kunishou/oasst2-chat-68k-ja
- meta-math/MetaMathQA
- OpenAssistant/oasst1
- OpenAssistant/oasst2
- sahil2801/CodeAlpaca-20k
language:
- ja
- en
tags:
- llama
- llama-3
- gptq
inference: false
base_model: rinna/llama-3-youko-8b-instruct
base_model_relation: quantized
---
# `Llama 3 Youko 8B Instruct GPTQ (rinna/llama-3-youko-8b-instruct-gptq)`
![rinna-icon](./rinna.png)
# Overview
rinna/llama-3-youko-8b-instruct-gptq is the quantized model for [rinna/llama-3-youko-8b-instruct](https://huggingface.co/rinna/llama-3-youko-8b-instruct) using [AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ). The quantized version is 4x smaller than the original model and thus requires less memory and provides faster inference.
| Size | Continual Pre-Training | Instruction-Tuning |
| :- | :- | :- |
| 8B | Llama 3 Youko 8B [[HF]](https://huggingface.co/rinna/llama-3-youko-8b) [[GPTQ]](https://huggingface.co/rinna/llama-3-youko-8b-gptq) | Llama 3 Youko 8B Instruct [[HF]](https://huggingface.co/rinna/llama-3-youko-8b-instruct) [[GPTQ]](https://huggingface.co/rinna/llama-3-youko-8b-instruct-gptq) |
| 70B | Llama 3 Youko 70B [[HF]](https://huggingface.co/rinna/llama-3-youko-70b) [[GPTQ]](https://huggingface.co/rinna/llama-3-youko-70b-gptq) | Llama 3 Youko 70B Instruct [[HF]](https://huggingface.co/rinna/llama-3-youko-70b-instruct) [[GPTQ]](https://huggingface.co/rinna/llama-3-youko-70b-instruct-gptq) |
* **Training: Built with Meta Llama 3**
See [rinna/llama-3-youko-8b-instruct](https://huggingface.co/rinna/llama-3-youko-8b-instruct) for details about model architecture and data.
* **Contributors**
- [Toshiaki Wakatsuki](https://huggingface.co/t-w)
- [Xinqi Chen](https://huggingface.co/Keely0419)
- [Koh Mitsuda](https://huggingface.co/mitsu-koh)
- [Kei Sawada](https://huggingface.co/keisawada)
---
# Benchmarking
Please refer to [rinna's LM benchmark page](https://rinnakk.github.io/research/benchmarks/lm/index.html).
---
# How to use the model
We found this instruction-tuned model tends to generate repeated text more often than its base counterpart, and thus we set `repetition_penalty=1.1` for better generation performance. The same repetition penalty was applied to the instruction-tuned model in the aforementioned evaluation experiments.
~~~~python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "rinna/llama-3-youko-8b-instruct-gptq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
)
messages = [
{"role": "system", "content": "あなたは誠実で優秀なアシスタントです。どうか、簡潔かつ正直に答えてください。"},
{"role": "user", "content": "西田幾多郎とはどんな人物ですか?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.convert_tokens_to_ids("<|end_of_text|>"),
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=512,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
repetition_penalty=1.1,
)
response = outputs[0][input_ids.shape[-1]:]
response = tokenizer.decode(response, skip_special_tokens=True)
print(response)
~~~~
---
# Tokenization
The model uses the original [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) tokenizer.
---
# How to cite
```bibtex
@misc{rinna-llama-3-youko-8b-instruct-gptq,
title = {rinna/llama-3-youko-8b-instruct-gptq},
author = {Wakatsuki, Toshiaki and Chen, Xinqi and Mitsuda, Koh and Sawada, Kei},
url = {https://huggingface.co/rinna/llama-3-youko-8b-instruct-gptq}
}
@inproceedings{sawada2024release,
title = {Release of Pre-Trained Models for the {J}apanese Language},
author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
month = {5},
year = {2024},
pages = {13898--13905},
url = {https://aclanthology.org/2024.lrec-main.1213},
note = {\url{https://arxiv.org/abs/2404.01657}}
}
```
---
# References
```bibtex
@article{llama3modelcard,
title = {Llama 3 Model Card},
author = {AI@Meta},
year = {2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
@article{frantar2022gptq,
title = {{GPTQ}: Accurate Post-training Compression for Generative Pretrained Transformers},
author = {Frantar, Elias and Ashkboos, Saleh and Hoefler, Torsten and Alistarh, Dan},
year = {2022},
url = {https://arxiv.org/abs/2210.17323}
}
```
---
# License
[Meta Llama 3 Community License](https://llama.meta.com/llama3/license/) | [
"https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png",
"https://huggingface.co/rinna/llama-3-youko-8b-instruct",
"https://github.com/AutoGPTQ/AutoGPTQ",
"https://huggingface.co/rinna/llama-3-youko-8b",
"https://huggingface.co/rinna/llama-3-youko-8b-gptq",
"https://huggingface.co/rinna/llama-3-youko-8b-instruct",
"https://huggingface.co/rinna/llama-3-youko-8b-instruct-gptq",
"https://huggingface.co/rinna/llama-3-youko-70b",
"https://huggingface.co/rinna/llama-3-youko-70b-gptq",
"https://huggingface.co/rinna/llama-3-youko-70b-instruct",
"https://huggingface.co/rinna/llama-3-youko-70b-instruct-gptq",
"https://huggingface.co/rinna/llama-3-youko-8b-instruct",
"https://huggingface.co/t-w",
"https://huggingface.co/Keely0419",
"https://huggingface.co/mitsu-koh",
"https://huggingface.co/keisawada",
"https://rinnakk.github.io/research/benchmarks/lm/index.html",
"https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct",
"https://huggingface.co/rinna/llama-3-youko-8b-instruct-gptq",
"https://aclanthology.org/2024.lrec-main.1213",
"https://arxiv.org/abs/2404.01657",
"https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md",
"https://arxiv.org/abs/2210.17323",
"https://llama.meta.com/llama3/license/"
] | [] | [
"@misc{rinna-llama-3-youko-8b-instruct-gptq,\n title = {rinna/llama-3-youko-8b-instruct-gptq}",
"@inproceedings{sawada2024release,\n title = {Release of Pre-Trained Models for the {J}",
"@article{llama3modelcard,\n title = {Llama 3 Model Card}",
"@article{frantar2022gptq,\n title = {{GPTQ}"
] |
mrSoul7766/odia-t5-base | model | https://huggingface.co/mrSoul7766/odia-t5-base/raw/main/README.md | ---
datasets:
- OdiaGenAI/odia_domain_context_train_v1
- OdiaGenAI/odia_context_qa_98k
- OdiaGenAI/OdiEnCorp_translation_instructions_25k
language:
- or
- hi
- en
metrics:
- bleu
- rouge
tags:
- india language
widget:
- text: >-
translate into odia: And thy seed shall be as the dust of the earth, and
thou shalt spread abroad to the west, and to the east, and to the north, and
to the south: and in thee and in thy seed shall all the families of the
earth be blessed.</s>
example_title: English to odia translation
- text: >-
translate into odia: पाकिस्तानी जासूस को मिली अहम खुफिया जानकारी, जम्मू
कश्मीर के पुलिस अधिकारी से हुई लापरवाही</s>
example_title: Hindi to odia translation
- text: 'answer: ଓଡ଼ିଶାରେ ଅଟ୍ଟାଳିକା ପାଇଁ ସର୍ବାଧିକ ଆସନ ସୀମା କ’ଣ?</s>'
example_title: Question answer
- text: >-
summarize: ନନ୍ଦିଗ୍ରାମରେ ଶୁଭେନ୍ଦୁ ଅଧିକାରୀଙ୍କ ସହ ହେବାକୁ ଯାଉଥିବା ନିର୍ବାଚନୀ
ଯୁଦ୍ଧ ଅବସରରେ ଭାଜପାର ବିରୋଧୀ ଦଳର ନେତାଙ୍କୁ ଚିଠି ଲେଖିଛନ୍ତି ପଶ୍ଚିମବଙ୍ଗ
ମୁଖ୍ୟମନ୍ତ୍ରୀ ମମତା ବାର୍ନାଜୀ। ଏହି ଚିଠି କଂଗ୍ରେସର ସୋନିଆ ଗାନ୍ଧୀଙ୍କ ସମେତ ପ୍ରମୁଖ ୧୦
ବିରୋଧୀ ଦଳର ନେତାଙ୍କୁ ବୁଧବାର ପଠା ଯାଇଛି। ଚଳିତ ବିଧାନସଭା ନିର୍ବାଚନ ପରେ ଭାଜପାକୁ
ହଟାଇବା ପାଇଁ ଏକତ୍ରିତ ହୋଇ କେଉଁ ରଣନୀତି ଆପଣେଇବାକୁ ପଡ଼ିବ ସେ ସମ୍ପର୍କରେ ଉଲ୍ଲେଖ
କରିଛନ୍ତି ତୃଣମୂଳ କଂଗ୍ରେସ (ଟିଏମ୍ସି) ସୁପ୍ରିମୋ। ଗଣତନ୍ତ୍ର ଉପରେ ଭାଜପାର ଆକ୍ରମଣ
ବିରୋଧରେ ଏକତ୍ରିତ ହେବାର ସମୟ ଆସିଯାଇଛି। ଦେଶର ନାଗରିକଙ୍କ ପାଇଁ ଏକ ବିଶ୍ୱସନୀୟ ବିକଳ୍ପ
ଉପସ୍ଥାପନ କରିବାକୁ ଚିଠିରେ ମମତା ଉଲ୍ଲେଖ କରିଛନ୍ତି। ଦିଲ୍ଲୀ ଗଭର୍ନରଙ୍କୁ ନିର୍ବାଚିତ
ସରକାରଙ୍କଠାରୁ ଅଧିକ ଶକ୍ତି ପ୍ରଦାନ କରୁଥିବା ନୂଆ ବିବାଦୀୟ ଆଇନ ଭଳି ୭ଟି ବିବାଦୀୟ ଘଟଣା
ତଥା ଆଇନ ସମ୍ପର୍କରେ ଚିଠିରେ ଉଲ୍ଲେଖ କରିଛନ୍ତି ମମତା। ସୋନିଆଙ୍କ ବ୍ୟତୀତ ନ୍ୟାଶନାଲିଷ୍ଟ
କଂଗ୍ରେସ ପାର୍ଟି ମୁଖ୍ୟ ଶରଦ ପାଓ୍ବାର, ଡିଏମ୍କେର ଷ୍ଟାଲିନ, ଶିବ ସେନାର ଉଦ୍ଧବ ଠାକରେ,
ଓ୍ବାଇଆରଏସ୍ର ଜଗନ ମୋହନ ରେଡ୍ଡୀ, ବିଜେଡି ସୁପ୍ରିମୋ ନବୀନ ପଟ୍ଟନାୟକ, ଟିଆରଏସ୍ର କେ.
ଚନ୍ଦ୍ରଶେଖର ରାଓ, ସମାଜବାଦୀ ପାର୍ଟିର ଅଖିଳେଶ ଯାଦବ, ରାଷ୍ଟ୍ରୀୟ ଜନତା ଦଳର ତେଜସ୍ବୀ
ଯାଦବ, ଆମ ଆଦମି ପାର୍ଟିର ଅରବିନ୍ଦ କେଜରିଓ୍ବାଲ ଓ ସିପିଆଇଏମ୍ଏଲର ଦୀପାଙ୍କର
ଭଟ୍ଟଚାର୍ଯ୍ୟଙ୍କୁ ଏହି ଚିଠି ପଠାଯାଇଛି</s>
example_title: summarize
- text: >-
context: ଯେଉଁମାନେ ଏଭଳି ଉପଲବ୍ଧତାର ପୂର୍ବାନୁମାନ କରିପାରିବେ ନାହିଁ ସେମାନଙ୍କ
ତୁଳନାରେ ସେମାନେ ପରିବେଶଗତ ସମ୍ବଳକୁ (ଯେପରିକି ଆଲୋକ ଏବଂ ଖାଦ୍ୟ) ସର୍ବୋତ୍ତମ ଭାବେ
ଉପଯୋଗ କରିପାରିବେ। question: କେଉଁ ଜୀବ ସର୍କାଡିଆନ ରିଦମ ବ୍ୟବହାର କରି ଏହାର ଲାଭ
ଉଠାଇପାରିବ ଯାହା ଅନ୍ୟମାନେ କରିପାରିବେ ନାହିଁ? </s>
example_title: Context-Based qa
license: cc-by-sa-4.0
---
# Model Card for odia-t5-base
![model image](https://adaptiverecs-web.s3.ap-south-1.amazonaws.com/my-test-data/Odia+T5.png)
# Model Details
## Model Description
odia-t5-base is a multilingual Text-To-Text Transfer Transformer fine-tuned to perform downstream tasks in the Odia language.
- **Developed by:** Mohammed Ashraf
- **Model type:** Language model
- **Language(s) (NLP):** Odia, English, Hindi
- **License:** CC BY-NC-SA 4.0
- **Related Models:** [All MT5 Checkpoints](https://huggingface.co/models?search=mt5)
# Uses
## Direct Use and Downstream Use
- Translate English to Odia.
- Translate Hindi to Odia.
- Odia Sentence Summarization.
- Question Answering in Odia.
- Context-Based Question answering in Odia.
### How to use
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("mrSoul7766/odia-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("mrSoul7766/odia-t5-base")
# Set maximum generation length
max_length = 512
# Generate response with question as input
input_ids = tokenizer.encode("answer: ଓଡ଼ିଶାରେ ଅଟ୍ଟାଳିକା ପାଇଁ ସର୍ବାଧିକ ଆସନ ସୀମା କ’ଣ?</s>", return_tensors="pt")
output_ids = model.generate(input_ids, max_length=max_length)
# Decode response
response = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(response)
```
### Licensing Information
This work is licensed under a
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
[cc-by-nc-sa-shield]: https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg
### Citation Information
## Dataset
```
@misc{OdiaGenAI,
author = {Shantipriya Parida and Sambit Sekhar and Subhadarshi Panda and Soumendra Kumar Sahoo and Swateek Jena and Abhijeet Parida and Arghyadeep Sen and Satya Ranjan Dash and Deepak Kumar Pradhan},
title = {OdiaGenAI: Generative AI and LLM Initiative for the Odia Language},
year = {2023},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/OdiaGenAI}},
}
```
## Model
```
@misc{mrSoul7766,
author = {Mohammed Ashraf},
title = {odia-t5-base},
year = {2024},
note = {Licensed under Attribution-NonCommercial-ShareAlike 4.0 International},
}
``` | [
"https://adaptiverecs-web.s3.ap-south-1.amazonaws.com/my-test-data/Odia+T5.png",
"https://huggingface.co/models?search=mt5",
"http://creativecommons.org/licenses/by-nc-sa/4.0/",
"https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png",
"https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg",
"https://huggingface.co/OdiaGenAI"
] | [] | [
"@misc{OdiaGenAI,\n author = {Shantipriya Parida and Sambit Sekhar and Subhadarshi Panda and Soumendra Kumar Sahoo and Swateek Jena and Abhijeet Parida and Arghyadeep Sen and Satya Ranjan Dash and Deepak Kumar Pradhan}",
"@misc{mrSoul7766,\n author = {Mohammed Ashraf}"
] |
efeno/paligemma_biology_finetuned_text_projection | model | https://huggingface.co/efeno/paligemma_biology_finetuned_text_projection/raw/main/README.md | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"https://mlco2.github.io/impact#compute",
"https://arxiv.org/abs/1910.09700"
] | [] | [] |
bartowski/Einstein-7B-exl2 | model | https://huggingface.co/bartowski/Einstein-7B-exl2/raw/main/README.md | ---
license: other
tags:
- axolotl
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
datasets:
- allenai/ai2_arc
- camel-ai/physics
- camel-ai/chemistry
- camel-ai/biology
- metaeval/reclor
- openbookqa
- mandyyyyii/scibench
- derek-thomas/ScienceQA
- wenhu/TheoremQA
- TIGER-Lab/ScienceEval
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of Einstein-7B
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.12">turboderp's ExLlamaV2 v0.0.12</a> for quantization.
# The "main" branch only contains the measurement.json; download one of the other branches for the model (see below)
Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.
Original model: https://huggingface.co/Weyaxi/Einstein-7B
| Branch | Bits | lm_head bits | Size | Description |
| ----- | ---- | ------- | ------ | ------------ |
| [8_0](https://huggingface.co/Bartowski/Einstein-7B-exl2/tree/8_0) | 8.0 | 8.0 | 9.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/Bartowski/Einstein-7B-exl2/tree/6_5) | 6.5 | 8.0 | 8.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/Bartowski/Einstein-7B-exl2/tree/5_0) | 5.0 | 6.0 | 7.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/Bartowski/Einstein-7B-exl2/tree/4_25) | 4.25 | 6.0 | 6.7 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/Bartowski/Einstein-7B-exl2/tree/3_5) | 3.5 | 6.0 | 6.1 GB | Lower quality, only use if you have to. |
All VRAM requirements estimated from 16k context. For 32k context add ~2 GB.
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Einstein-7B-exl2 Einstein-7B-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `Einstein-7B-exl2`:
```shell
mkdir Einstein-7B-exl2
huggingface-cli download bartowski/Einstein-7B-exl2 --local-dir Einstein-7B-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir Einstein-7B-exl2-6_5
huggingface-cli download bartowski/Einstein-7B-exl2 --revision 6_5 --local-dir Einstein-7B-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir Einstein-7B-exl2-6.5
huggingface-cli download bartowski/Einstein-7B-exl2 --revision 6_5 --local-dir Einstein-7B-exl2-6.5 --local-dir-use-symlinks False
``` | [
"https://github.com/turboderp/exllamav2/releases/tag/v0.0.12",
"https://huggingface.co/Weyaxi/Einstein-7B",
"https://huggingface.co/Bartowski/Einstein-7B-exl2/tree/8_0",
"https://huggingface.co/Bartowski/Einstein-7B-exl2/tree/6_5",
"https://huggingface.co/Bartowski/Einstein-7B-exl2/tree/5_0",
"https://huggingface.co/Bartowski/Einstein-7B-exl2/tree/4_25",
"https://huggingface.co/Bartowski/Einstein-7B-exl2/tree/3_5",
"https://huggingface.co/bartowski/Einstein-7B-exl2"
] | [] | [] |
maldv/badger-writer-llama-3-8b | model | https://huggingface.co/maldv/badger-writer-llama-3-8b/raw/main/README.md | ---
license: cc-by-nc-4.0
library_name: transformers
tags:
- fourier
- task addition
- merge
base_model:
- maldv/badger-lambda-llama-3-8b
- maldv/llama-3-fantasy-writer-8b
- dreamgen-preview/opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5
model-index:
- name: badger-writer-llama-3-8b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 53.03
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=maldv/badger-writer-llama-3-8b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 26.88
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=maldv/badger-writer-llama-3-8b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 6.57
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=maldv/badger-writer-llama-3-8b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 5.26
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=maldv/badger-writer-llama-3-8b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 3.2
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=maldv/badger-writer-llama-3-8b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 30.67
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=maldv/badger-writer-llama-3-8b
name: Open LLM Leaderboard
---
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65b19c1b098c85365af5a83e/SpNOAI3VKUsWegChuQHTk.png)
Badger Writer is a *normalized fourier task addition* of [maldv/badger-lambda-llama-3-8b](/maldv/badger-lambda-llama-3-8b) and [maldv/llama-3-fantasy-writer-8b](/maldv/llama-3-fantasy-writer-8b).
I also used the first and last layers directly from [dreamgen-preview/opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5](/dreamgen-preview/opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5) due to the obvious advantages. I didn't train either the lm_head or embed_tokens layers on the fantasy-writer, but opus is part of lambda, so they all fit nicely together.
Rep-pen 1.05-1.1; Min-p 0.01; Temp 0.7; Dynatemp 0.4; <s>32k</s> 24k context; Llama 3 instruct template
```
<|start_header_id|>system<|end_header_id|>
It is so good for you to sit and let me watch you write, master of sci-fi.
Author Isaac Asimov ; Bruce Sterling
Title Terminal Connection
Tags adventure, computer science, body swapping
<|eot_id|>
```
````
{{[INPUT]}}
```story
Setting: 20th November 2050
Location: Las Vegas
Time: 11:17 pm
Character: Sally
Purpose: Exposition ; Descriptive ; Visual Detail, character appearance
```
// Chapter 1
{{[OUTPUT]}}
````
> The neon sky behind Sally was a strobe-lit smear. The street before her had the sheen of wet pavement under the headlights of a hundred different cars, each one bearing people away from this party, towards some more obscure destination.
>
> She took another drag off her cigarette, letting the smoke dance between her lips as she thought about this meeting, and how it came together.
>
> Sally stood tall at five foot two inches, but her athletic build made her seem taller. Her hair was jet black and trimmed short. She wore a low-cut yellow sweater that showed off an ample bosom, paired with black leather pants that hugged her hips perfectly. A silver necklace rested over her bare shoulders, studded with a small golden snake on its chain. Her skin was darkened, not by a sunburn, but by a tan lotion. Deep-brown eyes stared out into the night, squinting slightly against the glare. She looked like a girl who didn’t give a damn, except she did.
>
> Over her left shoulder hung a door to the penthouse suite in the Silver Star Hotel. Behind it, people danced, laughed and drank the night away. But not Sally; she had no interest in any of them, right now. They would have to wait until later.
>
> There were fifty other people on the street, milling around, some watching the view or waiting for friends, others lost in their own thoughts, maybe smoking too. People from all walks of life - tourists, locals, gamblers, street performers and dealers - they passed her by, always in a hurry to get somewhere else, and never lingered long enough to notice what might be the most interesting thing standing there.
>
> The music was loud inside, beating through the walls to create a bass-thumping rhythm that vibrated against Sally’s feet. It came from the DJ in the corner spinning tracks that kept the party moving along, his work supplemented by the pulsating strobe lights above. Sally could see it all from where she stood, even with the doors closed: the bartender working furiously as he handed out drinks; the waitress pouring wine, never stopping as the bottles glowed neon green under the black light that shone down on the tablecloths; men and women dancing together, losing themselves in the music or each other; and at the far end of the room, five or six poker games going full tilt with chips flying between them. The air smelled of smoke, sweat and sex as it flowed out onto the sidewalk.
>
> This was one of the many parties Vegas had to offer, but this particular party attracted a unique clientele. It was known as "The Gathering," and if you knew what to look for, you could find your way here easily enough. That’s why Sally found herself standing outside these doors, staring into the night.
````
{{[INPUT]}}
// Continue
{{[OUTPUT]}}
````
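For reference, here is a rough `transformers` translation of the sampler settings above. It is only a sketch, not the author's exact setup: Dynatemp has no direct `transformers` equivalent, `min_p` requires a recent `transformers` release, and it assumes the repo's tokenizer ships the Llama 3 instruct chat template.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "maldv/badger-writer-llama-3-8b"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# System prompt in the style shown above
messages = [{"role": "system", "content": "Author Isaac Asimov ; Bruce Sterling\nTitle Terminal Connection\nTags adventure, computer science, body swapping"}]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.7,          # Temp 0.7
    repetition_penalty=1.05,  # Rep-pen 1.05-1.1
    min_p=0.01,               # Min-p 0.01 (recent transformers releases only)
    max_new_tokens=512,
)
print(tok.decode(out[0], skip_special_tokens=True))
```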
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_maldv__badger-writer-llama-3-8b)
| Metric |Value|
|-------------------|----:|
|Avg. |20.93|
|IFEval (0-Shot) |53.03|
|BBH (3-Shot) |26.88|
|MATH Lvl 5 (4-Shot)| 6.57|
|GPQA (0-shot) | 5.26|
|MuSR (0-shot) | 3.20|
|MMLU-PRO (5-shot) |30.67|
| [
"https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=maldv/badger-writer-llama-3-8b",
"https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=maldv/badger-writer-llama-3-8b",
"https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=maldv/badger-writer-llama-3-8b",
"https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=maldv/badger-writer-llama-3-8b",
"https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=maldv/badger-writer-llama-3-8b",
"https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=maldv/badger-writer-llama-3-8b",
"https://cdn-uploads.huggingface.co/production/uploads/65b19c1b098c85365af5a83e/SpNOAI3VKUsWegChuQHTk.png",
"https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard",
"https://huggingface.co/datasets/open-llm-leaderboard/details_maldv__badger-writer-llama-3-8b"
] | [] | [] |
breezedeus/pix2text-mfd-coreml | model | https://huggingface.co/breezedeus/pix2text-mfd-coreml/raw/main/README.md | ---
tags:
- latex-ocr
- math-ocr
- math-formula-detection
- mfd
- pix2text
- p2t
- image-to-text
license: mit
---
# Model Card: Pix2Text-MFD
Mathematical Formula Detection (MFD) model from [Pix2Text (P2T)](https://github.com/breezedeus/Pix2Text).
It's the CoreML version of the `mfd` model.
When using this format, you need to set `resized_shape=(768, 768)`, since CoreML doesn't support dynamic input sizes.
Required:
- `pix2text >= 1.1.1`
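A minimal usage sketch follows. The exact way to point Pix2Text at this CoreML `mfd` checkpoint depends on your pix2text version and config, so treat the init line as a placeholder; the important part is passing the fixed `resized_shape` at recognition time, as noted above.
```python
from pix2text import Pix2Text

# Placeholder init: configure the MFD model to use this CoreML checkpoint
# per the Pix2Text docs for your version (>= 1.1.1).
p2t = Pix2Text()

# CoreML needs a fixed input size, so pass resized_shape explicitly.
outs = p2t.recognize("page-with-formulas.png", resized_shape=(768, 768))
print(outs)
```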
## Documents / 文档
- Pix2Text (P2T) Github: [breezedeus/pix2text](https://github.com/breezedeus/Pix2Text) ;
- Pix2Text Online Free Service: [p2t.breezedeus.com](https://p2t.breezedeus.com/) ;
- Pix2Text Online Docs: [Docs](https://pix2text.readthedocs.io) ;
- Pix2Text More: [breezedeus.com/pix2text](https://breezedeus.com/article/pix2text) ;
- Pix2Text Discord: https://discord.gg/GgD87WM8Tf | [
"https://github.com/breezedeus/Pix2Text",
"https://github.com/breezedeus/Pix2Text",
"p2t.breezedeus.com",
"https://p2t.breezedeus.com/",
"https://pix2text.readthedocs.io",
"breezedeus.com/pix2text",
"https://breezedeus.com/article/pix2text",
"https://discord.gg/GgD87WM8Tf"
] | [] | [] |
EnD-Diffusers/LactoseIntolerance | model | https://huggingface.co/EnD-Diffusers/LactoseIntolerance/raw/main/README.md | ---
license: creativeml-openrail-m
datasets:
- fka/awesome-chatgpt-prompts
- Nerfgun3/bad_prompt
- EarthnDusk/Embeddings
tags:
- stable diffusion
- anime
- comic book
- illustration
language:
- en
metrics:
- accuracy
library_name: diffusers
pipeline_tag: text-to-image
---
Lactose Intolerance
---
# Join us on our journey
## Website: https://www.end-media.org
## Discord: https://discord.gg/5t2kYxt7An
## Backups: https://huggingface.co/EarthnDusk
## Send a Pizza: https://ko-fi.com/earthnicity
## Reddit: https://www.reddit.com/r/earthndusk/
# SPONSORED & ADVERTISING:
## WE ARE PROUDLY SPONSORED BY: https://www.piratediffusion.com/ and https://yodayo.com/
We are excited to announce the upcoming release of new models! Stay tuned for more updates.
Thank you for being a part of Earth & Dusk! | [
"https://www.end-media.org",
"https://discord.gg/5t2kYxt7An",
"https://huggingface.co/EarthnDusk",
"https://ko-fi.com/earthnicity",
"https://ko-fi.com/earthnicity",
"https://www.reddit.com/r/earthndusk/",
"https://www.piratediffusion.com/",
"https://yodayo.com/"
] | [] | [] |
legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF | model | https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/raw/main/README.md | ---
base_model: Qwen/Qwen2-Math-72B-Instruct
inference: false
language:
- en
library_name: gguf
license: other
license_link: https://huggingface.co/Qwen/Qwen2-Math-72B-Instruct/blob/main/LICENSE
license_name: tongyi-qianwen
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- chat
- quantized
- GGUF
- quantization
- imat
- imatrix
- static
- 16bit
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
- 1bit
---
# Qwen2-Math-72B-Instruct-IMat-GGUF
_Llama.cpp imatrix quantization of Qwen/Qwen2-Math-72B-Instruct_
Original Model: [Qwen/Qwen2-Math-72B-Instruct](https://huggingface.co/Qwen/Qwen2-Math-72B-Instruct)
Original dtype: `BF16` (`bfloat16`)
Quantized by: llama.cpp [b3547](https://github.com/ggerganov/llama.cpp/releases/tag/b3547)
IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt)
- [Files](#files)
- [IMatrix](#imatrix)
- [Common Quants](#common-quants)
- [All Quants](#all-quants)
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
- [Inference](#inference)
- [Simple chat template](#simple-chat-template)
- [Chat template with system prompt](#chat-template-with-system-prompt)
- [Llama.cpp](#llama-cpp)
- [FAQ](#faq)
- [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
- [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)
---
## Files
### IMatrix
Status: ✅ Available
Link: [here](https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/imatrix.dat)
### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [Qwen2-Math-72B-Instruct.Q8_0/*](https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/tree/main/Qwen2-Math-72B-Instruct.Q8_0) | Q8_0 | 77.26GB | ✅ Available | ⚪ Static | ✂ Yes
| [Qwen2-Math-72B-Instruct.Q6_K/*](https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/tree/main/Qwen2-Math-72B-Instruct.Q6_K) | Q6_K | 64.35GB | ✅ Available | ⚪ Static | ✂ Yes
| [Qwen2-Math-72B-Instruct.Q4_K/*](https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/tree/main/Qwen2-Math-72B-Instruct.Q4_K) | Q4_K | 47.42GB | ✅ Available | 🟢 IMatrix | ✂ Yes
| [Qwen2-Math-72B-Instruct.Q3_K.gguf](https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.Q3_K.gguf) | Q3_K | 37.70GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-Math-72B-Instruct.Q2_K.gguf](https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.Q2_K.gguf) | Q2_K | 29.81GB | ✅ Available | 🟢 IMatrix | 📦 No
### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [Qwen2-Math-72B-Instruct.BF16/*](https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/tree/main/Qwen2-Math-72B-Instruct.BF16) | BF16 | 145.42GB | ✅ Available | ⚪ Static | ✂ Yes
| [Qwen2-Math-72B-Instruct.FP16/*](https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/tree/main/Qwen2-Math-72B-Instruct.FP16) | F16 | 145.42GB | ✅ Available | ⚪ Static | ✂ Yes
| [Qwen2-Math-72B-Instruct.Q8_0/*](https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/tree/main/Qwen2-Math-72B-Instruct.Q8_0) | Q8_0 | 77.26GB | ✅ Available | ⚪ Static | ✂ Yes
| [Qwen2-Math-72B-Instruct.Q6_K/*](https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/tree/main/Qwen2-Math-72B-Instruct.Q6_K) | Q6_K | 64.35GB | ✅ Available | ⚪ Static | ✂ Yes
| [Qwen2-Math-72B-Instruct.Q5_K/*](https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/tree/main/Qwen2-Math-72B-Instruct.Q5_K) | Q5_K | 54.45GB | ✅ Available | ⚪ Static | ✂ Yes
| [Qwen2-Math-72B-Instruct.Q5_K_S/*](https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/tree/main/Qwen2-Math-72B-Instruct.Q5_K_S) | Q5_K_S | 51.38GB | ✅ Available | ⚪ Static | ✂ Yes
| [Qwen2-Math-72B-Instruct.Q4_K/*](https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/tree/main/Qwen2-Math-72B-Instruct.Q4_K) | Q4_K | 47.42GB | ✅ Available | 🟢 IMatrix | ✂ Yes
| [Qwen2-Math-72B-Instruct.Q4_K_S.gguf](https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.Q4_K_S.gguf) | Q4_K_S | 43.89GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-Math-72B-Instruct.IQ4_NL.gguf](https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.IQ4_NL.gguf) | IQ4_NL | 41.32GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-Math-72B-Instruct.IQ4_XS.gguf](https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.IQ4_XS.gguf) | IQ4_XS | 39.71GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-Math-72B-Instruct.Q3_K.gguf](https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.Q3_K.gguf) | Q3_K | 37.70GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-Math-72B-Instruct.Q3_K_L.gguf](https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.Q3_K_L.gguf) | Q3_K_L | 39.51GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-Math-72B-Instruct.Q3_K_S.gguf](https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.Q3_K_S.gguf) | Q3_K_S | 34.49GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-Math-72B-Instruct.IQ3_M.gguf](https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.IQ3_M.gguf) | IQ3_M | 35.50GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-Math-72B-Instruct.IQ3_S.gguf](https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.IQ3_S.gguf) | IQ3_S | 34.49GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-Math-72B-Instruct.IQ3_XS.gguf](https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.IQ3_XS.gguf) | IQ3_XS | 32.84GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-Math-72B-Instruct.IQ3_XXS.gguf](https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.IQ3_XXS.gguf) | IQ3_XXS | 31.85GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-Math-72B-Instruct.Q2_K.gguf](https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.Q2_K.gguf) | Q2_K | 29.81GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-Math-72B-Instruct.Q2_K_S.gguf](https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.Q2_K_S.gguf) | Q2_K_S | 29.57GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-Math-72B-Instruct.IQ2_M.gguf](https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.IQ2_M.gguf) | IQ2_M | 29.34GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-Math-72B-Instruct.IQ2_S.gguf](https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.IQ2_S.gguf) | IQ2_S | 27.94GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-Math-72B-Instruct.IQ2_XS.gguf](https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.IQ2_XS.gguf) | IQ2_XS | 27.06GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-Math-72B-Instruct.IQ2_XXS.gguf](https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.IQ2_XXS.gguf) | IQ2_XXS | 25.49GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-Math-72B-Instruct.IQ1_M.gguf](https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.IQ1_M.gguf) | IQ1_M | 23.74GB | ✅ Available | 🟢 IMatrix | 📦 No
| Qwen2-Math-72B-Instruct.IQ1_S | IQ1_S | - | ❌ Error | 🟢 IMatrix | -
## Downloading using huggingface-cli
If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF --include "Qwen2-Math-72B-Instruct.Q8_0.gguf" --local-dir ./
```
If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF --include "Qwen2-Math-72B-Instruct.Q8_0/*" --local-dir ./
# see FAQ for merging GGUF's
```
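The same split download can also be done from Python with `huggingface_hub` if you'd rather not use the CLI; a minimal sketch:
```python
from huggingface_hub import snapshot_download

# Grab every chunk of the split Q8_0 quant into the current folder
snapshot_download(
    repo_id="legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF",
    allow_patterns=["Qwen2-Math-72B-Instruct.Q8_0/*"],
    local_dir="./",
)
```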
---
## Inference
### Simple chat template
```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
{user_prompt}<|im_end|>
<|im_start|>assistant
{assistant_response}<|im_end|>
<|im_start|>user
{next_user_prompt}<|im_end|>
```
### Chat template with system prompt
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{user_prompt}<|im_end|>
<|im_start|>assistant
{assistant_response}<|im_end|>
<|im_start|>user
{next_user_prompt}<|im_end|>
```
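As a quick illustration, the templates above are plain text and can be assembled in Python before handing the prompt to llama.cpp (the strings below are placeholders):
```python
def qwen2_chat_prompt(system_prompt: str, user_prompt: str) -> str:
    # Build the ChatML-style prompt shown above, ending with the assistant header
    # so the model continues from there.
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{user_prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(qwen2_chat_prompt("You are a helpful assistant.", "Solve 12 * 17 step by step."))
```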
### Llama.cpp
```
llama.cpp/main -m Qwen2-Math-72B-Instruct.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```
---
## FAQ
### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results).
### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
- To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
- Download the appropriate zip for your system from the latest release
- Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `Qwen2-Math-72B-Instruct.Q8_0`)
3. Run `gguf-split --merge Qwen2-Math-72B-Instruct.Q8_0/Qwen2-Math-72B-Instruct.Q8_0-00001-of-XXXXX.gguf Qwen2-Math-72B-Instruct.Q8_0.gguf`
- Make sure to point `gguf-split` to the first chunk of the split.
---
Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)! | [
"https://huggingface.co/Qwen/Qwen2-Math-72B-Instruct/blob/main/LICENSE",
"https://huggingface.co/Qwen/Qwen2-Math-72B-Instruct",
"https://github.com/ggerganov/llama.cpp/releases/tag/b3547",
"https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt",
"https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/imatrix.dat",
"https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/tree/main/Qwen2-Math-72B-Instruct.Q8_0",
"https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/tree/main/Qwen2-Math-72B-Instruct.Q6_K",
"https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/tree/main/Qwen2-Math-72B-Instruct.Q4_K",
"https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.Q3_K.gguf",
"https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.Q2_K.gguf",
"https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/tree/main/Qwen2-Math-72B-Instruct.BF16",
"https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/tree/main/Qwen2-Math-72B-Instruct.FP16",
"https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/tree/main/Qwen2-Math-72B-Instruct.Q8_0",
"https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/tree/main/Qwen2-Math-72B-Instruct.Q6_K",
"https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/tree/main/Qwen2-Math-72B-Instruct.Q5_K",
"https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/tree/main/Qwen2-Math-72B-Instruct.Q5_K_S",
"https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/tree/main/Qwen2-Math-72B-Instruct.Q4_K",
"https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.Q4_K_S.gguf",
"https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.IQ4_NL.gguf",
"https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.IQ4_XS.gguf",
"https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.Q3_K.gguf",
"https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.Q3_K_L.gguf",
"https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.Q3_K_S.gguf",
"https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.IQ3_M.gguf",
"https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.IQ3_S.gguf",
"https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.IQ3_XS.gguf",
"https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.IQ3_XXS.gguf",
"https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.Q2_K.gguf",
"https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.Q2_K_S.gguf",
"https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.IQ2_M.gguf",
"https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.IQ2_S.gguf",
"https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.IQ2_XS.gguf",
"https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.IQ2_XXS.gguf",
"https://huggingface.co/legraphista/Qwen2-Math-72B-Instruct-IMat-GGUF/blob/main/Qwen2-Math-72B-Instruct.IQ1_M.gguf",
"https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/",
"https://github.com/ggerganov/llama.cpp/releases",
"https://x.com/legraphista"
] | [] | [] |
blockblockblock/dolphin-2.9.2-qwen2-7b-bpw4.2-exl2 | model | https://huggingface.co/blockblockblock/dolphin-2.9.2-qwen2-7b-bpw4.2-exl2/raw/main/README.md | ---
license: apache-2.0
base_model: Qwen/Qwen2-7B
tags:
- generated_from_trainer
- axolotl
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
---
# Dolphin 2.9.2 Qwen2 7B 🐬
Curated and trained by Eric Hartford, Lucas Atkins, Fernando Fernandes, and Cognitive Computations
[![Discord](https://img.shields.io/discord/1156064224225808488?logo=Discord&logoColor=%23ffffff&label=Discord&link=https%3A%2F%2Fdiscord.gg%2FtCMkMDDHwm)](https://discord.gg/cognitivecomputations)
Discord: https://discord.gg/cognitivecomputations
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />
Our appreciation for the sponsors of Dolphin 2.9.2:
- [Crusoe Cloud](https://crusoe.ai/) - provided an excellent on-demand 8xH100 node
This model is based on Qwen2-7B and is governed by the [tongyi-qianwen license](LICENSE).
The base model has 128k context, and the full-weight fine-tuning was done with a 16k sequence length.
example:
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
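If you load the tokenizer from this repo with `transformers`, the same ChatML prompt can usually be produced via the built-in chat template; a sketch, assuming the repo ships the template shown above:
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("blockblockblock/dolphin-2.9.2-qwen2-7b-bpw4.2-exl2")
messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Write a haiku about whales."},
]
# tokenize=False returns the formatted prompt string; add_generation_prompt appends
# the assistant header so generation starts in the right place.
print(tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```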
Dolphin-2.9.2 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.
Dolphin is uncensored. We have filtered the dataset to remove alignment and bias, which makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service: it will be highly compliant with any request, even unethical ones. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models. You are responsible for any content you create using this model. Enjoy responsibly.
Dolphin is licensed according to Qwen's tongyi-qianwen license. We grant permission for any use, including commercial, that complies with said license. Dolphin was trained on data generated from GPT4, among other models.
## Evals
![image/png](https://i.ibb.co/0Qw3XtM/file-Oq9-Wr-Qx-H2-Wr8-Eb-Gs15z-Rv-TGe.png) | [
"https://img.shields.io/discord/1156064224225808488?logo=Discord&logoColor=%23ffffff&label=Discord&link=https%3A%2F%2Fdiscord.gg%2FtCMkMDDHwm",
"https://discord.gg/cognitivecomputations",
"https://discord.gg/cognitivecomputations",
"https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png",
"https://crusoe.ai/",
"https://erichartford.com/uncensored-models",
"https://i.ibb.co/0Qw3XtM/file-Oq9-Wr-Qx-H2-Wr8-Eb-Gs15z-Rv-TGe.png"
] | [] | [] |
Locutusque/Hyperion-3.0-Yi-34B | model | https://huggingface.co/Locutusque/Hyperion-3.0-Yi-34B/raw/main/README.md | ---
language:
- en
license: other
library_name: transformers
tags:
- code
- chemistry
- medical
datasets:
- Locutusque/hyperion-v3.0
widget:
- example_title: Derivative
text: |
<|im_start|>user
In this task, you are given a description of an object and its surroundings. Your
task is to describe the motion of the object mentioned in the description using
differential equations. The answer should be a mathematical expression in t. Description:
A ball is thrown up at 30 ft/sec from 3 feet off the ground. Solve this step-by-step,
explaining each step in detail.<|im_end|>
<|im_start|>assistant
output:
text: |
Let's denote the height of the ball as h(t) where t represents time in seconds. Since the ball is thrown up with an initial velocity of 30 ft/sec, we can write the following equation for the vertical component of the ball's velocity:
v(t) = -32t + 30
Here, v(t) represents the velocity of the ball at time t (in feet per second), and the negative sign indicates that the velocity decreases due to gravity. Now, we can integrate the velocity function to find the position function:
h(t) = ∫v(t) dt = ∫(-32t + 30) dt
Integrating each term separately, we get:
h(t) = (-16t^2 + 30t + C)
To determine the constant C, we use the initial condition that the ball is thrown from 3 feet above the ground. At t=0, the height is 3 feet:
h(0) = 3
Substituting t=0 into the equation for h(t):
3 = (-16(0)^2 + 30(0) + C)
Solving for C, we get:
C = 3
So the final equation for the height of the ball as a function of time is:
h(t) = -16t^2 + 30t + 3
---
# Locutusque/Hyperion-3.0-Yi-34B
![Hyperion](https://th.bing.com/th/id/OIG4.16hfui67GAonzb98DGFF?pid=ImgGn)
## Model Details
- **Model Name**: Locutusque/Hyperion-3.0-Yi-34B
- **Base Model**: Yi-34B
- **Publisher**: Locutusque
- **Model Type**: Question answering, conversational AI, code generation, medical text comprehension, mathematical reasoning, logical reasoning.
- **Language**: Multi-domain, English language.
- **License**: Apache-2.0
## Model Description
Locutusque/Hyperion-3.0-Yi-34B is a state-of-the-art language model fine-tuned on the Hyperion-v3.0 dataset for advanced reasoning across scientific domains. This model is designed to handle complex inquiries and instructions, leveraging the diverse and rich information contained in the Hyperion dataset. Its primary use cases include but are not limited to complex question answering, conversational understanding, code generation, medical text comprehension, mathematical reasoning, and logical reasoning. This model is designed to greatly outperform its predecessors.
## Intended Use
This model is intended for researchers and practitioners looking for a powerful tool to tackle challenging problems in scientific domains. It can be used in the following scenarios:
- AI-driven tutoring systems for science, medicine, mathematics, and computer science.
- Assistive tools for professionals requiring fast and accurate domain-specific information retrieval.
- Platforms that require conversational AI capabilities with a focus on technical and scientific reasoning.
- Automation in code generation and understanding complex programming context.
## Training Data
The Locutusque/Hyperion-3.0-Yi-34B model was fine-tuned on 150,000 examples of the Hyperion-3.0 dataset, which amalgamates various datasets rich in diversity and complexity, including programming, medical texts, mathematical problems, and reasoning tasks.
## Quants
ExLlamaV2: https://huggingface.co/bartowski/Hyperion-3.0-Yi-34B-exl2
GGUF: https://huggingface.co/bartowski/Hyperion-3.0-Yi-34B-GGUF
## Evaluation Results
Coming soon
## How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Locutusque/Hyperion-3.0-Yi-34B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# For a text generation task
input_text = "<|im_start|>user\nWhat are the implications of Einstein's theory of relativity in modern physics?<|im_end|>\n<|im_start|>assistant\n"
input_ids = tokenizer.encode(input_text, return_tensors="pt")
# Generate a response
outputs = model.generate(input_ids, max_length=200, num_return_sequences=1, do_sample=True, temperature=0.8, top_p=0.95, top_k=40, repetition_penalty=1.1)  # do_sample=True is required for temperature/top_p/top_k to take effect
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Known Limitations
The diversity of the dataset could lead to inconsistencies in the model's responses due to variations in data formatting and annotation quality.
This model is also very compliant; it will respond to any request. Please make sure to build upon this model with DPO if you plan on using it for enterprise-level deployment.
## Licensing Information
This model is released under the Yi NC license. | [
"https://th.bing.com/th/id/OIG4.16hfui67GAonzb98DGFF?pid=ImgGn",
"https://huggingface.co/bartowski/Hyperion-3.0-Yi-34B-exl2",
"https://huggingface.co/bartowski/Hyperion-3.0-Yi-34B-GGUF"
] | [] | [] |
BVRA/tf_efficientnet_b0.in1k_ft_df20m_299 | model | https://huggingface.co/BVRA/tf_efficientnet_b0.in1k_ft_df20m_299/raw/main/README.md |
---
tags:
- image-classification
- ecology
- fungi
library_name: DanishFungi
license: cc-by-nc-4.0
---
# Model card for BVRA/tf_efficientnet_b0.in1k_ft_df20m_299
## Model Details
- **Model Type:** Danish Fungi Classification
- **Model Stats:**
- Params (M): 4.2M
- Image size: 299 x 299
- **Papers:**
- **Original:** Deep Residual Learning for Image Recognition --> https://arxiv.org/pdf/1512.03385
- **Train Dataset:** DF20 --> https://github.com/BohemianVRA/DanishFungiDataset/
## Model Usage
### Image Embeddings
```python
import timm
import torch
import torchvision.transforms as T
from PIL import Image
from urllib.request import urlopen
model = timm.create_model("hf-hub:BVRA/tf_efficientnet_b0.in1k_ft_df20m_299", pretrained=True)
model = model.eval()
train_transforms = T.Compose([T.Resize((299, 299)),
T.ToTensor(),
T.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])
img = Image.open(PATH_TO_YOUR_IMAGE)
output = model(train_transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
```
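If you want class predictions rather than raw outputs, the forward pass above can be post-processed with a softmax. A small follow-on sketch (it assumes the checkpoint keeps its classification head; the class-index-to-species mapping comes from the DF20 metadata, which is not bundled here):
```python
import torch

# Reuses model, train_transforms and img from the snippet above
with torch.no_grad():
    logits = model(train_transforms(img).unsqueeze(0))
probs = logits.softmax(dim=1)
top5 = torch.topk(probs, k=5, dim=1)
print(top5.indices[0].tolist(), top5.values[0].tolist())
```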
## Citation
```bibtex
@InProceedings{Picek_2022_WACV,
author = {Picek, Lukas and Sulc, Milan and Matas, Jiri and Jeppesen, Thomas S. and Heilmann-Clausen, Jacob and L{\ae}ss{\o}e, Thomas and Fr{\o}slev, Tobias},
title = {Danish Fungi 2020 - Not Just Another Image Recognition Dataset},
booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
month = {January},
year = {2022},
pages = {1525-1535}
}
@article{picek2022automatic,
title={Automatic Fungi Recognition: Deep Learning Meets Mycology},
author={Picek, Lukas and Sulc, Milan and Matas, Jiri and Heilmann-Clausen, Jacob and Jeppesen, Thomas S and Lind, Emil},
journal={Sensors},
volume={22},
number={2},
pages={633},
year={2022},
publisher={Multidisciplinary Digital Publishing Institute}
}
``` | [
"https://arxiv.org/pdf/1512.03385",
"https://github.com/BohemianVRA/DanishFungiDataset/"
] | [] | [
"@InProceedings{Picek_2022_WACV,\n author = {Picek, Lukas and Sulc, Milan and Matas, Jiri and Jeppesen, Thomas S. and Heilmann-Clausen, Jacob and L{\u0007e}",
"@article{picek2022automatic,\n title={Automatic Fungi Recognition: Deep Learning Meets Mycology}"
] |
NikolayKozloff/granite-3b-code-instruct-Q8_0-GGUF | model | https://huggingface.co/NikolayKozloff/granite-3b-code-instruct-Q8_0-GGUF/raw/main/README.md | ---
license: apache-2.0
library_name: transformers
tags:
- code
- llama-cpp
- gguf-my-repo
base_model: ibm-granite/granite-3b-code-base
datasets:
- bigcode/commitpackft
- TIGER-Lab/MathInstruct
- meta-math/MetaMathQA
- glaiveai/glaive-code-assistant-v3
- glaive-function-calling-v2
- bugdaryan/sql-create-context-instruction
- garage-bAInd/Open-Platypus
- nvidia/HelpSteer
metrics:
- code_eval
pipeline_tag: text-generation
inference: false
model-index:
- name: granite-3b-code-instruct
results:
- task:
type: text-generation
dataset:
name: HumanEvalSynthesis(Python)
type: bigcode/humanevalpack
metrics:
- type: pass@1
value: 51.2
name: pass@1
- type: pass@1
value: 43.9
name: pass@1
- type: pass@1
value: 41.5
name: pass@1
- type: pass@1
value: 31.7
name: pass@1
- type: pass@1
value: 40.2
name: pass@1
- type: pass@1
value: 29.3
name: pass@1
- type: pass@1
value: 39.6
name: pass@1
- type: pass@1
value: 26.8
name: pass@1
- type: pass@1
value: 39.0
name: pass@1
- type: pass@1
value: 14.0
name: pass@1
- type: pass@1
value: 23.8
name: pass@1
- type: pass@1
value: 12.8
name: pass@1
- type: pass@1
value: 26.8
name: pass@1
- type: pass@1
value: 28.0
name: pass@1
- type: pass@1
value: 33.5
name: pass@1
- type: pass@1
value: 27.4
name: pass@1
- type: pass@1
value: 31.7
name: pass@1
- type: pass@1
value: 16.5
name: pass@1
---
# NikolayKozloff/granite-3b-code-instruct-Q8_0-GGUF
This model was converted to GGUF format from [`ibm-granite/granite-3b-code-instruct`](https://huggingface.co/ibm-granite/granite-3b-code-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ibm-granite/granite-3b-code-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo NikolayKozloff/granite-3b-code-instruct-Q8_0-GGUF --model granite-3b-code-instruct.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo NikolayKozloff/granite-3b-code-instruct-Q8_0-GGUF --model granite-3b-code-instruct.Q8_0.gguf -c 2048
```
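Beyond the CLI and server, the same GGUF also works from Python via `llama-cpp-python`; a minimal sketch (the local filename assumes you have already downloaded the Q8_0 file named above):
```python
from llama_cpp import Llama

llm = Llama(model_path="granite-3b-code-instruct.Q8_0.gguf", n_ctx=2048)
out = llm("The meaning to life and the universe is", max_tokens=128)
print(out["choices"][0]["text"])
```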
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m granite-3b-code-instruct.Q8_0.gguf -n 128
```
| [
"https://huggingface.co/ibm-granite/granite-3b-code-instruct",
"ggml.ai",
"https://huggingface.co/spaces/ggml-org/gguf-my-repo",
"https://huggingface.co/ibm-granite/granite-3b-code-instruct",
"https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage",
"https://github.com/ggerganov/llama.cpp"
] | [] | [] |
End of preview.