modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
alirezaomneky/sd-background-model | alirezaomneky | "2023-12-13T23:56:13Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T23:56:13Z" | Entry not found |
Carlosgg14/zoro | Carlosgg14 | "2023-12-13T23:56:56Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-12-13T23:56:24Z" | ---
license: openrail
---
|
Sam845/Scott_Pilgrim_Latin_Spanish | Sam845 | "2023-12-14T00:08:28Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T23:58:34Z" | Entry not found |
ktrinh38/fashion-txt2img | ktrinh38 | "2024-03-06T01:07:37Z" | 0 | 0 | null | [
"tensorboard",
"dataset:ktrinh38/fashion-dataset",
"region:us"
] | null | "2023-12-14T00:05:38Z" | ---
datasets:
- ktrinh38/fashion-dataset
---
### [Stable Diffusion with Text Conditioning](https://github.com/CompVis/latent-diffusion), fine-tuned on a fashion dataset
Checkpoints for now :D |
jp4prezi/julain | jp4prezi | "2023-12-14T00:05:53Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2023-12-14T00:05:53Z" | ---
license: apache-2.0
---
|
jp4prezi/Justin | jp4prezi | "2023-12-14T00:06:29Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2023-12-14T00:06:29Z" | ---
license: mit
---
|
jp4prezi/mit | jp4prezi | "2023-12-14T00:08:48Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2023-12-14T00:08:48Z" | ---
license: mit
---
|
vinicm/modelocarlos | vinicm | "2023-12-14T00:11:24Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-12-14T00:11:04Z" | ---
license: openrail
---
|
accgatsu/luanvoznome | accgatsu | "2023-12-14T00:14:53Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-14T00:14:09Z" | Entry not found |
TexX/Nobara | TexX | "2023-12-14T00:29:26Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-12-14T00:17:46Z" | ---
license: openrail
---
|
amir4m/test | amir4m | "2023-12-14T00:18:44Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-14T00:18:44Z" | Entry not found |
cmtn/email_extractor_more_data_model_t5_small | cmtn | "2023-12-14T00:20:40Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-12-14T00:20:30Z" | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: email_extractor_more_data_model_t5_small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# email_extractor_more_data_model_t5_small
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1772
- Rouge1: 0.8316
- Rouge2: 0.7897
- Rougel: 0.8306
- Rougelsum: 0.8303
- Gen Len: 15.439
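The Rouge figures above are n-gram overlap F-measures. As a rough sketch (whitespace tokenization only; the actual evaluation uses the `rouge_score` package with stemming), ROUGE-1 F1 reduces to clipped unigram precision and recall:

```python
from collections import Counter

def rouge1_f1(prediction: str, reference: str) -> float:
    """Unigram-overlap F1: a simplified stand-in for the reported Rouge1."""
    pred, ref = Counter(prediction.split()), Counter(reference.split())
    overlap = sum((pred & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

A score of 1.0 means the extracted text matches the reference exactly, so the ~0.83 reported above indicates near-verbatim recovery of most targets.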
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 74 | 0.4608 | 0.6421 | 0.5952 | 0.632 | 0.6312 | 18.6829 |
| No log | 2.0 | 148 | 0.2775 | 0.7726 | 0.7305 | 0.7735 | 0.7734 | 16.9512 |
| No log | 3.0 | 222 | 0.2164 | 0.7865 | 0.7549 | 0.7854 | 0.7856 | 16.3659 |
| No log | 4.0 | 296 | 0.1901 | 0.8316 | 0.7897 | 0.8306 | 0.8303 | 15.439 |
| No log | 5.0 | 370 | 0.1772 | 0.8316 | 0.7897 | 0.8306 | 0.8303 | 15.439 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
namkyeong/whisper_13 | namkyeong | "2023-12-14T06:36:36Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-14T00:20:55Z" | Entry not found |
hkivancoral/smids_5x_beit_base_adamax_001_fold3 | hkivancoral | "2023-12-14T01:40:03Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-14T00:21:00Z" | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_5x_beit_base_adamax_001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.835
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_beit_base_adamax_001_fold3
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1469
- Accuracy: 0.835
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
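With `lr_scheduler_type: linear` and `lr_scheduler_warmup_ratio: 0.1`, the learning rate ramps up to 0.001 over the first 10% of steps and then decays linearly to zero. A minimal sketch of that schedule (mirroring, not calling, the `transformers` implementation):

```python
def linear_schedule_with_warmup(step: int, total_steps: int,
                                base_lr: float = 1e-3,
                                warmup_ratio: float = 0.1) -> float:
    """Learning rate at a given step: linear warmup, then linear decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    # Decay from base_lr at the end of warmup down to 0 at total_steps.
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
```

At 50 epochs × 375 steps = 18,750 total steps, warmup covers the first 1,875 steps.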
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8795 | 1.0 | 375 | 1.0709 | 0.465 |
| 0.805 | 2.0 | 750 | 0.8267 | 0.54 |
| 0.7578 | 3.0 | 1125 | 0.8592 | 0.5683 |
| 1.0023 | 4.0 | 1500 | 0.7631 | 0.6317 |
| 0.7622 | 5.0 | 1875 | 0.6997 | 0.685 |
| 0.5711 | 6.0 | 2250 | 0.5607 | 0.76 |
| 0.5125 | 7.0 | 2625 | 0.4986 | 0.8067 |
| 0.5239 | 8.0 | 3000 | 0.4781 | 0.8 |
| 0.4547 | 9.0 | 3375 | 0.6145 | 0.77 |
| 0.4777 | 10.0 | 3750 | 0.4360 | 0.8267 |
| 0.3636 | 11.0 | 4125 | 0.4074 | 0.8417 |
| 0.4518 | 12.0 | 4500 | 0.4481 | 0.8317 |
| 0.3493 | 13.0 | 4875 | 0.5307 | 0.805 |
| 0.3009 | 14.0 | 5250 | 0.4470 | 0.835 |
| 0.2927 | 15.0 | 5625 | 0.4302 | 0.8383 |
| 0.233 | 16.0 | 6000 | 0.4642 | 0.835 |
| 0.3176 | 17.0 | 6375 | 0.4522 | 0.835 |
| 0.2704 | 18.0 | 6750 | 0.4353 | 0.8317 |
| 0.2544 | 19.0 | 7125 | 0.4509 | 0.835 |
| 0.2122 | 20.0 | 7500 | 0.5169 | 0.8183 |
| 0.135 | 21.0 | 7875 | 0.5912 | 0.82 |
| 0.1564 | 22.0 | 8250 | 0.4970 | 0.8383 |
| 0.2284 | 23.0 | 8625 | 0.5113 | 0.8217 |
| 0.1624 | 24.0 | 9000 | 0.6295 | 0.825 |
| 0.165 | 25.0 | 9375 | 0.5951 | 0.81 |
| 0.0933 | 26.0 | 9750 | 0.6337 | 0.8233 |
| 0.1787 | 27.0 | 10125 | 0.5739 | 0.8267 |
| 0.0977 | 28.0 | 10500 | 0.6837 | 0.8283 |
| 0.0607 | 29.0 | 10875 | 0.7084 | 0.8467 |
| 0.0515 | 30.0 | 11250 | 0.8096 | 0.8167 |
| 0.0587 | 31.0 | 11625 | 0.8299 | 0.8367 |
| 0.1097 | 32.0 | 12000 | 0.7487 | 0.8333 |
| 0.0784 | 33.0 | 12375 | 0.7788 | 0.815 |
| 0.0505 | 34.0 | 12750 | 0.8520 | 0.8417 |
| 0.0243 | 35.0 | 13125 | 0.8865 | 0.8233 |
| 0.0517 | 36.0 | 13500 | 0.8229 | 0.83 |
| 0.0484 | 37.0 | 13875 | 0.9870 | 0.8367 |
| 0.0198 | 38.0 | 14250 | 0.9718 | 0.825 |
| 0.0203 | 39.0 | 14625 | 0.8284 | 0.8467 |
| 0.0211 | 40.0 | 15000 | 0.9506 | 0.8333 |
| 0.0035 | 41.0 | 15375 | 0.9695 | 0.8367 |
| 0.0109 | 42.0 | 15750 | 1.1050 | 0.835 |
| 0.0054 | 43.0 | 16125 | 1.1815 | 0.8317 |
| 0.0043 | 44.0 | 16500 | 1.0406 | 0.8433 |
| 0.0242 | 45.0 | 16875 | 1.1360 | 0.8417 |
| 0.0127 | 46.0 | 17250 | 1.1706 | 0.8317 |
| 0.0068 | 47.0 | 17625 | 1.1596 | 0.8333 |
| 0.0108 | 48.0 | 18000 | 1.1303 | 0.8333 |
| 0.0029 | 49.0 | 18375 | 1.1332 | 0.8267 |
| 0.0113 | 50.0 | 18750 | 1.1469 | 0.835 |
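The table shows validation loss bottoming out around epoch 11 (0.4074) while later epochs overfit; a simple model-selection pass over such a log might look like this (a generic sketch, not part of the training script):

```python
def best_checkpoint(history: list, key: str = "val_loss") -> int:
    """Return the epoch whose record minimizes `key` (e.g. validation loss)."""
    return min(history, key=lambda record: record[key])["epoch"]
```

Applied to this run, it would select the epoch-11 checkpoint rather than the final epoch-50 one.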
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
nleroy917/viral-sequence-prediction | nleroy917 | "2023-12-14T00:28:56Z" | 0 | 1 | null | [
"region:us"
] | null | "2023-12-14T00:24:10Z" | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for viral-sequence-prediction
This model was part of the University of Virginia's NLP course final project: NLP for social good. Using publicly available data from the [NCBI virus data portal](https://www.ncbi.nlm.nih.gov/labs/virus/vssi/#/), we designed an LSTM-based classifier that predicts the virus of origin for an arbitrary protein sequence.
Specifically, we were interested in classifying COVID vs. flu.
### Using the model
Code for the model can be found at: https://github.com/nleroy917/CS6501-final
You can use the `train.ipynb` notebook, or use the code directly:
```python
from dna_classification.models import DNASequenceClassifier

model = DNASequenceClassifier("nleroy917/viral-sequence-prediction")
virus = model.predict("MGYINVFAFPFTIYSLLLCRMNFRNYIAQVDVVNFNLT")
# virus == "COVID"
``` |
arpachat/output-fashion-2 | arpachat | "2023-12-14T00:25:49Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-14T00:25:49Z" | Entry not found |
PandaUniverz/Drizzle_Wasabeatz | PandaUniverz | "2023-12-14T00:30:03Z" | 0 | 0 | null | [
"license:openrail++",
"region:us"
] | null | "2023-12-14T00:26:41Z" | ---
license: openrail++
---
|
RootReturn0/SER | RootReturn0 | "2023-12-14T00:29:00Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-14T00:29:00Z" | Entry not found |
EmmaGthn/results_lora_40_20000_bias | EmmaGthn | "2023-12-14T05:44:28Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"region:us"
] | null | "2023-12-14T00:32:30Z" | Entry not found |
arpachat/output-fashion-3 | arpachat | "2023-12-14T00:36:23Z" | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dataset:jwl25b/final_project_dataset",
"base_model:OFA-Sys/small-stable-diffusion-v0",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-12-14T00:34:18Z" |
---
license: creativeml-openrail-m
base_model: OFA-Sys/small-stable-diffusion-v0
datasets:
- jwl25b/final_project_dataset
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
# Text-to-image finetuning - arpachat/output-fashion-3
This pipeline was finetuned from **OFA-Sys/small-stable-diffusion-v0** on the **jwl25b/final_project_dataset** dataset. Below are some example images generated with the finetuned pipeline using the following prompts: ['Blue Tommy Hilfiger jacket']:
![val_imgs_grid](./val_imgs_grid.png)
## Pipeline usage
You can use the pipeline like so:
```python
from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained("arpachat/output-fashion-3", torch_dtype=torch.float16)
pipeline = pipeline.to("cuda")  # fp16 weights need a GPU for inference

prompt = "Blue Tommy Hilfiger jacket"
image = pipeline(prompt).images[0]
image.save("my_image.png")
```
## Training info
These are the key hyperparameters used during training:
* Epochs: 1
* Learning rate: 1e-05
* Batch size: 1
* Gradient accumulation steps: 2
* Image resolution: 512
* Mixed-precision: fp16
More information on all the CLI arguments and the environment are available on your [`wandb` run page](wandb_run_url).
|
littlechi118/mistral-7b | littlechi118 | "2023-12-14T00:36:06Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2023-12-14T00:36:06Z" | ---
license: apache-2.0
---
|
arpachat/output-fashion-400 | arpachat | "2023-12-14T01:16:02Z" | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dataset:jwl25b/final_project_dataset",
"base_model:OFA-Sys/small-stable-diffusion-v0",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-12-14T00:42:22Z" |
---
license: creativeml-openrail-m
base_model: OFA-Sys/small-stable-diffusion-v0
datasets:
- jwl25b/final_project_dataset
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
# Text-to-image finetuning - arpachat/output-fashion-400
This pipeline was finetuned from **OFA-Sys/small-stable-diffusion-v0** on the **jwl25b/final_project_dataset** dataset. Below are some example images generated with the finetuned pipeline using the following prompts: ['Blue Tommy Hilfiger jacket']:
![val_imgs_grid](./val_imgs_grid.png)
## Pipeline usage
You can use the pipeline like so:
```python
from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained("arpachat/output-fashion-400", torch_dtype=torch.float16)
pipeline = pipeline.to("cuda")  # fp16 weights need a GPU for inference

prompt = "Blue Tommy Hilfiger jacket"
image = pipeline(prompt).images[0]
image.save("my_image.png")
```
## Training info
These are the key hyperparameters used during training:
* Epochs: 58
* Learning rate: 1e-05
* Batch size: 4
* Gradient accumulation steps: 2
* Image resolution: 512
* Mixed-precision: fp16
More information on all the CLI arguments and the environment are available on your [`wandb` run page](wandb_run_url).
|
kimjisoobkkai/CADRES_YT_BR_250_EPOCHS | kimjisoobkkai | "2023-12-14T00:47:47Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-14T00:46:43Z" | Entry not found |
WmVernon/GOTY | WmVernon | "2023-12-14T00:48:26Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2023-12-14T00:48:26Z" | ---
license: apache-2.0
---
|
kimjisoobkkai/ATHOS_YT_BR_250_EPOCHS | kimjisoobkkai | "2023-12-14T00:52:22Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-14T00:50:11Z" | Entry not found |
GraydientPlatformAPI/no-esc | GraydientPlatformAPI | "2023-12-14T00:52:04Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-12-14T00:52:04Z" | ---
license: openrail
---
|
GraydientPlatformAPI/fromhades-xl | GraydientPlatformAPI | "2023-12-14T01:05:43Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2023-12-14T00:53:14Z" | ---
license: openrail
---
|
jp4prezi/faith | jp4prezi | "2023-12-14T00:57:11Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-12-14T00:57:11Z" | ---
license: openrail
---
|
DrM/my_awesome_billsum_model | DrM | "2023-12-14T00:57:11Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-14T00:57:11Z" | Entry not found |
ManifoldRG/NEKO | ManifoldRG | "2023-12-14T01:00:03Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-14T01:00:03Z" | Entry not found |
arpachat/stable-diffusion-v1-2-fashion-400 | arpachat | "2023-12-14T01:01:32Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-14T01:01:32Z" | Entry not found |
spicyplantain/racjstadif | spicyplantain | "2023-12-14T01:10:13Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-14T01:10:13Z" | Entry not found |
arpachat/small-stable-diffusion-v0-fashion-1000 | arpachat | "2023-12-14T01:21:32Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-14T01:21:32Z" | Entry not found |
wangjianyang/bert_finetuning_test | wangjianyang | "2023-12-14T01:32:42Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:unknown",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2023-12-14T01:26:04Z" | ---
license: unknown
---
|
GlycerinLOL/LLM_Teached_Bart_FromScratch | GlycerinLOL | "2023-12-14T05:51:00Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-12-14T01:30:42Z" | ---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: LLM_Teached_Bart_FromScratch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LLM_Teached_Bart_FromScratch
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7504
- Rouge1: 0.3746
- Rouge2: 0.1776
- Rougel: 0.3165
- Rougelsum: 0.3164
- Gen Len: 19.9727
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.0692 | 1.0 | 625 | 1.7369 | 0.3826 | 0.1796 | 0.3166 | 0.3164 | 19.9691 |
| 0.9466 | 2.0 | 1250 | 1.7602 | 0.3738 | 0.1772 | 0.3142 | 0.3143 | 20.0 |
| 0.8898 | 3.0 | 1875 | 1.7657 | 0.3778 | 0.1751 | 0.3156 | 0.3157 | 19.9727 |
| 0.8871 | 4.0 | 2500 | 1.7504 | 0.3746 | 0.1776 | 0.3165 | 0.3164 | 19.9727 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.15.0
|
GraydientPlatformAPI/realwow | GraydientPlatformAPI | "2023-12-14T01:43:13Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2023-12-14T01:31:04Z" | ---
license: openrail
---
|
peterjacksonsxx9/GanzNeuLivvy | peterjacksonsxx9 | "2023-12-14T01:33:02Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-14T01:32:24Z" | Entry not found |
Vital65/Michael | Vital65 | "2023-12-14T01:39:45Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-14T01:39:45Z" | Entry not found |
hkivancoral/smids_5x_beit_base_adamax_001_fold4 | hkivancoral | "2023-12-14T02:29:38Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-14T01:40:53Z" | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_5x_beit_base_adamax_001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7916666666666666
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_beit_base_adamax_001_fold4
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9816
- Accuracy: 0.7917
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8866 | 1.0 | 375 | 0.8458 | 0.5233 |
| 0.8299 | 2.0 | 750 | 0.7752 | 0.57 |
| 0.7493 | 3.0 | 1125 | 0.7658 | 0.615 |
| 0.7146 | 4.0 | 1500 | 0.7493 | 0.6417 |
| 0.7017 | 5.0 | 1875 | 0.7203 | 0.6817 |
| 0.763 | 6.0 | 2250 | 0.6999 | 0.68 |
| 0.6756 | 7.0 | 2625 | 0.6915 | 0.6967 |
| 0.7133 | 8.0 | 3000 | 0.7086 | 0.6683 |
| 0.6524 | 9.0 | 3375 | 0.6683 | 0.6883 |
| 0.7431 | 10.0 | 3750 | 0.6664 | 0.7017 |
| 0.6733 | 11.0 | 4125 | 0.6953 | 0.675 |
| 0.6611 | 12.0 | 4500 | 0.6840 | 0.6933 |
| 0.6809 | 13.0 | 4875 | 0.6889 | 0.6717 |
| 0.6678 | 14.0 | 5250 | 0.6820 | 0.6917 |
| 0.6888 | 15.0 | 5625 | 0.6381 | 0.7083 |
| 0.6174 | 16.0 | 6000 | 0.6105 | 0.7417 |
| 0.6389 | 17.0 | 6375 | 0.6744 | 0.7033 |
| 0.6613 | 18.0 | 6750 | 0.6040 | 0.7417 |
| 0.6398 | 19.0 | 7125 | 0.6119 | 0.7317 |
| 0.6011 | 20.0 | 7500 | 0.5771 | 0.7517 |
| 0.5356 | 21.0 | 7875 | 0.6886 | 0.7033 |
| 0.5588 | 22.0 | 8250 | 0.6145 | 0.7433 |
| 0.5597 | 23.0 | 8625 | 0.6084 | 0.7383 |
| 0.5734 | 24.0 | 9000 | 0.5790 | 0.7633 |
| 0.5451 | 25.0 | 9375 | 0.5688 | 0.7567 |
| 0.5084 | 26.0 | 9750 | 0.5594 | 0.765 |
| 0.4925 | 27.0 | 10125 | 0.6035 | 0.7633 |
| 0.4449 | 28.0 | 10500 | 0.5736 | 0.7717 |
| 0.4935 | 29.0 | 10875 | 0.5611 | 0.76 |
| 0.518 | 30.0 | 11250 | 0.6001 | 0.75 |
| 0.5374 | 31.0 | 11625 | 0.5726 | 0.7867 |
| 0.4582 | 32.0 | 12000 | 0.5878 | 0.7717 |
| 0.4449 | 33.0 | 12375 | 0.6022 | 0.77 |
| 0.4991 | 34.0 | 12750 | 0.5950 | 0.7833 |
| 0.3652 | 35.0 | 13125 | 0.5638 | 0.8033 |
| 0.4263 | 36.0 | 13500 | 0.5959 | 0.7883 |
| 0.4604 | 37.0 | 13875 | 0.6072 | 0.8 |
| 0.4152 | 38.0 | 14250 | 0.6172 | 0.8033 |
| 0.3735 | 39.0 | 14625 | 0.6726 | 0.79 |
| 0.3648 | 40.0 | 15000 | 0.6751 | 0.7933 |
| 0.3489 | 41.0 | 15375 | 0.6954 | 0.7833 |
| 0.235 | 42.0 | 15750 | 0.7474 | 0.7767 |
| 0.2834 | 43.0 | 16125 | 0.7611 | 0.7933 |
| 0.2126 | 44.0 | 16500 | 0.7672 | 0.7917 |
| 0.2122 | 45.0 | 16875 | 0.8481 | 0.7683 |
| 0.1955 | 46.0 | 17250 | 0.8595 | 0.795 |
| 0.1764 | 47.0 | 17625 | 0.8929 | 0.7917 |
| 0.2086 | 48.0 | 18000 | 0.9496 | 0.79 |
| 0.175 | 49.0 | 18375 | 0.9722 | 0.7917 |
| 0.152 | 50.0 | 18750 | 0.9816 | 0.7917 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
Vital65/Michael2 | Vital65 | "2023-12-14T01:43:04Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-12-14T01:42:10Z" | ---
license: openrail
---
|
lenguist/long-former-model | lenguist | "2023-12-14T01:44:32Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-14T01:44:32Z" | Entry not found |
emboy/Crystal_Combing | emboy | "2023-12-14T01:47:10Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2023-12-14T01:47:10Z" | ---
license: apache-2.0
---
|
arunlokanatha/zephyr-7b-dpo-full | arunlokanatha | "2023-12-14T01:48:57Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-14T01:48:57Z" | Entry not found |
lenguist/longformer-model | lenguist | "2023-12-14T01:51:40Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-14T01:51:40Z" | Entry not found |
Umbrellos/pablobackyardigans | Umbrellos | "2023-12-14T02:02:16Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-12-14T02:01:36Z" | ---
license: openrail
---
|
Umbrellos/gobbleygordmsm | Umbrellos | "2023-12-14T02:04:13Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-12-14T02:03:23Z" | ---
license: openrail
---
|
BAAI/tokenize-anything | BAAI | "2024-05-28T03:11:06Z" | 0 | 13 | null | [
"image-to-text",
"arxiv:2312.09128",
"license:apache-2.0",
"region:us"
] | image-to-text | "2023-12-14T02:06:05Z" | ---
license: apache-2.0
pipeline_tag: image-to-text
---
<div align="center">
<h1>Tokenize Anything via Prompting</h1>
[Ting Pan](https://github.com/PhyscalX/)<sup>1,2*</sup>, [Lulu Tang](https://github.com/lulutang0608)<sup>2*</sup>, [Xinlong Wang](https://www.xloong.wang/)<sup>2¶</sup>, [Shiguang Shan](https://scholar.google.com/citations?user=Vkzd7MIAAAAJ&hl=en)<sup>1</sup>
<sup>1</sup>[ICT-CAS](http://english.ict.cas.cn/), <sup>2</sup>[BAAI](https://www.baai.ac.cn/english.html)<br>
<sup>*</sup> Equal Contribution, <sup>¶</sup>Project Lead
[[`Paper`](https://arxiv.org/pdf/2312.09128.pdf)] [[`🤗 Demo`](https://huggingface.co/spaces/BAAI/tokenize-anything)]
</div>
We present **T**okenize **A**nything via **P**rompting, a unified and promptable model capable of simultaneously segmenting, recognizing, and captioning arbitrary regions, with flexible visual prompts (point, box and sketch). The model is trained with exhaustive segmentation masks sourced from SA-1B, coupled with semantic priors from a pre-trained EVA-CLIP with 5 billion parameters.
## Installation
See [Github Page](https://github.com/baaivision/tokenize-anything).
## Models
### Model weights
#### V1.1 Release Notes
- Three versions of the model are available with different image encoders.
- Use a longer pre-training and fine-tuning schedule (improved segmentation and caption performance).
- Apply weight decay for all bias parameters (avoid FP16 overflow in QK matmul).
- Sample point prompts from predicted mask instead of GT box during VG training.
| Model | Description | Schedule | MD5 | Weights |
| ----- | ------------| ------ | ----| ------ |
| **tap_vit_h** | ViT-H TAP v1.1 model | (100% SA-1B, 180k), (VG, 50ep) | 4bdfb9 | [🤗 HF link](https://huggingface.co/BAAI/tokenize-anything/blob/main/models/tap_vit_h_v1_1.pkl) |
| **tap_vit_l** | ViT-L TAP v1.1 model | (100% SA-1B, 180k), (VG, 50ep) | c1d41f | [🤗 HF link](https://huggingface.co/BAAI/tokenize-anything/blob/main/models/tap_vit_l_v1_1.pkl) |
| **tap_vit_b** | ViT-B TAP v1.1 model | (100% SA-1B, 180k), (VG, 50ep) | 707f80 | [🤗 HF link](https://huggingface.co/BAAI/tokenize-anything/blob/main/models/tap_vit_b_v1_1.pkl) |
#### V1.0 Release Notes
- Two versions of the model are available with different image encoders.
- Original paper results.
| Model | Description | Schedule | MD5 | Weights |
| ----- | ------------| ------ | ----| ------ |
| **tap_vit_l** | ViT-L TAP v1.0 model | (50% SA-1B, 90k), (VG, 25ep) | 03f8ec | [🤗 HF link](https://huggingface.co/BAAI/tokenize-anything/blob/main/models/tap_vit_l_v1_0.pkl) |
| **tap_vit_b** | ViT-B TAP v1.0 model | (50% SA-1B, 90k), (VG, 25ep) | b45cbf | [🤗 HF link](https://huggingface.co/BAAI/tokenize-anything/blob/main/models/tap_vit_b_v1_0.pkl) |
### Concept weights
***Note***: You can generate these weights following the [Concept Guide](https://github.com/baaivision/tokenize-anything/blob/main/notebooks/concept.ipynb).
| Concept | Description | Weights |
| ------- | ------------| ------ |
| **Merged-2560** | Merged concepts | [🤗 HF link](https://huggingface.co/BAAI/tokenize-anything/blob/main/concepts/merged_2560.pkl) |
| **LVIS-1203** | LVIS concepts | [🤗 HF link](https://huggingface.co/BAAI/tokenize-anything/blob/main/concepts/lvis_1203.pkl) |
| **COCO-80** | COCO concepts | [🤗 HF link](https://huggingface.co/BAAI/tokenize-anything/blob/main/concepts/coco_80.pkl) |
## License
[Apache License 2.0](LICENSE)
## Citation
```
@article{pan2023tap,
title={Tokenize Anything via Prompting},
author={Pan, Ting and Tang, Lulu and Wang, Xinlong and Shan, Shiguang},
journal={arXiv preprint arXiv:2312.09128},
year={2023}
}
```
## Acknowledgement
We thank the repositories: [SAM](https://github.com/facebookresearch/segment-anything), [EVA](https://github.com/baaivision/EVA), [LLaMA](https://github.com/facebookresearch/llama), [FlashAttention](https://github.com/Dao-AILab/flash-attention), [Gradio](https://github.com/gradio-app/gradio), [Detectron2](https://github.com/facebookresearch/detectron2) and [CodeWithGPU](https://github.com/seetacloud/codewithgpu).
|
allenai/paloma-1b-baseline-mc4 | allenai | "2023-12-19T00:03:38Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"olmo",
"text-generation",
"custom_code",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-14T02:06:57Z" | ---
extra_gated_prompt: "Access to this model is automatically granted upon accepting the [**AI2 ImpACT License – Low Risk Artifacts (“LR Agreement”)**](https://allenai.org/licenses/impact-lr) and completing all fields below. This model is licensed under the LR Agreement."
extra_gated_fields:
Your full name: text
Organization or entity you are affiliated with: text
State or country you are located in: text
Contact email: text
Please describe your intended use of the low risk artifact(s): text
I AGREE to the terms and conditions of the LR Agreement above: checkbox
I AGREE to AI2’s use of my information for legal notices and administrative matters: checkbox
I CERTIFY that the information I have provided is true and accurate: checkbox
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
---
# Model Card for paloma-1b-baseline-mc4
<!-- Provide a quick summary of what the model is/does. -->
The [Paloma](https://paloma.allen.ai/) 1B baselines are a collection of language models pretrained on popular corpora while controlling all other experimental variables. These models are developed to facilitate scientific comparisons of language model fit using the Paloma benchmark of 585 textual domains. This collection of models includes 6 baseline 1B parameter models, each trained on ~150B tokens from one of the following corpora: [Dolma](https://github.com/allenai/dolma), [The Pile](https://api.semanticscholar.org/CorpusID:230435736), [RedPajama](https://github.com/togethercomputer/RedPajama-Data), [Falcon-RefinedWeb](https://api.semanticscholar.org/CorpusID:259063761), [C4](https://aclanthology.org/2021.emnlp-main.98), and [MC4-en](https://api.semanticscholar.org/CorpusID:258187051).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Ian Magnusson, Akshita Bhagia, Valentin Hofmann, Luca Soldaini, Ananya Harsh Jha, Oyvind Tafjord, Dustin Schwenk, Evan Pete Walsh, Yanai Elazar, Kyle Lo, Dirk Groeneveld, Iz Beltagy, Hannaneh Hajishirzi, Noah A. Smith, Kyle Richardson, and Jesse Dodge
- **Model type:** Decoder-only transformer language model
- **Language(s) (NLP):** English
- **License:** [**AI2 ImpACT License – Low Risk Artifacts (“LR Agreement”)**](https://allenai.org/licenses/impact-lr)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
This model is primarily intended as research artifact that is a baseline for the language modeling benchmark [Paloma](https://paloma.allen.ai/).
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
The restrictions to use of this model are described in the model license: [**AI2 ImpACT License – Low Risk Artifacts (“LR Agreement”)**](https://allenai.org/licenses/impact-lr)
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This model is purely trained as an autoregressive language model. It has not been adapted in any way to prevent bias. It is a model of the language distribution that it is trained on.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
This research artifact is a baseline for a language modeling benchmark. Best uses of this model will take advantage of the experimental controls applied to this model and the other Paloma baselines. These enable comparisons of models that vary only in the pretraining corpus used to train them.
## How to Get Started with the Model
Install the code needed to run inference with the model
```
pip install ai2-olmo
```
Download and instantiate the model
```
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("allenai/<model name here>", trust_remote_code=True)
```
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
Each of the Paloma baseline models are trained on one of the following datasets:
[Dolma](https://github.com/allenai/dolma), [The Pile](https://api.semanticscholar.org/CorpusID:230435736), [RedPajama](https://github.com/togethercomputer/RedPajama-Data), [Falcon-RefinedWeb](https://api.semanticscholar.org/CorpusID:259063761), [C4](https://aclanthology.org/2021.emnlp-main.98), and [MC4-en](https://api.semanticscholar.org/CorpusID:258187051).
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing
We remove any document in the pretraining data that is contaminated with respect to the [Paloma](https://paloma.allen.ai/) evaluation data. We match overlaps of evaluation and train text at the paragraph level, i.e., newline separated spans of text. To avoid coincidental collisions in the space of small strings, we ignore matches in paragraphs smaller than 13 unicode segmented tokens. Similarly, we ignore paragraphs composed of only punctuation, spaces, and emoji, as, unlike words, these can be arbitrarily repeated when used as formatting, leading to high frequency n-grams greater than our 13-gram threshold. Lastly, as code data consists almost entirely of short and often repeated lines, we forgo any decontamination against the code evaluations in Paloma.
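As a rough illustration of the paragraph-level decontamination rule described above (this is a sketch, not the authors' actual pipeline; the helper names are illustrative, and whitespace splitting stands in for unicode token segmentation):

```python
import unicodedata


def is_ignorable(paragraph: str, min_tokens: int = 13) -> bool:
    """True if a paragraph is too short to matter or is only punctuation/symbols/space.

    Tokens are approximated by whitespace splitting here; the paper uses
    unicode-segmented tokens.
    """
    tokens = paragraph.split()
    if len(tokens) < min_tokens:
        return True
    # Ignore paragraphs composed entirely of punctuation, symbols, and spaces,
    # since formatting runs can repeat arbitrarily and collide by coincidence.
    non_space = [ch for ch in paragraph if not ch.isspace()]
    if all(unicodedata.category(ch).startswith(("P", "S")) for ch in non_space):
        return True
    return False


def is_contaminated(document: str, eval_paragraphs: set) -> bool:
    """Flag a training document if any non-ignorable paragraph appears in eval data."""
    for paragraph in document.split("\n"):
        if is_ignorable(paragraph):
            continue
        if paragraph in eval_paragraphs:
            return True
    return False
```

A document is dropped from pretraining when `is_contaminated` fires against the set of evaluation paragraphs.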
#### Training Hyperparameters
The Paloma baseline 1B parameter models that we train employ the following architecture: 2048 maximum sequence length, 2048 model dimension, 16 layers, 16 attention heads, RoPE embedding, SwiGLU activation, mixed precision, non-parametric layer normalization, and sequential model blocks for attention and feed-forward networks.
We use EleutherAI's GPT NeoX tokenizer but add 3 additional special tokens that are used to mask PII in Dolma.
We train to 35k steps (∼150B tokens) with the following LionW optimizer configurations: 2.0e-4 peak learning rate, warm-up of 2000 steps, cosine decay to 70k steps (∼300B tokens), 0.1 weight decay, and betas of 0.9 and 0.95. Note that our batch size varies slightly to accommodate two groups of baselines that were run on different hardware. The Dolma and Falcon-RefinedWeb baselines were run with a batch size of 2112 training instances per step on 24 A100s. The RedPajama, The Pile, C4, and mC4-EN baselines were run with a batch size of 2048 on 64 AMD Instinct MI250X GPUs. In each case we save model checkpoints every 5k steps (∼20B tokens).
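The learning-rate schedule above (linear warm-up to a peak of 2.0e-4 over 2,000 steps, then cosine decay toward 70k steps) can be sketched as follows. The minimum learning rate is an assumption here, as the card does not state a floor:

```python
import math

PEAK_LR = 2.0e-4
WARMUP_STEPS = 2000
DECAY_END_STEP = 70_000  # cosine decay horizon (~300B tokens); training stops at 35k steps
MIN_LR = 0.0             # assumed floor; not stated in the card


def learning_rate(step: int) -> float:
    """Linear warm-up followed by cosine decay, per the schedule described above."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    progress = min((step - WARMUP_STEPS) / (DECAY_END_STEP - WARMUP_STEPS), 1.0)
    return MIN_LR + 0.5 * (PEAK_LR - MIN_LR) * (1 + math.cos(math.pi * progress))
```

Note that because training ends at 35k steps, the models never reach the end of the cosine decay.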
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
The [Paloma](https://paloma.allen.ai/) benchmark is used to evaluate these baseline models.
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
Paloma evaluates on 585 domains. These are a collection of the most fine-grained domains readily available in current metadata.
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
Paloma measures language modeling fit using perplexity. It is a benchmark of language modeling, so examination of downstream uses is out of scope.
### Results
To demonstrate possible uses of results from the Paloma benchmark, we conduct a series of case studies. We show that performance improves in almost all domains as models are scaled, but domains improve unequally. Further, across domains, perplexity is driven by strings in the vocabulary (i.e., types) that occur in most domains, while some other types even get worse as models scale. Finally, our experiments isolate the change in pretraining corpus and find that pretraining without heterogeneous data sources beyond Common Crawl leads to perplexities that do not improve consistently with tokens seen.
## Environmental Impact
The Dolma and Falcon-RefinedWeb baselines were run on 24 A100s for 9 days per model.
The RedPajama, The Pile, C4, and mC4-EN baselines were run on 64 AMD Instinct MI250X GPUs for 2 days per model.
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@article{paloma,
title={{Paloma}: A Benchmark for Evaluating Language Model Fit},
author={Magnusson, Ian and Bhagia, Akshita and Hofmann, Valentin and Soldaini, Luca and Harsh Jha, Ananya and Tafjord, Oyvind and Schwenk, Dustin and Walsh, Evan Pete and Elazar, Yanai and Lo, Kyle and Groeneveld, Dirk and Beltagy, Iz and Hajishirzi, Hannaneh and Smith, Noah A. and Richardson, Kyle and Dodge, Jesse},
journal={technical report},
year={2023},
url={https://paloma.allen.ai/}
}
```
## Model Card Contact
{ianm,jessed}@allenai.org
|
maxmashup/MarinaSena | maxmashup | "2023-12-14T20:27:57Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-14T02:07:07Z" | Entry not found |
Tsuinzues/pokungfupanda | Tsuinzues | "2023-12-14T02:17:29Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-12-14T02:12:41Z" | ---
license: openrail
---
|
JaxxGame15/philip | JaxxGame15 | "2023-12-14T02:15:17Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-14T02:15:16Z" | Entry not found |
dvshah13/distilhubert-finetuned-gtzan | dvshah13 | "2023-12-14T02:21:38Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-14T02:21:38Z" | Entry not found |
Nubletz/food_classifier | Nubletz | "2023-12-14T02:25:33Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-14T02:25:33Z" | Entry not found |
ADISH007/Aws_donut_10k_incremental_1_Epoch_15 | ADISH007 | "2023-12-14T02:28:32Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T02:28:10Z" | Entry not found |
hkivancoral/smids_5x_beit_base_adamax_001_fold5 | hkivancoral | "2023-12-14T03:19:00Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-14T02:30:25Z" | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_5x_beit_base_adamax_001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7616666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_beit_base_adamax_001_fold5
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6041
- Accuracy: 0.7617
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8603 | 1.0 | 375 | 0.8648 | 0.5183 |
| 0.8445 | 2.0 | 750 | 0.8098 | 0.5417 |
| 0.7944 | 3.0 | 1125 | 0.7826 | 0.5917 |
| 0.7602 | 4.0 | 1500 | 0.8095 | 0.6133 |
| 0.7358 | 5.0 | 1875 | 0.7702 | 0.62 |
| 0.7338 | 6.0 | 2250 | 0.7325 | 0.6383 |
| 0.7068 | 7.0 | 2625 | 0.7570 | 0.6267 |
| 0.7788 | 8.0 | 3000 | 0.7318 | 0.6183 |
| 0.7701 | 9.0 | 3375 | 0.7391 | 0.65 |
| 0.7025 | 10.0 | 3750 | 0.7251 | 0.6617 |
| 0.7076 | 11.0 | 4125 | 0.7171 | 0.6433 |
| 0.6226 | 12.0 | 4500 | 0.7139 | 0.6333 |
| 0.6825 | 13.0 | 4875 | 0.7299 | 0.63 |
| 0.6882 | 14.0 | 5250 | 0.7324 | 0.6517 |
| 0.7468 | 15.0 | 5625 | 0.6842 | 0.7 |
| 0.6568 | 16.0 | 6000 | 0.7213 | 0.6533 |
| 0.6593 | 17.0 | 6375 | 0.6880 | 0.6583 |
| 0.68 | 18.0 | 6750 | 0.6884 | 0.6733 |
| 0.6767 | 19.0 | 7125 | 0.7231 | 0.665 |
| 0.6609 | 20.0 | 7500 | 0.6577 | 0.6983 |
| 0.6233 | 21.0 | 7875 | 0.7352 | 0.6417 |
| 0.6128 | 22.0 | 8250 | 0.6662 | 0.695 |
| 0.6939 | 23.0 | 8625 | 0.7254 | 0.71 |
| 0.6892 | 24.0 | 9000 | 0.7067 | 0.695 |
| 0.5723 | 25.0 | 9375 | 0.6348 | 0.72 |
| 0.6474 | 26.0 | 9750 | 0.6506 | 0.7083 |
| 0.6695 | 27.0 | 10125 | 0.6672 | 0.6883 |
| 0.7033 | 28.0 | 10500 | 0.6914 | 0.6833 |
| 0.6792 | 29.0 | 10875 | 0.6764 | 0.685 |
| 0.5904 | 30.0 | 11250 | 0.6857 | 0.6883 |
| 0.5913 | 31.0 | 11625 | 0.6709 | 0.6933 |
| 0.5784 | 32.0 | 12000 | 0.7184 | 0.69 |
| 0.6212 | 33.0 | 12375 | 0.6393 | 0.7233 |
| 0.6674 | 34.0 | 12750 | 0.6697 | 0.71 |
| 0.5844 | 35.0 | 13125 | 0.6220 | 0.7283 |
| 0.5892 | 36.0 | 13500 | 0.6265 | 0.7217 |
| 0.572 | 37.0 | 13875 | 0.6315 | 0.7117 |
| 0.5345 | 38.0 | 14250 | 0.6267 | 0.7417 |
| 0.5582 | 39.0 | 14625 | 0.5952 | 0.7433 |
| 0.5947 | 40.0 | 15000 | 0.6182 | 0.715 |
| 0.5681 | 41.0 | 15375 | 0.6009 | 0.7533 |
| 0.5885 | 42.0 | 15750 | 0.6107 | 0.7367 |
| 0.5772 | 43.0 | 16125 | 0.5746 | 0.75 |
| 0.4378 | 44.0 | 16500 | 0.5833 | 0.755 |
| 0.5286 | 45.0 | 16875 | 0.6256 | 0.7417 |
| 0.538 | 46.0 | 17250 | 0.6036 | 0.7483 |
| 0.5732 | 47.0 | 17625 | 0.6044 | 0.76 |
| 0.4485 | 48.0 | 18000 | 0.5966 | 0.7533 |
| 0.4959 | 49.0 | 18375 | 0.6043 | 0.7583 |
| 0.4683 | 50.0 | 18750 | 0.6041 | 0.7617 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_RandomError0.6_Seed102 | behzadnet | "2023-12-14T02:34:02Z" | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | "2023-12-14T02:33:56Z" | ---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned_RandomError0.6_Seed102 | behzadnet | "2023-12-14T02:34:11Z" | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | "2023-12-14T02:34:08Z" | ---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
|
sumit077/sumiy | sumit077 | "2023-12-14T02:36:01Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2023-12-14T02:36:01Z" | ---
license: apache-2.0
---
|
TurtleLiu/falcon7b_psychology_bot | TurtleLiu | "2023-12-14T07:26:06Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:ybelkada/falcon-7b-sharded-bf16",
"region:us"
] | null | "2023-12-14T02:41:28Z" | ---
library_name: peft
tags:
- generated_from_trainer
base_model: ybelkada/falcon-7b-sharded-bf16
model-index:
- name: falcon7b_psychology_bot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon7b_psychology_bot
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 320
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0 |
hassy0811/w2v_kakenhi | hassy0811 | "2023-12-19T08:43:53Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2023-12-14T02:43:32Z" | ---
license: mit
---
|
LinsTLHandGFfan/Mazzy | LinsTLHandGFfan | "2023-12-14T02:45:39Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-12-14T02:44:27Z" | ---
license: openrail
---
|
mogam/bert-base-cased-wikitext2 | mogam | "2023-12-14T02:55:05Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2023-12-14T02:45:48Z" | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-wikitext2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 7.0731
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 294 | 7.2902 |
| 7.6498 | 2.0 | 588 | 7.1289 |
| 7.6498 | 3.0 | 882 | 7.0701 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.13.3
|
GALAIXYZ/GALWorld | GALAIXYZ | "2023-12-14T02:54:04Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-12-14T02:47:27Z" | ---
license: creativeml-openrail-m
---
|
LinsTLHandGFfan/Nina | LinsTLHandGFfan | "2023-12-14T02:49:51Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-12-14T02:48:59Z" | ---
license: openrail
---
|
yixuantt/InvestLM-33b-awq | yixuantt | "2023-12-14T03:43:59Z" | 0 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"finance",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-14T02:56:12Z" | ---
license: llama2
language:
- en
tags:
- finance
---
# InvestLM
This is the repo for a new financial-domain large language model, InvestLM, tuned from LLaMA-33B [1] using a carefully curated instruction dataset related to financial investment. Below we provide guidance on how to use InvestLM for inference.
Github Link: [InvestLM](https://github.com/AbaciNLP/InvestLM)
# About AWQ
AWQ [2] is an efficient, accurate, and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference.
It is supported by:
1. AutoAWQ: AutoAWQ is an easy-to-use package for AWQ 4-bit quantized models. [AutoAWQ Github link](https://github.com/casper-hansen/AutoAWQ)
```
pip install autoawq
```
2. vLLM: vLLM is a Python library that contains pre-compiled C++ and CUDA (11.8) binaries. We can use it to run offline inference or to serve the model as an endpoint. It offers continuous batching and much higher (~10x) throughput, but it is more complex to set up. [vllm Doc](https://vllm.readthedocs.io/en/latest/getting_started/installation.html)
```
# (Optional) Create a new conda environment.
conda create -n myenv python=3.8 -y
conda activate myenv
# Install vLLM.
pip install vllm
```
3. Additional options: these are some Python libraries integrated with vLLM.
* Fastchat: FastChat is an open platform for training, serving, and evaluating large language model based chatbots. We can use vLLM as an optimized worker implementation in FastChat. [Fastchat Github](https://github.com/lm-sys/FastChat)
* aphrodite-engine: Aphrodite is the official backend engine for PygmalionAI. It is designed to serve as the inference endpoint for the PygmalionAI website. [Aphrodite-engine Github](https://github.com/PygmalionAI/aphrodite-engine)
# Inference
Please use the following command to log in hugging face first.
```
huggingface-cli login
```
## Prompt template
```
PROMPT_DICT = {
    "prompt_input": (
        "Below is an instruction that describes a task, paired with further context. "
        "Write a response that appropriately completes the request.\n\n"
        "Instruction:\n{instruction}\n\n Input:\n{input}\n\n Response:"
    ),
    "prompt_no_input": (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "Instruction:\n{instruction}\n\nResponse:"
    ),
}
```
## How to use this AWQ model from Python code
```
# Please run the following command in CLI.
# pip install autoawq transformers
from transformers import AutoTokenizer
from awq import AutoAWQForCausalLM
# Inference Template
PROMPT_DICT = {
    "prompt_input": (
        "Below is an instruction that describes a task, paired with further context. "
        "Write a response that appropriately completes the request.\n\n"
        "Instruction:\n{instruction}\n\n Input:\n{input}\n\n Response:"
    ),
    "prompt_no_input": (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "Instruction:\n{instruction}\n\nResponse:"
    ),
}

def generate_prompt(instruction, input=None):
    if input:
        return PROMPT_DICT["prompt_input"].format(instruction=instruction, input=input)
    else:
        return PROMPT_DICT["prompt_no_input"].format(instruction=instruction)

tokenizer = AutoTokenizer.from_pretrained('yixuantt/InvestLM-awq', use_fast=False)
tokenizer.pad_token = tokenizer.unk_token
model = AutoAWQForCausalLM.from_quantized('yixuantt/InvestLM-awq', fuse_layers=False)

print("\n\n*** Generate:")

tokens = tokenizer(
    generate_prompt(instruction="Tell me about finance."),
    return_tensors='pt'
).input_ids.cuda()

# Generate output
generation_output = model.generate(
    tokens,
    do_sample=True,
    temperature=0.1,
    top_p=0.75,
    top_k=40,
    repetition_penalty=1.1,
    max_new_tokens=512
)

print("Output: ", tokenizer.decode(generation_output[0]))
```
## Serving this model from vLLM
* Please ensure you are using vLLM version 0.2 or later.
* When using vLLM as a server, pass the --quantization awq parameter.
* At the time of writing, vLLM AWQ does not support loading models in bfloat16, so to ensure compatibility with all models, also pass --dtype float16.
For example:
```
python3 -m vllm.entrypoints.api_server --model 'yixuantt/InvestLM-awq' --quantization awq --dtype float16
```
When using vLLM from Python code, again pass the quantization=awq and dtype=float16 parameters.
```
from vllm import LLM, SamplingParams
# Inference Template
PROMPT_DICT = {
"prompt_input": (
"Below is an instruction that describes a task, paired with further context. "
"Write a response that appropriately completes the request.\n\n"
"Instruction:\n{instruction}\n\n Input:\n{input}\n\n Response:"
),
"prompt_no_input": (
"Below is an instruction that describes a task. "
"Write a response that appropriately completes the request.\n\n"
"Instruction:\n{instruction}\n\nResponse:"
),
}
def generate_prompt(instruction, input=None):
    if input:
        return PROMPT_DICT["prompt_input"].format(instruction=instruction, input=input)
    else:
        return PROMPT_DICT["prompt_no_input"].format(instruction=instruction)
question = generate_prompt(instruction="Tell me about finance.")
prompts = [
question,
]
sampling_params = SamplingParams(temperature=0.1, top_p=0.75)
llm = LLM(model="yixuantt/InvestLM-awq", quantization="awq", dtype="float16")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
# References
[1] Touvron, Hugo, et al. "Llama: Open and efficient foundation language models." arXiv preprint arXiv:2302.13971 (2023).
[2] Lin, J., Tang, J., et al. "AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration" arXiv preprint arXiv:2306.00978 (2023).
---
license: llama2
--- |
Thien0103/enhanced_trocr_drugs_10epochs | Thien0103 | "2023-12-14T03:22:52Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T02:56:12Z" | Entry not found |
GridTST/weather_96_96 | GridTST | "2023-12-14T02:59:17Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T02:59:16Z" | Entry not found |
GridTST/weather_96_192 | GridTST | "2023-12-14T02:59:19Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T02:59:18Z" | Entry not found |
GridTST/weather_96_336 | GridTST | "2023-12-14T02:59:21Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T02:59:19Z" | Entry not found |
GridTST/weather_96_720 | GridTST | "2023-12-14T02:59:23Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T02:59:21Z" | Entry not found |
GridTST/weather_192_96 | GridTST | "2023-12-14T02:59:25Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T02:59:24Z" | Entry not found |
GridTST/weather_192_192 | GridTST | "2023-12-14T02:59:27Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T02:59:26Z" | Entry not found |
GridTST/weather_192_336 | GridTST | "2023-12-14T02:59:29Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T02:59:27Z" | Entry not found |
GridTST/weather_192_720 | GridTST | "2023-12-14T02:59:31Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T02:59:29Z" | Entry not found |
GridTST/weather_512_96 | GridTST | "2023-12-14T02:59:33Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T02:59:32Z" | Entry not found |
GridTST/weather_512_192 | GridTST | "2023-12-14T02:59:35Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T02:59:34Z" | Entry not found |
GridTST/weather_512_336 | GridTST | "2023-12-14T02:59:37Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T02:59:36Z" | Entry not found |
GridTST/weather_512_720 | GridTST | "2023-12-14T02:59:39Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T02:59:37Z" | Entry not found |
GridTST/solar_96_96 | GridTST | "2023-12-14T02:59:41Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T02:59:40Z" | Entry not found |
GridTST/solar_96_192 | GridTST | "2023-12-14T02:59:42Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T02:59:41Z" | Entry not found |
GridTST/solar_96_336 | GridTST | "2023-12-14T02:59:44Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T02:59:43Z" | Entry not found |
GridTST/solar_96_720 | GridTST | "2023-12-14T02:59:45Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T02:59:44Z" | Entry not found |
GridTST/solar_192_96 | GridTST | "2023-12-14T02:59:47Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T02:59:46Z" | Entry not found |
GridTST/solar_192_192 | GridTST | "2023-12-14T02:59:49Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T02:59:48Z" | Entry not found |
GridTST/solar_192_336 | GridTST | "2023-12-14T02:59:51Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T02:59:49Z" | Entry not found |
GridTST/solar_192_720 | GridTST | "2023-12-14T02:59:58Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T02:59:51Z" | Entry not found |
GridTST/solar_512_96 | GridTST | "2023-12-14T03:00:00Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T02:59:59Z" | Entry not found |
GridTST/solar_512_192 | GridTST | "2023-12-14T03:00:01Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T03:00:00Z" | Entry not found |
GridTST/solar_512_336 | GridTST | "2023-12-14T03:00:03Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T03:00:02Z" | Entry not found |
GridTST/solar_512_720 | GridTST | "2023-12-14T03:00:07Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T03:00:04Z" | Entry not found |
GridTST/electricity_96_96 | GridTST | "2023-12-14T03:00:09Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T03:00:08Z" | Entry not found |
GridTST/electricity_96_192 | GridTST | "2023-12-14T03:00:11Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T03:00:09Z" | Entry not found |
GridTST/electricity_96_336 | GridTST | "2023-12-14T03:00:13Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T03:00:11Z" | Entry not found |
GridTST/electricity_96_720 | GridTST | "2023-12-14T03:00:14Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T03:00:13Z" | Entry not found |
GridTST/electricity_192_96 | GridTST | "2023-12-14T03:00:17Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T03:00:15Z" | Entry not found |
GridTST/electricity_192_192 | GridTST | "2023-12-14T03:00:19Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T03:00:17Z" | Entry not found |
GridTST/electricity_192_336 | GridTST | "2023-12-14T03:00:20Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T03:00:19Z" | Entry not found |
GridTST/electricity_192_720 | GridTST | "2023-12-14T03:00:22Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T03:00:21Z" | Entry not found |
GridTST/electricity_512_96 | GridTST | "2023-12-14T03:02:24Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T03:02:23Z" | Entry not found |
GridTST/electricity_512_192 | GridTST | "2023-12-14T03:02:26Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gridtst",
"endpoints_compatible",
"region:us"
] | null | "2023-12-14T03:02:25Z" | Entry not found |