modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
Dremmar/serenexl_v15 | Dremmar | "2024-01-15T10:27:59Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T10:15:41Z" | Entry not found |
hamidei/ag | hamidei | "2024-01-15T10:21:11Z" | 0 | 0 | null | [
"arxiv:1910.09700",
"region:us"
] | null | "2024-01-15T10:16:14Z" | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** hamidei
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** AI model
- **Language(s) (NLP):** English
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hkro/phi-2-aes-phi-2-v0.1 | hkro | "2024-01-15T10:21:29Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/phi-2",
"region:us"
] | null | "2024-01-15T10:21:02Z" | ---
library_name: peft
base_model: microsoft/phi-2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
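The card leaves this section empty; as a hedged, untested sketch (the base model `microsoft/phi-2` and the adapter repo id come from the metadata above, everything else is an assumption), the adapter could be loaded like this:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the declared base model, then apply this repository's PEFT adapter on top.
base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "hkro/phi-2-aes-phi-2-v0.1")
```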
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
hxxris/haaris-transformer-final | hxxris | "2024-01-15T10:39:39Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"endpoints_compatible",
"region:us"
] | audio-classification | "2024-01-15T10:23:57Z" | Entry not found |
Seawolf/mistral_7b_xiwu | Seawolf | "2024-01-15T10:24:32Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T10:24:32Z" | Entry not found |
karim27/l | karim27 | "2024-01-15T10:27:24Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-01-15T10:27:24Z" | ---
license: apache-2.0
---
|
Yehoon/tmp | Yehoon | "2024-01-15T10:30:20Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2024-01-15T10:30:12Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (an equivalent loading sketch follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
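As a minimal sketch, assuming recent `transformers` and `bitsandbytes` releases, the config above maps onto `BitsAndBytesConfig` roughly as follows (the base model id is a hypothetical placeholder, since the card does not name it):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and bfloat16 compute,
# mirroring the training-time config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "base-model-id",  # hypothetical; the card does not name the base model
    quantization_config=bnb_config,
)
```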
### Framework versions
- PEFT 0.5.0.dev0
|
youdiniplays/ceb-tl-model | youdiniplays | "2024-01-16T07:27:27Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-01-15T10:31:35Z" | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: ceb-tl-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ceb-tl-model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6649
- Bleu: 3.6178
- Gen Len: 18.154
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
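As a hedged reconstruction rather than the author's actual script, these settings correspond roughly to the following `Seq2SeqTrainingArguments`; anything not listed (including the Adam betas/epsilon, which match the defaults) is left at its default value:

```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="ceb-tl-model",
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    fp16=True,  # "Native AMP" mixed precision
    predict_with_generate=True,  # needed for BLEU / generation-length metrics
)
```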
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 1.0551 | 1.0 | 6516 | 0.9019 | 2.8382 | 18.183 |
| 0.879 | 2.0 | 13032 | 0.7772 | 3.1412 | 18.182 |
| 0.7844 | 3.0 | 19548 | 0.7146 | 3.4508 | 18.18 |
| 0.728 | 4.0 | 26064 | 0.6773 | 3.5651 | 18.17 |
| 0.6838 | 5.0 | 32580 | 0.6649 | 3.6178 | 18.154 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Federic/lora-fine-tuning-llama2-SQL-lora-1000-2-dataset-size-open-hermes | Federic | "2024-01-16T09:35:00Z" | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | "2024-01-15T10:32:00Z" | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- generated_from_trainer
model-index:
- name: lora-fine-tuning-llama2-SQL-lora-1000-2-dataset-size-open-hermes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora-fine-tuning-llama2-SQL-lora-1000-2-dataset-size-open-hermes
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 4
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
shivanikerai/Llama-2-7b-chat-hf-adapter-sku-title-ner-generation-v1.2 | shivanikerai | "2024-01-15T10:32:36Z" | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | "2024-01-15T10:32:28Z" | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
LinxuanPastel/Oriol2022RVC | LinxuanPastel | "2024-01-15T10:44:56Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T10:32:58Z" | Entry not found |
avemio-digital/lora_model_scipy_merged | avemio-digital | "2024-01-15T10:38:03Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-15T10:33:37Z" | Entry not found |
shivanikerai/Llama-2-7b-chat-hf-sku-title-ner-generation-v1.2 | shivanikerai | "2024-01-15T10:40:11Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-15T10:34:19Z" | Entry not found |
KushagraSingh/paradigm | KushagraSingh | "2024-01-15T10:39:58Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-01-15T10:39:58Z" | ---
license: mit
---
|
kwwww/bert-base-uncased_64_40000 | kwwww | "2024-01-15T10:43:26Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T10:43:26Z" | Entry not found |
DMLuck/phi_finetuned2.0 | DMLuck | "2024-01-16T12:55:59Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"phi",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/phi-1_5",
"license:mit",
"region:us"
] | null | "2024-01-15T10:46:52Z" | ---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: microsoft/phi-1_5
model-index:
- name: phi_finetuned2.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi_finetuned2.0
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 12
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0 |
jvh/Mistral-Hermes-GEITje | jvh | "2024-01-15T10:51:17Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Rijgersberg/GEITje-7B-chat-v2",
"base_model:argilla/distilabeled-Hermes-2.5-Mistral-7B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-15T10:48:21Z" | ---
base_model:
- Rijgersberg/GEITje-7B-chat-v2
- argilla/distilabeled-Hermes-2.5-Mistral-7B
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Rijgersberg/GEITje-7B-chat-v2](https://huggingface.co/Rijgersberg/GEITje-7B-chat-v2)
* [argilla/distilabeled-Hermes-2.5-Mistral-7B](https://huggingface.co/argilla/distilabeled-Hermes-2.5-Mistral-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Rijgersberg/GEITje-7B-chat-v2
layer_range: [0, 32]
- model: argilla/distilabeled-Hermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: argilla/distilabeled-Hermes-2.5-Mistral-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
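As a hedged usage note (assuming a standard `mergekit` installation; the file name and output path are illustrative), a config like the one above is typically applied with the `mergekit-yaml` entry point:

```
mergekit-yaml config.yml ./merged-model --cuda
```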
|
elnasharomar2/tashkeel_Gpt | elnasharomar2 | "2024-01-15T10:50:52Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T10:50:52Z" | Entry not found |
hxxris/haaris-transformer-final-1 | hxxris | "2024-01-15T10:59:13Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"endpoints_compatible",
"region:us"
] | audio-classification | "2024-01-15T10:53:19Z" | Entry not found |
dalyaff/phi2-viggo-finetune | dalyaff | "2024-01-15T10:54:17Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"en",
"dataset:GEM/viggo",
"base_model:microsoft/phi-2",
"region:us"
] | null | "2024-01-15T10:54:12Z" | ---
language:
- en
library_name: peft
tags:
- generated_from_trainer
datasets:
- GEM/viggo
base_model: microsoft/phi-2
model-index:
- name: phi-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the GEM/viggo dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2330
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.917 | 0.04 | 50 | 1.4649 |
| 0.7037 | 0.08 | 100 | 0.4905 |
| 0.4209 | 0.12 | 150 | 0.3564 |
| 0.3534 | 0.16 | 200 | 0.3127 |
| 0.311 | 0.2 | 250 | 0.2940 |
| 0.2944 | 0.24 | 300 | 0.2798 |
| 0.2838 | 0.27 | 350 | 0.2710 |
| 0.2744 | 0.31 | 400 | 0.2634 |
| 0.2657 | 0.35 | 450 | 0.2577 |
| 0.2692 | 0.39 | 500 | 0.2513 |
| 0.263 | 0.43 | 550 | 0.2475 |
| 0.2664 | 0.47 | 600 | 0.2451 |
| 0.2535 | 0.51 | 650 | 0.2421 |
| 0.2594 | 0.55 | 700 | 0.2396 |
| 0.234 | 0.59 | 750 | 0.2379 |
| 0.2383 | 0.63 | 800 | 0.2361 |
| 0.2419 | 0.67 | 850 | 0.2350 |
| 0.2448 | 0.71 | 900 | 0.2337 |
| 0.241 | 0.74 | 950 | 0.2332 |
| 0.219 | 0.78 | 1000 | 0.2330 |
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0 |
nabscut/nabs | nabscut | "2024-01-15T10:55:11Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T10:55:10Z" | Entry not found |
Xenova/pix2struct-tiny-random | Xenova | "2024-03-20T22:46:22Z" | 0 | 0 | transformers.js | [
"transformers.js",
"onnx",
"pix2struct",
"text2text-generation",
"region:us"
] | text2text-generation | "2024-01-15T10:57:22Z" | ---
library_name: transformers.js
---
https://huggingface.co/fxmarty/pix2struct-tiny-random with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). |
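As a hedged illustration of the Optimum conversion the note above recommends (assuming a recent `optimum` install with the ONNX exporter extras; the output directory name is arbitrary):

```
optimum-cli export onnx --model fxmarty/pix2struct-tiny-random onnx/
```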
Xenova/pix2struct-textcaps-base | Xenova | "2024-03-20T22:47:31Z" | 0 | 0 | transformers.js | [
"transformers.js",
"onnx",
"pix2struct",
"text2text-generation",
"region:us"
] | text2text-generation | "2024-01-15T10:57:27Z" | ---
library_name: transformers.js
---
https://huggingface.co/google/pix2struct-textcaps-base with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). |
Xenova/deplot | Xenova | "2024-03-20T22:48:40Z" | 0 | 0 | transformers.js | [
"transformers.js",
"onnx",
"pix2struct",
"text2text-generation",
"region:us"
] | text2text-generation | "2024-01-15T10:58:19Z" | ---
library_name: transformers.js
---
https://huggingface.co/google/deplot with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). |
iohadrubin/llama-c5-1b | iohadrubin | "2024-01-15T12:31:44Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-15T11:00:05Z" | Entry not found |
z24s1q/SDUGU-factory | z24s1q | "2024-01-15T11:00:32Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-01-15T11:00:29Z" | ---
license: apache-2.0
---
|
golesheed/whisper-small-hi | golesheed | "2024-01-16T08:47:08Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"hi",
"base_model:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-01-15T11:02:00Z" | ---
language:
- hi
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Small Hi - Sanchit Gandhi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set (a metric sketch follows the list):
- Loss: 0.4300
- Wer: 34.1192
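As a brief, hedged sketch of how a WER figure like the one above is typically computed with the `evaluate` library (the prediction and reference lists here are placeholders, not the card's actual data):

```python
import evaluate

wer_metric = evaluate.load("wer")

# Placeholder transcripts; in practice these come from model generation
# and the evaluation set's reference texts.
predictions = ["hello world"]
references = ["hello word"]

# evaluate returns a fraction; multiply by 100 for a percentage like 34.1192.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```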
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0824 | 2.44 | 1000 | 0.2958 | 35.3424 |
| 0.0218 | 4.89 | 2000 | 0.3518 | 34.1954 |
| 0.001 | 7.33 | 3000 | 0.4082 | 34.1446 |
| 0.0005 | 9.78 | 4000 | 0.4300 | 34.1192 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0
|
hxxris/haaris-transformer-final-2 | hxxris | "2024-01-15T11:14:02Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"endpoints_compatible",
"region:us"
] | audio-classification | "2024-01-15T11:03:39Z" | Entry not found |
kenmaro/my-wizardMath-weight-huggingface-repo | kenmaro | "2024-01-16T00:23:08Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-01-15T11:06:18Z" | ---
license: apache-2.0
---
|
idontgoddamn/KurosakiKoyuki | idontgoddamn | "2024-01-15T11:09:05Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T11:08:49Z" | Entry not found |
IveniumMarketing/im_map_marketo | IveniumMarketing | "2024-01-15T11:09:23Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-01-15T11:09:23Z" | ---
license: openrail
---
|
iqranaz/WeatherPrediction | iqranaz | "2024-01-15T11:10:52Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T11:10:52Z" | Entry not found |
brainer/ecg-detect | brainer | "2024-01-18T14:59:02Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50-dc5",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | "2024-01-15T11:11:34Z" | ---
license: apache-2.0
base_model: facebook/detr-resnet-50-dc5
tags:
- generated_from_trainer
model-index:
- name: ecg-detect
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ecg-detect
This model is a fine-tuned version of [facebook/detr-resnet-50-dc5](https://huggingface.co/facebook/detr-resnet-50-dc5) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-10
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
ArijBRH/outputs | ArijBRH | "2024-01-15T11:12:18Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T11:12:18Z" | Entry not found |
slplab/whisper-large-v2_asd-syl-240115 | slplab | "2024-01-15T11:13:13Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T11:13:13Z" | Entry not found |
IB13/opt-350m_sft | IB13 | "2024-01-15T11:16:38Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T11:16:38Z" | Entry not found |
JordiBM/sisi | JordiBM | "2024-01-15T11:17:37Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-01-15T11:17:37Z" | ---
license: apache-2.0
---
|
Artazar/RC_3D_V13 | Artazar | "2024-01-15T11:26:49Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T11:18:04Z" | Entry not found |
MaxEnergyCapsule/MaxEnergyCapsule | MaxEnergyCapsule | "2024-01-15T11:21:01Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T11:18:56Z" | <p><a href="https://www.nutritioncrawler.com/MaxEnerPaki"> <img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiWmmKD99UfE0cLRWp5kSsyAN4VWZ8SSwSAhkBlGSHj7S8jMGIh4TMU42MYXHsR55GdWCpqOS9PmTzNnt4fevgrRzNlE82jxB4XUlGxbUL1uU7pz6GaodZR3kN6RtTq9RVDVpy0GZFLpuh2hTp832OrmzLqGHl_15gSs4ftdYDW_g4dnkUMIov1-sUaBRc/w664-h521/Max%20Energy%20pakistan.png" alt="enter image description here"> </a></p>
➢ Product Name — Max Energy
➢ Category — Male Enhancement
➢ Main Benefits — Improves health and sexual performance
➢ Composition — Natural Organic Compound
➢ Side Effects — NA
➢ Final Rating — 4.8
➢ Availability — Online
➢ Offers & Discounts — SAVE TODAY! SHOP NOW TO BUY: SPECIAL OFFER!!!
What is Max Energy?
In recent years, scientific research into natural enhancement has grown enormously and has begun to unravel nature's hidden secrets. Natural compounds, herbs, and substances are being studied for their potential to fuel our bodies, improve our strength and stamina, and even enhance pleasure. Whether the goal is a sports-related achievement, overcoming physical obstacles, or ensuring one's well-being, natural enhancements are proving to be a potent ally. The reasons are manifold, but chief among them is the human body's capacity to use natural substances more effectively and with fewer detrimental side effects.
Max Energy Buy now!! Click the Link Below for more information and get 50% discount now !! hurry up !!
Read More: https://www.nutritioncrawler.com/MaxEnerPaki
Keywords: Max Energy, Max Energy Pill, Max Energy Capsules, Max Energy price, Max Energy reviews, Max Energy ingredients, Max Energy benefits, Max Energy Side Effects, Max Energy Capsules Price, Max Energy Capsules Reviews, Max Energy Blend, Max Energy Complaint, Where to buy Max Energy, How to use Max Energy, Max Energy Cost, Max Energy works, Max Energy Forum, Max Energy original, Max Energy Pharmacy
https://www.nutritioncrawler.com/MaxEnerPaki
https://sites.google.com/view/max-energy-capsule/home
https://healthtoned.blogspot.com/2024/01/max-energy-male-enhancement-capsule.html
https://medium.com/@healthytalk24x7/max-energy-capsule-49a25517a615
https://medium.com/@healthytalk24x7/max-energy-male-enhancement-capsule-pakistan-price-reviews-benefits-ingredients-cost-b1e21e199f8d
https://www.weddingwire.com/website/max-energy-and-capsule
https://www.weddingwire.com/website/max-energy-and-capsule/maxenergy-2
https://infogram.com/max-energy-1h1749vpypnvl6z?live
https://softisenilspain.hashnode.dev/max-energy
https://sway.cloud.microsoft/FRQdmLLhHu7CFlc9
https://maxenergycapsule.company.site/
https://gamma.app/docs/Max-Energy-Male-Enhancement-Capsule-Pakistan-Price-reviews-Benefi-qga9p6uf7qjs736?mode=doc
https://groups.google.com/g/snshine/c/S4t9rb_rYOg
https://healthytalk24x7.wixsite.com/sunshine/post/max-energy-male-enhancement-capsule-pakistan-price-reviews-benefits-ingredients-cost
https://community.thebatraanumerology.com/user/maxenergycapsule
https://community.thebatraanumerology.com/post/max-energy-male-enhancement-capsule-pakistan-price-reviews-benefits-ingredi--65a514032f84f85c13a303dd
https://enkling.com/read-blog/12805
https://replit.com/@maxenergycapsul
https://nepal.tribe.so/post/max-energy-male-enhancement-capsule-pakistan-price-reviews-benefits-ingredi--65a514892b5077b32f57bc46
|
dalyaff/phi2-viggo-finetun | dalyaff | "2024-01-15T11:24:55Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-15T11:22:00Z" | Entry not found |
hxxris/haaris-transformer-final-3 | hxxris | "2024-01-15T11:36:25Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"endpoints_compatible",
"region:us"
] | audio-classification | "2024-01-15T11:25:51Z" | Entry not found |
axra/mistral-4x7B | axra | "2024-01-15T11:39:20Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-15T11:26:41Z" | Entry not found |
Joe1111/bert-base-chinese | Joe1111 | "2024-01-15T11:27:42Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T11:27:42Z" | Entry not found |
Abdoulahi07/results | Abdoulahi07 | "2024-01-15T11:28:36Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | "2024-01-15T11:28:21Z" | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 100
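Given the `trl`/`sft` tags and the PEFT metadata above, a hedged reconstruction of this run might look like the following; the dataset, text field, and LoRA settings are placeholders, since the card does not state them:

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

# Arguments mirroring the hyperparameters listed above; unlisted values stay at defaults.
args = TrainingArguments(
    output_dir="results",
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,
    warmup_steps=100,
    max_steps=100,
    lr_scheduler_type="linear",
    seed=42,
)

trainer = SFTTrainer(
    model="mistralai/Mistral-7B-v0.1",  # declared base model; loaded by TRL
    args=args,
    train_dataset=load_dataset("json", data_files="train.jsonl")["train"],  # placeholder dataset
    dataset_text_field="text",  # placeholder field name
    peft_config=LoraConfig(task_type="CAUSAL_LM"),  # placeholder LoRA settings
)
trainer.train()
```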
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0 |
diogodsa/ia-ibovespa-ri-tech | diogodsa | "2024-01-15T11:29:34Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T11:29:34Z" | Entry not found |
Destiny0621/rl_course_vizdoom_health_gathering_supreme | Destiny0621 | "2024-01-15T11:39:41Z" | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-01-15T11:39:30Z" | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.24 +/- 3.97
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Destiny0621/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
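For instance (a hedged example based on the Sample-Factory docs; substitute your own Hub username):

```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --push_to_hub --hf_repository=<username>/rl_course_vizdoom_health_gathering_supreme
```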
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note that you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps at which it previously concluded.
|
farquasar/whisper-large-medgical-augmented | farquasar | "2024-01-16T17:52:19Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"pt",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-medium",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-01-15T11:43:52Z" | ---
language:
- pt
license: apache-2.0
base_model: openai/whisper-medium
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: medgical pt large augmented
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medgical pt large augmented
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the medgical large synthetic augmented dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 14
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- training_steps: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.3.2
- Tokenizers 0.15.0
|
hxxris/haaris-transformer-final-model | hxxris | "2024-01-15T11:58:02Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"endpoints_compatible",
"region:us"
] | audio-classification | "2024-01-15T11:43:57Z" | Entry not found |
aksds/checkpoint-100 | aksds | "2024-01-15T13:24:29Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-01-15T11:44:23Z" | Entry not found |
TunahanGokcimen/Question-Answering-Electra-base | TunahanGokcimen | "2024-01-15T13:27:50Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"electra",
"question-answering",
"generated_from_trainer",
"base_model:deepset/electra-base-squad2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2024-01-15T11:47:52Z" | ---
license: cc-by-4.0
base_model: deepset/electra-base-squad2
tags:
- generated_from_trainer
model-index:
- name: Question-Answering-Electra-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Question-Answering-Electra-base
This model is a fine-tuned version of [deepset/electra-base-squad2](https://huggingface.co/deepset/electra-base-squad2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Maaeedd/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 | Maaeedd | "2024-01-28T05:32:54Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T11:54:21Z" | Entry not found |
Sacralet/llama2-7B | Sacralet | "2024-01-15T12:11:28Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-15T11:54:30Z" | ---
license: apache-2.0
---
|
kimjisoobkkai/MIYEON_GIDlE_1000EPOCHES | kimjisoobkkai | "2024-01-15T11:54:35Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T11:54:35Z" | Entry not found |
Mik99/phi-2_test_01 | Mik99 | "2024-01-15T11:56:00Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/phi-2",
"region:us"
] | null | "2024-01-15T11:55:56Z" | ---
library_name: peft
base_model: microsoft/phi-2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
gbsim/ddpm-ema-cifar-32 | gbsim | "2024-01-17T06:05:53Z" | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"diffusers:DDPMPipeline",
"region:us"
] | null | "2024-01-15T11:56:10Z" | Entry not found |
prp131/my_awesome_billsum_model | prp131 | "2024-01-15T12:18:13Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-01-15T11:58:13Z" | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5437
- Rouge1: 0.1434
- Rouge2: 0.0526
- Rougel: 0.1205
- Rougelsum: 0.1203
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.8449 | 0.1267 | 0.0375 | 0.1083 | 0.1082 | 19.0 |
| No log | 2.0 | 124 | 2.6263 | 0.1384 | 0.0484 | 0.1163 | 0.1163 | 19.0 |
| No log | 3.0 | 186 | 2.5599 | 0.1423 | 0.0505 | 0.1194 | 0.1192 | 19.0 |
| No log | 4.0 | 248 | 2.5437 | 0.1434 | 0.0526 | 0.1205 | 0.1203 | 19.0 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
meenham/tapt-roberta-large-bs256-ep100 | meenham | "2024-01-15T12:10:17Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-01-15T11:58:29Z" | Entry not found |
Tsuinzues/rath | Tsuinzues | "2024-01-15T12:03:10Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-01-15T12:02:57Z" | ---
license: openrail
---
|
ElonTusk2001/zephyr-7b-sft-qlora | ElonTusk2001 | "2024-01-18T20:53:26Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"mistral",
"alignment-handbook",
"generated_from_trainer",
"trl",
"sft",
"dataset:HuggingFaceH4/ultrachat_200k",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2024-01-15T12:03:54Z" | ---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- sft
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrachat_200k
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: zephyr-7b-sft-qlora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-sft-qlora
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the HuggingFaceH4/ultrachat_200k dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9523
## Model description
More information needed
## Intended uses & limitations
More information needed
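No usage example is provided; here is a minimal loading sketch, assuming the standard `peft`/`transformers` API (attaching the QLoRA adapter to the Mistral base in bfloat16):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the QLoRA adapter weights from this repo
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "ElonTusk2001/zephyr-7b-sft-qlora")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
```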
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.913 | 1.0 | 17428 | 0.9523 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.1
- Datasets 2.16.1
- Tokenizers 0.15.0 |
Spacyzipa/sam_15_01_24_neom | Spacyzipa | "2024-01-17T10:30:53Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"endpoints_compatible",
"region:us"
] | null | "2024-01-15T12:05:01Z" | Entry not found |
wenjing1205/test-dialogue-summarization | wenjing1205 | "2024-01-15T12:23:24Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-01-15T12:05:46Z" | ---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: test-dialogue-summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-dialogue-summarization
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0710
- Rouge: {'rouge1': 46.5916, 'rouge2': 21.9208, 'rougeL': 22.0124, 'rougeLsum': 22.0124}
- Bert Score: 0.8784
- Bleurt 20: -0.7903
- Gen Len: 15.58
## Model description
More information needed
## Intended uses & limitations
More information needed
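The card ships no usage snippet; a minimal sketch, assuming the standard `transformers` seq2seq API (the sample dialogue is illustrative):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("wenjing1205/test-dialogue-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("wenjing1205/test-dialogue-summarization")

dialogue = "A: Are we still on for lunch today? B: Yes, 12:30 at the usual place."
inputs = tokenizer(dialogue, return_tensors="pt")
# Gen Len in the results below is ~15 tokens, so keep generations short
summary_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```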
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 7
- eval_batch_size: 7
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge | Bert Score | Bleurt 20 | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------------------------------------------------------------------------------:|:----------:|:---------:|:-------:|
| 2.5288 | 1.0 | 186 | 2.1809 | {'rouge1': 48.087, 'rouge2': 21.7173, 'rougeL': 21.5447, 'rougeLsum': 21.5447} | 0.877 | -0.8143 | 15.63 |
| 2.3277 | 2.0 | 372 | 2.1230 | {'rouge1': 47.3856, 'rouge2': 21.3069, 'rougeL': 21.6399, 'rougeLsum': 21.6399} | 0.8788 | -0.786 | 15.54 |
| 2.2381 | 3.0 | 558 | 2.0912 | {'rouge1': 45.9843, 'rouge2': 21.1854, 'rougeL': 21.4006, 'rougeLsum': 21.4006} | 0.8776 | -0.817 | 15.235 |
| 2.2123 | 4.0 | 744 | 2.0761 | {'rouge1': 46.5269, 'rouge2': 21.7291, 'rougeL': 21.8936, 'rougeLsum': 21.8936} | 0.8785 | -0.7809 | 15.515 |
| 2.2443 | 5.0 | 930 | 2.0710 | {'rouge1': 46.5916, 'rouge2': 21.9208, 'rougeL': 22.0124, 'rougeLsum': 22.0124} | 0.8784 | -0.7903 | 15.58 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
ranaShams/trial | ranaShams | "2024-01-15T12:06:07Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T12:06:07Z" | Entry not found |
Mik99/phi-2_test_01_merged | Mik99 | "2024-01-15T12:20:49Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-15T12:07:13Z" | Entry not found |
kakooza/micho | kakooza | "2024-01-15T12:07:28Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T12:07:27Z" | Entry not found |
AleksDMR/My_ch | AleksDMR | "2024-01-22T09:31:10Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T12:07:42Z" | Entry not found |
Tuan22/22 | Tuan22 | "2024-01-15T12:08:24Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T12:08:24Z" | Entry not found |
Maaeedd/test_bug_temporary | Maaeedd | "2024-01-15T12:08:26Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T12:08:26Z" | Entry not found |
DenisTheDev/Openchat-Passthrough | DenisTheDev | "2024-01-15T12:13:53Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"openchat/openchat-3.5-1210",
"openchat/openchat-3.5-0106",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-15T12:08:44Z" | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- openchat/openchat-3.5-1210
- openchat/openchat-3.5-0106
---
# Openchat-Passthrough
Openchat-Passthrough is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [openchat/openchat-3.5-1210](https://huggingface.co/openchat/openchat-3.5-1210)
* [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
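The passthrough method simply stacks layer slices end to end: all 32 layers of openchat-3.5-1210 followed by layers 16-31 of openchat-3.5-0106, giving a 48-layer model. A tiny illustrative sketch of that bookkeeping (not mergekit code), mirroring the config shown in the Configuration section below:

```python
# Layer slices from the YAML configuration below (illustrative bookkeeping only)
slices = [
    ("openchat/openchat-3.5-1210", range(0, 32)),
    ("openchat/openchat-3.5-0106", range(16, 32)),
]
print(sum(len(r) for _, r in slices))  # 48 decoder layers after the passthrough merge
```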
## 🧩 Configuration
```yaml
slices:
- sources:
- model: openchat/openchat-3.5-1210
layer_range: [0, 32]
- sources:
- model: openchat/openchat-3.5-0106
layer_range: [16, 32]
merge_method: passthrough
dtype: bfloat16
``` |
hxxris/haaris-transformer-final-model1 | hxxris | "2024-01-15T12:23:52Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"endpoints_compatible",
"region:us"
] | audio-classification | "2024-01-15T12:09:59Z" | Entry not found |
walebadr/mamba-2.8b-SFT | walebadr | "2024-01-15T16:24:14Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-01-15T12:12:34Z" | ---
license: apache-2.0
---
This is the state-spaces mamba-2.8b model, fine-tuned using supervised fine-tuning (SFT) on the llama-2-7b-miniguanaco dataset.
To run inference on this model, run the following code:
```python
import torch
from transformers import AutoTokenizer
from mamba_ssm.models.mixer_seq_simple import MambaLMHeadModel

device = "cuda"

# Load the tokenizer (assumed here: state-spaces Mamba checkpoints ship no tokenizer,
# so the GPT-NeoX tokenizer they were trained with is used)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

# Load the model
model = MambaLMHeadModel.from_pretrained("walebadr/mamba-2.8b-SFT", dtype=torch.bfloat16, device=device)

user_message = "[INST] what is a language model? [/INST]"
input_ids = tokenizer(user_message, return_tensors="pt").input_ids.to(device)
out = model.generate(input_ids=input_ids, max_length=500, temperature=0.9, top_p=0.7, eos_token_id=tokenizer.eos_token_id)
decoded = tokenizer.batch_decode(out)
print("Model:", decoded[0])
```
### Model Evaluation
Coming soon
|
AswanthCManoj/azma-OpenHermes-2.5-Mistral-7B-agent-v1 | AswanthCManoj | "2024-01-16T11:56:25Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"region:us"
] | null | "2024-01-15T12:16:30Z" | ---
library_name: peft
base_model: teknium/OpenHermes-2.5-Mistral-7B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
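For reference, the same quantization settings expressed in code; a sketch assuming the standard `transformers` `BitsAndBytesConfig` API:

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the bitsandbytes quantization config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```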
### Framework versions
- PEFT 0.6.2
|
jen5000/whisper_checkpoint | jen5000 | "2024-01-15T12:18:14Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T12:18:14Z" | Entry not found |
beibeif/CartPole_v1 | beibeif | "2024-01-15T12:18:37Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2024-01-15T12:18:30Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole_v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
kxk254/my_awesome_model | kxk254 | "2024-01-15T12:23:48Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T12:23:48Z" | Entry not found |
DiolixMexico/DiolixMexico | DiolixMexico | "2024-01-15T12:26:42Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T12:24:37Z" | <p><a href="https://www.boxdrug.com/DioMexi"> <img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhzgZN9R7VEe7ofv789lKnSh9TTqPjWFuvn0z8_XS0zfnc5Rs24VoXT4MBqJwJqMo_bUFtuyBA2xxZY4raccPy0ro2kZyGX25v_rxJctKeiDYYJ8eyJQ1QUJToFWJAzUGHNs7w3Xu0uDgc6-croGb2jJA4LIP-8RvGe6tKz9fzuDI26g-2IrndK3fS8DNE/w604-h396/Diolix%20Caps%20-%20MX%20.png" alt="enter image description here"> </a></p>
Diolix capsules for diabetes! Buy in Mexico and get a discount! Read 2024 reviews and the price!
Diolix is the most effective diabetes formula, supporting better blood sugar control and, above all, overall good health.
Diolix: Buy now!! Click the link below for more information and get a 50% discount now! Hurry!!
Buy now: https://www.boxdrug.com/DioMexi
➢Product name: Diolix
➢Category - Diabetes
➢Main benefits: maintain blood sugar levels
➢Composition - Natural organic compound
➢Side effects - NA
➢Final rating - 4.8
➢Availability: online
➢Offers and discounts: SAVE TODAY! BUY NOW - SPECIAL OFFER!!!
What is Diolix for blood sugar?
Diolix is a dietary supplement formulated to help maintain balanced blood sugar levels. It is composed of a unique blend of natural ingredients, each chosen for its potential to support overall metabolic health and help manage the body's glucose levels.
https://www.boxdrug.com/DioMexi
https://sites.google.com/view/diolix-capsula-mexico/home
https://healthtoned.blogspot.com/2024/01/diolix-capsulas-para-diabetes-compre-en.html
https://medium.com/@healthytalk24x7/diolix-c%C3%A1psulas-para-diabetes-compre-en-mexico-y-obtenga-descuento-leer-rese%C3%B1as-2024-y-precio-73ae92b8d62a
https://medium.com/@healthytalk24x7/diolix-mexico-c499929fffb6
https://www.weddingwire.com/website/diolix-and-mexico
https://www.weddingwire.com/website/diolix-and-mexico/diolixmexico-2
https://infogram.com/diolix-mexico-1hnp27mw93m3n2g?live
https://softisenilspain.hashnode.dev/diolix-capsulas-para-diabetes-compre-en-mexico-y-obtenga-descuento-leer-resenas-2024-y-precio
https://sway.cloud.microsoft/nzFAXv1sfH6EY73g
https://diolixmexico.company.site/
https://gamma.app/docs/Diolix-Mexico-kivprcidvub25je?mode=doc
https://groups.google.com/g/snshine/c/DcmH_e_T6D8
https://healthytalk24x7.wixsite.com/sunshine/post/diolix-c%C3%A1psulas-para-diabetes-compre-en-mexico-y-obtenga-descuento-leer-rese%C3%B1as-2024-y-precio
https://community.thebatraanumerology.com/user/diolixmexico
https://community.thebatraanumerology.com/post/diolix-capsulas-para-diabetes-compre-en-mexico-y-obtenga-descuento-leer-res--65a523703b2331bb8a25d594
https://replit.com/@diolixmexico
https://nepal.tribe.so/post/diolix-capsulas-para-diabetes-compre-en-mexico-y-obtenga-descuento-leer-res--65a523f26b054453deb6eff7
https://enkling.com/read-blog/12821
|
Kurokenshin/flaviasayuri | Kurokenshin | "2024-01-15T12:45:41Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T12:26:33Z" | Entry not found |
jvh/Mistral-Openchat-GEITje | jvh | "2024-01-15T12:33:04Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:openchat/openchat-3.5-0106",
"base_model:Rijgersberg/GEITje-7B-chat-v2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-15T12:30:10Z" | ---
base_model:
- openchat/openchat-3.5-0106
- Rijgersberg/GEITje-7B-chat-v2
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
* [Rijgersberg/GEITje-7B-chat-v2](https://huggingface.co/Rijgersberg/GEITje-7B-chat-v2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Rijgersberg/GEITje-7B-chat-v2
layer_range: [0, 32]
- model: openchat/openchat-3.5-0106
layer_range: [0, 32]
merge_method: slerp
base_model: openchat/openchat-3.5-0106
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
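A minimal usage sketch for the merged checkpoint, assuming standard `transformers` generation (the Dutch prompt is a placeholder; the chat format of either parent model may work better in practice):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("jvh/Mistral-Openchat-GEITje")
model = AutoModelForCausalLM.from_pretrained("jvh/Mistral-Openchat-GEITje", device_map="auto")

inputs = tokenizer("Wat is een taalmodel?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```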
|
AHToone/llama-7b-qlora-ultrachat | AHToone | "2024-01-22T08:35:43Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"region:us"
] | null | "2024-01-15T12:30:22Z" | Entry not found |
leedddd/xlm-roberta-base-finetuned-panx-de | leedddd | "2024-01-15T12:35:09Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T12:35:08Z" | Entry not found |
felipesampaio/modelotommy | felipesampaio | "2024-01-15T13:17:03Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T12:35:37Z" | Entry not found |
dolo650/mistral_instruct_generation | dolo650 | "2024-01-15T12:37:28Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | "2024-01-15T12:37:25Z" | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mistral-7B-Instruct-v0.1
model-index:
- name: mistral_instruct_generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_instruct_generation
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5338
## Model description
More information needed
## Intended uses & limitations
More information needed
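No inference snippet is included; a compact loading sketch, assuming peft's `AutoPeftModelForCausalLM` convenience class (it resolves the Mistral-7B-Instruct base from the adapter config):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base model and attaches this adapter in one call
model = AutoPeftModelForCausalLM.from_pretrained(
    "dolo650/mistral_instruct_generation", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
```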
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4346 | 0.83 | 20 | 1.2796 |
| 1.3373 | 1.67 | 40 | 1.2653 |
| 1.1182 | 2.5 | 60 | 1.3080 |
| 0.9329 | 3.33 | 80 | 1.3794 |
| 0.8498 | 4.17 | 100 | 1.5338 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0 |
SZ0/Rosalina | SZ0 | "2024-01-15T12:41:51Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T12:39:36Z" | Entry not found |
dolo650/Mistral-7B-mosaicml-instruct-v3-500 | dolo650 | "2024-01-15T12:41:35Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T12:41:35Z" | Entry not found |
Omarqq/code_phi-2.7b1 | Omarqq | "2024-01-15T12:51:15Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-15T12:47:58Z" | Entry not found |
DataVare/OST-To-EML-Converter | DataVare | "2024-01-15T12:51:48Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T12:49:30Z" | The smartest way to directly import an OST file with all mail
folders with attachments into an EML is the DataVare OST to
EML Converter Tool. Before migrating, you can examine the live
preview, and you can easily import both single and numerous MBOX
files. It supports a variety of email clients, including those
supported by EML, such as Applemail, WLM, Thunderbird, etc.
The software supported Numerous versions of Outlook like - 2003, 2007, 2013, 2016, 2019, and 2021. Only user-specified OST files
are selected with the use of the advanced filtering key. After the
migration, the internal layout of the OST database is intact.
The application exports OST email data natively and
securely into EML file format. The utility is made simple,
quick, and precise by the features. It is compatible with every
version of Windows OS, including Windows 11, 10, 8, 8.1, 7, Vista, and
XP. There is no need to install Outlook or other programs for
the conversion. It is easy to use for both technical and
non-technical users due to its user-friendly interface.
Read More:-
https://www.datavare.com/software/ost-to-eml-converter-expert.html |
vitu98/Liam | vitu98 | "2024-01-15T12:52:40Z" | 0 | 0 | null | [
"license:unknown",
"region:us"
] | null | "2024-01-15T12:51:48Z" | ---
license: unknown
---
|
KingJulian687/q-FrozenLake-v1-4x4-noSlippery | KingJulian687 | "2024-01-15T12:53:30Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-01-15T12:53:28Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import pickle, gymnasium as gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id, filename):  # helper from the Deep RL Course notebook
    with open(hf_hub_download(repo_id=repo_id, filename=filename), "rb") as f:
        return pickle.load(f)  # the saved Q-table

model = load_from_hub(repo_id="KingJulian687/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
jeppe-style/distilbert-base-uncased-italian-cr-entry-classification | jeppe-style | "2024-01-15T12:54:47Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T12:54:47Z" | Entry not found |
jvh/Mistral-Openchat-GEITje-v2 | jvh | "2024-01-15T13:00:24Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Rijgersberg/GEITje-7B-chat-v2",
"base_model:openchat/openchat-3.5-0106",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-15T12:57:34Z" | ---
base_model:
- Rijgersberg/GEITje-7B-chat-v2
- openchat/openchat-3.5-0106
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Rijgersberg/GEITje-7B-chat-v2](https://huggingface.co/Rijgersberg/GEITje-7B-chat-v2)
* [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Rijgersberg/GEITje-7B-chat-v2
layer_range: [0, 32]
- model: openchat/openchat-3.5-0106
layer_range: [0, 32]
merge_method: slerp
base_model: Rijgersberg/GEITje-7B-chat-v2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
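For intuition, SLERP interpolates between the two parents' weight vectors along a great circle rather than a straight line; below is a minimal illustrative sketch of the formula (not mergekit's actual implementation), where `t` is the interpolation factor scheduled in the config above:

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    # Spherical linear interpolation between flattened weight vectors a and b
    a_unit, b_unit = a / np.linalg.norm(a), b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_unit, b_unit), -1.0, 1.0))
    if omega < eps:  # nearly parallel vectors: fall back to linear interpolation
        return (1 - t) * a + t * b
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)
```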
|
KingJulian687/My-Taxi-v3 | KingJulian687 | "2024-01-15T12:57:40Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-01-15T12:57:38Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: My-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import pickle, gymnasium as gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id, filename):  # helper from the Deep RL Course notebook
    with open(hf_hub_download(repo_id=repo_id, filename=filename), "rb") as f:
        return pickle.load(f)  # the saved Q-table

model = load_from_hub(repo_id="KingJulian687/My-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
gzbang-test-org/model1 | gzbang-test-org | "2024-01-15T12:58:05Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T12:58:05Z" | Entry not found |
jlvdoorn/whisper-tiny-atco2-asr | jlvdoorn | "2024-01-15T14:03:19Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-tiny",
"doi:10.57967/hf/1626",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-01-15T12:59:29Z" | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-tiny-atco2-asr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-atco2-asr
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0505
- Wer: 112.9893
## Model description
More information needed
## Intended uses & limitations
More information needed
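A minimal transcription sketch, assuming the standard `transformers` ASR pipeline (the audio file name is a placeholder; the model name suggests the ATCO2 air-traffic-control corpus, though the card lists the dataset as unknown):

```python
from transformers import pipeline

# Whisper fine-tune loaded through the automatic-speech-recognition pipeline
asr = pipeline("automatic-speech-recognition", model="jlvdoorn/whisper-tiny-atco2-asr")
print(asr("sample_atc_recording.wav")["text"])
```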
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.8611 | 12.5 | 50 | 1.5491 | 100.1779 |
| 0.5484 | 25.0 | 100 | 1.1962 | 91.5036 |
| 0.1272 | 37.5 | 150 | 1.0106 | 158.7189 |
| 0.0125 | 50.0 | 200 | 1.0290 | 124.3327 |
| 0.0074 | 62.5 | 250 | 1.0401 | 116.7705 |
| 0.005 | 75.0 | 300 | 1.0461 | 118.6833 |
| 0.0044 | 87.5 | 350 | 1.0493 | 113.0783 |
| 0.004 | 100.0 | 400 | 1.0505 | 112.9893 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.15.0
- Tokenizers 0.15.0
|
JoshXT/zephyr-7b-beta-32k | JoshXT | "2024-01-15T13:03:31Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/zephyr-sft",
"region:us"
] | null | "2024-01-15T13:02:39Z" | ---
library_name: peft
base_model: unsloth/zephyr-sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
Dhanraj1503/Huggy | Dhanraj1503 | "2024-01-15T13:06:00Z" | 0 | 0 | ml-agents | [
"ml-agents",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | "2024-01-15T13:05:37Z" | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Dhanraj1503/Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Prasanna16/FineTunedLlamaWithPython | Prasanna16 | "2024-01-17T05:31:55Z" | 0 | 1 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | "2024-01-15T13:10:31Z" | ---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Llama-2-7b-hf
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the [python_code_instructions_18k_alpaca](https://huggingface.co/iamtarun/python_code_instructions_18k_alpaca) dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
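A hedged generation sketch: it assumes the adapter attaches to the (gated) Llama-2 base and that prompts follow the Alpaca-style instruction format used by the linked dataset:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the gated Llama-2 base, then attach the LoRA adapter from this repo
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "Prasanna16/FineTunedLlamaWithPython")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

prompt = "### Instruction:\nWrite a Python function that reverses a string.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=120)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```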
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0 |
duanyu027/loyal-piano-m7-dpo-0115-125steps | duanyu027 | "2024-01-15T13:50:46Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-15T13:14:32Z" | Entry not found |
Kamsaka/Dehya | Kamsaka | "2024-01-15T13:25:27Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T13:18:21Z" | Entry not found |
Praghxx/Red | Praghxx | "2024-01-15T13:21:33Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T13:20:37Z" | Entry not found |
michaelhu1/poem | michaelhu1 | "2024-01-15T13:21:21Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-15T13:21:21Z" | Entry not found |
datalawyer/pedidos-transformerscrf-v5.3-8bit | datalawyer | "2024-01-16T22:58:38Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2024-01-15T13:21:28Z" | Entry not found |